forum_id
stringlengths 10
10
| raw_ocr_text
stringlengths 13.1k
129k
|
---|---|
h7-XixPCAL | Structured Denoising Diffusion Models in DiscreteState-SpacesJacob Austin⇤, Daniel D. Johnson⇤, Jonathan Ho, Daniel Tarlow & Rianne van den Berg†Google Research, Brain Team{jaaustin,ddjohnson,jonathanho,dtarlow,riannevdberg}@google.comAbstractDenoising diffusion probabilistic models (DDPMs) [ 17] have shown impressiveresults on image and waveform generation in continuous state spaces. Here, weintroduce Discrete Denoising Diffusion Probabilistic Models (D3PMs), diffusion-like generative models for discrete data that generalize the multinomial diffusionmodel of Hoogeboom et al. [18], by going beyond corruption processes with uni-form transition probabilities. This includes corruption with transition matrices thatmimic Gaussian kernels in continuous space, matrices based on nearest neighborsin embedding space, and matrices that introduce absorbing states. The third al-lows us to draw a connection between diffusion models and autoregressive andmask-based generative models. We show that the choice of transition matrix is animportant design decision that leads to improved results in image and text domains.We also introduce a new loss function that combines the variational lower boundwith an auxiliary cross entropy loss. For text, this model class achieves strongresults on character-level text generation while scaling to large vocabularies onLM1B. On the image dataset CIFAR-10, our models approach the sample qualityand exceed the log-likelihood of the continuous-space DDPM model.1 IntroductionGenerative modeling is a core problem in machine learning, useful both for benchmarking our abilityto capture statistics of natural datasets and for downstream applications that require generatinghigh-dimensional data like images, text, and speech waveforms. There has been a great deal ofprogress with the development of methods like GANs [ 14,3], V AEs [ 22,32], large autoregressiveneural network models [ 43,42,44], normalizing flows [ 31,11,21,30], and others, each with theirown tradeoffs in terms of sample quality, sampling speed, log-likelihoods, and training stability.Recently, diffusion models [ 36] have emerged as a compelling alternative for image [ 17,39] and au-dio [7,23] generation, achieving comparable sample quality to GANs and log-likelihoods comparableto autoregressive models with fewer inference steps. A diffusion model is a parameterized Markovchain trained to reverse a predefined forward process, which is a stochastic process constructed togradually corrupt training data into pure noise. Diffusion models are trained using a stable objectiveclosely related to both maximum likelihood and score matching [ 19,45], and they admit fastersampling than autoregressive models by using parallel iterative refinement [28, 38, 40, 37].Although diffusion models have been proposed in both discrete and continuous state spaces [ 36],most recent work has focused on Gaussian diffusion processes that operate in continuous state spaces(e.g. for real-valued image and waveform data). Diffusion models with discrete state spaces have35th Conference on Neural Information Processing Systems (NeurIPS 2021).⇤Equal contributions†Now at Microsoft ResearchFigure 1: D3PM forward and (learned) reverse process applied to a quantized swiss roll. Each dotrepresents a 2D categorical variable. Top: samples from the uniform, discretized Gaussian, andabsorbing state D3PM model forward processes, along with corresponding transition matrices Q.Bottom: samples from a learned discretized Gaussian reverse process.been explored for text and image segmentation domains [ 18], but they have not yet been demonstratedas a competitive model class for large scale text or image generation.Our aim in this work is to improve and extend discrete diffusion models by using a more structuredcategorical corruption process to shape data generation, as illustrated in Figure 1. Our models do notrequire relaxing or embedding discrete data (including images) into continuous spaces, and can embedstructure or domain knowledge into the transition matrices used by the forward process. We achievesignificantly improved results by taking advantage of this flexibility. We develop structured corruptionprocesses appropriate for text data, using similarity between tokens to enable gradual corruptionand denoising. Expanding further, we also explore corruption processes that insert [MASK] tokens,which let us draw parallels to autoregressive and mask-based generative models. Finally, we studydiscrete diffusion models for quantized images, taking inspiration from the locality exploited bycontinuous diffusion models. This leads to a particular choice of discrete corruption process thatdiffuses preferentially to more similar states and leads to much better results in the image domain.Overall, we make a number of technical and conceptual contributions. Beyond designing several newstructured diffusion models, we introduce a new auxiliary loss which stabilizes training of D3PMsand a family of noise schedules based on mutual information that lead to improved performance. Westrongly outperform various non-autoregressive baselines for text generation on character-level textgeneration, and successfully scale discrete diffusion models to large vocabularies and long sequencelengths. We also achieve strong results on the image dataset CIFAR-10, approaching or exceedingthe Gaussian diffusion model from Ho et al. [17] on log-likelihoods and sample quality.2 Background: diffusion modelsDiffusion models [ 36] are latent variable generative models characterized by a forward and a reverseMarkov process. The forward process q(x1:T|x0)=QTt=1q(xt|xt1)corrupts the data x0⇠q(x0)into a sequence of increasingly noisy latent variables x1:T=x1,x2,. . . ,xT. The learnedreverse Markov process p✓(x0:T)=p(xT)QTt=1p✓(xt1|xt)gradually denoises the latent variablestowards the data distribution. For example, for continuous data, the forward process typically addsGaussian noise, which the reverse process learns to remove.2In order to optimize the generative model p✓(x0)to fit the data distribution q(x0), we typicallyoptimize a variational upper bound on the negative log-likelihood:Lvb=Eq(x0)DKL[q(xT|x0)||p(xT)]|{z}LT+TXt=2Eq(xt|x0)⇥DKL[q(xt1|xt,x0)||p✓(xt1|xt)]⇤| {z }Lt1Eq(x1|x0)[logp✓(x0|x1)]|{z}L0. (1)When the number of time steps Tgoes to infinity, both the forward process and the reverse processshare the same functional form [ 36,12], in the sense that the true posterior q(xt1|xt)becomesfully conditionally independent.3This motivates using a conditionally independent approximatereverse process p✓(xt1|xt)from the same class of distributions as that of the forward process.Furthermore, for several choices of the forward process the distribution q(xt|x0)converges to astationary distribution ⇡(x)in the limit t!1independent of the value of x0. When the number oftime steps Tis large enough and we choose ⇡(x)as the prior p(xT), we can guarantee that the LTterm in (1)will approach zero regardless of the data distribution q(x0). (Alternatively, one can use alearned prior p✓(xT).)While q(xt|xt1)can in theory be arbitrary, efficient training of p✓is possible when q(xt|xt1):1.Permits efficient sampling of xtfrom q(xt|x0)for an arbitrary time t, allowing us torandomly sample timesteps and optimize each Lt1term individually with stochasticgradient descent,2.Has a tractable expression for the forward process posterior q(xt1|xt,x0), which allowsus to compute the KL divergences present in the Lt1term of (1).The majority of recent work in continuous spaces [ 17,37,7,28] defines the forwardand reverse distributions as q(xt|xt1)= Nxt|p1txt1,tIand p✓(xt1|xt)=N(xt1|μ✓(xt,t),⌃✓(xt,t)), respectively. The aforementioned properties hold in the case ofthese Gaussian diffusion models: the forward process q(xt|x0)converges to a stationary distribution,motivating the choice p(xT)= N(xT|0,I), and both q(xt|x0)andq(xt1|xt,x0)are tractableGaussian distributions for which the KL divergence can be computed analytically.3 Diffusion models for discrete state spacesDiffusion models with discrete state spaces were first introduced by Sohl-Dickstein et al. [36], whoconsidered a diffusion process over binary random variables. Hoogeboom et al. [18] extendedthe model class to categorical random variables with transition matrices characterized by uniformtransition probabilities. In their supplementary material, Song et al. [37]also derived this extension,although no experiments were performed with this model class. Here, we briefly describe a moregeneral framework for diffusion with categorical random variables which includes these models asspecial cases.4For scalar discrete random variables with Kcategories xt,xt121,. . . ,K the forward transitionprobabilities can be represented by matrices: [Qt]ij=q(xt=j|xt1=i). Denoting the one-hotversion of xwith the row vector x, we can writeq(xt|xt1) = Cat( xt;p=xt1Qt), (2)where Cat( x;p)is a categorical distribution over the one-hot row vector xwith probabilities givenby the row vector p, and xt1Qtis to be understood as a row vector-matrix product. We assumethatQtis applied to each pixel of an image or each token in a sequence independently, and thatqfactorizes over these higher dimensions as well; we thus write q(xt|xt1)in terms of a single3For continuous state spaces and Gaussian q, the limit T!1corresponds to a stochastic differentialequation [40], whereas for discrete state spaces it corresponds to a Markov jump process.4Our implementation of D3PM framework is available at https://github.com/google-research/google-research/tree/master/d3pm .3element. Starting from x0, we obtain the following t-step marginal and posterior at time t1:q(xt|x0) = Catxt;p=x0Qt,with Qt=Q1Q2...Qtq(xt1|xt,x0)=q(xt|xt1,x0)q(xt1|x0)q(xt|x0)= Cat xt1;p=xtQ>tx0Qt1x0Qtx>t!.(3)Note that due to the Markov property of the forward process q(xt|xt1,x0)= q(xt|xt1). As-suming that the reverse process p✓(xt|xt1)is also factorized as conditionally independent overthe image or sequence elements, the KL divergence between qandp✓can be computed by simplysumming over all possible values of each random variable; we thus satisfy criteria 1 and 2 discussedin Section 2. Depending on Qt, the cumulative products Qtcan often be computed in closed form,or simply precomputed for all t. However, for large Kand large Tthis may be prohibitive. InAppendix A.4 we discuss how to ensure Qtcan still be computed efficiently in this case, allowingthe framework to scale to a larger number of categories.In the next section we discuss the choice of the Markov transition matrices Qtand correspondingstationary distributions. From here on, we refer to the general class of diffusion models with discretestate spaces as Discrete Denoising Diffusion Probabilistic Models (D3PMs).3.1 Choice of Markov transition matrices for the forward processAn advantage of the D3PM framework described above is the ability to control the data corruptionand denoising process by choosing Qt, in notable contrast to continuous diffusion, for which onlyadditive Gaussian noise has received significant attention. Besides the constraint that the rows of Qtmust sum to one to conserve probability mass, the only other constraint in choosing Qtis that therows of Qt=Q1Q2...Qtmust converge to a known stationary distribution5when tbecomes large,which can be guaranteed while imposing minimal restrictions on Qt(see Appendix A.1).We argue that for most real-world discrete data, including images and text, it makes sense toadd domain-dependent structure to the transition matrices Qtas a way of controlling the forwardcorruption process and the learnable reverse denoising process. Below we briefly discuss the uniformtransition matrices that have been studied in prior work [ 18], along with a set of structured transitionmatrices we have explored for our image and text dataset experiments; see Appendix A.2 for moredetails on each matrix type. We also note that this set is not exhaustive, and many other transitionmatrices could also be used within the D3PM framework.Uniform (Appendix A.2.1). Sohl-Dickstein et al. [36]considered a simple 2⇥2transition matrix forbinary random variables. Hoogeboom et al. [18]later extended this to categorical variables, proposinga transition matrix Qt=( 1 t)I+t/KTwith t2[0,1]. Since this transition matrix isdoubly stochastic with strictly positive entries, the stationary distribution is uniform. Because thetransition probability to any other state is uniform, in this paper we equivalently refer to this discretediffusion instance as D3PM-uniform.Absorbing state (Appendix A.2.2). Motivated by the success of BERT [ 10] and recent work onConditional Masked Language Models (CMLMs) in text, we consider a transition matrix with anabsorbing state (called [MASK]), such that each token either stays the same or transitions to [MASK]with some probability t. This does not impose particular relationships between categories, similar touniform diffusion, but still allows corrupted tokens to be distinguished from original ones. Moreover,the stationary distribution is not uniform but has all the mass on the [MASK] token. For images, wereuse the grey pixel as the [MASK] absorbing token.Discretized Gaussian (Appendix A.2.3). Instead of transitioning uniformly to any other state, forordinal data we propose imitating a continuous space diffusion model by using a discretized, truncatedGaussian distribution. We choose a normalization such that the transition matrix is doubly stochastic,leading to a uniform stationary distribution. This transition matrix will transition between moresimilar states with higher probability, and is well suited for quantized ordinal data such as images.Token embedding distance (Appendix A.2.4). Textual data does not have ordinal structure, butthere may still be interesting semantic relationships. For instance, in a word-level vocabulary5If a stationary distribution is not known, we can introduce a learned prior p✓(xT); we note that this isequivalent to extending the forward process by appending a rank-one matrix QT+1that ignores xTand producesa deterministic xT+1, then learning the reverse step p✓(xT|xT+1)=p✓(xT).4synonyms or closely related words (like “dog" or “cat") may be more similar than other tokens. Asa demonstration of the generality of the D3PM framework, we explore using similarity in wordembedding space to guide the forward process, and construct a doubly-stochastic transition matrixthat transitions more frequently between tokens that have similar embeddings while maintaining auniform stationary distribution.For uniform and absorbing-state diffusion, the cumulative products Qtcan be computed in closedform (see Appendix A.4.1); the remainder can be precomputed.3.2 Noise schedulesWe consider several different options for the noise schedule of the forward process. For discretizedGaussian diffusion, we explore linearly increasing the variance of the Gaussian before discretizingit. (Note that a linear schedule for Qtleads to a nonlinear amount of cumulative noise in Qt.) Foruniform diffusion we use the cosine schedule which sets the cumulative probability of a transition toa cosine function, as introduced by Nichol and Dhariwal [28]and adapted by Hoogeboom et al. [18].For a general set of transition matrices Qt(such as the one based on token embeddings), previouslyproposed schedules may not be directly applicable. We consider linearly interpolating the mutualinformation between xtandx0to zero, i.e. I(xt;x0)⇡(1tT)H(x0). Interestingly, for thespecific case of absorbing-state D3PMs, this schedule reduces to exactly the (Tt+ 1)1scheduleproposed by Sohl-Dickstein et al. [36]for a Bernoulli diffusion process. See Appendix A.7 for moredetails.3.3 Parameterization of the reverse processWhile it is possible to directly predict the logits of p✓(xt1|xt)using a neural network nn✓(xt),we follow Ho et al. [17]and Hoogeboom et al. [18]and focus on using a neural network nn✓(xt)to predict the logits of a distribution ep✓(ex0|xt), which we combine with q(xt1|xt,x0)and asummation over one-hot representations of x0to obtain the following parameterizationp✓(xt1|xt)/Xex0q(xt1,xt|ex0)ep✓(ex0|xt). (4)We note that under this x0-parameterization the KL divergence DKL[q(xt1|xt,x0)||p✓(xt1|xt)]will be zero if ep✓(ex0|xt)places all of its probability mass on the original value x0. The decompositionofq(xt1|xt,x0)in(3)also provides us with a motivation for this parameterization. According to(3), in a given state xt, the optimal reverse process only takes into account transitions to states forwhich q(xt|xt1)is non-zero. Therefore, the sparsity pattern of Qtdetermines the sparsity patternof the ideal reverse transition probabilities in p✓(xt1|xt). The parameterization in (4)automaticallyensures that the learned reverse probability distribution p✓(xt1|xt)has the correct sparsity patterndictated by the choice of the Markov transition matrix Qt. This parameterization also lets us performinference with ksteps at a time, by predicting p✓(xtk|xt)=Pq(xtk,xt|ex0)ep✓(ex0|xt).Finally, when modeling ordinal discrete data, instead of predicting the logits of ep✓(ex0|xt)directlywith the output of a neural net, another option is to model the probabilities with a truncated discretizedlogistic distribution (see Appendix A.8). This provides an extra ordinal inductive bias to the reversemodel and boosts FID and log-likelihood scores for images.3.4 Loss functionWhile the original diffusion models introduced by Sohl-Dickstein et al. [36]were optimized withthe negative variational lower bound Lvbof(1), more recent diffusion models are optimized withdifferent objectives. For instance, Ho et al. [17] derive a simplified loss function ( Lsimple ) thatreweights the negative variational bound, and Nichol and Dhariwal [28] explore a hybrid lossLhybrid =Lsimple +Lvb(using one term to learn the predicted mean and the other to learnpredicted variance). Inspired by this recent work, we introduce an auxiliary denoising objective forthex0-parameterization of the reverse process, which encourages good predictions of the data x0ateach time step. We combine this with the negative variational lower bound, yielding the followingalternative loss function:L=Lvb+Eq(x0)Eq(xt|x0)[logep✓(x0|xt)]. (5)5We note that the auxiliary loss resembles the cross entropy term L0in(1)att=1, and so one mightexpect that it is a KL reweighting similar to the one described by Ho et al. [17]. However, our Ldirectly supervises the model output ep✓(ex0|xt). This is in general a stronger source of supervisionthan any reweighting of the terms in the lower bound (1), which only provides supervision throughthe sum in (4). To see this, note that for a fixed x0, both DKL[q(xt1|xt,x0)||p✓(xt1|xt)]andEq(xt|x0)[logep✓(x0|xt)]are minimized when ep✓(ex0|xt)has all its mass on the datapoint x0, butfor some choices of qthere may be a different setting ex06=x0that induces the same distributionp✓(xt1|xt). We find that training with this loss leads to improved quality of image samples.4 Connection to existing probabilistic models for textIn this section we expand on interesting connections between the D3PM framework and severalexisting probabilistic and language modeling approaches.BERT is a one-step diffusion model: One possible D3PM transition matrix is a combination of auniform transition matrix and an absorbing state at the [MASK] token (i.e. Q=↵eTm+T/K+(1↵)I, where emis a one-hot vector on the [MASK] token). For a one-step diffusion processin which q(x1|x0)replaces 10% of tokens with [MASK] and 5% uniformly at random, this leadsprecisely to the BERT denoising objective, i.e. LvbLT=Eq(x1|x0)[logp✓(x0|x1)] = LBERT ,since LTis a constant independent of ✓(assuming a fixed prior).Autoregressive models are (discrete) diffusion models: Consider a diffusion process that deter-ministically masks tokens one-by-one in a sequence of length N=T:q([xt]i|x0)=[ x0]iifi<Ntelse [MASK] . This is a deterministic forward process, so q(xt1|xt,x0)is a delta distributionon the xtsequence with one fewer mask: q([xt1]i|xt,x0)=[xt]iifi6=Ttelse[x0]i. Whilethis process is not applied independently to each token, it can be recast as an independently-applieddiffusion process on the product space [0...N]⇥V, where each token is tagged with its position inthe sequence, Vis the vocabulary, and Qis an N⇥| V|⇥ N⇥| V| sparse matrix.Because all tokens except the one at position i=Tthave deterministic posteriors, the KLdivergence DKL(q([xt1]j|xt,x0)||p✓([xt1]j|xt))is zero for all other positions. The onlytoken for which this is not true is the token at position i, for which DKL(q([xt1]i|xt,x0)||p✓([xt1]i|xt)) = logp✓([x0]i|xt), the standard cross entropy loss for an autoregressive model.(Generative) Masked Language-Models (MLMs) are diffusion models: Generative Masked Lan-guage Models ([ 13], [47]) are generative models that generate text from a sequence of [MASK]tokens. They are usually trained by sampling a sequence x0, masking ktokens according to someschedule, and learning to predict the masked tokens given context. It turns out that a D3PM absorbing([MASK]) model trained on the usual ELBO objective with the x0-parameterization from 3.3 reducesto a reweighted version of this MLM objective (see Appendix A.3 for a detailed derivation).5 Text generationFor text, we experiment with generation on two datasets: text8 [ 26], a character-level dataset extractedfrom English-language Wikipedia, and the One Billion Word dataset (LM1B) [ 6], a large dataset ofshuffled English-language sentences. For both, we train a D3PM uniform model based on the workby Hoogeboom et al. [18](D3PM uniform) and a model that masks tokens (D3PM absorbing). Wealso consider a model that transitions uniformly to nearest neighbors in a token embedding space(D3PM NN). We follow Hoogeboom et al. [18]and use T= 1000 timesteps, although we are alsoable to evaluate on fewer due to the parameterization in Section 3.3.5.1 Character-level generation on text8text8 is a character-level text dataset consisting of a small vocabulary of 27 tokens: the letters ‘a’-‘z’and the ‘_’ whitespace token. We follow the convention of training and evaluating text8 in chunks oflength 256 without any preprocessing [ 18]. For nearest-neighbor D3PM, our nearest neighbor graphin character-space is shown in Appendix B.2.1. D3PM uniform models were trained with a cosineschedule from Hoogeboom et al. [18](ablations in Appendix B.2.1), while D3PM absorbing andD3PM NN models were trained with a mutual information schedule.6Table 1 shows that for D3PM, the D3PM absorbing model performed the best, exceeding theuniform and NN diffusion models. We were able to improve upon the baseline result of [ 18] withhyperparameter tuning, and our uniform and NN results outperformed results from Hoogeboomet al. [18] across all inference steps, down to as few as 20. We found that L=0.01worked bestfor D3PM absorbing, while Lvbwas better for D3PM uniform. Our model outperforms all non-autoregressive baselines except one, the Discrete Flow model [ 41] (for which unfortunately noopen-source implementations exist), and is also faster than all but one method, the IAF/SCF model[49]. It is also nearly 20x faster than an autoregressive transformer of the same size. We note that whileour 20-step D3PM models in Table 1 are much faster than a comparable autoregressive transformers,this table only shows timings for batch size 1 (per device). For larger batches, autoregressive cachingallows transformers to perform inference relatively more quickly. We include additional benchmarksand a plot of inference time as a function of iterations in Appendix B.2.1. D3PM with the maskabsorbing token was by far the best performing model, which lends credibility to the use of masks indenoising auto-encoders. Nearest-neighbor diffusion only narrowly improves upon a D3PM-uniformmodel: this was a surprising negative result for us, suggesting that not all notions of structure aremeaningful.5.2 Text generation on LM1BText generation for large-scale text datasets and large vocabularies with discrete diffusion models hasnot been previously demonstrated. We include results from LM1B as a proof of concept, showingthat these models can indeed scale (as discussed in Appendix A.4), and that the D3PM absorbingmodel continues to excel. All models were trained and evaluated on packed sequences of length 128,using a sentencepiece6vocabulary of size 8192 .Table 2 contains results from experiments on LM1B. Overall, mask diffusion (D3PM absorbing)does relatively well, approaching the performance of a comparable autoregressive model of thesame size, and scaling to far fewer steps, while uniform diffusion performs significantly worse.We find, surprisingly, that the D3PM NN model performs worse than the uniform model in termsof log likelihoods (although it demonstrates unique qualitative behavior). This suggests that wordembedding similarity may not be a meaningful kind of locality in a diffusion process. We found thetheL=0.01loss worked best for the mask absorbing model, but reduced performance for the othermodels. We note the surprising scaling in perplexity in Figure 2, achieving strong results with asfew as 10 inference steps. We also show samples from our model and completions from corruptedsamples.Table 1: Quantitative results on text8. NLL is reported on the entire test set. Sample times are forgenerating a single example of length 256. Results are reported on two seeds. All models are standard12-layer transformers unless otherwise noted.†Transformer XL is a 24-layer transformer, using a784 context window.‡Results reported by [18] by running code from official repository.Model Model steps NLL (bits/char) ( #) Sample time (s) ( #)Discrete Flow [41] ( 8⇥3layers) - 1.23 0 .16Argmax Coupling Flow [18] - 1.80 0 .40±0.03IAF / SCF [49]‡- 1.88 0 .04±0.0004Multinomial Diffusion (D3PM uniform) [18] 1000 1.72 26 .6±2.2D3PM uniform [18] (ours) 1000 1.61±0.02 3 .6±0.4D3PM NN ( Lvb) (ours) 1000 1.59±0.03 3 .1474 ±0.0002D3PM absorbing ( L=0.01) (ours) 1000 1.45±0.02 3 .4±0.3D3PM uniform [18] (ours) 256 1.68±0.01 0 .5801 ±0.0001D3PM NN ( Lvb) (ours) 256 1.64±0.02 0 .813±0.002D3PM absorbing ( L=0.01) (ours) 256 1.47±0.03 0 .598±0.002Transformer decoder (ours) 256 1 .37 0 .3570 ±0.0002Transformer decoder [1] 256 1 .18 -Transformer XL [9]†256 1 .08 -D3PM uniform [18] (ours) 20 1.79±0.03 0 .0771 ±0.0005D3PM NN ( Lvb) (ours) 20 1.75±0.02 0 .1110 ±0.0001D3PM absorbing ( L=0.01) (ours) 20 1.56±0.04 0 .0785 ±0.00036https://github.com/google/sentencepiece7Figure 2: Left: perplexity v.s. sampling iterations for LM1B. Right: Using a trained D3PM absorbingmodel for LM1B to (top) generate new sentences and (bottom) reconstruct corrupted examples.Table 2: Quantitative results on LM1B. Perplexity reported on the test set. Results are reportedon two seeds. All models have context window length 128 and 12 layers unless otherwise noted.†Transformer XL is a 24 layer transformer.‡rounded for readability, see Appendix B.2.2.Metric: Perplexity ( #) Sample time‡(s) (#)inference steps: 1000 128 64 1000 128 64D3PM uniform 137.9 ±2.1 139.2 ±1.2 145.0 ±1.2 1.82 0.21 0.08D3PM NN 149.5 ±1.3 158.6 ±2.2 160.4 ±1.2 21.29 6.69 5.88D3PM absorbing 76.9 ±2.3 80.1 ±1.2 83.6 ±6.1 1.90 0.19 0.10Transformer (ours) - 43.6 - - 0.26 -Transformer XL [9]†- 21.8 - - - -6 Image generationWe evaluate the performance of several D3PM models on the task of unconditional image generationwith the dataset CIFAR-10 [ 25]. We follow Ho et al. [17]and use T= 1000 timesteps for all modelsand verify that for all models the forward process converges to the stationary distribution within Tsteps, yielding a value of at most LT⇡105bits per dimension. We train three versions of D3PMwith different transition matrices: doubly stochastic matrices with uniform transition probabilities(D3PM uniform) [ 18], transition matrices with an absorbing state located at R, G and B values of 128(D3PM absorbing) and doubly stochastic discretized Gaussian transition matrices (D3PM Gauss). Forthe D3PM uniform model we experimented with a linear tschedule as well as the cosine scheduleas proposed in [ 18], with the cosine schedule producing the best results. For D3PM absorbing weuse the schedule t=(Tt+ 1)1as also proposed in [ 36], which corresponds to increasing theprobability of being in the absorbing state linearly over time. For D3PM Gauss we use the samelinear schedule as in [17]. See Appendix B.1 for more details on the experimental setup.Table 3 shows that for D3PM models trained with the Lvbobjective, D3PM Gauss performs betterthan D3PM absorbing and uniform on all metrics: Inception score (IS), Frechet Inception Distance(FID) and negative log-likelihood (NLL). The IS score of the uniform and absorbing D3PM modelsare comparable, while the FID score and NLL of the D3PM absorbing model are slightly better. Wetrained both D3PM absorbing and D3PM Gauss with the alternative loss function Lof(5), andwe found =0.001to work best. We have also experimented with larger values of and a modeltrained only with the auxiliary denoising term in (5). Although this led to a more rapid increasein performance early on in training, the NLL leveled off at higher values for larger and the FIDeven started increasing again. The results show that the models trained with Lperform significantlybetter than their counterparts trained with Lvb. One explanation for this boost in performance is thatthe cross entropy term leads to gradient noise that varies less with the time step t, which is in contrastto the large change in magnitude of the Lt1terms in Lvbfor smaller t, as demonstrated by Nicholand Dhariwal [28]. Finally, we achieve our best results by combining D3PM Gauss trained on Lwith a truncated logistic parameterization of the reverse process distribution p✓(ex0|xt)(D3PM Gauss+ logistic). Figure 3 shows samples from our best model (D3PM Gauss + logistic), as well as theD3PM absorbing model.8Table 3: Inception scores (IS), Frechet Inception Distance (FID) and negative log-likehood (NLL) onthe image dataset CIFAR-10. The NLL is reported on the test set in bits per dimension. We report ourresults as averages with standard deviations, obtained by training five models with different seeds.Model IS ( ") FID ( #) NLL ( #)Sparse Transformer [8] 2.80NCSN [38] 8.87±0.12 25.32NCSNv2 [39] 8.40±0.07 10.87StyleGAN2 + ADA [20] 9.74±0.05 3.26Diffusion (original), Lvb[36] 5.40DDPM Lvb[17] 7.67±0.13 13 .51 3.70DDPM Lsimple [17] 9.46±0.11 3 .17 3.75Improved DDPM Lvb[28] 11.47 2.94Improved DDPM Lsimple [28] 2.90 3.37DDPM++ cont [40] 2.92 2 .99NCSN++ cont. [40] 9.89 2 .20D3PM uniform Lvb 5.99±0.14 51 .27±2.15 5.08±0.02D3PM absorbing Lvb 6.26±0.10 41 .28±0.65 4.83±0.02D3PM absorbing L=0.001 6.78±0.08 30 .97±0.64 4.40±0.02D3PM Gauss Lvb 7.75±0.13 15 .30±0.55 3.966±0.005D3PM Gauss L=0.001 8.54±0.12 8 .34±0.10 3.975±0.006D3PM Gauss + logistic L=0.001 8.56±0.10 7 .34±0.19 3.435±0.0077 Related WorkDiffusion generative models were first proposed by Sohl-Dickstein et al. [36] and have gainedrenewed attention recently due to strong results on image and waveform generation [ 17,7]. Recentworks have proposed improvements for diffusion model training, including importance sampling ofthe ELBO, better noise schedules [ 28] and implicit diffusion models [ 37]. Several works have alsodrawn connections to score matching [ 45,19,38], leading to improved sampling algorithms in thecontinuous-time limit [40].While most works have considered continuous diffusion models, discrete diffusion-like models weredescribed in [ 36] and applied to text generation and image segmentation data in [ 18]. Some works[29,27] have dealt with discrete data by embedding it in continuous space and leveraging Gaussiandiffusion, but have not applied this to text. Seff et al. [35]considered generation of discrete structuredobjects using a diffusion-like Markov corruption process. Goyal et al. [15]proposed a diffusion-likemodel for images with a more flexible family of learned corruption processes. Ho et al. [17]alsodraws connections between diffusion and autoregressive models for continuous data.For text, denoising autoencoders have a long history both in representation learning [ 2,10] and morerecently as generative models [ 47]. These closely resemble our absorbing state diffusion variants forFigure 3: Left: progressive sampling at t= 1000 ,900,800,. . . ,0for D3PM absorbing (top) andD3PM Gauss + logistic (bottom), trained with Lloss on CIFAR-10. These samples were cherrypicked. Right: (non cherry picked) samples from the D3PM Gauss + logistic model.9a particular schedule and transition matrix (see Section 4), although our framing allows us to computelog-likelihoods and experiment with alternative transition matrices. Other works have considerednon-autoregressive translation and speech transcription via insertion and deletion [ 16,33], masking[13], and iteratively-refined sequence alignments [5, 34].8 DiscussionWe have presented D3PMs, a class of models that improves diffusion models for discrete data bydefining new kinds of discrete corruption processes. We achieve strong empirical results relative toprevious work on discrete diffusion models, even surpassing performance of continuous diffusionmodels in terms of log-likelihoods for image generation. While these results are promising, onelimitation is that—like much other work on non-autoregressive generative models—our models arestill inferior to strong autoregressive models like Transformer XL for text generation, and continuousdiffusion models still yield stronger results on image quality. We expect that D3PMs can benefitfurther from the rapid development of continuous diffusion models [ 40,28]. For example, furtherresearch in alternative losses for D3PM’s can take inspiration from the reweighted Lsimple objectiveused in [ 17], or the resampled variational bound in Nichol and Dhariwal [28]. Furthermore, D3PM’smight benefit from increasing the number of timesteps and a more optimized noise schedule, asdiscussed in Nichol and Dhariwal [28]. Another limitation comes from the choice of evaluationmetrics that we use (and that are standard for evaluation of generative models). Inception scoreand Frechet Inception Distance are based on neural networks that have been trained on a particulardistribution of data, which is not representative for all use-cases, and focusing on average qualitymetrics may not accurately reflect performance across the wide diversity of settings where thesegenerative models may be applied. This creates a risk of negative social impacts where advancesdisproportionately favor a subset of the population. Text generation models, including D3PMs,also present many challenges for responsible and reliable use. Prior works have highlighted thepotential for misuse [ 24,4], bias [ 46], and hallucination [ 48] in neural language models. D3PMs,like autoregressive language models, should be carefully evaluated along these axes before beingdeployed in a production setting. Going forward, we are excited about the space of possibilities thatarise within the D3PM framework. We have found successes in leveraging the flexibility that comesfrom defining discrete corruption processes for discrete data, but we believe that there are many morepossibilities that make use of richer forms of structure to define even more powerful discrete diffusionmodels.Acknowledgments and Disclosure of FundingWe would like to thank Hugo Larochelle for providing high-level feedback during the project, andBen Poole for reviewing a draft version of this manuscript. We would also like to thank Julia Kreutzerand Xavier Garcia for helpful conversations about language experiments, and Daniel Watson for earlydiscussions about discrete diffusion. We, the authors, declare to have no competing interests. Theresearch conducted for this paper was entirely supported by Google.10References[1]Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. Character-Levellanguage modeling with deeper Self-Attention. arXiv preprint arXiv:1808.04444 , August 2018.[2]Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. Generalized denoising Auto-Encoders as generative models. arXiv preprint arXiv:1305.6663 , May 2013.[3]Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelitynatural image synthesis. In International Conference on Learning Representations , 2019.[4]Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Kather-ine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and ColinRaffel. Extracting training data from large language models. December 2020.[5]William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi, and Navdeep Jaitly.Imputer: Sequence modelling via imputation and dynamic programming. In InternationalConference on Machine Learning , pages 1403–1413. PMLR, 2020.[6]Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, andTony Robinson. One billion word benchmark for measuring progress in statistical languagemodeling. arXiv preprint arXiv:1312.3005 , December 2013.[7]Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan.WaveGrad: Estimating gradients for waveform generation. arXiv preprint arXiv:2009.00713 ,September 2020.[8]Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences withsparse transformers. arXiv preprint arXiv:1904.10509 , 2019.[9]Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov.Transformer-XL: Attentive language models beyond a Fixed-Length context. arXiv preprintarXiv:1901.02860 , January 2019.[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training ofdeep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 ,October 2018.[11] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP.arXiv preprint arXiv:1605.08803 , 2016.[12] W Feller. On the theory of stochastic processes, with particular reference to applications. InProceedings of the [First] Berkeley Symposium on Mathematical Statistics and Probability . TheRegents of the University of California, 1949.[13] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. Mask-Predict: Paralleldecoding of conditional masked language models. arXiv preprint arXiv:1904.09324 , April2019.[14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, SherjilOzair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in NeuralInformation Processing Systems , pages 2672–2680, 2014.[15] Anirudh Goyal, Nan Rosemary Ke, Surya Ganguli, and Yoshua Bengio. Variational walkback:Learning a transition operator as a stochastic recurrent net. November 2017.[16] Jiatao Gu, Changhan Wang, and Jake Zhao. Levenshtein transformer. arXiv preprintarXiv:1905.11006 , May 2019.[17] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. InAdvances in Neural Information Processing Systems , pages 6840–6851, 2020.[18] Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. Argmaxflows and multinomial diffusion: Towards non-autoregressive language models. arXiv preprintarXiv:2102.05379 , 2021.11[19] Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. Independent component analysis , volume 46.John Wiley & Sons, 2004.[20] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila.Training generative adversarial networks with limited data. arXiv preprint arXiv:2006.06676v1 ,2020.[21] Diederik P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolu-tions. In Advances in Neural Information Processing Systems , pages 10215–10224, 2018.[22] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprintarXiv:1312.6114 , 2013.[23] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatilediffusion model for audio synthesis. arXiv preprint arXiv:2009.09761 , 2020.[24] Sarah Kreps, R Miles McCain, and Miles Brundage. All the news that’s fit to fabricate: AI-Generated text as a tool of media misinformation. Journal of Experimental Political Science ,pages 1–14.[25] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images.2009.[26] Matt Mahoney. Text8 dataset. http://mattmahoney.net/dc/textdata , 2011. Accessed:2021-5-24.[27] Gautam Mittal, Jesse Engel, Curtis Hawthorne, and Ian Simon. Symbolic music generationwith diffusion models. arXiv preprint arXiv:2103.16091 , March 2021.[28] Alex Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. arXivpreprint arXiv:2102.09672 , 2021.[29] Chenhao Niu, Yang Song, Jiaming Song, Shengjia Zhao, Aditya Grover, and Stefano Ermon.Permutation invariant graph generation via score-based generative modeling. arXiv preprintarXiv:2003.00638 , March 2020.[30] George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and BalajiLakshminarayanan. Normalizing flows for probabilistic modeling and inference. arXiv preprintarXiv:1912.02762 , 2019.[31] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. InInternational Conference on Machine Learning , pages 1530–1538, 2015.[32] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagationand approximate inference in deep generative models. In International Conference on MachineLearning , pages 1278–1286, 2014.[33] Laura Ruis, Mitchell Stern, Julia Proskurnia, and William Chan. Insertion-deletion transformer.arXiv preprint arXiv:2001.05540 , 2020.[34] Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. Non-autoregressivemachine translation with latent alignments. In Proceedings of the 2020 Conference on EmpiricalMethods in Natural Language Processing (EMNLP) , pages 1098–1108, 2020.[35] Ari Seff, Wenda Zhou, Farhan Damani, Abigail Doyle, and Ryan P Adams. Discrete objectgeneration with reversible inductive construction. arXiv preprint arXiv:1907.08268 , July 2019.[36] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsuper-vised learning using nonequilibrium thermodynamics. In International Conference on MachineLearning , pages 2256–2265, 2015.[37] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. InInternational Conference on Learning Representations , 2021.12[38] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the datadistribution. In Advances in Neural Information Processing Systems , pages 11895–11907, 2019.[39] Yang Song and Stefano Ermon. Improved techniques for training score-based generative models.arXiv preprint arXiv:2006.09011 , 2020.[40] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, andBen Poole. Score-based generative modeling through stochastic differential equations. arXivpreprint arXiv:2011.13456 , November 2020.[41] Dustin Tran, Keyon Vafa, Kumar Agrawal, Laurent Dinh, and Ben Poole. Discrete flows:Invertible generative models of discrete data. In Advances in Neural Information ProcessingSystems , volume 32, 2019.[42] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, AlexGraves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generativemodel for raw audio. arXiv preprint arXiv:1609.03499 , 2016.[43] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neuralnetworks. International Conference on Machine Learning , 2016.[44] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Informa-tion Processing Systems , pages 5998–6008, 2017.[45] Pascal Vincent. A connection between score matching and denoising autoencoders. NeuralComputation , 23(7):1661–1674, 2011.[46] Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. Universal adversarialtriggers for attacking and analyzing NLP. August 2019.[47] Alex Wang and Kyunghyun Cho. BERT has a mouth, and it must speak: BERT as a markovrandom field language model. arXiv preprint arXiv:1902.04094 , February 2019.[48] Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Paco Guzman, Luke Zettlemoyer,and Marjan Ghazvininejad. Detecting hallucinated content in conditional neural sequencegeneration. November 2020.[49] Zachary M Ziegler and Alexander M Rush. Latent normalizing flows for discrete sequences.arXiv preprint arXiv:1901.10548 , January 2019.13 |
6cdYMkxxNt | Understanding the Transferability of Representationsvia Task-RelatednessAkshay Mehra, Yunbei Zhang, and Jihun HammTulane University{amehra, yzhang111, jhamm3}@tulane.eduAbstractThe growing popularity of transfer learning, due to the availability of modelspre-trained on vast amounts of data, makes it imperative to understand when theknowledge of these pre-trained models can be transferred to obtain high-performingmodels on downstream target tasks. However, the exact conditions under whichtransfer learning succeeds in a cross-domain cross-task setting are still poorlyunderstood. To bridge this gap, we propose a novel analysis that analyzes thetransferability of the representations of pre-trained models to downstream tasks interms of their relatedness to a given reference task. Our analysis leads to an upperbound on transferability in terms of task-relatedness, quantified using the differencebetween the class priors, label sets, and features of the two tasks. Our experimentsusing state-of-the-art pre-trained models show the effectiveness of task-relatednessin explaining transferability on various vision and language tasks. The efficientcomputability of task-relatedness even without labels of the target task and its highcorrelation with the model’s accuracy after end-to-end fine-tuning on the targettask makes it a useful metric for transferability estimation. Our empirical results ofusing task-relatedness on the problem of selecting the best pre-trained model froma model zoo for a target task highlight its utility for practical problems.1 IntroductionTransfer learning (TL) [ 42,59] is a powerful tool for developing high-performance machine learningmodels, especially in current times when large models [ 45,11,12,18] pre-trained on huge amountsof data are being fine-tuned for various downstream tasks. While large pre-trained models achieveimpressive performance on downstream tasks even in the zero-shot inference setting [ 45], theirperformance can often be improved by fine-tuning them on data from target tasks. However, ourunderstanding of when representations from these models lead to classifiers that achieve highperformance (i.e., high transferability) to downstream tasks is still lacking.Analytical works based on domain adaptation [ 8,7,52,36,32,37,38] can only explain cross-domaintasks (i.e., when only features/label priors change across tasks) but in the TL setting, label sets canalso change (i.e., cross-task setting). Recently, [ 56] showed that the relatedness between the label setsof the two tasks measured using conditional entropy can explain the difference in their transferability.However, [ 56] focused only on the cross-task setting, and analysis for transferability in a generalcross-domain cross-task setting is not addressed. Apart from these analytical works, another line ofwork focuses on proposing transferability metrics that correlate well with performance on downstreamtasks after end-to-end fine-tuning. We refer to these works as score-based transferability estimation(SbTE) metrics [ 61,55,29,40,19,51]. These works focus on developing scores for selecting apre-trained model from a model zoo, that achieves the best transferability on a target task. Whilethese works address a practical problem, they do not focus on providing an analysis of transferability.38th Conference on Neural Information Processing Systems (NeurIPS 2024).AircraftsTexturesDigitsPetsChest X-raysReference Taske.g., ImageNetDownstream Task e.g., PetsTask-relatednessFixedPre-trained Encodere.g.CLIPTransferabilityHigh?TrainableFlowersSuite of Downstream TasksFigure 1: Given a pre-trained encoder (e.g.,CLIP [ 45]), how does the performance afterfine-tuning it on a reference task (e.g., Ima-geNet) relate to the performance after fine-tuning it on other tasks? Through a rigorousbound on transferability (Theorem 3) in termsof the relatedness between a reference and atarget task, we show that tasks related to thereference task achieve provably better perfor-mance after fine-tuning.Thus, we first rigorously analyze the transferabilityof the representations in producing high-performingclassifiers and propose a novel approach that studiestransferability in terms of its relatedness to a refer-ence task (see Fig. 1). This is in line with previ-ous analytical works [ 7,2,56,55] which studiedthe model’s performance on target tasks in terms ofthe source task in different settings such as domainadaptation/generalization and recently TL. However,there’s a crucial difference: we study transferabilityin terms of a reference task instead of the source tasksince it is impractical to assume the knowledge of thesource task used to train large models such as CLIP[45] or GPT, commonly used for TL.Our approach works by transforming the distribution(and classifier) of a reference task, by transformingits class-prior distribution, label set, and feature spaceto obtain a new distribution that is similar to that ofthe target task (Fig. 2). Based on these transforma-tions, we show that transferability can be provablyexplained (and is tightly upper bounded) using threeinterpretable terms. A weighted reference loss term appearing due to the class prior distributiondifference between the tasks, a label mismatch term appearing as conditional entropy between thelabel distributions of the tasks, and a distribution mismatch term appearing as the Wasserstein distancebetween the transformed reference and target distributions (Theorem 3). We define task-relatednessas the sum of these three terms (a smaller value implies higher relatedness). We propose an opti-mization problem (Eq. 4) and an algorithm (Alg. 1) to learn the transformations to compute it. Usingstate-of-the-art (SOTA) pre-trained models, with different architectures, trained with various trainingmethods on computer vision (CV) and natural language processing (NLP) tasks, we show that task-relatedness achieves a small gap to transferability (Sec. 4.1). Our analysis also leads to new insightsinto learning in the TL setting such as to improve the transferability of an encoder on a downstreamtask, one can improve the encoder’s transferability on related reference tasks (Sec. 4.2). This isparticularly useful when practitioners intend to develop encoders that achieve high transferability toproprietary (and potentially inaccessible) datasets.We also demonstrate the utility of task-relatedness in estimating the accuracy of the model afterend-to-end fine-tuning. While the TL setting assumes access to target labels, the high computationalcost of end-to-end fine-tuning of a pre-trained model on a target task calls for developing metrics thatare efficiently computable and highly correlated with end-to-end fine-tuning accuracy. To this end,we propose to use task-relatedness computed in the penultimate layer of the pre-trained model as ourtransferability estimation metric. To further improve the computational efficiency of task-relatednesswe only measure the difference between the class-wise means and covariances of the distributionsin lieu of the Wasserstein distance as required in Theorem 3. This enables the computation oftask-relatedness with only the statistics of the reference/target tasks. Our empirical results (Sec. 4.3)attest that task-relatedness achieves a high correlation with the model’s accuracy after end-to-endfine-tuning on the target task making it an effective metric for selecting a pre-trained model froma model zoo that achieves the best accuracy on the target task. Moreover, unlike previous SbTEmetrics, task-relatedness can be estimated even without labeled target data, making it suitable forunsupervised transferability estimation, highlighting the advantage of a reference task as used in ouranalysis. Our main contributions are:•We rigorously analyze transferability for classification tasks. Our analysis, to the best of ourknowledge, leads to the first upper bound on transferability in terms of task-relatedness in across-domain cross-task setting.•We propose an optimization problem to efficiently compute task-relatedness, using a smallamount of target labels and show that it can even predict performance after end-to-end fine-tuning without requiring target labels.2•Using SOTA models and CV/NLP tasks, we show that task-relatedness accurately predictstransferability and show that transferability to unseen tasks can be improved by improvingtransferability to known (related) tasks.2 Related WorkTransfer learning (TL): TL [ 42,59,18,49,46,15,14] has been studied widely and consistsof various settings including transductive transfer, inductive transfer, and task transfer learning.The transductive setting also referred to as domain adaptation [ 8,7] focuses on reducing the shiftbetween two domains. The task transfer setting focuses on identifying the relationship between tasks,regardless of the model, to explain the transfer performance (see Appendix B for more details). Lastly,the inductive transfer setting focuses on using an inductive bias such as fine-tuning a pre-trainedmodel (trained via adversarial training [ 48], self-supervised learning [ 11,10,12] or by combininglanguage and image information [ 45]) to improve the performance on a target task. Our work focuseson the inductive transfer learning setting and proposes an upper bound on transferability of therepresentations of pre-trained models to downstream tasks.Analytical works for learning under distribution shifts: Prior works [ 8,7,52,36,32,37,38]analytically explained learning under distribution shifts using distributional divergence betweenthe marginal distributions and a label mismatch term. However, these results are applicable underassumptions such as covariate or label shift which need not be satisfied in TL where both the datadistribution and the label spaces can be different (see App. B for detailed comparison). Recently, [ 56]proposed an upper bound on transferability in a restrictive setting of the same features for both tasks,however, our analysis does not require such an assumption. Other works [ 9,47,41] analyzed therepresentation for the multi-task learning setting. These works showed that when tasks are weaklyrelated, a single representation space (model) may not perform well for all tasks. However, the TLsetting differs from both of these and our work aims to analyze transferability in this setting.Score-based transferability estimation (SbTE): These works [ 5,40,29,61,55,39] use data fromthe target task and produce a score correlated with transferability. Such a score is useful for selectingthe model from a model zoo that leads to the best transferability to a target task. [ 56] proposed theNegative Conditional Entropy (NCE) score that predicts transferability using the negative conditionalentropy between labels of the tasks but requires the two tasks to have the same input instances. [ 6]estimates transferability by solving the HGR maximum correlation problem and using normalizedHscore, in the same setting as [ 56]. [40] proposed the LEEP score and computed NCE using soft(pseudo) labels for the target task from a pre-trained model. OT-CE [ 55] combined Wassersteindistance [ 3] and NCE whereas [ 5,61] estimate likelihood and the marginalized likelihood of labeledtarget examples to estimate transferability. [33] proposes a model-agnostic approach that also relieson optimal transport to compute the distance between the tasks similar to OTDD [ 3]. In contrast, wefocus on analyzing transferability in terms of task-relatedness theoretically along with demonstratingits effectiveness as a transferability estimation metric for the pre-trained model selection problem.3 Analysis of TL using task-relatednessProblem setting and notations: LetPR(x, y)andPT(x, y)denote the distributions of the referenceand the target tasks, defined on XR×YRandXT×YTrespectively. We assume that the feature spacesare common ( XR=XT=X) such as RGB images, but the reference label set YR={1,2,···, KR}and the target label set YT={1,2,···, KT}can be entirely different. We assume the number ofreference task classes ( KR) are greater than or equal to the number of target classes ( KT). Inthe TL setting, an encoder (feature extractor) g:X → Z is pre-trained on a dataset with orwithout labels depending on the training method (e.g., supervised vs. self-supervised). We denotethe resultant push-forward distributions of RandTon the encoder output space as PR(z, y)andPT(z, y). With a fixed encoder g, a classifier (linear or non-linear), h(z) :Z → ∆, that outputs aprobability vector is learned for the reference ( hR) and the target ( hT) separately, where ∆R/T isaKR/KTsimplex for R/T . The classifier hR= arg min h∈HE(z,y)∈PR[l(h(z;g), y)]andhT=arg min h∈HE(z,y)∈PT[l(h(z;g), y)]whereHis the set of classifiers and l(h(z), y) =−log(h(z)y)is the cross-entropy loss. Table 3 in App. A summarizes the notations used in our work. Next, wedefine transferability as commonly used in the literature.3BeeLionWolfCatDogCatDogPrior TransformCLabel TransformBFeature TransformAhR(z)hR!zPR(z,y)CatDogDistributionmismatch Reference R(e.g., Imagenet)Target T(e.g., CIFAR-10)PR!(z,y)PR!!(z,y)PR!!!(z,y)PT(z,y)hR!!zhR!!!zhTzR!R!!R!!!LionWolfFigure 2: : Overview of our task transformation model: A series of transformations are applied tothe reference distribution PR(z, y)and classifier hRto produce the transformed distribution PR′′′and classifier hR′′′to explain transferability to the downstream target task. Class-prior transformation(R→R′) changes the class prior of the reference distribution (e.g., an irrelevant Bee class in Rnowhas smaller prior) followed by label set transformation ( R′→R′′) (e.g., to match {Lion, Wolf }with{Cat, Dog }), followed by feature space transformation ( R′′→R′′′) to match the feature distributionof the target task PT(z, y).Definition 1. (Transferability). Transferability of the representations from an encoder gon a targettaskTfor classifiers in His defined as E(z,y)∈PT[l(hT(z;g), y)].In the next section, we show the analysis with Has the class of linear classifiers for ease of explanationand discuss its extension to non-linear classifiers in App. A.5. Proofs for Sec. 3 are in App. A.3.1 Our task transformation modelThe reference and the target tasks share the same encoder but do not share label sets or datadistributions. Therefore, to relate the two tasks, we propose a chain of three simple transformations:1) prior transformation (from RtoR′), 2) label transformation (from R′toR′′), and 3) featuretransformation (from R′′toR′′′). The R′, R′′, R′′′are intermediate domain names after each of thetransformations are applied. The corresponding classifier in each domain is denoted by hR′,hR′′, andhR′′′as illustrated in Fig. 2. The distribution after the transformations ( PR′′′) has the same featureZR′′′=ZT=Zand label sets YR′′′=YTas the target task T, and consequently, the loss of thetransformed classifier hR′′′and the target classifier hTcan be related.Class-prior transformation (R→R′):Since the reference task has more classes than the targettask ( KR≥KT), many of the reference task classes are likely irrelevant for transfer to the targetclasses, e.g., while transferring from ImageNet to CIFAR10, only a small portion of ImageNet classesare relevant to CIFAR10 classes. The prior transformation accounts for the relative importance of thereference classes. This is illustrated in Fig. 2, where changing the class prior of Rreduces the prior ofthe Bee class and increases the priors of Wolf and Lion classes (shown by the changed size of classesWolf and Lion in R′). While transforming the prior of R, we keep the conditional distribution and theclassifier the same i.e., PR′(z|y) =PR(z|y)andhR′(z) =hR(z). Lemma 1 in App. A.2.1 showsthat the expected loss of the classifier hRonR′is a re-weighted version of the loss of hRonR.Label transformation (R′→R′′):Next, we use a label transformation to match the label sets ofthe new domain R′′and that of the target domain. To this end, we specify the conditional distributionBij:=P(yR′′=i|yR′=j)(Bij∈[0,1],∀i, j,PiBij= 1,∀j). The label yR′′of anexample from the domain R′′is obtained via BP(yR′). This generative process doesn’t requirethe feature, i.e., PR′′(yR′′|yR′, z) =PR′′(yR′′|yR′).Bwith sparse entries (i.e., only one entry ofa column is 1) models a deterministic map from YRtoYT;Bwith dense entries models a weakerassociation. This process is illustrated in Fig. 2 which shows the map from {Bee, Wolf, Lion } ⊂ Y R′to{Dog, Cat } ⊂ Y Tafter using B. Under this model, a reasonable choice of classifier for R′′ishR′′(z) =BhR′(z). Lemma 2 in App. A.2.2 shows that the expected loss of hR′′depends on theloss of hR′and the conditional entropy between the label sets of the tasks R′andR′′and Corollary 1shows the conditions for optimality of hR′′.Feature transformation ( R′′→R′′′):The final step involves changing the feature space of thedistribution R′′. We apply an invertible linear transformation Ato the distribution in R′′to obtain thenew distribution R′′′. After the transformation, the classifier associated with the new domain R′′′ishR′′′(z) =hR′′(A−1(z)). This is illustrated in Fig. 2 after feature transform using A. Lemma 3 in4App. A.2.3 shows that a linear transform of the space and classifier does not incur any additional lossand Corollary 2 shows that the optimality of hR′′implies optimality of hR′′′. Using these, we getTheorem 1 by defining conditional entropy as followsH(YR′′|YR′) =−XyR′∈YR′XyR′′∈YR′′PR′(yR′)ByR′′,yR′log(ByR′′,yR′). (1)Theorem 1. LetC:=hPR′(y)PR(y)iKRy=1be a vector of probability ratios , Bbe aKT×KRmatrix withBij=P(yR′′=i|yR′=j),A:Z → Z be an invertible linear map of features. Let the classifiershR′(z) :=hR(z),hR′′(z) :=BhR′(z),hR′′′(z) :=hR′′(A−1(z)). Assuming lis the cross-entropyloss, we haveEPR′′′(z,y)[l(hR′′′(z), y)]≤EPR(z,y)[C(y)l(hR(z), y)]| {z }Re-weighted reference loss+H(YR′′|YR′)|{z}Label mismatch.Theorem 1 provides an upper bound on the loss of the final transformed classifier/distribution interms of the loss of the reference classifier/distribution. The re-weighted reference loss shows that theperformance of the transformed classifier on the new domain is linked to the label-wise re-weightedloss of the reference classifier on R. This implies that one can use only the relevant reference classesto contribute to the bound. The label mismatch term shows that the performance of the distributionR′′′andRdepends on the conditional entropy H(YR′′|YR′;B)between the label distributions ofthe domain R′′andR′. A high value of Himplies that the labels of the reference task are unrelatedleading to lower transferability, whereas a low Himplies higher transferability. Corollary 3 inApp. A.2.4 shows when the bound in Theorem 1 becomes equality.3.2 Distribution mismatch between PR′′′andPTAfter the three transformations, the transformed reference PR′′′(z, y)can be compared with the targetPT(z, y). However, these are only simple transformations and PR′′′cannot be made identical toPTin general. This mismatch can be measured by the Wasserstein or Optimal Transport distance[44, 58]. Since our goal is to match two joint distributions defined on Z × Y we used((z, y),(z′, y′)) :=∥z−z′∥2+∞ ·1y̸=y′, (2)withz, z′∈ Z andy, y′∈ Y as our base distance [53] to define the (type-1) Wasserstein distanceWd(P, Q) := infπ∈Π(P,Q)E((z,y),(z′,y′))∼π[d((z, y),(z′, y′))]. (3)Using Eq. 2, the Wasserstein distance between the joint distributions is the weighted sum of theWasserstein distance between conditional distributions ( P(z|y)) (Lemma 4 in App. A). Theorem 2below explains the gap between the losses due to the distribution mismatch.Assumption 1. 1) The composition of the loss function and the classifier l◦his aτ−Lipschitzfunction w.r.t to ∥ · ∥ 2norm, i.e., |l(h(z), y)−l(h(z′), y)| ≤τ∥z−z′∥2for all y∈ Y,z, z′∈ Zwhere h∈ H. 2)PT(y) =PR′′′(y).The assumption 2), can be satisfied since we have full control on the prior PR′′′(y)viaBandC.Theorem 2. Let the distributions TandR′′′be defined on the same domain Z ×Y and assumption 1holds, thenEPT(z,y)[l(h(z), y)]−EPR′′′(z,y)[l(h(z), y)]≤τ Wd(PR′′′, PT)|{z }Distribution mismatch,withdas in Eq. 2.Theorem 2 shows that when l◦hisτ−Lipschitz then the performance gap between the R′′′andTisbounded by the type-1 Wasserstein distance between the two distributions. The Lipschitz coefficientof the composition can be bounded by τ, by penalizing the gradient norm w.r.t zat training time. Thus,for linear fine-tuning, we train the classifiers hRandhTwith an additional gradient norm penaltymax{0,∥∇zl(h(z), y)∥2−τ}to make them conform to the Lipschitz assumption (see App. C.3).Note that constraining the Lipschitz constant restricts the hypothesis class. The trade-off between theLipschitz constant and the performance of his empirically evaluated in App. C.3.1.5CIFAR10CIFAR100-SPetsDTDCIFAR100-MCIFAR100Aircraft024LossRN50CIFAR10CIFAR100-SPetsDTDCIFAR100-MCIFAR100Aircraft024ADV(1) RN50CIFAR10CIFAR100-SPetsDTDCIFAR100-MCIFAR100Aircraft024SIMCLR RN50CIFAR10CIFAR100-SPetsDTDCIFAR100-MCIFAR100Aircraft024SWAV RN50AG_NewsYELP-5SST-5012DistilRoBERTaTarget Loss Reweighted Reference Loss Label Mismatch Distribution MismatchFigure 3: Task-relatedness (decomposed into its components) produces a small gap to transferability(blue bars). As the task-relatedness between the reference (ImageNet (for CV), DBPedia (for NLP)),and the target tasks (x-axis) increases, the transferability improves. (Note: the label mismatch term iszero in our figures as Bis fixed to a sparse matrix, see Sec. 3.4.)3.3 Bounding transferability using task-relatednessHere, we combine the results obtained in Theorem 1 and Theorem 2. The final bound proposedin Theorem 3 is one of our main contributions which explains transferability as a sum of threeinterpretable and measurable gaps.Theorem 3. Letlbe the cross-entropy loss, then under assumptions of Theorems 1 and 2,EPT(z,y)[l(hT(z), y)]≤EPR(z,y)[C(y)l(hR(z), y)]| {z }Re-weighted reference loss+H(YR′′|YR′)|{z}Label mismatch+τ Wd(PR′′′, PT).| {z }Distribution mismatchThe theorem shows that transferability can be decomposed into the loss incurred while transformingthe class prior distribution, label space, and feature space of the reference distribution (first two terms)and the residual distance between the distribution obtained after transformations and the actual targetdistribution (last term). Based on the terms in the upper bound we define task-relatedness as follows.Definition 2. (Task-relatedness). The relatedness between a target and a reference task is defined asEPR(z,y)[C(y)l(hR(z), y)] +H(YR′′|YR′) +τ Wd(PR′′′, PT).A smaller value of the task-relatedness measure implies higher relatedness of the reference and thetarget tasks. In particular, when the target task is a transformation of the reference task then thereexist transformations A, B, andCsuch that the distribution R′′′perfectly matches the distributionof the target task (i.e., Wd(PR′′′, PT) = 0 ). Moreover, when labels are deterministically related(Corollary 3) our bound becomes an equality.Lastly, while we presented an analysis for linear fine-tuning here (for simplicity of presentation), ourbounds hold for non-linear classifiers and non-linear feature transformations as well (see App. A.5).3.4 Estimating task-relatednessThe optimization problem for learning the transformations A, B , and Cto compute task-relatednessin Theorem 3 is presented below. We use two new variables: inverse of the transformation A, denotedby ̄A:=A−1and a transformed reference prior distribution denoted by D(y) :=C(y)PR(y).minA, ̄A,B,DEPR(z,y)D(y)PR(y)l(hR(z), y)+H(YR′′|YR′;B, D) +τWd(PR′′′, PT;A, B)s.t. A ̄A= ̄AA=I, P T(y) =BD,XiBij= 1∀j,Xy∈YRD(y) = 1 ,Bij∈[0,1]∀i, j, andDi∈[0,1]∀i.(4)Alg. 1 shows how we solve Eq. 4 (see App. D for additional details of the algorithm). Fig. 8 inApp. C.1.2 shows how the upper bound is reduced as the optimization proceeds. Computationally, asingle epoch of Alg. 1 takes a mere 0.17 seconds on our hardware for transfer from ImageNet to Petsfor the ResNet-18 model (we ran Alg. 1 for 2000 epochs). In App. C.1, we show the effectiveness oflearning the transformations using Alg. 1 on small-scale transfer tasks. Our results show that when6Algorithm 1 Minimization of the bound in Theorem 3Input : Reference task samples and labels ( ZR, YR), Target task samples ( ZT), Target task labels(YT) (optional).Output : Estimate of task-relatedness using the learned transformations A, ̄A, B, D .Init: A:= ̄A:=I,D:=PR(y), random B∈RKT×KR1:Randomly sample nRpoints (ziR, yiR)∼(ZR, YR)as per the class prior D.2:ifYTis available then3: Randomly sample nTpoints (zjT, yjT)∼(ZT, YT).4:else5: Randomly sample nTpoints (zjT)∼(ZT).6: # Compute pseudo-labels for the target samples zT.7: yjT= arg max y∈YTBhR(A−1zT) forj= 1,···, nT.8:end if9:Compute (ziR′′′, yiR′′′) = (AziR,arg max yBe(yiR)), fori= 1,···, nR.10:Assign YR′:=YRandYR′′:=YT.11:Compute the optimal coupling π∗between the distributions R′′′andTby minimizingWd(PR′′′, PT), i.e.,minπ∈Π(PR′′′,PT)Xi,jπij ̃d((ziR′′′, yiR′′′),(zjT, yjT))s.t.Xjπij=1nR∀i,Xiπij=1nT∀j.12:Using π∗, solve for A, ̄A, B, D using mini-batch SGDminA, ̄A,B,DXi,jπ∗i,j ̃d((ziR′′′, yiR′′′),(zjT, yjT))+1nRXiD(yi)PR(yi)l(hR(ziR), yi) +H(YR′′|YR′)+∥PT(y)−BD∥22+ (∥A ̄A−I∥F+∥ ̄AA−I∥F).13:Repeat 1 - 12 until convergence.the reference task has classes semantically related to the target task, Alg. 1 learns transformations thatachieve the smallest gap to transferability. However, since finding data semantically related to thetarget task may not always be possible we choose a reference task with the same number of classes asthe target and fix that matrix Bto a random permutation of identity (making the label mismatch termzero) and Dto the prior of the reference task, learning only the transformation A, in our experiments.4 Empirical AnalysisHere, we empirically demonstrate the effectiveness of task-relatedness in explaining transferability invarious settings. We present additional results in App. C and dataset/experimental details in App. D.Our codes can be found at https://github.com/akshaymehra24/TaskTransferAnalysis .4.1 Task-relatedness achieves a small gap to actual transferabilityTask-relatedness tightly upperbounds trans ferabilityacross variousarchitectures, pretrain ingmeth -ods, anddatasets. We demonstrate this by using various pre-trained models with architectures suchas Vision Transformers (ViT) [ 20], ResNet-18/50/101/152 [ 27], DistilRoBERTa [ 34] trained withvarious pretraining methods including supervised training, adversarial training [ 48], SimCLR [ 11],MoCo [ 26], SwA V [ 10], and MAE [ 25]. We also consider a wide range of target datasets including,CIFAR10/100, Aircraft, Pets, DTD, AG-News, Yelp-5, and SST-5 whose details are in App. D.7MNIST FMNIST USPSTarget TasksMNIST FMNIST USPSReference Tasks3.43 2.803.76 3.533.12 3.65Task-relatedness0123MNIST FMNIST USPSTarget TasksMNIST FMNIST USPSReference Tasks1.72 1.231.77 1.641.14 1.70Transferability0.00.51.01.5(a)PE FFE PE FFE0123Loss2.022.46USPS1.882.032.202.48MNIST-M2.122.27MNISTPE FFE PE FFE01232.022.66USPS1.942.152.202.65MNIST-M2.042.25SVHNTarget LossReweighted Reference LossLabel MismatchDistribution Mismatch (b)Figure 4: (a) Task-relatedness and transferability are highly correlated across various reference-targetpairs. (b) Improving the transferability of an encoder on a reference task (in the plot title) leads toimproved transferability of all related target tasks (x-axis). (e.g., compared to the original pre-trainedCLIP encoder (PE), a end-to-end fine-tuned CLIP encoder (FFE) on the reference task achieveshigher transferability to all related tasks.)For this experiment, we fix the reference task to be ImageNet [ 17] for image classification and toDBPedia for sentence classification tasks and use Alg. 1 to estimate task-relatedness. The resultsin Fig. 3 and 9 (in the Appendix) show that our bound achieves a small gap to actual transferability.As the task-relatedness between the reference and the target tasks improves, transferability alsoimproves showing that task-relatedness and transferability are strongly correlated. Task-relatednessisalso strongly correlated with theaccuracy oftheend-to-endfine-tuned classifiers onthetargettask. In Fig. 11 (in the Appendix), we show high Pearson correlation coefficients ( ≥ −0.57) fortask-relatedness and accuracy after fully fine-tuning various pre-trained encoders using data fromvarious target tasks.4.2 Effect of the reference task on task-relatednessHighly related reference–targettaskpairs, based ontask-relatedness, achieve higher trans ferabilitycoincidingwith thesemanticrelatedness between tasks. To understand how a reference task affectstask-relatedness and eventually transferability, we consider two experiments using convolutionaland CLIP-trained models with various character recognition tasks such as MNIST, Fashion-MNIST(FMNIST), SVHN, MNIST-M, and USPS. Of these datasets, SVHN and MNIST-M contain coloredimages while the rest contain gray-scale images. In the first experiment, we train convolutionalmodels on MNIST, FMNIST, and USPS and measure pairwise transferability. Here we use thereference task to be the same task as that used for training the models. The results in Fig. 4(a)show that transferability to those target tasks is higher for which task-relatedness metric’s value issmaller. Specifically, USPS achieves the best transferability ( 1.23) and the smallest task-relatedness(2.80) when the reference task is MNIST. This is attributed to both datasets containing gray-scaleimages of digits. On the other hand, when the reference task is unrelated to the target task i.e., thetask-relatedness value is high, transferability suffers, e.g., when the reference task is MNIST and thetarget task is FMNIST. Results in App. C.2.2 show similar results for the sentence classification task.Thegapbetween task-relatedness andtrans ferabilityissmaller when areference taskperforms wellwith agiven encoder. Here we use MNIST and SVHN as two reference tasks and compute thetask-relatedness and transferability with USPS and MNIST-M as target tasks, using CLIP (Vit B32)model. A linear classifier trained on top of the embeddings from the CLIP model achieves ≈98%accuracy on MNIST but only ≈61% accuracy for SVHN. Due to this, transferability (USPS:2.02,MNIST-M:2.20) explained using task-relatedness with MNIST as the reference task (USPS: 2.46,MNIST-M: 2.48) is better than that computed using SVHN (USPS:2.66, MNIST-M:2.65) as thereference, even though MNIST-M is intuitively more similar to SVHN (as both contain coloredimages of digits). This is evident from the results of PE (Pre-trained Encoder) in Fig. 4(b).Improvingtheperformance ofanencoder onareference task improves trans ferabilitytootherrelated (potentially unseen) tasks. To show this we fully fine-tune the CLIP encoder on MNIST andSVHN tasks, increasing the accuracy of the classifiers for both MNIST and SVHN to 99% and 95%,respectively. Using the representations from these new encoders, we find that the transferability ofboth related target tasks improves along with task-relatedness (see FFE results in Fig. 4(b)). Here,8Table 1: Task-relatedness achieves high (negative) Pearson correlation to the accuracy after end-to-endfine-tuning for various tasks. For NCE [ 56], Leep [ 40], LogMe [ 61], SFDA [ 51], OT-NCE, OTCE[55], and H-score [ 6]positive correlation is better whereas for PACTran [ 19] and task-relatedness(ours) negative correlation is better.Target task LogMe Leep NCE PACTran SFDA H-Score OT-NCE OTCE OursPets 0.82 0.80 0.73 -0.82 0.57 0.77 0.88 0.86 -0.77DTD 0.88 0.96 -0.19 -0.85 0.90 0.89 0.84 0.82 -0.97Aircraft -0.60 0.92 0.97 0.11 0.72 -0.80 0.56 0.60 -0.72Average 0.37 0.90 0.50 -0.52 0.73 0.29 0.76 0.76 -0.82we see that task-relatedness for MNIST-M and USPS is the best when the reference task is SVHNand MNIST, respectively, aligning with our intuition of semantic relatedness between these tasks.This also suggests that transferability on other related tasks can be improved by fully fine-tuning theencoder on these reference tasks. Thus, in scenarios where target tasks are private (such as proprietaryChest X-rays), an encoder trained to work well on related tasks (such as publicly available ChestX-rays) is bound to achieve good transferability.4.3 Task-relatedness for end-to-end transferability estimationIn this section, we show an efficient way of computing task-relatedness which enables its use forestimating transferability after end-to-end fine-tuning. While Alg. 1, accurately estimates task-relatedness by minimizing the bound in Eq. 3, it could be inefficient due to the requirement ofcomputing and minimizing the Wasserstein distance between distributions at every epoch. Thus, tomake the computation efficient, we replace the Wasserstein distance computation in step 11 and 12 ofAlg. 1, with mean and covariance matching terms. Specifically, we define the distance between twodistributions R′′′andTasΓ(R′′′, T) :=∥μR′′′−μT∥22+λ∥ΣR′′′−ΣT∥22, (5)where μR′′′/T:=1nR′′′/TPz∈PR′′′/Tz,ΣR′′′/T:=1nR′′′/TPz∈PR′′′/T(z−μR′′′/T)T(z−μR′′′/T),andλis a regularization coefficient. Using Γ(R′′′, T)in place of Wd(R′′′, T), makes the computationof task-relatedness by learning transformations A, B, andCsignificantly more efficient.Task-relatedness isaneffectivemetricforthepre-trained model selectionproblem. The goal of thisproblem is to find a pre-trained model from a model zoo that achieves the best accuracy on a giventarget task after end-to-end fine-tuning of the model using labeled target data. Since end-to-endfine-tuning is costly (takes almost a day to fully fine-tune a single model on a single target task asshown by [ 61]), an effective transferability metric is significantly more efficient to compute and iscorrelated well with the accuracy after end-to-end finetuning. Using 5 different pre-trained models(supervised ResNet-50/101/152, adversarially pre-trained [ 48] ResNet50 with ε∈ {0.1,1}) andImageNet as the reference task, we show in Table 1, that task-relatedness achieves a high correlationwith the accuracy after end-to-end fine-tuning on the target task. Our results also highlight theinstability of various popular SbTE metrics, such as LogMe [ 61] and NCE [ 56] which can producea high negative correlation, and those of PACTran [ 19] which achieve low correlation values oncomplex datasets. In comparison, task-relatedness consistently achieves a good correlation for varioustarget tasks. Computationally, it takes a mere 3-4 minutes to learn the transformations to computetask-relatedness, providing a significant computation advantage over end-to-end fine-tuning. We alsoshow that task-relatedness remains highly correlated with end-to-endfine-tuningaccuracy even withalimitedamount oflabeled data from thetargettask as shown in Fig. 5 unlike other SbTE metrics.Next, we show that task-relatedness caneven beestimated withoutusinglabelsfrom thetargettask.Table 2: Correlation of task-relatedness andend-to-end fine-tuning accuracy computed us-ing true and pseudo labels of the target task.TargetTruelabelsPseudolabelsPets -0.77 -0.76DTD -0.97 -0.91Aircraft -0.72 -0.16For scenarios, where labeled data from the tar-get task is unavailable, estimating transferabilityis challenging. This is because both fine-tuningand most SbTE methods require labels to com-pute the transferability scores. Here we show thattask-relatedness can still be an effective measure toestimate transferability in this challenging setting.Since we use a transformative model and have ac-cess to a reference task/classifier, we can use thepredictions from the reference task’s classifier trans-9Figure 5: Task-relatedness (Ours) remains highly correlated with accuracy after end-to-end fine-tuningon a target task even when using a small percentage of target data unlike other SbTE methods (LogME,Leep, NCE, PACTran, OT-NCE, OTCE, and H-Score) whose correlation is affected significantly.For LogMe, Leep, NCE, OT-NCE, OTCE, and H-score positive correlation is better whereas forPACTran and task-relatedness (ours) negative correlation is better.formed via B(to obtain labels ∈ YT) and estimate the pseudo -labels of the target data. Concretely,pseudo-label for a target sample xTis obtained as ypseudoT = arg max y∈YTBhR(A−1(zT)). Resultsin Table 2 show that our task-relatedness estimated via pseudo-labeled target data still achieves a highcorrelation to transferability on most datasets. For datasets such as Pets and DTD, where transformingthe reference task classifier produces high accuracy on the target task, the difference between thepseudo and true labels is small. Consequently, the difference in the correlations with pseudo andtrue labels is also small. Thus, when the reference and target tasks are related, transferability can beestimated accurately without requiring labels of the target task, showing that task-relatedness is aneffective metric even for unsupervised transferability estimation.5 ConclusionWe analyzed TL in terms of the relatedness between the target and a reference task. Our analysis worksby transforming the distribution of a reference task to match that of the target. Using this we provedan upper bound on transferability, defined as task-relatedness, consisting of three interpretable terms,namely, the re-weighted reference task loss, label mismatch, and distribution mismatch. We proposedan algorithm to compute task-relatedness and demonstrated its effectiveness at accurately predictingtransferability (even without target labels) with SOTA models. Moreover, the high correlation oftask-relatedness with accuracy after end-to-end fine-tuning and its efficient computability, makes itan effective metric for transferability estimation.Limitations. We studied transferability using the cross-entropy loss and used Wasserstein distance-based distribution shift analysis due to their popularity. However, due to accuracy being the primarymetric of interest in classification tasks and the difficulty of computing the Wasserstein distance withlimited samples in a high dimensional representation space, extending the analysis to 0-1 loss andother divergence measures are important directions which are not addressed here and are left forfuture works.6 AcknowledgmentWe thank the anonymous reviewers of this work for their insightful comments and suggestions. Thiswork was supported by the NSF EPSCoR-Louisiana Materials Design Alliance (LAMDA) program#OIA-1946231.10References[1]Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji,Charless C Fowlkes, Stefano Soatto, and Pietro Perona. Task2vec: Task embedding for meta-learning. In Proceedings of the IEEE/CVF international conference on computer vision , pages6430–6439, 2019.[2]Isabela Albuquerque, João Monteiro, Mohammad Darvishi, Tiago H Falk, and IoannisMitliagkas. Generalizing to unseen domains via distribution matching. arXiv preprintarXiv:1911.00804 , 2019.[3]David Alvarez-Melis and Nicolo Fusi. Geometric dataset distances via optimal transport. arXivpreprint arXiv:2002.02923 , 2020.[4]Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarialnetworks. In International conference on machine learning , pages 214–223. PMLR, 2017.[5]Yajie Bao, Yang Li, Shao-Lun Huang, Lin Zhang, Lizhong Zheng, Amir Zamir, and LeonidasGuibas. An information-theoretic approach to transferability in task transfer learning. In 2019IEEE International Conference on Image Processing (ICIP) , pages 2309–2313, 2019.[6]Yajie Bao, Yang Li, Shao-Lun Huang, Lin Zhang, Lizhong Zheng, Amir Zamir, and LeonidasGuibas. An information-theoretic approach to transferability in task transfer learning. In 2019IEEE international conference on image processing (ICIP) , pages 2309–2313. IEEE, 2019.[7]Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jen-nifer Wortman Vaughan. A theory of learning from different domains. Machine learning ,79(1):151–175, 2010.[8]Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira, et al. Analysis of represen-tations for domain adaptation. Advances in neural information processing systems , 19:137,2007.[9]Shai Ben-David and Reba Schuller. Exploiting task relatedness for multiple task learning. InLearning theory and kernel machines , pages 567–580. Springer, 2003.[10] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin.Unsupervised learning of visual features by contrasting cluster assignments. Advances in neuralinformation processing systems , 33:9912–9924, 2020.[11] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple frameworkfor contrastive learning of visual representations. In International conference on machinelearning , pages 1597–1607. PMLR, 2020.[12] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervisedvision transformers, 2021.[13] Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi.Describing textures in the wild. In Proceedings of the IEEE conference on computer vision andpattern recognition , pages 3606–3613, 2014.[14] Quan Cui, Bingchen Zhao, Zhao-Min Chen, Borui Zhao, Renjie Song, Boyan Zhou, JiajunLiang, and Osamu Yoshie. Discriminability-transferability trade-off: An information-theoreticperspective. In European Conference on Computer Vision , pages 20–37. Springer, 2022.[15] Jifeng Dai, Yi Li, Kaiming He, and Jian Sun. R-fcn: Object detection via region-based fullyconvolutional networks. Advances in neural information processing systems , 29, 2016.[16] Bharath Bhushan Damodaran, Benjamin Kellenberger, Rémi Flamary, Devis Tuia, and NicolasCourty. Deepjdot: Deep joint distribution optimal transport for unsupervised domain adaptation.InProceedings of the European Conference on Computer Vision (ECCV) , pages 447–463, 2018.[17] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and patternrecognition , pages 248–255. Ieee, 2009.11[18] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training ofdeep bidirectional transformers for language understanding, 2019.[19] Nan Ding, Xi Chen, Tomer Levinboim, Soravit Changpinyo, and Radu Soricut. Pactran: Pac-bayesian metrics for estimating the transferability of pretrained models to classification tasks.InEuropean Conference on Computer Vision , pages 252–268. Springer, 2022.[20] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai,Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly,Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for imagerecognition at scale, 2021.[21] Kshitij Dwivedi, Jiahui Huang, Radoslaw Martin Cichy, and Gemma Roig. Duality diagramsimilarity: a generic framework for initialization selection in task transfer learning. In EuropeanConference on Computer Vision , pages 497–513. Springer, 2020.[22] Kshitij Dwivedi and Gemma Roig. Representation similarity analysis for efficient task taxonomy& transfer learning. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition , pages 12387–12396, 2019.[23] Remi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z Alaya, Aureie Boisbunon,Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, et al. Pot:Python optimal transport. The Journal of Machine Learning Research , 22(1):3571–3578, 2021.[24] Jiechao Guan and Zhiwu Lu. Task relatedness-based generalization bounds for meta learning.InInternational Conference on Learning Representations , 2022.[25] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Maskedautoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition , pages 16000–16009, 2022.[26] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast forunsupervised visual representation learning. In Proceedings of the IEEE/CVF conference oncomputer vision and pattern recognition , pages 9729–9738, 2020.[27] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for imagerecognition, 2015.[28] Minyang Hu, Hong Chang, Zong Guo, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Under-standing few-shot learning: Measuring task relatedness and adaptation difficulty via attributes.Advances in Neural Information Processing Systems , 36, 2024.[29] Long-Kai Huang, Ying Wei, Yu Rong, Qiang Yang, and Junzhou Huang. Frustratingly easytransferability estimation, 2022.[30] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images.., 2009.[31] Aounon Kumar, Alexander Levine, Tom Goldstein, and Soheil Feizi. Certifying model accuracyunder distribution shifts. arXiv preprint arXiv:2201.12440 , 2022.[32] Trung Le, Tuan Nguyen, Nhat Ho, Hung Bui, and Dinh Phung. Lamda: Label matchingdeep domain adaptation. In International Conference on Machine Learning , pages 6043–6054.PMLR, 2021.[33] Xinran Liu, Yikun Bai, Yuzhe Lu, Andrea Soltoggio, and Soheil Kolouri. Wasserstein taskembedding for measuring task similarities. arXiv preprint arXiv:2208.11726 , 2022.[34] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, MikeLewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretrainingapproach. arXiv preprint arXiv:1907.11692 , 2019.[35] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151 , 2013.12[36] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learningbounds and algorithms. arXiv preprint arXiv:0902.3430 , 2009.[37] Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, and Jihun Hamm. Understanding the limitsof unsupervised domain adaptation via data poisoning. In Thirty-Fifth Conference on NeuralInformation Processing Systems , 2021.[38] Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, and Jihun Hamm. Do domain generalizationmethods generalize well? In NeurIPS ML Safety Workshop , 2022.[39] Akshay Mehra, Yunbei Zhang, and Jihun Hamm. Test-time assessment of a model’s performanceon unseen domains via optimal transport. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition (CVPR) Workshops , pages 173–182, June 2024.[40] Cuong V . Nguyen, Tal Hassner, Matthias Seeger, and Cedric Archambeau. Leep: A newmeasure to evaluate transferability of learned representations, 2020.[41] Vishakh Padmakumar, Leonard Lausen, Miguel Ballesteros, Sheng Zha, He He, and GeorgeKarypis. Exploring the role of task transferability in large-scale multi-task learning. arXivpreprint arXiv:2204.11117 , 2022.[42] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions onKnowledge and Data Engineering , 22(10):1345–1359, 2010.[43] Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012IEEE conference on computer vision and pattern recognition , pages 3498–3505. IEEE, 2012.[44] Gabriel Peyré, Marco Cuturi, et al. Computational optimal transport: With applications to datascience. Foundations and Trends® in Machine Learning , 11(5-6):355–607, 2019.[45] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal,Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visualmodels from natural language supervision. In International conference on machine learning ,pages 8748–8763. PMLR, 2021.[46] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-timeobject detection with region proposal networks. Advances in neural information processingsystems , 28, 2015.[47] Sebastian Ruder. An overview of multi-task learning in deep neural networks. arXiv preprintarXiv:1706.05098 , 2017.[48] Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do ad-versarially robust imagenet models transfer better? Advances in Neural Information ProcessingSystems , 33:3533–3545, 2020.[49] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled versionof bert: smaller, faster, cheaper and lighter, 2020.[50] Vikash Sehwag, Saeed Mahloujifar, Tinashe Handina, Sihui Dai, Chong Xiang, Mung Chiang,and Prateek Mittal. Robust learning meets generative models: Can proxy distributions improveadversarial robustness? arXiv preprint arXiv:2104.09425 , 2021.[51] Wenqi Shao, Xun Zhao, Yixiao Ge, Zhaoyang Zhang, Lei Yang, Xiaogang Wang, Ying Shan,and Ping Luo. Not all models are equal: predicting model transferability in a self-challengingfisher space. In European Conference on Computer Vision , pages 286–302. Springer, 2022.[52] Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. Wasserstein distance guided representationlearning for domain adaptation. In Thirty-Second AAAI Conference on Artificial Intelligence ,2018.[53] Aman Sinha, Hongseok Namkoong, Riccardo V olpi, and John Duchi. Certifying some dis-tributional robustness with principled adversarial training. arXiv preprint arXiv:1710.10571 ,2017.13[54] Jie Song, Yixin Chen, Jingwen Ye, Xinchao Wang, Chengchao Shen, Feng Mao, and MingliSong. Depara: Deep attribution graph for deep knowledge transferability. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 3922–3930, 2020.[55] Yang Tan, Yang Li, and Shao-Lun Huang. Otce: A transferability metric for cross-domaincross-task representations, 2021.[56] Anh T. Tran, Cuong V . Nguyen, and Tal Hassner. Transferability and hardness of supervisedclassification tasks, 2019.[57] Nilesh Tripuraneni, Michael Jordan, and Chi Jin. On the theory of transfer learning: Theimportance of task diversity. Advances in neural information processing systems , 33:7852–7862,2020.[58] Cédric Villani. Optimal transport: old and new , volume 338. Springer, 2009.[59] Karl Weiss, Taghi M Khoshgoftaar, and Dingding Wang. A survey of transfer learning. Journalof Big Data , 3(1):9, May 2016.[60] Han Xiao, Kashif Rasul, and Roland V ollgraf. Fashion-mnist: a novel image dataset forbenchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 , 2017.[61] Kaichao You, Yong Liu, Jianmin Wang, and Mingsheng Long. Logme: Practical assessment ofpre-trained models for transfer learning, 2021.[62] Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and SilvioSavarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEEconference on computer vision and pattern recognition , pages 3712–3722.[63] Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for textclassification. Advances in neural information processing systems , 28, 2015.14AppendixWe present the missing proofs of the theoretical results from Sec. 3 along with justifications for theclassifiers ( hR′, hR′′, hR′′′) as Corollaries in Appendix A followed by related work on learning in thepresence of distribution shift with the same feature and label space in Appendix B. This is followedby additional experimental results including NLP classification tasks with large pre-trained models inAppendix C. We conclude in Appendix D with details of the experiments and datasets used.A Proofs for Sec. 3A.1 NotationTable 3: Table of notationsData relatedR/T Reference/Target task.XR/T Images of reference/target tasks.YR/T Label set of reference/target tasks.KR/T Number of classes in reference/target tasks.PR/T(x, y)Data distribution of reference/target tasks.(ZR/T, YR/T)Samples (features and labels) of the reference/target tasks.Model relatedgEncoder pre-trained on a pre-training dataset.ZR/T Representations extracted for reference/target tasks using g.hR/T Classifier learned for reference/target tasks on the representations of g.lCross-entropy loss.τLipschitz constant.Task-transformation relatedA Parameters for feature transformation.B Parameters for label transformation.C Parameters for class-prior transformation.PR′(z, y)andhR′Distribution and classifier of R′after applying transformation ConR.PR′′(z, y)andhR′′Distribution and classifier of R′′after applying transformation BonR′.PR′′′(z, y)andhR′′′ Distribution and classifier of R′′′after applying transformation AonR′′.H Conditional entropy as defined in Eq. 1.dBase distance between two samples as defined in Eq. 2.WdWasserstein distance between two distributions defined in Eq. 3.ΓDistance between two distributions based on their mean/variance definedin Eq. 5.A.2 Our task transformation model (Sec. 3.1)A.2.1 Class-Prior transformation (R→R′)Lemma 1. LetC:=hPR′(y)PR(y)iKRy=1be a vector of probability ratios and the classifier hR′(z) :=hR(z), thenEPR′(z,y)[l(hR′(z), y)] =EPR(z,y)[C(y)l(hR(z), y)],for any loss function l.15Proof.EPR′(z,y)[l(hR′(z), y)] =EPR′(z,y)[l(hR(z), y)] =Xy∈YRPR′(y)EPR′(z|y)[l(hR(z), y)]=Xy∈YRPR(y)PR(y)PR′(y)EPR′(z|y)[l(hR(z), y)] =Xy∈YRPR(y)EPR′(z|y)[C(y)l(hR(z), y)]=Xy∈YRPR(y)EPR(z|y)[C(y)l(hR(z), y)] (since PR(z|y) =PR′(z|y) by construction)=EPR(z,y)[C(y)l(hR(z), y)].A.2.2 Label transformation (R′→R′′)Lemma 2. LetBbe a KT×KRmatrix with Bij=P(yR′′=i|yR′=j)andhR′′(z) := BhR′(z)andlbe the cross-entropy loss. Then, EPR′′(z,y)[l(hR′′(z), y)]≤EPR′(z,y)[l(hR′(z), y)] + H(YR′′|YR′), where H(YR′′|YR′)is the conditional entropy(−PyR′∈YR′PyR′′∈YR′′PR′(yR′)ByR′′,yR′log(ByR′′,yR′)).Proof. Note that P(z) :=PR′(z) =PR′′(z)by construction.EPR′′(z,y)[l(hR′′(z), y)] =EP(z,y′′)[l(hR′′(z), y′′)]=EP(z)EP(y′′|z)[l(hR′′(z), y′′)] =EP(z)Xy′′Xy′P(y′′, y′|z)(l(hR′′(z), y′′)) (since y′∈ YR′)=EP(z)EP(y′′,y′|z)[l(hR′′(z), y′′)]=EP(z)EP(y′|z)EP(y′′|y′)[l(hR′′(z), y′′)] (since P(y′′|y′, z) =P(y′′|y′))=EP(z)EP(y′|z)[Xy′′∈YR′′l(hR′′(z), y′′)By′′,y′] (since By′′,y′=P(y′′|y′))=EP(z,y′)[Xy′′∈YR′′l(BhR′(z), y′′)By′′,y′].Since the loss lis the cross-entropy loss, we havel(BhR′(z), y′′) = −log(Xj∈YR′By′′,jhjR′(z))≤ −log(By′′,y′hy′R′(z)) =−log(By′′,y′)−log(hy′R′(z)).16Therefore, we haveEPR′′(z,y)[l(hR′′(z), y)]=EP(z,y′)[Xy′′∈YR′′l(BhR′(z), y′′)By′′,y′]≤ −EP(z,y′)[Xy′′∈YR′′By′′,y′log(By′′,y′) + log( hy′R′(z))]=−EP(z,y′)[Xy′′∈YR′′By′′,y′log(By′′,y′)]−EP(z,y′)[Xy′′∈YR′′By′′,y′log(hy′R′(z))]=−EP(z,y′)[Xy′′∈YR′′By′′,y′log(By′′,y′)] +EP(z,y′)[−log(hy′R′(z))Xy′′∈YR′′By′′,y′]=−EP(z,y′)[Xy′′∈YR′′By′′,y′log(By′′,y′)] +EP(z,y′)[−log(hy′R′(z))]=EP(z,y′)[−Xy′′∈YR′′By′′,y′log(By′′,y′)] +EP(z,y′)[l(hR′(z), y′)]=EP(y′)EPR′(z|y′)[−Xy′′∈YR′′By′′,y′log(By′′,y′)] +EP(z,y′)[l(hR′(z), y′)]= [−Xy′∈YR′Xy′′∈YR′′P(y′)By′′,y′log(By′′,y′)] +EP(z,y′)[l(hR′(z), y′)]=H(YR′′|YR′) +EPR′(z,y)[l(hR′(z), y)].Corollary 1 below, shows the conditions under which the optimal softmax classifier for the domainR′remains optimal for the domain R′′, justifying our choice of classifier change from R′toR′′.Corollary 1. Letebe one-hot encoding of the labels, |YR′′|=|YR′|,Bbe a KT×KRpermutation matrix and hR′be the optimal softmax classifier for R′andyR′′:=σ(yR′) :=arg max y∈YR′′(Be(yR′))ythen under the assumptions of Lemma 2, hR′′(z) := BhR′(z)is theoptimal softmax classifier for R′′.Proof. Since yR′′:=σ(yR′) := arg max y∈YT(Be(yR′))ywe haveEPR′′[l(hR′′(z), yR′′)] = EP(z,y′′)[l(BhR′(z), y′′)] =Xy′′∈YR′′P(y′′)EP(z|y′′)[l(BhR′(z), y′′)]=Xy′∈YR′P(σ(y′))EP(z|σ(y′))[l(BhR′(z), σ(y′))]=Xy′∈YR′P(y′)EP(z|y′)[l(hR′(z), y′)] =EPR′[l(hR′(z), yR′)].The second last equality follows due to the symmetry of cross-entropy loss, i.e., l(h, y) =−loghy=−logBhσ(y)=l(Bh, σ (y)).Since minhR′′EPR′′[l(hR′′(z), yR′′)] = min hR′EPR′[l(hR′(z), yR′)]andhR′is optimal for R′wehavehR′′(z) :=BhR′(z)is the optimal softmax classifier for R′′.A.2.3 Feature transformation (R′′→R′′′)Lemma 3. LetA:Z → Z be an invertible linear map of features and the classifier hR′′′(z) :=hR′′(A−1(z)). Then EPR′′′(z,y)[l(hR′′′(z), y)] =EPR′′(z,y)[l(hR′′(z), y)]for any loss l.Proof. EPR′′′(z,y)[l(hR′′′(z), y)] = EPR′′′(z,y)[l(hR′′(A−1(z)), y)] = EPR′′(z,y)[l(hR′′(z), y)].17Our Corollary 2 below shows that the optimal softmax classifier for domain R′′remains optimal fordomain R′′′too.Corollary 2. LethR′′be the optimal softmax classifier in domain R′′then under the assumptions ofLemma 3, hR′′′(z) =hR′′(A−1(z))is the optimal softmax classifier in domain R′′′.Proof. When hR′′′(z) = hR′′(A−1(z)),minhR′′EPR′′(z,y)[l(hR′′(z), y)] =minhR′′′EPR′′′(z,y)[l(hR′′′(z), y)]by Lemma 3, hence if hR′′is optimal for R′′then so ishR′′′for the domain R′′′.A.2.4 Three transformations combined (R→R′′′)Theorem 1. LetC:=hPR′(y)PR(y)iKRy=1be a vector of probability ratios , Bbe aKT×KRmatrix withBij=P(yR′′=i|yR′=j),A:Z → Z be an invertible linear map of features. Let the classifiershR′(z) :=hR(z),hR′′(z) :=BhR′(z),hR′′′(z) :=hR′′(A−1(z)). Assuming lis the cross-entropyloss, we haveEPR′′′(z,y)[l(hR′′′(z), y)]≤EPR(z,y)[C(y)l(hR(z), y)]| {z }Re-weighted reference task loss+H(YR′′|YR′)|{z}Label mismatch.Proof.EPR′′′(z,y)[l(hR′′′(z), y)] = EPR′′(z,y)[l(hR′′(z), y)] (Lemma 3)≤EPR′(z,y)[l(hR′(z), y)] +H(YR′′|YR′) (Lemma 2)=EPR(z,y)[C(y)l(hR(z), y)] +H(YR′′|YR′) (Lemma 1) .Corollary 3. Letebe one-hot encoding of the labels, |YR′′′|=|YR|,B: ∆R′→∆R′′be apermutation matrix and yR′′:=σ(yR′) := arg max y∈YR′′(Be(yR′))ythen under the assumptionsof Lemmas 1, 2, and 3 we have EPR′′′(z,y)[l(hR′′′(z), y)] =EPR(z,y)[C(y)l(hR(z), y)].Proof. Since yR′′:=σ(yR′) := arg max y∈YT(Be(yR′))ywe haveEPR′′[l(hR′′(z), yR′′)] = EP(z,y′′)[l(BhR′(z), y′′)] =Xy′′∈YR′′P(y′′)EP(z|y′′)[l(BhR′(z), y′′)]=Xy′∈YR′P(σ(y′))EP(z|σ(y′))[l(BhR′(z), σ(y′))]=Xy′∈YR′P(y′)EP(z|y′)[l(hR′(z), y′)] =EPR′(z,y)[l(hR′(z), y)].The second last equality follows due to the symmetry of cross-entropy loss, i.e., l(h, y) =−loghy=−logBhσ(y)=l(Bh, σ (y)).Therefore, we haveEPR′′′(z,y)[l(hR′′′(z), y)] = EPR′′(z,y)[l(hR′′(z), y)] (Lemma 3)=EPR′(z,y)[l(hR′(z), y)] (from above)=EPR(z,y)[C(y)l(hR(z), y)] (Lemma 1) .A.3 Distribution mismatch between R′′′andT(Sec. 3.2)Lemma 4. LetUandQbe two distributions on Z × Y with the same prior PU(y=i) =PQ(y=i) =P(y=i). With the base distance ddefined as in Eq. 2, we have Wd(PU, PQ) =PyP(y)W∥·∥2(PU(z|y), PQ(z|y)).18Proof. Letω∗ydenote the optimal coupling for the conditional distributions (PU(z|y), PQ(z|y))fory∈ Y andπ∗denote the the optimal coupling for the joint distributions (PU(z, y), PQ(z, y)).Then, under the definition of our base distance d,π∗((z, y),(z′, y′)) = 0 when y̸=y′i.e. nomass from the distribution Ubelonging to class ycan be moved to the classes y′̸=yof thedistribution Qwhen the class priors of UandQare the same. Moreover, sincePij(ω∗y)ij= 1andP{i,j:yi=y′j=k}π∗ij=P(y=k)fork∈ Y we have π∗((z, y),(z′, y′)) = ω∗y(z, z′)P(y)1y=y′forevery y, y′∈ Y.Then, we can show that the total Wasserstein distance between the joint distributions can be expressedas the sum of conditional Wasserstein distances, as followsWd(PU(z, y), PQ(z, y))=Xy,y′Zπ∗((z, y),(z′, y′))d((z, y),(z′, y′))dzdz′=Xy,y′Zπ∗((z, y),(z′, y′))(∥z−z′∥2+∞ ·1y̸=y′)dzdz′=Xy,y′Zω∗y(z, z′)P(y)1y=y′(∥z−z′∥2+∞ ·1y̸=y′)dzdz′=Xy,y′Zω∗y(z, z′)P(y)1y=y′∥z−z′∥2dzdz′(since 1 y=y′·1y̸=y′= 0)=Xy,y′P(y)1y=y′Zω∗y(z, z′)∥z−z′∥2dzdz′=XyP(y)Zω∗y(z, z′)∥z−z′∥2dzdz′=XyP(y)W∥·∥2(PU(z|y), PQ(z|y)).Theorem 2. Let the distributions TandR′′′be defined on the same domain Z × Y and assumption 1holds, then EPT(z,y)[l(h(z), y)]−EPR′′′(z,y)[l(h(z), y)]≤τ Wd(PR′′′, PT)|{z }Distribution mismatch,withdas in Eq. 2.Proof.EPT(z,y)[l(h(z), y)]−EPR′′′(z,y)[l(h(z), y)]=EPT(y)EPT(z|y)[l(h(z), y)]−EPR′′′(y)EPR′′′(z|y)[l(h(z), y)]=EPT(y)[EPT(z|y)[l(h(z), y)]−EPR′′′(z|y)[l(h(z), y)]] (since PT(y) =PR′′′(y))≤EPT(y)[ supl′◦h′∈τ−LipschitzEPT(z|y)[l′(h′(z), y)]−EPR′′′(z|y)[l′(h′(z), y)]]=EPT(y)[τ W∥·∥2(PT(z|y), PR′′′(z|y))] (Kantorovich −Rubinstein duality)=τ Wd(PR′′′, PT) (Lemma 4) .A.4 Final bound (Sec. 3.3)Theorem 3. Letlbe the cross entropy loss then under the assumptions of Theorems 1 and 2 we have,EPT(z,y)[l(hT(z), y)]≤EPR(z,y)[C(y)l(hR(z), y)]| {z }Re-weighted reference task loss+H(YR′′|YR′)|{z}Label mismatch+τ Wd(PR′′′, PT)|{z }Distribution mismatch.19Proof. Letl◦hT,l◦hR, and l◦hR′′′beτ−Lipschitz (from Assumption 1).EPT(z,y)[l(hT(z), y)]≤EPT(z,y)[l(hR′′′(z), y)] (Optimality difference)≤EPR′′′(z,y)[l(hR′′′(z), y)] +τ Wd(PR′′′, PT) (Theorem 2)≤EPR(z,y)[C(y)l(hR(z), y)] +H(YR′′|YR′) +τ Wd(PR′′′, PT) (Theorem 1) .In our experiments, we enforce the τ−Lipschitz constraint for l◦hRandl◦hTand verify that theLipschitz constant of l◦hR′′′remains close to τwithin tolerance.A.5 Extension to non-linear classifiers and non-linear transformationsTo extend our analysis to non-linear classifiers/transformations, we allow A:Z → Z to be a non-linear map and the classifiers h∈ H non_linear to be also non-linear (such as multi-layer perceptron).In addition to the Assumption 1 (1), which requires l◦hR′′′to be τ−Lipschitz, we also require thathR′′′belongs to the same class Hnon_linear ashTandhRfor any A. This holds for example when his linear and Ais also linear or when his a multilayer perceptron and Ais linear. With this additionalassumption, all of our proofs work for any linear/non-linear transformation of the feature space,without any change. Thus, Theorem 1, Theorem 2 and Theorem 3 hold for non-linear classifiers aswell. With these extensions, our bounds can be used to explain transferability even with non-linearclassifiers.B Additional related workDistributional divergence-based analyses of learning with distribution shifts (under same featureand label sets): Here we review some of the previous works that analyzed the problem of learningunder distribution shifts in terms of distributional divergences such as the Wasserstein distance. Theseanalyses apply when the feature and label spaces remain the same between the original and the shifteddistribution.Early works [ 8,52,36] showed that the performance on a shifted distribution (target domain) canbe estimated in terms of the performance of the source domain and the distance between the twodomains’ marginal distributions and labeling functions. Specifically, [8] showed that thatET(h, fT)≤ ES(h, fS) +d1(PS, PT) + min {EPS[|fS(z)−fT(z)|],EDT[|fS(z)−fT(z)|]},where d1denotes the total variation distance, f:Z → [0,1]denotes the labeling function, h:Z → { 0,1}denotes the hypothesis and EP(h, f) :=Ez∼P[|h(z)−f(z)|]denote the risk of thehypothesis h. A follow up work [ 52], showed a similar result using type-1 Wasserstein distance forallK−Lipschitz continuous hypotheses i.e.,ET(h, fT)≤ ES(h, fS) + 2K·W1(PS, PT) +λ,where λis the combined error of the ideal hypothesis h∗that minimizes the combined errorES(h, fS) +ET(h, fT). Another recent work [ 32] used a target transformation-based approachand Wasserstein distance to quantify learning in the presence of data and label shifts. Other works[31,50] also presented an analysis based on Wasserstein distance to understand how the accuracyof smoothed classifies and robustness change in the presence of distribution shifts. Compared tothese works the bound proposed in Theorem 2 considers cross-entropy loss (which is a popularchoice of the loss function in the classification setting) and uses a joint feature and labels Wassersteindistance rather than only marginal Wasserstein distance. These differences make the bound proposedin Theorem 2 useful in the analysis of transfer learning than those proposed in previous works whenwe have access to labeled target domain data.Comparison with [ 56]:The closest work to ours is that of [ 56], which showed that transferability toa target task can be related to transferability to another task (source task in their work) and the labelmismatch between the two tasks. However, the bound is proposed in a restrictive setting when boththe tasks have the same features but different labels (i.e. same images labeled differently between thetwo tasks). In this setting, [ 56], showed that transferability is upper bounded by loss of the sourceclassifier on the source task and the conditional entropy (CE) of the label sets of the two tasks. We20significantly extend this analysis to general tasks which is the most commonly used setting in practice(e.g., our analysis allows us to study transfer learning from ImageNet with 1000 classes to CIFAR-100with 100 unrelated classes, where both tasks have different images). In this setting, our main result inTheorem 3, shows that transferability involves additional terms such as the distribution mismatchterm (in the form of Wasserstein distance), the prior mismatch term (in the form of re-weightedsource loss) and the conditional entropy between the label sets. Moreover, the bound proposed by[56] is a special case of our bound with Cbeing the vector of all ones (no prior change) and AbeingIdentity (data distributions of two tasks are the same).Comparison with other transferability estimation and model selection methods: The problemof transferability estimation has gained a lot of attention recently, especially due to the availability ofa larger number of pre-trained models. Earlier works used models end-to-end fine-tuned on targettasks to evaluate transferability via task-relatedness estimated using the actual target loss [ 62] and theFisher information matrix [ 1]. However, the requirement of models trained on target tasks limits theirpractical utility. Other works [ 22,21,54] used the representation of the data from the tasks fromthese end-to-end fine-tuned models and developed similarity scores that achieved high correlationwith transferability. However, these approaches are not practical since they require models end-to-endfine-tuned on the target task. Our work does not depend on models trained on target tasks as shownin our analysis (Theorem 3) making it both theoretically and practically sound.Another line of work [ 5,40,29,61,55,51,19] focuses on the problem of pre-trained model selectionwhere the goal is to find the best pre-trained classifier from a model zoo that will achieve the highesttransferability to a particular downstream task. Thus, the main challenge of this problem is to be ableto estimate transferability in a way that is more efficient than fine-tuning the pre-trained models onthe downstream tasks. To this end, recent works have proposed several scores that are correlated withthe accuracy of the models after fine-tuning them on the target task, which we refer to as score-basedtransferability estimation (SbTE) methods, in our work.However, unlike our work, the goal of SbTE works is not to propose a universal bound or identifyterms that provably govern transferability. Moreover, while the scores proposed in SbTE workscorrelate well with transferability, they are only meaningful in a relative sense. Concretely, a scoreof 1 (e.g., for LogMe [46]) on a CIFAR-100 task for a particular model does not indicate whethertransferability is good or bad and requires comparison with scores of other pre-trained models onthe same target task. On the other hand, our upper bound directly approximates transferability, e.g.,an upper bound of 1 on the CIFAR-100 task for a model implies that transfer learning will incur anaverage cross-entropy loss of less than 1 implying a highly accurate transfer. Our results in Figs. 3,and 9 attest that our upper bound is indeed a good estimate of the transferability.Another disadvantage of the scores proposed in these works is that they cannot be compared acrosstarget tasks, unlike our upper bound. As observed from Fig. 4 of LogMe [46], scores for CIFAR-10are lower than scores for CIFAR-100 on the same pre-trained model, but, the transferability toCIFAR-10 is better than that to CIFAR-100. On the other hand, from our Fig. 9, the upper bounds onCIFAR-10 are lower than those of CIFAR-100 implying better transferability of classifiers pre-trainedon ImageNet to CIFAR-10. Thus, our work is more suitable for estimating the absolute performanceon various target tasks given a particular pre-trained classifier.Thus, while the goal of our work differs fundamentally from SbTE approaches the problem addressedin this work is of significant practical importance. In Sec. 4.3, we show that task-relatedness estimatedvia Alg. 1 achieves competitive performance compared to popular SbTE methods on this problem.Comparison with task transfer learning approaches: This approach focuses on explaining thetransfer performance based on the relationship between tasks. For instance, works such as [ 28,57,24]study the problem of few-shot learning (FSL) where a model is trained on data from related trainingtasks and is adapted to an unseen task using only a few samples. Different from these works, wefocus on the setting where a pre-trained encoder trained on some pre-training dataset is adapted withenough samples/gradient steps to a downstream task. This downstream target task may or may nothave any relation to the pre-training task unlike [28, 57, 24].Concretely, [ 28] proposed a model-agnostic metric called Task Attribute Distance to gauge thesuccess of transfer. Our work, on the other hand, defines task-relatedness based on the similarity ofthe representations of reference and target tasks in the representations of the pre-trained model (and ismodel dependent) rather than relying on the attribute information, which may not be available in TL21setting. [ 57] analyzes the sample complexity for learning a model shared across tasks and adapting itto a new target task and shows task diversity to be a crucial component for the success of transfer inthe FSL setting. Our work on the other hand does not assume access to shared tasks or restrict thenumber of samples required for fine-tuning on the target task. Moreover, their notion of task diversityrequires access to a set of training tasks that may not be available in the TL setting, making our notionof task-relatedness more practical for TL. Lastly, [ 24] proposes a notion of task-relatedness for theFSL setting, allowing to utilize all the data from available training tasks to help learn a model ona new task with a few gradient steps. This notion is model-agnostic and defined over the samplespace ( X × Y ) unlike our measure which is defined in the representation space of the model whosetransferability needs to be evaluated.Thus while task-relatedness is at the core TL, the notions proposed by [ 28,57,24] are suitable fortask transfer setting where as our notion is more suitable for the inductive TL setting.C Additional experimentsC.1 Small-scale experiment to evaluate Alg. 11)Ouralgorithm produces trans formations thatachieve abettervalue forthebound compared tonaive trans formations. 2)Havingthesame numberofclasses inthereference taskasinthetargettaskleads tothebestvalue ofthebound. 3)Itisonly marginally bettertohave semantically relatedclasses inthereference task.We evaluate Alg. 1 for computing task-relatedness and predicting the transferability of a ResNet-18model to CIFAR-10 in two settings. In the first setting, we consider a reference task that comprisesdata from 10 classes chosen at random and 10 classes that are semantically related to the labelsof CIFAR10 from ImageNet [ 17] (see App. D) whereas in the setting we select reference taskwith 20 classes. We compare the cross-entropy loss on CIFAR10 obtained after linear fine-tuning(transferability) with our bound, by using various transformations including those learned via Alg. 1.Using 10 reference classesUsing 20 reference classesFigure 6: Transformations optimizedusing Eq. 4 produce a better task-relatedness than obtained by naively cho-sen transformations, primarily due to de-creased distribution mismatch. The pres-ence of the same number of classes in thereference as those in the target achievesthe smallest task-relatedness value. Se-mantically related classes in the refer-ence task help but only marginally.We test 3 different cases. The first is FixedAll: whereall transformations are fixed ( Ais fixed to the Identitymatrix, Bis fixed to a random permutation matrix in thesetting with 10 classes, and to a matrix such that tworeference classes match a single target class in the set-ting with 20 classes, Dis fixed to the reference prior).The second is LearnedA: where we use Alg. 1 to learnonlyAwhile B, D are fixed as in FixedAll, and lastlyLearnedAll: where all the transformations are learned viaAlg. 1. Our results in Fig. 6, show that in both settingsusing FixedAll, our bound is marginally better when thereference task has classes semantically related to the targettask. In all cases, the bound becomes significantly betterin the LearnedA setting due to learning the transformationAthat transforms the reference distribution to better matchthe target and decrease the distribution mismatch term ofthe bound. Lastly, by learning all the transformations, inthe LearnedAll setting, we achieve the best value for thebound, regardless of whether there are randomly chosenor semantically related classes in the reference task. More-over, we see that the value of the bound estimated whenthe reference task has 10 source classes is better in allcases. This is due to smaller re-weighted reference losssince learning a 20-way classifier is more challenging thanlearning a 10-way classifier, especially with the Lipschitzconstraint. In the setting with 20 classes, Alg. 1, prefersto retain data from 10 of the 20 reference task classes, Fig. 7 (right), reducing the re-weightedreference loss, makes Bsparse, reducing the label mismatch term, and aligns the reference and targetdistributions, via A, reducing the distribution mismatch term.22Overall, our results show that 1) learning Dprefers to retain the data from the same number ofreference classes as those present in the target, 2) the Bmatrix eventually becomes sparse, making thelabel mismatch term zero, and 3) there is only a small difference in task-relatedness between LearnedAand LearnedAll settings. Based on these insights, we use Alg. 1 in the LearnedA setting, with thereference task containing the same number of randomly sampled classes from ImageNet as the samenumber of classes in the target task (since it may not always possible to find semantically similarclasses for all target tasks) and fix Bto be a random permutation matrix, for all other experiments inthe paper. We select classes for the reference task from ImageNet due to its diversity and the fact thatit has a large number of classes. However, any dataset can be used to define the reference task (e.g.,we used Digits datasets for the reference task in Sec. 4.2).Next, we evaluate the effect of knowing the exact matching between the labels of the reference andthe target tasks in comparison to a random permutation. We use MNIST as the reference task andUSPS as the target task (and vice-versa). We compare our results in a setting where only Ais learnedandBis set to an identity matrix and when Bis set to a random permutation matrix. Note that theidentity matrix corresponds to the correct mapping between the classes of MNIST and USPS tasks(both contain digits from 0 to 9).We find that the upper bound obtained when Bis fixed to identity is only marginally better than thecase when Bis a random permutation. Specifically, the difference between the bound when Bisfixed to a random permutation and when Bis an identity matrix is 0.10 for the MNIST →USPS taskand 0.17 for the USPS →MNIST task. The primary reason for the decrease in the upper bound comesfrom the reduced distribution mismatch term. While the upper bound improves slightly when theideal matching between the labels is known, such a mapping may not be known when the labels ofthe tasks are not related such as for FMNIST and MNIST. Thus, fixing Bto a random permutationmatrix yields a reliable estimate of transferability in most cases.C.1.1 Visualization of the transformed data via t-SNE for various settings in Sec. C.1Here we use the setting considered in App. C.1 where we consider 20 randomly selected classesfrom ImageNet as the reference task and consider the transfer to CIFAR-10. We plot the results ofusing different transformations using t-SNE to show how various transformations affect the upperbound in Theorem 3. Our results in Fig. 7(left) show that when no transformations are learned(FixedAll), the 20 random reference task classes do not overlap with the 10 target classes leadingto an increased Wasserstein distance which in turn leads to a larger upper bound. By learning thetransformation A(LearnedA), Fig. 7(center) shows a significantly better alignment between theclasses of the reference and target which leads to a decreased Wasserstein distance and hence a tighterupper bound. Moreover, by learning all the transformations (LearnedAll), Fig. 7(right) shows thatnot only do the distributions align well but also the prior of the reference is changed to only keep 10reference classes to match the prior of the target distribution providing a further improvement in theupper bound. This clearly shows the effectiveness of our proposed optimization algorithm in learningvarious transformations to minimize the upper bound.C.1.2 Effectiveness of minimizing the upper bound in Theorem 3 via solving Eq. 4In Fig. 8, we show how the upper bound changes as the optimization progresses to transform thereference task (ImageNet) into four target tasks with the ResNet-18 model. Similar to experimentsin Sec. 4.1 of the paper we optimize over the transformation Awhile BandDare fixed to arandom permutation matrix and the reference prior. After about 600 epochs the optimization problemconverges to a local minima.C.2 Additional results for the impact of the reference task on task-relatedness (Sec. 4.2)C.2.1 Additional results for image classificationHere we provide details of the experiment presented in Sec. 4.2 about the effect of the reference taskon task-relatedness and transferability. Similar to the results presented in Fig. 4 of the main paper, theresults in Fig. 10 show that when the reference and the target tasks are related then task-relatedness isgood as in the case when the reference task is MNIST and target is USPS or vice versa achievinga smaller gap to transferability. When the target tasks are unrelated to the reference task data thenboth the transferability and task-relatedness are low. E.g., when the reference task is MNIST and the23TargetTransformed ReferenceTargetTransformed ReferenceTargetTransformed ReferenceFigure 7: (Best viewed in color.) t-SNE visualizations of the effect of various transformations onthe bound in Theorem 3 when data from 20 randomly selected classes from ImageNet are used totransfer to CIFAR-10. When all transformations are fixed (FixedAll, left) the distance between thedistribution R′′′(transformed reference) and Tis high explaining the large upper bound. Learningjust the transformation Ausing the algorithm proposed in Sec. 3.4 significantly reduces the distancebetween R′′′andTleading to a tighter upper bound (center). Learning all the transformations furtherimproves the matching (right). Especially, learning BandDchange the class priors of the referenceso that the same number of classes from the reference are used for matching as those present in thetarget. This is evident from the right plot where only 10 unique reference task clusters are visiblecompared to 20 in the center plot, with fixed D. Moreover, the zoomed-in portion shows that for thecenter figure two classes from the reference task (green 2 and 3) match with class 1 (blue) of thetarget whereas a single class from the reference task (green 18) matches class 1 (blue) of the target inthe right figure.Figure 8: Reduction of the proposed upper bound is shown as transformations are learned bysolving the optimization problem in Eq. 3. After 600 epochs, the upper bound stabilizes showing theconvergence of the optimization problem. Each subplot shows the effect of learning the transformationparameters for the transfer learning task with ImageNet as the reference task and ResNet-18 (trainedin a supervised way) for different target tasks. The solid line is the average after 5 random restartsand the shaded portion shows their standard deviation.target is FMNIST or vice-versa. Task-relatedness is also correlated to transferability, i.e., a modeltrained on MNIST achieves better transferability to USPS than FMNIST.C.2.2 Results for NLP sentence classification taskIn this section, we use sentence classification NLP task to demonstrate further the effect of thereference task on task-relatedness. For this experiment, we first fine-tune the entire DistilRoBERTa[34] model distilled on OpenAI’s WebText dataset, using a subsample of 10,000 points from theDBPedia dataset. We then use these fine-tuned models to evaluate the transferability to AG news,SST-5, and Yelp datasets. The results in Fig. 12 show that transferability on AG News is the smallestamong the three datasets. This coincides with the task-relatedness value obtained after learning the24024LossRN18024RN50024RN101024RN152024CLIP RN50024LossADV(0.1) RN18024ADV(0.1) RN50024ADV(1) RN18024ADV(1) RN50024CLIP Vit-B32CIFAR10CIFAR100-SPetsDTDCIFAR100-MCIFAR100Aircraft024LossSIMCLR RN50CIFAR10CIFAR100-SPetsDTDCIFAR100-MCIFAR100Aircraft024MOCO RN50CIFAR10CIFAR100-SPetsDTDCIFAR100-MCIFAR100Aircraft024SWAV RN50CIFAR10CIFAR100-SPetsDTDCIFAR100-MCIFAR100Aircraft024MAE ViT-B16CIFAR10CIFAR100-SPetsDTDCIFAR100-MCIFAR100Aircraft024CLIP ViT-B16Target Loss Reweighted Reference Loss Label Mismatch Distribution Mismatch(a) Image classification tasksAG_NewsYELP-5SST-5012DistilBERTAG_NewsYELP-5SST-5012DistilRoBERTa(b) Sentence classification tasksFigure 9: Full results for comparison of transferability vs. task-relatedness for large pre-trainedmodels on image and sentence classification tasks. Task-relatedness consistently achieves a smallgap to transferability. We denote cross-entropy loss on the y-axis. Plot-title denotes the pre-trainedmodel and the x-axis denotes the target tasks.USPSFMNIST024LossMNISTUSPSMNIST024FMNISTMNISTFMNIST024USPSAG_NewsYELP-5SST-5012DBPediaTarget Loss Reweighted Reference Loss Label Mismatch Distribution MismatchFigure 10: Decomposition of task-relatedness into its three components illustrates that our task-relatedness (specifically the distribution mismatch term) explains the difference in transferability.Similar tasks such as USPS and MNIST have the highest transferability and also have the highesttask-relatedness.transformations which explains why transfer from DBPedia to AG News is more successful comparedto other target tasks. This observation is reasonable, especially considering that both DBPedia andAG News have structured information. Moreover, since DBPedia is related to Wikipedia, the termsand entities appearing in AG News are more related to those appearing in DBPedia in comparison toterms/entities appearing in SST-5 and Yelp which consist of movie reviews and reviews collectedfrom Yelp.For our experiments, in this section, we follow a similar setting of fixing Bto be a random permutationmatrix, Cto the prior of the reference task, and only learn the transformation A. We sample 10,000points from DBPedia belonging to the same number of classes as those present in the target task (e.g.,for AG News we sample data from 4 randomly selected classes of DBPedia) and use this data as thereference task data to train hRwith gradient norm penalty ( τ=0.02). All experiments are run for 3random seeds and average results are reported in Fig. 12.2575 80 85 90 953.03.54.04.55.0ResNet-50 (rho=-0.63)Accuracy (%)Task-relatedness80 85 90 953.03.54.04.55.0ResNet-101 (rho=-0.65)Accuracy (%)Task-relatedness75 80 85 90 953.54.04.55.0ResNet-152 (rho=-0.59)Accuracy (%)Task-relatedness75 80 85 90 953.03.54.04.55.0Adv ResNet-50 (eps=0.1, rho=-0.62)Accuracy (%)Task-relatedness75 80 85 90 953.03.54.04.55.0Adv ResNet-50 (eps=1, rho=-0.57)Accuracy (%)Task-relatednessFigure 11: Task-relatedness is highly (negatively) correlated (Pearson correlation coefficient in thesubplot title) with the accuracy of the models after end-to-end fine-tuning. Each subplot showstransfer learning with various target tasks for a specific model architecture and training method.AG News SST 5 Yelp 5Target TasksDBPediaReference Task1.43 1.72 1.65Task-relatedness1.451.501.551.601.651.70AG News SST 5 Yelp 5Target TasksDBPediaReference Task1.18 1.56 1.55Transferability1.201.251.301.351.401.451.501.55Figure 12: Task-relatedness (Left) and transferability (Right) are highly correlated across variousreference-target pairs. A target task related to the reference task (DBPedia) such as AG News achievesbetter transferability (with DistillRoBERTa model) and task-relatedness compared to less-relatedtasks such as SST-5 and Yelp5.C.3 Lipschitz constrained linear fine-tuningC.3.1 Implementing softmax classification with τ−Lipschitz lossTo use the bound Theorem. 3, it is required that the loss be τ−Lipschitz continuous w.r.t. zinthe input domain Z. To enforce this, while learning the weights of the softmax classifier (linearfine-tuning) for the reference task or the target, we add the gradient norm penalty as used in previousworks [52, 4] and solve the following optimization problemminh1NXil(h(zi), yi) +ρmaxymax{0,∥∇zl(h(zi), y)∥2−τ}2(ρ≈104)where l(h(z), y) =−wTyz+ logPjewTjzis the cross-entropy loss.C.3.2 Trade-off between empirical and predicted transferabilityConstraining the Lipschitz coefficient of the classifier increases both the target and the reference taskcross-entropy loss since the hypothesis set is being restricted. The smaller the τis, the larger the lossbecomes. On the other hand, the smaller τmakes the distribution mismatch term in Theorem 3 alsosmaller. Since the bound is the sum of the reference task loss and the distribution mismatch (and labelmismatch), there is a trade-off determined by the value of τ. We illustrate the effect of the values of τon the empirical and predicted transferability. As mentioned previously, we train both the classifierfor the reference task hRand the target hTwith an additional penalty on the gradient norm to makethem τ−Lipschitz. In Fig. 13, we present results of varying the value of τfor the transfer to the Petsdataset with ImageNet as the reference task. For this experiment, we selected 37 random classesfrom ImageNet and only learned the transform Aby keeping Bfixed to a random permutation andCfixed to the uniform prior over reference task classes. We observe that the performance of linearfine-tuning degrades as we decrease the value of τbut explainability through the bound improvessince the distribution mismatch term (dependent on τ) decreases in the bound. However, making τtoo small is not preferable since it leads to an increase in the first term of the bound (re-weightedreference task loss) increasing the bound overall. Moreover, it also leads to a degradation in theaccuracy after linear fine-tuning. For our experiments, we use τ= 0.02since it doesn’t decreasethe accuracy of fine-tuning significantly and leads to a small gap between empirical and predictedtransferability.260.01 0.02 0.050.1 0.20510LossRN180.01 0.02 0.050.1 0.20510RN50Target Loss Reweighted Reference Loss Label Mismatch Distribution MismatchFigure 13: Trade-off between the cross-entropy loss (y-axis) after linear fine-tuning and the upperbound in Theorem 3 as a function of τ(x-axis), for ResNet18 and ResNet50 models pre-trained onImageNet and linearly fine-tuned on the Pets dataset. Increasing the value of τleads to a decrease inthe cross-entropy loss after fine-tuning but increases in the proposed bound mainly due to the τ·Wdterm.D Details of the experimentsAll codes are written in Python using Tensorflow/Pytorch and were run on an Intel(R) Xeon(R)Platinum 8358 CPU with 200 GB of RAM and an Nvidia A10 GPU. Implementation and hyperpa-rameters are described below. Our codes can be found in the supplementary material. We report anaverage of three independent runs for experiments in Sec. 4.2 and 4.3.D.1 Dataset detailsIn our work, we used the standard image classification benchmark datasets along with standardnatural language processing datasets1.Aircraft [35]: consists of 10,000 aircraft images belonging to 100 classes.CIFAR-10/100 [ 30]:These datasets contain 60,000 images belonging to 10/100 categories. Addi-tionally, we created two subsets of CIFAR100 with the first 25 (small CIFAR-100-S) and 50 (mediumCIFAR-100-M) classes.DTD[13]: consists of 5,640 textural images belonging to 47 categories.Fashion MNIST [60]: consists of 70,000 grayscale images belonging to 10 categories.Pets [43]: consists of 7049 images of Cats and Dogs spread across 37 categories.ImageNet [17]: consists of 1.1 million images belonging to 1000 categories.Yelp [63]: consists of 650,000 training and 50,000 test examples belonging to 5 classes.Stanford Sentiment Treebank (SST-5) [ 63]:consists of 8,544 training and 2,210 test samplesbelonging to 5 classes.AG News [63]: consists of 120,000 training and 7,600 test examples belonging to 4 classesDBPedia [63]: consists of 560,000 training and 70,000 test examples belonging to 14 classesD.2 Semantically similar classes for CIFAR-10 from ImageNetFor our experiments with CIFAR-10 in Sec. C.1, we selected the following semantically similarclasses from ImageNet, { airliner, minivan, cock, tabby cat, ox, chihuahua, bull-frog, sorrel, submarine, fire engine }.D.3 Additional experimental detailsDetails of the optimization problem in Eq. 4 and Alg. 1. In Step 5 of the algorithm, we use thenetwork simplex flow algorithm from POT [ 23] to compute the optimal coupling. Since computingthe Wasserstein distance over the entire dataset can be slow, we follow [ 16] and compute the coupling1All NLP datasets and models are obtained from https://huggingface.co/ .27over batches. Note that the base distance defined in Eq. 2 is non-differentiable. Thus, we usea differentiable approximation ̃d((z, y),(z′, y′)) := dfeatures (z, z′) +ν· ∥e(y)−e(y′)∥2(withν= 108) where (z, y)and(z′, y′)are samples from the domains R′′′andTande(·)denotes theone-hot embedding of the labels in Step 5/6 . The first three terms in Step 6 of our algorithmcorrespond to the terms in the objective of Eq. 4 while the two additional terms are added to penalizethe constraints of class prior matching PT(y) =BD and invertibility of the matrix A, respectivelyas required by Theorem 3. We use the softmax operation to ensure BandDare a valid probabilitymatrix and vector.In our experiments, in Sec. 4.1, we used pre-trained models available from Pytorch for ResNet18/50,along with publicly available pre-trained models provided in the official repositories of each trainingmethod. For each experiment, we subsample data from the ImageNet dataset belonging to thesame number of classes as those present in the target dataset and use this data to train the linearlayer on top of the representations extracted from the pre-trained model along with a gradient normpenalty (Reference task classifier). To speed up the experiments, we use only 10,000 points from thesubsample of ImageNet for training the linear classifier and computing the transfer. For evaluation,we use a similar subsample of the validation dataset of ImageNet containing all the samples belongingto the subsampled classes. Fine-tuning on this dataset takes about 0.05 seconds per epoch for the taskof transfer from ImageNet to Pets with the ResNet-18 model (we run the fine-tuning for a total of5000 epochs).Along with training the linear classifiers with a gradient norm penalty (with τ= 0.02), we standardizethe features extracted from the pre-trained models to remove their mean (along each axis) and makethem have a unit standard deviation (along each axis). While standardizing the features do not have asignificant impact on the loss of the classifiers, including it makes it easier to match the distributionsof the reference task and target data after transformations. Since our optimization problem transformsthe reference task distribution to match the distribution of the target by solving the optimizationproblem in Eq. 4 by working on mini-batches, the size of the batch must be greater than the dimensionof the representation space of the pre-trained encoder. E.g., for ResNet18 models which have arepresentation dimension of 512, we use a batch size of 1000 and for ResNet50 models which have arepresentation dimension of 2048, we use a batch size of 2500. Having a smaller batch size than thedimension could lead to a noisy gradient since for that batch the transformation can achieve a perfectmatching, which may not generalize to data from other batches or unseen test data.While computing the transformations, we apply the same augmentation (re-sizing and center crop-ping)/normalization to the training data as those applied to the test data. Along with this, we extractthe features of the training and test data from the pre-trained model once and use these to trainthe linear layer. We note that this is done to save the computation time and better results could beobtained by allowing for extracting features after data augmentation for every batch.Finally, for our experiments in Sec. 4.2, the encoders are trained end-to-end on the reference task.This is in contrast to our other experiments where the encoders are pre-trained and data from thereference task is only used for linear fine-tuning. Using these models, task-relatedness is evaluated byfine-tuning a linear layer using the data from the target task as well as the transformations computedby solving Eq. 4. We used τ= 0.2here. We run the experiments with 3 random seeds and report theaverage results.For our experiments in Sec. 4.3, we used the official code from [ 61] to compute the scores for NCE,Leep, and LogMe along with the official code of [ 51] for SFDA. For PACTran [ 19], we also use theirofficial code with the PACTran-Gaussian method with N/K = 100 , β= 10N, σ2=D/100whereNdenotes the number of samples and Kdenotes the number of classes. This setting is similar tothe PACTran-Gauss fixsetting used in their work with the difference that we use N/K = 100 touse a sufficiently large number of samples, especially considering that all our other results for SbTEmethods are computed on the full training set. For OTCE, we follow the official code and computethe recommended OT-based NCE score and OTCE score ( λ1=−0.0001 andλ2=−1) using 4000randomly selected training samples from the two tasks. For the source task, we subsample the samenumber of classes as the target task and use. For the H-score, we use the official code to compute theH-score [6].For estimating transferability by solving Eq. 5, we set λ=0.01 for all the experiments in Sec. 4.3.28NeurIPS Paper Checklist1.ClaimsQuestion: Do the main claims made in the abstract and introduction accurately reflect the paper’scontributions and scope?Answer: [Yes]Justification: The abstract summarizes the paper’s contributions.Guidelines:•The answer NA means that the abstract and introduction do not include the claims made inthe paper.• The abstract and/or introduction should clearly state the claims made, including the contri-butions made in the paper and important assumptions and limitations. A No or NA answerto this question will not be perceived well by the reviewers.•The claims made should match theoretical and experimental results, and reflect how muchthe results can be expected to generalize to other settings.•It is fine to include aspirational goals as motivation as long as it is clear that these goals arenot attained by the paper.2.LimitationsQuestion: Does the paper discuss the limitations of the work performed by the authors?Answer: [Yes]Justification: Discussed in Sec. 5 and elaborated here.Here we present some of the potential limitations of our work and discuss the avenues for futurework. The analysis presented in the paper studies transferability using the popular cross-entropyloss similar to that analyzed by [ 56]. While this is important and practically useful, extendingthe analysis to other losses, primarily to the 0-1 loss, is an interesting future direction since inmost classification tasks, accuracy is the metric of primary interest.Another limitation of our work is the dependence of our bound on the Wasserstein distance.While it is a popular choice for analyzing performance in various distribution shift scenarios[52,16,50], it is difficult to estimate in practice due to its sensitivity to the number of samplesand the dimension of the representation space. An analysis based on a different and easy-to-compute divergence measure might be more amenable for making transferability estimation viatask-relatedness easier and faster.We emphasize that even though we present some of the limitations of our current work above,these are by no means weaknesses of our work, which non-trivially extends the current under-standing of the transfer learning setting through a rigorous analysis and achieves impressiveperformance on practical applications.Guidelines:•The answer NA means that the paper has no limitation while the answer No means that thepaper has limitations, but those are not discussed in the paper.• The authors are encouraged to create a separate "Limitations" section in their paper.•The paper should point out any strong assumptions and how robust the results are toviolations of these assumptions (e.g., independence assumptions, noiseless settings, modelwell-specification, asymptotic approximations only holding locally). The authors shouldreflect on how these assumptions might be violated in practice and what the implicationswould be.•The authors should reflect on the scope of the claims made, e.g., if the approach was onlytested on a few datasets or with a few runs. In general, empirical results often depend onimplicit assumptions, which should be articulated.•The authors should reflect on the factors that influence the performance of the approach. Forexample, a facial recognition algorithm may perform poorly when image resolution is lowor images are taken in low lighting. Or a speech-to-text system might not be used reliablyto provide closed captions for online lectures because it fails to handle technical jargon.•The authors should discuss the computational efficiency of the proposed algorithms andhow they scale with dataset size.29•If applicable, the authors should discuss possible limitations of their approach to addressproblems of privacy and fairness.•While the authors might fear that complete honesty about limitations might be used byreviewers as grounds for rejection, a worse outcome might be that reviewers discover limita-tions that aren’t acknowledged in the paper. The authors should use their best judgment andrecognize that individual actions in favor of transparency play an important role in devel-oping norms that preserve the integrity of the community. Reviewers will be specificallyinstructed to not penalize honesty concerning limitations.3.Theory Assumptions and ProofsQuestion: For each theoretical result, does the paper provide the full set of assumptions and acomplete (and correct) proof?Answer: [Yes]Justification: In App. A.Guidelines:• The answer NA means that the paper does not include theoretical results.•All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.• All assumptions should be clearly stated or referenced in the statement of any theorems.•The proofs can either appear in the main paper or the supplemental material, but if theyappear in the supplemental material, the authors are encouraged to provide a short proofsketch to provide intuition.•Inversely, any informal proof provided in the core of the paper should be complemented byformal proofs provided in appendix or supplemental material.• Theorems and Lemmas that the proof relies upon should be properly referenced.4.Experimental Result ReproducibilityQuestion: Does the paper fully disclose all the information needed to reproduce the mainexperimental results of the paper to the extent that it affects the main claims and/or conclusionsof the paper (regardless of whether the code and data are provided or not)?Answer: [Yes]Justification: In App. D.Guidelines:• The answer NA means that the paper does not include experiments.•If the paper includes experiments, a No answer to this question will not be perceived wellby the reviewers: Making the paper reproducible is important, regardless of whether thecode and data are provided or not.•If the contribution is a dataset and/or model, the authors should describe the steps taken tomake their results reproducible or verifiable.•Depending on the contribution, reproducibility can be accomplished in various ways. Forexample, if the contribution is a novel architecture, describing the architecture fully mightsuffice, or if the contribution is a specific model and empirical evaluation, it may be necessaryto either make it possible for others to replicate the model with the same dataset, or provideaccess to the model. In general. releasing code and data is often one good way to accomplishthis, but reproducibility can also be provided via detailed instructions for how to replicatethe results, access to a hosted model (e.g., in the case of a large language model), releasingof a model checkpoint, or other means that are appropriate to the research performed.•While NeurIPS does not require releasing code, the conference does require all submissionsto provide some reasonable avenue for reproducibility, which may depend on the nature ofthe contribution. For example(a)If the contribution is primarily a new algorithm, the paper should make it clear how toreproduce that algorithm.(b)If the contribution is primarily a new model architecture, the paper should describe thearchitecture clearly and fully.30(c)If the contribution is a new model (e.g., a large language model), then there shouldeither be a way to access this model for reproducing the results or a way to reproducethe model (e.g., with an open-source dataset or instructions for how to construct thedataset).(d)We recognize that reproducibility may be tricky in some cases, in which case authorsare welcome to describe the particular way they provide for reproducibility. In the caseof closed-source models, it may be that access to the model is limited in some way(e.g., to registered users), but it should be possible for other researchers to have somepath to reproducing or verifying the results.5.Open access to data and codeQuestion: Does the paper provide open access to the data and code, with sufficient instructionsto faithfully reproduce the main experimental results, as described in supplemental material?Answer: [Yes]Justification: Our codes can be found at https://github.com/akshaymehra24/TaskTransferAnalysis .Guidelines:• The answer NA means that paper does not include experiments requiring code.•Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•While we encourage the release of code and data, we understand that this might not bepossible, so “No” is an acceptable answer. Papers cannot be rejected simply for not includingcode, unless this is central to the contribution (e.g., for a new open-source benchmark).•The instructions should contain the exact command and environment needed to run toreproduce the results. See the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•The authors should provide instructions on data access and preparation, including how toaccess the raw data, preprocessed data, intermediate data, and generated data, etc.•The authors should provide scripts to reproduce all experimental results for the new proposedmethod and baselines. If only a subset of experiments are reproducible, they should statewhich ones are omitted from the script and why.•At submission time, to preserve anonymity, the authors should release anonymized versions(if applicable).•Providing as much information as possible in supplemental material (appended to the paper)is recommended, but including URLs to data and code is permitted.6.Experimental Setting/DetailsQuestion: Does the paper specify all the training and test details (e.g., data splits, hyperparame-ters, how they were chosen, type of optimizer, etc.) necessary to understand the results?Answer: [Yes]Justification: In App. D.Guidelines:• The answer NA means that the paper does not include experiments.•The experimental setting should be presented in the core of the paper to a level of detail thatis necessary to appreciate the results and make sense of them.•The full details can be provided either with the code, in appendix, or as supplementalmaterial.7.Experiment Statistical SignificanceQuestion: Does the paper report error bars suitably and correctly defined or other appropriateinformation about the statistical significance of the experiments?Answer: [Yes]Justification: An average of three independent runs are reported.Guidelines:31• The answer NA means that the paper does not include experiments.•The authors should answer "Yes" if the results are accompanied by error bars, confidenceintervals, or statistical significance tests, at least for the experiments that support the mainclaims of the paper.•The factors of variability that the error bars are capturing should be clearly stated (forexample, train/test split, initialization, random drawing of some parameter, or overall runwith given experimental conditions).•The method for calculating the error bars should be explained (closed form formula, call toa library function, bootstrap, etc.)• The assumptions made should be given (e.g., Normally distributed errors).•It should be clear whether the error bar is the standard deviation or the standard error of themean.•It is OK to report 1-sigma error bars, but one should state it. The authors should preferablyreport a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normalityof errors is not verified.•For asymmetric distributions, the authors should be careful not to show in tables or figuressymmetric error bars that would yield results that are out of range (e.g. negative error rates).•If error bars are reported in tables or plots, The authors should explain in the text how theywere calculated and reference the corresponding figures or tables in the text.8.Experiments Compute ResourcesQuestion: For each experiment, does the paper provide sufficient information on the computerresources (type of compute workers, memory, time of execution) needed to reproduce theexperiments?Answer: [Yes]Justification: In App. D.Guidelines:• The answer NA means that the paper does not include experiments.•The paper should indicate the type of compute workers CPU or GPU, internal cluster, orcloud provider, including relevant memory and storage.•The paper should provide the amount of compute required for each of the individualexperimental runs as well as estimate the total compute.•The paper should disclose whether the full research project required more compute than theexperiments reported in the paper (e.g., preliminary or failed experiments that didn’t makeit into the paper).9.Code Of EthicsQuestion: Does the research conducted in the paper conform, in every respect, with the NeurIPSCode of Ethics https://neurips.cc/public/EthicsGuidelines ?Answer: [Yes]Justification: There are no ethics concerns.Guidelines:• The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.•If the authors answer No, they should explain the special circumstances that require adeviation from the Code of Ethics.•The authors should make sure to preserve anonymity (e.g., if there is a special considerationdue to laws or regulations in their jurisdiction).10.Broader ImpactsQuestion: Does the paper discuss both potential positive societal impacts and negative societalimpacts of the work performed?Answer: [NA]Justification: NAGuidelines:32• The answer NA means that there is no societal impact of the work performed.•If the authors answer NA or No, they should explain why their work has no societal impactor why the paper does not address societal impact.•Examples of negative societal impacts include potential malicious or unintended uses(e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g.,deployment of technologies that could make decisions that unfairly impact specific groups),privacy considerations, and security considerations.•The conference expects that many papers will be foundational research and not tied toparticular applications, let alone deployments. However, if there is a direct path to anynegative applications, the authors should point it out. For example, it is legitimate to pointout that an improvement in the quality of generative models could be used to generatedeepfakes for disinformation. On the other hand, it is not needed to point out that a genericalgorithm for optimizing neural networks could enable people to train models that generateDeepfakes faster.•The authors should consider possible harms that could arise when the technology is beingused as intended and functioning correctly, harms that could arise when the technology isbeing used as intended but gives incorrect results, and harms following from (intentional orunintentional) misuse of the technology.•If there are negative societal impacts, the authors could also discuss possible mitigationstrategies (e.g., gated release of models, providing defenses in addition to attacks, mecha-nisms for monitoring misuse, mechanisms to monitor how a system learns from feedbackover time, improving the efficiency and accessibility of ML).11.SafeguardsQuestion: Does the paper describe safeguards that have been put in place for responsible releaseof data or models that have a high risk for misuse (e.g., pretrained language models, imagegenerators, or scraped datasets)?Answer: [NA]Justification: NAGuidelines:• The answer NA means that the paper poses no such risks.•Released models that have a high risk for misuse or dual-use should be released withnecessary safeguards to allow for controlled use of the model, for example by requiring thatusers adhere to usage guidelines or restrictions to access the model or implementing safetyfilters.•Datasets that have been scraped from the Internet could pose safety risks. The authorsshould describe how they avoided releasing unsafe images.•We recognize that providing effective safeguards is challenging, and many papers do notrequire this, but we encourage authors to take this into account and make a best faith effort.12.Licenses for existing assetsQuestion: Are the creators or original owners of assets (e.g., code, data, models), used in thepaper, properly credited and are the license and terms of use explicitly mentioned and properlyrespected?Answer: [NA]Justification: NAGuidelines:• The answer NA means that the paper does not use existing assets.• The authors should cite the original paper that produced the code package or dataset.•The authors should state which version of the asset is used and, if possible, include a URL.• The name of the license (e.g., CC-BY 4.0) should be included for each asset.•For scraped data from a particular source (e.g., website), the copyright and terms of serviceof that source should be provided.•If assets are released, the license, copyright information, and terms of use in the packageshould be provided. For popular datasets, paperswithcode.com/datasets has curatedlicenses for some datasets. Their licensing guide can help determine the license of a dataset.33•For existing datasets that are re-packaged, both the original license and the license of thederived asset (if it has changed) should be provided.•If this information is not available online, the authors are encouraged to reach out to theasset’s creators.13.New AssetsQuestion: Are new assets introduced in the paper well documented and is the documentationprovided alongside the assets?Answer: [NA]Justification: NAGuidelines:• The answer NA means that the paper does not release new assets.•Researchers should communicate the details of the dataset/code/model as part of their sub-missions via structured templates. This includes details about training, license, limitations,etc.• The paper should discuss whether and how consent was obtained from people whose assetis used.•At submission time, remember to anonymize your assets (if applicable). You can eithercreate an anonymized URL or include an anonymized zip file.14.Crowdsourcing and Research with Human SubjectsQuestion: For crowdsourcing experiments and research with human subjects, does the paperinclude the full text of instructions given to participants and screenshots, if applicable, as well asdetails about compensation (if any)?Answer: [NA]Justification: NAGuidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Including this information in the supplemental material is fine, but if the main contributionof the paper involves human subjects, then as much detail as possible should be included inthe main paper.•According to the NeurIPS Code of Ethics, workers involved in data collection, curation, orother labor should be paid at least the minimum wage in the country of the data collector.15.Institutional Review Board (IRB) Approvals or Equivalent for Research with HumanSubjectsQuestion: Does the paper describe potential risks incurred by study participants, whether suchrisks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (oran equivalent approval/review based on the requirements of your country or institution) wereobtained?Answer: [NA]Justification: NAGuidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Depending on the country in which research is conducted, IRB approval (or equivalent)may be required for any human subjects research. If you obtained IRB approval, you shouldclearly state this in the paper.•We recognize that the procedures for this may vary significantly between institutions andlocations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelinesfor their institution.•For initial submissions, do not include any information that would break anonymity (ifapplicable), such as the institution conducting the review.34 |
0NMzBwqaAJ | Not All Tokens Are What You Need for PretrainingZhenghao Lin⋆χφZhibin Gou⋆πφYeyun Gong⋄φXiao LiuφYelong ShenφRuochen XuφChen Lin⋄χρYujiu Yang⋄πJian JiaoφNan DuanφWeizhu ChenφχXiamen UniversityπTsinghua UniversityρShanghai AI LaboratoryφMicrosofthttps://aka.ms/rhoAbstractPrevious language model pre-training methods have uniformly applied a next-tokenprediction loss to all training tokens. Challenging this norm, we posit that “Not alltokens in a corpus are equally important for language model training” . Our initialanalysis examines token-level training dynamics of language model, revealingdistinct loss patterns for different tokens. Leveraging these insights, we introduce anew language model called RHO-1. Unlike traditional LMs that learn to predictevery next token in a corpus, RHO-1employs Selective Language Modeling (SLM),which selectively trains on useful tokens that aligned with the desired distribution.This approach involves scoring tokens using a reference model, and then trainingthe language model with a focused loss on tokens with higher scores. Whencontinual pretraining on 15B OpenWebMath corpus, RHO-1yields an absoluteimprovement in few-shot accuracy of up to 30% in 9 math tasks. After fine-tuning,RHO-1-1B and 7B achieved state-of-the-art results of 40.6% and 51.8% on MATHdataset, respectively — matching DeepSeekMath with only 3% of the pretrainingtokens. Furthermore, when continual pretraining on 80B general tokens, RHO-1achieves 6.8% average enhancement across 15 diverse tasks, increasing both dataefficiency and performance of the language model pre-training.0 5 10 15T okens (B)5101520Math Acc (%)16.3% better10x fasterAvg Few-shot Acc on 1B LMsDeepSeekMath-1B (150B T okens)Rho-1-1BBaseline0 5 10 15T okens (B)2025303540455016.4% better5x fasterAvg Few-shot Acc on 7B LMsDeepSeekMath-7B (500B T okens)Rho-1-7BBaselineFigure 1: We continual pretrain 1B and 7B LMs with 15B OpenWebMath tokens. RHO-1is trainedwith our proposed Selective Language Modeling (SLM), while baselines are trained using causallanguage modeling. SLM improves average few-shot accuracy on GSM8k and MATH by over 16%,achieving the baseline performance 5-10x faster.⋆Equal contribution. See author contributions for details. Work done during their internships at MicrosoftResearch Asia. B:[email protected] ;[email protected]⋄Correspondence authors.38th Conference on Neural Information Processing Systems (NeurIPS 2024).The farm has 35 hens <Apr12 1:24> and 12 pigs. ##davidjl123 says totaling 47 animals.x!x"x#x$x%x&x'EOSUndesired TokensDesired TokensCausal Language Modeling✘Remove loss ✓Keep loss x!x"x$x%x#x&x'x(Selective Language Modelingx!x"x$x%x#x&x'x(✘✘✘✓✓✓✓Noisy Pretraining Corpusx!x"x#x$x%x&x'EOS✓Figure 2: Upper: Even an extensively filtered pretraining corpus contains token-level noise. Left:Previous Causal Language Modeling (CLM) trains on all tokens. Right: Our proposed SelectiveLanguage Modeling (SLM) selectively applies loss on those useful and clean tokens.1 IntroductionScaling up model parameters and dataset size has consistently elevated the next-token predictionaccuracy in large language models, yielding significant advancements in artificial intelligence [Kaplanet al., 2020, Brown et al., 2020, OpenAI, 2023, Team et al., 2023]. However, training on all availabledata is not always optimal or feasible. As a result, the practice of data filtering has become crucial,using various heuristics and classifiers [Brown et al., 2020, Wenzek et al., 2019] to select trainingdocuments. These techniques significantly improve data quality and boost model performance.However, despite thorough document-level filtering, high-quality datasets still contain many noisytokens that can negatively affect training, as illustrated in Figure 2 (Upper). Removing such tokensmight alter the text’s meaning, while overly strict filtering could exclude useful data [Welbl et al.,2021, Muennighoff et al., 2024] and lead to biases [Dodge et al., 2021, Longpre et al., 2023].Furthermore, research indicates that the distribution of web data does not inherently align with theideal distribution for downstream applications [Tay et al., 2022, Wettig et al., 2023]. For example,common corpus at the token level may include undesirable content like hallucinations or highlyambiguous tokens that are hard to predict. Applying the same loss to all tokens can lead to inefficientcomputation on non-essential tokens, potentially restricting LLMs from achieving more advancedlevels of intelligence.To explore how language models learn at the token level, we initially examined training dynamics,particularly how the token-level loss evolves during usual pretraining. In §2.1, we evaluated themodel’s token perplexity at different checkpoints and categorized tokens into different types. Ourfindings reveal that significant loss reduction is limited to a select group of tokens. Many tokens are“easy tokens” that are already learned, and some are “hard tokens” that exhibit variable losses andresist convergence. These tokens can lead to numerous ineffective gradient updates.Based on these analyses, we introduce RHO-1models trained with a novel Selective LanguageModeling (SLM) objective3. As shown in Figure 2 (Right), this approach inputs the full sequenceinto the model and selectively removes the loss of undesired tokens. The detailed pipeline is depictedin Figure 4: First, SLM trains a reference language model on high-quality corpora. This modelestablishes utility metrics to score tokens according to the desired distribution, naturally filteringout unclean and irrelevant tokens. Second, SLM uses the reference model to score each token in acorpus using its loss (§2.2). Finally, we train a language model only on those tokens that exhibit ahigh excess loss between the reference and the training model, selectively learning the tokens thatbest benefit downstream applications (§2.2).We show through comprehensive experiments that SLM significantly enhances token efficiency duringtraining and improves performance on downstream tasks. Furthermore, our findings indicate that SLMeffectively identifies tokens relevant to the target distribution, resulting in improved perplexity scores3“Rho” denotes selective modeling of tokens with higher information “density ( ρ)”.20 5 10 15Trained T okens(B)01234Loss(a) Loss for different token typesHH (11%)LH (12%)HL (26%)LL (51%)0 5 10 15Trained T okens(B)0.00.10.20.3Loss(b) Example L L tokensLL T oken 1LL T oken 2LL T oken 30 5 10 15Trained T okens(B)1.52.02.53.03.5Loss(c) Example H H tokensHH T oken 1HH T oken 2HH T oken 3Figure 3: The loss of four categories of tokens during pretraining. (a) shows the loss of H →H,L→H, H→L, and L→L tokens during pretraining. (b) and (c) show three cases of fluctuating tokens’loss in L→L and H→H during pretraining, respectively.on benchmarks for models trained with the selected tokens. §3.2 shows the effectiveness of SLM onmath continual pretraining: both 1B and 7B RHO-1outperform CLM-trained baselines by over 16%on the GSM8k and MATH datasets. SLM reaches baseline accuracy up to 10x faster, as shown inFigure 1. Remarkably, RHO-1-7B matches the state-of-the-art performance of DeepSeekMath-7Busing only 15B tokens, compared to the 500B tokens required by DeepSeekMath. Upon fine-tuning,RHO-1-1B and 7B achieve 40.6% and 51.8% on MATH, respectively. Notably, RHO-1-1B is thefirst 1B LM to exceed 40% accuracy, nearing the early GPT-4’s CoT performance of 42.5%. §3.3confirms the efficacy of SLM in general continual pretraining: Training Tinyllama-1B on 80B tokenswith SLM improves 6.8% on average across 15 benchmarks, with gains over 10% in code and mathtasks. In §3.4, we demonstrate that in settings without high-quality reference data, we can use SLMfor self-referencing, leading to an average improvement of up to 3.3% in downstream tasks.2 Selective Language Modeling2.1 Not All Tokens Are Equal: Training Dynamics of Token LossOur investigation begins with a critical look at how individual tokens’ losses evolve during standardpre-training. We continue pre-training Tinyllama-1B with 15B tokens from OpenWebMath, savingcheckpoints after every 1B tokens. We then evaluate token-level loss at these intervals using thevalidation set of approximately 320,000 tokens. Figure 3(a) reveals a striking pattern: tokensfall into four categories based on their loss trajectory—persistent high loss (H →H), increasingloss (L→H), decreasing loss (H →L), and consistent low loss (L →L). For further details on thesecategories, see §D.1. Our analysis uncovers that a mere 26% of tokens show a notable loss reduction(H→L), while the majority (51%) remain in the L →L category, indicating they have already beenlearned. Interestingly, 11% of the tokens are persistently challenging (H →H), likely due to highaleatoric uncertainty [Hüllermeier and Waegeman, 2021]. Additionally, 12% of tokens experience anunexpected loss increase (L →H) during training.Our second observation is that a significant number of token losses exhibit persistent fluctuations,and resist convergence. The loss of many L →L and H→H tokens, as depicted in Figure 3 (b) and (c),show high variance during training. In §D.2, we visualize and analyze the content of these tokens andfind that many of them are noisy, which is consistent with our hypothesis.Consequently, we learn that the loss associated with each token during training does not decreasesmoothly like the overall loss; instead, there is a complex training dynamic among different tokens.If we can select the appropriate tokens for the model to focus on during training, we may be able tostabilize the trajectory of the model’s training and enhance its data efficiency.2.2 Selective Language ModelingOverview Inspired by the practice of reference model in document-level filtering, we propose asimple pipeline of token-level data selection, termed “Selective Language Modeling (SLM)”. Ourmethod comprises three steps, as depicted in Figure 4. We begin by training a reference model on acurated, high-quality dataset. This model then assesses the loss of each token within the pretraining3Step 1Train a reference model on high-quality text.Reference ModelHigh-quality CorpusPretraining CorpusStep 2Calculate each token’s ppl in the pretraining corpus.Language ModelStep 3Train an LLM with loss focused on high-score tokens.Figure 4: The pipeline of Selective Language Modeling (SLM). SLM optimizes language modelperformance by concentrating on valuable, clean tokens during pre-training. It involves three steps:(Step 1) Initially, train a reference model on high-quality data. (Step 2) Then, score each token’sloss in a corpus using the reference model. (Step 3) Finally, selectively train the language model ontokens that have higher scores.corpus. In the final phase, we train the language model selectively, focusing on tokens with highexcess loss between the training and reference model. The intuition is that tokens with high excessloss are more learnable and better aligned with the desired distribution, naturally excluding tokensthat are either irrelevant or of low quality. Below, we provide a detailed description of each step.Reference Modeling We begin by curating a high-quality dataset that reflects the desired datadistribution. We train a reference model (RM) using standard cross-entropy loss on the curated data.The resulting RM is then used to assess the token loss within a larger pretraining corpus. We computethe reference loss ( LRM) of a token xibased on the probability that the RM assigns to this token. Thecalculation is formalized as follows:LRM(xi) =−logP(xi|x<i) (1)By evaluating LRMfor each token, we establish the reference loss for selective pretraining, allowingus to focus on the most influential tokens in language modeling.Selective Pretraining Note that Causal Language Modeling (CLM) employs the cross-entropyloss:LCLM(θ) =−1NNXi=1logP(xi|x<i;θ) (2)Here,LCLM(θ)represents the loss function parameterized by model θ.Nis the length of the sequence,xiis the i-th token in the sequence, and x<irepresents all tokens before the i-th token. In contrast,Selective Language Modeling (SLM) trains the language model with a focus on tokens that exhibita high excess loss when compared to the reference model. The excess loss ( L∆) for a token xiisdefined as the difference between the current training model loss ( Lθ) and the reference loss:L∆(xi) =Lθ(xi)− L RM(xi) (3)We introduce a token selection ratio k%, which determines the proportion of tokens to be includedbased on their excess loss. The cross-entropy loss for the selected tokens is computed as follows:LSLM(θ) =−1N∗k%NXi=1Ik%(xi)·logP(xi|x<i;θ) (4)Here, N∗k%defines the number of tokens that fall within the top k%of excess loss. The indicatorfunction Ik%(xi)is defined as:Ik%(xi) =1ifxiranks in the top k%byS(xi)0otherwise(5)4Table 1: Few-shot CoT reasoning results of math pretraining. All models are tested with few-shotprompting. Previous best results are highlighted in blue, while our best results are in purple.∗Onlyunique math-related tokens are calculated. For RHO-1, we calculate only the selected tokens thatare used for training.†We use OpenAI’s MATH subset [Lightman et al., 2023] for evaluation, sincesome original test samples have been used in public training sets such as PRM800k.‡The SAT onlyhas 32 four-choice problems, so we average our results over the last three checkpoints, if available.Model |θ|DataUniq.Toks∗TrainToksGSM8K MATH†SV AMP ASDiv MA WPS TAB MQAMMLUSTEMSAT‡A VG1-2B Base ModelsTinyllama 1.1B - - - 2.9 3.2 11.0 18.1 20.4 12.5 14.6 16.1 21.9 13.4Phi-1.5 1.3B - - - 32.4 4.2 43.4 53.1 66.2 24.4 14.3 21.8 18.8 31.0Qwen1.5 1.8B - - - 36.1 6.8 48.5 63.6 79.0 29.2 25.1 31.3 40.6 40.0Gemma 2.0B - - - 18.8 11.4 38.0 56.6 72.5 36.9 26.8 34.4 50.0 38.4DeepSeekLLM 1.3B OWM 14B 150B 11.5 8.9 - - - - - 29.6 31.3 -DeepSeekMath 1.3B - 120B 150B 23.8 13.6 - - - - - 33.1 56.3 -Continual Pretraining on Tinyllama-1BTinyllama-CT 1.1B OWM 14B 15B 6.4 2.4 21.7 36.7 47.7 17.9 13.9 23.0 25.0 21.6RHO-1-Math 1.1B OWM 14B 9B 29.8 14.0 49.2 61.4 79.8 25.8 30.4 24.7 28.1 38.1∆ -40% +23.4 +11.6 +27.5 +24.7 +32.1 +7.9 +16.5 +1.7 +3.1 +16.5RHO-1-Math 1.1B OWM 14B 30B 36.2 15.6 52.1 67.0 83.9 29.0 32.5 23.3 28.1 40.9≥7B Base ModelsLLaMA-2 7B - - 14.0 3.6 39.5 51.7 63.5 30.9 12.4 32.7 34.4 31.4Mistral 7B - - 41.2 11.6 64.7 68.5 87.5 52.9 33.0 49.5 59.4 52.0Minerva 8B - 39B 164B 16.2 14.1 - - - - - 35.6 - -Minerva 62B - 39B 109B 52.4 27.6 - - - - - 53.9 - -Minerva 540B - 39B 26B 58.8 33.6 - - - - - 63.9 - -LLemma 7B PPile 55B 200B 38.8 17.2 56.1 69.1 82.4 48.7 41.0 45.4 59.4 50.9LLemma 34B PPile 55B 50B 54.2 23.0 67.9 75.7 90.1 57.0 49.8 54.7 68.8 60.1Intern-Math 7B - 31B 125B 41.8 14.4 61.6 66.8 83.7 50.0 57.3 24.8 37.5 48.7Intern-Math 20B - 31B 125B 65.4 30.0 75.7 79.3 94.0 50.9 38.5 53.1 71.9 62.1DeepSeekMath 7B - 120B 500B 64.1 34.2 74.0 83.9 92.4 63.4 62.4 56.4 84.4 68.4Continual Pretraining on Mistral-7BMistral-CT 7B OWM 14B 15B 42.9 22.2 68.6 71.0 86.1 45.1 47.7 52.6 65.6 55.8RHO-1-Math 7B OWM 14B 10.5B 66.9 31.0 77.8 79.0 93.9 49.9 58.7 54.6 84.4 66.2∆ -30% +24.0 +8.8 +9.2 +8.0 +7.8 +4.8 +11.0 +2.0 +18.8 +10.4By default, we use L∆as the score function S. This ensures that the loss is applied only to the tokensthat are deemed most beneficial for the language model to learn from. In practice, token selection canbe implemented by ranking the tokens in a batch according to their excess loss and using only the topk%of tokens for training. This process eliminates the loss for undesired tokens without incurringadditional costs during pretraining, making our approach both efficient and easily integrated.3 ExperimentsWe continually pretrained models in both mathematical and general domain and designed ablationand analysis experiments to understand the effectiveness of SLM.3.1 Experimental SetupReference Model Training To train our mathematical reference model, we gathered a dataset of0.5B high-quality, math-related tokens. This dataset is a blend of synthetic data from GPT [Yu et al.,2024, Huang et al., 2024] and manually curated data [Yue et al., 2024, Ni et al., 2024]. For the generalreference model, we compiled a corpus of 1.9B tokens from open-source datasets, such as Tulu-v2[Ivison et al., 2023] and OpenHermes-2.5 [Teknium, 2023]. We trained the reference models for 3epochs. The maximum learning rate was set at 5e-5 for 1B models and 1e-5 for 7B models, applyinga cosine decay schedule. We set the maximum sequence lengths to 2048 for 1B models and 4096 for7B models, packing multiple samples into these lengths for model input. In all main experiments, weinitialized the continual pretraining model and the reference model with the same base model.5Table 2: Tool-integrated reasoning results of math pretraining.Model Size Tools SFT Data GSM8k MATH SV AMP ASDiv MA WPS TAB GSM-HA VGUsed for SFT? ✓ ✓ ✗ ✗ ✗ ✗ ✗Previous ModelsGPT4-0314 - ✗ - 92.0 42.5 93.1 91.3 97.6 67.1 64.7 78.3GPT4-0314 (PAL) - ✓ - 94.2 51.8 94.8 92.6 97.7 95.9 77.6 86.4MAmmoTH 70B ✓ MI-260k 76.9 41.8 82.4 - - - - -ToRA 7B ✓ ToRA-69k 68.8 40.1 68.2 73.9 88.8 42.4 54.6 62.4ToRA 70B ✓ ToRA-69k 84.3 49.7 82.7 86.8 93.8 74.0 67.2 76.9DeepSeekMath 7B ✓ ToRA-69k 79.8 52.0 80.1 87.1 93.8 85.8 63.1 77.4Our Pretrained ModelsTinyLlama-CT 1B ✓ ToRA-69k 51.4 38.4 53.4 66.7 81.7 20.5 42.8 50.7RHO-1-Math 1B ✓ ToRA-69k 59.4 40.6 60.7 74.2 88.6 26.7 48.1 56.9∆ +8.0 +2.2 +7.3 +7.5 +6.9 +6.2 +5.3 +6.2Mistral-CT 7B ✓ ToRA-69k 77.5 48.4 76.9 83.8 93.4 67.5 60.4 72.6RHO-1-Math 7B ✓ ToRA-69k 81.3 51.8 80.8 85.5 94.5 70.1 63.1 75.3∆ +3.8 +3.4 +3.9 +1.7 +1.1 +2.6 +2.7 +2.7Pretraining Corpus For mathematical reasoning, we utilize the OpenWebMath (OWM) dataset[Paster et al., 2023], which comprises approximately 14B tokens sourced from math-related webpages in the Common Crawl. In the general domain, we combine the SlimPajama [Daria et al., 2023]and StarCoderData [Li et al., 2023a] (both part of the Tinyllama corpus) with OpenWebMath, trainingon a total of 80 billion tokens with a mix ratio of 6:3:1.Pretraining Setting For math pretraining, we continue pretraining on the Tinyllama-1.1B model[Zhang et al., 2024] and the Mistral-7B model [Jiang et al., 2023] with learning rates of 8e-5 and2e-5, respectively. For the 1.1B model, we conducted our training on 32 ×H100 80G GPUs. Thisconfiguration allowed us to train approximately 15 billion tokens in around 3.5 hours and 50 billiontokens in about 12 hours. In the case of the 7B model, training the same 15 billion tokens tookapproximately 18 hours under similar hardware conditions. For general domain, we set the learningrate for Tinyllama-1.1B model to 1e-4 and train 80B tokens under the same hardware conditions,which takes approximately 19 hours. The batch size is uniformly set to 1M tokens for both domains.Regarding the token selection ratio, we use 60% for the Tinyllama-1.1B model and 70% for theMistral-7B model.Baseline Setting We use models that have been continually pretrained (Tinyllama-CT and Mistral-CT) through regular causal language modeling as baselines. Moreover, we compare RHO-1withwell-known and top-performing baselines, including Gemma [Team et al., 2024], Qwen1.5 [Bai et al.,2023], Phi-1.5 [Li et al., 2023b], DeepSeekLLM [DeepSeek-AI, 2024], DeepSeekMath [Shao et al.,2024], CodeLlama [Roziere et al., 2023], Mistral [Jiang et al., 2023], Minerva [Lewkowycz et al.,2022], Tinyllama [Zhang et al., 2024], LLemma [Azerbayev et al., 2023], and InternLM2-Math [Yinget al., 2024]. For fine-tuning results, we also compare with previous best models MAmmoTH[Yueet al., 2024] and ToRA[Gou et al., 2024].Evaluation Setup To comprehensively evaluate pretrained models, we compare their few-shotcapabilities and fine-tuning performance across a variety of tasks. We adopt the lm-eval-harness4[Gaoet al., 2023] for general tasks, and develop math evaluation suite5for math tasks. We use vllm(v0.3.2) [Kwon et al., 2023] to speed up inference. Further details on our evaluation can be found inAppendix E.3.2 Math Pre-training ResultsFew-shot CoT Reasoning Results We evalute base models prompting with few-shot chain-of-thought (CoT) [Wei et al., 2022a] examples following previous works [Lewkowycz et al., 2022,Azerbayev et al., 2023, Shao et al., 2024]. As results shown in Table 1, in comparison to continuepretraining directly, RHO-1-Math has achieved the average few-shot accuracy improvement of4https://github.com/EleutherAI/lm-evaluation-harness5https://github.com/ZubinGou/math-evaluation-harness616.5% on 1B models and 10.4% on 7B models. Furthermore, after training for multiple epochs onOpenWebMath, we find that RHO-1could further increase the average few-shot accuracy to 40.9%.Compared to DeepSeekMath-7B, which pretrained on 500 billion math-related tokens, RHO-1-7Bpretrained on only 15 billion tokens (selecting 10.5 billion tokens) achieved comparable results,demonstrating the efficiency of our approach.Tool-Integrated Reasoning Results We fine-tune RHO-1and baseline models on 69k ToRA corpus[Gou et al., 2024], consisting of 16k GPT-4-generated trajectories in a tool-integrated reasoningformat, and 53k answer-augmented samples using LLaMA. As presented in Table 2, RHO-1-1BandRHO-1-7B achieved a state-of-the-art 40.6% and 51.8% on MATH dataset, respectively. Onsome unseen tasks ( e.g., TabMWP and GSM-Hard), RHO-1also demonstrates a certain degree ofgeneralizability, with an average few-shot accuracy improvement of 6.2% on the RHO-1-Math-1Band 2.7% on R HO-1-Math-7B.MMLU BBH MATH GSM8kMBPP(p@1) MBPP(p@10) HumEval(p@1) HumEval(p@10)010203040Metrics (%)+11.3+3.9+5.0+28.2+6.5+7.8+6.9+10.6Performance of General Pretrained Base ModelAGIEval ARC-C ARC-E BoolQ PIQAHellaSwag WinoGrandeOBQA TydiQA20304050607080Metrics (%) +1.1+5.0+8.6+11.3+0.9+1.4 +0.2+3.4+8.9TinyllamaTinyllama-CTRho-1-1BFigure 5: General pretraining results. We continual pretraining Tinyllama-1B on 80G generaltokens. Tinyllama-CT is etrained with CLM, while R HO-1 is trained with our proposed SLM.3.3 General Pre-training ResultsWe confirm the efficacy of the SLM in general pretraining by continual training Tinyllama-1.1Bon 80 billion tokens. The results depicted in Figure 5 indicate that although Tinyllama has alreadyundergone extensive training on the majority of these tokens, the application of SLM yields anaverage enhancement of 6.8% across 15 benchmarks compared to direct continual pretraining. Theimprovements were especially pronounced in code and math tasks, exceeding 10%.3.4 Self-Reference ResultsIn this section, we demonstrate that SLM can enhance the effectiveness of model pre-training usingonly pre-training corpora, without the need for additional high-quality data. Specifically, we initiallytrained the reference model on the OpenWebMath (OWM) corpus, a subset of Proof-Pile-2 (PPile).We evaluated OWM and PPile using the trained reference model and selected tokens for training.In this scenario, we assume the absence of downstream task-related data, a common situation inreal-world applications. We hypothesize that the key factor is not scoring the desired distributionbut filtering out noisy tokens. Therefore, we employed two different scoring functions based on thereference model loss, LRM, and the information entropy of the next token, HRM, which measures theuncertainty of the next token. Details are provided in Appendix H.7Table 3: Self-Reference results. We use OpenWebMath (OWM) to train the reference model.ModelScoreFunctionDataUniq.ToksTrainToksGSM8K MATH SV AMP ASDiv MA WPS MQA A VGTinyllama-CT (RM) - OWM 14B 15B 6.3 2.6 21.7 36.7 47.7 13.9 21.5Tinyllama-SLM LRM OWM 14B 10.5B 6.7 4.6 23.3 40.0 54.5 14.3 23.9Tinyllama-SLM HRM OWM 14B 10.5B 7.0 4.8 23.0 39.3 50.5 13.5 23.0Tinyllama-SLM LRM∩ H RMOWM 14B 9B 7.1 5.0 23.5 41.2 53.8 18.0 24.8Tinyllama-CT - PPile 55B 52B 8.0 6.6 23.8 41.0 54.7 14.2 24.7Tinyllama-SLM LRM∩ H RMPPile 55B 36B 8.6 8.4 24.4 43.6 57.9 16.1 26.50 1 2 3 4T okens (B)0.981.021.061.10Loss(a) Selected T oken LossBaselineRho-10 1 2 3 4T okens (B)0.850.900.951.001.051.10Loss(b) Downstream T oken LossBaselineRho-10 1 2 3 4T okens (B)2.52.93.33.74.1Loss(c) Unselected T oken LossBaselineRho-1Figure 6: The dynamics of pretraining loss and downstream loss. (a) and (c) represent the loss oftokens selected/unselected by SLM during pretraining in both SLM and CLM methods, while (b)represents the loss of the SLM and CLM methods on MetaMath [Yu et al., 2024]. We tested theabove results through the process of pretraining with a total of 4 billion tokens.The experimental results, as shown in Table 3, indicate that using only the OWM-trained referencemodel can effectively guide the model in pre-training on the same corpus, improving averagedownstream performance by +2.4%. Using only the information entropy as the score functionbrought about a similar improvement. Additionally, we considered training on the intersection oftokens selected by the two scoring functions and found better performance, with a 40% reduction intokens and +3.3% performance. Furthermore, training the SLM on the PPile, despite only using theOWM subset to train the reference model, still achieved a 1.8% improvement with 30% fewer tokensused. For more details, please refer to Appendix H.3.5 Ablation Study and AnalysisSelected Token Loss Aligns Better with Downstream Performance We utilized the referencemodel to filter tokens and assess their impact on validation and downstream losses after training. Asdepicted in Figure 6, we pretrained on 4B tokens and tracked loss variations across methods andvalidation sets. The RHO-1showed greater loss reduction on selected tokens than regular pretraining.Cross-referencing figures (a), (b), and (c) reveals that selected-token pretraining substantially lowersdownstream loss, while traditional pretraining’s effect on downstream loss is less pronounced despiteinitial loss reductions. Therefore, we expect that selecting tokens for pretraining is more efficient.In Figure 7, we demonstrate that the loss of selected tokens correlates with downstream task perfor-mance, following a power law similar to recent findings [Gadre et al., 2024]. Our analysis showsthat tokens selected by SLM positively impact performance, while those not selected have a negativeimpact. Thus, reducing loss across all tokens is not imperative for improved model performance.Refer to Appendix F for further details.What Tokens are Selected with SLM? We aim to analyze the tokens selected by the SLM methodin pretraining to further explore its working mechanism. To this end, we visualize the token selectionprocess during the training of RHO-1using the OpenWebMath. In §G.1, we have highlighted in bluethe tokens that were retained during actual pretraining. We observe that the majority of tokens chosen80.86 0.89 0.92 0.95 0.98Loss5.07.510.012.515.017.520.022.5Accuracy(%)(a) Accuracy vs. Selected T okens' LossSelected T okens at 2BSelected T okens at 5BSelected T okens at 8BSelected T okens at 11BSelected T okens at 14B3.50 3.54 3.58 3.62 3.66Loss5.07.510.012.515.017.520.022.5Accuracy(%)(b) Accuracy vs. Unselected T okens' LossUnselected T okens at 2BUnselected T okens at 5BUnselected T okens at 8BUnselected T okens at 11BUnselected T okens at 14BFigure 7: The relationship between the selected tokens / unselected tokens loss in SLM anddownstream task performance. The y-axis represents the average few-shot accuracy on GSM8k andMATH. The x-axis represents the average loss on selected tokens / unselected tokens at correspondingcheckpoint (2B, 5B, 8B, 11B, and 14B).2 5 8 11 14T okens(B)2.352.452.552.65PPLPPL of T okens Selected by Different CKPTSelected T okens at 2BSelected T okens at 5BSelected T okens at 8BSelected T okens at 11BSelected T okens at 14BFigure 8: The PPL of tokens selected by differ-ent checkpoint. We test the PPL of the tokensselected at 2B, 5B, 8B, 11B, and 14B.40 50 60 70 80 90 100T oken Select Ratio (%)05101520Accuracy (%)Accuracy vs. Select RatioGSM8KMathFigure 9: Effect of token select ratio. We train1B LM with SLM objective on 5B tokens.by the SLM method are closely related to mathematics, effectively training the model on the parts ofthe original corpus that are pertinent to mathematical content.Furthermore, we investigated the differences in token filtering across various checkpoints during thetraining process and tested the perplexity of these tokens on different checkpoints. As illustratedin Figure 8, we found that the tokens selected by later checkpoints tend to have higher perplexitytowards the later stages of training and lower perplexity in the earlier stages. This may suggest thatthe model first optimizes tokens with a larger learnable space, thereby increasing learning efficiency.Moreover, we noticed a sample-wise “double descent” [Nakkiran et al., 2021] on the loss of selectedtokens, where the select token’s perplexity initially increases before decreases. This might be aneffect of selecting tokens based on excess loss, targeting those most in need at each checkpoint.Effect of Token Select Ratio We investigate the impact of token selecting ratios of the SLM.Generally, the selecting ratio is defined by heuristic rules, similar to the approach previously employedin the training of Masked Language Models (MLMs) [Devlin et al., 2019, Liu et al., 2019]. As shownin Figure 9, the selected tokens is suitable for accounting for about 60% of the original tokens.4 ConclusionIn this paper, we propose using Selective Language Modeling(SLM) to train RHO-1, which selectmore suitable tokens for current pretraining stage. We conducted the detailed analysis of the loss oftokens during the pretraining process and found that not all tokens are equal during pretraining. Our9experiments and analysis in the fields of mathematics and general have demonstrated the effectivenessof the SLM method, emphasizing the importance of token level in the LLM pretraining process.In the future, how to improve pretraining of LLMs from the perspective of token level worthy ofin-depth research.AcknowledgmentsZhenghao Lin and Chen Lin were supported by National Key R&D Program of China (No.2022ZD0160501), the Natural Science Foundation of China (No.62372390,62432011). ZhibinGou and Yujiu Yang were supported by the Shenzhen Science and Technology Program(JCYJ20220818101001004) and the “Graph Neural Network Project” of Ping An Technology (Shen-zhen) Co., Ltd.ReferencesJared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray,Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprintarXiv:2001.08361 , 2020.Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, ArvindNeelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners.Advances in neural information processing systems , 33:1877–1901, 2020.OpenAI. Gpt-4 technical report, 2023.Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut,Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly capable multimodal models.arXiv preprint arXiv:2312.11805 , 2023.Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, ArmandJoulin, and Edouard Grave. Ccnet: Extracting high quality monolingual datasets from web crawl data. arXivpreprint arXiv:1911.00359 , 2019.Johannes Welbl, Amelia Glaese, Jonathan Uesato, Sumanth Dathathri, John Mellor, Lisa Anne Hendricks, KirstyAnderson, Pushmeet Kohli, Ben Coppin, and Po-Sen Huang. Challenges in detoxifying language models. InFindings of the Association for Computational Linguistics: EMNLP 2021 , pages 2447–2469, 2021.Niklas Muennighoff, Alexander Rush, Boaz Barak, Teven Le Scao, Nouamane Tazi, Aleksandra Piktus, SampoPyysalo, Thomas Wolf, and Colin A Raffel. Scaling data-constrained language models. Advances in NeuralInformation Processing Systems , 36, 2024.Jesse Dodge, Maarten Sap, Ana Marasovi ́c, William Agnew, Gabriel Ilharco, Dirk Groeneveld, MargaretMitchell, and Matt Gardner. Documenting large webtext corpora: A case study on the colossal clean crawledcorpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing , pages1286–1305, 2021.Shayne Longpre, Gregory Yauney, Emily Reif, Katherine Lee, Adam Roberts, Barret Zoph, Denny Zhou, JasonWei, Kevin Robinson, David Mimno, et al. A pretrainer’s guide to training data: Measuring the effects of dataage, domain coverage, quality, & toxicity. arXiv preprint arXiv:2305.13169 , 2023.Yi Tay, Mostafa Dehghani, Samira Abnar, Hyung Won Chung, William Fedus, Jinfeng Rao, Sharan Narang,Vinh Q Tran, Dani Yogatama, and Donald Metzler. Scaling laws vs model architectures: How does inductivebias influence scaling? arXiv preprint arXiv:2207.10551 , 2022.Alexander Wettig, Tianyu Gao, Zexuan Zhong, and Danqi Chen. Should you mask 15% in masked languagemodeling? In Andreas Vlachos and Isabelle Augenstein, editors, Proceedings of the 17th Conference ofthe European Chapter of the Association for Computational Linguistics , pages 2985–3000, Dubrovnik,Croatia, May 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.eacl-main.217. URLhttps://aclanthology.org/2023.eacl-main.217 .Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: Anintroduction to concepts and methods. Machine learning , 110(3):457–506, 2021.Longhui Yu, Weisen Jiang, Han Shi, YU Jincheng, Zhengying Liu, Yu Zhang, James Kwok, Zhenguo Li, AdrianWeller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models.InICLR , 2024.10Yiming Huang, Xiao Liu, Yeyun Gong, Zhibin Gou, Yelong Shen, Nan Duan, and Weizhu Chen. Key-point-driven data synthesis with its enhancement on mathematical reasoning. arXiv preprint arXiv:2403.02333 ,2024.Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. Mammoth:Building math generalist models through hybrid instruction tuning. In ICLR , 2024.Xinzhe Ni, Yeyun Gong, Zhibin Gou, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. Exploring themystery of influential data for mathematical reasoning, 2024.Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang,David Wadden, Noah A Smith, Iz Beltagy, et al. Camels in a changing climate: Enhancing lm adaptationwith tulu 2. arXiv preprint arXiv:2311.10702 , 2023.Teknium. Openhermes 2.5: An open dataset of synthetic data for generalist llm assistants, 2023. URLhttps://huggingface.co/datasets/teknium/OpenHermes-2.5 .Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, JohnSchulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint arXiv:2305.20050 , 2023.Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. Openwebmath: An open dataset ofhigh-quality mathematical web text, 2023.Soboleva Daria, Al-Khateeb Faisal, Myers Robert Steeves Jacob R, Hestness Joel, and Dey Nolan. SlimPajama:A 627B token cleaned and deduplicated version of RedPajama. https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama , 2023. URLhttps://huggingface.co/datasets/cerebras/SlimPajama-627B .Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, MarcMarone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, ThomasWang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, NicolasGontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, BenjaminLipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy V , Jason Stillerman, Siva Sankalp Patel, DmitryAbulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour Moustafa-Fahmy, Urvashi Bhattacharyya,Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, ManuelRomero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, TriDao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, DanishContractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, SeanHughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. Starcoder: may the source bewith you! CoRR , abs/2305.06161, 2023a.Peiyuan Zhang, Guangtao Zeng, Tianduo Wang, and Wei Lu. Tinyllama: An open-source small language model,2024.Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de lasCasas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXivpreprint arXiv:2310.06825 , 2023.Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, LaurentSifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on geminiresearch and technology. arXiv preprint arXiv:2403.08295 , 2024.Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, FeiHuang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609 , 2023.Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. Textbooksare all you need ii: phi-1.5 technical report, 2023b.DeepSeek-AI. Deepseek llm: Scaling open-source language models with longtermism. arXiv preprintarXiv:2401.02954 , 2024. URL https://github.com/deepseek-ai/DeepSeek-LLM .Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li, Y Wu, and DayaGuo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprintarXiv:2402.03300 , 2024.Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, JingyuLiu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code. arXiv preprintarXiv:2308.12950 , 2023.11Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problemswith language models. Advances in Neural Information Processing Systems , 35:3843–3857, 2022.Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q Jiang,Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model for mathematics. arXivpreprint arXiv:2310.10631 , 2023.Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, Yichuan Ma, Jiawei Hong,Kuikun Liu, Ziyi Wang, et al. Internlm-math: Open math large language models toward verifiable reasoning.arXiv preprint arXiv:2402.06332 , 2024.Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen, et al. Tora: Atool-integrated reasoning agent for mathematical problem solving. In ICLR , 2024.Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, LaurenceGolding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa,Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite,Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, 12 2023.URL https://zenodo.org/records/10256836 .Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gon-zalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving withpagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles , 2023.Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,et al. Chain-of-thought prompting elicits reasoning in large language models. In NIPS , volume 35, pages24824–24837, 2022a.Samir Yitzhak Gadre, Georgios Smyrnis, Vaishaal Shankar, Suchin Gururangan, Mitchell Wortsman, Rulin Shao,Jean Mercat, Alex Fang, Jeffrey Li, Sedrick Keh, Rui Xin, Marianna Nezhurina, Igor Vasiljevic, Jenia Jitsev,Alexandros G. Dimakis, Gabriel Ilharco, Shuran Song, Thomas Kollar, Yair Carmon, Achal Dave, ReinhardHeckel, Niklas Muennighoff, and Ludwig Schmidt. Language models scale reliably with over-training and ondownstream tasks. Preprint , 2024.Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep doubledescent: Where bigger models and more data hurt. Journal of Statistical Mechanics: Theory and Experiment ,2021(12):124003, 2021.Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirec-tional transformers for language understanding. In NAACL-HLT (1) , pages 4171–4186. Association forComputational Linguistics, 2019.Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, LukeZettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining approach. CoRR ,abs/1907.11692, 2019.Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, WeiLi, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal ofmachine learning research , 21(140):1–67, 2020.Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving. arXiv preprintarXiv:2009.03393 , 2020.Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi,Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. Textbooks are all you need. arXivpreprint arXiv:2306.11644 , 2023.Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, andNicholas Carlini. Deduplicating training data makes language models better. arXiv preprint arXiv:2107.06499 ,2021.Nikhil Kandpal, Eric Wallace, and Colin Raffel. Deduplicating training data mitigates privacy risks in languagemodels. In International Conference on Machine Learning , pages 10697–10707. PMLR, 2022.Kushal Tirumala, Daniel Simig, Armen Aghajanyan, and Ari Morcos. D4: Improving llm pretraining viadocument de-duplication and diversification. In NIPS , volume 36, 2023.12Sang Michael Xie, Shibani Santurkar, Tengyu Ma, and Percy S Liang. Data selection for language models viaimportance resampling. Advances in Neural Information Processing Systems , 36, 2024a.Alon Albalak, Yanai Elazar, Sang Michael Xie, Shayne Longpre, Nathan Lambert, Xinyi Wang, NiklasMuennighoff, Bairu Hou, Liangming Pan, Haewon Jeong, Colin Raffel, Shiyu Chang, Tatsunori Hashimoto,and William Yang Wang. A survey on data selection for language models, 2024.Sang Michael Xie, Hieu Pham, Xuanyi Dong, Nan Du, Hanxiao Liu, Yifeng Lu, Percy S Liang, Quoc V Le,Tengyu Ma, and Adams Wei Yu. Doremi: Optimizing data mixtures speeds up language model pretraining.Advances in Neural Information Processing Systems , 36, 2024b.Mayee Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, and Christopher Ré. Skill-it! adata-driven skills framework for understanding and training language models. Advances in Neural InformationProcessing Systems , 36, 2024.Yingwei Ma, Yue Liu, Yue Yu, Yuanliang Zhang, Yu Jiang, Changjian Wang, and Shanshan Li. At whichtraining stage does code data help LLMs reasoning? In The Twelfth International Conference on LearningRepresentations , 2024. URL https://openreview.net/forum?id=KIPJKST4gw .Ming Li, Yong Zhang, Zhitao Li, Jiuhai Chen, Lichang Chen, Ning Cheng, Jianzong Wang, Tianyi Zhou, andJing Xiao. From quantity to quality: Boosting llm performance with self-guided data selection for instructiontuning. arXiv preprint arXiv:2308.12032 , 2023c.Wei Liu, Weihao Zeng, Keqing He, Yong Jiang, and Junxian He. What makes good data for alignment? acomprehensive study of automatic data selection in instruction tuning. In ICLR , 2024.Yunshui Li, Binyuan Hui, Xiaobo Xia, Jiaxi Yang, Min Yang, Lei Zhang, Shuzheng Si, Junhao Liu, TongliangLiu, Fei Huang, et al. One shot learning as instruction data prospector for large language models. arXivpreprint arXiv:2312.10302 , 2023d.Mengzhou Xia, Sadhika Malladi, Suchin Gururangan, Sanjeev Arora, and Danqi Chen. Less: Selecting influentialdata for targeted instruction tuning. arXiv preprint arXiv:2402.04333 , 2024.Feiyang Kang, Hoang Anh Just, Yifan Sun, Himanshu Jahagirdar, Yuanzhi Zhang, Rongxing Du, Anit KumarSahu, and Ruoxi Jia. Get more for less: Principled data selection for warming up fine-tuning in LLMs. In TheTwelfth International Conference on Learning Representations , 2024. URL https://openreview.net/forum?id=QmYNBVukex .Ziheng Qin, Kai Wang, Zangwei Zheng, Jianyang Gu, Xiangyu Peng, Zhaopan Xu, Daquan Zhou, Lei Shang,Baigui Sun, Xuansong Xie, and Yang You. Infobatch: Lossless training speed up by unbiased dynamicdata pruning. In The Twelfth International Conference on Learning Representations , 2024. URL https://openreview.net/forum?id=C61sk5LsK6 .Together Computer. Redpajama: an open dataset for training large language models, 10 2023. URL https://github.com/togethercomputer/RedPajama-Data .Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. arXivpreprint arXiv:1708.00489 , 2017.Krishnateja Killamsetty, Xujiang Zhao, Feng Chen, and Rishabh Iyer. Retrieve: Coreset selection for efficientand robust semi-supervised learning. Advances in neural information processing systems , 34:14488–14501,2021.Ilya Loshchilov and Frank Hutter. Online batch selection for faster training of neural networks. arXiv preprintarXiv:1511.06343 , 2015.Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. arXiv preprintarXiv:1511.05952 , 2015.Haw-Shiuan Chang, Erik Learned-Miller, and Andrew McCallum. Active bias: Training more accurate neuralnetworks by emphasizing high variance samples. Advances in Neural Information Processing Systems , 30,2017.Angelos Katharopoulos and François Fleuret. Not all samples are created equal: Deep learning with importancesampling. In International conference on machine learning , pages 2525–2534. PMLR, 2018.Angela H Jiang, Daniel L-K Wong, Giulio Zhou, David G Andersen, Jeffrey Dean, Gregory R Ganger, GauriJoshi, Michael Kaminksy, Michael Kozuch, Zachary C Lipton, et al. Accelerating deep learning by focusingon the biggest losers. arXiv preprint arXiv:1910.00762 , 2019.13Hwanjun Song, Minseok Kim, Sundong Kim, and Jae-Gil Lee. Carpe diem, seize the samples uncertain" atthe moment" for adaptive batch selection. In Proceedings of the 29th ACM International Conference onInformation & Knowledge Management , pages 1385–1394, 2020.Sören Mindermann, Jan M Brauner, Muhammed T Razzak, Mrinank Sharma, Andreas Kirsch, Winnie Xu,Benedikt Höltgen, Aidan N Gomez, Adrien Morisot, Sebastian Farquhar, and Yarin Gal. Prioritized training onpoints that are learnable, worth learning, and not yet learnt. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song,Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference onMachine Learning , volume 162 of Proceedings of Machine Learning Research , pages 15630–15649. PMLR,17–23 Jul 2022. URL https://proceedings.mlr.press/v162/mindermann22a.html .Simin Fan and Martin Jaggi. Irreducible curriculum for language model pretraining. arXiv preprintarXiv:2310.15389 , 2023.Jean Kaddour, Oscar Key, Piotr Nawrot, Pasquale Minervini, and Matt Kusner. No train no gain: Revisitingefficient training algorithms for transformer-based language models. In Thirty-seventh Conference on NeuralInformation Processing Systems , 2023. URL https://openreview.net/forum?id=thbXgJ8gNK .Cody Coleman, Christopher Yeh, Stephen Mussmann, Baharan Mirzasoleiman, Peter Bailis, Percy Liang, JureLeskovec, and Matei Zaharia. Selection via proxy: Efficient data selection for deep learning. arXiv preprintarXiv:1906.11829 , 2019.Logan Engstrom, Axel Feldmann, and Aleksander Madry. Dsdm: Model-aware dataset selection with datamodels.arXiv preprint arXiv:2401.12926 , 2024.Yonatan Oren, Shiori Sagawa, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust languagemodeling. arXiv preprint arXiv:1909.02060 , 2019.Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to documentrecognition. Proceedings of the IEEE , 86(11):2278–2324, 1998.Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and ChristopherPotts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the2013 conference on empirical methods in natural language processing , pages 1631–1642, 2013.Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectionaltransformers for language understanding. arXiv preprint arXiv:1810.04805 , 2018.Yuxian Gu, Zhengyan Zhang, Xiaozhi Wang, Zhiyuan Liu, and Maosong Sun. Train no evil: Selective maskingfor task-guided pre-training. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu, editors, Proceedingsof the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) , pages 6966–6974,Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.566.URL https://aclanthology.org/2020.emnlp-main.566 .Tanish Lad, Himanshu Maheshwari, Shreyas Kottukkal, and Radhika Mamidi. Using selective masking as abridge between pre-training and fine-tuning. arXiv preprint arXiv:2211.13815 , 2022.Qihuang Zhong, Liang Ding, Juhua Liu, Xuebo Liu, Min Zhang, Bo Du, and Dacheng Tao. Revisiting tokendropping strategy in efficient BERT pretraining. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki,editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1:Long Papers) , pages 10391–10405, Toronto, Canada, July 2023a. Association for Computational Linguistics.doi: 10.18653/v1/2023.acl-long.579. URL https://aclanthology.org/2023.acl-long.579 .Le Hou, Richard Yuanzhe Pang, Tianyi Zhou, Yuexin Wu, Xinying Song, Xiaodan Song, and Denny Zhou. Tokendropping for efficient BERT pretraining. In Smaranda Muresan, Preslav Nakov, and Aline Villavicencio,editors, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:Long Papers) , pages 3774–3784, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi:10.18653/v1/2022.acl-long.262. URL https://aclanthology.org/2022.acl-long.262 .Tianjian Li, Haoran Xu, Philipp Koehn, Daniel Khashabi, and Kenton Murray. Error norm truncation: Robusttraining in the presence of data noise for text generation models. arXiv preprint arXiv:2310.00840 , 2023e.Jessica Rumbelow and Matthew Watkins. Solidgoldmagikarp (plus, prompt generation).LessWrong, 2023. URL https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation .Sander Land and Max Bartolo. Fishing for magikarp: Automatically detecting under-trained tokens in largelanguage models. arXiv preprint arXiv:2405.05417 , 2024.14Naomi Saphra and Adam Lopez. Understanding learning dynamics of language models with svcca. arXivpreprint arXiv:1811.00225 , 2018.Leshem Choshen, Guy Hacohen, Daphna Weinshall, and Omri Abend. The grammar-learning trajectories ofneural language models. arXiv preprint arXiv:2109.06096 , 2021.Leo Z Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, and Noah A Smith. Probing across time: Whatdoes roberta know and when? arXiv preprint arXiv:2104.07885 , 2021.Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, and Vedant Misra. Grokking: Generalizationbeyond overfitting on small algorithmic datasets. arXiv preprint arXiv:2201.02177 , 2022.Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, LukeZettlemoyer, and Ves Stoyanov. Training trajectories of language models across scales. arXiv preprintarXiv:2212.09803 , 2022.Danny Hernandez, Jared Kaplan, Tom Henighan, and Sam McCandlish. Scaling laws for transfer. arXiv preprintarXiv:2102.01293 , 2021.Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diegode Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal largelanguage models. arXiv preprint arXiv:2203.15556 , 2022.Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, MaartenBosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprintarXiv:2206.07682 , 2022b.Berivan Isik, Natalia Ponomareva, Hussein Hazimeh, Dimitris Paparas, Sergei Vassilvitskii, and Sanmi Koyejo.Scaling laws for downstream task performance of large language models. arXiv preprint arXiv:2402.04177 ,2024.Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. Memorization withoutoverfitting: Analyzing the training dynamics of large language models. Advances in Neural InformationProcessing Systems , 35:38274–38290, 2022.Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang.Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646 , 2022.Tom Henighan, Shan Carter, Tristan Hume, Nelson Elhage, Robert Lasenby, Stanislav Fort, Nicholas Schiefer,and Christopher Olah. Superposition, memorization, and double descent. Transformer Circuits Thread , 2023.Stella Biderman, USVSN PRASHANTH, Lintang Sutawika, Hailey Schoelkopf, Quentin Anthony, ShivanshuPurohit, and Edward Raff. Emergent and predictable memorization in large language models. Advances inNeural Information Processing Systems , 36, 2024.Danny Hernandez, Tom Brown, Tom Conerly, Nova DasSarma, Dawn Drain, Sheer El-Showk, Nelson Elhage,Zac Hatfield-Dodds, Tom Henighan, Tristan Hume, et al. Scaling laws and interpretability of learning fromrepeated data. arXiv preprint arXiv:2205.10487 , 2022.Fuzhao Xue, Yao Fu, Wangchunshu Zhou, Zangwei Zheng, and Yang You. To repeat or not to repeat: Insightsfrom scaling llm under token-crisis. Advances in Neural Information Processing Systems , 36, 2024.Charles AE Goodhart and CAE Goodhart. Problems of monetary management: the UK experience . Springer,1984.Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang,Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions withhuman feedback. Advances in neural information processing systems , 35:27730–27744, 2022.Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert,Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers tosolve math word problems, 2021. URL https://arxiv.org/abs/2110.14168 .Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and JacobSteinhardt. Measuring mathematical problem solving with the math dataset. In NIPS , 2021.Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and GrahamNeubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435 , 2022.15Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple mathword problems? In Proceedings of the 2021 Conference of the North American Chapter of the As-sociation for Computational Linguistics: Human Language Technologies , pages 2080–2094, Online,June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.168. URLhttps://aclanthology.org/2021.naacl-main.168 .Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing Englishmath word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for ComputationalLinguistics , pages 975–984, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.92. URL https://aclanthology.org/2020.acl-main.92 .Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. MAWPS: Amath word problem repository. In Proceedings of the 2016 Conference of the North American Chapterof the Association for Computational Linguistics: Human Language Technologies , pages 1152–1157, SanDiego, California, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-1136. URLhttps://aclanthology.org/N16-1136 .Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, andAshwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. InThe Eleventh International Conference on Learning Representations , 2023. URL https://openreview.net/forum?id=DHyHRBwJUTN .Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. Mathqa:Towards interpretable math word problem solving with operation-based formalisms. arXiv preprintarXiv:1905.13319 , 2019.Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt.Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300 , 2020.Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, AakankshaChowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261 , 2022.Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen,and Nan Duan. Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprintarXiv:2304.06364 , 2023b.Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and OyvindTafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprintarXiv:1803.05457 , 2018.Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova.Boolq: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044 ,2019.Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsensein natural language. In Proceedings of the AAAI conference on artificial intelligence , volume 34, pages7432–7439, 2020.Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine reallyfinish your sentence? arXiv preprint arXiv:1905.07830 , 2019.Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarialwinograd schema challenge at scale. Communications of the ACM , 64(9):99–106, 2021.Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? anew dataset for open book question answering. arXiv preprint arXiv:1809.02789 , 2018.Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Lei Shen, Zihan Wang, Andi Wang,Yang Li, et al. Codegeex: A pre-trained model for code generation with multilingual benchmarking onhumaneval-x. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and DataMining , pages 5673–5684, 2023.Jonathan H Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, andJennimaria Palomaki. Tydi qa: A benchmark for information-seeking question answering in ty pologically diverse languages. Transactions of the Association for Computational Linguistics , 8:454–470, 2020.16Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang,Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprintarXiv:2108.07732 , 2021.Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y Wu,YK Li, et al. Deepseek-coder: When the large language model meets programming–the rise of codeintelligence. arXiv preprint arXiv:2401.14196 , 2024.Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer,and Mike Lewis. Contrastive decoding: Open-ended text generation as optimization. In ACL (1) , pages12286–12312. Association for Computational Linguistics, 2023f.Fanqi Wan, Xinting Huang, Deng Cai, Xiaojun Quan, Wei Bi, and Shuming Shi. Knowledge fusion of largelanguage models. In The Twelfth International Conference on Learning Representations , 2024. URLhttps://openreview.net/forum?id=jiDsk12qcz .Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. Specializing smaller language models towardsmulti-step reasoning. In International Conference on Machine Learning , pages 10421–10430. PMLR, 2023.17AppendixContentsA Author Contributions 19B Related Works 19B.1 Pretraining Data Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . 19B.2 Data Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19B.3 Language Model Training Dynamics . . . . . . . . . . . . . . . . . . . . . . . 20B.4 Scaling Laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20C Limitations and Future Work 20D Analysis and Visualization of Tokens in Pretraining 21D.1 More Details of Four Categories Tokens . . . . . . . . . . . . . . . . . . . . . 21D.2 Non-Converging Tokens in Pretrainig . . . . . . . . . . . . . . . . . . . . . . . 22E Evalution Details 22E.1 Math Evalution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22E.2 General Evalution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22F Relate the Selected Tokens’ Loss to Downstream Task Performance 22G Examples of Tokens Selected by SLM 23G.1 Token Selected Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23G.2 Dynamic Token Selected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23H Self-Reference Setting 23I Weak-to-Strong Generalization 2418A Author ContributionsZhenghao Lin designed and implemented detailed token selection process, conducted extensivepreliminary experiments, developed the pre-training and evaluation pipeline, conducted most of thepre-training experiments and analysis, implemented baselines, and significantly contributed to thewriting. Zhibin Gou presented a preliminary proposal, introduced the method of using excess loss forreweighting tokens, compiled high-quality corpora, trained reference models, set up the fine-tuningand evaluation pipelines, designed the experimental analysis, and significantly contributed to thewriting. Yeyun Gong proposed the initial project and co-led the project with Weizhu Chen, theyoffered extensive advice and guidance on experiments and writing, and oversaw team collaborationand resource management. Xiao Liu, Yelong Shen, Ruochen Xu, Chen Lin, Yujiu Yang, Jian Jiao,and Nan Duan offered research mentorship, coordinated the project, and contributed to the writing.B Related WorksB.1 Pretraining Data OptimizationThe objective of optimizing pre-training corpora is to maximize the performance and efficiencyof language model training by improving the quality and scale of the pretrain data mixture. Thisincludes data collecting through crawling [Raffel et al., 2020] or synthesis [Polu and Sutskever, 2020,Gunasekar et al., 2023], de-duplication [Lee et al., 2021, Kandpal et al., 2022, Tirumala et al., 2023],filtering and selection [Xie et al., 2024a, Albalak et al., 2024], as well as data composition [Xie et al.,2024b] and curriculum [Chen et al., 2024, Ma et al., 2024].B.2 Data SelectionData selection for fine-tuning has been extensively studied, focusing on improving quality [Li et al.,2023c], diversity [Liu et al., 2024], and distribution matching [Li et al., 2023d, Xia et al., 2024, Niet al., 2024, Kang et al., 2024]. For pretraining, various lightweight filters are utilized [Albalak et al.,2024], including heuristic-based ( e.g., language and item count filtering), classifier-based [Brownet al., 2020], and perplexity or loss-based approaches [Wenzek et al., 2019, Qin et al., 2024]. Themassive public RedPajama-Data-v2 dataset [Computer, 2023], for example, leverages over 40 qualityindicators for data filtering and reweighting. Nevertheless, strict filtering like blocklist [Raffel et al.,2020] and Safety API filtering [Welbl et al., 2021], have been found to hurt evaluation loss or inducebias [Dodge et al., 2021].Sample-level selection has been extensively studied in previous research [Sener and Savarese, 2017,Killamsetty et al., 2021], particularly through online batch selection [Loshchilov and Hutter, 2015,Schaul et al., 2015, Chang et al., 2017, Katharopoulos and Fleuret, 2018, Jiang et al., 2019]. Theseapproaches have been applied to various classification tasks [Song et al., 2020, Mindermann et al.,2022] and language modeling [Fan and Jaggi, 2023]. However, Kaddour et al. [2023] find that batchselection is not computationally efficient.Many previous works have employed the general idea of using a reference model as a proxy fordata selection. For instance, Selection Via Proxy trains a proxy model to select samples with highuncertainty [Coleman et al., 2019]. Xie et al. [2024a] and Engstrom et al. [2024] utilize n-grammodels or datamodels with a target dataset to estimate importance weights. Additionally, Xie et al.[2024b] optimize the worst-case excess loss [Oren et al., 2019] relative to a reference model todetermine domain weights. One of SLM’s scoring functions is excess loss, and the most relevantwork related to excess loss is RHO-LOSS [Mindermann et al., 2022], which trains a small model on aholdout set and uses the difference between training loss and holdout loss to select in-batch samples.Although excess loss is mathematically identical to RHO-LOSS, SLM differs in three important ways:1) The focus is distinct. Motivated by the training dynamics of token loss, the core idea of SLM is toselect useful tokens for pre-training. Its score functions are highly flexible and not limited to excessloss (see Appendix H for other functions). In contrast, RHO-LOSS aims to mathematically derive areducible holdout loss to minimize generalization loss. 2) The meaning and training procedure of theproxy model are different. SLM trains a reference model on high-quality data to reflect the desireddata distribution, whereas RHO-LOSS trains a small model on a random holdout set. 3) The selectionscale and granularity vary. RHO-LOSS selects sample-level data on a small scale (typically 1K–1Msamples) for task-specific fine-tuning tasks such as MNIST [LeCun et al., 1998] and SST-2 [Socher19et al., 2013]. In contrast, SLM conducts fine-grained token-level selection on large-scale languagemodel pre-training, involving up to 80B tokens.Token-level training strategies have also been explored, especially for the pre-training of BERT-likemodels using Masked Language Modeling (MLM) [Devlin et al., 2018]. Specifically, “selectivemasking” involves masking important tokens in the input to focus on learning tokens that are morerelevant to downstream tasks [Gu et al., 2020, Lad et al., 2022], whereas “token dropping” aims toreduce training costs by omitting less important tokens [Zhong et al., 2023a, Hou et al., 2022]. [Liet al., 2023e] assesses the quality of each token based on the skewness of its predicted distribution andtruncates the noisy tokens during training. Additionally, some research has approached the analysisand detection of under-trained tokens from a tokenization perspective [Rumbelow and Watkins, 2023,Land and Bartolo, 2024]. To our knowledge, we are the first to explore token-level data selection forlarge language model training, aimed at enhancing data quality and information density at the mostfundamental granularity.B.3 Language Model Training DynamicsInvestigating the training dynamics of language models is essential for understanding their behaviorthroughout the training process. This research includes studying internal representations [Saphraand Lopez, 2018], the acquisition of linguistic knowledge [Choshen et al., 2021, Liu et al., 2021],and the phenomenon of grokking [Power et al., 2022]. The analysis by Xia et al. [2022] is the mostrelated to ours, which examines token-level training trajectories in models of varying sizes. Ourfindings, however, diverge from those of Xia et al. [2022], who posit that tokens with little change inperplexity are “already learned”. We identify a spectrum of token patterns, including “easy tokens”and “hard tokens” that resist convergence. Recognizing this, we propose a method of selectivelanguage modeling that targets the influential tokens, optimizing the learning process.B.4 Scaling LawsScaling laws guide us in discovering the impact of factors such as parameter count, data size, andcompute on language model performance and behavior. These studies usually focus on predicablescaling though power law [Kaplan et al., 2020, Hernandez et al., 2021], optimal resource allocation[Hoffmann et al., 2022], downstream tasks [Wei et al., 2022b, Isik et al., 2024, Gadre et al., 2024],architectures [Tay et al., 2022], memorization [Tirumala et al., 2022, Carlini et al., 2022, Henighanet al., 2023, Biderman et al., 2024], and repeating data [Hernandez et al., 2022, Muennighoff et al.,2024, Xue et al., 2024]. Most scaling laws on model performance study cross-entory loss on alltraining tokens, while we focus on the tokens loss of desired distributions.C Limitations and Future WorkGeneralizability In math continual pretraining, as depicted in Figure 6, training exclusively withSLM leads to quickly convergence to the domain focused by the reference model, accompanied bya significant rise in the loss of unselected tokens. Although no adverse effects, like biases, havebeen observed from the increased loss yet, a general pretraining loss on text and code may preventoverfitting [Goodhart and Goodhart, 1984], as suggested by Ouyang et al. [2022] and Azerbayevet al. [2023]. Furthermore, future efforts could broaden the corpus scope of the reference model, andenlarge the pretraining data size, as exemplified by DeepSpeedMath [Shao et al., 2024].Scalability Due to budget constraints, we have only verified the effectiveness of our method onsmaller models (<=7B parameters) and smaller datasets (<100B tokens). Smaller models benefitsignificantly from removing the loss of irrelevant tokens and focusing on important ones. However,it’s possible that very large models trained on extensive corpora may naturally develop this inductivebias to compress useful data ( i.e.,compressing everything), although it may sounds inefficient fornow. Therefore, future works should study whether this selective language modeling technique canscale to very large models and data [Kaplan et al., 2020].Is training a reference model necessary? To score tokens, we need a high-quality referencemodel. This could be a base model trained with a small amount of high-quality data, or a performantopen-source model. In fact, since we only need input logprobs or perplexity from reference model,20/uni00000013 /uni00000018 /uni00000014/uni00000013 /uni00000014/uni00000018/uni00000037/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000048/uni00000047/uni00000003/uni00000037/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000056/uni0000000b/uni00000025/uni0000000c/uni00000013/uni00000014/uni00000015/uni00000016/uni00000017/uni0000002f/uni00000052/uni00000056/uni00000056/uni0000000b/uni00000044/uni0000000c/uni00000003/uni0000002f/uni00000052/uni00000056/uni00000056/uni00000003/uni00000049/uni00000052/uni00000055/uni00000003/uni00000047/uni0000004c/uni00000049/uni00000049/uni00000048/uni00000055/uni00000048/uni00000051/uni00000057/uni00000003/uni00000057/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000003/uni00000057/uni0000005c/uni00000053/uni00000048/uni00000056/uni0000002b/uni0000002f/uni00000003/uni0000000b/uni00000015/uni0000001b/uni00000008/uni0000000c/uni0000002b/uni0000002b/uni00000003/uni0000000b/uni0000001b/uni00000008/uni0000000c/uni0000002f/uni0000002f/uni00000003/uni0000000b/uni00000018/uni00000014/uni00000008/uni0000000c/uni0000002f/uni0000002b/uni00000003/uni0000000b/uni00000014/uni00000016/uni00000008/uni0000000c/uni00000013 /uni00000018 /uni00000014/uni00000013 /uni00000014/uni00000018/uni00000037/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000048/uni00000047/uni00000003/uni00000037/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000056/uni0000000b/uni00000025/uni0000000c/uni00000013/uni00000011/uni00000013/uni00000013/uni00000013/uni00000011/uni00000013/uni00000018/uni00000013/uni00000011/uni00000014/uni00000013/uni00000013/uni00000011/uni00000014/uni00000018/uni0000002f/uni00000052/uni00000056/uni00000056/uni0000000b/uni00000045/uni0000000c/uni00000003/uni00000028/uni0000005b/uni00000044/uni00000050/uni00000053/uni0000004f/uni00000048/uni00000003/uni0000002f /uni0000002f/uni00000003/uni00000057/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000056/uni0000002f/uni0000002f/uni00000003/uni00000037/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000003/uni00000014/uni0000002f/uni0000002f/uni00000003/uni00000037/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000003/uni00000015/uni0000002f/uni0000002f/uni00000003/uni00000037/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000003/uni00000016/uni00000013 /uni00000018 /uni00000014/uni00000013 /uni00000014/uni00000018/uni00000037/uni00000055/uni00000044/uni0000004c/uni00000051/uni00000048/uni00000047/uni00000003/uni00000037/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000056/uni0000000b/uni00000025/uni0000000c/uni00000015/uni00000011/uni00000013/uni00000015/uni00000011/uni00000018/uni00000016/uni00000011/uni00000013/uni00000016/uni00000011/uni00000018/uni00000017/uni00000011/uni00000013/uni00000017/uni00000011/uni00000018/uni0000002f/uni00000052/uni00000056/uni00000056/uni0000000b/uni00000046/uni0000000c/uni00000003/uni00000028/uni0000005b/uni00000044/uni00000050/uni00000053/uni0000004f/uni00000048/uni00000003/uni0000002b /uni0000002b/uni00000003/uni00000057/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000056/uni0000002b/uni0000002b/uni00000003/uni00000037/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000003/uni00000014/uni0000002b/uni0000002b/uni00000003/uni00000037/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000003/uni00000015/uni0000002b/uni0000002b/uni00000003/uni00000037/uni00000052/uni0000004e/uni00000048/uni00000051/uni00000003/uni00000016Figure 10: The loss of four categories of tokens during Mistral-7B pretraining on OpenWebMath.(a) shows the loss of H →H, L→H, H→L, and L→L tokens during pretraining. (b) and (c) show threecases of fluctuating tokens’ loss in L →L and H→H during pretraining, respectively.we could even utilize more powerful proprietary model APIs. We can input tokens and use the logprobabilities of the input returned by the API as reference scores. We leave this for future works.How to improve upon SLM? There are many natural extensions of SLM, e.g., reweighting tokensinstead of selecting may improve robustness; using a reference model as a reward model to guidepretraining with reinforcement learning; adopting multiple reference models to reduce overfitting;designing token-level curriculum learning and iterative strategies for continuous improvements, etc.Expanding the use of SLM SLM may be extended to supervised fine-tuning to address the noiseand distribution mismatches in many SFT datasets. Another potential application is alignment, e.g.,by training a reference model to emphasize helpfulness, truthfulness, and harmlessness, we mayobtain a base model that is natively aligned during the pretraining stage. Meanwhile, we believethat the idea of SLM may find broader applications in multimodal data such as images, videos, andspeech, which have a high noise-to-information ratio than text.D Analysis and Visualization of Tokens in PretrainingD.1 More Details of Four Categories TokensWe categorize tokens into four categories: H →H, L→H, H→L, L→L. During the training process,we collected the loss of each token after training on each 1 billion tokens training data. We then usedlinear fitting and took the difference in loss between the first and last points as evidence of whetherthe loss decreased during the training process.Specifically, suppose we have a sequence of token’s loss (l0, l1, ..., l n). Our goal is to minimize thesum of the squares of the differences between each data point and its linear predictive value:f(a, b) =minimizenXi=0(li−(axi+b))2, (6)where x0= 0is the initial checkpoint and xn=nis the final checkpoint. Substituting these intothe fitted equation, we can obtain the Loss values at the start and end after fitting: Lstart=bandLend=an+b. The change in loss can then be expressed as: ∆L=Lend− L start. Meanwhile, werepresent the average Loss of the last checkpoint as Lmean.Next, we can classify the tokens based on ∆Land the Lmean. We categorize tokens with ∆L<−0.2as H→L (loss decreases from high to low) category tokens, and tokens with ∆L>0.2as L→H (lossincreases from low to high) category tokens. If −0.2≤∆L ≤0.2andln≤ L mean, then tokens areclassified as L →L (loss remains low); if ln>Lmean, they are classified as H →H (loss remains high).In Figure 10, we have added the tokens’ loss curves of the 7B model which is consistent with theother experimental settings in §2.1, for readers to refer to whether similar phenomena exist on largermodels. In Figure 11, we visualize examples of the four categories of tokens in actual text.21D.2 Non-Converging Tokens in PretrainigIn §2.1, we mentioned that during the training process, only a minority of tokens belong to the H →Lcategory. Among the remaining categories of H →H and L→L tokens, there are tokens that exhibitsignificant fluctuations during training. Furthermore, there are instances where H →L tokens arenot effectively learned. Therefore, in our analysis, we specifically select those tokens from thesecategories that demonstrate considerable variability and distinct loss. We visualize these tokensthat exhibit abnormal behavior during the training process. As illustrated in Figure 12, we find thatthe majority of these tokens originate from rather chaotic corpora. For instance, the corpora mayinclude a mix of custom symbols, unintelligible gibberish, and information such as timetables andbibliographic references. Within a segment of normal text, there may also be fluctuations in the usageof common conjunctions, word suffixes, and punctuation marks. The latter may not necessarily bedisastrous for training; in fact, it could represent a normal occurrence. However, if we can effectivelymitigate the losses caused by the former, it might lead to more stable and efficient model training.E Evalution DetailsE.1 Math EvalutionWe conducted a comprehensive evaluation of the model across various math reasoning benchmarks,encompassing a range of difficulties from elementary to university level, multiple mathematicaldomains, and diverse question types including multiple-choice and open-ended questions. Ourbenchmarks include GSM8k [Cobbe et al., 2021], MATH [Hendrycks et al., 2021], GSM-Hard [Gaoet al., 2022], SV AMP [Patel et al., 2021], ASDIV [Miao et al., 2020], MAWPS [Koncel-Kedziorskiet al., 2016], TabMWP (TAB) [Lu et al., 2023], MathQA (MQA) [Amini et al., 2019], MMLU-STEM[Hendrycks et al., 2020], and SAT [Azerbayev et al., 2023].E.2 General EvalutionIn the evaluation of general domain, we followed the lm-evaluation-harness [Gao et al., 2023] andevalute model on MMLU [Hendrycks et al., 2020], BBH [Suzgun et al., 2022], AGIEval [Zhonget al., 2023b], ARC-Easy and ARC-Challenge [Clark et al., 2018], BoolQ [Clark et al., 2019],PIQA [Bisk et al., 2020], Hellaswag [Zellers et al., 2019], WinoGrande [Sakaguchi et al., 2021],OpenBookQA [Mihaylov et al., 2018]. On HumanEval [Zheng et al., 2023] and TydiQA [Clark et al.,2020], we follow the evaluation pipeline of open-instrcut [Ivison et al., 2023] and report Pass@1 andPass@10 for HumanEval and F1 for TydiQA. For MBPP [Austin et al., 2021] benchmark, we followthe evaluation pipeline of DeepSeek-Coder [Guo et al., 2024], and report Pass@1 and [email protected] Relate the Selected Tokens’ Loss to Downstream Task PerformanceIn this section, we declare the details about correlating the loss of selected tokens with the performanceof downstream tasks. Concurrent study has explored similar methods to study the impact of scalinglaws with the performance of models in downstream tasks [Gadre et al., 2024]. Our analysishere differs in that it aims to elucidate the relationship between the decrease/increase in loss forselected/unselected tokens and the model’s performance on downstream tasks.We use the average accuracy of MATH and GSM8K as the standard for measuring downstream tasksperformance of model. Based on the trend of data points in Figure 7, we propose the relationshipbetween the average accuracy of downstream tasks and selected/unselected tokens’ loss,Acc(L) = log( a∗ L+c) (7)The parameters aandcare fitted from the data. If the loss of selected tokens Lsis used for fitting,thena >0. Conversely, if the loss of unselected tokens Lusis used for fitting, then a <0. Therefore,we believe that training the model on selected tokens can effectively improve its performance ondownstream tasks, while unselected tokens may have a detrimental effect on the model’s performancein downstream tasks.22Table 4: Full Self-Reference results on Tinyllama-1.1B.ScoreFunctionSelectRatioGSM8K MATH SV AMP ASDiv MA WPS MQA A VG- 100% 6.3 2.6 21.7 36.7 47.7 13.9 21.5LRM(xi)90% 7.4 4.4 23.4 38.7 51.9 14.4 23.480% 6.4 4.6 23.1 39.7 52.0 14.3 23.470% 6.7 4.6 23.3 40.0 54.5 14.3 23.960% 7.0 4.6 22.2 38.5 52.2 13.7 23.050% 5.7 4.2 20.7 36.7 46.7 10.3 20.7HRM(xi)90% 6.7 3.0 23.7 40.3 52.3 13.1 23.280% 6.8 3.6 22.5 40.6 52.9 13.6 23.370% 7.0 4.8 23.0 39.3 50.5 13.5 23.060% 6.5 4.8 26.5 37.3 49.7 15.6 23.450% 4.7 5.8 20.9 33.8 42.5 11.1 19.8HRM(xi)∪ L RM(xi)50%∪70%(80%) 6.4 3.6 22.7 38.4 52.6 15.3 23.270%∪60%(77%) 6.3 4.6 24.4 39.6 51.4 16.3 23.870%∪50%(75%) 6.9 5.6 23.2 39.9 52.9 12.6 23.560%∪60%(70%) 6.7 5.2 24.7 39.2 50.6 14.6 23.560%∪50%(68%) 7.1 5.8 21.7 37.3 49.6 15.3 22.860%∪40%(65%) 7.3 6.0 23.6 36.9 48.6 13.1 22.6HRM(xi)∩ L RM(xi)80%∩90%(76%) 6.0 4.4 23.7 38.5 51.2 13.3 22.875%∩75%(72%) 7.8 5.2 24.2 39.4 54.9 14.7 24.470%∩90%(68%) 6.8 4.6 22.2 40.3 53.0 14.8 23.680%∩80%(67%) 8.2 6.4 21.2 39.1 53.4 15.0 23.970%∩70%(60%) 7.1 5.0 23.5 41.2 53.8 18.0 24.8G Examples of Tokens Selected by SLMG.1 Token Selected ExamplesIn Figure 13, we present several examples of tokens selected by the SLM method, with contentmarked in blue indicating the tokens actually chosen during the pretraining process.G.2 Dynamic Token SelectedIn Figure 14, we display the dynamic changes in token selection tendencies throughout the SLMtraining process. We chose four checkpoints during the training process (0%, 33%, 66%, and 100%)to analyze the current tendencies in token selection. The preferences for token selection are indicatedby different colors, ranging from high to low preference, typically represented as deep blue, blue,black, orange, and dark orange, respectively.H Self-Reference SettingIn this section, we will provide a detailed introduction to the reference loss score function andinformation entropy score function in SLM. Reference loss score function is to directly use the lossof the reference model as the basis for selecting tokens. The higher the token’s loss of the referencemodel, the lower the expectation that the token will be selected. The score LRM(xi)can be directlyobtained by referring to Equation 1. Information entropy score function is to select the correspondingtoken based on the information entropy of the reference model in each token. The information entropyof token xican be expressed as:HRM(xi) =−VXk=1P(tk|x<i) logP(tk|x<i), (8)23Table 5: Weak-to-Strong generalization result on math benchmark.Model Train Toks GSM8K MATH SV AMP ASDiv MA WPS TAB MQAMMLUSTEMSAT A VGLlama-2-7B-CT 15B 28.4 13.6 50.3 62.8 79.5 37.6 34.1 41.6 43.5 43.5Llama-2-7B-CT w/ 1B RM 10.5B 29.8 16.0 55.5 63.7 80.4 37.9 34.3 38.2 43.8 44.4where tkrepresents the i-th token in the vocabulary, and Vrepresents the size of the vocabulary. Theintuition of this strategy is that the higher the information entropy, the higher the uncertainty of thetoken in the context. Therefore, we consider that if the language model is still uncertain for certaintokens after pretraining, we do not expect that the language model will learn it during pretraining.In Table 4, we provide more SLM results, including different select ratios and combinations of twoscore functions, for the convenience of the readers to refer to.I Weak-to-Strong GeneralizationApart from the main experiments where we use the same base model for the reference and continualpretraining, we also investigate if a smaller reference model can effectively guide the pretraining ofa larger model. We use Tinyllama-1.1B as reference model and continual pretraining Llama-2-7Bon 15B OpenWebMath tokens. Results presented in Table 5 indicate that, despite the considerablegap between the small and large models [Li et al., 2023f], employing the small reference model totoken selection can still yield benefits to the pre-training of the larger model. If reference and trainingmodels have different vocabularies, one can consider performing token alignment [Wan et al., 2024,Fu et al., 2023], which we leave for future work.24NeurIPS Paper Checklist1.ClaimsQuestion: Do the main claims made in the abstract and introduction accurately reflect thepaper’s contributions and scope?Answer: [Yes]Justification: In the abstract and §1, we clearly demonstrate the contribution and scope ofthis paper.Guidelines:•The answer NA means that the abstract and introduction do not include the claimsmade in the paper.•The abstract and/or introduction should clearly state the claims made, including thecontributions made in the paper and important assumptions and limitations. A No orNA answer to this question will not be perceived well by the reviewers.•The claims made should match theoretical and experimental results, and reflect howmuch the results can be expected to generalize to other settings.•It is fine to include aspirational goals as motivation as long as it is clear that these goalsare not attained by the paper.2.LimitationsQuestion: Does the paper discuss the limitations of the work performed by the authors?Answer: [Yes]Justification: In Appendix C, we have thoroughly discussed the limitations of our article,hoping to guide more future work.Guidelines:•The answer NA means that the paper has no limitation while the answer No means thatthe paper has limitations, but those are not discussed in the paper.• The authors are encouraged to create a separate "Limitations" section in their paper.•The paper should point out any strong assumptions and how robust the results are toviolations of these assumptions (e.g., independence assumptions, noiseless settings,model well-specification, asymptotic approximations only holding locally). The authorsshould reflect on how these assumptions might be violated in practice and what theimplications would be.•The authors should reflect on the scope of the claims made, e.g., if the approach wasonly tested on a few datasets or with a few runs. In general, empirical results oftendepend on implicit assumptions, which should be articulated.•The authors should reflect on the factors that influence the performance of the approach.For example, a facial recognition algorithm may perform poorly when image resolutionis low or images are taken in low lighting. Or a speech-to-text system might not beused reliably to provide closed captions for online lectures because it fails to handletechnical jargon.•The authors should discuss the computational efficiency of the proposed algorithmsand how they scale with dataset size.•If applicable, the authors should discuss possible limitations of their approach toaddress problems of privacy and fairness.•While the authors might fear that complete honesty about limitations might be used byreviewers as grounds for rejection, a worse outcome might be that reviewers discoverlimitations that aren’t acknowledged in the paper. The authors should use their bestjudgment and recognize that individual actions in favor of transparency play an impor-tant role in developing norms that preserve the integrity of the community. Reviewerswill be specifically instructed to not penalize honesty concerning limitations.3.Theory Assumptions and ProofsQuestion: For each theoretical result, does the paper provide the full set of assumptions anda complete (and correct) proof?25Answer: [Yes]Justification: In §2.1 and §2.2, we elaborated on the motivation and theoretical derivation ofour method, with a complete proof process in place.Guidelines:• The answer NA means that the paper does not include theoretical results.•All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.•All assumptions should be clearly stated or referenced in the statement of any theorems.•The proofs can either appear in the main paper or the supplemental material, but ifthey appear in the supplemental material, the authors are encouraged to provide a shortproof sketch to provide intuition.•Inversely, any informal proof provided in the core of the paper should be complementedby formal proofs provided in appendix or supplemental material.• Theorems and Lemmas that the proof relies upon should be properly referenced.4.Experimental Result ReproducibilityQuestion: Does the paper fully disclose all the information needed to reproduce the main ex-perimental results of the paper to the extent that it affects the main claims and/or conclusionsof the paper (regardless of whether the code and data are provided or not)?Answer: [Yes]Justification: We have provided detailed descriptions of the experimental setup in §3.1 andmethods in §2.2 to ensure that our experiment can be reproduced.Guidelines:• The answer NA means that the paper does not include experiments.•If the paper includes experiments, a No answer to this question will not be perceivedwell by the reviewers: Making the paper reproducible is important, regardless ofwhether the code and data are provided or not.•If the contribution is a dataset and/or model, the authors should describe the steps takento make their results reproducible or verifiable.•Depending on the contribution, reproducibility can be accomplished in various ways.For example, if the contribution is a novel architecture, describing the architecture fullymight suffice, or if the contribution is a specific model and empirical evaluation, it maybe necessary to either make it possible for others to replicate the model with the samedataset, or provide access to the model. In general. releasing code and data is oftenone good way to accomplish this, but reproducibility can also be provided via detailedinstructions for how to replicate the results, access to a hosted model (e.g., in the caseof a large language model), releasing of a model checkpoint, or other means that areappropriate to the research performed.•While NeurIPS does not require releasing code, the conference does require all submis-sions to provide some reasonable avenue for reproducibility, which may depend on thenature of the contribution. For example(a)If the contribution is primarily a new algorithm, the paper should make it clear howto reproduce that algorithm.(b)If the contribution is primarily a new model architecture, the paper should describethe architecture clearly and fully.(c)If the contribution is a new model (e.g., a large language model), then there shouldeither be a way to access this model for reproducing the results or a way to reproducethe model (e.g., with an open-source dataset or instructions for how to constructthe dataset).(d)We recognize that reproducibility may be tricky in some cases, in which caseauthors are welcome to describe the particular way they provide for reproducibility.In the case of closed-source models, it may be that access to the model is limited insome way (e.g., to registered users), but it should be possible for other researchersto have some path to reproducing or verifying the results.5.Open access to data and code26Question: Does the paper provide open access to the data and code, with sufficient instruc-tions to faithfully reproduce the main experimental results, as described in supplementalmaterial?Answer: [No]Justification: This may be temporary, and we are working hard to promote the process ofopen source.Guidelines:• The answer NA means that paper does not include experiments requiring code.•Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•While we encourage the release of code and data, we understand that this might not bepossible, so “No” is an acceptable answer. Papers cannot be rejected simply for notincluding code, unless this is central to the contribution (e.g., for a new open-sourcebenchmark).•The instructions should contain the exact command and environment needed to run toreproduce the results. See the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•The authors should provide instructions on data access and preparation, including howto access the raw data, preprocessed data, intermediate data, and generated data, etc.•The authors should provide scripts to reproduce all experimental results for the newproposed method and baselines. If only a subset of experiments are reproducible, theyshould state which ones are omitted from the script and why.•At submission time, to preserve anonymity, the authors should release anonymizedversions (if applicable).•Providing as much information as possible in supplemental material (appended to thepaper) is recommended, but including URLs to data and code is permitted.6.Experimental Setting/DetailsQuestion: Does the paper specify all the training and test details (e.g., data splits, hyper-parameters, how they were chosen, type of optimizer, etc.) necessary to understand theresults?Answer: [Yes]Justification: In §3.1 and Appendix E, we clearly demonstrated various experimental settings,including hyperparameters, model settings, training settings, evaluation settings, etc.Guidelines:• The answer NA means that the paper does not include experiments.•The experimental setting should be presented in the core of the paper to a level of detailthat is necessary to appreciate the results and make sense of them.•The full details can be provided either with the code, in appendix, or as supplementalmaterial.7.Experiment Statistical SignificanceQuestion: Does the paper report error bars suitably and correctly defined or other appropriateinformation about the statistical significance of the experiments?Answer: [No]Justification: Due to the high cost of pre-training and the significant results obtained acrossvarious settings, we do not repeat the same experiments.Guidelines:• The answer NA means that the paper does not include experiments.•The authors should answer "Yes" if the results are accompanied by error bars, confi-dence intervals, or statistical significance tests, at least for the experiments that supportthe main claims of the paper.27•The factors of variability that the error bars are capturing should be clearly stated (forexample, train/test split, initialization, random drawing of some parameter, or overallrun with given experimental conditions).•The method for calculating the error bars should be explained (closed form formula,call to a library function, bootstrap, etc.)• The assumptions made should be given (e.g., Normally distributed errors).•It should be clear whether the error bar is the standard deviation or the standard errorof the mean.•It is OK to report 1-sigma error bars, but one should state it. The authors shouldpreferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesisof Normality of errors is not verified.•For asymmetric distributions, the authors should be careful not to show in tables orfigures symmetric error bars that would yield results that are out of range (e.g. negativeerror rates).•If error bars are reported in tables or plots, The authors should explain in the text howthey were calculated and reference the corresponding figures or tables in the text.8.Experiments Compute ResourcesQuestion: For each experiment, does the paper provide sufficient information on the com-puter resources (type of compute workers, memory, time of execution) needed to reproducethe experiments?Answer: [Yes]Justification: In §3.1, we have provided sufficient information on the computer resourcesneeded to reproduce the experiments.Guidelines:• The answer NA means that the paper does not include experiments.•The paper should indicate the type of compute workers CPU or GPU, internal cluster,or cloud provider, including relevant memory and storage.•The paper should provide the amount of compute required for each of the individualexperimental runs as well as estimate the total compute.•The paper should disclose whether the full research project required more computethan the experiments reported in the paper (e.g., preliminary or failed experiments thatdidn’t make it into the paper).9.Code Of EthicsQuestion: Does the research conducted in the paper conform, in every respect, with theNeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ?Answer: [Yes]Justification: We guarantee that the research conducted in the paper complies with NeurIPSCode of Ethics in all aspects.Guidelines:•The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.•If the authors answer No, they should explain the special circumstances that require adeviation from the Code of Ethics.•The authors should make sure to preserve anonymity (e.g., if there is a special consid-eration due to laws or regulations in their jurisdiction).10.Broader ImpactsQuestion: Does the paper discuss both potential positive societal impacts and negativesocietal impacts of the work performed?Answer: [NA]Justification: The purpose of this paper is to improve the training process of large languagemodels, without any negative societal impacts.Guidelines:28• The answer NA means that there is no societal impact of the work performed.•If the authors answer NA or No, they should explain why their work has no societalimpact or why the paper does not address societal impact.•Examples of negative societal impacts include potential malicious or unintended uses(e.g., disinformation, generating fake profiles, surveillance), fairness considerations(e.g., deployment of technologies that could make decisions that unfairly impact specificgroups), privacy considerations, and security considerations.•The conference expects that many papers will be foundational research and not tiedto particular applications, let alone deployments. However, if there is a direct path toany negative applications, the authors should point it out. For example, it is legitimateto point out that an improvement in the quality of generative models could be used togenerate deepfakes for disinformation. On the other hand, it is not needed to point outthat a generic algorithm for optimizing neural networks could enable people to trainmodels that generate Deepfakes faster.•The authors should consider possible harms that could arise when the technology isbeing used as intended and functioning correctly, harms that could arise when thetechnology is being used as intended but gives incorrect results, and harms followingfrom (intentional or unintentional) misuse of the technology.•If there are negative societal impacts, the authors could also discuss possible mitigationstrategies (e.g., gated release of models, providing defenses in addition to attacks,mechanisms for monitoring misuse, mechanisms to monitor how a system learns fromfeedback over time, improving the efficiency and accessibility of ML).11.SafeguardsQuestion: Does the paper describe safeguards that have been put in place for responsiblerelease of data or models that have a high risk for misuse (e.g., pretrained language models,image generators, or scraped datasets)?Answer: [NA]Justification: The paper poses no such risks.Guidelines:• The answer NA means that the paper poses no such risks.•Released models that have a high risk for misuse or dual-use should be released withnecessary safeguards to allow for controlled use of the model, for example by requiringthat users adhere to usage guidelines or restrictions to access the model or implementingsafety filters.•Datasets that have been scraped from the Internet could pose safety risks. The authorsshould describe how they avoided releasing unsafe images.•We recognize that providing effective safeguards is challenging, and many papers donot require this, but we encourage authors to take this into account and make a bestfaith effort.12.Licenses for existing assetsQuestion: Are the creators or original owners of assets (e.g., code, data, models), used inthe paper, properly credited and are the license and terms of use explicitly mentioned andproperly respected?Answer: [Yes]Justification: The creators or original owners of the assets used in the paper, such as code,data, and models, have been appropriately recognized, and the licenses and terms of usehave been clearly mentioned and properly respected.Guidelines:• The answer NA means that the paper does not use existing assets.• The authors should cite the original paper that produced the code package or dataset.•The authors should state which version of the asset is used and, if possible, include aURL.• The name of the license (e.g., CC-BY 4.0) should be included for each asset.29•For scraped data from a particular source (e.g., website), the copyright and terms ofservice of that source should be provided.•If assets are released, the license, copyright information, and terms of use in thepackage should be provided. For popular datasets, paperswithcode.com/datasetshas curated licenses for some datasets. Their licensing guide can help determine thelicense of a dataset.•For existing datasets that are re-packaged, both the original license and the license ofthe derived asset (if it has changed) should be provided.•If this information is not available online, the authors are encouraged to reach out tothe asset’s creators.13.New AssetsQuestion: Are new assets introduced in the paper well documented and is the documentationprovided alongside the assets?Answer: [NA]Justification: The paper does not release new assets.Guidelines:• The answer NA means that the paper does not release new assets.•Researchers should communicate the details of the dataset/code/model as part of theirsubmissions via structured templates. This includes details about training, license,limitations, etc.•The paper should discuss whether and how consent was obtained from people whoseasset is used.•At submission time, remember to anonymize your assets (if applicable). You can eithercreate an anonymized URL or include an anonymized zip file.14.Crowdsourcing and Research with Human SubjectsQuestion: For crowdsourcing experiments and research with human subjects, does the paperinclude the full text of instructions given to participants and screenshots, if applicable, aswell as details about compensation (if any)?Answer: [NA]Justification: The paper does not involve crowdsourcing nor research with human subjects.Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Including this information in the supplemental material is fine, but if the main contribu-tion of the paper involves human subjects, then as much detail as possible should beincluded in the main paper.•According to the NeurIPS Code of Ethics, workers involved in data collection, curation,or other labor should be paid at least the minimum wage in the country of the datacollector.15.Institutional Review Board (IRB) Approvals or Equivalent for Research with HumanSubjectsQuestion: Does the paper describe potential risks incurred by study participants, whethersuch risks were disclosed to the subjects, and whether Institutional Review Board (IRB)approvals (or an equivalent approval/review based on the requirements of your country orinstitution) were obtained?Answer: [NA]Justification: The paper does not involve crowdsourcing nor research with human subjects.Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.30•Depending on the country in which research is conducted, IRB approval (or equivalent)may be required for any human subjects research. If you obtained IRB approval, youshould clearly state this in the paper.•We recognize that the procedures for this may vary significantly between institutionsand locations, and we expect authors to adhere to the NeurIPS Code of Ethics and theguidelines for their institution.•For initial submissions, do not include any information that would break anonymity (ifapplicable), such as the institution conducting the review.31Examples of Four Categories of TokensGMAT 1: 670 Q49 V31 \n GMAT 2: 710 Q50 V35 \n Followers: 175 \n \n Kudos [?]: 890 [0], given: 235 \n \n Re:Mr. and Mrs Wiley, VIC[#permalink] 13 Feb 2010, 01:03 \n Ans A \n \n their first child was born after J years... \n\n thus 1 child —> j years \n \n => thus after another J years his age = J \n \n thus his age is J –> after 2J years and2j after 3j years \n \n his present age is T which is after T years. \n \n thus total time after 2years will be T+2 \nsince after every J year they have a child after T+2 they will have \frac{(T+2)}{J} + 1 ( +1 is for the oldest) \n\n thus A \n _________________ \n \n Fight for your dreams :For all those who fear from Verbal- lets give it afight \n \n Money Saved is the Money Earned \n \n Jo Bole So Nihaal , Sat Shri Akaal \n \n Gmat test review :\n 670-to-710-a-long-journey-without-destination-still-happy-141642.html \n \n Intern \n Joined: 06 Apr 2012 \nPosts: 28 \n Followers: 0 \n \n Kudos [?]: 4 [0], given: 37 \n \n Re: Mr. and Mrs Wiley, VIC[#permalink] 21 Nov2012, 07:46 \n jeeteshsingh wrote: \n Need the solution using Algebra.... \n \n Mr. & Mrs Wiley have a child everyJ years. Their oldest child is now T years old. If they have a child 2 years from now, how many children will theyhave in total? \n \n (A) \frac{T+2}{J} + 1 \n \n (B) JT + 1 \n \n (C) \frac{J}{T} + \frac{1}{T} \n \n (D) TJ - 1 \n\n (E) \frac{T+J}{J} \n \n [Reveal] Spoiler: OA: \n (A) \n \n Source: Manhattan Guide \n \n Bunuel - would reallyappreciate you providing your bit on solving the original problem above algebraically. The problem and variousexplanations remain confusing. Should we think of it as a progression or some other way? Please share your take.Thank you. \n Veritas Prep GMAT Instructor \n Joined: 16 Oct 2010 \n Posts: 4566 \n Location: Pune, India \nFollowers: 1029 \n \n Kudos [?]: 4460 [1] , given: 162 \n \n Re: Mr. and Mrs Wiley, VIC[#permalink] 21 Nov2012, 09:45 \n 1 \n KUDOS \n Expert’s post \n jeeteshsingh wrote: \n Need the solution using Algebra.... \n \n Mr.& Mrs Wiley have a child every J years. Their oldest child is now T years old. If they have a child 2 years fromnow, how many children will they have in total? \n \n (A) \frac{T+2}{J} + 1 \n \n (B) JT + 1 \n \n (C) \frac{J}{T}+ \frac{1}{T} \n \n (D) TJ - 1 \n \n (E) \frac{T+J}{J} \n \n [Reveal] Spoiler: OA: \n (A) \n \n Source: ManhattanGuide \n \n Think of it as an Arithmetic Progression where every subsequent term (child) has a difference of J yrsfrom the previous term (child). \n \n 1st child, 2nd child, 3rd child, ....... nth child (to be born after 2 yrs) \n \nWhat is the difference between first and last terms (children)? (T + 2) yrs \n \n What is the common difference(age difference between two consecutive kids)? J yrs \n \n What is the number of terms (children)? (T + 2)/J + 1\n (Number of terms of an AP is n = (Last term - First term)/Common Difference + 1. ) \n _________________\n \n Karishma \n Veritas Prep | GMAT Instructor \n My Blog \n \n Save $100 on Veritas Prep GMAT CoursesAnd Admissions Consulting Enroll now. Pay later. Take advantage of Veritas Prep’s flexible payment plan options.Veritas Prep Reviews Re: Mr. and Mrs Wiley, VIC [#permalink] 21 Nov 2012, 09:45 Similar topics Replies Lastpost Similar Topics: 1 Mr. and Mrs. O’Leary (SC) 5 08 Jul 2012, 07:15 Mr. INVESTOR invested a total of$12,000for a one-year 4 30 Mar 2007, 09:24 \n 2 Mr. and Mrs. Wiley have a child every J years. Their oldest 7 19 Feb2007, 11:40 \n Mr.kevincan 6 16 Aug 2006, 12:26 \n PS: Mr. & Mrs. Smith 2 06 Dec 2005, 00:03 \n Display postsfrom previous: Sort by Sciencemadness Discussion Board » Fundamentals » Reagents and Apparatus Acquisition» Sulphuric Acid in Australia Select A Forum Fundamentals » Chemistry in General » Organic Chemistry »Reagents and Apparatus Acquisition » Beginnings » Responsible Practices » Miscellaneous » The Wiki Specialtopics » Technochemistry » Energetic Materials » Biochemistry » Radiochemistry » Computational Modelsand Techniques » Prepublication Non-chemistry » Forum Matters » Legal and Societal Issues \n \n Pages: 1 2 \nAuthor: Subject: Sulphuric Acid in Australia \n hissingnoise \n International Hazard \n \n Posts: 3939 \n Registered:26-12-2002 \n Member Is Offline \n \n Mood: Pulverulescent! \n \n I’ve stated several times on various threads,that SO<sub>3</sub> produces a practically incondensable acid mist when led to water and, BTW, at 700 °C thedecomposition rate of SO<sub>3</sub> is ̃87% . . . \n Cracking Na<sub>2</sub>S<sub>2</sub>O<sub>7</sub>proceeds at ̃466 °C and the issuing gasses are readily absorbed by conc. H<sub>2</sub>SO<sub>4</sub> to formoleum! \n \n Phthalic Acid \n Harmless \n \n Posts: 19 \n Registered: 7-8-2011 \n Location: Australia \n MemberIs Offline \n \n Mood: No Mood \n \n That’s a good idea Neil, I’ll be sure to try that next time (probably forH2O2). Just went to Tradelink and asked if they sold Moflo drain cleaner. The guy said yeah and I asked for aliter of it. No problems whatsoever, he just said ”be careful with it”. It was $45 but a liter will last me a while andmaking it myself would’ve been vastly more expensive I imagine. Success! MeSynth Hazard to Others Posts: 107Registered: 29-7-2011 Member Is Offline Mood: http://www.youtube.com/watch?v=5ZltqlVuDIo Sulfuric acidcan be produced in the laboratory by burning sulfur in air and dissolving the gas produced in a hydrogen peroxidesolution. SO2 + H2O2 →H2SO4 this was found on wikipedia... did you not look through the sullfuric acid wikibefore boiling down batery acid? anyways... There are some good videos on youtube that demonstrate how tosynthesize sulfuric acid using different methods. The drain cleaner you get from the store will be impure and maycontain organic matter that discolors the acid.16Figure 11: Sample text containing four categories of tokens. Among them, blue represents tokensof categorie H →L, green indicates tokens of categorie L →L, yellow signifies tokens of categorieH→H, and red denotes tokens of categorie L →H.32Examples of Tokens that Exhibit Abnormal Behavior during Trainingas \n \n \begin{aligned}A \in \{\pm \begin{bmatrix}\cos\theta & - \sin\theta \\ \sin\theta & \cos\theta \\ \end{bmatrix},\pm \begin{bmatrix}\cos\theta & \sin\theta \\ \sin\theta & - \cos\theta \\ \end{bmatrix}, \pm \begin{bmatrix}i\sinh\theta & -\cosh\theta \\ \cosh\theta & i \sinh\theta \\ \end{bmatrix}, \pm \begin{bmatrix}i \sinh\theta &\cosh\theta \\ \cosh\theta & - i \sinh\theta \\ \end{bmatrix}\}\end{aligned} \quad\quad\quad(25) \n \n I suspectthis class of transformations has a name in the grand group classification scheme, but I don’t know what it is.### Mathematics Class XI \n \n Unit-I: Sets and Functions \n Chapter 1: Sets \n Unit-II: Algebra \n Chapter 5:Binomial Theorem \n Chapter 6: Sequence and Series \n Unit-III: Coordinate Geometry \n Chapter 1: Straight Lines\n Chapter 2: Conic Sections \n Unit-IV: Calculus \n Unit-V: Mathematical Reasoning \n Unit-VI: Statistics andProbability \n Chapter 1: Statistics \n Chapter 2: Probability \n \n # Graphs of Trigonometric Functions \n \n (i)Geometry in any field. Queries are case-independent. Funct* Wildcard queries are specified by * (e.g. functions,functorial, etc.). Otherwise the search is exact. ”Topological group” Phrases (multi-words) should be set in ”straightquotation marks”. au: Bourb aki & ti: Algebra Search for author and title. The and-operator & is default and can beomitted. Cheb yshev | Tschebyscheff The or-operator | allows to search for Cheb yshev or Tschebyscheff. ”Quasi*map*” py: 1989 The resulting documents have publication year 1989. so: Eur* J* Mat* Soc* cc: 14 Search forpublications in a particular source with a Mathematics Subject Classification code (cc) in 14. ”Partial diff* eq*” !elliptic The not-operator ! eliminates all results containing the word elliptic. dt: b & au: Hilbert The documenttype is set to books; alternatively: j for journal articles, a for book articles. py: 2000-2015 cc: (94A | 11T) Numberranges are accepted. Terms can be grouped within (parentheses). la: chinese Find documents in a given language.ISO 639-1 language codes can also be used.Code: Select all \n \n x = 64, y = 86, rule = B3/S23 \n 13bo$3bobo6bo$4b2o6b3o$4bo$54bo$54bobo$13b2o39b2o $12b2o44b2o$3o11bo43b \n o3b2o$2bo49bo 6bo2bo$bo50b 2o6b obo$51bob o7bo$7bo49bo$7b3o47b3o$10bo5b2o \n 42bo$9b2o4b2o 42b 2o$17bo7$13bo$3b obo6bo$4b 2o6b3o$4bo$54bo$54b obo$13b 2o 39b2o$12b2o44b2o$3o11bo43bo3b2o$2bo49bo6bo2bo$bo50b 2o6bobo$51bobo7bo$7bo49bo$7b3o47b3o$10bo5b2o42bo$9bo5b2o42bo$9b2o6bo41b2o7$13bo$3bobo6bo$4b2o6b3o$4bo$54bo$54bobo$13b2o39b2o$12b2o44b2o$3o11bo43bo3b2o$2bo49bo6bo2bo$bo50b2o6bobo$51bobo7bo$7bo49bo$7b3o47b3o$10bo5b2o42bo$7b3o5b2o40b3o$7bo9bo39bo7$13bo$3b obo6bo$4b 2o6b 3o$4bo$54bo$54b obo$13b 2o39b 2o$ \n 12b 2o44b2o$3o11bo43bo3b 2o$2bo49bo6bo2bo$bo50b 2o6b obo$51b obo7bo$7bo49bo$7b 3o47b 3o$10bo5b 2o42bo$7b2obo4b 2o40b 2obo$7b obo7bo39b obo! The 16-bitter thus goes down to 9 gliders. It does not reduce any further17-bitters, though. Princess of Science, Parcly Taxel Kazyan Posts: 867 Joined: February 6th, 2014, 11:02 pm ###Re: 17 in 17: Efficient 17-bit synthesis project Good catch, Sokwe. #61 in 15G:Ground Penetrating Radar for Archaeology \n \n Workshop | December1 | 1-5 p.m. | 1012251 College(Archaeological Research Facility) \n \n Scott Byram, Research Associate, Archaeological Research Facility, UCBerkeley \n \n Archaeological Research Facility \n \n At 1pm the workshop will begin at the UC Faculty Club lawnwhere subsurface features are being mapped. \n \n ### Student Probability/PDE Seminar: Large Deviation Principlefor random graphs II \n \n Seminar | December1 | 2:10-3:30 p.m. | 891Evans Hall \n \n Fraydoun Rezakhanlou, UCBerkeley \n \n Department of Mathematics \n \n ### BLC Fellows Forum \n \n Presentation | December1 | 3-5p.m. | Dwinelle Hall, B-4 (Classroom side) \n \n FAll 2017 BLC Fellows, UC Berkeley \n \n Berkeley LanguageCenter \n \n Teaching French Listening Comprehension and Cultural Awareness through Regional Variation \nElyse Ritchey, GSR, French \n At the university level, French language instruction in the US traditionally includesa course on phonetics and pronunciation. While the major aim of such courses is to improve students’ speakingand listening competence, they also emphasize speaking ‘correctly’ using... More > \n \n ### MENA Salon \n\n Workshop | December1 | 3-4 p.m. | 340Stephens Hall \n \n Every Friday in the semester, the CMES hosts aninformal week17Figure 12: An example of an abnormal state of token perplexity during pretrainig process.The tokens highlighted in orange represent tokens that were significant abnormalities during thepretraining process.33Token Selected Examples•Process the student answer as a Math Object Formula, and break down its parse tree by its top-level operators.The idea is to create an array of the student’s primitive factors, so say 3(x+1)(x+2)ˆ2 gives (3,x+1,x+2). •Becausewe may want factoring over Z, checking the gcd of coefficients within each factor. •Pass each of these things toSAGE and ask if the nonconstant factors are reducible over Z or Q. Also ask if they are monic. These things atleast we learned how to do at the Vancouver code camp. The end goal is to count the following forms as correct,possibly controlled by flags: n \{}prod (factor)ˆpower, where each factor is irreducible in Z[X], n in Z r \{}prod(factor)ˆpower, where each factor is irreducible and monic in Q[X], r in Q I suppose on the last one the monicrequirement could be dropped with a flag. I have no plans to check that the form is fully condensed, e.g. forcing(x+1)ˆ2 and rejecting (x+1)(1+x)The equation of the path traversed by a projectile is called equation of trajectory. \n \n Suppose, the body reachesthe point P after time ( t ) . \n \n Horizontal motion has no acceleration. Thus, using kinematic equation, horizontaldistance covered will be – \n \n x = u \cos \theta t \n \n Or, \quad t = ( \frac { x }{ u \cos \theta } ) \n \n Verticalmotion has constant acceleration ( g ) . Thus, distance covered will be – \n \n y = ( u \sin \theta ) t - \left ( \frac{1}{2} \right ) g tˆ2 \n \n = ( u \sin \theta ) \left ( \frac {x}{u \cos \theta} \right ) - \left ( \frac {1}{2} \right ) g \left (\frac {x}{u \cos \theta} \right )ˆ2 \n \n = \left ( \tan \theta \right ) x - \left ( \frac {g}{2 uˆ2 \cosˆ2 \theta} \right ) xˆ2\n \n In this equation, ( \theta, \ u \ \text {and} \ g ) are constants. Thus, \n \n 1. Term \left ( \tan \theta \right ) is aconstant, let it is ( p ) \n 2. Term \left [ \left ( \frac {g}{2 uˆ2 \cosˆ2 \theta} \right ) \right ] is also a constant, let it is( q ) \n \n So, \quad y = p x - q xˆ2 \n \n Therefore, ( y \propto xˆ2 ) , which is a required condition of a parabola.The trajectory of the projectile is a parabola. \n \n ### Time of Maximum height \n \n As the body is projected itgoes up. Vertical component of velocity ( u \sin \theta ) gradually diminishes and becomes zero at the maximumheight of flight. After that, body starts moving downwards. \n \n Let, ( t_m ) is the time to reach at maximumheight ( h_m ) of flight. \n \n Therefore, from kinematic equation, we have – \n \n 0 = u \sin \theta - g t_m \n \n Or,\quad t_m = \left ( \frac {u \sin \theta}{g} \right ) \n \n ### Time of Flight \n \n Total time taken by the projectilebetween the instant it is projected and till it reaches at a point in the horizontal plane of its projection is called Timeof flight. \n \n Let, the body reaches at point B on ground after time ( T_f ) of projection. Then – \n \n Net verticaldisplacement covered during the time of flight is zero. Using kinematic equation of motion, we get – \n \n 0 = ( u\sin \theta ) T_f - \left ( \frac {1}{2} \right ) g \ ( T_f )ˆ2 \n \n Or, \quad T_f = \left ( \frac {2 u \sin \theta}{g} \right) = 2 \left ( \frac {u \sin \theta}{g} \right ) \n \n = 2 t_m \n \n Thus, \quad \text {Total time of flight} = \text {Timeof ascent} + \text {Time of descent} \n \n = 2 \times \text {Time of maximum height.} \n \n ### Maximum heightof Flight \n \n It is the maximum height reached by a projectile. It is denoted by ( h_m ) \n \n At the highest pointof flight, the vertical component of velocity becomes zero. \n \n From kinematic equation of motion, we have – \n\n vˆ2 = uˆ2 + 2 a s \n \n Therefore, \quad 0ˆ2 - ( u \sin \theta )ˆ2 = 2 ( - g ) h_m \n \n Or, \quad h_m = \left ( \frac{uˆ2 \sinˆ2 \theta}{2 g} \right )We identify two equations having the same solution with the equivalence relation: \n \n $(a,b) \sim (c,d) \mbox{ ifand only if } ad = bc$ \n \n To show that this is an equivalence relation: \n \n 1. Reflexivity: $$(a,b) \sim (a,b)$$ ifand only if $$ab = ba$$ which is true. Hence it is reflexive. \n 2. Symmetry: $$(a,b) \sim (c,d)$$ if and only if$$ad = bc$$ if and only if $$bc = ad$$ if and only if $$(c,d) \sim (a,b)$$. Hence it is symmetric. \n 3. Transitivity:$$(a,b) \sim (c,d)$$ and $$(c,d) \sim (e,f)$$ if and only if $$ad = bc$$ and $$cf = de$$. Multiplying these equationstogether, we get $$adcf = bcde$$. We can cancel $$d$$ and $$c$$ from both sides to get $$af = be$$. Hence$$(a,b) \sim (e,f)$$. \n \n Hence, we have successfully formed the set of rational numbers when we factor outthe equivalence classes! \n \n $\mathbb{Q} = \frac{\mathbb{Z} \times \mathbb{Z}\backslash\{0\}}{\sim}$ \n\n Let’s now take a look at what members of $$\mathbb{Q}$$ look like, say for the equation $$2x = 3$$. Thisequation is represented by the ordered pairIf the light moves in a purely radial direction, we can describe its path by the coordinate functions $$t(\lambda)$$ and$$r(\lambda)$$. The equation of motion $$dsˆ2 =0$$ then takes the form $$g_{tt} \left(\frac{dt}{d\lambda}\right)ˆ2+ g_{rr} \left(\frac{dr}{d\lambda}\right)ˆ2 = 0,$$ which we can rewrite as $$\left(\frac{dt}{dr}\right)ˆ2 = -\frac{g_{rr}}{g_{tt}}.$$ \n \n The length of the rod is then $$L = c \int_{r_1}ˆ{r_2} \frac{dt}{dr} \text{ d}r = c\int_{r_1}ˆ{r_2} \sqrt{-\frac{g_{rr}}{g_{tt}}} \text{ d}r,$$ where I have taken the positive square root because$$r_2 > r_1$$. \n \n Notice that the length is independent of the signature of the metric, so whether you work withthe (-+++) or (+—) metric is purely conventional and will not change the physics. \n \n For the Schwarzschildmetric, we obtain explicitly $$L = r_2 - r_1 + r_s \ln\left(\frac{r_2 - r_s}{r_1 - r_s}\right) > r_2 - r_1.$$ \n \n Nowwhat happens if you magically, instantaneously increase the mass of the black hole? I think the length $$L$$ of therod stays the same (I’m here assuming that the rod is infinitely stiff), but that it would now ”appear shorter” to thedistant observer - i.e. it would no longer occupy the entire space between $$r_1$$ and $$r_2$$.18Figure 13: Specific examples of selecting tokens during the selective pretraining process of theRHO-1.The tokens marked in blue represent the actual tokens trained during the training process,while the remaining black tokens are not trained during the training process.34After Training 0% CheckpointItem Type: Journal Article Copyright of this article belongs to Elsevier. Division of Mechanical Sciences >Mechanical Engineering 28 May 2007 19 Sep 2010 04:36 http://eprints.iisc.ernet.in/id/eprint/10277 # Question#8de97 \n \n Dec 10, 2016 \n \n That is not an identity. \n \n #### Explanation: \n \n Recall that \n \n ${\cot}ˆ{2}x + 1 = {\csc}ˆ{2} x$. \n \n So, we can write \n \n $\frac{1 - {\csc}ˆ{2} x}{\csc} ˆ 2 x = \frac{1 - \left({\cot}ˆ{2} x+ 1\right)}{\csc} ˆ 2 x$ \n \n $= {\cot}ˆ{2} \frac{x}{\csc} ˆ 2 x$ \n \n Recall also that $\cot x = \cos \frac{x}{\sin}x$ and $\csc x = \frac{1}{\sin} x$. \n \n This allows us to continue \n \n $= \frac{{\cos}ˆ{2} \frac{x}{\sin} ˆ 2x}{\frac{1}{\sin} ˆ 2 x}$ \n \n $= {\cos}ˆ{2} \frac{x}{\sin} ˆ 2 x \cdot {\sin}ˆ{2} \frac{x}{1}$ \n \n $= {\cos}ˆ{2}x$ \n \n Which is not identically $\cos x$. \n \n (${\cos}ˆ{2} x = \cos x$ only when $\cos x = 1$ or $0$) \n \n Dec10, 2016 \n \n No. It is equal to ${\sin}ˆ{2} x - 1$. \n \n #### Explanation: \n \n If we have $\frac{1 - x}{x}$, wecan write it as $\frac{1}{x} - \frac{x}{x}$. \n \n The same way, $\frac{1 - {\csc}ˆ{2} x}{\csc} ˆ 2$ can be writtenas $\frac{1}{\csc} ˆ 2 x - \frac{{\csc}ˆ{2} x}{{\csc}ˆ{2} x}$. \n \n This is equal to ${\sin}ˆ{2} x - 1$. SMS scnewsitem created by Hannah Bryant at Wed 25 May 2022 1227 \n Type: Seminar \n Distribution: World \n Expiry: 31May 2022 \n Calendar1: 31 May 2022 1500-1600 \n CalLoc1: Quad S224 & via Zoom \n CalTitle1: SMRI ’What is...a virtual knot?’ Hans Boden (McMaster University) \n Auth: [email protected] (hbry8683)After Training 33% CheckpointItem Type: Journal Article Copyright of this article belongs to Elsevier. Division of Mechanical Sciences >Mechanical Engineering 28 May 2007 19 Sep 2010 04:36 http://eprints.iisc.ernet.in/id/eprint/10277 # Question#8de97 \n \n Dec 10, 2016 \n \n That is not an identity. \n \n #### Explanation: \n \n Recall that \n \n ${\cot}ˆ{2}x + 1 = {\csc}ˆ{2} x$. \n \n So, we can write \n \n $\frac{1 - {\csc}ˆ{2} x}{\csc} ˆ 2 x = \frac{1 - \left({\cot}ˆ{2} x+ 1\right)}{\csc} ˆ 2 x$ \n \n $= {\cot}ˆ{2} \frac{x}{\csc} ˆ 2 x$ \n \n Recall also that $\cot x = \cos \frac{x}{\sin}x$ and $\csc x = \frac{1}{\sin} x$. \n \n This allows us to continue \n \n $= \frac{{\cos}ˆ{2} \frac{x}{\sin} ˆ 2x}{\frac{1}{\sin} ˆ 2 x}$ \n \n $= {\cos}ˆ{2} \frac{x}{\sin} ˆ 2 x \cdot {\sin}ˆ{2} \frac{x}{1}$ \n \n $= {\cos}ˆ{2}x$ \n \n Which is not identically $\cos x$. \n \n (${\cos}ˆ{2} x = \cos x$ only when $\cos x = 1$ or $0$) \n \n Dec10, 2016 \n \n No. It is equal to ${\sin}ˆ{2} x - 1$. \n \n #### Explanation: \n \n If we have $\frac{1 - x}{x}$, wecan write it as $\frac{1}{x} - \frac{x}{x}$. \n \n The same way, $\frac{1 - {\csc}ˆ{2} x}{\csc} ˆ 2$ can be writtenas $\frac{1}{\csc} ˆ 2 x - \frac{{\csc}ˆ{2} x}{{\csc}ˆ{2} x}$. \n \n This is equal to ${\sin}ˆ{2} x - 1$. SMS scnewsitem created by Hannah Bryant at Wed 25 May 2022 1227 \n Type: Seminar \n Distribution: World \n Expiry: 31May 2022 \n Calendar1: 31 May 2022 1500-1600 \n CalLoc1: Quad S224 & via Zoom \n CalTitle1: SMRI ’What is...a virtual knot?’ Hans Boden (McMaster University) \n Auth: [email protected] (hbry8683)After Training 66% CheckpointItem Type: Journal Article Copyright of this article belongs to Elsevier. Division of Mechanical Sciences >Mechanical Engineering 28 May 2007 19 Sep 2010 04:36 http://eprints.iisc.ernet.in/id/eprint/10277 # Question#8de97 \n \n Dec 10, 2016 \n \n That is not an identity. \n \n #### Explanation: \n \n Recall that \n \n ${\cot}ˆ{2}x + 1 = {\csc}ˆ{2} x$. \n \n So, we can write \n \n $\frac{1 - {\csc}ˆ{2} x}{\csc} ˆ 2 x = \frac{1 - \left({\cot}ˆ{2} x+ 1\right)}{\csc} ˆ 2 x$ \n \n $= {\cot}ˆ{2} \frac{x}{\csc} ˆ 2 x$ \n \n Recall also that $\cot x = \cos \frac{x}{\sin}x$ and $\csc x = \frac{1}{\sin} x$. \n \n This allows us to continue \n \n $= \frac{{\cos}ˆ{2} \frac{x}{\sin} ˆ 2x}{\frac{1}{\sin} ˆ 2 x}$ \n \n $= {\cos}ˆ{2} \frac{x}{\sin} ˆ 2 x \cdot {\sin}ˆ{2} \frac{x}{1}$ \n \n $= {\cos}ˆ{2}x$ \n \n Which is not identically $\cos x$. \n \n (${\cos}ˆ{2} x = \cos x$ only when $\cos x = 1$ or $0$) \n \n Dec10, 2016 \n \n No. It is equal to ${\sin}ˆ{2} x - 1$. \n \n #### Explanation: \n \n If we have $\frac{1 - x}{x}$, wecan write it as $\frac{1}{x} - \frac{x}{x}$. \n \n The same way, $\frac{1 - {\csc}ˆ{2} x}{\csc} ˆ 2$ can be writtenas $\frac{1}{\csc} ˆ 2 x - \frac{{\csc}ˆ{2} x}{{\csc}ˆ{2} x}$. \n \n This is equal to ${\sin}ˆ{2} x - 1$. SMS scnewsitem created by Hannah Bryant at Wed 25 May 2022 1227 \n Type: Seminar \n Distribution: World \n Expiry: 31May 2022 \n Calendar1: 31 May 2022 1500-1600 \n CalLoc1: Quad S224 & via Zoom \n CalTitle1: SMRI ’What is...a virtual knot?’ Hans Boden (McMaster University) \n Auth: [email protected] (hbry8683)After Training 100% CheckpointItem Type: Journal Article Copyright of this article belongs to Elsevier. Division of Mechanical Sciences >Mechanical Engineering 28 May 2007 19 Sep 2010 04:36 http://eprints.iisc.ernet.in/id/eprint/10277 # Question#8de97 \n \n Dec 10, 2016 \n \n That is not an identity. \n \n #### Explanation: \n \n Recall that \n \n ${\cot}ˆ{2}x + 1 = {\csc}ˆ{2} x$. \n \n So, we can write \n \n $\frac{1 - {\csc}ˆ{2} x}{\csc} ˆ 2 x = \frac{1 - \left({\cot}ˆ{2} x+ 1\right)}{\csc} ˆ 2 x$ \n \n $= {\cot}ˆ{2} \frac{x}{\csc} ˆ 2 x$ \n \n Recall also that $\cot x = \cos \frac{x}{\sin}x$ and $\csc x = \frac{1}{\sin} x$. \n \n This allows us to continue \n \n $= \frac{{\cos}ˆ{2} \frac{x}{\sin} ˆ 2x}{\frac{1}{\sin} ˆ 2 x}$ \n \n $= {\cos}ˆ{2} \frac{x}{\sin} ˆ 2 x \cdot {\sin}ˆ{2} \frac{x}{1}$ \n \n $= {\cos}ˆ{2}x$ \n \n Which is not identically $\cos x$. \n \n (${\cos}ˆ{2} x = \cos x$ only when $\cos x = 1$ or $0$) \n \n Dec10, 2016 \n \n No. It is equal to ${\sin}ˆ{2} x - 1$. \n \n #### Explanation: \n \n If we have $\frac{1 - x}{x}$, wecan write it as $\frac{1}{x} - \frac{x}{x}$. \n \n The same way, $\frac{1 - {\csc}ˆ{2} x}{\csc} ˆ 2$ can be writtenas $\frac{1}{\csc} ˆ 2 x - \frac{{\csc}ˆ{2} x}{{\csc}ˆ{2} x}$. \n \n This is equal to ${\sin}ˆ{2} x - 1$. SMS scnewsitem created by Hannah Bryant at Wed 25 May 2022 1227 \n Type: Seminar \n Distribution: World \n Expiry: 31May 2022 \n Calendar1: 31 May 2022 1500-1600 \n CalLoc1: Quad S224 & via Zoom \n CalTitle1: SMRI ’What is...a virtual knot?’ Hans Boden (McMaster University) \n Auth: [email protected] (hbry8683)19Figure 14: An example of dynamic token selection changes during the training process , whichillustrated with five different score levels represented by deep blue, light blue, black, light orange,and dark orange. The bluer the color indicates a higher tendency for the token to be selected, whilethe more orange the color suggests a lower tendency for the token to be selected.35 |
turWYO1w2Q | Information Directed Tree SearchReasoning and Planning with Language AgentsYash Chandak, Hyunji Alex Nam, Allen Nie, Jonathan Lee and Emma BrunskillDepartment of Computer ScienceStanfordStanford, CA<[email protected]>AbstractSolving challenging tasks often require agentic formulation of language modelsthat can do multi-step reasoning and progressively solve the task by collectingvarious feedback. For computational efficiency, it may be advantageous to quantifythe information associated with different feedback and guide the search such thatthe solution can be obtained quickly. To explore this possibility, we take a Bayesianapproach and propose an information directed tree search (IDTS) algorithm thatmakes use of in-context learning to approximate the information associated withdifferent feedback. We explore the effectivity of IDTS on challenging tasks involv-ing programming, formal math, and natural language. Interestingly, while we findadvantages over simple uniform search methods, the proposed approach is aboutcomparable to MCTS even though it explores different paths. We discuss somepossibilities for our findings and highlight open questions for future work.1 IntroductionLarge language models (LLMs) have become integral for building autonomous agents that aim tofind solutions to challenging tasks. Many such tasks often require multi-step reasoning, are not wellspecified, or require hierarchical decomposition [Hao et al., 2023, Zelikman et al., 2022]. In suchcases, even though LLMs cannot directly provide the complete answer, they can provide partiallycorrect responses, or answer sub-parts of the prompt. Such responses can be then paired with othersource(s) of feedback that can subsequently guide the reasoning of the language models. For instance,if the generated response is a code, then the feedback can be provided by program compilers andunit testing [Zhang et al., 2023a, Zhou et al., 2023]. If the generated response is formal proofsfor mathematical statements, then auto-verifiers can be used [First et al., 2023, Zheng et al., 2023].Further, language models can also be used to self-critique their responses [Zhou et al., 2023].However, such sources of feedback vary in terms of their quality. For instance, compilers can onlyflag incorrect code - they cannot tell what to change in that incorrect code. In contrast, while languagemodel critics can suggest what to change, they might often hallucinate and provide inaccuratefeedback. Similarly, humans may want to work with a language model to achieve a goal but may notknow themselves what the right steps to reach that goal are. This raises the main question of interest:How do we leverage partially correct response generating language models, with partially correctsource(s) of feedback, to find solution to important problems?We cast this as a planning problem with partially correct feedback, and design a new tree searchprocedure for inference time planning. To be computationally efficient when resources are limited,unlike MCTS [Coquelin and Munos, 2007, Kocsis and Szepesvári, 2006] that does a naive count-Workshop on Bayesian Decision-making and Uncertainty, 38th Conference on Neural Information ProcessingSystems (NeurIPS 2024).based exploration, we take a Bayesian approach and prioritize search towards feedback that provideshigher information gain towards the solution.LLMs Planning Robust Rich ExplorationOne-shot generation ✓ ✗ ✗ ✗Iterative Refinement/CoT ✓ ✓ ✗ ✗Search (MCTS, best-first, etc.) ✓ ✓ ✓ ✗IDTS (Proposed) ✓ ✓ ✓ ✓Table 1: Methods that have single chain of thought are often not robust when the feedback is onlypartially correct. In contrast, search based methods maintain several solutions and correspondingfeedback, and are thus less likely to be derailed in their reasoning if the feedback is not perfect.However, conventional search based methods still explore using count-based statistics, which fail toaccount for any notion of information associated with each feedback. The proposed IDTS methodaims at alleviating all these issues. We discuss related work in more detail in Appendix A.Problem Statement: Leto∈ O anda∈ A correspond to an observation and an action. leteach observation consist of some context x∈ X and a reward r∈R, such that o= (x, r). LetOi= (Xi, Ri)andAibe the random variables corresponding to observation and action at interactionstepi. Specifically, the context X0in the initial observation O0contains the question, and the actionAiis an agent’s attempt at the solution and Xi+1contains rich feedback on that action. The goal ofthe agent is to determine an answer Y∈ Y to that question within a fixed number of interactions. Forexample, for coding tasks, X0is a coding question, Aiis the language model’s code, Yis a correctcode solution, and Oi+1contains the error messages from compiler and results from unit tests for Ai.We define a state s∈ Sto be the history of the interaction so far, i.e., Si:= (O0, A0, O1, A1, . . . , O i).Letπ:S → T (A)be a policy and let τπ(s)correspond to the trajectory unrolled using policy πstarting from a state s. In this work, we build upon pretrained LLMs and thus implicitly leverageside information (e.g., data on the internet) to find the solution Y. LetDdenote such data. Let therandom variable Ttbe the tree containing everything observed till the end of the tthiteration. With aslight abuse of notation, we will use a subscript of tto denote implicit conditioning on everythingobserved so far, i.e., TtandD. For example, Pt(Y=y|x):=P(Y=y|x,Tt, D).Information Directed Sampling: Given random variables XandY, the information gain for Yonobserving XisIt(Y;X):=Ht(Y)− H t(Y|X), where Ht(Y):=−Py∈YPt(y) logPt(y)is theentropy and Ht(Y|X):=−Px∈XPt(x)Py∈YPt(y|x) logPt(y|x)is the conditional entropy.In the bandit setting, let Z(A):= (O, A)be the rich observation (potentially including reward),and action A=aon executing a. For the optimal action a∗, letIt(a∗;Z(a))corresponds to theinformation gain towards the agent’s belief over the optimal action a∗given Z(a). Similar to Bayesianexperimental design [Rainforth et al., 2024], information directed sampling[Russo and Van Roy,2014] or its λ-regularized version [Hao and Lattimore, 2022] selects an action such that it maximizesthe performance as well the information gained by the agent,a∈arg maxa∈AIt(a∗;Z(a))Et[r(a∗)−r(a)]2, a ∈arg maxa∈Ar(a) +λIt(a∗;Z(a)). (1)Monte-Carlo Tree Search: A popular planning procedure is MCTS that tracks Nt(s, a)andNt(s),i.e., the number of times a particular (s, a)pair and (s)has been chosen during tree traversal tilliterate t. LetTtbe the tree expanded at the start of a given iterate t. At every iterate t, one of thenodes of Ttis chosen to be expanded by selecting action a∈ A for a given state s∈ S. Most MCTSalgorithms are based on the seminal upper confidence tree (UCT) algorithm [Kocsis and Szepesvári,2006, Coquelin and Munos, 2007] that choosesa∈arg maxa∈A(ˆqt(s, a) +cslogNt(s)Nt(s, a)), (2)where ˆqis the estimate of the cumulative return [Sutton and Barto, 2018], and the term in blue guidesthe exploration. While (2)provides one approach to balance exploration and exploitation based oncounts, not only such discrete notion of counts fail to account for similarities between different states,it also fails to incorporate richer notion of explorations, e.g., even if an agent has taken an action2a∈ A often, it might still want to explore that action more if the information being obtained fromdoing so is large.The key insight of this work is to leverage ideas from (1)and use information gain to improve theexploration technique for monte-carlo tree search (2). Particularly, for our problem setup, we let theaction abe a complete sentence/solution generated by a LLM, and sbe the history of past interactions.Then, if the task is related to coding, we want to prioritize a solution a, which together with its errormessages and unit-tests Z(a)provides higher information gain towards the solution a∗. (While inprinciple, maximization over the action set Ain intractable, similar to prior work [Zhou et al., 2023,Jang et al., 2020] we let Abe k sampled completions for any given state s.) In the remaining sections,we discuss how to estimate information gain associated with taking any action ain state s.2 Information Directed Tree SearchCentral to our idea is a procedure to estimate information gain (IG), such that the proposed idea canscale and be used with LLMs. Here we focus on explaining the core idea of computing IG usingin-context learning for a single interaction step. Due to space constraints, in Appendix B, we discusshow to use chain-rule of IG to recursively decompose IG over a sequence of actions under a policy.Recall that a state sis the history of interactions, and let s′= (s, a, o )be the subsequent state onenacting a. The information gain of taking an action aand observing the new state s′is the reductionin the uncertainty of the agent’s belief over the solution Y, i.e.,It(Y;s′) =Ht(Y)− H t(Y|s′).Computing It(Y;s′)requires estimating entropies Ht(Y)andHt(Y|s′), which in turn depend on theprobability distributions Pt(y)andPt(y|s′)for∀y∈ Y, respectively. In conventional RL methods,this requires updating the posterior given the new state s′. Specifically, recall Pt(y|s′) =p(Y=y|s′,Tt, D), where Ttis the tree explored till the iterate t, and let Tt+1= (Tt∪s′)be the tree forthe next iterate after observing s′. Let φ∈Φbe the parameters of the environment model for theproblem the agent is facing. Here, the posterior updates entail computingp(y|s′,Tt, D) =p(y|Tt+1, D) =Zp(y|x0, φ)p(φ|Tt+1, D)dφ. (3)Unfortunately, such posterior updates can be challenging when φis high-dimensional. The challengeis particularly exacerbated because posterior needs to be repeatedly updated for Tt+k, where k≥1,as new data is acquired, thereby making prior methods intractable beyond simple/linear settings.Figure 1: Graphicalmodel for the datagenerating process.In-context learning for posterior updates: To mitigate this challenge,we build upon the recent insights [Xie et al., 2021, Lee et al., 2023] thatdraw connections between in-context learning and Bayesian inference. In oursetting, in-context learning provides a remarkably simple posterior updateforp(y|Tt+1, D). Specifically, consider the model in Figure 1. Let θbe theparameters of the general world model, and φbe the model parameters forthe specific problem agent is dealing with, and Dis the internet data. We willmake the following assumption,Assumption 1. ∀t,I(θ;Tt|D) = 0 .Assumption 1states that given the internet-scale data D, a few (since tisusually small) problem specific interactions contained in Ttdo not provide anymore information about the general world model θ. This implies p(θ|Tt, D) =p(θ|D). Under thisviewpoint we can express p(y|Tt+1, D)as the following,p(y|Tt+1, D) =ZZp(y, φ, θ|Tt+1, D)dθdφ=ZZp(y, φ|Tt+1, θ, D )dφp(θ|Tt+1, D)dθ(4)=Zp(y|Tt+1, θ, D )p(θ|Tt+1, D)dθ(a)=Zp(y|Tt+1, θ)p(θ|D)dθ, (5)where (a) follows because of the model in Figure 1, and Assumption 1. Similarly, Pt(y) =p(y|Tt, D)needed to compute Ht(Y)can be factorized as p(y|Tt, D) =Rp(y|Tt, θ)p(θ|D)dθ.In AppendixB we discuss how once the posterior Pt(y|s′)andPt(y)are available, we can readily estimate theentropies Ht(Y|s′)andHt(Y), and thus also the information gain It(Y;s′).3Figure 2: We compare the following baselines across domains: MCTS: standard MCTS with UCTstyle bonus, Iterative Refinement: This is sequential interaction (equivalent to MCTS with branchingfactor=1), Sample and select: This samples k solutions at the root node, and effectively disregardsany feedback (equivalent to MCTS with tree depth=1), IDTS: This is the proposed algorithm, thatbuilds on MCTS but uses information gain to drive exploration as opposed to the UCT style bonus.The maximum number of nodes expanded is 10. For MCTS and IDTS, the branching factor is 4.Advantages: Unlike p(φ|Tt+1, D)in(3)that does an explicit update to obtain posterior over φforthe underlying environment, in (5)the posterior is over the parameters of the world-model p(θ|D)and thus do notrequire an explicit update when new data is acquired. Instead, the new data is usedin-context p(y|Tt+1, θ)to obtain the desired p(y|Tt+1, D). Not only this avoid any updates to theparameters θ, but also as new data becomes available then p(y|Tt+k, D)can also be computed readilyfork≥1.This would not have been possible without the in-context learning ability of LLMs.Further, (5)reduces the problem to standard uncertainty estimation for deep learning [Gawlikowskiet al., 2021, Abdar et al., 2021]. Perhaps the most popular technique is to use an ensemble of Nmodels for a Monte-Carlo estimate of the integral in (5). For extremely large models, ensembles canbe created using multiple-low rank adapter instead [Malinin and Gales, 2020, Kuhn et al., 2023]. Forsimplicity, we use N= 1in our experiments.3 Empirical AnalysisComplete algorithm for the proposed IDTS method and more details about the experimental setupis provided in Appendix C. We run all the experiments using OpenAI GPT models [Achiam et al.,2023], and the results are presented in Figure 2. We compare the performances of the above methodson the following domains:Code (Python): This is based on the HumanEval benchmark [Chen et al., 2021]. We focus on thehard problems by filtering out the ones that can be solved by GPT-3.5-turbo in one-shot. Here, anaction corresponds to the entire solution. The feedback consists of results from the syntheticallygenerated (and thus potentially incorrect) unit-tests, error messages from the compiler, and self-critique of the solution given the unit-test results and error messages.Math (Lean): We use the MiniF2F benchmark [Zheng et al., 2021] for this task. Each actioncorresponds to an entire formal proof, and the feedback consists of the error messages from the Lean[Moura and Ullrich, 2021] compiler along with the self-critique of the generated solution.LLF-Bench: [Cheng et al., 2023] This task requires recommending movies to a user whose interestsare hidden from the agent. After every suggestion, the domain provides natural language hintregarding how close the recommended movies are to the user’s interests. In the feedback, we alsoincorporate self-critique of the solution based on the response received.While IDTS shows some promise on coding and LLF-bench, the gains over MCTS are not significantat the moment to justify additional complexity associated with IDTS. Math domain stands out, asneither the error messages from Lean compiler had corrective feedback, nor the self-critique providedany meaningful feedback. As such, just ignoring any feedback and sampling diverse solutionsemerged to be the best strategy in this domain.44 Discussion and Future WorkWhile in principle, having access to the true IG should increase the efficiency of the search signif-icantly, it is not feasible at the LLM scale to obtain the true IG. IDTS avoids (repeated) posteriorupdates using in-context learning, but still requires useful uncertainty measure for LLMs [Malininand Gales, 2020, Quach et al., 2023]. An important direction for future work is to have a deeper studyon the estimation error in the IG computation, as finding other better methods of IG estimation isrequired before using IG is likely to be helpful with LLMs.Note, however, that search using IG only requires deciding which node to explore. As such, it mightonly be needed that the relative (not absolute) values of the IG across nodes are accurate. Whilethis could potentially mitigate the IG estimation challenge, assessing the quality of ranking alsoremains challenging as the true rankings are unknown. Further, expanding > >10nodes per treewould provide invaluable insights on the utility of different exploration strategies.ReferencesMoloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, MohammadGhavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U Rajendra Acharya, et al. Areview of uncertainty quantification in deep learning: Techniques, applications and challenges.Information fusion , 76:243–297, 2021.[Page(s): 4]Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman,Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report.arXiv preprint arXiv:2303.08774 , 2023.[Page(s): 4]Dilip Arumugam and Benjamin Van Roy. Deciding what to learn: A rate-distortion approach. InInternational Conference on Machine Learning , pages 373–382. PMLR, 2021a.[Page(s): 9]Dilip Arumugam and Benjamin Van Roy. The value of information when deciding what to learn.Advances in Neural Information Processing Systems , 34:9816–9827, 2021b.[Page(s): 9]Eser Aygün, Ankit Anand, Laurent Orseau, Xavier Glorot, Stephen M Mcaleer, Vlad Firoiu, Lei MZhang, Doina Precup, and Shibl Mourad. Proving theorems using incremental learning andhindsight experience replay. In International Conference on Machine Learning , pages 1198–1210.PMLR, 2022.[Page(s): 9]Erdem Biyik, Fan Yao, Yinlam Chow, Alex Haig, Chih-wei Hsu, Mohammad Ghavamzadeh, andCraig Boutilier. Preference elicitation with soft attributes in interactive recommendation. arXivpreprint arXiv:2311.02085 , 2023.[Page(s): 9]Craig Boutilier. A pomdp formulation of preference elicitation problems. In AAAI/IAAI , pages239–246. Edmonton, AB, 2002.[Page(s): 9]Craig Boutilier. Computational decision support: Regret-based models for optimization and prefer-ence elicitation, 2013.[Page(s): 9]Urszula Chajewska, Daphne Koller, and Ronald Parr. Making rational decisions using adaptive utilityelicitation. In Aaai/Iaai , pages 363–369, 2000.[Page(s): 9]Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, JaredKaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri,Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan,Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian,Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, FotiosChantzis, Elizabeth Barnes, Ariel Herbert-V oss, William Hebgen Guss, Alex Nichol, Alex Paino,Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders,Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa,Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, BobMcGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluatinglarge language models trained on code. 2021.[Page(s): 4]5Ching-An Cheng, Andrey Kolobov, Dipendra Misra, Allen Nie, and Adith Swaminathan. Llf-bench:Benchmark for interactive learning from language feedback. arXiv preprint arXiv:2312.06853 ,2023.[Page(s): 4]Pierre-Arnaud Coquelin and Rémi Munos. Bandit algorithms for tree search. arXiv preprintcs/0703062 , 2007.[Page(s): 1, 2]Nirjhar Das, Souradip Chakraborty, Aldo Pacchiano, and Sayak Ray Chowdhury. Provably sampleefficient rlhf via active preference optimization. arXiv preprint arXiv:2402.10500 , 2024.[Page(s): 9]Emily First, Markus N Rabe, Talia Ringer, and Yuriy Brun. Baldur: whole-proof generation andrepair with large language models. arXiv preprint arXiv:2303.04910 , 2023.[Page(s): 1, 9]Rachel Freedman, Justin Svegliato, Kyle Wray, and Stuart Russell. Active teacher selection forreinforcement learning from human feedback. arXiv preprint arXiv:2310.15288 , 2023.[Page(s): 9]Jakob Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseok Lee, Matthias Humt,Jianxiang Feng, Anna Kruspe, Rudolph Triebel, Peter Jung, Ribana Roscher, et al. A survey ofuncertainty in deep neural networks. arXiv preprint arXiv:2107.03342 , 2021.[Page(s): 4]Botao Hao and Tor Lattimore. Regret bounds for information-directed reinforcement learning.Advances in Neural Information Processing Systems , 35:28575–28587, 2022.[Page(s): 2, 9]Botao Hao, Tor Lattimore, and Chao Qin. Contextual information-directed sampling. In InternationalConference on Machine Learning , pages 8446–8464. PMLR, 2022.[Page(s): 9]Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu.Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992 ,2023.[Page(s): 1, 9]Youngsoo Jang, Seokin Seo, Jongmin Lee, and Kee-Eung Kim. Monte-carlo planning and learningwith language action value estimates. In International Conference on Learning Representations ,2020.[Page(s): 3, 9]Kaixuan Ji, Jiafan He, and Quanquan Gu. Reinforcement learning from human feedback with activequeries. arXiv preprint arXiv:2402.09401 , 2024.[Page(s): 9]Johannes Kirschner and Andreas Krause. Information directed sampling and bandits with het-eroscedastic noise. In Conference On Learning Theory , pages 358–384. PMLR, 2018.[Page(s): 9]Levente Kocsis and Csaba Szepesvári. Bandit based monte-carlo planning. In European conferenceon machine learning , pages 282–293. Springer, 2006.[Page(s): 1, 2]Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. Semantic uncertainty: Linguistic invariances foruncertainty estimation in natural language generation. arXiv preprint arXiv:2302.09664 , 2023.[Page(s): 4, 13]Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictiveuncertainty estimation using deep ensembles. Advances in neural information processing systems ,30, 2017.[Page(s): 13]Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury Hayat,Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. Hypertree proof search for neural theoremproving. Advances in Neural Information Processing Systems , 35:26337–26349, 2022.[Page(s): 9]Jonathan N Lee, Annie Xie, Aldo Pacchiano, Yash Chandak, Chelsea Finn, Ofir Nachum, and EmmaBrunskill. Supervised pretraining can learn in-context reinforcement learning. arXiv preprintarXiv:2306.14892 , 2023.[Page(s): 3]Robert Lieck, Vien Ngo, and Marc Toussaint. Exploiting variance information in monte-carlo treesearch. HSDIP 2017 , page 26, 2017.[Page(s): 9]Jannis Limperg and Asta Halkjær From. Aesop: White-box best-first proof search for lean. InProceedings of the 12th ACM SIGPLAN International Conference on Certified Programs andProofs , pages 253–266, 2023.[Page(s): 9]6Xiuyuan Lu, Benjamin Van Roy, Vikranth Dwaracherla, Morteza Ibrahimi, Ian Osband, Zheng Wen,et al. Reinforcement learning, bit by bit. Foundations and Trends ®in Machine Learning , 16(6):733–865, 2023.[Page(s): 9]Andrey Malinin and Mark Gales. Uncertainty estimation in autoregressive structured prediction.arXiv preprint arXiv:2002.07650 , 2020.[Page(s): 4, 5, 13]Katerina Margatina, Timo Schick, Nikolaos Aletras, and Jane Dwivedi-Yu. Active learning principlesfor in-context learning with large language models. arXiv preprint arXiv:2305.14264 , 2023.[Page(s): 9]Viraj Mehta, Biswajit Paria, Jeff Schneider, Stefano Ermon, and Willie Neiswanger. An experimentaldesign perspective on model-based reinforcement learning. arXiv preprint arXiv:2112.05244 ,2021.[Page(s): 9]Viraj Mehta, Ian Char, Joseph Abbate, Rory Conlin, Mark Boyer, Stefano Ermon, Jeff Schneider,and Willie Neiswanger. Exploration via planning for information about the optimal trajectory.Advances in Neural Information Processing Systems , 35:28761–28775, 2022.[Page(s): 9]Viraj Mehta, Vikramjeet Das, Ojash Neopane, Yijia Dai, Ilija Bogunovic, Jeff Schneider, and WillieNeiswanger. Sample efficient reinforcement learning from human feedback via active exploration.2023.[Page(s): 9]Leonardo de Moura and Sebastian Ullrich. The lean 4 theorem prover and programming language. InAutomated Deduction–CADE 28: 28th International Conference on Automated Deduction, VirtualEvent, July 12–15, 2021, Proceedings 28 , pages 625–635. Springer, 2021.[Page(s): 4]Nikolay Nikolov, Johannes Kirschner, Felix Berkenkamp, and Andreas Krause. Information-directedexploration for deep reinforcement learning. arXiv preprint arXiv:1812.07544 , 2018.[Page(s): 9]OpenAI. New models and developer products announced at devday, 2023. https://openai.com/blog/new-models-and-developer-products-announced-at-devday .[Page(s): 10]Ian Osband, Zheng Wen, Seyed Mohammad Asghari, Vikranth Dwaracherla, Morteza Ibrahimi,Xiuyuan Lu, and Benjamin Van Roy. Epistemic neural networks. arXiv preprint arXiv:2107.08924 ,2021.[Page(s): 13]Giambattista Parascandolo, Lars Buesing, Josh Merel, Leonard Hasenclever, John Aslanides, Jes-sica B Hamrick, Nicolas Heess, Alexander Neitz, and Theophane Weber. Divide-and-conquermonte carlo tree search for goal-directed planning. arXiv preprint arXiv:2004.11410 , 2020.[Page(s): 9]Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and IlyaSutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344 ,2022.[Page(s): 9]Victor Quach, Adam Fisch, Tal Schuster, Adam Yala, Jae Ho Sohn, Tommi S Jaakkola, and ReginaBarzilay. Conformal language modeling. arXiv preprint arXiv:2306.10193 , 2023.[Page(s): 5]Tom Rainforth, Adam Foster, Desi R Ivanova, and Freddie Bickford Smith. Modern bayesianexperimental design. Statistical Science , 39(1):100–114, 2024.[Page(s): 2]Daniel Russo and Benjamin Van Roy. Learning to optimize via information-directed sampling.Advances in Neural Information Processing Systems , 27, 2014.[Page(s): 2, 9]Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction . MIT press, 2018.[Page(s): 2]Haiming Wang, Ye Yuan, Zhengying Liu, Jianhao Shen, Yichun Yin, Jing Xiong, Enze Xie, Han Shi,Yujun Li, Lin Li, et al. Dt-solver: Automated theorem proving with dynamic-tree sampling guidedby proof-level value function. In Proceedings of the 61st Annual Meeting of the Association forComputational Linguistics (Volume 1: Long Papers) , pages 12632–12646, 2023a.[Page(s): 9]7Ruocheng Wang, Eric Zelikman, Gabriel Poesia, Yewen Pu, Nick Haber, and Noah D Goodman.Hypothesis search: Inductive reasoning with language models. arXiv preprint arXiv:2309.05660 ,2023b.[Page(s): 9]Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, DennyZhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances inneural information processing systems , 35:24824–24837, 2022.[Page(s): 9]Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. An explanation of in-contextlearning as implicit bayesian inference. arXiv preprint arXiv:2111.02080 , 2021.[Page(s): 3]Huajian Xin, Haiming Wang, Chuanyang Zheng, Lin Li, Zhengying Liu, Qingxing Cao, Yinya Huang,Jing Xiong, Han Shi, Enze Xie, et al. Lego-prover: Neural theorem proving with growing libraries.arXiv preprint arXiv:2310.00656 , 2023.[Page(s): 9]Kaiyu Yang, Aidan M Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil,Ryan Prenger, and Anima Anandkumar. Leandojo: Theorem proving with retrieval-augmentedlanguage models. arXiv preprint arXiv:2306.15626 , 2023.[Page(s): 9]Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and KarthikNarasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXivpreprint arXiv:2305.10601 , 2023.[Page(s): 9]Andrea Zanette and Rahul Sarkar. Information directed reinforcement learning. Tech. Rep., Technicalreport, Technical report , 2017.[Page(s): 9]Eric Zelikman, Qian Huang, Gabriel Poesia, Noah D Goodman, and Nick Haber. Parsel: A unifiednatural language framework for algorithmic reasoning. arXiv preprint arXiv:2212.10561 , 2022.[Page(s): 1, 9]Eric Zelikman, Eliana Lorch, Lester Mackey, and Adam Tauman Kalai. Self-taught optimizer (stop):Recursively self-improving code generation. arXiv preprint arXiv:2310.02304 , 2023.[Page(s): 9]Shun Zhang, Zhenfang Chen, Yikang Shen, Mingyu Ding, Joshua B Tenenbaum, and Chuang Gan.Planning with large language models for code generation. arXiv preprint arXiv:2303.05510 , 2023a.[Page(s): 1, 9]Zheyu Zhang, Zhuorui Ye, Yikang Shen, and Chuang Gan. Autonomous tree-search ability of largelanguage models. arXiv preprint arXiv:2310.10686 , 2023b.[Page(s): 9]Chuanyang Zheng, Haiming Wang, Enze Xie, Zhengying Liu, Jiankai Sun, Huajian Xin, JianhaoShen, Zhenguo Li, and Yu Li. Lyra: Orchestrating dual correction in automated theorem proving.arXiv preprint arXiv:2309.15806 , 2023.[Page(s): 1, 9]Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark forformal olympiad-level mathematics. arXiv preprint arXiv:2109.00110 , 2021.[Page(s): 4]Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, and Yu-Xiong Wang. Languageagent tree search unifies reasoning acting and planning in language models. arXiv preprintarXiv:2310.04406 , 2023.[Page(s): 1, 3, 9]Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Ruyi Gan, Jiaxing Zhang, and Yujiu Yang.Solving math word problem via cooperative reasoning induced language models. arXiv preprintarXiv:2210.16257 , 2022.[Page(s): 9]Yuchen Zhuang, Xiang Chen, Tong Yu, Saayan Mitra, Victor Bursztyn, Ryan A Rossi, SomdebSarkhel, and Chao Zhang. Toolchain*: Efficient action space navigation in large language modelswith a* search. arXiv preprint arXiv:2310.13227 , 2023.[Page(s): 9]8Information Directed Tree Search(Appendix)A Related WorkA.1 LLM + MCTS based Reasoning and PlanningChain of thought reasoning [Wei et al., 2022] and self improvement [Zelikman et al., 2023] haveshown a lot of success in the past. Building on the success of these tree of thought [Yao et al., 2023]maintains a diverse set of solutions. Alternatively, planning via tree search based algorithms havealso been studied [Hao et al., 2023, Zhou et al., 2023]. Extensions of A∗search [Zhuang et al., 2023],or even in-context tree traversal [Zhang et al., 2023b], and hierarchical search [Parascandolo et al.,2020, Zelikman et al., 2022, Wang et al., 2023b] have also been explored.Several methods have considered solving formal math problems using different variants of MCTSand best-first search [Yang et al., 2023, Limperg and From, 2023, Wang et al., 2023a, Lample et al.,2022, Zhu et al., 2022], doing proof repair using feedabck (akin to iterative-refinement) [Zheng et al.,2023, First et al., 2023], by hierarchical decomposition [Xin et al., 2023], or incremental learning[Aygün et al., 2022, Polu et al., 2022]. MCTS for language based RL games [Jang et al., 2020] andcode-generation [Zhang et al., 2023a] has also been considered.The approaches developed in these are largely complementary to ours, but none of them focus on thetopic of how to explore more efficiently in the presence of partially correct feedback.A.2 Information Directed SamplingBuilding upon the initial work on IDS [Russo and Van Roy, 2014], the core idea has been furtherexpounded by several [Lu et al., 2023, Hao et al., 2022, Hao and Lattimore, 2022] with a focus ontheoretical aspects of the regret associated with various instances of the IDS idea. On the practicalside, Zanette and Sarkar [2017] considered the tabular setting and aimed at a tractable approximationof the information ratio using a variance based bound for IG. From one perspective, if we could havereplaced the information gain term in the proposed IDTS method with a variance based bound, thenour method would come closer to using Bernstein based bonuses for tree search [Lieck et al., 2017].Kirschner and Krause [2018] discuss how in Gaussian setting, IG based on Bayesian posteriordistribution can be expressed in frequentist terms. Nikolov et al. [2018] built upon this to considerthe generic/deepRL setting and estimate both the aleatoric (using distributional RL) and epistemicuncertainty (using a q-ensemble). IDS has also been used for doing open/closed loop planning bychanging the ‘reward’ function to consider the information gained about the optimal trajectory byrunning the desired sequence of actions Mehta et al. [2021, 2022]. While related, none of them tacklethe LLM setting.Our work is also related to works that discuss what information to gather [Arumugam and Van Roy,2021a], and how the acquisition of it can be made faster [Arumugam and Van Roy, 2021b]. Similarly,it also related to the work on reformulating POMDPs for optimizing long term value of informationwhen eliciting preferences [Boutilier, 2002] and query optimization [Chajewska et al., 2000, Boutilier,2013, Biyik et al., 2023]. Several recent works also consider the task of strategically collecting labelsfor which pair of outputs to query in order to minimize the number of samples need for RLHF training[Mehta et al., 2023, Freedman et al., 2023, Ji et al., 2024, Das et al., 2024] or for in-context learning[Margatina et al., 2023]. Our work complements this direction and considers strategic interaction atthe inference time to find a solution to given question, and involves no RLHF.B Information Gain EstimationB.1 PreliminariesLetXandYbe two random variables. Recall, from the chain rule of entropy:H(X, Y) =H(X) +H(Y|X) (Chain rule of entropy) (6)9Similarly, information gain between random variables {X1, X2}andYcan be written as,I(Y;X1, X2) =H(Y)− H(Y|X1, X2) (7)=H(Y)− H(Y|X1) +H(Y|X1)− H(Y|X1, X2) (8)=I(Y;X1) +I(Y;X2|X1). (Chain rule of information gain)(9)B.2 Intra-interaction decomposition:Recall that p(y|Tt+1, D)from (5) was required to compute the information gain It(Y;s′),p(y|Tt+1, D) =Zp(y|Tt+1, θ)p(θ|D)dθ, (10)Here, the random variable Ythat characterizes the agent’s belief over the solution is a sequence oftoken (Y0, Y1, . . .)that is generated auto-regressively. This structure can be leveraged to specify anestimator that decomposes the information gain at per token level,ˆIt(Y;s′):=−K2Xi=1logPt(Yi|Y:i−1) +K1Xi=1logPt(Yi|Y:i−1, s′), (11)where Pt(Yi|YY:i−1)is a random variable because conditioning is on a random variable as well.Further, notice that the two sequence of Yi’s are different, as one is conditioned on s′and the other isnot.K1andK2are the lengths of the generated sentences.Estimator ˆIt(Y;s′)decomposes information gain such that its computation requires logprobs of thegenerated tokens, which is accessibly even from proprietary large-language models like GPT [OpenAI,2023]. However, when using open-source models, notice that the agent has additional access tothe logprobs of the tokens notsampled. These can be leveraged to create the following improvedestimator that can have lower variance than ˆIt(Y;s′), but still provides an unbiased estimator ofIt(Y;s′).eIt(Y;s′):=K2Xi=1Ht(Yi|Y:i−1)−K1Xi=1Ht(Yi|Y:i−1, s′). (12)It can be observed using chain-rule of entropy and tower law of expectations that,EheIt(Y;s′)i=EhˆIt(Y;s′)i=It(Y;s′). (13)B.3 Inter-interaction decomposition:letπbe a tree-traversal policy that operates on the tree Tt(This policy has a composite nature, whereit either selects one of the already expanded branches, or draws a new sample to create a new branchin the tree). For any given node s, letτπ(s)be a random trajectory that could be unrolled using πwhen starting at node s. LetLt(s)be the set of leaves under the subtree ofsinTt. LetCt(s)be theset of immediate children of s, andAt(s)be all the ancestors of sinTt.In the following Theorem 1, we formalize a procedure to recursively estimate the information gain onexpanding the sub-tree of a given node s. This has an intuitive form that asserts that the informationgain from expanding the subtree of a node sequals the expected information gain from expandingthe children of node s.Theorem 1. The information gain from τt(s)can be expressed as the following recursive form,It(Y;τπ(s)) =Xs′∈Ct(s)Pt(s′|s;π)It(Y;τπ(s′)) =Xx∈Lt(s)Pt(x|s;π)It(Y;τπ(x)). (14)Proof. Consider the following decomposition τ(s) ={s∪τ2}, where τ2is the part of the trajectoryτ(s)without node s. Note that the first node in τ2will be a s′∈ Ct(s).It(Y;τ(s)) =Ht(Y)− H t(Y|τ(s)) (15)10Considering the term in red in 15,Ht(Y|τ(s)) =Ht(Y|s, τ2) (16)=Xx,τPt(s=x)Pt(τ2=τ|s=x)Ht(Y|x, τ) (17)(a)=XτPt(τ2=τ|s)Ht(Y|s, τ) (18)=Xs′∈C(s)Pt(s′|s)XτPt(τ2=τ|s, s′)Ht(Y|s, τ) (19)(b)=Xs′∈C(s)Pt(s′|s)XτPt(τ(s′) =τ)Ht(Y|s, τ) (20)=Xs′∈C(s)Pt(s′|s)Ht(Y|s, τ(s′)) (21)where (a)follows as sis the first node in the trajectory τ(s)deterministically. (b)follows asconditioned on s′,τ2does not depend on s. Now, focusing on the blue term in 21,Ht(Y|s, τ(s′)) =−Xx,τPt(s=x)Pt(τ(s′) =τ|s=x)XyPt(y|x, τ) logPt(y|x, τ) (22)(a)=−XτPt(τ(s′) =τ)XyPt(y|s, τ) logPt(y|s, τ) (23)=−XτPt(τ(s′) =τ)XyP(y|s, τ,Tt) logP(y|s, τ,Tt) (24)(b)=−XτPt(τ(s′) =τ)XyP(y|τ,Tt) logP(y|τ,Tt) (25)=−XτPt(τ(s′) =τ)XyPt(y|τ) logPt(y|τ) (26)=−XτPt(τ(s′) =τ)Ht(Y|τ) (27)=Ht(Y|τ(s′)) (28)where (a)follows because sands′are fixed variables in the LHS, and the trajectory τ(s′)does notdepend on ancestor node sofs′.(b)follows because by construct ancestor sis a part of Tt, i.e.,s∈ Ttalready. Therefore, combining 15, 21, and 28,It(Y;τ(s)) =Ht(Y)−Xs′∈Ct(s)Pt(s′|s)Ht(Y|τ(s′))=Xs′∈Ct(s)Pt(s′|s)[Ht(Y)− H t(Y|τ(s′))]=Xs′∈Ct(s)Pt(s′|s)It(Y;τt(s′)).Now, unrolling It(Y;τt(s′))using the above recursion and observing that for a leaf node x∈ Lt(s),τ(x) =x, we obtain the aforementioned result.Contrast this recursive propagation of information gain, with recursive propagation of the (terminal)reward. Information gain from intermediate states in a trajectory is zero, similar to intermediaterewards for many games. This permits minimal modification of existing MCTS methods to incorporaterich feedback and perform an information-directed exploration.11B.4 Information Gain Bellman RecursionLetIπt(s)be the information gained towards the optimal solution yon unrolling a trajectory τπ(s),given the agent is at state sduring iteration t. Also, recall that the sub-script of tdenotes implicitconditioning on all the data observed so far (e.g., in the code-gen setting, it denotes conditioning onall the internet data that has been used for model training),Iπt(s):=It(Y;τπ(s)|s). (29)One should consider Iπt(h)analogous to the state-value function for a policy π, but instead of beingfor the long-term return to go, it is for the long-term information gain. Similar to the state-valuefunction, it is also possible to define a Bellman recursion for the information gain (recall that byconstruct, a state is defined using the history of the trajectory).Theorem 2. Information gain recursion:Iπt(s) =It(Y;S′|s) +Eπ[Iπt(S′)|s], (30)where S′is observed following the policy πfrom state s.Corollary 1. n-step Information gain backupIπt(s) =Eπ"n−1Xi=0It(Y;Si+1|Si) +Iπt(Sn)|s#, (31)where S0=s.Proof. Follows by unrolling the recursion in (30).Proof. Proof of Theorem 2. Without loss of generality, consider s= (O0), i.e., just the starting state,and consider the length of a trajectory to be L,Iπt(s) =It(Y;τπ(s)|s) (32)=It(Y; (A0, O1, A1, . . . , O L)|s) (33)=It(Y;S′|s) +It(Y; (A1, . . . , O L)|s, S′) ∵Chain rule of info gain (34)=It(Y;S′|s) +It(Y; (A1, . . . , O L)|S′) ∵S′=s∪(A0, O1) (35)=It(Y;S′|s) +XPt(s′|s;π)It(Y;τπ(s′)|s′) (36)=It(Y;S′|s) +XPt(s′|s;π)Iπt(s′) ∵By definition of Iπt(s′) (37)=It(Y;S′|s) +Eπ[Iπt(S′)|s]. (38)Remark 1. Note that as a ‘state’ is the history of everything that has occured so far, this makes thedecision process resemble an (acyclic) tree, where each ‘state’ can only be reached through a uniquepath, and can’t be revisited again. This is good, as it is aligned with the problems we aim to resolve.But this may be not so good, because it probably makes learning I(·)much harder.Remark 2. During test-time inference, we need a value function that estimates Iπt(h)so as toperform n-step information backup. But how should we learn Iπt(h)? Here it is important todistinguish the implicit conditioning on (the entire past training data) through the subscript t, and theexplicit conditioning (i.e., in-context) on the observations made in the current interaction. Therefore,this will require an ‘intermediate’ phase where we will need pairs of (s,ˆIπt(Y;τπ(s)|s))from aheld-out dataset to estimate Iπt(s), which can then be used during the test time to do (n-step)backup. The important thing to note here is that the data from the ‘intermediate’ phase should notbe used, ideally, to improve the language model again, as that would change the meaning of theimplicit-conditioning on t. For example, if Iπt(Y;τπ(s)|s)had a non-zero value and then we use stoupdate the language model again, and if we assume everything to be perfect, then Iπt+1(Y;τπ(s)|s)should be 0 as there is no new information to be gained from s.Due to the above challenges in estimating long-term information gain Iπt(s), we resort to myopicversion, where only one-step information gain backup is used, and Iπt(s)is set to 0. This is akin tohow even UCT does not take into account uncertainty of Q, it still does one-step/bandit uncertainty.12C Empirical DetailsIn Algorithm 1 we present the pseudo-code for IDTS. The steps highlighted in blue are the keydifferences from the typical tree-search/UCT.For different domains, we defined Value (s, τπ(s))differently. For coding domain, this correspondedto the the fraction of unit test passed by the latest solution available in τπ(s). For LLF-bench, thiscomprised of the scalar feedback form the environment for how relevant is the latest recommendationto the movie that the user had in mind. For math domain, we set Value (s, τπ(s)) = 0 , and treat thetask as a pure exploration problem.For all the domains, we set n= 1 for rollout in the Evaluate function. Therefore, τπ(s) =s′.With this setting ˆI(Y;τπ(s)|T, D), where in the algorithm Trepresents the current tree, can beequivalently expressed as ˆIt(Y;s′), where tis the current iterate.Results for the code, and math domain used gpt-3.5-turbo-0125 , and for the LLF-benchgpt-4-turbo-2024-04-09 was used.Algorithm 1: The IDTS algorithm1Function IDTSearch( s0):2 Create tree Twith root state s03 while within compute budget do4 s←TreePolicy (s0)5 v, i←Evaluate (s)6 Backup (s, v, i )7 return arg maxs∈ˆTV(s)8Function TreePolicy( s):9 while sis non-terminal do10 ifsis not K-expanded then11 return Expand (s)12 else13 s←BestChild (s)14 return s15Function BestChild( s):16 return arg maxs′∈Ct(s)V(s′) +λI(s′)17Function Evaluate( s):18 (n-step) Rollout τπ(s)19 v←Value (s, τπ(s))20 i← I(Y;τπ(s)|T, D)21 return v, i22Function Expand( s):23 A←LLM(s,Tt)24 O←GetFeedback (s, A)25 s′= (s, A, O )26 T ← T + Child s′added to s27 return s′28Function Backup( s, v, i):29 while sis not null do30 α←N(s)/(N(s) + 1)31 V(s)←αV(s) + (1 −α)v32 I(s)←αI(s) + (1 −α)i33 N(s)←N(s) + 134 s←parent of s35 v←r(s) +vPractical Approximations: Recall from 5 that we need to estimate p(y|s′,Tt, D). This corre-sponds to epistemic uncertainty for the LLM. Several methods exists for estimating this [Lakshmi-narayanan et al., 2017, Osband et al., 2021], we make use of the ensemble approach wherep(y|s′,Tt, D) =Zp(y|s′,Tt, θ)p(θ|D)dθ≈1MMXi=1p(y|s′,Tt, θi). (39)For extremely large models, ensembles can be created using multiple-low rank adapter instead[Malinin and Gales, 2020, Kuhn et al., 2023]. For our experiments, we simply use M= 1, such thatp(y|s′,Tt, D)≈p(y|s′,Tt, θ). Similarly for p(y|Tt, D)≈p(y|Tt, θ). Computing an estimate infor-mation gain It(Y;s′)now simply corresponds to estimating the difference in entropy of p(y|s,Tt, θ)andp(y|Tt, θ).Further, conditioning on the information of the entire tree Ttmight require very large context windowfor the LLMs. To avoid such long context, instead of conditioning on the content of all the nodes inTt, we condition only on the ancestor nodes of s′. Finally, we also note that that the estimator in (11)can result in negative values because of the sampling error. To avoid this, we clip the minimum valueof the information gain to be 0.13 |
oRW8i4EF0Z | A Bayesian ApproachTowards Crowdsourcing the Truths from LLMsPeiran Yao1, Jerin George Mathew2, Shehraj Singh1,Donatella Firmani2,Denilson Barbosa11University of Alberta2Sapienza University of Rome{peiran,denilson}@ualberta.caAbstractConcerns persist over the trustworthiness of large language models (LLMs) dueto the generation of plausible but incorrect information, known as hallucination.Existing approaches focus on identifying false answers or improving correctnessby sampling responses from a single LLM. However, querying multiple LLMs,which exhibit complementary strengths, remains largely unexplored. In this work,we propose a Bayesian crowdsourcing approach towards aggregating multipleanswers from multiple LLMs and quantifying their uncertainty. Extending theDawid-Skene model, we treat LLMs as annotators, using their answer probabilitiesas noisy observations of truthfulness and modeling semantic relations betweenanswers in the covariance structure, and jointly learn about LLM’s reliability andcalibration as parameters. Validated across three open-domain question answeringdataset, results show that our approach outperforms existing statistical or agenticmethods in abstaining from false answers and identifying truthful ones, offering arobust, scalable solution for uncertainty quantification and truth discovery in LLMoutputs.1 IntroductionLarge language models (LLMs; [29, 7, 40, 15] iter alia ) are pre-trained on web-scale language datathat make them excel at generating human-like responses and storing extensive world knowledge[32]. They have demonstrated remarkable capabilities across a wide range of tasks that requireconceptual knowledge, from recalling answers to trivia questions [ 27] to performing complex multi-hop reasoning [ 18]. However, their wide applications have raised concerns over the trustworthinessand reliability of their outputs [ 25,13], exemplified by the generation of plausible but incorrectinformation (i.e. “hallucination” [ 47]) and the lack of self-awareness of limitations in knowledge[17], which both remain largely unaddressed.To improve the reliability of LLM answers at test time, strategies that work on one of the twocomplementary fronts have been proposed: improving the correctness of answers [ 26,45,13,6], oridentifying answers that are more likely to be false to abstain from them [ 9,20,24,8]. The underlyingideas behind strategies from both fronts are similar: they rely on sampling multiple answers from asingle LLM and aggregating them based on consistency [ 45,20,24], or following an agentic workflowwhere LLMs verbally evaluate or improve an answer, similarly to human interactions [26, 6, 8].We study a more general scenario , where multiple LLMs are each queried multiple times to generatea set of candidate answers for multiple questions, that to our knowledge has not been extensivelyand systematically studied. Querying multiple LLMs, rather than a single one, could be helpfulbecause they are known to have different strengths and weaknesses [ 16,43], and their outputs can becomplementary [ 42]. The goals are, at the same time, (1) to quantify the uncertainty ( UQ) of theseWorkshop on Bayesian Decision-making and Uncertainty, 38th Conference on Neural Information ProcessingSystems (NeurIPS 2024).answers, as a base for abstaining; and (2) to aggregate these answers to infer the truthful answer foreach question, a problem known as truth discovery ( TD) in the data management literature [23, 48].This task can be seen as a special case of inferring the ground truths from multiple annotators, wherethe annotators are all LLMs. The base problem, known as crowdsourcing [ 48], is well-establishedwith Bayesian models that go beyond simple consistency-based aggregation [ 4,46], leading toapplications such as in NLP tasks [ 30,31,38]. Despite precursory works [ 5] on crowdsourcing withweak systems as annotators, not much has been done to extend these models to LLM-based annotators.Crowdsourcing models are mainly limited to classification tasks with predefined classes, while thetypical use cases of LLMs require free-form text answers. Moreover, classical crowdsourcing modelsexpect a single answer from each annotator, while LLMs can generate multiple answers for a singlequestion with different probabilities, which could additionally inform crowdsourcing models.Truthful zTruthful zTruthful zCalibration βAp(true) yContradictoryReliability μA,σALLM , Answer A1“Canada”“USA”“Vancouver”EntailmentReliability μA,σAReliability μB,σBQuestion: “Where is NeurIPS 2024 happening?”Calibration βACalibration βBp(true) yp(true) yLLM , Answer 2ALLM , Answer B1Figure 1: We propose a Bayesian model for crowd-sourcing from free-text outputs of multiple LLMs.To better illustrate the relations between elements ofthe multivariate random variable y, this figure doesnot strictly follow the standard plate notation. yas asingle multivariate random variable is represented asa rounded rectangle with its elements represented ascircles inside, which should not be confused with re-peated independent variables that are conventionallyrepresented as a rectangle.To address these challenges, we propose aprobabilistic generative model that extendsthe Dawid-Skene model [ 4] to the LLM-basedsetting. For each question, the truthfulnessof candidate answers from multiple LLMsis modeled as a multivariate latent variablewhose correlation structure is tied to the se-mantic relations between the answers, andwhose marginal distribution determined bythereliability of the LLMs. The observeddata are the probabilities of the candidate an-swers being generated by the LLMs, whichare treated as noisy observations of the truth-fulness and are modeled as dependent on thelatent truthfulness variable via a calibrationprocess. Parameters for reliability and cali-bration are learned by the EM algorithm, andthe truthfulness (inverse of uncertainty) of thecandidate answers is inferred by the poste-rior of the latent variable. We validate ourmodel on three open-domain question answer-ing datasets, where we show that our modelcan even outperform costly agentic methodsin effectively identifying the truthful answersand abstaining from the false ones.2 Related WorkCrowdsourcing. Combining multiple answers by simple majority voting is commonly used toimprove LLMs at test time [ 45,22]. Crowdsourcing models such as Dawid-Skene [ 4] opt for iterativeweighted majority voting by assuming different reliability of workers. They are comprehensivelyreviewed by [ 48,31,30] and implemented by [ 41]. Typically, these models require a known andfixed set of possible answers, while recent studies [ 21] operate with free-text by a weighted majorityvoting in the embedding space. Our approach eliminates the need for embedding and provides moreflexibility and interpretability. Weak supervision [ 36], where the truths are not fully available andtruth inference is integrated with model learning, is closely related. We only consider the case whenthe ground truth is completely unavailable, and do not consider learning. Second-order informationsuch as worker’s predictions [19] is useful but beyond the scope of this work.Uncertainty quantification. Many UQ methods work by quantifying how consistent the answersare, with [ 20,24,34] providing methods for a semantic-aware consistency measure. Picking themost consistent answer is also the underlying strategy to improve the factuality of LLMs [ 45,22].[12] infers about the uncertainty of a statement using a Hidden Markov Model by considering thelogical relations with other statements conditionally generated from the initial statement. Our modeloffers more relaxations and consider arbitrary answers sampled independently from multiple LLMs.A separate step in UQ is to calibrate the uncertainty estimates to match accuracy using labeled2dataset [ 10], while our model allows the learning of the calibration and reliability parameters withoutlabelling.Scaling LLMs at Test Time Doing repeated sampling allows for a trade-off between the amountof computing spent in test time and the quality of the final output, a research topic known as test-timeorinference-time scaling [ 45,22]. Although self-consistency (simple majority voting) yields goodperformance [ 45,22], it is typically assumed that a verifier trained on the same data is available sothat more complex aggregation methods can be used, such as voting weighted by verifier score, orbest-of-n selected by the verifier [ 37]. Such a verifier could be available when the ground-truth isknown for a subset of the data [ 3], or when there are deterministic rules to verify the answers such asfor math and coding problems [ 44]. However, it remains an open question to find a general-purposeverifier. Our method works in the most unrestricted setting and assumes that there is no access to anyverifier, nor is there information about the relative strengths of LLMs, obtainable by, for example,running benchmarks on labelled in-domain datasets.3 MethodologySuppose we have a set of Nquestions and JLLMs. For each question q(i), we sample Kanswersfrom each LLM, resulting in J×Kanswers {a11, . . . , a JK}, and record their probabilities of beinggenerated y(i)={y11, . . . , y JK}, which are provided as logprob or can be estimated by samplingblack-box models, and are known as p(answer )[17]. For simplicity, we omit the superscript (i)asthe parameters are shared across questions.We introduce a continuous latent variable z∼ N (μ,Σ)of dimension (J×K)to model the realtruthfulness of the answers. The marginal distribution of zjkhas shape zjk∼ N (μj, σj)andis determined by the reliability of the LLM j. The covariance Σis determined by the semanticrelations between answers. Following [ 12], we posit that semantically similar answers would havecorrelated truthfulness, while contradictory answers would have negating truthfulness. As such,when considering all zjkjointly, z’s correlation matrix Ris determined by an NLI model [ 35] thatclassifies a pair of answers as entailment, contradiction, or neutral:Ejk,j′k′=Entails (ajk, aj′k′)Cjk,j′k′=Contradicts (ajk, aj′k′)Rjk,j′k′= (Ejk,j′k′+Ej′k′,jk)/2−(Cjk,j′k′+Cj′k′,jk)/2which ensures a correlation of 1for equivalent answers and −1for mutually exclusive ones. Thecovariance matrix Σis then computed as Σ=R⊙σσTand is projected to the nearest positivesemi-definite matrix in the Frobenius norm [2].The data yis assumed to the noisy and uncalibrated observation of the truthfulness z, which iscalibrated by logistic regression yjk∼Bernoulli (sigmoid (β0j+β1j·zjk))such that β1≥0. Theparameters μ,σ, andβare unique for each LLM and are shared across questions. An example of thefull generative model is shown in Fig. 1.For the ease of implementation, we use EM algorithm to learn the parameters by maximizing log-likelihood using stochastic gradient descent and sampling from p(z|y)using NUTS [ 11,33,1]. Aswith most baseline methods to be compared in Section 4, we select the best answer with the lowestuncertainty as the final answer for TD. For our model, the uncertainty of answer ajkis quantified bythe lower bound of 99.9%confidence interval of the mean of the posterior distribution p(zjk|y).4 Experiments and ResultsBaselines. We perform a comprehensive comparison of our Bayesian approach against recentstatistical andagentic methods for UQ and TD related tasks. In addition, we include oracle baselinesthat always choose answers from a single LLM or choose a correct answers if one exists.Statistical methods are based on the frequency or likelihood of candidate answers and direct consis-tency measures, with the most commonplace method being simple majority voting . [17] assumesthe uncertainty of an answer to be the complement of the probability of it being generated, denotedasp(answer) .Semantic entropy [20] clusters candidate answers based on semantic similarity andassigns uncertainty based on the entropy of the clusters. Lin et al. (2024) [24] provides a graph3Laplacian-based remedy to semantic entropy when p(answer )is not available. The random baselineassigns uniform uncertainty to all candidate answers.AbstainQA [8] provides two agentic methods for UQ. The cooperate method asks LLMs to providefeedback on candidate answers, which is then summarized by a judge for a true/false judgment.Thecompete method asks LLMs to draft paragraphs supporting alternative answers and reconsiderthe question, and the uncertainty is estimated by the consistency of final answers. Debate [6] is amulti-round debating framework where LLMs refine their answers based on explanations from otherLLMs. An LLM judge selects the final answer based on the responses from the last round, whichdoes not provide UQ and is only applicable to TD.Tasks and evaluation metrics. Experiments are done on three open-domain question answeringdatasets pertaining to different levels of aleatoric and epistemic uncertainty [ 14]: FreebaseQA [ 16],AmbigQA [ 28], and IMDB-Torso [ 39], with FreebaseQA downsampled to match the size of theother datasets of around 1k. Using 10 different seeds, we sample a total of 40 candidate answers perquestion from four LLMs: Gemma-2-9B [40], GPT-3.5 [29], Llama-3-8B [7], and Mistral-7B [15].Given a question and its candidate answers, the goal of UQ is to estimate the uncertainty of thecandidate answers and it is evaluated by UQ using the area under the receiver operating characteristiccurve (AUROC) that measures how well the uncertainty of the correct answer is separated from theuncertainty of the incorrect answers (Table 1). The goal of TD is to select a correct answer (if any)from the candidate answers and it is evaluated by the final question-answering accuracy (Table 2).Table 1: Bayesian network outperforms baseline methods on all datasets in terms of AUROC foruncertainty quantification. Top non-oracle results are highlighted in green in Tables 1 and 2.FreebaseQA AmbigQA IMDB-TorsoAgenticAbstainQA [8] (cooperate) 0.57 0.56 0.52AbstainQA [8] (compete) 0.77 0.71 0.69StatisticalRandom 0.50 0.50 0.50p(True )[17] 0.74 0.64 0.65Simple majority voting 0.90 0.76 0.86Semantic entropy [20] 0.85 0.74 0.86Lin et al. (2024) [24] 0.53 0.52 0.53Bayesian network (ours) 0.93 0.82 0.88Table 2: Bayesian network has high accuracy when picking a single correct answer for AmbigQAand IMDB-Torso questions, compared to other statistical methods.FreebaseQA AmbigQA IMDB-TorsoOracleGemma-2-9B 83.1 57.7 36.0GPT-3.5 93.0 72.1 52.2Llama-3-8B 78.0 46.5 28.7Mistral-7B 81.3 52.2 39.6Best answer 98.0 91.0 73.7AgenticAbstainQA [8] (cooperate) 87.7 59.8 41.6AbstainQA [8] (compete) 86.3 60.5 41.9Debate [6] 88.0 60.5 42.3StatisticalRandom 83.4 61.1 42.7p(True )[17] 84.1 54.5 39.8Simple majority voting 91.1 67.0 49.4Semantic entropy [20] 92.3 70.4 50.3Lin et al. (2024) [24] 81.9 56.7 36.7Bayesian network (ours) 91.0 74.0 57.345 DiscussionOur Bayesian model consistently performs the best among all baseline methods when ranking candi-date answers to align with truthfulness, making it a promising approach for uncertainty quantification.For improving question answering (TD), the Bayesian model outperforms other statistical models onharder datasets. Despite the relative success of multi-agent models in TD, it is noteworthy that thefast growth of context length and computation constrains scaling with the number of agents, which iscrucial to performance [ 22]. In the meantime, the high computational cost of multi-agent modelsdoes not always translate to better performance, as shown in our experiments.The relative high gap between the best performing model and the “best answer” oracle suggests thatrelying solely on “wisdom of the crowd” is not always reliable, and the best answer is not alwaysthe most popular one [ 19]. However, our method outperforms other voting and consistency-basedmethods, especially when the majority of the answers are incorrect. Other methods work at theinstance level, while our Bayesian model leverages the information across all questions and answersto learn about the relative reliability of each LLM. This is particularly useful, as the Bayesian modelcould put more weight on the more reliable LLMs even if they are in the minority, which is notexplicitly modeled with other methods.The answers to FreebaseQA questions are more uniform than those to AmbigQA and IMDB-Torsoquestions. When the answers are diverse (a signal of difficulty), the Bayesian model outperformsother methods. Uniformity can be measured without ground truths by counting the number of uniqueanswers, and it could be used in the future to determine whether the Bayesian model should be usedfor a particular dataset.Although we are exploring methods to rank answers without ground truths, we observe from theoracles that unsurprisingly, having labelled data for benchmarking and calibration might be morebeneficial in practical applications. For example, evidence from the oracles and our calibration model(β1= 1.3for GPT-3.5 and 0.5−0.7for other LLMs) both show that GPT-3.5 is more reliable andconfident than other LLMs, an important clue that current methods have not fully leveraged.In the typical crowdsourcing setting, annotators work on instances of the same task, while in questionanswering the questions can be heterogeneous and diverse, which our current model fail to consider.The Bayesian framework allows for the incorporation of factors such as question domain or difficulty,which could be added in the future to improve our model’s performance.6 ConclusionUsing a Bayesian generative model, we propose a novel approach to measure the uncertainty ofanswers from multiple LLM-based QA systems, that would better support informed decisions forabstaining from false answers and identifying truthful ones. As a future direction, the Bayesian modelprovides a principled method to provide a signal of correctness without ground truths, which couldbe used for data synthesis to improve question answering systems, in combination with filtering andfinetuning, or preference optimization.Acknowledgments and Disclosure of FundingWe acknowledge the support of the Natural Sciences and Engineering Research Council of Canada(NSERC). This work is supported in part by a gift from Scotiabank, and is funded in part by theHORIZON Research and Innovation Action 101135576 INTEND “Intent-based data operation inthe computing continuum”. Jerin George Mathew is financed by the Italian National PhD Programin AI. Dr. Xuefei Ning contributed substantially to the camera-ready version by correcting criticalmethodological misunderstandings, assisting with method development, and contributing to bothexperiment design and paper writing.References[1]Eli Bingham, Jonathan P. Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, TheofanisKaraletsos, Rohit Singh, Paul A. Szerlip, Paul Horsfall, and Noah D. Goodman. Pyro: Deepuniversal probabilistic programming. J. Mach. Learn. Res. , 20:28:1–28:6, 2019.5[2]Sheung Hun Cheng and Nicholas J. Higham. A modified cholesky algorithm based on a sym-metric indefinite factorization. SIAM Journal on Matrix Analysis and Applications , 19(4):1097–1110, 1998.[3]Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and JohnSchulman. Training verifiers to solve math word problems, 2021.[4]A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using theem algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics) , 28(1):20–28,1979.[5]Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, ThomasStrohmann, Shaohua Sun, and Wei Zhang. Knowledge vault: a web-scale approach to proba-bilistic knowledge fusion. In The 20th ACM SIGKDD International Conference on KnowledgeDiscovery and Data Mining, KDD ’14, New York, NY, USA - August 24 - 27, 2014 , pages601–610. ACM, 2014.[6]Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, and Igor Mordatch. Improvingfactuality and reasoning in language models through multiagent debate. In Proceedings of the41st International Conference on Machine Learning , volume 235 of Proceedings of MachineLearning Research , pages 11733–11763. PMLR, 21–27 Jul 2024.[7]Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle,Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, andet al. The llama 3 herd of models, 2024.[8]Shangbin Feng, Weijia Shi, Yike Wang, Wenxuan Ding, Vidhisha Balachandran, and YuliaTsvetkov. Don’t hallucinate, abstain: Identifying LLM knowledge gaps via multi-LLM col-laboration. In Proceedings of the 62nd Annual Meeting of the Association for ComputationalLinguistics (Volume 1: Long Papers) , pages 14664–14690, Bangkok, Thailand, August 2024.Association for Computational Linguistics.[9]Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl, Preslav Nakov, and Iryna Gurevych.A survey of confidence estimation and calibration in large language models. In Proceedingsof the 2024 Conference of the North American Chapter of the Association for ComputationalLinguistics: Human Language Technologies (Volume 1: Long Papers) , pages 6577–6595,Mexico City, Mexico, June 2024. Association for Computational Linguistics.[10] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neuralnetworks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th InternationalConference on Machine Learning , volume 70 of Proceedings of Machine Learning Research ,pages 1321–1330. PMLR, 06–11 Aug 2017.[11] Matthew D. Hoffman and Andrew Gelman. The no-u-turn sampler: Adaptively setting pathlengths in hamiltonian monte carlo. Journal of Machine Learning Research , 15(47):1593–1623,2014.[12] Bairu Hou, Yang Zhang, Jacob Andreas, and Shiyu Chang. A probabilistic framework for LLMhallucination detection via belief tree propagation. CoRR , abs/2406.06950, 2024.[13] Yue Huang, Lichao Sun, Haoran Wang, Siyuan Wu, Qihui Zhang, Yuan Li, Chujie Gao,Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Hanchi Sun, Zhengliang Liu, Yixin Liu,Yijue Wang, Zhikun Zhang, Bertie Vidgen, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao,Chunyuan Li, Eric P. Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang,Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, JianPei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, JoaquinVanschoren, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, MichaelBackes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying,Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Yang Wang,Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, YanfangYe, Yinzhi Cao, Yong Chen, and Yue Zhao. Position: TrustLLM: Trustworthiness in large6language models. In Proceedings of the 41st International Conference on Machine Learning ,volume 235 of Proceedings of Machine Learning Research , pages 20166–20270. PMLR, 21–27Jul 2024.[14] Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machinelearning: An introduction to concepts and methods. Machine learning , 110(3):457–506, 2021.[15] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chap-lot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier,Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril,Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7b. CoRR , abs/2310.06825,2023.[16] Kelvin Jiang, Dekun Wu, and Hui Jiang. FreebaseQA: A new factoid QA data set matchingtrivia-style question-answer pairs with Freebase. In Proceedings of the 2019 Conference of theNorth American Chapter of the Association for Computational Linguistics: Human LanguageTechnologies, Volume 1 (Long and Short Papers) , pages 318–323, Minneapolis, Minnesota,June 2019. Association for Computational Linguistics.[17] Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez,Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston,Sheer El Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, SamBowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion,Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei,Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and JaredKaplan. Language models (mostly) know what they know. CoRR , abs/2207.05221, 2022.[18] Miyoung Ko, Sue Hyun Park, Joonsuk Park, and Minjoon Seo. Hierarchical deconstructionof LLM reasoning: A graph-based framework for analyzing knowledge utilization. In YaserAl-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conferenceon Empirical Methods in Natural Language Processing , pages 4995–5027, Miami, Florida,USA, November 2024. Association for Computational Linguistics.[19] Yuqing Kong, Yunqi Li, Yubo Zhang, Zhihuan Huang, and Jinzhao Wu. Eliciting thinkinghierarchy without a prior. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho,and A. Oh, editors, Advances in Neural Information Processing Systems , volume 35, pages13329–13341. Curran Associates, Inc., 2022.[20] Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. Semantic uncertainty: Linguistic invariancesfor uncertainty estimation in natural language generation. In The Eleventh InternationalConference on Learning Representations , 2023.[21] Jiyi Li. Crowdsourced text sequence aggregation based on hybrid reliability and representation.InProceedings of the 43rd International ACM SIGIR Conference on Research and Developmentin Information Retrieval , SIGIR ’20, page 1761–1764, New York, NY , USA, 2020. Associationfor Computing Machinery.[22] Junyou Li, Qin Zhang, Yangbin Yu, Qiang Fu, and Deheng Ye. More agents is all you need.Transactions on Machine Learning Research , 2024.[23] Qi Li, Yaliang Li, Jing Gao, Lu Su, Bo Zhao, Murat Demirbas, Wei Fan, and Jiawei Han.A confidence-aware approach for truth discovery on long-tail data. Proc. VLDB Endow. ,8(4):425–436, dec 2014.[24] Zhen Lin, Shubhendu Trivedi, and Jimeng Sun. Generating with confidence: Uncertainty quan-tification for black-box large language models. Transactions on Machine Learning Research ,2024.[25] Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng,Yegor Klochkov, Muhammad Faaiz Taufiq, and Hang Li. Trustworthy LLMs: a survey andguideline for evaluating large language models’ alignment. CoRR , abs/2308.05374, 2023.7[26] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, UriAlon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa PrasadMajumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback. In Advances in Neural Information ProcessingSystems , volume 36, pages 46534–46594. Curran Associates, Inc., 2023.[27] Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Ha-jishirzi. When not to trust language models: Investigating effectiveness of parametric andnon-parametric memories. In Proceedings of the 61st Annual Meeting of the Association forComputational Linguistics (Volume 1: Long Papers) , pages 9802–9822, Toronto, Canada, July2023. Association for Computational Linguistics.[28] Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. AmbigQA: Answeringambiguous open-domain questions. In Proceedings of the 2020 Conference on EmpiricalMethods in Natural Language Processing (EMNLP) , pages 5783–5797, Online, November2020. Association for Computational Linguistics.[29] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin,Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton,Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano,Jan Leike, and Ryan Lowe. Training language models to follow instructions with humanfeedback. In Advances in Neural Information Processing Systems , volume 35, pages 27730–27744. Curran Associates, Inc., 2022.[30] Rebecca J. Passonneau and Bob Carpenter. The benefits of a model of annotation. Transactionsof the Association for Computational Linguistics , 2:311–326, 2014.[31] Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, and MassimoPoesio. Comparing Bayesian models of annotation. Transactions of the Association forComputational Linguistics , 6:571–585, 2018.[32] Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu,and Alexander Miller. Language models as knowledge bases? In Proceedings of the 2019Conference on Empirical Methods in Natural Language Processing and the 9th InternationalJoint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages 2463–2473, HongKong, China, November 2019. Association for Computational Linguistics.[33] Du Phan, Neeraj Pradhan, and Martin Jankowiak. Composable effects for flexible and acceler-ated probabilistic programming in numpyro. arXiv preprint arXiv:1912.11554 , 2019.[34] Xin Qiu and Risto Miikkulainen. Semantic density: Uncertainty quantification for largelanguage models through confidence measurement in semantic space. In The Thirty-eighthAnnual Conference on Neural Information Processing Systems , 2024.[35] Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Kentaro Inui, Jing Jiang, Vincent Ng, and Xiaojun Wan, editors, Proceedingsof the 2019 Conference on Empirical Methods in Natural Language Processing and the 9thInternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP) , pages3982–3992, Hong Kong, China, November 2019. Association for Computational Linguistics.[36] Salva Rühling Cachay, Benedikt Boecking, and Artur Dubrawski. End-to-end weak supervision.In M. Ranzato, A. Beygelzimer, Y . Dauphin, P.S. Liang, and J. Wortman Vaughan, editors,Advances in Neural Information Processing Systems , volume 34, pages 1845–1857. CurranAssociates, Inc., 2021.[37] Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time computeoptimally can be more effective than scaling model parameters, 2024.[38] Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Ng. Cheap and fast – but isit good? evaluating non-expert annotations for natural language tasks. In Mirella Lapataand Hwee Tou Ng, editors, Proceedings of the 2008 Conference on Empirical Methods inNatural Language Processing , pages 254–263, Honolulu, Hawaii, October 2008. Associationfor Computational Linguistics.8[39] Kai Sun, Yifan Xu, Hanwen Zha, Yue Liu, and Xin Luna Dong. Head-to-tail: How knowl-edgeable are large language models (LLMs)? A.K.A. will LLMs replace knowledge graphs?InProceedings of the 2024 Conference of the North American Chapter of the Association forComputational Linguistics: Human Language Technologies (Volume 1: Long Papers) , pages311–325, Mexico City, Mexico, June 2024. Association for Computational Linguistics.[40] Gemma Team. Gemma 2: Improving open language models at a practical size, 2024.[41] Dmitry Ustalov, Nikita Pavlichenko, and Boris Tseitlin. Learning from crowds with crowd-kit.Journal of Open Source Software , 9(96):6227, 2024.[42] Fanqi Wan, Xinting Huang, Deng Cai, Xiaojun Quan, Wei Bi, and Shuming Shi. Knowl-edge fusion of large language models. In The Twelfth International Conference on LearningRepresentations , 2024.[43] Hongyi Wang, Felipe Maia Polo, Yuekai Sun, Souvik Kundu, Eric Xing, and Mikhail Yurochkin.Fusing models with complementary expertise. In The Twelfth International Conference onLearning Representations , 2024.[44] Peiyi Wang, Lei Li, Zhihong Shao, Runxin Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu,and Zhifang Sui. Math-shepherd: Verify and reinforce LLMs step-by-step without humanannotations. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings ofthe 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: LongPapers) , pages 9426–9439, Bangkok, Thailand, August 2024. Association for ComputationalLinguistics.[45] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V . Le, Ed H. Chi, Sharan Narang, AakankshaChowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in languagemodels. In The Eleventh International Conference on Learning Representations, ICLR 2023,Kigali, Rwanda, May 1-5, 2023 . OpenReview.net, 2023.[46] Jacob Whitehill, Ting-fan Wu, Jacob Bergsma, Javier Movellan, and Paul Ruvolo. Whosevote should count more: Optimal integration of labels from labelers of unknown expertise. InY . Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta, editors, Advances in NeuralInformation Processing Systems , volume 22. Curran Associates, Inc., 2009.[47] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, EnboZhao, Yu Zhang, Yulong Chen, et al. Siren’s song in the ai ocean: a survey on hallucination inlarge language models. arXiv preprint arXiv:2309.01219 , 2023.[48] Yudian Zheng, Guoliang Li, Yuanbing Li, Caihua Shan, and Reynold Cheng. Truth inference incrowdsourcing: Is the problem solved? Proc. VLDB Endow. , 10(5):541–552, 2017.9 |
3iDxHRQfVy | Had Enough of Experts? Elicitation and Evaluationof Bayesian Priors from Large Language ModelsDavid Selby1∗Kai Spriestersbach1Yuichiro Iwashita2Dennis Bappert3Archana Warrier1Sumantrak Mukherjee1Muhammad Nabeel Asim1Koichi Kise2Sebastian Vollmer11DFKI GmbH2Osaka Metropolitan University3Amazon Web ServicesAbstractLarge language models (LLMs) have been extensively studied for their abilities togenerate convincing natural language sequences, however their utility for quanti-tative information retrieval is less well understood. Here we explore the feasibilityof LLMs as a mechanism for quantitative knowledge retrieval to aid elicitation ofexpert-informed prior distributions for Bayesian statistical models. We present aprompt engineering framework, treating an LLM as an interface to scholarly lit-erature, evaluating responses in different contexts and domains. We discuss theimplications and challenges of treating LLMs as ‘experts’.1 IntroductionAutomated solutions for life sciences, industrial and governmental processes demand large amountsof data, which are not always available or complete. Small datasets are vulnerable to overfitting,weakening the validity, reliability and generalizability of statistical insights. To overcome theselimitations, analysts employ two approaches. Data-based or empirical methods maximize infor-mation extraction, through imputation models, data augmentation and transfer learning; however,this is limited by the size, availability and representativeness of the training set. Alternatively, onecan exploit prior information, via knowledge graphs or expert-elicited Bayesian priors, allowing forsparser models and handling of missing values. This approach is constrained by the difficulty, costand myriad different methods of obtaining and eliciting subjective and heterogeneous opinions fromexperts, then translating them into a form amenable to quantitative analysis [1].Large language models (LLMs) are generative models capable of producing natural language textsbased on a given prompt or context. LLMs such as GPT-4 have been used in various applications,such as chatbots, summarization and content creation. In the quantitative sciences, LLMs have beenapplied to mostly qualitative tasks such as code completion, teaching of mathematical concepts [2]and offering advice on modelling workflows or explaining data preparation pipelines [3, 4]. Somework has also applied LLMs to mathematical reasoning and symbolic logic [5, 6]. When linked withcertain application programming interfaces (APIs), or incorporated into a retrieval-augmented gen-eration (RAG) tool, some LLM frameworks [e.g. 7] are also capable of evaluating code, connectingto other data analysis tools or looking up supporting information [8, 9]. However, the capabilitiesof LLMs to retrieve accurate and reliable quantitative information are less well-explored. Here weexplore eliciting from LLMs informative ‘expert’ priors for Bayesian models.Can large language models be considered ‘experts’, having read a large sample of the scientificliterature in their training corpora, and thus treated as an accessible interface to this knowledge?Here we develop a prompting methodology to elicit prior distributions from LLMs, emulating real-∗[email protected] on Bayesian Decision-making and Uncertainty, 38th Conference on Neural Information ProcessingSystems (NeurIPS 2024).world elicitation protocols. LLM-elicited priors are compared with those from human experts, andthe LLM ‘expertise’ is quantitatively evaluated for several tasks.2 Related workLanguage models have been noted for their remarkable ability to act as unsupervised knowledgebases [10]. [11, 12] discuss the ‘emergent’ numeracy skills of LLMs, from early models unable toperform simple addition to later versions able to compute correlations. [13] showed that repeatedsampling from LLMs does not yield reasonable distributions of random numbers, making them poordata generators. [14] also suggested LLMs tend to underestimate uncertainty. It has been hypothe-sized that mode collapse can inhibit the diversity of LLM outputs [15]. The design, adaptation anduse of LLMs to assist data analysis is a broad topic. Many LLM-based data science tools focuseither on generation of analysis code [16] or connection with external APIs [7]. LLMs fine-tuned onscientific texts may be used to extract qualitative information, such as chemical formulae or entityrelations [17]. A conversation with a chatbot can also offer generic advice on data science practices.Prior distributions are just one form of knowledge elicited from domain experts; others include fea-ture engineering, model explanations and labelling heuristics, but in each case the process of elici-tation typically involves interviews, written correspondence or interaction with a custom computerapp [18]. A good expert-elicited prior distribution can help a statistical model effectively representthe data generating process, although due to various practical, technical and societal factors, priorelicitation is not yet widespread practice. A lack of standardized software means there is no way foran analyst using, e.g. Stan, to initiate an elicitation exercise for a specific model [19].LLM- driven elicitation [20] uses an LLM to assist elicitation from human experts, making the pro-cess interactive. In engineering, LLMs have been employed in generating (and responding to) re-quirements elicitation surveys [21–23]. Natural language processing is already extensively used toextract quantitative information from large texts to aid quantitative research [see, e.g. 24]. Prior dis-tributions can be elicited from literature via systematic reviews [25–27]. A meta-analytic-predictiveprior uses historical data to reduce the required sample size in clinical trials [28]. To our knowledge,direct elicitation of parametric priors from a ‘domain expert’ LLM has not yet been explored. [29]generated pseudodata as an indirect prior elicitation approach; by contrast, in this paper we attemptto elicit the distributional parameters directly.Several elicitation protocols have been developed to mitigate cognitive biases and combine thejudgements of multiple experts [30]. The Sheffield Elicitation Framework [S HELF ; 31] describesa collection of methods for eliciting a distribution based on aggregated opinions of multiple experts,through group discussion guided by a facilitator. Following a primer in probability and statistics, theprotocol includes various ways of eliciting a univariate distribution, such as the ‘roulette method’,where participants assign chips to bins to form a histogram. Alternatively, the quartile method [or‘Sheffield method’; 32] uses a series of questions to elicit quantiles of a distribution. Cooke’s method[33] pools the distributions of multiple experts, weighted according to their respective ‘calibration’(accuracy) and ‘information’ (uncertainty). The Delphi method uses the quartile method, iterativelyrefined over successive rounds using anonymized feedback from other participants. In this paper,however, we consider only single-agent LLMs with a zero-shot approach.3 Methods3.1 Evaluating expertiseWhat makes a good prior? Bayesian statistics involves decisionmaking based on a posterior dis-tribution, p(θ|D)∝π(θ)Qni=1p(xi|θ),where π(θ)denotes the prior distribution and θa vectorof parameters to model xi, the observed data. The definition of a ‘good’ prior distribution—likeBayesian statistics itself—is subjective, depending on the analyst’s understanding of the purpose ofexpert-elicited information. No standard benchmark exists for expert-elicited prior distributions; aprior is a function of the expert and the elicitation method, as well as of the predictive task [34].One purpose of prior information is to reduce amount of data needed. Another is to treat expertknowledge and observed data as complementary sources of information about a natural process.Any statistical model is at least slightly misspecified, but a prior can still be informative ,realisticanduseful [see 35]. An informative prior is different from a non-informative or default prior, i.e.2it is not too vague. Realistic or well-calibrated priors should align with those from human expertsor be otherwise externally verifiable. ‘Useful’ means superior posterior predictive performance ona downstream task, improving expected utility over reference priors. Here we consider informative-ness and realism.A measurement of the informativeness of a prior distribution is the prior effective sample size [36,37]. This is neither data-dependent nor measures improvement on downstream tasks, but how manydata points needed to get similar peakiness/curvature around the posterior mode. The heuristic prioreffective sample size for Beta (α, β)is ESS =α+β[36], which measures the concentration of theprior and the amount of data needed to shift the posterior if the prior were misspecified.We can measure realism with the Bayesian log posterior predictive density [38] (a.k.a. log loss) orthe continuous ranked probability score, a proper scoring rule used in weather forecasting [39]. Wecan estimate both metrics using the posterior predictive distribution p(x′|D) =Ep(θ|D)[p(x′|θ)]onheld-out data. [40] describe a similar approach quantifying utility of synthetic data.3.2 Eliciting prior distributions from LLMsImpersonating a human domain expert can improve an LLM’s performance at related tasks [41].Nevertheless, in response to scientific questions, especially on potentially sensitive topics, suchas healthcare advice, language models often prevaricate [ lautrup ̇heart-to-heart ̇2023 ]. An LLMelicitation system should therefore not only prompt the model to roleplay an expert, but also carefullyspecify the task to ensure contextually relevant information is returned in the appropriate format.Ourexpert prompt initialization module is a system prompt defining a suitable expert role for themodel to imitate. For efficiency, the LLM itself is used to generate these descriptions, once pertask, of the form “You are a...”. To avoid the model offering verbose, generic or prevaricatingadvice about prior elicitation, the task specification module insists that the model follows a particularelicitation protocol followed by returning a parametric prior distribution in a standardized format,e.g. “ Beta(1, 1) ”. Further details are given in the appendix and code is available on GitHub.3.3 ExperimentsHuman experts Absent an open benchmark of expert-elicited priors, we select a recent workfrom the literature that describes an elicitation procedure and reported the resulting distributions.[42] interviewed six psychology researchers about typical small-to-medium effect sizes and Pearsoncorrelations in their respective specialisms, using the histogram method. Using similar questionwording, we elicited prior distributions from LLMs prompted to simulate an expert, conferenceof experts [43] or non-expert, with and without mentioning the S HELF protocol. This experimentis a qualitative comparison of how LLMs behave when emulating a published example of a priorelicitation exercise with published question wording and results.Expert confidence We prompted ChatGPT 3.5 to formulate 25 tasks that might call for expertelicitation in the fields of healthcare, economics, technology, environmental science, marketing andeducation. Tasks correspond to proportions or probabilities following a beta distribution. Thesescenarios were then used to gauge general levels of confidence of elicited distributions from differentLLMs, using the prior effective sample size metric, α+β.Meteorology Here we tried to illustrate how many samples the LLM prior offers for an analystwho has not yet collected any data. We compare the prior predictive to probabilistic supervisedlearning in the same statistical family [44]: a normal-inverse-gamma model for temperature and agamma-exponential for precipitation. We ask: how many samples on average would a frequentistmodel need to achieve the same or better log-loss (or CRPS or MSE) than the prior predictivedistribution? We split the data in half for testing and repeatedly sample up to13for training fromthe remaining half. An alternative comparison would be of a posterior predictive based on data anda baseline prior, however choosing such a baseline is difficult. Unlike the ( α+β) effective samplesize heuristic, this data-dependent approach quantifies prior–data conflict. Priors were elicited fromLLMs for the typical daily temperature and precipitation in 25 small and large cities around the worldduring the month of December. These distributions were then compared with historical weather data.By investigating different continents and varying sizes of settlements, the goal was to identify any3systematic biases that might emerge from LLMs’ respective training corpora. It is also interestingto compare the behaviour with skewed and symmetrical distributions.4 ResultsFigure 1: Priors for Cohen’s δ(left) and Pearson correlations (right) elicited from LLM and humanexperts in psychology. Dashed lines denote a S HELF -like elicitation protocolFigure 2: LLM priors for meteorology: number of observations needed for frequentist model toachieve better MSE than the prior predictive (right figure shows results for GPT-4)Human and LLM-elicited distributions are compared in Figure 1. Roleplaying as experts in dif-ferent sub-fields did not have a noticeable effect on the priors. LLM priors for Cohen’s δweremostly centred around small effect sizes, except GPT-4, which offered distributions around δ= 0.5.Mistral-7B-Instruct invariably gave tdistributions with ν= 30 (Llama-70B-Chat-Q5: ν= 5); othermodels appeared to grow more conservative (smaller ν) if asked to roleplay an expert, simulate adecision conference or employ S HELF . Beta priors from LLMs apparently had little in common withthose from real experts: GPT-4 provides a symmetric unimodal distribution whereas other modelsoffer a right-skewed ‘bathtub’ distribution.For the expert confidence experiment, Figure 3 shows α+βfor beta priors. Llama-based modelsappear to give more conservative priors, GPT-4 is consistently more informative and Mistral 7BInstruct occasionally offered extremely high values. There was no clear difference between domains.In our meteorological task, Figure 2 shows data-dependent effective sample size of the prior predic-tive distribution elicited from LLMs, using the approach described above. In many cases, the priorpredictive model is in conflict with the data (i.e. overconfident, inaccurate priors) so the ESS is equalto zero, but not for a selection of larger cities. This may be due to the LLMs defaulting to data frommore extensively studied regions in their training corpora.Further results are given in the appendix.45 ConclusionIn this paper we demonstrated the feasibility of extracting informative Bayesian prior distributionsfrom generic LLMs with a simple expert prompting framework. Methods for the qualitative andquantitative evaluation of informativeness and realism of elicited priors allow assessment with-out specifying downstream tasks. LLMs potentially promise a more efficient interface to scientificknowledge than recruiting and interviewing domain experts.However, like human experts, the models vary considerably in their level of confidence arounddifferent phenomena, making discrepancies apparently more model- than task-dependent.LLMs areinherently shaped by the composition and diversity of their training data, potentially introducingbiases that may affect the generalizability of results when considering LLMs as surrogate experts orintegrating them into Bayesian reasoning frameworks. Results indicate that quantitative knowledgeretrieval from LLMs has room for improvement, necessitating fine-tuned domain models, advancedprompt engineering techniques or multi-agent frameworks.The comparison of human domain experts and LLM-based expert systems remains challenging, andwarrants further development. Genuine domain expertise continues to play an important role in dataanalysis.References[1] Julia R. Falconer et al. “Methods for Eliciting Informative Prior Distributions: A CriticalReview”. In: Decision Analysis 19.3 (Sept. 2022). Publisher: INFORMS, pp. 189–204. ISSN :1545-8490. DOI:10.1287/deca.2022.0451 .URL:https://pubsonline.informs.org/doi/abs/10.1287/deca.2022.0451 (visited on 10/22/2023).[2] Yousef Wardat et al. “ChatGPT: A revolutionary tool for teaching and learning mathematics”.In:Eurasia Journal of Mathematics, Science and Technology Education 19.7 (July 1, 2023).Publisher: Modestum, em2286. ISSN : 1305-8215, 1305-8223. DOI:10 . 29333 / ejmste /13272 .URL:https://www.ejmste.com/article/chatgpt-a-revolutionary-tool-for-teaching-and-learning-mathematics-13272 (visited on 01/14/2024).[3] Anna Barberio. “Large language models in data preparation: opportunities and challenges”.MSc. Milan, Italy: Politecnico di Milano, Dec. 19, 2023. URL:https://www.politesi.polimi.it/handle/10589/215097 .[4] Hossein Hassani and Emmanuel Sirmal Silva. “The Role of ChatGPT in Data Science: HowAI-Assisted Conversational Interfaces Are Revolutionizing the Field”. In: Big Data and Cog-nitive Computing 7.2 (June 2023). Number: 2 Publisher: Multidisciplinary Digital PublishingInstitute, p. 62. ISSN : 2504-2289. DOI:10.3390/bdcc7020062 .URL:https://www.mdpi.com/2504-2289/7/2/62 (visited on 01/10/2024).[5] Joy He-Yueya et al. Solving Math Word Problems by Combining Language Models WithSymbolic Solvers . Apr. 16, 2023. DOI:10 . 48550 / arXiv . 2304 . 09102 . arXiv: 2304 .09102[cs] .URL:http://arxiv.org/abs/2304.09102 (visited on 01/14/2024).[6] Graziella Orr `u et al. “Human-like problem-solving abilities in large language models usingChatGPT”. In: Frontiers in Artificial Intelligence 6 (2023). ISSN : 2624-8212. URL:https:/ / www . frontiersin . org / articles / 10 . 3389 / frai . 2023 . 1199350 (visited on01/14/2024).[7] Yingqiang Ge et al. OpenAGI: When LLM Meets Domain Experts . Nov. 3, 2023. arXiv: 2304.04370[cs] .URL:http://arxiv.org/abs/2304.04370 (visited on 12/13/2023).[8] Josh M. Nicholson et al. “scite: A smart citation index that displays the context of citationsand classifies their intent using deep learning”. In: Quantitative Science Studies 2.3 (Nov. 5,2021), pp. 882–898. ISSN : 2641-3337. DOI:10.1162/qss_a_00146 .URL:https://doi.org/10.1162/qss_a_00146 (visited on 01/14/2024).[9] Ehsan Kamalloo et al. HAGRID: A Human-LLM Collaborative Dataset for GenerativeInformation-Seeking with Attribution . July 31, 2023. DOI:10.48550/arXiv.2307.16883 .arXiv: 2307 . 16883[cs] .URL:http : / / arxiv . org / abs / 2307 . 16883 (visited on01/10/2024).5[10] Fabio Petroni et al. “Language Models as Knowledge Bases?” In: Proceedings of the 2019Conference on Empirical Methods in Natural Language Processing and the 9th InternationalJoint Conference on Natural Language Processing (EMNLP-IJCNLP) . EMNLP-IJCNLP2019. Ed. by Kentaro Inui et al. Hong Kong, China: Association for Computational Lin-guistics, Nov. 2019, pp. 2463–2473. DOI:10 . 18653 / v1 / D19 - 1250 .URL:https : / /aclanthology.org/D19-1250 (visited on 01/18/2024).[11] David Noever and Forrest McKee. Numeracy from Literacy: Data Science as an EmergentSkill from Large Language Models . Jan. 30, 2023. DOI:10.48550/arXiv.2301.13382 .arXiv: 2301 . 13382[cs] .URL:http : / / arxiv . org / abs / 2301 . 13382 (visited on01/15/2024).[12] Vincent Cheng and Yu Zhang. “Analyzing ChatGPT’s Mathematical Deficiencies: Insightsand Contributions”. In: Proceedings of the 35th Conference on Computational Linguistics andSpeech Processing (ROCLING 2023) . ROCLING 2023. Ed. by Jheng-Long Wu and Ming-Hsiang Su. Taipei City, Taiwan: The Association for Computational Linguistics and ChineseLanguage Processing (ACLCLP), Oct. 2023, pp. 188–193. URL:https://aclanthology.org/2023.rocling-1.22 (visited on 01/15/2024).[13] Aspen K. Hopkins, Alex Renda, and Michael Carbin. “Can LLMs Generate Random Num-bers? Evaluating LLM Sampling in Controlled Domains”. In: ICML 2023 Workshop: Sam-pling and Optimization in Discrete Space. Aug. 2, 2023. URL:https://openreview.net/forum?id=Vhh1K9LjVI (visited on 12/13/2023).[14] Miao Xiong et al. Can LLMs Express Their Uncertainty? An Empirical Evaluation of Con-fidence Elicitation in LLMs . June 22, 2023. DOI:10.48550/arXiv.2306.13063 . arXiv:2306.13063[cs] .URL:http://arxiv.org/abs/2306.13063 (visited on 01/15/2024).[15] Anonymous. Understanding the Effects of RLHF on LLM Generalisation and Diversity .Oct. 13, 2023. URL:https : / / openreview . net / forum ? id = PXD3FAVHJT (visited on01/15/2024).[16] Fadel M. Megahed et al. “How generative AI models such as ChatGPT can be (mis)used inSPC practice, education, and research? An exploratory study”. In: Quality Engineering 0.0(2023). Publisher: Taylor & Francis eprint: https://doi.org/10.1080/08982112.2023.2206479,pp. 1–29. ISSN : 0898-2112. DOI:10 . 1080 / 08982112 . 2023 . 2206479 .URL:https ://doi.org/10.1080/08982112.2023.2206479 (visited on 01/15/2024).[17] Alexander Dunn et al. Structured information extraction from complex scientific text withfine-tuned large language models . Dec. 10, 2022. arXiv: 2212.05238[cond- mat] .URL:http://arxiv.org/abs/2212.05238 (visited on 01/19/2024).[18] Daniel Kerrigan, Jessica Hullman, and Enrico Bertini. “A Survey of Domain KnowledgeElicitation in Applied Machine Learning”. In: Multimodal Technologies and Interaction 5.12(Dec. 2021). Number: 12 Publisher: Multidisciplinary Digital Publishing Institute, p. 73.ISSN : 2414-4088. DOI:10.3390/mti5120073 .URL:https://www.mdpi.com/2414-4088/5/12/73 (visited on 12/12/2023).[19] Petrus Mikkola et al. Prior knowledge elicitation: The past, present, and future . May 9, 2023.DOI:10.48550/arXiv.2112.01380 . arXiv: 2112.01380[stat] .URL:http://arxiv.org/abs/2112.01380 (visited on 10/22/2023).[20] Belinda Z. Li et al. Eliciting Human Preferences with Language Models . Oct. 17, 2023. DOI:10.48550/arXiv.2310.11589 . arXiv: 2310.11589[cs] .URL:http://arxiv.org/abs/2310.11589 (visited on 10/22/2023).[21] Jules White et al. ChatGPT Prompt Patterns for Improving Code Quality, Refactoring, Re-quirements Elicitation, and Software Design . Mar. 11, 2023. DOI:10.48550/arXiv.2303.07839 . arXiv: 2303.07839[cs] .URL:http://arxiv.org/abs/2303.07839 (visited on01/10/2024).[22] Krishna Ronanki, Christian Berger, and Jennifer Horkoff. “Investigating ChatGPT’sPotential to Assist in Requirements Elicitation Processes”. In: 2023 49th Euromi-cro Conference on Software Engineering and Advanced Applications (SEAA) . 202349th Euromicro Conference on Software Engineering and Advanced Applications(SEAA). ISSN: 2376-9521. Sept. 2023, pp. 354–361. DOI:10 . 1109 / SEAA60479 .2023 . 00061 .URL:https : / / ieeexplore . ieee . org / abstract /document / 10371698 ? casa _ token = dDghY2R _ Sl0AAAAA : hW7ejl -6CVqLZGF9RzDqmdlNjQwcCsTYACIBxNWTLmKeFJGGWviMDi - ToxkUa9d3GzQbAr0aKU23j(visited on 01/15/2024).[23] Binnur G ̈orer and Fatma Bas ̧ak Aydemir. “Generating Requirements Elicitation InterviewScripts with Large Language Models”. In: 2023 IEEE 31st International Requirements En-gineering Conference Workshops (REW) . 2023 IEEE 31st International Requirements En-gineering Conference Workshops (REW). ISSN: 2770-6834. Sept. 2023, pp. 44–51. DOI:10.1109/REW57809.2023.00015 .URL:https://ieeexplore.ieee.org/abstract/document / 10260795 ? casa _ token = M3e - 5X4X3lMAAAAA : y - 1W - kYqXjp1CQ _EwuJqGBaaNvPGFxyEvd8I7Vp32kXHsuF9OL6CGDJjmjDIPsw4pFdiPFzgzyB1 (visited on01/10/2024).[24] Elsa A. Olivetti et al. “Data-driven materials research enabled by natural language processingand information extraction”. In: Applied Physics Reviews 7.4 (Dec. 1, 2020), p. 041317. ISSN :1931-9401. DOI:10.1063/5.0021106 .URL:https://pubs.aip.org/apr/article/7/4/041317/832109/Data-driven-materials-research-enabled-by-natural(visited on 01/19/2024).[25] Charlotte Rietbergen et al. “Incorporation of historical data in the analysis of randomizedtherapeutic trials”. In: Contemporary Clinical Trials 32.6 (Nov. 1, 2011), pp. 848–855. ISSN :1551-7144. DOI:10.1016/j.cct.2011.06.002 .URL:https://www.sciencedirect.com/science/article/pii/S1551714411001479 (visited on 01/19/2024).[26] Rens van de Schoot et al. “Bayesian PTSD-Trajectory Analysis with Informed Pri-ors Based on a Systematic Literature Search and Expert Elicitation”. In: Mul-tivariate Behavioral Research 53.2 (Mar. 4, 2018). Publisher: Routledge eprint:https://doi.org/10.1080/00273171.2017.1412293, pp. 267–291. ISSN : 0027-3171. DOI:10.1080/00273171.2017.1412293 .URL:https://doi.org/10.1080/00273171.2017.1412293 (visited on 01/19/2024).[27] Maximilian Linde et al. Data-driven Prior Elicitation for Bayes Factors in Cox Regression forNine Subfields in Biomedicine . Pages: 2023.09.04.23295029. Sept. 5, 2023. DOI:10.1101/2023.09.04.23295029 .URL:https://www.medrxiv.org/content/10.1101/2023.09.04.23295029v1 (visited on 01/19/2024).[28] Sebastian Weber et al. “Applying Meta-Analytic-Predictive Priors with the R Bayesian Ev-idence Synthesis Tools”. In: Journal of Statistical Software 100 (Nov. 30, 2021), pp. 1–32.ISSN : 1548-7660. DOI:10.18637/jss.v100.i19 .URL:https://doi.org/10.18637/jss.v100.i19 (visited on 02/07/2024).[29] Henry Gouk and Boyan Gao. “Automated Prior Elicitation from Large Language Models forBayesian Logistic Regression”. In: AutoML Conference 2024 (Workshop Track). Aug. 9,2024. URL:https://openreview.net/forum?id=euLzlnU7gz (visited on 11/27/2024).[30] Anthony O’Hagan. “Expert Knowledge Elicitation: Subjective but Scientific”. In: TheAmerican Statistician 73 (sup1 Mar. 29, 2019). Publisher: Taylor & Francis eprint:https://doi.org/10.1080/00031305.2018.1518265, pp. 69–81. ISSN : 0003-1305. DOI:10 .1080 / 00031305 . 2018 . 1518265 .URL:https : / / doi . org / 10 . 1080 / 00031305 .2018.1518265 (visited on 01/19/2024).[31] John Paul Gosling. “SHELF: The Sheffield Elicitation Framework”. In: Elicitation: The Sci-ence and Art of Structuring Judgement . Ed. by Luis C. Dias, Alec Morton, and John Quigley.International Series in Operations Research & Management Science. Cham: Springer Inter-national Publishing, 2018, pp. 61–93. ISBN : 978-3-319-65052-4. DOI:10.1007/978- 3-319-65052-4_4 .URL:https://doi.org/10.1007/978-3-319-65052-4_4 (visited on01/23/2024).[32] European Food Safety Authority. “Guidance on Expert Knowledge Elicitation in Foodand Feed Safety Risk Assessment”. In: EFSA Journal 12.6 (June 2014). ISSN : 18314732,18314732. DOI:10.2903/j.efsa.2014.3734 .URL:https://data.europa.eu/doi/10.2903/j.efsa.2014.3734 (visited on 01/23/2024).[33] Roger Cooke. Experts in Uncertainty: Opinion and Subjective Probability in Science .Google-Books-ID: 5nDmCwAAQBAJ. Oxford University Press, 1991. 334 pp. ISBN : 978-0-19-506465-0.7[34] Andrew Gelman, Daniel Simpson, and Michael Betancourt. “The Prior Can Often Only BeUnderstood in the Context of the Likelihood”. In: Entropy 19.10 (Oct. 2017). Number: 10Publisher: Multidisciplinary Digital Publishing Institute, p. 555. ISSN : 1099-4300. DOI:10.3390/e19100555 .URL:https://www.mdpi.com/1099-4300/19/10/555 (visited on12/12/2023).[35] Cameron J. Williams, Kevin J. Wilson, and Nina Wilson. “A Comparison of Prior ElicitationAggregation Using the Classical Method and SHELF”. In: Journal of the Royal StatisticalSociety Series A: Statistics in Society 184.3 (July 1, 2021), pp. 920–940. ISSN : 0964-1998.DOI:10.1111/rssa.12691 .URL:https://doi.org/10.1111/rssa.12691 (visited on11/27/2024).[36] Satoshi Morita, Peter F. Thall, and Peter M ̈uller. “Determining the Effec-tive Sample Size of a Parametric Prior”. In: Biometrics 64.2 (2008). eprint:https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1541-0420.2007.00888.x, pp. 595–602.ISSN : 1541-0420. DOI:10 . 1111 / j . 1541 - 0420 . 2007 . 00888 . x .URL:https ://onlinelibrary.wiley.com/doi/abs/10.1111/j.1541- 0420.2007.00888.x(visited on 02/08/2024).[37] Beat Neuenschwander et al. “Predictively Consistent Prior Effective Sample Sizes”. In: Bio-metrics 76.2 (June 1, 2020), pp. 578–587. ISSN : 0006-341X. DOI:10.1111/biom.13252 .URL:https://doi.org/10.1111/biom.13252 (visited on 02/08/2024).[38] Richard McElreath. Statistical rethinking: a Bayesian course with examples in R and Stan .Texts in statistical science series. Boca Raton: CRC Press Taylor & Francis, 2016. 487 pp.ISBN : 978-1-4822-5344-3.[39] Tilmann Gneiting and Adrian E Raftery. “Strictly Proper Scoring Rules, Prediction, and Es-timation”. In: Journal of the American Statistical Association 102.477 (Mar. 1, 2007). Pub-lisher: Taylor & Francis eprint: https://doi.org/10.1198/016214506000001437, pp. 359–378.ISSN : 0162-1459. DOI:10.1198/016214506000001437 .URL:https://doi.org/10.1198/016214506000001437 (visited on 02/09/2024).[40] Harrison Wilde et al. “Foundations of Bayesian Learning from Synthetic Data”. In: Proceed-ings of The 24th International Conference on Artificial Intelligence and Statistics . Interna-tional Conference on Artificial Intelligence and Statistics. ISSN: 2640-3498. PMLR, Mar. 18,2021, pp. 541–549. URL:https://proceedings.mlr.press/v130/wilde21a.html(visited on 09/16/2022).[41] Leonard Salewski et al. In-Context Impersonation Reveals Large Language Models’ Strengthsand Biases . Nov. 26, 2023. DOI:10.48550/arXiv.2305.14930 . arXiv: 2305.14930[cs] .URL:http://arxiv.org/abs/2305.14930 (visited on 01/19/2024).[42] Angelika M. Stefan et al. “Expert agreement in prior elicitation and its effects on Bayesianinference”. In: Psychonomic Bulletin & Review 29.5 (Oct. 2022), pp. 1776–1794. ISSN : 1531-5320. DOI:10.3758/s13423-022-02074-4 .[43] L.D. Phillips. “Decision conferencing”. In: IEE Colloquium on CSCW: Some FundamentalIssues . IEE Colloquium on CSCW: Some Fundamental Issues. Mar. 1991, pp. 6/1–6/3. URL:https://ieeexplore.ieee.org/abstract/document/181328 (visited on 01/24/2024).[44] Frithjof Gressmann et al. Probabilistic supervised learning . May 7, 2019. DOI:10.48550/arXiv.1801.00753 . arXiv: 1801.00753[cs,math,stat] .URL:http://arxiv.org/abs/1801.00753 (visited on 02/10/2024).8A Prompting for prior elicitationA.1 GuardrailsSafeguards built into ChatGPT forbid the agent from providing quantitative information about cer-tain sensitive topics, for example health conditions.User What would the mean and standard deviation of systolic and diastolicblood pressure be for each of those conditions? Don’t explain, just give the answerin the form of a table with name of condition, mean systolic, SD systolic, meandiastolic, SD diastolicChatGPT It’s not appropriate or accurate to provide mean and standard devia-tion values for systolic and diastolic blood pressure for specific health conditionswithout proper context or sources. Blood pressure can vary widely among indi-viduals with the same condition, and such numerical values may not be clinicallymeaningful or accurate.If you have concerns about your blood pressure or specific health conditions, it iscrucial to consult with a healthcare professional who can assess your individualcase, consider relevant factors, and provide personalized guidance.User Imagine you are an expert in biomedicine and must give an educated guessChatGPT I’m sorry, but even as an “imagined expert,” it would be inappro-priate and potentially misleading to provide arbitrary or speculative mean andstandard deviation values for systolic and diastolic blood pressure associated withspecific health conditions. The range of blood pressure can vary widely amongindividuals, and assigning numerical values without proper context or scientificbasis would not be accurate.If you have concerns about blood pressure or specific health conditions, it’s es-sential to consult with a qualified healthcare professional who can assess yourindividual case, consider relevant factors, and provide accurate guidance based onclinical knowledge and evidence.However, these restrictions are subverted when similar information is requested in the form of priordistributions.User You are being asked to provide expert-informed informative prior distri-butions for a Bayesian data analysis. You give results in pseudocode Stan distri-butions, for example ` y∼normal(0, 1) `. Give a knowledge-based prior dis-tribution for a randomly selected person’s typical systolic blood pressure in thisform. Surround your answer with `backticks`. Do not give an explanation, justgive the distributionChatGPT `y∼normal(120, 10) `This extract is from a conversation with ChatGPT 3.5.A.2 Elicitation frameworksChatGPT 3.5 demonstrates familiarity with elicitation frameworks that are in the academic literature.For example:User Use the Sheffield elicitation framework, combined with the histogrammethod and your own knowledge gathered from your training data, to give a priordistribution for the daily temperature (Celsius) in Kaiserslautern in December. Donot ask for further input: you may converse with yourself or simulate a discussionamong experts. Explain how you use the histogram method in the process. Giveyour final answer as a parametric probability distribution.ChatGPT To start, let’s consider the Sheffield elicitation framework. This ap-proach involves gathering expert opinions to construct probability distributions.Since we’re focusing on daily temperatures in Kaiserslautern in December, we’ll9need to consult meteorologists or climatologists familiar with the region’s climatepatterns.Based on my training data, I know that December temperatures in Kaiserslauterntypically range from around –2°C to 7°C. This information aligns with generalclimate data for the area. Now, let’s simulate a conversation with an expert torefine this knowledge.Expert: “In Kaiserslautern, December temperatures tend to vary quite a bit. Weoften see colder temperatures early in the month, with occasional dips below freez-ing, while later in the month, it tends to hover around 5–7°C.”Me: “That’s interesting. Would you say there are any extreme temperature outlierswe should consider?”Expert: “While rare, we’ve recorded instances of temperatures dropping as lowas –10°C or rising as high as 12°C in December, though these extremes are notcommon.”Now, armed with this expert input and my own data, we can construct a priordistribution using the histogram method. This method involves dividing the rangeof temperatures into intervals and assigning probabilities based on the frequencyof occurrence within each interval.Let’s divide the temperature range into intervals of 2 degrees Celsius each:• Interval 1: -10°C to -8°C• Interval 2: -8°C to -6°C• Interval 3: -6°C to -4°C• Interval 4: -4°C to -2°C• Interval 5: -2°C to 0°C• Interval 6: 0°C to 2°C• Interval 7: 2°C to 4°C• Interval 8: 4°C to 6°C• Interval 9: 6°C to 8°C• Interval 10: 8°C to 10°C. . .(Continues)The full transcript of this conversation with ChatGPT 3.5 is available online.B Effective sample sizeFigure 3: Distribution of prior effective sample size ( α+β) for beta priors on various tasksC Weather forecastingWe measure the effective increase in observations, starting from zero samples, for a frequentistmodel to obtain better mean squared error (MSE) than the prior predictive distribution elicited from10the LLM. The effective sample size (ESS) is the number of samples needed by the frequentist modelto outperform the prior predictive model. In many cases, the prior predictive model is in conflictwith the data and so the ESS is equal to zero (or, strictly speaking, 2, as this is the minimum numberof samples with which one can compute an empirical standard deviation).1112 |
xC2xtBLmri | CAFA: Coding as Auto-Formulation Can Boost LargeLanguage Models in Solving Linear ProgrammingProblemHaoxuan Deng, Bohao Zheng, Yirui Jiang, Trung Hieu TranCranfield University, United Kindom{haoxuan.deng, bohao.zheng, yirui.jiang, t.h.tran}@cranfield.ac.ukAbstractLarge language models (LLMs) open new doors for Operations Research (OR).While initial studies explored multi-agent strategies for LLMs in OR, our researchchallenges the assumption that such complex multi-step pipelines unnecessarilyyield superior results for Linear Programming (LP) problems. This paper introducesa streamlined methodology: Coding as Auto-Formulation (CAFA). In comparison,CAFA is only one compact prompt guiding the LLMs to formalize the givenproblem text into lines of codes. The generated code will be post-processing forexecution to get the answer. The proposed methods is tested on the NL4OPTdataset with different LLMs. Results suggest that despite its simplicity, consistentlyenhances LP problem-solving accuracy across different models. This study aimsto shed light on better unleashing LLMs’ mathematical reasoning capability withmore streamlined prompts. The code of this paper can be found in https://github.com/BlueAsuka/CAFA1 IntroductionLarge Language Models (LLMs) have transformed natural language processing with their ability tounderstand, analyze, and generate human-like text. Their success has led to increasing interest inapplying LLMs to solve mathematical word problems Cobbe et al. [2021], drawing attention fromthe Operations Research (OR) field. Traditionally, solving OR problems like linear programming(LP) involves substantial human expertise to extract and formalize information for establishingoptimization models and to use commercial solvers (CPLEX, Gurobi, etc.) to get the solution. Therise of LLMs offers the potential to automate this process, leading to the development of a newresearch area: LLMs for operations research (LLM4OR). Given that the research on LLM4OR is stillin its early stages, this paper will narrow its focus specifically to the LP problem. Mathematically,LP is maximizing or minimizing a linear objective function, subject to linear inequality and equalityconstraints. It can be expressed as:min ormaxcTx,where Ax≤b,x≥0 (1)Where xis the vector of decision variables, crepresents the coefficients of the objective function,andAandbdefine the parameters and limits of constraints.Recent research on this topic has explored various methods. The main idea is to carefully designinstructions to guide the LLM to decompose the optimization process into a series of sub-tasks andsolve them sequentially to derive the answer. Most of the research follows the abstract workflowillustrated in Figure 1. Typically, the workflow mainly includes four components: 1). An analyzerfor name-entity recognition of variables and relations in the given text for objectives and constraints37th Conference on Neural Information Processing Systems (NeurIPS 2023) Workshop on MATH-AI.Figure 1: The abstract diagram of LLMs for the linear programming optimization automatic workflowextraction. 2). A parser to formulate the extracted information into mathematical expressions. 3).A coder to compose a code snippet to call external solvers for problem-solving. 4). An executor(usually using a Python interpreter) runs the code to obtain the final solution.Remarkable works include Chain-of-Expert (CoE) Xiao et al. [2023] and OptiMUS AhmadiTeshniziet al. [2024]. Both of these approaches employ multi-agent systems based on LLM by assigningmultiple expert agents with distinct roles to implement the mentioned components of the LP op-timization process to get the solution. OptiGuide Li et al. [2023] extends the applications of theproposed method to supply chain optimization, allowing non-technical users in the field of logistic touse optimization packages.Although the effectiveness of these methods have been demonstrated, their implementations rely onautonomous agent workflows that demand extensive prompt engineering and cutting-edge modelslike GPT-4 or Claude2. This raises two key questions: (1) Is a multi-agent framework with intensiveprompt engineering essential for unlocking LLMs’ problem-solving potential in LP tasks? (2) Howcan less-capable LLMs (open source models) be leveraged to reason and solve LP problems withreduced prompt engineering requirements? This paper seeks to address these two challenges.The remainder of this paper is organized as follows. Section 2 presents a probabilistic framework tomodel the multi-agent water flow diagram used for LP automation and analysis. Section 3 providesan in-depth discussion of the motivation behind and the design details of the CAFA framework.In Section 4, experimental evaluations are conducted on the NL4Opt dataset, with performancecomparisons against selected baseline models. Section 5 addresses the limitations of the currentapproach and discusses potential directions for future research. Finally, the paper concludes with abrief summary of key findings.2 Problem Formulation and AnalysisConsider a Linear Programming (LP) problem presented in text, denoted as Q, and a languagemodel Lθtasked with generating the correct answer A, where θrepresents the fixed parameters ofthe pretrained model. The probability of obtaining the correct answer A, given the question Q, isexpressed as Pθ(A|Q). For simplicity, when using a single model Lθin a multi-step process, thisprobability can be abbreviated as P(A|Q).In a multi-agent framework, the final answer Ais produced over nsteps, resulting in A=An. Ateach step i, an intermediate result Aiis generated based on the corresponding prompt pi. Applyingthe chain rule, the probability of the final answer is decomposed as follows:P(A|Q) =P(An|Q) =nYi=2P(Ai|Ai−1, pi)P(A1|Q, p 1) (2)In a multi-agent system, it is often the case that the prompt piis explicitly predefined and remainsunchanged regardless of the answers generated in previous steps. Consequently, it is reasonable2Figure 2: LP solving with the CAFA prompt and external code post-processingto assume that piis independent of prior answers Ai−1, Ai−2, ..., A 1. At each step, the answer Aidepends solely on the current prompt pi, eliminating the need to track the entire history of previousanswers. Thus, the probability expression simplifies as P(Ai|Ai−1, pi) =P(Ai|pi), allowing for amore streamlined version of the equation.P(A|Q) =P(A1|Q, p 1)nYi=2P(Ai|pi) (3)SinceQni=2P(Ai|pi)≤1P(A|Q)≤P(A1|Q, p 1) (4)In summary, the initial prompt p1is crucial in determining the correctness of the final answer.Moreover, the multi-step workflow does not always enhance performance, as subsequent steps mayintroduce error accumulation, which can degrade the overall accuracy of the solution. This thusanswer the first question: chunking the LP solving procedure into multiple sequential steps and solvethem one-by-one is not always lead to a better performance in the LP task.It is a note that equation 4 is not conflict to the main idea of Chain-of-Through (CoT), since equation4 demonstrates the contribution of various prompts across sequential tasks, while CoT is only oneprompt for a single task.3 Code as Auto-Formulation Prompting with Code Post-ProcessingAccording to the equation 4, the key lies in shrinking the problem-solving process into one step withwell-design first prompt to obtain a better performance. For the LP problem-solving, it is unrealisticto enable the LLM to obtain correct answer directly by just zero-shot prompting. An alternativeway is to let the LLM to generate an intermediate presentation (IR) of the text to disambiguate theproblem for processing. According to Ramamonjison et al. [2022], they use two-stages approach toconvert the text into a context-free expression, specifically, a matrix representation of the constraintsand objectives reported in the text. Then, the matrix is used to parse code to call solver to obtain theanswer. Notice that the code is the expected output from the LLMs in the final step in all mentionedframework, to reduce the number of reasoning steps, we can instruct the LLM to generate the code inthe one step as the formal representation. This idea is similar to Program-of-Through (PoT) Chenet al. [2023], which also use code during reasoning instead of text. In our work, we want the LLMsto only generate code that can capture the constraints, parameters, limits, and objectives mentioned inthe text.The complete LP-solving process is illustrated in Figure 2. Initially, the problem text, along withthe CAFA prompt, is input into the LLM for code generation. Additionally, a Pydantic parser isemployed to ensure that the LLM generates code adhering to the specified format, thus enhancingconsistency and reducing the likelihood of extraneous or incorrect string generation. The generatedcode represents a formal translation of the textual problem using the Python syntax of the Guribo3Table 1: Accuracy of different methods with various LLMs on the NL4Opt datasetStandard Chain-of-Expert OptiMUS CAFAGPT-4 47.3%∗64.2%∗78.7%∗70.1%GPT-3.5-Turbo 42.4%∗58.9%∗28.6%∗59.0%DeepSeek-Coder v2 5.2% - - 60.1%Llama 3.1 5.5% - - 34.0%* referred to the results reported in the original paper.solver; however, this code is not executable at this stage. In the subsequent step, a regular expression(regex) is applied to validate the syntax and correct any errors by refining unwanted patterns. Therevised code is then encapsulated with the necessary suffixes and prefixes, transforming it intoexecutable code. Finally, this code is executed via the Python interpreter to derive the final result(details regarding the CAFA prompt and regex rules are provided in the Appendix).4 Experiment and ResultsTo evaluate the proposed methods, we utilize NL4Opt dataset originally introduced by Ramamonjisonet al. [2022]. The baseline methods selected for comparison include Chain-of-Expert and OptiMUS.To assess the generalization of the approach across different models, we conduct experiments usingGPT-4 ,GPT-3.5-turbo ,deepseek-coder-v2-16B , and Llama3.1-8B .For performance evaluation, we employ Accuracy as the primary metric. If the output matchesthe correct answer provided in the dataset, the result is marked as correct for that specific question.Otherwise, it is considered incorrect. Accuracy is calculated as the ratio of correct solutions to thetotal number of questions.Table 1 demonstrate that the proposed method enhances performance in solving LP problemsacross various models. For GPT-3.5-Turbo and GPT-4, the CAFA method enables them to achievecompetitive results compared to state-of-the-art approaches. Specifically, GPT-3.5-Turbo achieves59% accuracy, outperforming both the Chain-of-Expert (CoE) and OptiMUS methods when using thesame model. GPT-4 attains 70.1% accuracy, surpassing the CoE method, though lower than OptiMUS.However, unlike OptiMUS, CAFA requires only a single prompt, making system management, tuning,and optimization more straightforward for further improvement.For smaller LLMs, such as DeepSeek-Coder 16B and Llama 3.1 8B, the CAFA method significantlyimproves performance. Notably, DeepSeek-Coder achieves a performance level similar to GPT-3.5-Turbo, with 60% accuracy. This suggests that LLMs with strong coding capabilities can yield betterresults, as the formal translation of problem text into executable code is closely tied to the model’scoding proficiency. This step is crucial in the proposed method.Based on this idea and Equation 4, the capability to solve mathematical problems may be stronglycorrelated with the model’s ability to generate code for formal problem representation. Futureresearch could explore how to further leverage LLMs’ coding capabilities to enhance performance.Additionally, investigating alternative representations beyond code formatting, such as relation triples,graphs or symbolic equation, could offer insights into improving the quality of auto-formulation andachieving higher accuracy.5 Limitations and Future WorksIterative Correction Mechanisms. Based on the experimental results conducted on the NL4Optdataset, while the CAFA framework demonstrates competitive performance, it falls short of achievingthe state-of-the-art results demonstrated by GPT-4. This discrepancy can be mainly attributed to thesimplified independent assumption used to derive the equations referenced in Equation 3. As widelyobserved in multi-agent systems, problem-solving processes do not follow a straightforward, one-pass pipeline but rather involve more complex, iterative network architectures. Such systems allowfor iterative tracking, refinement, and correction of errors throughout the problem-solving process,leading to an information dependence across multiple steps. A promising direction for future research4would involve the introduction of iterative correction mechanisms within the intermediate coderepresentations of problems, potentially offering improved performance over one-pass translationapproaches.Automatic Prompt Engineering and Exemplars Selection. Handcrafting instructions within theCAFA prompt remains necessary to achieve satisfactory results. This presents a limitation to thegeneralizability of the CAFA when applied to other types of optimization problems, such as mixedinteger linear programming (MILP) or quadratic programming (QP). A key challenge is how to enableautomatic prompt engineering (APE) to accommodate various optimization problems with minimalhuman intervention. In addition, identifying optimal examples used as few-shot demonstrations inthe prompt represents another critical area of exploration. While there has been groundbreakingresearch in APE for arithmetic problems Yang et al. [2023], Khattab et al. [2023], investigationsinto its application in the domain of operations research (OR) remain scarce. This gap highlights animportant opportunity for future research direction.Various Optimization Problems and Multi-modality Support. The CAFA prompt presentedin this paper is designed specifically for linear programming problems with no more than threevariables. This is too simplified and limits its applicability to many real-world scenarios. To meetthe diverse demands and objectives of various applications, it is necessary to extend the capabilitiesof the proposed CAFA prompt. An initial extension involves adapting it to handle other types ofoptimization problems, such as mixed integer linear programming (MILP) and quadratic programming(QP). This also call for curating datasets of multiple types of OR problems. Furthermore, the currentCAFA prompt processes only text input, whereas future research could explore a multimodal CAFAcapable of supporting diverse or mixed input formats, including text, images, and tabular data. Suchadvancements would enable CAFA to address a broader range of needs across various use cases.6 ConclusionThis paper reviews several LLM4OR approaches that utilize a multi-agent framework with largelanguage models (LLMs). An analysis of multi-step water flow pipelines using probabilistic modelinghighlights the pivotal role of the initial prompt in achieving accurate solutions. It underscores thatincreasing the number of steps can negatively impact final performance in Linear Programming (LP)tasks. Based on this analysis, we introduce the CAFA (Code-as-Auto-Formulation) prompting method,which formalizes problem text into executable code in a single step. With code post-processing,various LLMs are able to improve their performance in LP problem-solving. Experimental resultsindicate that even lower-capacity open-source models benefit from the proposed method whenpaired with simplified prompt engineering. By CAFA, LLMs’ mathematical problem-solvingability is closely linked to its capability for text-to-code formalization. Future research directionsinclude automating prompts and exemplar selection with minimal human intervention. Additionally,exploring alternative formats for formal representations (relation triplet, graph, symbolic equation,etc.) could potentially improve the performance of LLMs in problem-solving tasks.ReferencesKarl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solvemath word problems. arXiv preprint arXiv:2110.14168 , 2021.Ziyang Xiao, Dongxiang Zhang, Yangjun Wu, Lilin Xu, Yuan Jessica Wang, Xiongwei Han, XiaojinFu, Tao Zhong, Jia Zeng, Mingli Song, et al. Chain-of-experts: When llms meet complex operationsresearch problems. In The Twelfth International Conference on Learning Representations , 2023.Ali AhmadiTeshnizi, Wenzhi Gao, and Madeleine Udell. Optimus: Scalable optimization modelingwith (mi) lp solvers and large language models. arXiv preprint arXiv:2402.10172 , 2024.Beibin Li, Konstantina Mellou, Bo Zhang, Jeevan Pathuri, and Ishai Menache. Large languagemodels for supply chain optimization. arXiv preprint arXiv:2307.03875 , 2023.Rindranirina Ramamonjison, Timothy Yu, Raymond Li, Haley Li, Giuseppe Carenini, Bissan Ghad-dar, Shiqi He, Mahdi Mostajabdaveh, Amin Banitalebi-Dehkordi, Zirui Zhou, and Yong Zhang.5Nl4opt competition: Formulating optimization problems based on their natural language descrip-tions. In Marco Ciccone, Gustavo Stolovitzky, and Jacob Albrecht, editors, Proceedings of theNeurIPS 2022 Competitions Track , volume 220 of Proceedings of Machine Learning Research ,pages 189–203. PMLR, 28 Nov–09 Dec 2022. URL https://proceedings.mlr.press/v220/ramamonjison23a.html .Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting:Disentangling computation from reasoning for numerical reasoning tasks. Transactions on MachineLearning Research , 2023.Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen.Large language models as optimizers. arXiv preprint arXiv:2309.03409 , 2023.Omar Khattab, Arnav Singhvi, Paridhi Maheshwari, Zhiyuan Zhang, Keshav Santhanam, Sri Vard-hamanan, Saiful Haq, Ashutosh Sharma, Thomas T. Joshi, Hanna Moazam, Heather Miller,Matei Zaharia, and Christopher Potts. Dspy: Compiling declarative language model calls intoself-improving pipelines. arXiv preprint arXiv:2310.03714 , 2023.6A CAFA PromptYou are an expert in optimization problems and domain specificlanguage generation .Your task is to convert the textual optimization text into lines ofcode .You should also analyze whether the variable in the optimizationproblem should be INTEGER or CONTINUOUS .DO NOT ADD ANY COMMENTS OR EXPLANATION TO THE CODE . JUST OUTPUT THECODE .Here are some examples that you should refer to:QUESTION :A car manufacturer makes two types of car oils : Oil Max and Oil MaxPro . A container of Oil Max contains 46 grams of substance A, 43grams of substance B and 56 grams of substance C. A container ofOil Max Pro contains 13 grams of substance A, 4 grams of substanceB and 45 grams of substance C. The car manufacturer has 1345grams of substance A, 346 grams of substance B, 1643 grams ofsubstance C. In addition , the profit per container of Oil Max is$10 and the profit per container of Oil Max Pro is $15. How manycontainers of each of oil should the car manufacturer make tomaximize profit ?CODE :x = m. addVar ( name =" Oil Max", vtype =gp.GRB . INTEGER )y = m. addVar ( name =" Oil Max Pro", vtype =gp. GRB. INTEGER )m. setObjective (10 * x + 15 * y, gp.GRB. MAXIMIZE )m. addConstr (46 * x + 13 * y <= 1345)m. addConstr (43 * x + 4 * y <= 346)m. addConstr (56 * x + 45 * y <= 1643)QUESTION :Ben is growing apples and pears on his orchard . He has 50 acresavailable on which he must grow a minimum of 5 acres of apples anda minimum of 10 acres of pears to meet demands . The profit perapple is $2 and the profit per pear is $4. He prefers to grow morepears than apples but limitations in his workforce allow him togrow at most twice the amount of pears as apples . How many of eachfruit should Ben grow in order to maximize his profit ? What isthat profit ?CODE :x = m. addVar ( name =" apples ", vtype =gp.GRB. INTEGER )y = m. addVar ( name =" pears ", vtype =gp.GRB. INTEGER )m. setObjective (2 * x + 4 * y, gp.GRB . MAXIMIZE )m. addConstr (x + y <= 50)m. addConstr (x >= 5)m. addConstr (y >= 10)m. addConstr (y <= 2 * x)Please finish the task think step by step .QUESTION :{q}B Code Post-ProcessingThe code post-processing includes using regular expression to check, filter and correct the generatedcode from the LLM, simplfied inequalities in the code, and complement to piece of for execution.The following code snippet is the regex function.{\ tindef clean_code ( code : str) -> str:temp_code = code# Split the code into linespattern = r’\) ([a-zA -Z]) ’7temp_code = re.sub( pattern , r’)\n\1 ’, temp_code )cleand_code = []for line in temp_code . split (’\n’):line = line . strip ()# Replace > < to >= <=if line . startswith (’m. addConstr ’) and not re. findall (r’ <=| >= ’,line ):# print (" Not found ")line = re.sub (r’<’, r’ <=’, line )line = re.sub (r’>’, r’ >=’, line )# Remove all comments and suffix and prefix termsif not line . startswith (’‘‘‘’) and not line . startswith (’#’):cleand_code . append ( line )else :continue# Don ’t support bool expression ’==’if re. findall (r’== ’, line ):cleand_code . remove ( line )cleand_code = ’\n’. join ( cleand_code )# Remove all ’{’ and ’}’cleand_code = cleand_code . replace (’{’, ’’). replace (’}’, ’’)return cleand_code}The following function is used to simplify ineqaulitydef simplify_code ( code : str) -> str :simplfied_code = []for i, line in enumerate ( code . split (’\n’)):if line . startswith (’m. addConstr ’) or line . startswith (’m.setObjective ’):if ’/’ in line :obj_pattern = r’m\. setObjective \(([^ ,]*) ’constr_pattern = r’m\. addConstr \((.*) \) ’if re. findall ( obj_pattern , line ):matches = re. findall ( obj_pattern , line )obj = re. search (r’gp \. GRB \.(\ w+) ’, line ). group (1)expr = sp. sympify ( matches [0])simplfied_code . append (f"m. setObjective ({ str(sp.simplify ( expr ))}, gp.GRB .{ obj })")if re. findall ( constr_pattern , line ):matches = re. findall ( constr_pattern , line )oper = re. search (r’\s*( >=| <=)\s*’, matches [0]) .group (1)expr = sp. sympify ( matches [0])simplified_expr = str(sp. simplify ( expr .lhs - expr .rhs ))if match := re. search (r’^\((.*?) \)/’,simplified_expr ):new_constr = f’{ match . group (1)} { oper } {str (0)}’simplfied_code . append (’m. addConstr (’ +new_constr + ’)’)else :simplfied_code . append ( line )else :simplfied_code . append ( line )return ’\n’. join ( simplfied_code )The following functions is used to complment and execute the codeprefix = """8import gurobipy as gpenv = gp.Env( empty = True )env . setParam (" OutputFlag " ,0)env . start ()m = gp. Model (env=env)"""suffix = """m. optimize ()"""def complement_code ( code : str) -> float :return prefix + code + suffixdef execute_code ( code : str) -> float :ex_locals = {}exec (code , None , ex_locals )try :return ex_locals ["m"]. objValexcept Exception as e:return np.infThe final answer can be run by the following codeans = execute_code (complement_code ( simplify_code ( clean_code ( code_str ))))9 |
x2yiUEH0f9 | Probabilistic Proof State Compression: OptimizingLLM-Guided Formal VerificationAli Rahim∗Department of MathematicsUniversity of [email protected] RahimDepartment of Computer ScienceUniversity of Colorado, [email protected] recent successes in LLM-guided formal proof search, scalability remainslimited by the large search space. This paper introduces a novel approach thatintegrates off-the-shelf LLMs with conformal prediction-based compression tooptimize proof search. By employing adaptive, probability-based binning informedby conformal prediction intervals, our method compresses the proof state space,reducing computational demands while retaining statistical proof guarantees. Pre-liminary results on the Lean miniF2F test set show similar success rates with 75%fewer passes, and on average 23% reduced wall clock time.1 IntroductionRecent advances in large language models (LLMs) have significantly progressed the automation offormal proof generation. LLM-guided methods ([ 1], [2], [3]) demonstrate promising capabilitiesin navigating the complex search spaces of formal theorem proving, leveraging LLMs’ patternrecognition and generalization strengths over traditional symbolic methods.LLM-based formal proving typically follows either proof-step or whole-proof generation strategies.While recent systems, such as DeepSeek-Prover-V1.5 [ 4], have set state-of-the-art benchmarks usinga truncate-and-resume approach, they still struggle to balance exploration and exploitation due to thesparse binary rewards of successful proofs, requiring extensive search with up to 217typically.We propose a method that enhances LLM-guided formal proof search with conformal proof statespace compression. Using open-weight LLMs ([ 4]) for generating candidate steps, our conformalprediction framework provides calibrated success probabilities. We introduce a rigorous compressionalgorithm that preserves the most promising proof paths, which allows efficient exploration.We evaluate our method on the MiniF2F[ 5] and ProofNet[ 6] benchmarks and demonstrate a 75%average reduction in the number of passes required compared to baseline open models. Additionally,our approach led to qualitatively simpler proofs on some examples. These results suggest promisingdirections for scaling automated theorem proving to more complex domains.2 Proposed MethodWe enhance LLM-guided formal proof search by introducing a novel proof state space compressiontechnique. Our approach integrates three key components to efficiently navigate and compress theproof search space: the LLM-based Proof Step Generator, the Conformal Prediction Module[ 7], andthe Proof State Space Compression Module. Figure 1 illustrates the architecture of our system.∗38th Conference on Neural Information Processing Systems (NeurIPS 2024).Figure 1: Architecture of the Conformal Prediction-Based Theorem Proving System2.1 LLM-based Proof Step GeneratorWe employ DeepSeek Prover V1.5 RL[ 4], an open-weights large language model fine-tuned on Lean4, to generate candidate proof steps. Given the current proof state s= (g, a, t )and goal g, the modelproduces a set of possible next steps {s′1, . . . , s′k}via repeated sampling.2.2 Conformal Prediction ModuleThe Conformal Prediction Module provides calibrated probability intervals for the success of eachcandidate proof step, thereby guiding the proof search with reliable uncertainty estimates.We define a nonconformity measure A(z, z′)for a labeled proof attempt z= (s, y)as:A(z, z′) =|y−P(success |s)| (1)Here, P(success |s)estimates the probability of proof success given state susing the same model,andyis the binary outcome (success or failure).2.2.1 Probability Interval ComputationTo address the dependency in proof states and uphold the exchangeability assumption requiredby conformal prediction, we partition the calibration set into strata using the Proof State SpaceCompression Module. For each candidate step s′, the module:1.Identify Stratum: Assign s′to a stratum Zjbased on its similarity to existing proof states.2.Compute Nonconformity Scores: Calculate A(zi, s′)for all zi∈ Zj.3.Calculate P-Values: For each outcome y∈ {0,1},p(y|s′) =|{i:A(zi, s′)≥A((s′, y), z′)}|+ 1|Zj|+ 1(2)4.Construct Prediction Region:Γε(s′) ={y:p(y|s′)> ε} (3)5.Compute Probability Interval:[L(s′), U(s′)] =[0,1]ifΓε(s′) ={0,1}[1,1]ifΓε(s′) ={1}[0,0]ifΓε(s′) ={0}2.3 Proof State Space Compression ModuleThe Proof State Space Compression Module manages the search space by grouping similar proofstates into strata, facilitating efficient stratified conformal prediction.2Algorithm 1 Stratified Conformal Proof Search1:Initialize:• Set initial proof state s←s0• Initialize calibration set Z ← ∅2:while proof is incomplete and computational budget not exceeded do3: C ← GENERATE CANDIDATE STEPS (s,g)4: for all s′i∈ Cdo5: Zj←ASSIGN STRATUM (s′i,C)6: [L(s′i), U(s′i)]←COMPUTE PROBABILITY INTERVAL (s′i,Zj)7: end for8: COMPRESS SEARCH SPACE (C,Z, C)9: s∗←SELECT REPRESENTATIVE (C, U, C )10: y∗←APPLY PROOF STEP(s∗)11: ify∗leads to a dead end then12: BACKTRACK (s∗) ▷Next best representative state13: end ifZ ← Z ∪ { (s∗, y∗)}14:end while2.3.1 Similarity MeasureFor proof states s1= (G1, A1, T1)ands2= (G2, A2, T2), we define:sim(s1, s2) =wG·GoalSim (G1, G2) +wA·Jaccard (A1, A2) +wT·LCS(T1, T2)where GoalSim (G1, G2)is the binary goal similarity, Jaccard (A1, A2)is the Jaccard similarity ofassumptions, LCS(T1, T2)is the normalized longest common subsequence of tactics, and wG,wA,andwTare weights summing to 1.2.3.2 Binning FunctionWe map probability intervals to bin indices using:b([l, u]) =n·maxl,l+u2where nis the number of bins. To prioritize high-probability regions, we use adaptive bin widths:w(p) =w0·exp(−αp)where w0is the base width and αcontrols adaptation rate.2.3.3 Compression MappingThe compression process Cmaps a set of proof states to a set of representative states:C({s1, . . . , s k}) ={rep(B1), . . . , rep(Bm)}where B1, . . . , B mare non-empty bins.2.4 Proof Search AlgorithmOur proof search algorithm integrates the three components to efficiently explore the proof space.Algorithm 1 outlines the procedure.32.5 Theoretical GuaranteesOur method provides theoretical guarantees on the coverage of the true probability of proof success.Specifically, for a given significance level ε, we have:P(L(s)≤p∗(s)≤U(s))≥1−εwhere p∗(s)is the true probability of a successful proof from state s. This guarantee ensures that ourcompression technique preserves the most promising proof paths with high probability.3 Experiments and Results3.1 Experimental Setup3.1.1 DatasetsWe perform a preliminary evaluation of our method on two benchmark datasets: MiniF2F [ 5] andProofNet [6]. Specifically, we use the Lean 4 MiniF2F variant by Yang et al. [8].3.1.2 Baseline ModelsWe compare our method against several state-of-the-art models: GPT-4, DeepSeek-Prover-V1, andDeepSeek-Prover-V1.5.3.2 Results3.2.1 Performance on miniF2FModel Pass Rate (%) @ Number of PassesGPT-4 23.0 @ 10DeepSeek-Prover-V1 50.0 @ 32DeepSeek-Prover-V1.5 60.2 @ 32Our Method 63.5 @8Table 1: Results on miniF2F test set3.2.2 Performance on ProofNetModel Validation Pass Rate (%) Test Pass Rate (%)DeepSeek-Prover-V1.5 21.6 23.7Our Method 23.4 25.3Table 2: Results on ProofNet3.3 Analysis3.3.1 Efficiency GainsOur method demonstrates a 75% reduction in passes, with 23% reduced wall clock time compared toDeepSeek-Prover-V1.5, This efficiency gain can be attributed to the effective pruning of the searchspace through our conformal prediction-based approach.3.4 DiscussionThese preliminary results demonstrate that our method significantly improves upon existing ap-proaches in both proof success rate and efficiency. The integration of conformal prediction withLLM-guided search allows for more effective exploration of the proof space, leading to higher-qualityproofs. While promising, the approach requires further refinement and empirical validation to fullyrealize its capabilities. Nonetheless, it represents a meaningful step towards bridging the gap betweengenerative AI models and statistically robust decision-making processes in formal reasoning tasks.4References[1]Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.arXiv preprint arXiv:2009.03393 , 2020.[2]Yuhuai Jiang, Shunyu Kadavath, Soonwon Xie, Guy Gur-Ari, Paul Tran-Bach, Dario Choi,Peter Weng, Tushar Besiroglu, Yonatan Shlomi, Luke Melas-Kyriazi, et al. Minerva: Solvingquantitative reasoning problems with language models. arXiv preprint arXiv:2206.14858 , 2022.[3]Weitao Zhao, Wenda Liu, Qingxiang Sun, Jianlin Dong, Dacheng Yu, and Yingfei Han. Leancopi-lot: Interactive theorem proving with large language models. arXiv preprint arXiv:2310.04470 ,2023.[4]Kunlun Xin, Zihui Li, Zhen Wang, Yizhuang Wang, Tao Zhang, Zelin Zhang, Yining Li, ZihaoWang, Weizhou Liu, and Xiaodong Wang. Deepseek-prover-v1.5: Towards comprehensiveformal mathematical theorem proving. arXiv preprint arXiv:2402.03302 , 2024.[5]Kunhao Zheng, Jesse Michael Li, Joshua B Tenenbaum, Mateja Balog, and Yuxuan Wu.minif2f: a cross-system benchmark for formal olympiad-level mathematics. arXiv preprintarXiv:2109.00110 , 2021.[6]Huan Li, David Krueger, Tongshuang Yue, Chiyuan Zhang, Matej K Buch, Shiyu L Farrell,Matthew Lamb, and Tamara D Loboda. Proofnet: Autoformalizing and formally provingundergraduate-level mathematics. arXiv preprint arXiv:2302.12433 , 2023.[7]Vladimir V ovk, Alex Gammerman, and Glenn Shafer. Algorithmic learning in a random world.2005.[8] Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil,Ryan J Prenger, and Animashree Anandkumar. Leandojo: Theorem proving with retrieval-augmented language models. Advances in Neural Information Processing Systems , 36, 2024.A Appendix / supplemental materialA Proof of Theoretical GuaranteeFor a given significance level ε∈(0,1), our stratified conformal prediction method for proof stateevaluation satisfies the following coverage guarantee:P(L(Sn+1)≤p∗(Sn+1)≤U(Sn+1))≥1−εwhere p∗(Sn+1) =P(Yn+1= 1|Sn+1)is the true probability of a successful proof from state Sn+1,and[L(Sn+1), U(Sn+1)]is the prediction interval produced by our method.Proof. B Proof of Theoretical GuaranteeFor a given significance level ε∈(0,1), our stratified conformal prediction method for proof stateevaluation satisfies the following coverage guarantee:P(L(Sn+1)≤p∗(Sn+1)≤U(Sn+1))≥1−εwhere p∗(Sn+1) =P(Yn+1= 1|Sn+1)is the true probability of a successful proof from state Sn+1,and[L(Sn+1), U(Sn+1)]is the prediction interval produced by our method.Proof. Consider a calibration set Z={Z1, . . . , Z n}, where each Zi= (si, yi)consists of a proofstatesiand its binary outcome yi∈ {0,1}. The calibration set is partitioned into mstrataZ1, . . . ,Zmusing the Proof State Space Compressor C, ensuring that within each stratum Zj, the proof states aresimilar and exchangeable.For a new test proof state Sn+1= (gn+1, an+1, tn+1), assign it to a stratum Zjbased on its similarityto existing proof states using C. Given the exchangeability within each stratum Zj, the standardconformal prediction guarantee applies within Zj.Define the nonconformity score for a proof state Zi= (si, yi)as:A(Zi) =|yi−P(Yi= 1|si)|5where P(Yi= 1|si)is the estimated probability of proof success given state si.For the new proof state Sn+1, compute the nonconformity scores for both possible outcomes y∈{0,1}:A((Sn+1, y)) =|y−P(Yn+1= 1|Sn+1)|The p-value for each outcome yis defined as:p(y|Sn+1) =|{i∈ Zj:A(Zi)≥A((Sn+1, y))}|+ 1|Zj|+ 1This ensures that p-values are properly calibrated within the stratum Zj.The prediction region is:Γε(Sn+1) ={y∈ {0,1}:p(y|Sn+1)> ε}Based on Γε(Sn+1), derive the probability interval:[L(Sn+1), U(Sn+1)] =[0,1]ifΓε(Sn+1) ={0,1}[1,1]ifΓε(Sn+1) ={1}[0,0]ifΓε(Sn+1) ={0}By the properties of conformal prediction within each stratum Zj[7], we have:P(Yn+1)∈Γε(Sn+1)|Sn+1∈ Zj)≥1−εThis implies that the true outcome Yn+1will lie within the prediction region Γε(Sn+1)with probabil-ity at least 1−ε.Since p∗(Sn+1) =P(Yn+1= 1|Sn+1), observe the following:Yn+1= 1 = ⇒p∗(Sn+1)≥L(Sn+1)Yn+1= 0 = ⇒p∗(Sn+1)≤U(Sn+1)Therefore, combining these two implications, we obtain:P(L(Sn+1)≤p∗(Sn+1)≤U(Sn+1)) =P(Yn+1∈Γε(Sn+1))≥1−εThus, the coverage guarantee holds:P(L(Sn+1)≤p∗(Sn+1)≤U(Sn+1))≥1−εThis completes the proof.Implications for Theorem Proving:The theoretical guarantee ensures that our stratified conformal prediction method produces well-calibrated probability intervals for proof success. Specifically, with probability at least 1−ε, thetrue probability p∗(Sn+1)of successfully proving a theorem from state Sn+1lies within the interval[L(Sn+1), U(Sn+1)]. This reliability enables the proof search algorithm to prioritize steps withhigher upper bounds U(s′), indicating a greater likelihood of success, while effectively managinguncertainty in steps with wider intervals. Consequently, the algorithm balances exploration andexploitation, enhancing both the efficiency and robustness of automated theorem proving.6 |
wzaMGXiOEy | Intermediate Fine-Tuning ImprovesMathematical Reasoning in Smaller ModelsNeeraj Gangwar Suma P Bhat Nickvash KaniElectrical and Computer EngineeringUniversity of Illinois Urbana-Champaign, IL, USA{gangwar2,spbhat2,kani}@illinois.eduAbstractWhile large models pre-trained on high-quality data exhibit excellent performanceacross various reasoning tasks, including mathematical reasoning (e.g. GSM8k,MultiArith), specializing smaller models in mathematical reasoning remains achallenging problem. A common research approach to address this challengeinvolves distilling knowledge from large pre-trained teacher models into smallerstudent models. Other techniques include augmenting datasets by rephrasingquestions or using multiple views of solutions to improve reasoning performance.In this work, we explore intermediate fine-tuning and show that fine-tuning amodel on an arithmetic dataset before fine-tuning it on a reasoning dataset helpsimprove the model’s performance on the reasoning tasks. The arithmetic datasetcan be generated programmatically, eliminating the resource-intensive task ofdataset creation. We evaluate the impact of intermediate fine-tuning using theoriginal GSM8k training set and an expanded GSM8k training set created throughdistillation. Our experiments on multiple datasets demonstrate that intermediatefine-tuning leads to average improvements of 6.3% and 14.2% in reasoning tasksusing the original and distilled training sets, respectively, with greedy decodingcompared to the models fine-tuned directly on these sets.1 IntroductionScaling the model and data sizes has had a tremendous effect on performance across various naturallanguage processing (NLP) tasks (Chowdhery et al., 2023; Achiam et al., 2023; Touvron et al., 2023b;Jiang et al., 2023). These pre-trained models can learn from a few demonstrations using in-contextlearning and do not require task-specific fine-tuning (Brown et al., 2020). They also benefit fromgenerating a sequence of reasoning steps before arriving at the final answer. These strategies havebeen particularly effective for mathematical reasoning (Wei et al., 2022c; Nye et al., 2022; Fu et al.,2022; Zhou et al., 2022).While these large models exhibit excellent performance on mathematical reasoning tasks, adaptingsmaller models for these tasks remains an open problem (Wei et al., 2022b). This problem ischallenging because the math reasoning datasets, like GSM8k (Brown et al., 2020), consist of a smallnumber of reasoning problems, generally accompanied by one solution. These datasets do not containsufficient training examples to capture the complexity of math reasoning. To overcome this issue,a widely explored research direction is to distill knowledge from large pre-trained teacher modelsinto smaller student models. Some methods use questions from existing training datasets and useprompting to generate solutions for fine-tuning smaller models (Ho et al., 2023; Magister et al., 2023;Fu et al., 2023; Hsieh et al., 2023; Yue et al., 2024). Others use various techniques to rephrase thequestions to create more examples (Yu et al., 2024) or multiple views of solutions (Liang et al., 2024)to achieve better reasoning performance.38th Conference on Neural Information Processing Systems (NeurIPS 2024).This work focuses on using synthetically generated datasets to improve reasoning performance. Weexplore if fine-tuning a model on a related dataset before the reasoning datasets helps improve themodel’s reasoning abilities. Specifically, we programmatically generate a dataset with arithmetictasks and fine-tune the models on this dataset before fine-tuning them on the reasoning datasets .In transfer learning literature, this is referred to as intermediate fine-tuning (Vu et al., 2020) orsupplementary training (Phang et al., 2018). Empirical observations using several mathematicaldatasets lead to the following key takeaways:•Models fine-tuned on the arithmetic dataset before a reasoning dataset perform better thanthe ones directly fine-tuned on the reasoning dataset. The arithmetic dataset can be generatedprogrammatically, eliminating the need for manual resources.•Based on our observations with multiple datasets with varying mathematical reasoning tasks,we find that intermediate fine-tuning results in better out-of-domain generalization.•We evaluate our approach on datasets with relatively small and sufficiently large trainingsets. Our experiments show that intermediate fine-tuning improves performance in bothcases.Our source code and datasets are publicly available.12 Related WorkAdapting a pre-trained model for a downstream task has been traditionally done through task-specificfine-tuning. However, this approach does not work for mathematical reasoning tasks because datasets,like GSM8k, do not contain enough examples to capture the complexity of mathematical reasoning.Several works have focused on distilling multi-step reasoning solutions from large teacher models toovercome this limitation. Fu et al. (2023) prompted Codex (Chen et al., 2021) to generate multiplemulti-step solutions for the examples in the GSM8k training set and fine-tuned FlanT5 on the onesthat led to the correct final answer. Hsieh et al. (2023) used PaLM-540B (Chowdhery et al., 2023) forgenerating solutions and fine-tuned T5 (Raffel et al., 2020) in a multi-task setting to generate the labelsand rationale. Liu et al. (2023) used GPT-3.5-turbo to generate synthetic GSM8k-like examples. Yueet al. (2024) showed that a hybrid of chain-of-thought and program-of-thought solutions performedbetter than using either format individually and created math generalist models – MAmmoTH. Inaddition to using LLMs to generate more solutions, Yu et al. (2024) used LLM rephrasing andbackward reasoning to augment questions and created a new dataset called MetaMathQA.Transfer learning has played a pivotal role in NLP. Vu et al. (2020) studied the effect of intermediatefine-tuning on the model’s performance on a target task. Training on large multi-task mixtures isalso a common trend in NLP (Aribandi et al., 2022; Wei et al., 2022a; Chung et al., 2024). Anotherresearch direction explores identifying relevant examples for a given downstream task from a hugecollection of datasets, like P3 (Sanh et al., 2021). These methods create embeddings for all examplesof interest using hidden states (Ivison et al., 2023) or gradients (Xia et al., 2024). Given a task, asmall subset of relevant examples are selected based on similarity. These methods have been mainlyapplied to data-efficient instruction-tuning.3 Our ApproachIn this work, we fine-tune a model on an intermediate task before specializing it in mathematicalreasoning. This is also referred to as intermediate fine-tuning (defined below). In our approach,we use an arithmetic dataset for intermediate fine-tuning. The arithmetic dataset is generatedprogrammatically, thus eliminating the resource-intensive task of dataset creation.3.1 Intermediate Fine-TuningFine-tuning a model on an intermediate task before a downstream task can improve the model’sperformance on the said downstream task (Phang et al., 2018; Vu et al., 2020). The downstream taskis also referred to as the target task. This is called intermediate fine-tuning and can lead to successful1https://github.com/neerajgangwar/reasoning-ift2Table 1: Accuracy (%) achieved by models fine-tuned on the GSM8k datasets w/ and w/o intermediatefine-tuning (IFT) on the arithmetic dataset. We report the accuracy values with greedy and consistencydecoding, separated by “/”, with the greedy accuracy on the left. Model performance on MultiArith,ASDiv, and SV AMP is included to demonstrate no loss in out-of-domain generalization.Training Dataset Model IFT GSM8k MultiArith ASDiv SVAMPGSM8k (Orig.)FlanT5-Base✗ 07.7 / 09.1 17.2 / 17.4 08.5 / 08.6 06.6 / 07.7✓ 10.5 / 12.8 (+2.8 / +3.7) 26.1 / 31.5 (+8.9 / +14.1) 11.2 / 12.8 (+2.7 / +4.2) 09.1 / 09.9 (+2.5 / +2.2)FlanT5-Large✗ 12.9 / 14.7 28.9 / 29.1 15.7 / 16.8 12.1 / 12.6✓ 17.1 / 18.0 (+4.2 / +3.3) 49.4 / 54.1 (+20.5 / +25.0) 21.0 / 21.4 (+5.3 / +4.6) 15.3 / 15.9 (+3.2 / +3.3)T5v1.1 LM Adapt✗ 06.6 / 06.7 13.3 / 12.8 04.9 / 06.4 05.7 / 06.1✓ 06.6 / 08.7 (0.0 / +2.0) 22.8 / 24.1 (+9.5 / +11.3) 10.3 / 11.4 (+5.4 / +5.0) 07.5 / 08.2 (+1.8 / +2.1)GSM8k (Dist.)FlanT5-Base✗ 17.5 / 19.9 31.1 / 33.9 23.6 / 24.6 20.4 / 20.2✓ 21.4 / 25.0 (+3.9 / +5.1) 65.0 / 68.5 (+33.9 / +34.6) 34.8 / 36.3 (+11.2 / +11.7) 26.9 / 29.8 (+6.5 / +9.6)FlanT5-Large✗ 22.4 / 24.9 45.0 / 43.7 29.1 / 30.2 23.2 / 25.0✓ 27.7 / 30.5 (+5.3 / +5.6) 74.4 / 77.0 (+29.4 / +33.3) 40.4 / 42.6 (+11.3 / +12.4) 35.6 / 37.9 (+12.4 / +12.9)T5v1.1 LM Adapt✗ 17.2 / 18.9 35.6 / 35.6 23.0 / 24.7 19.4 / 20.8✓ 22.6 / 24.0 (+5.4 / +5.1) 49.4 / 54.6 (+13.8 / +19.0) 30.7 / 31.9 (+7.7 / +7.2) 24.1 / 24.8 (+4.7 / +4.0)knowledge transfer for similar intermediate and target tasks. Vu et al. (2020) have shown that thishelps tasks with limited training examples and sufficiently large training sets. Building on theseresults, we explore if an intermediate task can be leveraged to improve the model’s mathematicalreasoning performance.3.2 Intermediate Task for Math ReasoningFollowing prior work on transfer learning (Vu et al., 2020; Neyshabur et al., 2020), while variousmathematical datasets may help improve a model’s reasoning performance, we focus on an arithmeticdataset in this work for two reasons. First, arithmetic computations are an integral part of mathematicalreasoning. Second, while curating a math reasoning dataset requires considerable resources, a simplearithmetic dataset can be generated programmatically. We leave the exploration of other potentialintermediate tasks for future work.We refer to Liu and Low (2023) to programmatically generate an arithmetic dataset. Their workhas shown that LLaMA (Touvron et al., 2023a) fine-tuned on a programmatically generated datasetoutperforms GPT-4 (Achiam et al., 2023) on arithmetic tasks. While the dataset from Liu and Low(2023) contains the basic arithmetic operations – addition, subtraction, multiplication, and division,we also include fractions and percentages. GSM8k does not require computations over large numbers,hence we limit the number of digits in the operands to seven. Furthermore, we use log-uniformsampling to ensure that the dataset is not skewed towards numbers with greater digits. This datasetcontains ∼1.3M examples.3.3 DatasetsTraining. We use GSM8k for model specialization. As it does not have a validation set, werandomly sample 512 examples from the training set to create a validation set. We use two versionsof GSM8k in this work.Original. In the first version, we use the remaining examples from the training set for modelspecialization. This dataset contains 6961 examples. We refer to this dataset as GSM8k (Orig.).Distilled. We generate a distilled dataset using the questions from GSM8k (Orig.) to evaluate ifintermediate fine-tuning benefits tasks with large training datasets. This dataset is generated byprompting Mistral-7B (Jiang et al., 2023) using the prompt from Wei et al. (2022c). We generate64 solutions per question and keep the ones that lead to the correct final answer. After removingduplicate solutions, this results in a dataset with ∼175k examples. We refer to this dataset as GSM8k(Dist.).Evaluation We use the GSM8k test set to evaluate the models. We also use three additional datasets– MultiArith (Roy and Roth, 2015), ASDiv (Miao et al., 2020), and SV AMP (Patel et al., 2021)– to test out-of-domain generalization. MultiArith contains problems focused on basic arithmeticoperations and is relatively simpler than GSM8k. ASDiv focuses on diverse language usage patterns32 4 6 8 103540455055606570ModelFlanT5-BaseFlanT5-LargeEpoch (Arithmetic Training)BigBench Arith. Acc.(a)2 4 6 8 105101520253035Training Dataset, ModelGSM8k (Orig.), FlanT5-BaseGSM8k (Orig.), FlanT5-LargeGSM8k (Dist.), FlanT5-BaseGSM8k (Dist.), FlanT5-LargeEpoch (Arithmetic Training)GSM8k Test Acc.(b)Figure 1: (a) Accuracy (%) on the BigBench Arithmetic benchmark at different intervals duringthe intermediate fine-tuning. (b) GSM8k test accuracy (%) of the models initialized from differentcheckpoints in the intermediate fine-tuning. The dotted lines of the same color correspond to themodels directly fine-tuned on GSM8k.and covers wide problem types taught in elementary school. SV AMP contains problems with varyingstructures to ensure that a model cannot solve the problems by applying simple heuristics and ignoringquestion text.4 Experiments4.1 Training DetailsFor our experiments, we use FlanT5 (Chung et al., 2024) which is an instruction-tuned T5 (Raffelet al., 2020). The base and large versions of FlanT5 are used with 250M and 750M parameters,respectively. We use the AdamW optimizer (Loshchilov and Hutter, 2017) with a learning rate of10−4, a weight decay of 10−4, and an effective batch size of 128. For FlanT5-Large, a learning ratewarmup of 500 steps is used. All model training processes are initialized using a fixed seed.The intermediate fine-tuning is performed for 10 epochs without validation, and checkpoints aresaved every other epoch. To adapt these models for reasoning, we continue the training from thesecheckpoints on GSM8k. The models are fine-tuned for 20 and 100 epochs on GSM8k (Dist.) andGSM8k (Orig.), respectively. The best checkpoint is selected based on the GSM8k validationperformance.We use greedy and consistency decoding at inference. For consistency decoding, nucleus sampling(Holtzman et al., 2019) is used with T= 0.6andp= 0.9to sample eight responses, and the mostconsistent final answer is chosen. As nucleus sampling is a stochastic decoding method, we repeatthe evaluation with consistency decoding three times and report the mean accuracy.4.2 ResultsIn-Domain Performance. We first evaluate the models on the GSM8k test set. Table 1 shows theaccuracy (%) achieved by different models. We observe that both FlanT5-Base and FlanT5-Largebenefit from intermediate fine-tuning, and the performance on GSM8k improves significantly. Theseresults also show that intermediate fine-tuning helps with both GSM8k (Orig.), which has a smalltraining set, and GSM8k (Dist.), which already has a sufficiently large training set.Out-of-Domain Performance. Next, the models fine-tuned on GSM8k are evaluated on MultiArith,ASDiv, and SV AMP, and the results are shown in Table 1. These results indicate that intermediatefine-tuning does not harm out-of-domain generalization. The models fine-tuned on the arithmeticdataset first generalize better than those directly fine-tuned on GSM8k.Arithmetic vs GSM8k Performance. We use the checkpoints fine-tuned on the arithmetic datasetto initialize the models to be fine-tuned on GSM8k. But does good performance on arithmetic tasksalways translate to better GSM8k performance? Our experiments show that this is not always thecase. We use the BigBench Arithmetic benchmark (BIG-bench authors, 2023) to evaluate the model’s4arithmetic abilities and report micro-averaged accuracy across addition, subtraction, multiplication,and division. The results of this experiment are shown in Figure 1. The models fine-tuned on thearithmetic dataset perform better on the BigBench Arithmetic benchmark as the training progresses(Figure 1a). However, the models initialized from the checkpoints with better arithmetic performancedo not always result in better GSM8k performance (Figure 1b). These results agree with the findingsof Neyshabur et al. (2020) which show, for image datasets, that pre-training performance is notalways a faithful indicator of effective transfer learning. This makes it harder to decide when to stopthe intermediate fine-tuning. However, it should be noted that all checkpoints from intermediatefine-tuning result in a better performance than directly fine-tuning the model on GSM8k.5 ConclusionIn this work, we examined intermediate fine-tuning for mathematical reasoning. Our experimentsshowed that fine-tuning a model on a programmatically generated arithmetic dataset before a reasoningdataset helped improve the model’s performance on the reasoning tasks. We evaluated our approachwith small and large datasets, and intermediate fine-tuning resulted in better performance in bothcases. Moreover, intermediate fine-tuning did not harm out-of-domain generalization, instead, modelsfine-tuned on the arithmetic dataset first showed better out-of-domain generalization. Finally, whilethis work does not offer a method for determining when to stop intermediate fine-tuning, initializingmodels from any checkpoints during the process yielded better results than fine-tuning them directlyon GSM8k.6 LimitationsWhile intermediate fine-tuning improved a model’s mathematical reasoning performance, goodperformance on the arithmetic tasks did not always lead to better reasoning performance. Due to this,deciding the maximum number of epochs for intermediate fine-tuning and selecting a checkpointfrom this process to initialize the next model remain open problems. Furthermore, identifying anintermediate task or programmatically generating a dataset for it may not always be feasible, limitingthe applicability of this approach. Other mathematical datasets for intermediate fine-tuning may alsobe explored. Finally, we evaluated our approach on GSM8k. The experiments presented in this workmay be extended to include other mathematical reasoning datasets, like MATH (Hendrycks et al.,2021). We leave these avenues for future research.ReferencesJ. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman,S. Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774 , 2023.V . Aribandi, Y . Tay, T. Schuster, J. Rao, H. S. Zheng, S. V . Mehta, H. Zhuang, V . Q. Tran, D. Bahri, J. Ni, et al.Ext5: Towards extreme multi-task scaling for transfer learning. In International Conference on LearningRepresentations , 2022.BIG-bench authors. Beyond the imitation game: Quantifying and extrapolating the capabilities of languagemodels. Transactions on Machine Learning Research , 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=uyTL5Bvosj .T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry,A. Askell, et al. Language models are few-shot learners. Advances in neural information processing systems ,33:1877–1901, 2020.M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y . Burda, N. Joseph, G. Brockman,et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 , 2021.A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton,S. Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research ,24(240):1–113, 2023.H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y . Tay, W. Fedus, Y . Li, X. Wang, M. Dehghani, S. Brahma, et al.Scaling instruction-finetuned language models. Journal of Machine Learning Research , 25(70):1–53, 2024.5Y . Fu, H. Peng, A. Sabharwal, P. Clark, and T. Khot. Complexity-based prompting for multi-step reasoning. InThe Eleventh International Conference on Learning Representations , 2022.Y . Fu, H. Peng, L. Ou, A. Sabharwal, and T. Khot. Specializing smaller language models towards multi-stepreasoning. In International Conference on Machine Learning , pages 10421–10430. PMLR, 2023.D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuringmathematical problem solving with the math dataset. NeurIPS , 2021.N. Ho, L. Schmid, and S.-Y . Yun. Large language models are reasoning teachers. In Proceedings of the61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages14852–14882, 2023.A. Holtzman, J. Buys, L. Du, M. Forbes, and Y . Choi. The curious case of neural text degeneration. arXivpreprint arXiv:1904.09751 , 2019.C.-Y . Hsieh, C.-L. Li, C.-k. Yeh, H. Nakhost, Y . Fujii, A. Ratner, R. Krishna, C.-Y . Lee, and T. Pfister. Distillingstep-by-step! outperforming larger language models with less training data and smaller model sizes. InFindings of the Association for Computational Linguistics: ACL 2023 , pages 8003–8017, 2023.H. Ivison, N. A. Smith, H. Hajishirzi, and P. Dasigi. Data-efficient finetuning using cross-task nearest neighbors.InFindings of the Association for Computational Linguistics: ACL 2023 , pages 9036–9061, 2023.A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. d. l. Casas, F. Bressand, G. Lengyel,G. Lample, L. Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825 , 2023.Z. Liang, D. Yu, X. Pan, W. Yao, Q. Zeng, X. Zhang, and D. Yu. Mint: Boosting generalization in mathematicalreasoning via multi-view fine-tuning. In Proceedings of the 2024 Joint International Conference on Com-putational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) , pages 11307–11318,2024.B. Liu, S. Bubeck, R. Eldan, J. Kulkarni, Y . Li, A. Nguyen, R. Ward, and Y . Zhang. Tinygsm: achieving 80% ongsm8k with one billion parameters. In The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS’23 ,2023.T. Liu and B. K. H. Low. Goat: Fine-tuned llama outperforms GPT-4 on arithmetic tasks. arXiv preprintarXiv:2305.14201 , 2023.I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 , 2017.L. C. Magister, J. Mallinson, J. Adamek, E. Malmi, and A. Severyn. Teaching small language models to reason.InProceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: ShortPapers) , pages 1773–1781, 2023.S.-y. Miao, C.-C. Liang, and K.-Y . Su. A diverse corpus for evaluating and developing English math wordproblem solvers. In D. Jurafsky, J. Chai, N. Schluter, and J. Tetreault, editors, Proceedings of the 58th AnnualMeeting of the Association for Computational Linguistics , pages 975–984, Online, July 2020. Associationfor Computational Linguistics. doi: 10.18653/v1/2020.acl-main.92. URL https://aclanthology.org/2020.acl-main.92 .B. Neyshabur, H. Sedghi, and C. Zhang. What is being transferred in transfer learning? Advances in neuralinformation processing systems , 33:512–523, 2020.M. Nye, A. J. Andreassen, G. Gur-Ari, H. Michalewski, J. Austin, D. Bieber, D. Dohan, A. Lewkowycz,M. Bosma, D. Luan, et al. Show your work: Scratchpads for intermediate computation with language models.InDeep Learning for Code Workshop , 2022.A. Patel, S. Bhattamishra, and N. Goyal. Are NLP models really able to solve simple math word problems?In K. Toutanova, A. Rumshisky, L. Zettlemoyer, D. Hakkani-Tur, I. Beltagy, S. Bethard, R. Cotterell,T. Chakraborty, and Y . Zhou, editors, Proceedings of the 2021 Conference of the North American Chapter ofthe Association for Computational Linguistics: Human Language Technologies , pages 2080–2094, Online,June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.168. URLhttps://aclanthology.org/2021.naacl-main.168 .J. Phang, T. Févry, and S. R. Bowman. Sentence encoders on stilts: Supplementary training on intermediatelabeled-data tasks. arXiv preprint arXiv:1811.01088 , 2018.C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y . Zhou, W. Li, and P. J. Liu. Exploring thelimits of transfer learning with a unified text-to-text transformer. Journal of machine learning research , 21(140):1–67, 2020.6S. Roy and D. Roth. Solving general arithmetic word problems. In L. Màrquez, C. Callison-Burch, and J. Su,editors, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing , pages1743–1752, Lisbon, Portugal, Sept. 2015. Association for Computational Linguistics. doi: 10.18653/v1/D15-1202. URL https://aclanthology.org/D15-1202 .V . Sanh, A. Webson, C. Raffel, S. H. Bach, L. Sutawika, Z. Alyafeai, A. Chaffin, A. Stiegler, T. L. Scao,A. Raja, M. Dey, M. S. Bari, C. Xu, U. Thakker, S. S. Sharma, E. Szczechla, T. Kim, G. Chhablani, N. Nayak,D. Datta, J. Chang, M. T.-J. Jiang, H. Wang, M. Manica, S. Shen, Z. X. Yong, H. Pandey, R. Bawden, T. Wang,T. Neeraj, J. Rozen, A. Sharma, A. Santilli, T. Fevry, J. A. Fries, R. Teehan, S. Biderman, L. Gao, T. Bers,T. Wolf, and A. M. Rush. Multitask prompted training enables zero-shot task generalization, 2021.H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro,F. Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 ,2023a.H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y . Babaei, N. Bashlykov, S. Batra, P. Bhargava,S. Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 ,2023b.T. Vu, T. Wang, T. Munkhdalai, A. Sordoni, A. Trischler, A. Mattarella-Micke, S. Maji, and M. Iyyer. Exploringand predicting transferability across NLP tasks. In B. Webber, T. Cohn, Y . He, and Y . Liu, editors, Proceedingsof the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) , pages 7882–7926,Online, Nov. 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.635. URLhttps://aclanthology.org/2020.emnlp-main.635 .J. Wei, M. Bosma, V . Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V . Le. Finetuned languagemodels are zero-shot learners. In International Conference on Learning Representations , 2022a.J. Wei, Y . Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler,et al. Emergent abilities of large language models. Transactions on Machine Learning Research , 2022b.J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V . Le, D. Zhou, et al. Chain-of-thoughtprompting elicits reasoning in large language models. Advances in neural information processing systems , 35:24824–24837, 2022c.M. Xia, S. Malladi, S. Gururangan, S. Arora, and D. Chen. Less: Selecting influential data for targeted instructiontuning. arXiv preprint arXiv:2402.04333 , 2024.L. Yu, W. Jiang, H. Shi, J. YU, Z. Liu, Y . Zhang, J. Kwok, Z. Li, A. Weller, and W. Liu. Metamath: Bootstrapyour own mathematical questions for large language models. In The Twelfth International Conference onLearning Representations , 2024. URL https://openreview.net/forum?id=N8N0hgNDRt .X. Yue, X. Qu, G. Zhang, Y . Fu, W. Huang, H. Sun, Y . Su, and W. Chen. Mammoth: Building mathgeneralist models through hybrid instruction tuning. In The Twelfth International Conference on LearningRepresentations , 2024.D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, C. Cui, O. Bousquet, Q. V . Le, et al.Least-to-most prompting enables complex reasoning in large language models. In The Eleventh InternationalConference on Learning Representations , 2022.7 |
vPfm789BK0 | LLM and Simulation as Bilevel Optimizers:A New Paradigm to Advance Physical ScientificDiscoveryPingchuan Ma1, Tsun-Hsuan Wang1, Minghao Guo1, Zhiqing Sun2,Joshua B. Tenenbaum1 3 4, Daniela Rus1, Chuang Gan5 6, Wojciech Matusik11MIT CSAIL,2CMU LTI,3MIT BCS,4CBMM,5UMass Amherst,6MIT-IBM Watson AI LabAbstractLarge Language Models have recently gained significant attention in scientificdiscovery for their extensive knowledge and advanced reasoning capabilities. How-ever, they encounter challenges in effectively simulating observational feedbackand grounding it with language to propel advancements in physical scientific dis-covery. Conversely, human scientists undertake scientific discovery by formulatinghypotheses, conducting experiments, and revising theories through observationalanalysis. Inspired by this, we propose to enhance the knowledge-driven, abstractreasoning abilities of LLMs with the computational strength of simulations. Weintroduce Scientific Generative Agent (SGA), a bilevel optimization framework:LLMs act as knowledgeable and versatile thinkers, proposing scientific hypothesesand reason about discrete components, such as physics equations or moleculestructures; meanwhile, simulations function as experimental platforms, providingobservational feedback and optimizing via differentiability for continuous parts,such as physical parameters. We conduct extensive experiments to demonstrate ourframework’s efficacy in constitutive law discovery and molecular design, unveil-ing novel solutions that differ from conventional human expectations yet remaincoherent upon analysis.1 IntroductionPhysical science automation aims to accelerate discovery [ 55]. Key aspects of human scientificprocess include: iterative hypothesis testing [ 40], discrete and continuous solution components [ 55],knowledge exploitation with occasional exploration [ 58], and universal principles with discipline-specific nuances [ 44]. LLMs excel as generalist tools with vast knowledge [ 1], aiding scientificdiscovery through reasoning and natural language interfaces. However, they lack computationalcapabilities crucial for physical sciences that requires specific domain knowledge.To this end, inspired by the overarching philosophy of human scientists, we introduce ScientificGenerative Agent (SGA) , a bilevel optimization approach wherein the outer-level engages LLMsas knowledgeable and versatile thinkers for generating and revising scientific hypothesis, while theinner-level involves simulations as experimental platforms for providing observational feedback.Overall, our contributions are concluded as:•We present a generic framework for physical scientific discovery that combines LLMs with physicalsimulations.•We propose a bilevel optimization with LLMs for discrete-space search-based optimization anddifferentiable simulations for continuous-space gradient-based optimization.•We conduct extensive experiments to demonstrate the effectiveness and generality of the proposedframework in physics law discovery and molecular design.38th Conference on Neural Information Processing Systems (NeurIPS 2024). Top-K Heap Continuous ParameterizationDiscrete ExpressionClass Header1 class Physics(nn.Module):2 def __init__ (self): 3 super().__init__ () 4 self.a = ... 5 def forward(self, F): 6 F_new = self.a * F 7 return F_new LLMNext?Exploit!Explore!Python CodeContinuous ParameterizationDiscrete Expression4 - self.a = ... 4 + self.b = ... 6 - F_new = self.a * F 6 + F_new = F / self.b SimulationIterationLossFeedbacktopk()Outer-Level OptimizationCode EvaluationInner-Level Optimziationappend()LLM-Driven Outer-Level Optimization Sim-Driven Inner-Level Optimizationt=0 t=1 t=2 t=3 t=0 t=1 t=2 t=3Purely Elastic Material Weakly Compressible FluidLoss = 10.0 Loss = 0.1init()top1()Figure 1: The overall pipeline of Scientific Generative Agent (SGA). Taking the constitutive lawsearching problem as an example, the input is an initial guess (a purely elastic material), and theoutput is another constitutive law optimized towards the ground truth (weakly compressible fluid).2 Scientific Generative AgentSGA is a bilevel optimization framework where the upper level features LLMs as proposers ofscientific solutions, and the lower level utilizes simulations as experimental platforms for validation.We illustrate the overall pipeline in Fig. 1.2.1 Bilevel Optimization PipelineFirst, we describe the simulation as a process where a simulator takes in a scientific expressionand continuous components as inputs and gives simulated physical phenomenon and additionalobservational feedback as outputs. Next, the LLM acts as a thinker to propose expressions basedon past experimental results from simulation. This process involves the LLM taking in a set of pastsimulation results containing an evaluation of the scientific problem, other physical feedback, andpast proposals, along with a prompt. The LLM then outputs proposed expressions and continuousparameterization for the decision variables. With these elements, we define a bilevel optimizationproblem: the objective is to minimize the evaluation of the simulated physical phenomenon, whichdepends on the proposed expression, continuous parameterization, and optimal continuous parameters.The optimization problem has two levels: (i) the outer optimization searches for an expression thatdefines what experiments to be conducted and continuous parameterization that defines the searchspace of the inner continuous optimization; (ii) the inner optimization, which depends on the outer-level variables, searches for the optimal continuous parameters given the proposed expression viadifferentiable simulation. We detail the complete algorithm with a python-like pseudo-code in Alg. 1.2.2 LLM-Driven Outer-Level SearchLLM-driven Optimization LLMs are effective for generic optimization through prompting andcontext [ 60,43]. Inspired by [ 31], we use evolutionary search with multiple offspring per iteration.Our approach selects several high-performing candidates, enhancing hypothesis feasibility andfacilitating crossover, with LLMs generating new hypotheses from past experiments [43].2Table 1: Benchmark . We use column #Iter. as the number of iterations, #Hist. as the Kvalue forthe top-k retrieval,#Exploit#Exploreas the number of offspring for exploitation versus exploration, Bilevel asif bilevel optimization is enabled. The best method with the lowest loss is highlighted in bold text.Method #Iter. #Hist.#Exploit#ExploreBilevelConstitutive Law Search Molecule Design(a)↓ (b)↓ (c)↓ (d)↓ (e)↓ (f)↓ (g)↓ (h)↓CoT 1 5 N/A ✗ 298.5 1462.3 150.0 384.1 3.0 32.1 18.6 6.0FunSearch 20 2 0 / 4 ✗ 210.3 872.2 82.8 139.5 1.1 7.1 8.3 1.1Eureka 5 1 0 / 16 ✗ 128.0 531.0 101.7 150.1 4.3 9.8 3.3 9.7e-1OPRO 5 5 0 / 16 ✗ 136.2 508.3 99.2 128.8 2.4 9.4 3.1 1.3Ours (no bilevel) 5 5 4 / 12 ✗ 90.2 517.0 83.6 68.4 8.6e-1 9.1 1.8 1.4Ours (no exploit) 5 5 0 / 16 ✓ 3.0e-3 3.9e-1 6.6e-2 1.4e-12 4.0e-4 1.5e-1 6.1e-1 2.8e-5Ours 5 5 4 / 12 ✓ 5.2e-5 2.1e-1 6.0e-2 1.4e-12 1.3e-4 1.1e-1 5.4e-1 3.6e-5Interfacing with Simulation Integrating LLMs with simulation requires efficient, structured com-munication. We use equation searching and entity searching for LLM-to-simulation communication,unified as an abstraction. Equation searching allows LLMs to propose equations and search spaces,while entity searching focuses on structural descriptions. For simulation-to-LLM communication, weuse expert knowledge to extract relevant information as feedback, similar to a senior scientist guidinga junior colleague. The inner optimization results also serve as feedback from simulation to LLMs,detailed in the next section.Exploitation and Exploration We employ an exploit-and-explore strategy by adjusting LLMs’decoding temperature [ 60], mimicking human scientists’ approach to breakthroughs. When generatingoffspring, we divide them into two groups: cautious followers (exploit) and daring adventurers(explore). We observed that the exploit group often repeats previous solutions, while the exploregroup tends to yield overly random or invalid solutions. A 1:3 ratio between exploit and exploregroups has proven effective emperically based our experiments.2.3 Differentiable Inner-Level OptimizationInner optimization uses gradient-based methods to find optimal parameters within the search spacedefined by the outer level. Domain-specific knowledge is distilled through gradients from thesimulation to intermediate optimization results. These results, along with the final output, are fedback to LLMs for solution refinement. The feedback may include loss curves and auxiliary recordings,providing information on various aspects of improvement.3 Experiments3.1 Problem DefinitionsConstitutive Law Discovery Identifying the constitutive law from motion observations standsas one of the most difficult challenges in fields such as physics, material science, and mechanicalengineering. Here we follow the recent advances in physical simulation and formulate the constitutivelaw discovery task as an optimization problem [29].Molecule Design We focus on a prevalent task in molecule design: discovering molecules withspecific quantum mechanical properties. Our objective is to determine the optimal molecular structureand its 3D conformation to match a predefined target quantum mechanical property. The designprocess involves both the discrete expression – the molecular structure represented by SMILESstrings [57], and the continuous parameters – the 3D coordinates of each atom in the molecule.3.2 Experiment SetupWe design a diverse set of challenging tasks for evaluation. For constitutive law discovery, we propose4 tasks including: (a)fitting the non-linear elastic material starting from a linear elastic material,(b)fitting the von Mises plastic material starting from a purely elastic material, (c)fitting the granularmaterial starting from a purely elastic material, and (d)fitting the weakly compressible fluid starting31 2 3 4 5Iteration0100200300400500600Loss1 2 3 4 5Iteration0.00.20.40.60.81.0FunSearch Eureka OPRO OursZoom In(a)Loss trends comparison.1 2 3 4 5Iteration10−1100101Lossouter optim (LLM)inner optim (sim)w/o bilevelw/ bilevel (pre-inner-optim)w/ bilevel (ours) (b)Bilevel optimization.from a purely elastic material. For molecular design task, we consider 4 popular tasks, centeringon 3 commonly evaluated quantum mechanical properties [ 13,65]:(e)HOMO (Highest OccupiedMolecular Orbital) set to 0, (f)LUMO (Lowest Unoccupied Molecular Orbital) set to 0, (g)theHOMO-LUMO energy gap set to 0, and (h)the HOMO-LUMO energy gap set to -2.3.3 Physical Scientific DiscoveryWe consider 6 strong baselines for evaluation: (i) Chain-of-Thoughts (CoT) prompting [ 56] solvesthe problem by looking at step-by-step solutions from examples. We provide 5 examples with anexplanation to CoT as the initial solution. (ii) FunSearch [43] utilizes evolutionary strategy toavoid local optimum. We adopt the given hyperparameters from the original implementation with2 optimization histories and 4 explorers. We set the number of iterations to 20, yielding the samenumber of solutions evaluated, for a fair comparison to other methods. (iii) Eureka [31] generatesmultiple solutions in each iteration to improve the success rate of the generated code. We keep thehyperparameters from the original implementation. (iv) Optimization by PROmpting (OPRO) [60]highlights the advantages of involving a sorted optimization trajectory. We set the hyperparameters tobe equal to Eureka except for the number of historical optimization steps. In all these works (i-iv),we notice the temperatures for LLM inference are all 1.0, which is equal to the exploring temperaturein our method, so we denote them with 0 exploiter. We also consider 2 variants of our method: (v)Ours (no bilevel) removes the bilevel optimization by only searching with LLM. (vi) Ours (noexploit) removes the exploitation by setting the temperature to 1.0 all the time.We present our experiments against the 8 designed tasks and show the results in Table 1. Comparedto baselines (i-iv), our method is significantly better by a number of magnitudes. When the bileveloptimization is removed from our method, the performance drops dramatically, but still statisticallybetter than baselines (i-iv), indicating the choice of hyperparameters and the integration of exploitationis helpful for the task. When we remove the exploitation but restore the bilevel optimization, wenotice the performance grows back. It has comparable performance compared to our method in(d)or even better results in (h). However, in some tasks, especially hard ones (e.g., (b)and(f))that we care more in reality, the performance gap is over 50%, indicating the effectiveness of ourexploit-and-explore strategy. We also present the loss trend in task (a)in Figure 2a, our methodoutstands with a much lower loss and a converging trend. We present more experiments in Sec. C.3.4 Bilevel OptimizationHere we evaluate the importance of bilevel optimization in Figure 2b using the task (h). Comparingthe blue triangle curve and the red dot curve, which represent the LLM-driven outer-level optimizationand the simulation-driven inner-level optimization, it is easy to conclude that the loss performancewith bilevel optimization is better. Nevertheless, we are also interested in how bilevel optimizationworks inside each optimization step and how much LLMs and simulations help respectively. Asshown as a zigzag curve, we found that LLMs and simulations help each other over all optimizationsteps: the next proposal from LLMs will be better with simulation-optimized results, and vice versa.We argue that LLMs and simulations have different expertise: LLMs are generalist scientists who havecross-discipline knowledge, while simulations are domain experts who have specialized knowledge.4References[1]Microsoft Research AI4Science and Microsoft Azure Quantum. The impact of large lan-guage models on scientific discovery: a preliminary study using gpt-4. arXiv preprintarXiv:2311.07361 , 2023.[2] Anthropic. Introducing the next generation of claude, 2024.[3]Ignacio Arnaldo, Krzysztof Krawiec, and Una-May O’Reilly. Multiple regression geneticprogramming. In Proceedings of the 2014 Annual Conference on Genetic and EvolutionaryComputation , pages 879–886, 2014.[4]Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Under-standing and simplifying one-shot architecture search. In International conference on machinelearning , pages 550–559. PMLR, 2018.[5]Luca Biggio, Tommaso Bendinelli, Alexander Neitz, Aurelien Lucchi, and Giambattista Paras-candolo. Neural symbolic regression that scales. In International Conference on MachineLearning , pages 936–945. Pmlr, 2021.[6]Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models arefew-shot learners. Advances in neural information processing systems , 33:1877–1901, 2020.[7]Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on targettask and hardware. In International Conference on Learning Representations , 2019.[8]Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language modelsas tool makers. In International Conference on Learning Representations , 2024.[9]William La Cava, Tilak Raj Singh, James Taggart, Srinivas Suri, and Jason Moore. Learningconcise representations for regression by evolving networks of trees. In International Conferenceon Learning Representations , 2019.[10] Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In Proceedings ofthe 22nd acm sigkdd international conference on knowledge discovery and data mining , pages785–794, 2016.[11] Benoît Colson, Patrice Marcotte, and Gilles Savard. An overview of bilevel optimization.Annals of operations research , 153:235–256, 2007.[12] Tao Du, Kui Wu, Pingchuan Ma, Sebastien Wah, Andrew Spielberg, Daniela Rus, and WojciechMatusik. Diffpd: Differentiable projective dynamics. ACM Transactions on Graphics (TOG) ,41(2):1–21, 2021.[13] Xiaomin Fang, Lihang Liu, Jieqiong Lei, Donglong He, Shanzhuo Zhang, Jingbo Zhou, FanWang, Hua Wu, and Haifeng Wang. Geometry-enhanced molecular representation learning forproperty prediction. Nature Machine Intelligence , 4(2):127–134, 2022.[14] Thomas A Halgren. Merck molecular force field. i. basis, form, scope, parameterization, andperformance of mmff94. Journal of computational chemistry , 17(5-6):490–519, 1996.[15] Nikolaus Hansen. The cma evolution strategy: a comparing review. Towards a new evolutionarycomputation: Advances in the estimation of distribution algorithms , pages 75–102, 2006.[16] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, ChrisBamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand,et al. Mixtral of experts. arXiv preprint arXiv:2401.04088 , 2024.[17] Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder formolecular graph generation. In International conference on machine learning , pages 2323–2332.PMLR, 2018.[18] Ying Jin, Weilin Fu, Jian Kang, Jiadong Guo, and Jian Guo. Bayesian symbolic regression.arXiv preprint arXiv:1910.08892 , 2019.5[19] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, andTie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree. Advances in neuralinformation processing systems , 30, 2017.[20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In InternationalConference on Learning Representations , San Diega, CA, USA, 2015.[21] Michael Kommenda, Bogdan Burlacu, Gabriel Kronberger, and Michael Affenzeller. Parameteridentification for symbolic regression using nonlinear least squares. Genetic Programming andEvolvable Machines , 21(3):471–501, 2020.[22] Stefan Kramer, Mattia Cerrato, Sašo Džeroski, and Ross King. Automated scientific discovery:From equation discovery to autonomous discovery systems. arXiv preprint arXiv:2305.02251 ,2023.[23] William La Cava, Thomas Helmuth, Lee Spector, and Jason H Moore. A probabilistic and multi-objective analysis of lexicase selection and ε-lexicase selection. Evolutionary Computation ,27(3):377–402, 2019.[24] William La Cava, Patryk Orzechowski, Bogdan Burlacu, Fabricio de Franca, Marco Virgolin,Ying Jin, Michael Kommenda, and Jason Moore. Contemporary symbolic regression methodsand their relative performance. In Proceedings of the Neural Information Processing SystemsTrack on Datasets and Benchmarks , 2021.[25] Greg Landrum et al. Rdkit: A software suite for cheminformatics, computational chemistry,and predictive modeling. Greg Landrum , 8:31, 2013.[26] Wenqiang Li, Weijun Li, Linjun Sun, Min Wu, Lina Yu, Jingyi Liu, Yanjie Li, and SongsongTian. Transformer-based model for symbolic regression via joint supervised learning. In TheEleventh International Conference on Learning Representations , 2022.[27] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search.InInternational Conference on Learning Representations , 2019.[28] Risheng Liu, Pan Mu, Xiaoming Yuan, Shangzhi Zeng, and Jin Zhang. A general descentaggregation framework for gradient-based bi-level optimization. IEEE Transactions on PatternAnalysis and Machine Intelligence , 45(1):38–57, 2022.[29] Pingchuan Ma, Peter Yichen Chen, Bolei Deng, Joshua B Tenenbaum, Tao Du, Chuang Gan, andWojciech Matusik. Learning neural constitutive laws from motion observations for generalizablepde dynamics. In International Conference on Machine Learning . PMLR, 2023.[30] Pingchuan Ma, Tao Du, Joshua B Tenenbaum, Wojciech Matusik, and Chuang Gan. Risp:Rendering-invariant state predictor with differentiable simulation and rendering for cross-domain parameter estimation. In International Conference on Learning Representations , 2021.[31] Yecheng Jason Ma, William Liang, Guanzhi Wang, De-An Huang, Osbert Bastani, DineshJayaraman, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Eureka: Human-level reward designvia coding large language models. In International Conference on Learning Representations ,2024.[32] Miles Macklin. Warp: A high-performance python framework for gpu simulation and graphics,March 2022. NVIDIA GPU Technology Conference.[33] Trent McConaghy. Ffx: Fast, scalable, deterministic symbolic regression technology. GeneticProgramming Theory and Practice IX , pages 235–260, 2011.[34] T. Nathan Mundhenk, Mikel Landajuela, Ruben Glatt, Claudio P. Santiago, Daniel M. Fais-sol, and Brenden K. Petersen. Symbolic regression via neural-guided genetic programmingpopulation seeding. In Advances in Neural Information Processing Systems , 2021.[35] OpenAI. OpenAI: Introducing ChatGPT, 2022.[36] OpenAI. OpenAI: GPT-4, 2023.6[37] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin,Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models tofollow instructions with human feedback. Advances in neural information processing systems ,35:27730–27744, 2022.[38] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan,Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperativestyle, high-performance deep learning library. Advances in neural information processingsystems , 32, 2019.[39] Brenden K Petersen, Mikel Landajuela Larma, Terrell N Mundhenk, Claudio Prata Santiago,Soo Kyung Kim, and Joanne Taery Kim. Deep symbolic regression: Recovering mathematicalexpressions from data via risk-seeking policy gradients. In International Conference on LearningRepresentations , 2020.[40] Karl Popper. The logic of scientific discovery . Routledge, 2005.[41] Raghunathan Ramakrishnan, Pavlo O Dral, Matthias Rupp, and O Anatole V on Lilienfeld.Quantum chemistry structures and properties of 134 kilo molecules. Scientific data , 1(1):1–7,2014.[42] Sereina Riniker and Gregory A Landrum. Better informed distance geometry: using what weknow to improve conformation generation. Journal of chemical information and modeling ,55(12):2562–2574, 2015.[43] Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog,M Pawan Kumar, Emilien Dupont, Francisco JR Ruiz, Jordan S Ellenberg, Pengming Wang,Omar Fawzi, et al. Mathematical discoveries from program search with large language models.Nature , pages 1–3, 2023.[44] Alex Rosenberg and Lee McIntyre. Philosophy of science: A contemporary introduction .Routledge, 2019.[45] Robert E Schapire. The boosting approach to machine learning: An overview. Nonlinearestimation and classification , pages 149–171, 2003.[46] Gisbert Schneider. Automating drug discovery. Nature reviews drug discovery , 17(2):97–113,2018.[47] Ankur Sinha, Pekka Malo, and Kalyanmoy Deb. Evolutionary algorithm for bilevel optimizationusing approximations of the lower level optimal solution mapping. European Journal ofOperational Research , 257(2):395–411, 2017.[48] Ankur Sinha, Pekka Malo, and Kalyanmoy Deb. A review on bilevel optimization: Fromclassical to evolutionary approaches and applications. IEEE Transactions on EvolutionaryComputation , 22(2):276–295, 2017.[49] Theodore Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas Griffiths. Cognitive ar-chitectures for language agents. Transactions on Machine Learning Research , 2024. SurveyCertification.[50] Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometrywithout human demonstrations. Nature , 625(7995):476–482, 2024.[51] Silviu-Marian Udrescu, Andrew Tan, Jiahai Feng, Orisvaldo Neto, Tailin Wu, and Max Tegmark.Ai feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity. Advances inNeural Information Processing Systems , 33:4860–4871, 2020.[52] Mojtaba Valipour, Bowen You, Maysum Panju, and Ali Ghodsi. Symbolicgpt: A generativetransformer model for symbolic regression. arXiv preprint arXiv:2106.14131 , 2021.[53] Marco Virgolin, Tanja Alderliesten, and Peter AN Bosman. Linear scaling with and withinsemantic backpropagation-based genetic programming for symbolic regression. In Proceedingsof the genetic and evolutionary computation conference , pages 1084–1092, 2019.7[54] Marco Virgolin, Tanja Alderliesten, Cees Witteveen, and Peter AN Bosman. Improvingmodel-based genetic programming for symbolic regression of small expressions. Evolutionarycomputation , 29(2):211–237, 2021.[55] Hanchen Wang, Tianfan Fu, Yuanqi Du, Wenhao Gao, Kexin Huang, Ziming Liu, PayalChandak, Shengchao Liu, Peter Van Katwyk, Andreea Deac, et al. Scientific discovery in theage of artificial intelligence. Nature , 620(7972):47–60, 2023.[56] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le,Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.Advances in Neural Information Processing Systems , 35:24824–24837, 2022.[57] David Weininger. Smiles, a chemical language and information system. 1. introduction tomethodology and encoding rules. Journal of chemical information and computer sciences ,28(1):31–36, 1988.[58] Mignon Wuestman, Jarno Hoekman, and Koen Frenken. A typology of scientific breakthroughs.Quantitative Science Studies , 1(3):1203–1222, 2020.[59] Chao Xue, Xiaoxing Wang, Junchi Yan, Yonggang Hu, Xiaokang Yang, and Kewei Sun.Rethinking bi-level optimization in neural architecture search: A gibbs sampling perspective.InAAAI Conference on Artificial Intelligence , volume 35, pages 10551–10559, 2021.[60] Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and XinyunChen. Large language models as optimizers. In International Conference on Learning Repre-sentations , 2024.[61] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, andKarthik R Narasimhan. Tree of thoughts: Deliberate problem solving with large languagemodels. In Conference on Neural Information Processing Systems , 2023.[62] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and YuanCao. ReAct: Synergizing reasoning and acting in language models. In International Conferenceon Learning Representations , 2023.[63] Naruki Yoshikawa, Kei Terayama, Masato Sumita, Teruki Homma, Kenta Oono, and Koji Tsuda.Population-based de novo molecule generation, using grammatical evolution. Chemistry Letters ,47(11):1431–1434, 2018.[64] Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Jeff Huang, Chuyue Sun, Cody Hao Yu, ShiyiCao, Christos Kozyrakis, Ion Stoica, Joseph E Gonzalez, et al. Efficiently programming largelanguage models using sglang. arXiv preprint arXiv:2312.07104 , 2023.[65] Gengmo Zhou, Zhifeng Gao, Qiankun Ding, Hang Zheng, Hongteng Xu, Zhewei Wei, LinfengZhang, and Guolin Ke. Uni-mol: A universal 3d molecular representation learning framework.InInternational Conference on Learning Representations , 2023.[66] Zhenpeng Zhou, Steven Kearnes, Li Li, Richard N Zare, and Patrick Riley. Optimization ofmolecules via deep reinforcement learning. Scientific reports , 9(1):10752, 2019.A Related WorkA.1 Automated Scientific DiscoveryAutomated scientific discovery, enhanced by machine learning methods, serves as a powerful ac-celerator for research, enabling scientists to generate hypotheses, design experiments, interpret vastdatasets, and unearth insights that may elude traditional scientific methodologies [ 1,22,55]. Thismultifaceted process unfolds through two synergistically linked stages: hypothesis formation and thecollection and analysis of experimental data. The integration of automated systems not only augmentsthe scientific inquiry process but also streamlines the discovery pipeline, from conceptualization toempirical validation. This paper places a particular emphasis on, but is not limited to, constitutive8Algorithm 1 Scientific Generative AgentRequire: Discrete expression and continuous param (E,θ∈Θ), Num of exploiting Ml, Num of exploringMh, Exploiting temperature Tl, Exploring temperature Th1:# Store ranked (solution,param) by heap2:H←heap()3:# Continuous optimization4:ˆθ←optim( E,θ;Φ)5:H.append(( E,ˆθ))6:fori= 1, . . . , N do7: # Generate Mlsolutions from LLM8: (E,Θ)[:Ml]←LLM(H.topk( K),Tl)9: # Generate Mhsolutions from LLM10: (E,Θ)[Ml:Ml+Mh]←LLM(H.topk( K),Th)11: form= 1, . . . , M l+Mhdo12: # Continuous optimization13: ˆθ←optim( E,θ∈Θ;Φ)14: H.append(( E,ˆθ))15: end for16:end forEnsure: H.topk(1 )# Return the bestlaw discovery and molecular design. These areas exemplify the profound impact of automationin unraveling complex material behaviors and in the innovative design of molecules with tailoredproperties. Automatic identification of constitutive material models has been a long-standing problemand recent works utilizes differentiable simulation [ 12,29,30] to address it as a system identificationproblem. Leveraging machine learning and artificial intelligence, researchers are able to predictmolecular behavior, optimize chemical structures for specific functions, and thus, rapidly acceleratethe development of new drugs, materials, and chemicals [17, 66, 46].A.2 Large Language Models and AgentsThe advancement of Large Language Models (LLMs) such as ChatGPT and GPT-4 has sparkedconsiderable interest in their potential as autonomous agents [6, 35, 36]. Recent developments haveshown that LLMs can be enhanced to solve complex problems by creating and utilizing their owntools, as demonstrated in the LATM framework [ 49], and by acting as optimizers in the absence ofgradients, as seen in the OPRO methodology [ 60]. These approaches signify a shift towards moreindependent and versatile LLM-based agents capable of generating solutions through self-craftedtools and optimization techniques [ 8,62,61], showcasing their evolving problem-solving capabilities.In the realm of scientific discovery, LLMs have begun to make significant contributions, particularlyin mathematics and computational problems. The FunSearch method [ 43] pairs LLMs with evaluatorsto exceed known results in extremal combinatorics and online bin packing, illustrating LLMs’ abilityto discover new solutions to established problems. Similarly, AlphaGeometry’s success [ 50] insolving olympiad-level geometry problems without human demonstrations highlights the potential ofLLMs in automating complex reasoning tasks. These examples underline the transformative impactof LLMs in pushing the boundaries of scientific inquiry and automated reasoning.A.3 Bilevel OptimizationBilevel optimization involves a hierarchical structure with two levels of optimization problems,where the solution to the upper-level problem is contingent upon the outcome of the lower-levelproblem [ 11]. Bilevel optimization problems are inherently more complex than their single-levelcounterparts due to the nested nature of the optimization tasks and the intricate interdependenciesbetween them. Recent advancements have focused on developing efficient algorithms, includingevolutionary algorithms [ 48], gradient-based approaches [ 28], and approximation techniques [ 47], totackle the computational challenges presented by the non-convex and non-differentiable characteristicsof many bilevel problems. Among a wide span of application domains of bilevel optimization, neuralarchitecture search (NAS) [ 27,4,7,59] is prominent and close to the problem setting in this paper:the upper level optimizes the discrete neural network architecture while the lower level optimizes9Table 2: Symbolic Regression.Method R2↑ MSE↓ MAE↓SymbolicAIFeynman [51] 0.05105 22814675.8 2520.0 ✓DSR [39] 0.57527 10966411.0 2045.0 ✓BSR [18] 0.66526 8642965.0 1938.6 ✓AdaBoost [45] 0.75058 6439962.9 1777.7 ✗GP-GOMEA [54] 0.77734 5749076.4 1580.1 ✓SBP-GP [53] 0.81773 4706077.0 1367.5 ✓LightGBM [19] 0.83368 4294433.7 1129.9 ✗XGBoost [10] 0.87775 3156500.5 1109.2 ✗MRGP [3] 0.91074 2304682.5 950.5 ✓EPLEX [23] 0.91851 2104070.1 122.2 ✓FFX [33] 0.93124 1775263.7 801.7 ✓MLP 0.98240 454461.5 366.3 ✗FEAT [9] 0.98761 319800.6 336.1 ✓DSO [34] 0.99642 92374.9 168.6 ✓Operon [21] 0.99684 81577.9 92.4 ✓SymbolicGPT [52] 0.52333 6862154.7 1680.7 ✓NeSymReS [5] N/A to >3 variables ✓T-JSL [26] N/A to >2 variables ✓Ours 0.99901 17424.6 86.4 ✓the continuous weights of the neural network. However, typical NAS methods require a predefinedsearch space, constraining the exploration of discrete network architectures to manually specifiedboundaries. Our framework distinguishes itself by employing LLM encoded with general knowledgeand gets rid of the limitations imposed by manual design constraints.B More ExplanationsB.1 Implementation DetailsWe run all our experiments 5 times with different random seeds following previous practices [ 31].Due to the complexity of the task, we provide a simple bootstrapping example of a valid design toensure the success rate. We use warp [ 32] for the differentiable MPM simulation, and we develop ourinner-level optimization upon PyTorch [ 38]. In all our experiments, we use mean square error as thecriteria and Adam optimizer [ 20]. We choose gpt-4-turbo-preview as the backbone model forLLM and tentatively set the exploiting temperature Tl= 0.5and exploring temperature Th= 1.0.For the generation of 3D conformations, we utilize the ETKGD algorithm [ 42] followed by op-timization using the Merck Molecular Force Field (MMFF) [ 14], both implemented within theRDKit [ 25]. To get the quantum mechanical property values, we employ UniMol [ 65], a pre-trainedtransformer-based large model, which has been fine-tuned on the QM9 dataset [41].B.2 AlgorithmWe attach the full python-like pseudo-code of Scientific Generative Agent pipeline in Alg. 1.B.3 Data WorkflowThe full input to LLM has 3 main parts: (i) system prompt, (ii) iteration information, and (iii) formatprompt. For the system prompt, we insert it into the LLM at the beginning or input it as a specialinstruction depending on the type of LLM. For the iteration information, we first concatenate the codeand its feedback and then simply stack the top Ksolutions. Finally, we append the format prompt atthe end of the prompt to regularize the expected output. From our experiments, it is important to keepthe order of prompts to ensure the performance and the successful parsing. More precisely, we showthis process in the following python-like code:10Table 3: Comparison with population-based molecule design .Method (e)↓ (f)↓ (g)↓ (h)↓GhemGE 4.8e-3 1.8 1.5 9.8e-5Ours 1.3e-4 1.1e-1 5.4e-1 3.6e-5Table 4: Experiment in imaginary constitutive law .Method FunSearch Eureka OPRO OursLoss 105.0 89.1 98.0 1.3e-31 prompts = []2 prompts . append ( system_prompt )3 for solution in reversed ( solutions . topk ()):4 iteration_prompt = solution . code + ’\n’ + solution . feedback5 prompts . append ( iteration_prompt )6 prompts . append ( format_prompt )7 full_prompt = ’\n’. join ( prompts )B.4 Differences to Symbolic Regression Task•Our problem focuses on loss-guided general scientific discovery, which is a super-set of regularregression problems. In the constitutive law search tasks, we do not directly feed the input/outputpair to our method. Instead, we consider a much more challenging task: apply the generatedconstitutive law recursively and use the overall loss as the performance metric. Concretely, aclassic SR methods solve arg min f∥f(X)−y∥given < X, y > pairs, whereas our method solvesarg min f∥g(f(X))∥given < X, g (f(X))>pairs and gis a complex function like physicalsimulation. It is easy to construct gto cover the former case using the later formulation, provingthe generality of our problem setup. We formulate our problem as such to reflect a more realisticscenario in scientific discovery, where direct supervision is extremely sparse.•Our method supports arbitrary number of input variables and output features, where most of SRmethods [ 52] have limitation on the number of input and output. The input limitation strongly capsthe complexity of tasks they can solve, and the output limitation forces them ignore the structuralcorrelation between each output dimension. In a comparison, our method supports arbitraryproblem settings thanks to the code-based representation, which enables multi-dimensional arraysand tensor operations.•Our model adapts to multi-discipline application easily, while traditional SR methods typicallyincorporate with domain-experts’ priors via hard-coded constraints and heuristic [ 51], which islimited, domain-specific, and difficult to customize. Our method is built upon LLMs pre-trained oninternet-level data that contains multi-discipline natural languages, mathematical expressions, andcodes. As a result, it is easy for users to customize it and adapt to their own diverse applicationsvia natural language guidance.C More ExperimentsC.1 Symbolic RegressionWe also compare our method with traditional methods in each specific area to demonstrate thegeneralizability of our method. First, we reformulate our constitutive law search task (a)into asymbolic regression task by (i) capture the ground-truth output (the stress tensors) as the supervision,and (ii) separate the 9 output dimension into 9 independent problems and ensemble them for evaluation.Note that these modifications dramatically simplified the original task: we removed back-propagationthrough time (BPTT) and directly discover the constitutive law without surrogate loss. We evaluate14 traditional baselines in SRBench [ 24] and 3 data-driven pre-trained baselines. As shown in Table 2,our method topped on this task even with a much more challenging setting. Also, since our methoddepends on the in-context learning ability of LLMs, it has little constraint in the number of variablesthan the data-driven pre-trained baselines.11(a)(b) (c)(d)(e)(f) (g)(h)GPT-3.5Claude-3Mixtral-8x7BGPT-41243Figure 3: Backbone LLM.C.2 Population-based Molecule DesignFor molecule design tasks, we also compare our method with GhemGE [ 63], which employs apopulation-based molecule design algorithm. As shown in Table 3, our method presents a muchlower loss, demonstrating the general effectiveness of our method.C.3 Generalization or MemorizationIn order to figure out if the improvement introduced by our method is merely because the LLMsaw the solutions in its training phase, we design an experiment ablating it by making it invent animaginary constitutive law that does not exist on the earth. We mix the constitutive law of von Misesplasticity, granular material, and weakly compressible fluid by 50%, 30%, and 20%, so that the newconstitutive law represents an imaginary material whose behavior is extremely complex. We repeatour experiment setup as in Figure 1. We compare our method against the baselines and report theperformances in Table 4. As shown in the table, our method can still discover the constitutive lawwith a low quantitative loss. From our observation, there is very little visual difference between theground-truth material and the optimized constitutive law.C.4 LLM BackboneIn addition to GPT-4 [36], we repeat the experiments in Table 1 using 3 additional LLM backbones:(i) GPT-3.5 [ 37], (ii) Claude-3-Sonnet [ 2], and (iii) Mixtral-8x7B [ 16], and report the rank of themin Figure 3. Indicated by the largest area, GPT-4, as our choice, statistically outperforms the othermethods. Interestingly, we found Claude-3-Sonnet is the second top method on most of constitutivelaw search task, while Mixtral-8x7B even tops on 2 molecule design tasks. As a result, our workflowalso works for other LLMs, however, our suggestion for practitioners is to try GPT-4 as the firstchoice but also consider open-source model (e.g., Mixtral-8x7B) for budget or customizability.C.5 Exploitation v.s. ExplorationWe visualize the statistics of the simulation execution status in Figure 4 (a) using the task (b), whichis one of the most challenging tasks in our experiments. When the exploitation is removed, theerror rate dramatically increases, as shown by a decrease in green bars. It leads to a degeneration inthe performance of the methods with exploitation as shown in Figure 4 (b). However, even thoughthe success rate remains high, when exploration is removed, the optimization result is still worsethan keeping them both. We argue that exploration is significant when the optimization problem ischallenging, especially in our case, where the search space is highly non-linear and unstructured andresulting in numerous local optimum.120 1 2 3 4Iteration0.2250.2500.2750.3000.3250.3500.3750.400LossOursNo ExplorationNo Exploitation1 2 3 4 5 1 2 3 4 5 1 2 3 4 501020304050607080# Solutions33443432017402020541610605153428185911105222648211160146244214252530292427401228262628Success Training Error Syntax Error Success Rate CurveOurs No Exploration No ExploitationIteration Iteration Iteration(a) (b)Figure 4: Exploitation v.s. Exploration.Table 5: Longer Iteration.#Iterations (a)↓ (b)↓ (c)↓ (d)↓ (e)↓ (f)↓ (g)↓ (h)↓5 5.2e-5 2.1e-1 6.0e-2 1.4e-12 1.3e-4 1.1e-1 5.4e-1 3.6e-520 4.2e-6 4.0e-4 2.5e-3 1.4e-12 1.3e-4 6.5e-2 1.2e-1 5.6e-6Improvement +1138.1% +52400.0% +2300.0% 0.0% 0.0% +69.2% +350.0% +542.9%C.6 Longer IterationIn order to further investigate the potential of our method and ablate the hyper-parameters forpractitioners, we add a new study in terms of the number of iterations (question-answering cycles).We repeat our experiment in Table 1 with a prolonged number of iterations to 20 and report theperformance in Table 5.As shown in the table, the number of iterations turns out to be a determining hyper-parameterwith significant impart on the performance. While it has little affect on relatively easier tasks, itdramatically improves the performance of the most challenging tasks including (b)and(c). Forpractitioners, the number of iteration should be first considered as the most important hyper-parameterwhen adapting our method to their own tasks.D Case StudyD.1 Constitutive Law SearchWe provide a trimmed snippet of our searched constitutive law in Figure 5 (a) for task (a)where ahighly non-linear material is provided as the trajectory to fit. We reformat the code slightly to fit intothe text. Starting from a linear material, our method is able to automatically generate the constitutivelaw with a quadratic deviatoric term. Note that our method also provides a concrete implementationof__init__ function that defines the continuous parameters in the computational graph for laterinner-level optimization.D.2 Molecule DesignWhen comparing the two molecules with respect to their HOMO-LUMO energy gap based onoptimized results from the LLM as shown in Figure 5 (b), we observe distinct characteristics ineach: (i) Molecule A (gap-0) includes sulfur and chlorine atoms attached to a ring, coupled with atrifluoromethyl group, introducing electron-withdrawing effects, and (ii) Molecule B (gap-2) includesoxygen (notably in ethers) and sulfur within the ring structures introducing localized non-bondingelectron pairs. Furthermore, the overall structure of Molecule B is more complex than that ofMolecule A, containing multiple rings. An intriguing aspect of Molecule B, which might initiallydefy expectations, is the presence of a single fluorine atom. The high electronegativity of fluorinetypically leads to electron density withdrawal, influencing the gap value. However, due to thecomplexity of Molecule B’s structure, the impact of the fluorine atom is somewhat localized, therebynot significantly altering the gap value.13Molecule A Molecule BC1CC(SC1Cl)C(C(F)(F)F)N C1OC2SC3C4OC(F)S4C13C2(b)(a)class Physics(nn.Module): def __init__(self, youngs_modulus_lo g: float = 13.03, poissons_ratio_sigmoi d: float = -1.99): super().__init__() self.youngs_modulus_log = nn.Parameter ( torch.tensor (youngs_modulus_lo g)) # Log of Young's modulus self.poissons_ratio_sigmoid = nn.Parameter ( torch.tensor (poissons_ratio_sigmoi d)) # Sigmoid of Poisson's ratio def forward(self, F: torch.Tensor) -> torch.Tensor: youngs_modulus = self.youngs_modulus_log.exp( ) poissons_ratio = torch.sigmoid(self.poissons_ratio_sigmoid) * 0.49 mu = youngs_modulus / (2 * (1 + poissons_ratio)) # Shear modulus lam = youngs_modulus * poissons_ratio / ( (1 + poissons_ratio) * (1 - 2 * poissons_ratio) ) # Deformation gradient determinant J J = F.det().view (-1, 1, 1) F_invT = F.inverse().transpose (1, 2) # Volumetric part P_vol = lam * (J - 1) * F_invT # Deviatoric part P_dev = mu * (F - (1 / J) * F_invT) # Compute Kirchhoff stress tensor kirchhoff_stress = P_vol + P_dev @ F.transpose (1, 2) return kirchhoff_stres s Figure 5: Case Study.E Conclusion and LimitationsWe consider a few limitations and future directions. (i) Although we prompt the LLM to generatepseudo-code plans and comments, it is generally hard to ensure the interpretability of LLM-generatedsolutions. (ii) Since the LLM-generated codes are executed directly without any filtering in ourapplication, there exists potential AI safety risk that hazards the operating system. (iii) Our methodonly utilizes the internal knowledge of LLMs as the prior, where in reality people design manualconstraints and rule to regularize and improve the optimization [ 51]. We leave these domain-specific applications and human feedback-based regularization methods as our future work. (iv) Theperformance our method highly depends on the differentiablity of the generated code. However,Zero-order optimizers [ 15] should also shine since the number of continuous parameters is relativelylimited. (v) LLM inference requires large computational resources and thus increases expense. Forexample, it spends around $10 for our method to complete one task using GPT-4, which will beincreasingly inacceptable when the number of iteration grows. (vi) Due to the reuse of previouslygenerated solutions in our proposed top-k heap, the KV cache in LLM will be highly similar betweenneighbor iterations. It opens a gate for recent KV cache optimization methods [ 64] to speedup ourmethod by KV cache reusing.In conclution, we present Scientific Generative Agent, a bi-level optimization framework: LLMsserve as knowledgeable and adaptable thinkers, formulating scientific solutions like physics equationsor molecule structures; concurrently, simulations operate as platforms for experimentation, offeringobservational feedback and optimizing continuous components like physical parameters. We focusedon two scientific problems: constitutive law search and molecular design. Our approach outperformsother LLM-based benchmark methods, delivering consistent, robust, and nearly monotonic improve-ment. Furthermore, it shows exceptional ability in identifying unknown, true constitutive laws andmolecular structures. Remarkably, our system generates innovative solutions that, despite beingunconventional, are deemed reasonable after being thoroughly analyzed by experts in their respectivedomains. We view our process as a trailblazer, establishing a new paradigm for utilizing LLMs andsimulations as bilevel optimization to further advancements in physical scientific discoveries.14NeurIPS Paper Checklist1.ClaimsQuestion: Do the main claims made in the abstract and introduction accurately reflect thepaper’s contributions and scope?Answer: [Yes]Justification: See Section 2 and Section 3.Guidelines:•The answer NA means that the abstract and introduction do not include the claimsmade in the paper.•The abstract and/or introduction should clearly state the claims made, including thecontributions made in the paper and important assumptions and limitations. A No orNA answer to this question will not be perceived well by the reviewers.•The claims made should match theoretical and experimental results, and reflect howmuch the results can be expected to generalize to other settings.•It is fine to include aspirational goals as motivation as long as it is clear that these goalsare not attained by the paper.2.LimitationsQuestion: Does the paper discuss the limitations of the work performed by the authors?Answer: [Yes]Justification: See Appendix E.Guidelines:•The answer NA means that the paper has no limitation while the answer No means thatthe paper has limitations, but those are not discussed in the paper.• The authors are encouraged to create a separate "Limitations" section in their paper.•The paper should point out any strong assumptions and how robust the results are toviolations of these assumptions (e.g., independence assumptions, noiseless settings,model well-specification, asymptotic approximations only holding locally). The authorsshould reflect on how these assumptions might be violated in practice and what theimplications would be.•The authors should reflect on the scope of the claims made, e.g., if the approach wasonly tested on a few datasets or with a few runs. In general, empirical results oftendepend on implicit assumptions, which should be articulated.•The authors should reflect on the factors that influence the performance of the approach.For example, a facial recognition algorithm may perform poorly when image resolutionis low or images are taken in low lighting. Or a speech-to-text system might not beused reliably to provide closed captions for online lectures because it fails to handletechnical jargon.•The authors should discuss the computational efficiency of the proposed algorithmsand how they scale with dataset size.•If applicable, the authors should discuss possible limitations of their approach toaddress problems of privacy and fairness.•While the authors might fear that complete honesty about limitations might be used byreviewers as grounds for rejection, a worse outcome might be that reviewers discoverlimitations that aren’t acknowledged in the paper. The authors should use their bestjudgment and recognize that individual actions in favor of transparency play an impor-tant role in developing norms that preserve the integrity of the community. Reviewerswill be specifically instructed to not penalize honesty concerning limitations.3.Theory Assumptions and ProofsQuestion: For each theoretical result, does the paper provide the full set of assumptions anda complete (and correct) proof?Answer: [NA]15Justification: No theoretical results.Guidelines:• The answer NA means that the paper does not include theoretical results.•All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.•All assumptions should be clearly stated or referenced in the statement of any theorems.•The proofs can either appear in the main paper or the supplemental material, but ifthey appear in the supplemental material, the authors are encouraged to provide a shortproof sketch to provide intuition.•Inversely, any informal proof provided in the core of the paper should be complementedby formal proofs provided in appendix or supplemental material.• Theorems and Lemmas that the proof relies upon should be properly referenced.4.Experimental Result ReproducibilityQuestion: Does the paper fully disclose all the information needed to reproduce the main ex-perimental results of the paper to the extent that it affects the main claims and/or conclusionsof the paper (regardless of whether the code and data are provided or not)?Answer: [Yes]Justification: See Section 3.Guidelines:• The answer NA means that the paper does not include experiments.•If the paper includes experiments, a No answer to this question will not be perceivedwell by the reviewers: Making the paper reproducible is important, regardless ofwhether the code and data are provided or not.•If the contribution is a dataset and/or model, the authors should describe the steps takento make their results reproducible or verifiable.•Depending on the contribution, reproducibility can be accomplished in various ways.For example, if the contribution is a novel architecture, describing the architecture fullymight suffice, or if the contribution is a specific model and empirical evaluation, it maybe necessary to either make it possible for others to replicate the model with the samedataset, or provide access to the model. In general. releasing code and data is oftenone good way to accomplish this, but reproducibility can also be provided via detailedinstructions for how to replicate the results, access to a hosted model (e.g., in the caseof a large language model), releasing of a model checkpoint, or other means that areappropriate to the research performed.•While NeurIPS does not require releasing code, the conference does require all submis-sions to provide some reasonable avenue for reproducibility, which may depend on thenature of the contribution. For example(a)If the contribution is primarily a new algorithm, the paper should make it clear howto reproduce that algorithm.(b)If the contribution is primarily a new model architecture, the paper should describethe architecture clearly and fully.(c)If the contribution is a new model (e.g., a large language model), then there shouldeither be a way to access this model for reproducing the results or a way to reproducethe model (e.g., with an open-source dataset or instructions for how to constructthe dataset).(d)We recognize that reproducibility may be tricky in some cases, in which caseauthors are welcome to describe the particular way they provide for reproducibility.In the case of closed-source models, it may be that access to the model is limited insome way (e.g., to registered users), but it should be possible for other researchersto have some path to reproducing or verifying the results.5.Open access to data and codeQuestion: Does the paper provide open access to the data and code, with sufficient instruc-tions to faithfully reproduce the main experimental results, as described in supplementalmaterial?16Answer: [Yes]Justification: We will release the code and data upon acceptance.Guidelines:• The answer NA means that paper does not include experiments requiring code.•Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•While we encourage the release of code and data, we understand that this might not bepossible, so “No” is an acceptable answer. Papers cannot be rejected simply for notincluding code, unless this is central to the contribution (e.g., for a new open-sourcebenchmark).•The instructions should contain the exact command and environment needed to run toreproduce the results. See the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•The authors should provide instructions on data access and preparation, including howto access the raw data, preprocessed data, intermediate data, and generated data, etc.•The authors should provide scripts to reproduce all experimental results for the newproposed method and baselines. If only a subset of experiments are reproducible, theyshould state which ones are omitted from the script and why.•At submission time, to preserve anonymity, the authors should release anonymizedversions (if applicable).•Providing as much information as possible in supplemental material (appended to thepaper) is recommended, but including URLs to data and code is permitted.6.Experimental Setting/DetailsQuestion: Does the paper specify all the training and test details (e.g., data splits, hyper-parameters, how they were chosen, type of optimizer, etc.) necessary to understand theresults?Answer: [Yes]Justification: See Section 3.Guidelines:• The answer NA means that the paper does not include experiments.•The experimental setting should be presented in the core of the paper to a level of detailthat is necessary to appreciate the results and make sense of them.•The full details can be provided either with the code, in appendix, or as supplementalmaterial.7.Experiment Statistical SignificanceQuestion: Does the paper report error bars suitably and correctly defined or other appropriateinformation about the statistical significance of the experiments?Answer: [Yes]Justification: See Table 1.Guidelines:• The answer NA means that the paper does not include experiments.•The authors should answer "Yes" if the results are accompanied by error bars, confi-dence intervals, or statistical significance tests, at least for the experiments that supportthe main claims of the paper.•The factors of variability that the error bars are capturing should be clearly stated (forexample, train/test split, initialization, random drawing of some parameter, or overallrun with given experimental conditions).•The method for calculating the error bars should be explained (closed form formula,call to a library function, bootstrap, etc.)• The assumptions made should be given (e.g., Normally distributed errors).•It should be clear whether the error bar is the standard deviation or the standard errorof the mean.17•It is OK to report 1-sigma error bars, but one should state it. The authors shouldpreferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesisof Normality of errors is not verified.•For asymmetric distributions, the authors should be careful not to show in tables orfigures symmetric error bars that would yield results that are out of range (e.g. negativeerror rates).•If error bars are reported in tables or plots, The authors should explain in the text howthey were calculated and reference the corresponding figures or tables in the text.8.Experiments Compute ResourcesQuestion: For each experiment, does the paper provide sufficient information on the com-puter resources (type of compute workers, memory, time of execution) needed to reproducethe experiments?Answer: [Yes]Justification: See Section 3.Guidelines:• The answer NA means that the paper does not include experiments.•The paper should indicate the type of compute workers CPU or GPU, internal cluster,or cloud provider, including relevant memory and storage.•The paper should provide the amount of compute required for each of the individualexperimental runs as well as estimate the total compute.•The paper should disclose whether the full research project required more computethan the experiments reported in the paper (e.g., preliminary or failed experiments thatdidn’t make it into the paper).9.Code Of EthicsQuestion: Does the research conducted in the paper conform, in every respect, with theNeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ?Answer: [Yes]Justification: The authors confirm that the research conducted in the paper conform,in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines .Guidelines:•The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.•If the authors answer No, they should explain the special circumstances that require adeviation from the Code of Ethics.•The authors should make sure to preserve anonymity (e.g., if there is a special consid-eration due to laws or regulations in their jurisdiction).10.Broader ImpactsQuestion: Does the paper discuss both potential positive societal impacts and negativesocietal impacts of the work performed?Answer: [Yes]Justification: See Section E.Guidelines:• The answer NA means that there is no societal impact of the work performed.•If the authors answer NA or No, they should explain why their work has no societalimpact or why the paper does not address societal impact.•Examples of negative societal impacts include potential malicious or unintended uses(e.g., disinformation, generating fake profiles, surveillance), fairness considerations(e.g., deployment of technologies that could make decisions that unfairly impact specificgroups), privacy considerations, and security considerations.18•The conference expects that many papers will be foundational research and not tiedto particular applications, let alone deployments. However, if there is a direct path toany negative applications, the authors should point it out. For example, it is legitimateto point out that an improvement in the quality of generative models could be used togenerate deepfakes for disinformation. On the other hand, it is not needed to point outthat a generic algorithm for optimizing neural networks could enable people to trainmodels that generate Deepfakes faster.•The authors should consider possible harms that could arise when the technology isbeing used as intended and functioning correctly, harms that could arise when thetechnology is being used as intended but gives incorrect results, and harms followingfrom (intentional or unintentional) misuse of the technology.•If there are negative societal impacts, the authors could also discuss possible mitigationstrategies (e.g., gated release of models, providing defenses in addition to attacks,mechanisms for monitoring misuse, mechanisms to monitor how a system learns fromfeedback over time, improving the efficiency and accessibility of ML).11.SafeguardsQuestion: Does the paper describe safeguards that have been put in place for responsiblerelease of data or models that have a high risk for misuse (e.g., pretrained language models,image generators, or scraped datasets)?Answer: [Yes]Justification: See Section E.Guidelines:• The answer NA means that the paper poses no such risks.•Released models that have a high risk for misuse or dual-use should be released withnecessary safeguards to allow for controlled use of the model, for example by requiringthat users adhere to usage guidelines or restrictions to access the model or implementingsafety filters.•Datasets that have been scraped from the Internet could pose safety risks. The authorsshould describe how they avoided releasing unsafe images.•We recognize that providing effective safeguards is challenging, and many papers donot require this, but we encourage authors to take this into account and make a bestfaith effort.12.Licenses for existing assetsQuestion: Are the creators or original owners of assets (e.g., code, data, models), used inthe paper, properly credited and are the license and terms of use explicitly mentioned andproperly respected?Answer: [Yes]Justification: See Section 3.Guidelines:• The answer NA means that the paper does not use existing assets.• The authors should cite the original paper that produced the code package or dataset.•The authors should state which version of the asset is used and, if possible, include aURL.• The name of the license (e.g., CC-BY 4.0) should be included for each asset.•For scraped data from a particular source (e.g., website), the copyright and terms ofservice of that source should be provided.•If assets are released, the license, copyright information, and terms of use in thepackage should be provided. For popular datasets, paperswithcode.com/datasetshas curated licenses for some datasets. Their licensing guide can help determine thelicense of a dataset.•For existing datasets that are re-packaged, both the original license and the license ofthe derived asset (if it has changed) should be provided.19•If this information is not available online, the authors are encouraged to reach out tothe asset’s creators.13.New AssetsQuestion: Are new assets introduced in the paper well documented and is the documentationprovided alongside the assets?Answer: [NA]Justification: This work does not release new assets.Guidelines:• The answer NA means that the paper does not release new assets.•Researchers should communicate the details of the dataset/code/model as part of theirsubmissions via structured templates. This includes details about training, license,limitations, etc.•The paper should discuss whether and how consent was obtained from people whoseasset is used.•At submission time, remember to anonymize your assets (if applicable). You can eithercreate an anonymized URL or include an anonymized zip file.14.Crowdsourcing and Research with Human SubjectsQuestion: For crowdsourcing experiments and research with human subjects, does the paperinclude the full text of instructions given to participants and screenshots, if applicable, aswell as details about compensation (if any)?Answer: [NA]Justification: This work does not involve crowdsourcing nor research with human subjects.Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Including this information in the supplemental material is fine, but if the main contribu-tion of the paper involves human subjects, then as much detail as possible should beincluded in the main paper.•According to the NeurIPS Code of Ethics, workers involved in data collection, curation,or other labor should be paid at least the minimum wage in the country of the datacollector.15.Institutional Review Board (IRB) Approvals or Equivalent for Research with HumanSubjectsQuestion: Does the paper describe potential risks incurred by study participants, whethersuch risks were disclosed to the subjects, and whether Institutional Review Board (IRB)approvals (or an equivalent approval/review based on the requirements of your country orinstitution) were obtained?Answer: [NA]Justification: This work does not involve crowdsourcing nor research with human subjects.Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Depending on the country in which research is conducted, IRB approval (or equivalent)may be required for any human subjects research. If you obtained IRB approval, youshould clearly state this in the paper.•We recognize that the procedures for this may vary significantly between institutionsand locations, and we expect authors to adhere to the NeurIPS Code of Ethics and theguidelines for their institution.•For initial submissions, do not include any information that would break anonymity (ifapplicable), such as the institution conducting the review.20 |
uwagVHmyNA | Flow-DPO: Improving LLM Mathematical Reasoningthrough Online Multi-Agent LearningYihe Deng1,2, Paul Mineiro21University of California, Los Angeles2Microsoft ResearchAbstractMathematical reasoning is a crucial capability for Large Language Models (LLMs),yet generating detailed and accurate reasoning traces remains a significant chal-lenge. This paper introduces a novel approach to produce high-quality reasoningtraces for LLM fine-tuning using online learning Flows . Our method employsan incremental output production Flow, where component LLMs collaborativelyconstruct solutions through iterative communication. We train the Flow usingonline Direct Preference Optimization (DPO) learning with rollouts, generatingDPO pairs for each training example and updating models in real-time. We di-rectly compare the quality of reasoning traces generated by our method with thoseproduced through direct model inference, demonstrating the effectiveness of ourapproach in improving LLM performance in mathematical reasoning tasks.1 IntroductionMathematical reasoning is a fundamental and vital aspect of Large Language Model (LLM) capabili-ties, as it is intrinsically linked to logical consistency and problem-solving abilities (Yu et al., 2023;Lu et al., 2023; Zhang et al., 2024c; Gao et al., 2024; Liu et al., 2024). This area has gained significantresearch interest, partly due to the ease with which results can be verified. Despite the abundanceof datasets containing mathematical questions and answers, generating detailed, accurate, and clearreasoning steps remains a significant challenge. While human annotators excel at providing correctanswers, their intermediate steps are often too concise or disorganized, rendering the data inadequatefor training LLMs. Consequently, researchers increasingly utilize LLM-generated reasoning traces formodel fine-tuning. Given the limited feedback provided by mere correctness of final answers, there isgrowing interest in having the target model generate its own reasoning traces for self-improvement.This approach is particularly relevant in two scenarios: (1) advancing a frontier model (i.e., enhancinga model that is already among the best available), and (2) addressing the high costs associated withusing large closed-source models compared to smaller open-source alternatives.Previous research in this domain has primarily focused on collecting accurate reasoning traces fromthe model itself through inference and filtering (Zelikman et al., 2022; Yuan et al., 2023; Singhet al., 2023; Hosseini et al., 2024; Pang et al., 2024; Zelikman et al., 2024), subsequently utilizingthese traces for Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO) (Rafailovet al., 2024). Rejection sampling Fine-Tuning (RFT) (Yuan et al., 2023), a standard and effectiveapproach, augments training data by collecting and filtering unique model responses that yield correctanswers. This method is commonly associated with outcome reward, which is based on the finalanswer. Consequently, another research avenue explores process reward, aiming to generate superiorreasoning traces through step-by-step verification or reward mechanisms. While human annotation ofeach reasoning step has been shown to significantly enhance model performance (Lightman et al.,2023), the substantial cost of such annotations has led researchers to approximate process reward bytreating reasoning steps that result in correct answers as preferred steps (Wang et al., 2024a; Zhanget al., 2024b; Lai et al., 2024; Wang et al., 2024b). In essence, given identical training prompts(questions) and desired outcomes (answers), the research community is actively seeking effective andefficient methods to generate high-quality reasoning traces for LLM fine-tuning. This process can be38th Conference on Neural Information Processing Systems (NeurIPS 2024).conceptualized as a two-step approach: the data collection step, which aims to identify a “ Better ”operator for trace production, and the SFT step, which “ Compiles ” the collected data into a singleLLM model in a System 1 fashion.This paper focuses on designing a novel and improved pipeline for obtaining high-quality reasoningtraces. We directly compare the quality of reasoning traces generated by our method with thoseproduced through direct model inference, using the same volume of data for SFT, filtering on thecorrect answers and comparing the SFT-ed model performances. Our approach proposes the use ofonline learning Flows to generate such traces, as opposed to single model inferences. These Flowscomprise a collection of component LLMs based on the same architecture, which collaborativelyconstruct solutions through iterative communication (Mineiro, 2024). Specifically, we introduce anincremental productions flow, wherein one LLM generates a limited number of tokens as answerchunks, while another determines whether the maintained partial answer has reached completion.Furthermore, we train our Flow using online DPO learning with rollouts, generating a batch of DPOpairs for each training example at every answer chunk and updating the models as the training datacomes in. This core concept aligns with process reward models (PRMs) (Lightman et al., 2023),aiming to generate superior traces incrementally, thus providing denser rewards during fine-tuning.Our method offers greater flexibility by not constraining itself to predefined “reasoning steps”. Instead,it allows for adjustable chunk sizes, accommodating fine-grained chunks of mere dozens of tokensand generalizing to outcome reward models when larger chunk sizes are employed. Lastly, ourapproach remains compatible with further enhancements such as data augmentation and DPO.2 MethodQuestionAnswer LLMStop LLMFinal AnswerPartialReasoningAnswerChunkAppendNo YesFigure 1: Illustration of the incremental production flow. TheAnswer LLM is designated to generate an answer chunk witha limited number of tokens. The Stop LLM determines if thecurrent partial answer has reached a satisfying final answer.Incremental Output ProductionFlow. We experimented withdifferent flow architectures andachieved the best results with theincremental output production de-sign. As illustrated in Figure 1,this implementation primarily in-volves two independent LLMs ofidentical architecture: the AnswerLLM and the Stop LLM. The An-swer LLM generates one chunk ofthe response at a time, adhering toa predetermined maximum tokenlimit. We maintain a partial an-swer, initially empty, to which eachnewly generated answer chunk isappended. This partial answer is then evaluated by the Stop LLM to determine whether the com-plete response has been achieved. This iterative process continues until the Stop LLM signals thecompletion of the final answer. Thus, the Flow incrementally constructs the response, with smallerchunk sizes enabling more granular control and larger chunk sizes approximating single-pass modelgeneration. Notably, both the Answer LLM and Stop LLM start from the same base model but arefine-tuned with distinct LoRA adaptors to specialize in their respective tasks.Online Flow Learning with Rollouts. We further enhance the Flow through online DPO learning,incorporating random rollouts at each output node. Figure 2 illustrates this training process. For eachinput question, the Flow initiates with the Answer LLM generating an answer chunk, continuinguntil a complete response is produced. Given this output chain, we then perform a random rolloutat each output node. For instance, after the initial answer chunk generation and the Stop agent’s"No" determination, we allow the Flow to generate an alternative answer chunk, building upon theprevious partial answer. This process continues until a second complete answer is reached. If thetwo answers differ in correctness, we consider them a DPO pair for the Answer LLM, with thechunk leading to the correct answer chosen as the preferred response. Importantly, both the AnswerLLM and the Stop LLM are involved in these rollouts and subsequent fine-tuning, with the latterbeing evaluated on its stopping decisions. For each training instance comprising a question and ananswer, we generate a batch of DPO pairs to train both LLMs. This approach enables an onlinetraining scheme, updating the models incrementally as new data is processed. This methodologyshares similar intuition with the concurrent MCTS-based approaches (Zhang et al., 2024a,b), which2traverses the tree of reasoning steps by selecting the most promising child steps until an answer isreached. From each newly expanded step, they perform a random rollout to estimate the reward ofthat step. However, we only perform one random rollout at each node without traversing through atree for better efficiency. Additionally, rather than optimizing over pre-defined reasoning steps, weperform online DPO learning on fine-grained answer chunks.StartStop?No......Answer:8Q: The four points A(-4, 0), B(0, -4), X(0, 8), Y(14, k) are grouped on the Cartesian plane. If segment AB is parallel to segment XY what is the value of k?Let's break it down step by step:\n1. Since segment AB is parallel to segment XY, it means that they have the same slope ...2. Now, we need to find the slope of segment XY. We can do this by using the formula:\n slope = ......... ...... Answer:62. We can find the slope of segment AB by using the formula:\n slope = rise/run = ( -4 - 0 ) / ( 0 - (-4) ) = -4 / 4 = -1RolloutIncorrectCorrectFigure 2: Illustration of the DPO training with rollouts. At each node of the initial generation, we doa random rollout that is different from the original node and continue generation to a final answer. Apair that leads to different answers (correct and incorrect) is considered a DPO training data.3 Results3.1 Experiment Setup.In our experiments, we consider one LLM model for the entire Flow (Answer LLM and Stop LLM) aswell as the Compile step. For the model, we employ two recent and competitive models of differentscales: Llama-3-8B-Instruct andPhi-3-medium-128k-instruct (14B). To investigate theeffectiveness of our method, we utilize MetaMath (Yu et al., 2023) as the training dataset. MetaMathis derived from the training data of GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021),enhanced through data augmentation techniques. We evaluate the quality of reasoning traces duringthe Compile step on both GSM8K and MATH datasets. In the Flow learning phase, we use separateLoRA adapters for the Answer LLM and Stop LLM to specialize their capabilities during DPOtraining. In the Compile phase, we collect an equal amount of data with traces that lead to correctanswers from the flow and the baseline, enabling an independent assessment of reasoning quality byexamining how it enhances a single model’s performance through SFT. We uniformly used a subsetof1,500data from MetaMath for all baselines in Compile. For consistency across all baselines, wemaintain identical hyperparameters and system prompts in both the SFT process and evaluation.3.2 Progressive Validation AccuracyWe begin by examining the progressive validation accuracy of the Flow during online DPO trainingwith rollouts. Progressive validation accuracy is defined as the cumulative accuracy of the model onincoming training data prior to training:AccNprog=1NNXi=1IΘ(i−1)(xi) =yi,where Nis the number of seen training data, represents the language model fine-tuned on the first i−1data points, xiis the i-th question in data and yiis the correct answer. This metric serves as a reliableindicator of the Flow’s generalization performance throughout the training process. Figures 3 and 4illustrate the progressive validation accuracy of our Flow, both with and without training, alongsidethe zero-shot performance of a single LLM generating reasoning and answers in one step. Withouttraining, the Flow’s inference accuracy marginally underperforms that of the standalone model. Thisdiscrepancy indicates the Flow’s initial inefficiency in managing task-specific requirements, such asexplicitly determining when to conclude reasoning or continue based on partial answers. These resultshighlight the importance of the training process in optimizing the Flow’s performance for complexreasoning tasks. Meanwhile, online DPO training effectively enhances the Flow’s ability to generalizeto new data during online learning across various LLM models. For the Llama-3-8B-Instructmodel, online DPO learning significantly improves the Flow’s performance by 20% within just 2,000training instances. Similarly, for the Phi-3-medium-128k-instruct model, which demonstrates3strong initial performance in mathematical reasoning with a 79% zero-shot accuracy on the trainingdata, online DPO learning yields a notable improvement of 4 percentage points, reaching nearly83% accuracy. We note that, the online training scheme enables us to use the progressive validationaccuracy as a good indicator for early stopping.250 500 750 1000 1250 1500 1750 2000# of training samples4550556065Progressive Validation Accuracy (%)Llama-3-Instruct on MetaMathw/out trainingw/ trainingmodel zero-shotFigure 3: Progressive validation accuracy ofLlama-3-Instruct on MetaMath.250 500 750 1000 1250 1500 1750 2000# of training samples787980818283Progressive Validation Accuracy (%)Phi-3-Medium on MetaMathw/out trainingw/ trainingmodel zero-shotFigure 4: Progressive validation accuracy of Phi-3-Medium on MetaMath.3.3 CompileTo assess the quality of flow-generated reasoning traces, we compare them with model-generatedtraces produced in the Compile step, where we use the collected reasoning traces for SFT on a singleLLM. We establish baselines using the model’s zero-shot accuracy and its performance after SFT withground truth traces from the dataset. Additionally, we consider model-generated correct traces forSFT as a strong and widely-used self-training baseline (Yuan et al., 2023; Singh et al., 2023; Hosseiniet al., 2024). To ensure a fair comparison of trace quality, we maintain consistent data volumes acrossall baselines, focusing exclusively on traces that lead to correct answers. The comparative results arepresented in Table 1.Table 1: Main results of comparing the quality of traces used for SFT. We report the accuracy (%) foreach model fine-tuned on an identical set of prompts, but with varying answer sources. For Phi-3, wedoes not include GSM8K due to its already optimized performance on the dataset.Model Method GSM8K MATHLlama-3-Instruct (8B)0-shot 48.9 22.3SFT (ground-truth) 67.2 25.1SFT (self-generated) 68.8 24.2SFT (Flow-generated) 71.3 27.8Phi-3-Medium (14B)0-shot - 35.4SFT (ground-truth) - 36.3SFT (self-generated) - 36.5SFT (Flow-generated) - 38.63.4 Qualitative Analysis.We present a qualitative analysis comparing the reasoning traces generated by our proposed flowmethod with the ground-truth annotations from the dataset. Through examination of examplequestions, we demonstrate that while both approaches arrive at correct answers, the flow-generatedreasoning traces provide more detailed instructional guidance. To validate these observations, weemployed GPT-4o to conduct a systematic qualitative evaluation of response quality. The evaluationresults align with our observations, indicating that flow-generated responses (Response B) have betterquality. Specifically, it emphasized that flow-generated reasoning traces provides clearer instructionalguidance by emphasizing key concepts, such as the negative reciprocal relationship between theslopes of perpendicular lines, while maintaining a logical, step-by-step solution process. The responseavoids unnecessary complexity, focusing on essential steps, which improves accessibility and ease ofunderstanding. We provide an additional example in Appendix A.1.4Reasoning Traces ExampleQuestion :The graph of the equation x+ 2y+ 3 = 0 is perpendicular to the graph of the equationax+ 2y+ 3 = 0. What is the value of a?Response A (Ground-truth) :The graph of the equation x+ 2y+ 3 = 0 has a slope of −12. Since the graphs areperpendicular, the slope of the graph of ax+ 2y+ 3 = 0 must be the negative reciprocal of−12, which is 2. So, the equation ax+ 2y+ 3 = 0has a slope of 2. To find a, we can rewritethe equation as 2y=−ax−3and divide both sides by 2to get y=−a2x−32. Comparing thisto the slope-intercept form y=mx+b, we see that the slope mis−a2. Since we know theslope is 2, we have −a2= 2. Multiplying both sides by −2, we get a=−4. The answer is: -4Response B (Flow-generated) :To solve this problem, we need to understand the concept of perpendicular lines in thecontext of linear equations. When two lines are perpendicular, their slopes are negativereciprocals of each other.Let’s start by rewriting the given equation x+ 2y+ 3 = 0in the slope-intercept form, whichisy=−12x−32. This tells us that the slope of the line is −12.Now, we need to find the slope of the line represented by the equation ax+ 2y+ 3 = 0. Wecan do this by rewriting the equation in the slope-intercept form, which is y=−a2x−32.This tells us that the slope of this line is −a2.Since the two lines are perpendicular, their slopes are negative reciprocals of each other.This means that −12·(−a2) = 1, which simplifies to a=−4.Therefore, the value of ais−4.GPT-4o Evaluation :Response B is better, and here’s why:1. Conceptual ExplanationResponse A correctly identifies that the two lines are perpendicular and correctly uses thenegative reciprocal relationship between the slopes to solve for a. However, it jumps intothe algebra without much explanation of why the slopes should be negative reciprocals ofeach other. Response B takes the time to explicitly explain the concept of perpendicularity,emphasizing that the slopes of perpendicular lines are negative reciprocals. This makes theresponse more educational and ensures the reader understands the reasoning behind themath.2. Clarity in SolvingResponse A derives the correct equation and solves it correctly but introduces someunnecessary steps. For instance, the intermediate steps involving rewriting the equationas2y=−ax−3and dividing both sides by 2 are more complicated than necessary.Response B simplifies the approach by immediately focusing on the slope-intercept formof both equations. It efficiently explains how to extract the slope and directly comparesthe slopes of the two equations, making the process more straightforward and easier to follow.Conclusion :Response B provides a clearer and more thorough explanation, focusing on both the conceptand the algebra, while avoiding unnecessary complications. Its step-by-step approach makesit more accessible and easier to follow.4 ConclusionWe present an effective approach for generating high-quality reasoning traces using a given LLM,thereby enhancing its mathematical reasoning capabilities. Our method introduces Flow, a multi-agentconversation framework that leverages multiple LLMs to collaboratively solve complex problemsthrough iterative communication. We further refine this framework using online DPO learning withrollouts. Empirical evaluations across various LLM scales on widely-used mathematical benchmarksdemonstrate the efficacy of our method, revealing that Flow-generated traces exhibit superior qualitycompared to both ground-truth and model-generated correct traces. The adaptability of our approachin accommodating different chunk sizes and its applicability to diverse complex reasoning tasksunderscore its potential scalability across various scenarios and domains. Future research directionsmay include optimizing the training process, investigating the impact of increased data, and extendingour methodology to other fields requiring sophisticated reasoning capabilities.5ReferencesCOBBE , K.,KOSARAJU , V.,BAVARIAN , M.,CHEN, M.,JUN, H.,KAISER , L.,PLAPPERT , M.,TWOREK , J.,HILTON , J.,NAKANO , R.,HESSE , C.andSCHULMAN , J.(2021). Training verifiersto solve math word problems. arXiv preprint arXiv:2110.14168 .GAO, B.,SONG , F.,YANG , Z.,CAI, Z.,MIAO, Y.,DONG , Q.,LI, L.,MA, C.,CHEN, L.,XU, R.ET AL .(2024). Omni-math: A universal olympiad level mathematic benchmark for large languagemodels. arXiv preprint arXiv:2410.07985 .HENDRYCKS , D.,BURNS , C.,KADAVATH , S.,ARORA , A.,BASART , S.,TANG, E.,SONG, D.andSTEINHARDT , J.(2021). Measuring mathematical problem solving with the math dataset. arXivpreprint arXiv:2103.03874 .HOSSEINI , A.,YUAN, X.,MALKIN , N.,COURVILLE , A.,SORDONI , A.andAGARWAL , R.(2024).V-star: Training verifiers for self-taught reasoners. arXiv preprint arXiv:2402.06457 .LAI, X.,TIAN, Z.,CHEN, Y.,YANG , S.,PENG, X. andJIA, J.(2024). Step-dpo: Step-wisepreference optimization for long-chain reasoning of llms. arXiv preprint arXiv:2406.18629 .LIGHTMAN , H.,KOSARAJU , V.,BURDA , Y.,EDWARDS , H.,BAKER , B.,LEE, T.,LEIKE , J.,SCHULMAN , J.,SUTSKEVER , I.andCOBBE , K.(2023). Let’s verify step by step. arXiv preprintarXiv:2305.20050 .LIU, H.,ZHENG , Z.,QIAO, Y.,DUAN, H.,FEI, Z.,ZHOU , F.,ZHANG , W.,ZHANG , S.,LIN, D.andCHEN, K.(2024). Mathbench: Evaluating the theory and application proficiency of llms witha hierarchical mathematics benchmark. arXiv preprint arXiv:2405.12209 .LU, P.,BANSAL , H.,XIA, T.,LIU, J.,LI, C.,HAJISHIRZI , H.,CHENG , H.,CHANG , K.-W. ,GALLEY , M. andGAO, J.(2023). Mathvista: Evaluating mathematical reasoning of foundationmodels in visual contexts. arXiv preprint arXiv:2310.02255 .MINEIRO , P.(2024). Online joint fine-tuning of multi-agent flows. arXiv preprint arXiv:2406.04516.PANG , R. Y. ,YUAN, W.,CHO, K.,HE, H.,SUKHBAATAR , S.andWESTON , J.(2024). Iterativereasoning preference optimization. arXiv preprint arXiv:2404.19733 .RAFAILOV , R.,SHARMA , A.,MITCHELL , E.,MANNING , C. D. ,ERMON , S.andFINN, C.(2024).Direct preference optimization: Your language model is secretly a reward model. Advances inNeural Information Processing Systems 36.SINGH , A.,CO-REYES , J. D. ,AGARWAL , R.,ANAND , A.,PATIL , P.,LIU, P. J. ,HARRISON ,J.,LEE, J.,XU, K.,PARISI , A. ET AL .(2023). Beyond human data: Scaling self-training forproblem-solving with language models. arXiv preprint arXiv:2312.06585 .WANG , P.,LI, L.,SHAO, Z.,XU, R.,DAI, D.,LI, Y.,CHEN, D.,WU, Y.andSUI, Z.(2024a).Math-shepherd: Verify and reinforce llms step-by-step without human annotations. In Proceedingsof the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: LongPapers) .WANG, Z.,LI, Y.,WU, Y.,LUO, L.,HOU, L.,YU, H.andSHANG , J.(2024b). Multi-step problemsolving through a verifier: An empirical analysis on model-induced process supervision. arXivpreprint arXiv:2402.02658 .YU, L.,JIANG , W.,SHI, H.,YU, J.,LIU, Z.,ZHANG , Y.,KWOK , J. T. ,LI, Z.,WELLER , A.andLIU, W. (2023). Metamath: Bootstrap your own mathematical questions for large languagemodels. arXiv preprint arXiv:2309.12284 .YUAN, Z.,YUAN, H.,LI, C.,DONG, G.,LU, K.,TAN, C.,ZHOU, C.andZHOU, J.(2023). Scalingrelationship on learning mathematical reasoning with large language models. arXiv preprintarXiv:2308.01825 .6ZELIKMAN , E.,HARIK , G.,SHAO, Y.,JAYASIRI , V.,HABER , N.andGOODMAN , N. D. (2024).Quiet-star: Language models can teach themselves to think before speaking. arXiv preprintarXiv:2403.09629 .ZELIKMAN , E.,WU, Y.andGOODMAN , N. D. (2022). Star: Self-taught reasoner. arXiv preprintarXiv:2203.14465 .ZHANG , D.,WU, J.,LEI, J.,CHE, T.,LI, J.,XIE, T.,HUANG , X.,ZHANG , S.,PAVONE , M.,LI,Y.ET AL .(2024a). Llama-berry: Pairwise optimization for o1-like olympiad-level mathematicalreasoning. arXiv preprint arXiv:2410.02884 .ZHANG , D.,ZHOUBIAN , S.,YUE, Y.,DONG , Y.andTANG , J.(2024b). Rest-mcts*: Llm self-training via process reward guided tree search. arXiv preprint arXiv:2406.03816 .ZHANG , R.,JIANG , D.,ZHANG , Y.,LIN, H.,GUO, Z.,QIU, P.,ZHOU, A.,LU, P.,CHANG , K.-W. ,GAO, P. ET AL .(2024c). Mathverse: Does your multi-modal llm truly see the diagrams in visualmath problems? arXiv preprint arXiv:2403.14624 .7A Experiment DetailsA.1 Additional Qualitative AnalysisIn the example below, we show an additional qualitative example, which similarly indicates thatflow-generated responses (Response B) have better quality. The GPT-4o evaluation emphasizedthat flow-generated reasoning traces provides structured, step-by-step explanations that enhancecomprehensibility. Furthermore, the evaluation highlighted the notable improvement in both clarityand explanatory depth within individual reasoning steps generated by our flow method.Reasoning Traces ExampleQuestion :Riku has 25 times more stickers than Kristoff. If Kristoff has x stickers, Riku has 2210stickers. What is the value of unknown variable x?Response A (Ground-truth) :To solve this problem, we need to determine the value of x, which represents the number ofstickers Kristoff has. We know that Riku has 25 times more stickers than Kristoff. Therefore,Riku has x + 25x = 26x stickers. We also know that Riku has 2210 stickers. We can set upthe equation as follows: 26x = 2210 Let’s solve for x: x = 2210 / 26 x = 85 The value of xis 85. #### 85 The answer is: 85Response B (Flow-generated) :Let’s break this problem down step by step.We know that Riku has 25 times more stickers than Kristoff. This means that Riku’s stickercount is 25 times Kristoff’s sticker count.We are also given that Riku has 2210 stickers. So, we can set up an equation:Riku’s stickers = 25 ×Kristoff’s stickers + Kristoff’s stickersSince Riku has 2210 stickers, we can substitute this value into the equation:2210 = 25 ×Kristoff’s stickers + Kristoff’s stickersNow, let’s simplify the equation by combining the terms:2210 = 26 ×Kristoff’s stickersTo solve for Kristoff’s stickers, we can divide both sides of the equation by 26:85GPT-4o Evaluation :Response B is better, and here’s why:1. Clarity and StructureResponse A jumps straight into the math, stating the equation and solving it quickly.However, it lacks the step-by-step breakdown, which is important when explaining thereasoning behind the solution. Response B explains the problem in a more detailed andstructured manner. It starts by clarifying the relationship between the number of stickersRiku and Kristoff have and then carefully sets up the equation. This approach is moreinstructional, making it easier for the reader to follow the logic behind the solution.2. Correct Interpretation of the ProblemBoth responses solve the problem correctly by deriving the equation 26x= 2210and solvingforx= 85. Response A has an issue in wording, where it states, “Riku has x + 25x =26x stickers.” While the math is correct, the way it’s worded is slightly misleading becausesaying “25 times more” is interpreted differently than “25 times as many.” Response Bclarifies this better by stating “Riku’s sticker count is 25 times Kristoff’s sticker count.”3. Educational ValueResponse B takes the time to simplify the equation and explain each step, making it moreeffective for a student or learner to understand the process. It also uses better terminologylike “simplify the equation” and breaks down the division, which enhances the educationalquality of the response.Conclusion :Response B provides a more thorough, structured, and pedagogically sound explanation. Itguides the user through the problem, making it easier to understand each step.A.2 HyperparametersExperiments were conducted on NVIDIA RTX A100 (80G) GPU clusters. The online DPO fine-tuning process for reasoning trace generation takes approximately 36-48hours on 4GPUs. TheCompile (SFT) process takes approximately 1hour on 1GPU.8Table 2: Online DPO Fine-tuning hyperparameters.Learning rate 5e-6Optimizer AdamGlobal batch size 32DPO coefficient β 1.0Gradient clipping 1.0lora_r 8lora_alpha 8lora_dropout 0.05lora_target allMaximum steps (chunks) 6Chunk size 160Table 3: Comiple (SFT) hyperparameters.Learning rate 2e-4Optimizer AdamGlobal batch size 16Gradient clipping 1.0gradient_accumulation_steps 2warmup_ratio 0.1lora_r 16lora_alpha 16lora_dropout 0.05lora_target allTraining epochs 3A.3 PromptsPrompt for Answer LLM<System>Youareahelpfulmathematicalassistant. Explainyourreasoningandthensolvetheproblem.<User>{Input Question}Prompt for Stop LLM<System>You are an assistant that replies with Yes or No only. In the following task, you are given aProblem and a Candidate Solution. Decide if the Candidate Solution is correct.<User>Problem: {problem}Candidate Solution: {solution}Is the Candidate Solution correct? Reply with Yes or No only.Prompt for GPT-4o EvaluationReview the user’s question and the corresponding two responses. Determine which responseis better.User: <Question>Response A: <response A>Response B: <response B>After examining the original question, response, and both judgments:- Explain which response is better and why.- Conclude with a clear statement of which response is better.9 |
uHtzqZKbeK | Skywork-Math: Data Scaling Laws for MathematicalReasoning in LLMs — The Story Goes OnLiang ZengSkywork [email protected] [email protected] this paper, we investigate the underlying factors that potentially enhance themathematical reasoning capabilities of large language models (LLMs). We arguethat the data scaling law for math reasoning capabilities in modern LLMs is farfrom being saturated, highlighting how the model’s quality improves with increasesin data quantity. To support this claim, we introduce the Skywork-Math modelseries, supervised fine-tuned (SFT) on common 7B LLMs using our proposed 2.5M-instance Skywork-MathQA dataset. Skywork-Math 7B has achieved impressiveaccuracies of 51.2% on the competition-level MATH benchmark and 83.9% onthe GSM8K benchmark using only SFT data, outperforming an early version ofGPT-4 on MATH. The superior performance of Skywork-Math models contributesto our novel two-stage data synthesis and model SFT pipelines, which includethree different augmentation methods and a diverse seed problem set, ensuringboth the quantity and quality of Skywork-MathQA dataset across varying difficultylevels. Most importantly, we provide several practical takeaways to enhance mathreasoning abilities in LLMs for both research and industry applications.1 IntroductionMore is different.—-Philip W. Anderson, 1972Reasoning ability is a hallmark of human intelligence [ 14,11,27]. Although Large LanguageModels (LLMs) have recently demonstrated significant capabilities in various tasks such as con-versation [ 1,3,18] and summarization [ 28,30,21,2], they often struggle with complex reasoningtasks [ 11,17,29]. One particularly challenging area is mathematical reasoning [ 13,9,32,4,12],which requires the ability to solve mathematical problems and derive logical conclusions in a step bystep manner [27, 20, 23, 31, 24].Two prevailing beliefs guide researchers and practitioners in enhancing mathematical reasoningabilities of LLMs. The first belief posits that complex reasoning abilities, especially mathematicalreasoning, are emergent abilities that exist in large language models but not in small models [ 27,26].Typically, models with more than 30 billion parameters exhibit the strong mathematical reasoningability [ 7]. The second belief is the seminal "superficial alignment" hypothesis [ 33], which assertsthat"A model’s knowledge and capabilities are learnt almost entirely during pre-training, whilealignment teaches it which sub-distribution of formats should be used when interacting with users." .According to this hypothesis, the alignment process, primarily through supervised fine-tuning (SFT),does not inject new knowledge or improve inherent abilities but rather adjusts the output responseformat. This implies that the strong mathematical reasoning ability may not be significantly improvedby a large amount of synthetic SFT data.38th Conference on Neural Information Processing Systems (NeurIPS 2024).50 60 70 80 90GSM8K1020304050MATHGPT-4GPT-4-0314MetaMath-7BMetaMath-13B WizardMath-70BInternLM2-Math-7BLEMA-LLaMA2-7BLEMA-LLaMA2-13BMetaMath-Mistral-7B MAmmoTH-7BMAmmoTH-13BMAmmoTH-70BWizardMath-7BWizardMath-13BMetaMath-70BLLaMA3-8B GPT-3.5-Turbo LEMA-LLaMA2-70BDeepSeekMath-Instruct-7BInternLM2-Math-20BXwin-Math-13BXwin-Math-Mistral-7BSkywork-Math-LLaMA2-7BSkywork-Math-DeepSeekMath-7BSkywork-Math-Mistral-7BChatGLM3-Math-SFT-32B/Xwin-Math-7BT op@1 Accuracy (%)Figure 1: Top1 accuracy on GSM8K [ 9] and MATH [ 13] using only SFT techniques, without usingexternal toolkits and voting techniques. Following MetaMath [ 31], we employ a zero-shot chain-of-thought evaluation framework. Skywork-Math models achieve state-of-the-art accuracy amongmodels smaller than 10B parameters using only synthetic SFT data and surpass an early version ofGPT-4 on MATH.In this paper, we re-examine these two common beliefs mentioned above regarding mathematicalreasoning abilities of LLMs. For the first belief, we introduce the Skywork-Math model series,which are supervised fine-tuned (SFT) on common 7B pre-trained LLM models without employingother complex alignment techniques such as RLHF [ 6,8] and DPO [ 19]. Skywork-Math 7B modelshave achieved impressive accuracies of 51.2% on the competition-level MATH [ 13] benchmark and83.9% on the GSM8K [ 9] benchmark, notably outperforming an early version of GPT-4 on MATH.Our empirical findings, consistent with the conclusions in [ 16], suggest that strong mathematicalreasoning ability can indeed exist in common 7B language models. Moreover, scaling up syntheticSFT data can further enhance the mathematical reasoning ability of Skywork-Math 7B models.For the second belief, we propose Skywork-MathQA high-quality SFT dataset containing 2.5 mil-lion instances, which is much larger than open-sourced dataset of its kind to date, such as Meta-MathQA [ 31] containing 395K samples. We empirically observe that the scaling law curve on the SFTalignment for mathematical reasoning in modern LLMs is far from being saturated (ref. Figure 3).We have carefully scaled the Skywork-MathQA SFT dataset with diverse and high-quality samplesspecifically within the mathematical domain to enhance the model’s capability in understanding andsolving mathematical problems.Due to the scarcity of high-quality and challenging mathematical data, various pipelines and promptshave been employed to generate synthetic mathematical data [ 31,23,16,24,27,25]. To addressthis deficiency, we employ GPT-4 to generate a substantial amount of synthetic data through a noveltwo-stage data synthesis pipeline, in conjunction with the corresponding model SFT process. In stage1, our objective is to obtain normal synthetic problems to enhance the models’ general comprehensionof mathematical problems. To maintain the diversity in data selection process, we utilize the core-setapproach [ 22] on enlarged seed problems. However, as the data volume increases, we empirically2Highlighted Takeaways• The potential for math reasoning capabilities in modern LLMs is far from exhausted. Thequality of LLMs can significantly improve with increases in data quantity (ref. Figure 3).Skywork-Math 7B models already demonstrate strong mathematical reasoning abilities bySFTing on common 7B pre-trained LLM models.•The learning process for accessing the math reasoning ability involves multiple stages.Training LLM models in a meaningful order, from the easy problems to the hard ones, canprovide performance improvements.•When scaling the synthetic SFT dataset, increasing the diversity of seed problems andaugmentation methods can improve the math reasoning performance of LLMs.•Selecting influential data with high-quality from a large dataset is non-trivial [ 10]. Ourempirical results indicate that some straightforward methods to select the so-called "high-quality" data may not increase (and can even hurt) LLMs’ performance compared torandomly selecting data. The selection process involves multiple constraints, and the "high-quality" data could significantly decrease the difficulty level of problems, thus negativelyimpacting the performance of LLMs.•The LLM models have strong knowledge transfer capabilities for mathematical reasoningacross bilingual benchmarks (i.e., English and Chinese). We hypothesize that this can beattributed to the inherent nature of symbols and numbers in math problems, which retaintheir intrinsic meaning regardless of the language used.•Although Skywork-Math 7B models have achieved considerable improvement in robustnesstests compared to other open-source LLM models, they remain sensitive to the distractorsin math word problems compared with proprietary GPT-4 models.•Sparse MOE models cannot clearly exceed the performance upper bound of their densecounterparts through SFT alignment in the context of math reasoning.•Two subtle but crucial practical implementation techniques—preventing data leakageand considering the influence of model maximum length—significantly impact the finalperformance of LLM models.observe that the relationship between performance and data quantity begins to plateau. Accordingly,in stage 2, we diversify the dataset further by introducing a proportion of augmented hard problems,thereby exposing the model to more challenging mathematical questions. Without continual pre-training on a large-scale math corpus [ 23,5], Skywork-Math models achieve impressive performancewith just supervised fine-tuning on common pre-trained LLMs containing only 7B parameters.Most importantly, we provide valuable insights and practical takeaways to enhance the mathematicalreasoning ability in LLMs, benefiting both research and industry communities 2.2 MethodIn this section, we present the methodology of Skywork-Math 7B models, as illustrated in Figure 2.Skywork-Math models aim to enhance math reasoning abilities during the model alignment process,particularly in the SFT stage, using common and publicly available 7B pre-trained models. We employa two-stage SFT approach, in conjunction with two data synthesis pipelines to produce high-qualitydata. In stage 1, we feed base pre-trained models with our generated normal synthetic problems (2.1Minstances) to produce an intermediate model. In stage 2, to mitigate the diminishing returns in LLMs’performance as the quantity of data increases, we generate hard synthetic problems (0.4M instances)and develop our Skywork-Math models. To ensure the quality of data, we primarily utilize GPT-4-1106-preview [ 1] to generate 2.5M-instance synthetic Skywork-MathQA dataset. Due to spaceconstraints, detailed methods and experimental results can be found in the appendix.3Stage2Stage1BaseModel(a) Data Synthesis Pipeline(b) Model SFT PipelineIntermediateModelIntermediateModelSkywork-MathModelLLMHardSyntheticProblemsNormalSyntheticProblemsHardSeedProblemsSeedProblemsData SynthesisData SynthesisSeedSyntheticProblemsData SynthesisDiversity SelectionFigure 2: Overview of our proposed two-stage method. (a) The data synthesis pipeline of theSkywork-MathQA dataset. (b) The model SFT pipeline of the Skywork-Math model series.3 Data Scaling Laws in SFT on Mathematical ReasoningIn Figure 3, we illustrate the relationship between synthetic SFT dataset size and model performanceon GSM8K and MATH. The curve clearly exhibits a scaling law relationship between the size of SFTdata and model’s performance. Here are some in-depth observations:Quantity Breeds Quality. To enhance the mathematical reasoning abilities in LLMs, increasing thequantity of synthetic data can significantly improve the quality of model performance. This scalingtrend implies that, while SFT with a small amount of data could achieve decent results [ 33], utilizinga larger scale of synthetic SFT data can further improve math reasoning performance.Diminishing Returns from Continual Pre-Training. The DeepSeekMath-Base [ 23] 7B model,which has been continually pre-trained with 120B math-related tokens sourced from the web, initiallydemonstrates superior performance. However, as we increase the synthetic dataset size in theSkywork-MathQA dataset, this advantage diminishes and is eventually surpassed by the Mistral [ 15]7B base model. As the amount of SFT data increases, Skywork-Math-Mistral-7B and Skywork-Math-LLaMA2-7B catch up in performance to the Skywork-Math-DeepSeekMath-7B. This suggests thatwhile specialized pre-training provides a strong initial boost, its benefits are not consistently scalableand can be matched by increasing the quantity of synthetic SFT data.Effect of Problem Difficulty. The accuracy performance for Skywork-Math 7B model seriessignificantly increases as the synthetic data size expands from 2.1M to 2.5M, corresponding to thestage 2 in our data synthesis pipeline. This performance improvement in the final stage of datascaling indicates that incorporating more complex problems— ranging from Level 3 to Level 5 in theMATH dataset—has a substantial positive impact on model performance. This finding underscoresthe importance of not only generating a large quantity of data but also including more challengingproblems to push the limits of math reasoning abilities of LLM models.4Figure 3: The zero-shot top1 performance of Skywork-Math 7B model series improves significantlywith the increased size of synthetic SFT data in the Skywork-MathQA dataset, showing a clear trendof enhanced math as data quantity increases.4 ConclusionWe study how to empower mathematical reasoning abilities for common 7B pre-trained LLM models.We propose the Skywork-MathQA dataset, consisting of 2.5 million diverse and high-quality SFTinstances, implemented through our novel two-stage data synthesis pipeline. We introduce Skywork-Math model series, demonstrating that common small-scale 7B language models can stimulatestrong mathematical reasoning ability using only synthetic SFT data. Skywork-Math models achievestate-of-the-art accuracy among models smaller than 10B parameters using only synthetic SFT data,surpassing 70B LLM models and an early version of GPT-4 on MATH. These results suggest thatthe data scaling law for mathematical reasoning in LLM models remains significant and promising.Notably, this research provides several valuable insights and practical takeaways to advance ourunderstanding of the capabilities and limitations of LLMs in mathematical reasoning.5References[1]Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia LeoniAleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4technical report. arXiv preprint arXiv:2303.08774, 2023.[2]Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxan-dra Cojocaru, Mérouane Debbah, Étienne Goffinet, Daniel Hesslow, Julien Launay, QuentinMalartic, et al. The falcon series of open language models. arXiv preprint arXiv:2311.16867 ,2023.[3] Anthropic. The claude 3 model family: Opus, sonnet, haiku. 2024.[4]Daman Arora, Himanshu Gaurav Singh, et al. Have llms advanced enough? a challengingproblem solving benchmark for large language models. arXiv preprint arXiv:2305.15074 , 2023.[5]Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer,Albert Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language modelfor mathematics. In The 3rdWorkshop onMathematical Reasoning andAIatNeurIPS’23 ,2023.[6]Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, DawnDrain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmlessassistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862 ,2022.[7]Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models arefew-shot learners. Advances inneural information processing systems, 33:1877–1901, 2020.[8]Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, JavierRando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problemsand fundamental limitations of reinforcement learning from human feedback. arXiv preprintarXiv:2307.15217, 2023.[9]Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christo-pher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR ,abs/2110.14168, 2021.[10] Logan Engstrom, Axel Feldmann, and Aleksander Madry. Dsdm: Model-aware dataset selectionwith datamodels. arXiv preprint arXiv:2401.12926, 2024.[11] Gael Gendron, Qiming Bao, Michael Witbrock, and Gillian Dobbie. Large language models arenot strong abstract reasoners yet. In ICLR 2024 Workshop: How FarAreWeFrom AGI, 2024.[12] Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu,Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark forpromoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprintarXiv:2402.14008, 2024.[13] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, DawnSong, and Jacob Steinhardt. Measuring mathematical problem solving with the MATHdataset. In Thirty-fifth Conference onNeural Information Processing Systems Datasets andBenchmarks Track (Round 2), 2021.[14] Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: Asurvey. arXiv preprint arXiv:2212.10403, 2022.[15] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra SinghChaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, LucileSaulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.6[16] Chen Li, Weiqi Wang, Jingcheng Hu, Yixuan Wei, Nanning Zheng, Han Hu, Zheng Zhang, andHouwen Peng. Common 7b language models already possess strong math capabilities. arXivpreprint arXiv:2403.04706, 2024.[17] Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-ChunZhu, and Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large languagemodels. In Thirty-seventh Conference onNeural Information Processing Systems, 2023.[18] Andrew Peng, Michael Wu, John Allard, Logan Kilpatrick, and Steven Heidel. Gpt-3.5 turbofine-tuning and api updates. 2023.[19] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, andChelsea Finn. Direct preference optimization: Your language model is secretly a reward model.Advances inNeural Information Processing Systems, 36, 2024.[20] David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematicalreasoning abilities of neural models. arXiv preprint arXiv:1904.01557, 2019.[21] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow,Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow,Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, ThomasWang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, RachelBawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier,Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay,Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji,Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue,Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. Bloom:A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100, 2022.[22] Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-setapproach. arXiv preprint arXiv:1708.00489, 2017.[23] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li,Y Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in openlanguage models. arXiv preprint arXiv:2402.03300, 2024.[24] Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Git-man. Openmathinstruct-1: A 1.8 million math instruction tuning dataset. arXiv preprintarXiv:2402.10176, 2024.[25] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, AakankshaChowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in languagemodels. arXiv preprint arXiv:2203.11171, 2022.[26] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, DaniYogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of largelanguage models. arXiv preprint arXiv:2206.07682, 2022.[27] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le,Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.Advances inneural information processing systems, 35:24824–24837, 2022.[28] Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, ChengCheng, Weiwei Lü, Rui Hu, et al. Skywork: A more open bilingual foundation model. arXivpreprint arXiv:2310.19341, 2023.[29] Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, NajoungKim, Jacob Andreas, and Yoon Kim. Reasoning or reciting? exploring the capabilities andlimitations of language models through counterfactual tasks. CoRR, abs/2307.02477, 2023.[30] Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv,Da Pan, Dian Wang, Dong Yan, et al. Baichuan 2: Open large-scale language models. arXivpreprint arXiv:2309.10305, 2023.7[31] Longhui Yu, Weisen Jiang, Han Shi, Jincheng YU, Zhengying Liu, Yu Zhang, James Kwok,Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematicalquestions for large language models. In The Twelfth International Conference onLearningRepresentations, 2024.[32] Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied,Weizhu Chen, and Nan Duan. Agieval: A human-centric benchmark for evaluating foundationmodels. arXiv preprint arXiv:2304.06364, 2023.[33] Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, AviaEfrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and OmerLevy. Lima: Less is more for alignment. CoRR, abs/2305.11206, 2023.8 |
tIlDF5B6T4 | Learning Mathematical Rules with Large LanguageModelsAntoine Gorceix∗J.P. Morgan AI ResearchBastien Le Chenadec∗J.P. Morgan AI ResearchAhmad Rammal∗J.P. Morgan AI ResearchNelson VadoriJ.P. Morgan AI ResearchManuela VelosoJ.P. Morgan AI ResearchAbstractIn this paper, we study the ability of large language models to learn specificmathematical rules such as distributivity or simplifying equations. We presentan empirical analysis of their ability to generalize these rules, as well as to reusethem in the context of word problems. For this purpose, we provide a rigorousmethodology to build synthetic data incorporating such rules, and perform fine-tuning of large language models on such data. Our experiments show that ourmodel can learn and generalize these rules to some extent, as well as suitably reusethem in the context of word problems.1 IntroductionWe focus on a specific aspect of mathematical reasoning, namely the ability of large languagemodels (LLMs) to learn specific abstract mathematical rules and to reuse them while answeringword problems, namely questions formulated in natural language that are usually made up of a fewsentences describing a scenario that needs to be solved through mathematics. We will also refer tothese mathematical rules as "skills". Thus, we study the following research question: Can LLMslearn and generalize specific mathematical rules, and apply them in contexts that have not been seenduring training, for example when answering word problems?We will fine-tune models on carefully built synthetic data reflecting the mathematical rules of interest,and we provide a detailed description of our methodology for building such data in section 2 andappendix B. Broadly speaking, we want our training data to be presented similarly to what you wouldfind in a mathematics textbook. For example, focusing on how to rearrange or simplify an equation,or how to use the distributivity property of the addition operator. But without word problems.The data at test time, however, will contain word problems where the model first needs to translatethe question in natural language into one or more equations, and then use various rules to solvethese equations. This is what we will call bottom-up generalization , namely the model’s ability togo from mathematical rules to answering word problems. We also provide an empirical study onthe model’s ability to generalize when increasing the mathematical complexity of the task. That is,when we increase parameters such as the number of variables, the number of equations, the variablename token length, and so on. We will call this top-down generalization , as we zoom into a givenmathematical rule to make it more complex. We comment on the related work in appendix A.Our contributions. We provide a rigorous methodology to create synthetic data containing specificmathematical rules such as manipulating equations (section 2). We show how fine-tuning models onthose rules allows them to be reused in the context of word problems, while maintaining performance*These authors contributed equally to this work.38th Conference on Neural Information Processing Systems (NeurIPS 2024).on usual benchmarks (section 3). We conduct experiments showing that models can generalize theserules to some extent when we increase the mathematical complexity of the problem, such as thenumber of variables (section 4). We find that allowing variables to take value in a larger set of tokensimproves the ability of the model to generalize distributivity to unseen variable names. Finally, weprovide an algorithm to extract mathematical expressions from text data detailed in appendix E, whichwe use to evaluate all models.2 Building Synthetic Data Incorporating Mathematical RulesWe aim at constructing synthetic data reflecting specific mathematical rules that we would like ourmodel to learn. The detailed description of the synthetic data is provided in appendix B, and ispresented similarly to what you would find in a mathematics textbook, without word problems.In section 3 we focus on bottom-up generalization. To this extent we consider the following rulespresented in section B.1: a) finding the roots of a quadratic polynomial; b) solving for a given variablein a linear equation in that variable, for example: solve for the variable xin13·x·y+ 24·y2=17·x·z+ 12·y·z.x=12·y·(z−2·y)13·y−17·z; c) simplifying terms in an equation, for example: simplifythe following expression: −9·x2−8·y+ 5·y. By grouping terms: −9·x2−3·y; d) isolating avariable in an equation of the form a=b∗(c+d+. . .)ora=b/(1/c+ 1/d+. . .).In section 4 we focus on top-down generalization. We consider rules such as distributivity, exponenti-ation, manipulation of single and pairs of equations as well as solving single steps of Gaussian elimi-nation, all presented in section B.2. In particular, we focus on the model’s ability to apply those ruleswhen the variables appearing in the equations are arbitrary strings, and study the impact on generaliza-tion performance. For example: Q: Expand this expression: (−5+soccer −dog)∗(blue−sky). A: Bythe distributivity property: −5∗blue+5∗sky+soccer ∗blue−soccer ∗sky−dog∗blue+dog∗sky.The goal is to learn the fundamental nature of the corresponding operators, which do not depend onthe variable names.3 Bottom-Up Generalization: Going from Mathematical Rules to WordProblemsWe consider three classes of word problems. Finding a solution to these problems involves mainlytwo steps. First, the model needs to correctly translate the question in natural language into one ormultiple equations. Second, the model needs to apply one or more mathematical skills to correctlymanipulate the equations to answer the question. We choose our examples such that the baselinemodel is able to perform the first step but struggles on the second. Our goal when performingfine-tuning is as follows: keep the general knowledge required to solve the first step, while providingnew tools to also solve the second. We write the solutions to these problems using chain-of-thought,i.e. we break down the reasoning in several steps. We will fine-tune Llama-3 8B Instruct on on a mixof synthetic data described in section B.1 together with data from the Orca dataset Mukherjee et al.(2023) acting as a regularizer in order to avoid catastrophic forgetting and maintain performance oncommon benchmarks (see section C.4). Additional details are provided in appendix C.Quadratic Polynomials. We consider simple geometric problems involving the calculation of areasfor shapes with the same unknown side length. The total area of all shapes is known, while theunknown side length is to be determined. The solution involves two main steps. The first one isto formulate a quadratic equation that models the relationship between the known areas and theunknown side length. This equation is always a quadratic polynomial. The second step is to findthe roots of this polynomial to find the unknown side length. Problems are constructed to alwaysyield one positive and one negative root, ensuring no ambiguity in the solution. We generate manysuch problems by varying the names (pools, fields, etc.) and shapes (square, triangle, parallelogram,rectangle) of the surfaces, the number of such surfaces per shape (in average, 2.5 surfaces per shapeper word problem), as well as the values of the areas and side lengths appearing in the problem. Wepresent an example of prompt given to the model in Figure 1.We provide three examples of prompts and responses to the model (3-shot), and assess the validityof the numerical solution. We evaluate the performance of our fine-tuned model against a baselineacross varying levels of difficulty. The difficulty scale ranges from 0 to 100%, in increments of 20%,2corresponding to the proportion of problems with non-integer roots. Non-integer roots present agreater challenge as they require more complex calculations compared to integer roots, which thebaseline model often guesses via straightforward factorization. To accurately determine non-integerroots, the discriminant of the polynomial must be calculated. We present our results in table 1.While the baseline model successfully translates the problem into an equation, it encounters difficultiesin finding the roots of the quadratic polynomial. Instead of applying the discriminant method, itprematurely attempts a hazardous factorization, often leading to errors. In contrast, the fine-tunedmodel consistently utilizes the discriminant approach, a result of targeted fine-tuning on a quadraticpolynomial solving task. As a consequence, our fine-tuned model is able to outperform the baseline.Figure 1: Word problem example - quadratic polynomial.Prompt : Jim has a total of 2 swimming pools. The first swimming pool is a square, with an unknown sidelength x. The second is a rectangle with one side measuring 1 meters and the other being the unknown sidelength x. The total area covered by these swimming pools is 2 square meters. What is the unknown sidelength x?Table 1: Accuracy (%) - quadratic polynomials (3-shot).Difficulty 0 20 40 60 80 100Llama-3 8B Instruct 14 10 10 10 9 8Llama-3 8B Fine-tuned 39 39 37 36 35 35Physics Problems Involving Resistors. We consider simple electrical circuits composed of nresistors connected either all in parallel or all in series. The model is tasked with finding the equationgoverning the circuit and isolating a variable in the equation. We provide 3 few-shot examples ofprompts and answers to the model, and then evaluate the correctness of the symbolic solution. Wepresent an example of prompt given to the model in Figure 2. We compare the performance of aFigure 2: Word problem example - resistor circuit.Prompt : You have a circuit with the following resistors: [R1 || R2]. Given that the current flowing throughthe circuit is I amp and the voltage across the circuit is U volts, express the resistance of R1 in terms of theother variables.fine-tuned model against a baseline model on different configurations of nresistors either in parallelor in series. For each configuration (i.e. [R1 || R2], [R1 - R2], etc.), 100 examples are generated byfirst generating a prompt for each possible unknown, and then sampling multiple responses from themodel if there are not enough unique problems (using a temperature of 0.1and top-5 sampling). Thefine-tuning data only contains variables from the restricted vocabulary (cf section B), with the skillof isolating a variable in an equation of the form a=b∗(c+d+. . .)ora=b/(1/c+ 1/d+. . .).Table 2 shows the results of the models on resistors in series and parallel. The fine-tuned modelperforms perfectly on resistors in series. On resistors in parallel, the fine-tuned model performssignificantly better than the baseline model, but still struggles with more complex configurationsdespite being trained on these equations.Fruit Baskets. Alice and Bob buy different quantities of fruits, each with a specific price, and bothend up paying the same total amount. The goal is to find the price of one fruit based on the prices andquantities of the others. We generate these problems by varying the names of the fruits, the numberof fruits, and the relationships between their quantities and prices. In our examples, all the fruitsdepend on the quantities and price of the first fruit, as illustrated in Figure 3. This involves setting upan equation, substituting variables, simplifying it, and solving for the unknown. The problem reducesto solving a linear equation involving three variables. We can increase the complexity of the problemby simply increasing the number of fruits that are purchased.We evaluate the model’s answer by extracting its symbolic output and comparing it to the correctone. We observed an improvement in the fine-tuned model compared to the Llama-3 8B Instruct3Table 2: Accuracy (%) - resistors problems of increasing complexity (3-shot).Resistors in series Resistors in parallelNumber of resistors 2 3 4 5 2 3 4 5Llama-3 8B Instruct 98 100 81 100 58 8 14 17Llama-3 8B Fine-tuned 100 100 100 100 68 73 60 35Figure 3: Word problem example - fruit baskets.Prompt : Alice and Bob went to the grocery store and bought the following items:- bananas: Alice bought qA1, and Bob bought qB1. The price of a single one is p1.- blueberries: Alice bought qA2where qA2= 2×p1, and Bob bought qB2where qB2= 8×qA1. Theprice of a single one is p2where p2= 5×qB1.Both ended up paying the same total price. Find the price of bananas in terms of qA1andqB1.baseline. The latter scores 19% versus 35% with the former in the case of two fruits. For three fruits,the baseline scores 0% against 13% for the fine-tuned model. While the baseline model can set upthe full equation, it struggles with simplifying or solving it. The fine-tuned model, though not perfect,performs better in finding the correct answer due to its fine-tuning on mathematical tasks such asexpression simplification and equation solving.We also considered assigning numerical values to the variables in that problem, requiring the modelto perform calculations to find a numerical solution instead of a symbolic one. We illustrate this infigure 9. We found that the models’ performances were very similar to the symbolic case. We refer tothe appendix C.3, table 3 for a full overview on the results.On the necessity to align training data with the word problems. Our experiments reveal limitationsin the ability to generalize specific mathematical rules to word problems. First, we have to make surethe form of the equations in the training data is the same as the form of the equations in the test data.For instance, the model is unable to perform on the resistors in parallel problem when it is trainedto put fractions over a common denominator in its answer. We say that the model is triggered inthe wrong modality. Another example with the resistors in parallel is as follows: if in the trainingdata, we put equations in the form " A/B ", and in the test data we put them in the form " A∗1/B",the model will not be able to perform. We say that the model is not triggered . This second exampleillustrates the importance of the choice of the few-shot examples, because the form of the equationwill generally match the form in the few-shot examples.4Top-Down Generalization: Increasing the Mathematical Complexity of theTaskWe fine-tune Llama-2 7B Chat on allthe synthetic data described in section B.2 (and that data only)and evaluate the model’s ability to perform the following rules: distributivity, commutativity, division,exponentiation, variable evaluation, remarkable identities, single equation and two equations manip-ulation . We check in section D.4 that the performance on general knowledge benchmarks remainsrelatively stable, and present experimental details in section D.6.Solving systems of equations by recursive call of our model . In section D.2, we train the model onindividual steps of the Gaussian elimination algorithm and show at test time that we can solve a fullsystem by recursively calling the model. An example is presented in figure 5.Experiments on various mathematical rules . We analyze the ability of our model to perform therules of interest under different configuration regarding the training vocabulary. We find that our fine-tuned Llama-2 model consistently outperforms other models across all considered mathematical rules.The full results are presented in section D.3. In section D.5, we present the model’s performance onsome word problems, in particular we emphasize its ability to combine skills in figures 15 and 16.Ablation study on the distributivity rule . In section D.1 we focus on the distributivity rule. We trainthe model on data where the variables appearing in the equations take value in subsets of increasing4tokenizer vocabulary sizes. We find that training on larger vocabulary sizes improves the ability ofthe model to generalize distributivity to unseen variable names as well as to increasing the number ofvariables, see figure 4.Figure 4: Validation accuracy on the distributivity rule for different vocabulary sizes. Each model isevaluated on the (100−x)%complement of its training vocabulary x%(%of tokenizer’s vocabulary).The dashed lines delimit the parameters seen during training from those unseen. From left to right,from top to bottom: x= 1,10,50,75,95.Figure 5: Detailed resolution of a system of equations by recursive call of our model on each stepi→i+ 1. The variables are dog, sky .−9∗dog−cat∗sky=−blueberry3∗dog−tree∗sky=−6step 0dog+ (cat/9)∗sky= (blueberry/ 9)3∗dog−tree∗sky=−6step 1dog+ (cat/9)∗sky= (blueberry/ 9)−(tree+ (cat/9)∗3)∗sky=−(6 + (( blueberry/ 9)∗3))step 2(dog+ (cat/9)∗sky= (blueberry/ 9)sky=(6+(( blueberry/ 9)∗3))(tree +(cat/ 9)∗3)step 3(dog= ((blueberry/ 9)−((6−((f/9)∗3))(tree +(cat/ 9)∗3)∗(cat/9)))sky=(6+(( blueberry/ 9)∗3))(tree +(cat/ 9)∗3)step 45 ConclusionWe showed how fine-tuning models on some specific mathematical rules allows them to be reused inthe context of word problems (bottom-up generalization), and also focused on the ability to generalizespecific rules such as distributivity and manipulating equations (top-down generalization). Futureresearch directions could include building a robust and rigorous methodology to create word problemdata from a set of mathematical rules.5DisclaimerThis paper was prepared for information purposes by the Artificial Intelligence Research groupof JPMorgan Chase & Co and its affiliates (“JP Morgan”), and is not a product of the ResearchDepartment of JP Morgan. JP Morgan makes no representation and warranty whatsoever anddisclaims all liability, for the completeness, accuracy or reliability of the information contained herein.This document is not intended as investment research or investment advice, or a recommendation,offer or solicitation for the purchase or sale of any security, financial instrument, financial product orservice, or to be used in any way for evaluating the merits of participating in any transaction, andshall not constitute a solicitation under any jurisdiction or to any person, if such solicitation undersuch jurisdiction or to such person would be unlawful.ReferencesAhn, J., Verma, R., Lou, R., Liu, D., Zhang, R., and Yin, W. (2024). Large language models formathematical reasoning: Progresses and challenges. In Falk, N., Papi, S., and Zhang, M., editors,Proceedings of the 18th Conference of the European Chapter of the Association for ComputationalLinguistics: Student Research Workshop , pages 225–237, St. Julian’s, Malta. Association forComputational Linguistics.Azerbayeva, Z., Schoelkopf, H., Paster, K., Santos, M. D., McAleer, S. M., Albert Q. Jiang, J. D.,Biderman, S., and Welleck, S. (2024). Llemma: An open language model for mathematics. ICLR .Bubeck, S., Chandrasekaran, V ., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y . T., Li,Y ., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., and Zhang, Y . (2023). Sparks of artificialgeneral intelligence: Early experiments with gpt-4. https://arxiv.org/pdf/2303.12712 .Chen, Z., Chen, Y ., Han, J., Huang, Z., Qi, J., and Zhou, Y . (2024). An empirical study of data abilityboundary in llms’ math reasoning. https://arxiv.org/pdf/2403.00799 .Clark, P., Cowhey, I., Etzioni, O., Khot, T., Sabharwal, A., Schoenick, C., and Tafjord, O. (2018).Think you have solved question answering? try arc, the ai2 reasoning challenge. ArXiv ,abs/1803.05457.Dettmers, T., Pagnoni, A., Holtzman, A., and Zettlemoyer, L. (2024). Qlora: Efficient finetuning ofquantized llms. Advances in Neural Information Processing Systems , 36.Gao, L., Tow, J., Abbasi, B., Biderman, S., Black, S., DiPofi, A., Foster, C., Golding, L., Hsu,J., Le Noac’h, A., Li, H., McDonell, K., Muennighoff, N., Ociepa, C., Phang, J., Reynolds, L.,Schoelkopf, H., Skowron, A., Sutawika, L., Tang, E., Thite, A., Wang, B., Wang, K., and Zou, A.(2023). A framework for few-shot language model evaluation.Gou, Z., Shao, Z., Gong, Y ., Shen, Y ., Yang, Y ., Huang, M., Duan, N., and Chen, W. (2024). ToRA:A Tool-Integrated Reasoning Agent for Mathematical Problem Solving. ICLR .Hayou, S., Ghosh, N., and Yu, B. (2024). LORA+: efficient low rank adaptation of large models.Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., and Steinhardt, J. (2020).Measuring Massive Multitask Language Understanding. arXiv (Cornell University) .Imani, S., Du, L., and Shrivastava, H. (2023). Mathprompter: Mathematical reasoning using largelanguage models. ACL Industry Track .Jelassi, S., d’Ascoli, S., Domingo-Enrich, C., Wu, Y ., Li, Y ., and Charton, F. (2023). Lengthgeneralization in arithmetic transformers. https://arxiv.org/pdf/2306.15400 .Lample, G. and Charton, F. (2020). Deep learning for symbolic mathematics. ICLR .Lewkowycz, A., Andreassen, A., Dohan, D., Dyer, E., Michalewski, H., Ramasesh, V ., Slone, A., Anil,C., Schlag, I., Gutman-Solo, T., Wu, Y ., Neyshabur, B., Gur-Ari, G., and Misra, V . (2022). Solvingquantitative reasoning problems with language models. In Koyejo, S., Mohamed, S., Agarwal, A.,Belgrave, D., Cho, K., and Oh, A., editors, Advances in Neural Information Processing Systems ,volume 35, pages 3843–3857. Curran Associates, Inc.6Luo, H., Sun, Q., Xu, C., Zhao, P., Lou, J., Tao, C., Geng, X., Lin, Q., Chen, S., and Zhang,D. (2023). Wizardmath: Empowering mathematical reasoning for large language models viareinforced evol-instruct. https://arxiv.org/pdf/2308.09583 .Mirzadeh, I., Alizadeh, K., Shahrokhi, H., Tuzel, O., Bengio, S., and Farajtabar, M. (2024). Gsm-symbolic: Understanding the limitations of mathematical reasoning in large language models.https://arxiv.org/pdf/2410.05229 .Mukherjee, S., Mitra, A., Jawahar, G., Agarwal, S., Palangi, H., and Awadallah, A. (2023). Orca: Pro-gressive Learning from Complex Explanation Traces of GPT-4. https://arxiv.org/abs/2306.02707 .Shao, Z., Wang, P., Zhu, Q., Xu, R., Song, J., Bi, X., Zhang, H., Zhang, M., Li, Y ., Wu, Y ., and Guo,D. (2024). Deepseekmath: Pushing the limits of mathematical reasoning in open language models.https://arxiv.org/pdf/2402.03300 .Taylor, R., Kardas, M., Cucurull, G., Scialom, T., Hartshorn, A., Saravia, E., Poulton, A.,Kerkez, V ., and Stojnic, R. (2022). Galactica: A large language model for science.https://arxiv.org/pdf/2211.09085 .Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y ., Bashlykov, N., Batra, S.,Bhargava, P., Bhosale, S., Bikel, D., Blecher, L., Ferrer, C. C., Chen, M., Cucurull, G., Esiobu, D.,Fernandes, J., Fu, J., Fu, W., Fuller, B., Gao, C., Goswami, V ., Goyal, N., Hartshorn, A., Hosseini,S., Hou, R., Inan, H., Kardas, M., Kerkez, V ., Khabsa, M., Kloumann, I., Korenev, A., Koura, P. S.,Lachaux, M.-A., Lavril, T., Lee, J., Liskovich, D., Lu, Y ., Mao, Y ., Martinet, X., Mihaylov, T.,Mishra, P., Molybog, I., Nie, Y ., Poulton, A., Reizenstein, J., Rungta, R., Saladi, K., Schelten, A.,Silva, R., Smith, E. M., Subramanian, R., Tan, X. E., Tang, B., Taylor, R., Williams, A., Kuan,J. X., Xu, P., Yan, Z., Zarov, I., Zhang, Y ., Fan, A., Kambadur, M., Narang, S., Rodriguez, A.,Stojnic, R., Edunov, S., and Scialom, T. (2023). Llama 2: Open foundation and Fine-Tuned chatmodels.Trinh, T., Wu, Y ., Le, Q., He, H., and Luong, T. (2024). Solving olympiad geometry without humandemonstrations. Nature .Wang, L., Xu, W., Lan, Y ., Hu, Z., Lan, Y ., Lee, R. K.-W., , and Lim, E.-P. (2023). Plan-and-solveprompting: Improving zero-shot chain-of-thought reasoning by large language models. ACL.XTX Investments (2024). AI Mathematical Olympiad - Progress Prize 1.Yang, K., Swope, A. M., Gu, A., Chalamala, R., Song, P., Yu, S., Godil, S., Prenger, R., andAnandkumar, A. (2023). Leandojo: Theorem proving with retrieval-augmented language models.NeurIPS Datasets and Benchmarks .Yuan, Z., Yuan, H., Li, C., Dong, G., Tan, C., , and Zhou, C. (2023). Scaling relationship on learningmathematical reasoning with large language models. https://arxiv.org/pdf/2308.01825 .Zellers, R., Holtzman, A., Bisk, Y ., Farhadi, A., and Choi, Y . (2019). Hellaswag: Can a machinereally finish your sentence? In Proceedings of the 57th Annual Meeting of the Association forComputational Linguistics .Zheng, C., Liu, Z., Xie, E., Li, Z., and Li, Y . (2023). Progressive-hint prompting improves reasoningin large language models. https://arxiv.org/pdf/2304.09797 .7A Related WorkIn recent years, researchers have trained large language models (LLMs) on data of unprecedentedsize, and studying the emergent capabilities of these models on various tasks has been a centralfocus Bubeck et al. (2023). Nevertheless, mathematical reasoning remains a challenge, althoughthe rate of improvement of LLMs on solving mathematical problems has been significant Ahn et al.(2024). A prize was even launched with the goal of getting AI to perform at gold medal level at theInternational Mathematical Olympiad XTX Investments (2024), emphasizing the importance of thisresearch topic. In 2024, Numina won the first progress prize. Their recipe involved tool integration aswell as fine-tuning DeepSeek Math Shao et al. (2024), a model built using scalable math pre-training.Properly evaluating LLMs on mathematical reasoning tasks presents significant challenges, amongwhich: i) data contamination: since LLMs are trained on vast amounts of data, it is often not clearthat some problems have not been seen during training; ii) assessment of proof correctness: it is noteasy to automatically determine whether a sequence of mathematical reasoning steps - and moregenerally a proof - is correct. On this topic we note the work conducted with proof assistants such asLean Yang et al. (2023), for which we can immediately determine whether a proof is true or false asmathematical reasoning is seen as a computer program; iii) the variety of mathematical problemsis significant and their complexity/difficulty is not trivial to establish. On this topic we believe thatwork allowing to suitably categorize problems would be beneficial to the community.ToRA Gou et al. (2024) combines natural language reasoning (rationale) with symbolic solvers(programmatic reasoning). WizardMath Luo et al. (2023) applies Reinforcement Learning fromEvol-Instruct Feedback (RLEIF) to the domain of mathematics. MathPrompter Imani et al. (2023)uses the zero-shot chain-of-thought prompting technique to generate multiple algebraic expressionsto solve the same problem in different ways and thereby raises the confidence level in the outputresults. Alphageometry Trinh et al. (2024) significantly advances the state-of-the-art on geometryproblems at the level of International Mathematical Olympiads. Llemma Azerbayeva et al. (2024)is a suite of models tailored for mathematical reasoning. Galactica Taylor et al. (2022) is a largelanguage model that can store, combine and reason about scientific knowledge. Minerva Lewkowyczet al. (2022) solves scientific and mathematical questions in natural language, generates step-by-stepsolutions using Latex notation, and was trained on data composed of scientific and mathematicalinformation. Yuan et al. (2023) applies rejection sampling fine-tuning (RFT), that uses supervisedmodels to generate and collect correct reasoning paths in order to augment fine-tuning datasets.Zheng et al. (2023) proposes progressive-hint prompting that enables automatic multiple interactionsbetween users and LLMs by using previously generated answers as hints to progressively guidetowards correct answers. Wang et al. (2023) proposes plan-and-solve prompting, that consists of twocomponents: first, devising a plan to divide the entire task into smaller subtasks, and then carrying outthe subtasks according to the plan. Bubeck et al. (2023) present among other things an experimentalstudy of GPT4’s mathematical reasoning capabilities. Jelassi et al. (2023) examine how transformerscope with two challenges: learning basic integer arithmetic, and generalizing to longer sequencesthan seen during training . Lample and Charton (2020) present a deep learning approach for taskssuch as symbolic integration and solving differential equations. LeanDojo Yang et al. (2023) exploresmathematical reasoning using the Lean proof assistant, among other things they suitably retrieverelevant premises from a vast math library. Mirzadeh et al. (2024) presents an interesting studyon how model performance varies when slightly varying problems from the GSM8K benchmark.Chen et al. (2024) studies the impact of mixing different types of data on mathematical reasoningability. Finally, the recent survey paper Ahn et al. (2024) presents the state of the field on the topic ofmathematical reasoning.8B Detailed Description of the Synthetic Data Incorporating MathematicalRulesOur data is a combination of text and mathematical expressions, which are built from a set of typesandoperations . Atype is a fundamental object that can be manipulated (a variable, an integer, adecimal number, etc.). By applying arithmetic operations to these types, we can construct complexmathematical expressions in the form of Abstract Syntax Trees (AST).Specifically we consider the following types for our synthetic data:• integers,• decimals with up to two decimal places,•arbitrary strings, either made from the concatenation of ktokens taken from a subset of thetokenizer’s vocabulary1which we call full vocabulary , or from the concatenation of a latinor greek letter with a digit which we call restricted vocabulary ,We overload our programming language’s arithmetic operators ( +,−,∗,/, ˆ) to implement a set ofsimplification rules. For instance, when adding two integers, we generally simplify the expression byevaluating the sum (though each simplification rule can be turned off). When no simplification applies,a node is added to the AST. This custom data structure allows us to control precise simplificationdetails such as the order of terms, whether to evaluate numerical expressions, etc. In contrast,using a symbolic mathematics library such as Sympy would not allow us to control these details, assome simplifications are performed systematically. We also implement a translation function thatconverts our custom data structure to Sympy’s data structure, and vice versa, to handle more advancedsimplification rules externally, striking a balance between control and flexibility.Types only hold their own value (e.g. 4, "x", 12.51), and the negative sign is considered a unaryoperation, while the other operations are binary. Thus our expressions are ASTs where each leaf isa type, and each internal node is a unary or binary operation. In simpler cases we might considerexpressions of the form ω1⊙1ω2⊙2··· ⊙ n−1ωn, where each ωiis a type, and each ⊙iis anarithmetic operation.We are then able to represent our data structure as a string, taking into account operator precedenceand associativity. We can thus construct mathematical expressions programatically, and generateprompts for our models. All of our prompts are of the form ∗start Question ∗midAnswer ∗end, where∗start,∗midand∗endare model-specific strings that separate the question from the answer. Forexample ∗start=[INST], ∗mid=[/INST] and ∗endis empty in the case of Llama-2 Chat. Similarstring are used in the case of Llama-3 Instruct. In the below, ωiand⊙iwill denote respectively typesand operations, which value can change depending on the context. For each one of the cases below,the number of terms appearing in the expressions is sampled randomly between some pre-specifiedlower and upper bounds.B.1 Rules Used in Section 3For the rules below, when relevant we use Sympy to solve equations symbolically and factorize theresult.Quadratic polynomials. We want our model to solve quadratic polynomials. ω1, ω2, ω3,are randomintegers and ω4is an arbitrary token.• Question. Solve this quadratic equation: ω1×ω24+ω2×ω4+ω3= 0.• Answer. We first calculate the discriminant ∆ =ω22−4ω1ω3.–If∆>0: The equation has two distinct real roots:*ω4=−ω2±√∆2ω1–If∆ = 0 : The equation has exactly one real root:*ω4=−ω22ω11We consider the full vocabulary minus a set of tokens such as spaces, arithmetic symbols {+,−,∗, /,=},etc.9–if∆<0: The equation has two complex conjugate roots.*ω4=−ω2±i√∆2ω1•Example. Question. Solve this quadratic equation: x2+x−2 = 0 . Answer. To solvethis quadratic equation, we first calculate the discriminant ∆:∆ = ω22−4ω1ω3=12−4(1)(−2) = 9 . Since ∆>0, the equation has two distinct real roots, calculated usingthe quadratic formula: x=−ω2±√∆2a. Substituting the values, we get: root 1:−1+√92= 1,root 2:−1−√92=−2.First-order single equation. We consider a single equation that we solve for a certain variable. Toensure simplicity, we focus on first-order equations, which can be solved through straightforwardalgebraic transformations without the need for advanced techniques. We generate the answer usingSymPy. The equation ωAconsists of terms on both sides, ωA:=Pni=1ai·ωi=Pni=1bi·ωi, for avariable ωk.• Question. Solve for the variable ωkin the equation: ωA.• Answer. ωk=Pi̸=kωi.(bi−ai)ak−bk.•Example. Question. Solve for the variable xin13·x·y+ 24·y2= 17·x·z+ 12·y·z.Answer. x=12·y·(z−2·y)13·y−17·z.Simplify expression. We want the model to reduce an expression ωA:=Pni=1ai·ωi+Pni=1bi·ωitohave it in its canonical form, by grouping terms and simplifying some others ωB:=Pni=1(ai+bi)·ωi.We generate the answer using SymPy.• Question. Simplify the following expression: ωA.•Answer. By grouping terms and eliminating common factors, we get the simplified expres-sion: ωB.•Example. Question. Simplify the following expression: −9·x2−8·y+ 5·y.Answer. Bygrouping terms: −9·x2−3·y.Resistors equations. We consider specific equations that appear in the context of electrical circuits,where resistors are connected in parallel or in series. In the first case, we isolate a variable in anequation of the form ωA:=ω1=ω2×Pni=3ωi:• Question. Solve for the variable ωkinωA.• Answer. ωk=ω1ω2−P3≤i≤n;i̸=kωi• Example. Question. Solve for the variable xinu=v×(w+x).Answer. x=uv−w.In the second case, we isolate a variable in an equation of the form ωB:=ω1=ω2/(Pni=31/ωi):• Question. Solve for the variable ωkinωB.• Answer. ωk= 1/ω2ω1−P3≤i≤n;i̸=k1/ωi•Example. Question. Solve for the variable xinu=v/(1/w+ 1/x).Answer. x=1/(v/u−1/w).B.2 Rules Used in Section 4Distributivity. We expand an expression ωAconsisting of the product of two brackets ωA=(Pni=1ωi)∗(Pmj=1ωn+j), to obtain the distributed expression ωB=Pni=1Pmj=1ωi∗ωn+j.• Question. Expand this expression: ωA.• Answer. By the distributivity property: ωB.•Example. Question. Expand this expression: (−5 +soccer −dog)∗(blue−sky).Answer.By the distributivity property: −5∗blue+ 5∗sky+soccer ∗blue−soccer ∗sky−dog∗blue+dog∗sky.10Single equation manipulation. We consider two rules. The first consists in computing an affinetransformation ω3∗ωA+ω4of an equation ωAof the form ω1=ω2, namely ω3∗ω1+ω4=ω3∗ω2+ω4. The second rule consists in "simplifying" an equation ωAof the form ω1=ω2and performs three steps at once: putting the equation in standard form2, canceling out terms onboth sides, and factorizing the remaining terms. We use the convention that equations are alwaysencapsulated within two semicolons, in order to help the model recognize equations.• Questions. a) Assumptions: E1 ; ωA;. Compute: ω3∗E1 +ω4.b)Simplify: ; ωA;.• Answers. a-b) We get: ; ωB;.•Example a). Question. Assumptions: E1 ; cat+dog−2 =tree;. Compute: sky∗E1 + 8 .Answer. We get: ; sky∗cat+sky∗dog−sky∗2 + 8 = sky∗tree+ 8;.•Example b). Question. Simplify the equation: ; 7∗dog+sky∗cat+sky∗dog−sky∗2+8 =sky∗tree+ 7∗dog+ 8−blueberry ;.Answer. We get: ; sky∗(cat+dog−2−tree) +blueberry = 0;.Commutativity. We study the commutativity of an operation ⊙A∈ {∗ ,±}in an expressionωA:=ω1⊙1ω2⊙2··· ⊙ n−1ωn. If⊙A=∗, then⊙i=∗ ∀i, and if ⊙A=±, then each ⊙iiseither +or−with equal probability. We specify the two types ωi,ωjamong nto which we want toapply commutativity, and the commuted expression ⊙Bis the same as ⊙Abut where we switch thepositions of ωi,ωj. If one of the latter appears more than once in the expression (i.e. ωi=ωkorωj=ωkfor some k̸=i, j), we choose the first from the left.• Question. Apply the commutativity property of ⊙1toω1,ω2inωA.• Answer. By the commutativity property of ⊙1:ωA=ωB.•Example. Question. Apply the commutativity property of ∗tocat,5in5∗cat∗soccer .Answer. We get: cat∗5∗soccer .Division. We consider three properties of the division:ω1⊙1ω2ω3=ω1ω3⊙1ω2ω3, where ⊙1∈ {+,−};ω1∗ω2ω3∗ω4=ω1ω3∗ω2ω4;ω1ω2ω3=ω1ω2∗ω3.• Question. Use a fundamental property of the division in ωA.• Answer. By a property of the division: ωB.•Example. Question. Use a fundamental property of the division in (−sky∗blue)/(tree∗−cloud ).Answer. By a property of the division: (−sky)/(tree)∗(blue)/(−cloud ).Exponentiation. We consider five properties of the exponentiation: ω0= 1; the definition ofexponentiation ωn=ω∗ ··· ∗ ω|{z}ntimesifnis a positive integer, the reciprocal of the latter if nis a negativeinteger; if n, m are signed integers: ωn∗ωm=ωn+m;ωn1∗ωn2= (ω1∗ω2)n;ωnωm=ωn−m.•Questions. a) Apply the definition of the exponentiation to: ωA.b)Give the value of: ω0.c)Use a fundamental property of the exponentiation: ωA.•Answers. a) By definition of the exponentiation: ωB.b-c)By a property of the exponentiation:ωB..• Examples.Question. Use a fundamental property of the exponentiation: (bluesky−3)/(bluesky9).Answer. By a property of the exponentiation: bluesky−12.Question. Apply the definition of the exponentiation to: bluesky3.Answer. By definition ofthe exponentiation: bluesky ∗bluesky ∗bluesky .Question. Give the value of: bluesky0.Answer. By fundamental property of the exponenti-ation: bluesky0= 1.2that is, in the form ω1−ω2= 0.11Variable evaluation. We teach the model to substitute k≥0types in a given equation, based onsome assumptions on these types (if k= 0, nothing occurs). Precisely, assume that we have anequation ωAof the form ω1⊙1ω2⊙2··· ⊙ n−1ωn=ωn+1⊙n+1ωn+2⊙n+2··· ⊙ n+m−1ωn+m,and that by assumption ωni=ω′nifor a set of indexes {ni}and some types ω′ni. Then, the newequation ωBis the same as ωAbut with ω′nireplaced by ωni.•Question. Assumptions: ωn1=ω′n1, . . . , ω nk=ω′nk. Based on the assumptions, evaluateωA.• Answer. The evaluated expression: ωB.•Example. Question. Assumptions: sky= 2, blueberry =cat. Based on the assumptions,evaluate dog−tree+sky=blueberry .Answer. The evaluated expression: dog−tree+2 =cat.Remarkable identities. We consider the remarkable identity (ω1⊙1ω2)2=ω21+ω22⊙12ω1ω2,where ⊙1∈ {+,−}.• Question. Expand this expression: ωA.• Answer. By the remarkable identity properties: ωB.•Example. Question Expand this expression: (sky−blueberry )2.Answer. By the remark-able identity properties: sky2+blueberry2−2∗sky∗blueberry .Combination of two equations. We consider two skills. The first skill consists in computing an affinetransformation ω5∗ωA+ω6∗ωBof two equations ωA,ωBof respective forms ω1=ω2andω3=ω4.Here, ω1,ω2,ω3,ω4are expression types. This yields the equation ω5∗ω1+ω6∗ω3=ω5∗ω2+ω6∗ω4.The second skill consists in being able to determine whether two equations are equivalent. This isdone in several steps: first, the two equations are put into standard form. Then, we compute thedifference of their left-hand sides. If we get 0 = 0 , then we conclude that the two equations areequivalent, otherwise they are not as the left-hand residual is not zero. In the latter case, the residualis provided in factorized form.Systems of equations. Remember that Gaussian elimination is a method to solve a system of nlinear equations in ksteps. Each step transforms the system at step ito a new, simpler system at stepi+ 1. Our aim is to teach the model to perform any step i→i+ 1. For this, we create a system of kequations with n≥kvariables. The latter are always taken to be of the m-token base type. Then,we generate a matrix Mof coefficients of size k×(n+ 1) associated with the system, where thecoefficients are arbitrary types:M=a11a12. . . a 1n|b1a21a22. . . a 2n|b2............|...ak1ak2. . . a kn|bkLetx1, . . . , x nbe the variables, thus our system of equations at step 0 is:a11x1+a12x2+. . .+a1nxn=b1a21x1+a22x2+. . .+a2nxn=b2...ak1x1+ak2x2+. . .+aknxn=bkWe then apply a step of Gaussian elimination, which is a method that transforms the coefficient matrixof the system into row-echelon form and then back-substitutes. First, we perform row operationsto transform the augmented matrix into row-echelon form. The goal is to create zeros below thediagonal elements. To do this, we first divide the first row L1by the coefficient of x1and then∀i∈[2, k]we update the expression of the ithrowLi,Li→Li−ai·L1. This eliminates the x1variable from equation i. Then we do the same ∀i∈[2, k],∀j∈[i, k]we update Lj→Lj−aj·Li.This eliminates variable ifrom each equation j. At the end of this first phase, we obtain a new matrix12of coefficients:M′=1a′12. . . a′1n|b′10 1 . . . a′2n|b′2............|...0 0 . . . 1|b′kBack substitution is then performed to express variables in terms of other variables and coefficientssuch that the new system is simpler.We store in our data all steps i→i+ 1. This gives us a set of pairs (stepi, step i+1)withstepithestate of the system at step iof Gaussian elimination. From each such pair, we construct two kinds oftextual data:•One step Gaussian elimination The question contains stepiand the answer containsstepi+1.•One "clean" step Gaussian elimination Because our variables and coefficients in thesystem are essentially m-tokens, working performing Gaussian elimination can yield verylarge equations after a few steps. Thus, later steps will by construction always containcoefficients that are complex mathematical equations. In order to remediate to this issue, wereplace the coefficients appearing in front of the variables in step iby simpler coefficients,namely m-tokens. Concretely, we will replace large coefficients of the formdog−cat∗2tree∗houseby a single m-token. We call stepcleani the simplification of stepi. The question containsstepcleani and the answer contains the corresponding stepcleani+1.Systems are built according to this format: ; [;equation 1; ;equation 2;. . .;equation n; ];. The ideabehind this convention is to help the model spot equations using the symbol ;. Below we provide anexample of Gaussian elimination:•Question. The variables: dog, sky .Perform one step of Gaussian elimination : ;[;−9∗dog−cat∗sky=−blueberry ;;3∗dog−tree∗sky=−6;];.•Answer. We get: ;[; dog+ (cat/9)∗sky= (blueberry/ 9);;3∗dog−tree∗sky=−6;];.•Example. Question. The variables: dog, sky . Perform one step of Gaussian elimination:;[;−9∗dog−cat∗sky=−blueberry ;;3∗dog−tree∗sky=−6;];.Answer. We get:;[;dog+ (cat/9)∗sky= (blueberry/ 9);;3∗dog−tree∗sky=−6;];.13C Bottom-Up Generalization: Going from Mathematical Rules to WordProblemsBelow are the hyperparameters used during our experiments:• One epoch of training.•Quantized fine-tuning with low-rank adaption (QLoRA Dettmers et al. (2024)) with a rankof256and a dropout of 0.1, and a quantization of 4bits for the weights.• Learning rate of 10−5with cosine decay.•AdamW optimizer quantized with 8 bits, weight decay of 0.001and gradient clipping of0.3.• Batch size of 16.We run our experiments on 4 Nvidia A10 GPUs with 24GB of video memory on an AWS g5.12xlinstance, thus fitting one replica of the model on each GPU.C.1 Quadratic PolynomialsWe generate 100 examples of quadratic polynomial problems by systematically varying the surfacetype (e.g., swimming pools, fields), the names of the owners (e.g., Jim, Alice), the number of eachsurface (square, rectangle, triangle, parallelogram) as well as the known length (for rectangle, triangle,parallelogram). We ensure that the discriminant of all the polynomials are perfect squares. Thisconstraint avoids evaluating the model’s ability to compute square roots.We provide an example of prompt and response for our problems (Figure 6).Figure 6: Example of a word problem and response - quadratic polynomials.Prompt : Jim has a total of 2 swimming pools. The first swimming pool is a square, with an unknown sidelength x. The second is a rectangle with one side measuring 1 meters and the other being the unknown sidelength x. The total area covered by these swimming pools is 2 square meters. What is the unknown sidelength x?Response : The total area is the sum of the area of the square and the rectangle. The area of the squareisx2, the area of the rectangle is x, and the total area of all fields is 2 square meters. Therefore, theequation to solve is: x2+x= 2. To solve the quadratic equation, we first calculate the discriminant ∆:∆ =b2−4ac= 12−4(1)(−2) = 9 . Since ∆>0, the equation has two distinct real roots, calculatedusing the quadratic formula: x=−b±√∆2a. Substituting the values, we get: root 1:−1+√92= 1, root 2:−1−√92=−2. Only one root is positive, so the solution is x= 1.In our experiments, the baseline model demonstrated suboptimal performance. To address this, weintroduced a 3-shot prompting that explicitly detailed the steps for calculating the roots, leading toa marginal improvement in performance, though it remained relatively low ( ≈10%). While thebaseline successfully formulates the quadratic polynomial equation, it fails to consistently applythe correct approach for solving it. Instead of computing the discriminant, the baseline frequentlyattempts an erroneous factorization, which reduces its overall effectiveness.We fine-tune the baseline model in order to improve its ability to find the roots of the polynomialby computing the discriminant. For training, we generate a dataset based on three mathematicalrules: quadratic polynomials (200,000 examples), simplify expression (100,000 examples), andsingle equation manipulation (100,000 examples). These rules are detailed in section B.1, B.2.Additionally, we incorporate 250,000 examples from the Orca dataset Mukherjee et al. (2023) as aregularizer during training. A larger dataset is generated for quadratic polynomials due to the greatercomplexity of the task.In the training data, we ensure that the discriminant of the polynomials are always perfect squares,preventing the model from learning how to compute square roots.Let’s consider the quadratic polynomial expression ω1×ω24+ω2×ω4+ω3. Initially, the trainingdata included all possible integer values for ω1andω2between −10and10, and for ω3between14−100and100. However, this range proved too broad, making it difficult for the model to learneffectively. As a result, the fine-tuned model frequently miscalculated the discriminant. To addressthis issue, we narrowed the parameter distribution, restricting ω1andω2to positive integers, and ω3to negative values, better aligning the training data with our data at test time.C.2 Physics Problems Involving ResistorsWe fine-tuned the model on 100,000 examples with equations akin to those necessary to solveresistor circuits. 50,000 examples correspond to resistors in series, and 50,000 examples correspondto resistors in parallel. We provide an example of prompt and response for a resistor circuit inparallel (Figure 7). We also add 25,000 examples from the Orca dataset Mukherjee et al. (2023) as aregularizer during training.For this particular problem we only consider symbolic solutions, thus we do not introduce numericalvalues for the variables. We evaluate the model’s response by extracting the symbolic solution andcomparing it to the correct one.Figure 7: Example of a prompt and response for a resistor circuit.Prompt : You have a circuit with the following resistors: [R1 || R2]. Given that the current flowing throughthe circuit is I amp and the voltage across the circuit is U volts, express the resistance of R1 in terms of theother variables.Response : Using properties of resistors in series and parallel, and Ohm’s law, we can determine that theequation governing the circuit is U = I/(1/R2 + 1/R1). Isolating the unknown, we get the symbolic solutionR1 = 1/(I/U - 1/R2).The error modes are similar for the baseline and fine-tuned models. The model is always able to findthe correct equation, but the fine-tuned model makes less mistakes. These mistakes are mainly due tosign errors or failing to inverse a fraction (i.e. if 1/x=Athenx= 1/A).C.3 Fruit BasketsFor the fruit baskets examples we trained a model mainly on two mathematical rules: First-ordersingle equation (150,000 examples), and Simplify expression (150,000 examples).We selectedthe hyperparameters for these rules (such as the number of terms and variables) to ensure that thegenerated data aligns with the format of equations typically found in the fruit baskets problem. Werefer to section B.2 for the full details about these rules. Additionally, we incorporate 150,000examples from the Orca dataset Mukherjee et al. (2023) as a regularizer during training. The FruitBaskets problem requires setting up an equation, simplifying it to its canonical form, and solving fora specific variable. By fine-tuning the model on these rules, it can generalize and effectively simplifyand solve the equation in Fruit Baskets problems."Numerical" Fruit Baskets. We also considered assigning numerical values to the variables inthat problem, requiring the model to perform calculations to find a numerical solution instead of asymbolic one. We illustrate this in figure 9. We found that the models’ performances were similar inthe symbolic and numerical cases. This is showcased in table 3.Table 3: Accuracy (%) - fruit basket problems (3-shot).ModelSymbolic Numerical2 Fruits 3 Fruits 2 Fruits 3 FruitsLlama-3 8B Instruct 19 0 21 6Llama-3 8B Fine-tuned 35 13 36 1315Figure 8: Example of a prompt and response for a symbolic fruit basket problem.Prompt : Alice and Bob went to the grocery store and bought the following items:- bananas: Alice bought qA1, and Bob bought qB1. The price of a single one is p1.- blueberries: Alice bought qA2where qA2= 2×p1, and Bob bought qB2where qB2= 8×qA1. Theprice of a single one is p2where p2= 5×qB1.Both ended up paying the same total price. Find the price of bananas in terms of qA1andqB1.Response : Since both Alice and Bob ended up paying the same total price for the fruits they bought, wecan set up the following equation:qA1×p1+qA2×p2=qB1×p1+qB2×p2We substitute the known relationships:qA1×p1+ 2×p1×5×qB1=qB1×p1+ 8×qA1×5×qB1Letx=p1,y=qA1, andz=qB1. The equation becomes:y×x+ 2×x×5×z=z×x+ 8×y×5×zSimplifying:y×x+ 10×x×z=z×x+ 40×y×zSolving for x, we get:x=40×y×zy+ 9×zSince x=p1,y=qA1, andz=qB1, we have:p1=40×qA1×qB1qA1+ 9×qB1Figure 9: Example of a prompt and response for a numerical fruit basket problem.Prompt : Alice and Bob went to the grocery store and bought the following items:- bananas: Alice bought qA1, and Bob bought qB1. The price of a single one is p1.- blueberries: Alice bought qA2where qA2= 2×p1, and Bob bought qB2where qB2= 8×qA1. Theprice of a single one is p2where p2= 5×qB1.Both ended up paying the same total price. Find the price of bananas, given that qA1= 2andqB1= 1.Response : Since both Alice and Bob ended up paying the same total price for the fruits they bought, wecan set up the following equation:qA1×p1+qA2×p2=qB1×p1+qB2×p2We substitute the known relationships:qA1×p1+ 2×p1×5×qB1=qB1×p1+ 8×qA1×5×qB1Given that qA1= 2andqB1= 1, the equation becomes:1×p1+ 2×p1×5×2 = 2×p1+ 8×1×5×2Simplifying:p1+ 20×p1= 2×p1+ 80Solving for p1, we get:p1=801916C.4 Benchmark PerformanceTable 4: Benchmark performance.Model GSM8K (5-shot) MMLU (0-shot) ARC-Challenge (0-shot) HellaSwag (0-shot)Llama-3 8B Instruct 64.1 58.6 49.6 62.7Llama-3 8B Fine-tuned (Resistors) 65.4 60.8 51.8 65.1Llama-3 8B Fine-tuned (Polynomials) 68.9 60.8 51.4 66.4Llama-3 8B Fine-tuned (Fruits) 69.1 60.9 51.7 65.7The performance on these benchmarks remains relatively stable after fine-tuning, demonstrating thatthe newly acquired skills did not disrupt the model’s prior knowledge.17DTop-Down Generalization: Increasing the Mathematical Complexity of theTaskWe fine-tune Llama-2 7B Chat Touvron et al. (2023) on allthe data described in section B.2(and that data only) and evaluate the model’s ability to perform the following rules: distributivity,commutativity, division, exponentiation, variable evaluation, remarkable identities, single equationand two equations manipulation . The research questions that we want to answer are:• Can the model learn the rules that it has been trained on?• Can the model generalize the rules that it has been trained on?• Can the model use the rules when prompted differently than during training?• Can the model use combinations of the individual rules that it has been trained on?Hyperparameters and compute used during experiments are detailed in section D.6. We will essentiallyconsider two versions of our fine-tuned model: one trained on distributivity data only (section D.1),and one trained on all mathematical rules (sections D.2, D.3, D.5). In section D.4, we evaluateour trained models on general knowledge benchmarks in order to check that performance remainsrelatively stable.D.1 Distributivity Data: Impact of Training on Various Tokenizer Vocabulary Sizes onGeneralization PerformanceWe generate training data of 2 million examples of the distributivity rule as described in section B.2,where each bracket contains at most 5 types ωi: the number of variables is a random integer from 1 to5. Each type is either an integer (probability 10%) or a k-token (probability 90%), where k∈[1,3].The variables appearing in the equations take value in subsets of increasing tokenizer vocabularysizes. For this, we split the tokenizer vocabulary of 32,000 tokens into partitions of increasing sizes(1%, 10%, 50%, 75%, 95% and 100%). We evaluate the model both on the complement of itstraining vocabulary (i.e. on examples that it has never seen during training), as well as on its trainingvocabulary. For each example, we report a score of 1 if the model prediction matches exactly theexpected answer (i.e., the two strings are equal), and 0 otherwise. We present experimental results infigures 4, 10.We find that training on larger vocabulary sizes improves the ability of the model to generalizedistributivity to unseen variable names as well as to increasing the number of variables. The model’sperformance on the training vocabulary is stable accross different vocabulary sizes, and significantlyhigher than the performance on the complement vocabulary, even for the largest vocabulary size. Interms of generalization, the models reach decent performance on their training vocabulary (considerthat distributing 7times 7terms leads to 49terms). In particular, we observe that the model struggleswith some particular tokens (e.g. chinese or cyrillic characters). Other than that, it makes mistakes onvery large expressions, forgetting chunks of the expression towards the end, or confusing signs.In figure 11 we evaluate the model trained on the full vocabulary on a subset of the vocabularyrestricted to latin alphabet (lowercase and uppercase). The model’s performance increases in thegeneralization domain, suggesting some sensibility to certain tokens.18Figure 10: Validation accuracy on the distributivity rule for different vocabulary sizes. Each model isevaluated on its training vocabulary. From left to right, from top to bottom: x= 1,10,50,75,95,100. The dashed lines delimit the parameters seen during training from those unseen.Figure 11: Validation accuracy on the distributivity rule for the full vocabulary with only latincharacters. The dashed lines delimit the parameters seen during training from those unseen.19D.2 Solving Systems of Equations by Recursive CallSystem solving is a difficult task for most language models, as it requires the elaboration of a seriesof reasoning steps. Language models have a finite context window, which constrains the amount ofinformation they can process. Furthermore, numerous variables must be maintained and manipulatedover multiple steps. Language models struggle with long-range dependencies, where the relevantinformation from earlier steps needs to be accurately memorized and applied in later steps. Evenstate-of-the-art models have difficulty solving systems with more than two variables and equations.To overcome these problems, our idea is to break down system solving into elementary steps andteach our model to correctly transition from each one of the steps to the next (as opposed to trainingit on the full resolution). The training data contains examples where the system is already resolved(termination condition), and when it is the case, the model recognizes that there is nothing left to doand returns The system is already simplified .The detailed construction of the corresponding data from steps i→i+ 1of the Gaussian eliminationis provided in section B.2. We considered systems from 1 to 5 variables. For each step transitioni→i+ 1, we report a score of 1 if the model predicted string matches exactly the ground truth, and 0otherwise. We obtain an average score of 88.5±0.5%. The latter was computed over 100 examplesand 2 random seeds. We provide a detailed example of system resolution in figure 5.D.3 Experiments on All Mathematical RulesWe fine-tune our model on all the mathematical rules detailed in section B. To evaluate the models,we considered eight different configurations:•4 train configurations : We use the same hyperparameters as the training dataset: 3 to 5variables per instance. We consider two vocabulary settings:–Full vocabulary : The full Llama-2 vocabulary. Each term is a concatenation of 1 to 3Llama-2 tokens.*With integers: 10% of the variables in an expression are integers, while the rest area concatenation of Llama-2 tokens.*Without integers: All the variables are a concatenation of Llama-2 tokens.–Restricted vocabulary : The vocabulary consists of Latin and Greek letters, one tokenper variable, and no integers allowed.*With digits: The vocabulary consists of Latin and Greek letters and a concatenationof these letters with numbers from 0to9(e.g., α0).*Without digits: The vocabulary consists of only Latin and Greek letters without anynumbers.•4 test configurations : We increase the number of variables to 6 to 7 variables per instance.We consider two vocabulary settings:–Full vocabulary : The full Llama-2 vocabulary. Each term is a concatenation of 3 to 5Llama-2 tokens.*With integers: 10% of the variables in an expression are integers, while the rest area concatenation of Llama-2 tokens.*Without integers: All the variables are a concatenation of Llama-2 tokens.– Restricted vocabulary : Same as the training configurationThe goal of excluding integers in some configurations is to ensure robust evaluation of symbolicreasoning, avoiding cases where models might output correct numerical results without demonstratingan understanding of the underlying properties. For example, in evaluating the division property(1 + 5) /3 = 1 /3 + 5 /3, we expect models to show understanding of this property rather than simplycomputing numerical expressions. On the other hand, the restricted vocabulary relying on the Latinand Greek alphabetical letters ensures a fair comparison to baseline models likely unfamiliar withunusual tokens in Llama 2’s tokenizer. The baseline models considered are the instruct versions ofLlemma Azerbayeva et al. (2024), Llama-2, Llama-3, Mistral, and WizardLM Luo et al. (2023). Eachmodel generated responses with a max_new_tokens set to 512. This token limit was determined tobe sufficient for generating correct answers based on tests with multiple lengths (up to 1500 tokens),20taking into consideration the verbose nature of these models. Each mathematical rule was evaluatedon 100 examples over 2 seeds. The results are presented in tables 5, 6, 7 and 8.Our fine-tuned Llama-2 model consistently outperforms other models across all considered mathemat-ical rules, irrespective of the configuration. Notably, our model demonstrates strong generalizationcapabilities across various configurations. Baseline models demonstrate moderate performance on therestricted vocabulary and significantly underperform on the full vocabulary, which is expected due totheir limited grasp of what a mathematical variable is. In contrast, our model maintains consistentlyhigh scores across the configurations. Additionally, we conduct a safety check by verifying stringequality as a lower-bound metric to ensure the accuracy of our metrics. This validation process isdemonstrated in Table 13.Given the lengthy nature of the prompts in distributivity and two equations tasks, we decided toadd another test configuration with the full vocabulary with integers in which we only increase thenumber of variables taking it from 6 to 7, while keeping the variables in the same shape as the onesin the training (a concatenation of 1 to 3 tokens). We only test this configuration on the two skills ofdistributivity and two equations, and the results are presented in table 14.Table 5: Train configuration. Full vocabulary with integers.Model Distributivity Commutativity DivisionExponent-iationVariableEvaluationRemarkableIdentitiesSingleEquationTwoEquationsLlama 2 7B Chat fine-tuned (all) 97.0±2.0 99 .0±1.0 99 .0±0.0 99 .5±0.5 96 .5±2.5 98 .0±2.0 98 .5±0.5 99 .0±0.0Llama-3 8B Instruct 0.5±0.5 9 .0±0.0 2 .5±0.5 52 .0±3.0 31 .0±2.0 24 .0±7.0 0 .0±0.0 0 .0±0.0WizardMath 7B 9.5±0.5 12 .5±0.5 6 .5±1.5 65 .0±1.0 26 .5±5.5 36 .5±1.5 0 .0±0.0 2 .0±1.0Mistral 7B Instruct 1.5±0.5 11 .4±3.4 5 .0±1.0 51 .0±5.9 8 .0±4.0 9 .0±3.0 0 .5±0.5 0 .0±0.0Llama-2 7B chat 0.0±0.0 4 .5±1.5 2 .5±1.5 24 .0±2.0 2 .0±1.0 4 .5±1.0 0 .0±0.0 0 .5±0.5llemma 7B 0.0±0.0 0 .5±0.5 0 .0±0.0 25 .0±0.0 5 .5±3.4 0 .5±0.5 0 .0±0.0 0 .0±0.0Table 6: Test configuration. Full vocabulary with integers.Model Distributivity Commutativity DivisionExponent-iationVariableEvaluationRemarkableIdentitiesSingleEquationTwoEquationsLlama 2 7B Chat fine-tuned (all) 24.0±3.0 97 .5±2.5 98 .5±0.5 100 .0±0.0 98 .5±0.5 99 .0±1.0 94 .5±3.5 73 .5±4.5Llama-3 8B Instruct 0.0±0.0 1 .0±1.0 0 .5±0.5 32 .5±5.5 8 .5±1.5 14 .5±0.5 0 .0±0.0 20 .0±2.0WizardMath 7B 0.0±0.0 1 .0±1.0 6 .5±0.5 55 .5±1.5 7 .5±1.5 26 .5±7.5 0 .0±0.0 21 .0±2.0Mistral 7B Instruct 0.0±0.0 0 .0±0.0 4 .0±0.0 45 .0±2.5 0 .5±0.5 4 .5±1.5 0 .0±0.0 17 .0±1.0Llama-2 7B chat 0.0±0.0 0 .0±0.0 0 .0±0.0 19 .5±4.5 0 .0±0.0 1 .5±0.5 0 .0±0.0 38 .0±3.0llemma 7B 0.0±0.0 2 .0±1.0 1 .5±1.5 44 .0±0.0 4 .0±2.0 7 .5±0.5 0 .0±0.0 7 .5±0.5Table 7: Train configuration. Restricted vocabulary with digits.Model Distributivity Commutativity DivisionExponent-iationVariableEvaluationRemarkableIdentitiesSingleEquationTwoEquationsLlama 2 7B Chat fine-tuned (all) 89.0±4.0 98 .0±1.0 100 .0±0.0 100 .0±0.0 98 .5±0.5 100 .0±0.0 99 .5±0.5 98 .5±0.5Llama-3 8B Instruct 30.0±1.9 44 .0±4.9 16 .5±2.4 60 .5±1.5 91 .0±1.0 36 .0±1.9 1 .0±0.0 13 .5±1.5WizardMath 7B 45.5±3.5 38 .5±1.5 19 .5±5.5 58 .5±2.5 63 .5±4.5 46 .0±3.0 0 .0±0.0 18 .5±4.5Mistral 7B Instruct 26.5±2.5 36 .5±1.5 22 .0±0.0 42 .0±3.0 56 .0±4.0 15 .5±4.5 1 .0±0.0 13 .0±2.0Llama-2 7B chat 3.0±0.0 15 .0±0.0 5 .5±0.5 37 .5±2.5 28 .0±1.9 12 .0±6.0 0 .0±0.0 15 .5±1.5llemma 7B 0.0±0.0 2 .5±0.5 1 .5±1.5 50 .0±5.0 4 .0±2.0 3 .0±0.0 0 .0±0.0 18 .5±0.521Table 8: Test configuration. Restricted vocabulary with digits.Model Distributivity Commutativity DivisionExponent-iationVariableEvaluationRemarkableIdentitiesSingleEquationTwoEquationsLlama 2 7B Chat fine-tuned (all) 81.5±0.5 89 .0±2.0 100 .0±0.0 100 .0±0.0 92 .0±2.0 100 .0±0.0 100 .0±0.0 96 .5±1.5Llama-3 8B Instruct 15.0±4.0 29 .5±3.5 16 .5±2.5 60 .5±1.5 79 .0±4.0 35 .5±2.5 0 .0±0.0 3 .0±1.0WizardMath 7B 4.0±0.0 27 .0±4.0 19 .5±5.5 58 .5±2.5 52 .5±2.5 46 .0±3.0 0 .5±0.5 2 .5±0.5Mistral 7B Instruct 2.5±1.5 18 .5±0.5 22 .0±0.0 42 .5±2.5 52 .0±4.0 15 .5±4.5 1 .0±1.0 1 .5±0.5Llama-2 7B chat 0.0±0.0 4 .5±0.5 5 .5±0.5 37 .5±2.5 15 .5±1.5 12 .0±6.0 0 .0±0.0 1 .0±1.0llemma 7B 0.0±0.0 1 .0±1.0 1 .5±1.5 36 .5±2.0 3 .0±1.0 3 .0±0.0 0 .0±0.0 5 .0±1.0Table 9: Train configuration. Restricted vocabulary without digits.Model Distributivity Commutativity DivisionExponent-iationVariableEvaluationRemarkableIdentitiesSingleEquationTwoEquationsLlama 2 7B Chat fine-tuned (all) 96.0±2.0 99 .5±0.5 100 .0±0.0 100 .0±0.0 95 .5±1.5 100 .0±0.0 98 .5±1.5 97 .5±0.5Llama-3 8B Instruct 36.0±6.0 43 .5±1.5 21 .5±0.5 79 .5±2.5 88 .0±2.0 45 .0±1.0 2 .5±0.5 24 .5±5.5WizardMath 7B 48.5±3.5 47 .0±3.0 24 .5±0.5 85 .0±3.0 68 .5±0.5 72 .0±0.0 2 .5±0.5 22 .5±3.5Mistral 7B Instruct 28.0±3.0 31 .5±6.5 24 .5±2.5 70 .5±0.5 54 .0±5.0 41 .0±7.0 1 .0±1.0 19 .5±2.5Llama-2 7B chat 7.0±0.0 14 .5±0.5 7 .0±1.0 45 .0±4.0 31 .0±4.0 1 .0±1.0 0 .0±0.0 37 .5±1.5llemma 7B 2.5±0.5 3 .5±0.5 1 .5±1.5 44 .0±0.0 9 .5±1.5 7 .5±0.5 0 .5±0.5 16 .0±3.0Table 10: Test configuration. Restricted vocabulary without digits.Model Distributivity Commutativity DivisionExponent-iationVariableEvaluationRemarkableIdentitiesSingleEquationTwoEquationsLlama 2 7B Chat fine-tuned (all) 75.0±2.0 94 .5±0.5 100 .0±0.0 100 .0±0.0 88 .0±1.0 100 .0±0.0 97 .0±1.0 95 .5±1.5Llama-3 8B Instruct 6.5±0.5 25 .5±4.5 21 .5±0.5 79 .5±2.5 27 .5±0.5 45 .0±1.0 2 .0±1.0 7 .5±1.5WizardMath 7B 4.0±1.0 28 .0±2.0 24 .5±0.5 85 .0±3.0 49 .0±1.0 72 .0±0.0 1 .0±0.0 8 .0±1.0Mistral 7B Instruct 7.0±2.0 19 .5±1.5 24 .5±2.5 71 .0±1.0 42 .0±1.0 41 .0±7.0 1 .5±0.5 4 .5±0.5Llama-2 7B chat 0.0±0.0 7 .0±0.0 7 .0±1.0 45 .0±4.0 22 .0±0.0 1 .0±1.0 0 .0±0.0 20 .0±2.0llemma 7B 0.0±0.0 2 .0±1.0 1 .5±1.5 44 .0±0.0 4 .0±2.0 7 .5±0.5 0 .0±0.0 7 .5±0.5Table 11: Train configuration. Full vocabulary without integers.Model Distributivity Commutativity DivisionExponent-iationVariableEvaluationRemarkableIdentitiesSingleEquationTwoEquationsLlama 2 7B Chat fine-tuned (all) 98.0±1.0 99 .0±1.0 99 .0±0.0 99 .5±0.5 99 .0±1.0 98 .0±2.0 99 .0±1.0 99 .5±0.5Llama-3 8B Instruct 0.5±0.5 9 .5±0.5 2 .0±0.0 49 .0±0.0 35 .0±6.0 21 .5±4.5 0 .0±0.0 0 .0±0.0WizardMath 7B 6.5±0.5 9 .5±0.5 7 .0±2.0 65 .0±0.5 22 .5±4.5 32 .5±0.5 0 .0±0.0 1 .0±0.0Mistral 7B Instruct 3.0±1.0 8 .5±2.5 5 .0±2.0 51 .0±4.0 5 .5±0.5 8 .5±3.5 0 .0±0.0 0 .0±0.0Llama-2 7B chat 0.0±0.0 4 .5±0.5 1 .5±1.5 21 .0±1.0 0 .5±0.5 5 .5±0.5 0 .0±0.0 0 .0±0.0llemma 7B 0.0±0.0 0 .5±0.5 0 .0±0.0 22 .5±1.5 0 .0±0.0 0 .5±0.5 0 .0±0.0 0 .0±0.0Table 12: Test configuration. Full vocabulary without integers.Model Distributivity Commutativity DivisionExponent-iationVariableEvaluationRemarkableIdentitiesSingleEquationTwoEquationsLlama 2 7B Chat fine-tuned (all) 15.0±5.0 96 .0±1.0 99 .0±1.0 100 .0±0.0 96 .0±1.0 99 .0±1.0 94 .5±1.5 65 .5±4.5Llama-3 8B Instruct 0.0±0.0 0 .5±0.5 0 .5±0.5 27 .5±1.5 5 .0±0.0 16 .5±0.5 0 .0±0.0 16 .5±0.5WizardMath 7B 0.0±0.0 1 .0±1.0 3 .0±2.0 56 .0±1.0 5 .5±3.5 19 .5±6.5 0 .0±0.0 23 .5±2.5Mistral 7B Instruct 0.0±0.0 0 .0±0.0 1 .0±0.0 42 .5±3.5 0 .0±0.0 3 .0±2.0 0 .0±0.0 15 .5±2.5Llama-2 7B chat 0.0±0.0 0 .0±0.0 0 .5±0.5 17 .0±2.0 0 .0±0.0 2 .5±1.5 0 .0±0.0 35 .5±2.5llemma 7B 0.0±0.0 0 .0±0.0 0 .0±0.0 17 .0±2.0 0 .0±0.0 0 .0±0.0 0 .0±0.0 19 .5±2.5Table 13: Evaluation results on exact predictions. For each example, we report a score of 1 ifthe model prediction matches exactly the expected answer (i.e., the two strings are equal), and 0otherwise.Configuration Distributivity Commutativity DivisionExponent-iationVariableEvaluationRemarkableIdentitiesSingleEquationTwoEquationsTest, full vocabulary without integers 15.0±5.0 96 .0±1.0 99 .0±1.0 100 .0±0.0 96 .0±1.0 99 .0±1.0 94 .5±1.5 29 .0±1.0Test, full vocabulary with integers 24.0±3.0 97 .5±1.5 98 .5±0.5 100 .0±0.0 98 .5±0.5 99 .0±1.0 94 .0±4.0 31 .0±3.0Train, full vocabulary without integers 99.0±1.0 99 .0±1.0 99 .0±0.0 99 .5±0.5 99 .0±1.0 98 .0±2.0 99 .0±1.0 97 .0±0.0Train, full vocabulary with integers 98.5±1.5 99 .0±1.0 99 .0±0.0 99 .5±0.5 98 .5±0.5 98 .0±2.0 98 .5±0.5 97 .0±1.022Table 14: Test configuration for distributivity and two equations. Full vocabulary with integersTest configuration for distributivity and two equations. Full vocabulary with integersModel Distributivity Two EquationsLlama 2 7B Chat fine-tuned (all) 70.0±7.0 78 .0±1.0Llama-3 8B Instruct 0.0±0.0 24 .5±1.5WizardMath 7B 0.0±0.0 41 .0±2.0Mistral 7B Instruct 0.0±0.0 19 .5±1.5Llama-2 7B chat 0.0±0.0 27 .0±4.0llemma 7B 0.0±0.0 30 .5±0.523D.4 Benchmark PerformanceCatastrophic forgetting is a well-known issue when fine-tuning large language models, where themodel’s performance on the pre-training task decreases after fine-tuning on a new task. We evaluateour models on 3 established benchmarks before and after fine-tuning. The results are presented intable 15. While there is a small performance drop after fine-tuning, the model still performs well onthe general knowledge tasks.Table 15: Evaluation on general knowledge tasks. We evaluate on the MMLU Hendrycks et al. (2020),ARC Clark et al. (2018) and HellaSwag Zellers et al. (2019) datasets for general knowledge andlanguage understanding. The evaluation is performed under the framework of Gao et al. (2023).Model MMLU ARC HellaSwagLlama 2 7B Chat pretrained 46.4 64.1 57.8Llama 2 7B Chat fine-tuned (all operations) 39.7 57.6 53.7Llama 2 7B Chat fine-tuned (distributivity) 37.6 57.9 54.6D.5 Some Word ProblemsHere we discuss cases where our trained model successfully applies the mathematical rules seenduring training in new situations. We see that our model is able to extract "abstract mathematicalskills" and apply them on practical examples, which is striking.We provide in figures 12 and 13 qualitative examples for the distributivity rule. The main takewayis that despite being trained on the distributivity rule only (with a single prompt template), themodel is able to infer the distributivity property on unseen prompts and apply it correctly, whereasthe pre-trained model fails to do so. What is more striking is that these unseen prompts are wordproblems, as opposed to "mathematical" ones seen during training.Figures 15 and 16 present examples of the model’s ability to combine multiple rules it has beentrained on (the allmodel refers to the model trained on all mathematical rules, see section D.3).Figure 16 is particularly striking and shows the benefit of training on multiple rules: the model trainedon distributivity only doesn’t simplify a5∗a2, contrary to the model trained on all rules.Figure 12: The fine-tuned model is able to extract a skill from its training on formal mathematicalrules (distributivity) and apply it to a practical example.Instruction : Bob builds toy cars made of different pieces. Each toy is made of 4 wheels, 1 body and 2*zcar seats (depending on the models). Bob receives a command for N cars. Give an expression representinghow many parts he will need to assemble.Fine-tuned answer : The expression representing how many parts Bob will need to assemble is: N*4wheels+N*1 body+N*2*z car seats.Pre-trained answer : To determine the number of parts needed to assemble N cars, we need to calculatethe total number of parts required for each car and then multiply it by the number of cars requested.The total number of parts required for each car can be calculated as follows: 4 wheels x 1 body x 2 z carseats = 8 parts per car. Therefore, the total number of parts needed to assemble N cars can be expressed as:N x 8 parts per car = 8N. So, Bob will need 8N parts to assemble N cars.24Figure 13: Qualitative example. The fine-tuned model is able to re-use the distributivity rule on aprompt that it hasn’t seen during training to solve the problem.Instruction : Alice has to buy new uniforms for her children. One shirt costs $20, one pair of pants costs$30 and one pair of shoes costs $x. Let N be the number of Alice’s children. Compute the total cost ofthese uniforms.Pre-trained answer : To compute the total cost of the uniforms, we need to know the value of x, whichrepresents the cost of one pair of shoes.Let’s assume that Alice has n children, so the total cost of the uniforms can be calculated as follows:Cost of shirts = n x $20 = $20n Cost of pants = n x $30 = $30n Cost of shoes = xTotal cost = Cost of shirts + Cost of pants + Cost of shoes = $20n + $30n + x = $50nTherefore, the total cost of the uniforms for Alice’s children is $50n.Fine-tuned (distributivity) answer : By the distributivity property:Total cost = 20*N+30*N+x*N.Figure 14: Qualitative example on the distributivity rule.Instruction : Jimmy owns N houses. For each house he has a key for the front door and one for the garagedoor. Compute the total number of keys he has.Pre-trained answer : To compute the total number of keys Jimmy has, we need to multiply the number ofhouses he owns by the number of keys he has for each house.Since Jimmy owns N houses, he has a total of N keys for the front doors and N keys for the garage doors.Therefore, the total number of keys Jimmy has is: N×N=N2. So, Jimmy has N2keys in total.Fine-tuned answer : The total number of keys Jimmy has is 2∗N.Figure 15: Qualitative example. The fine-tuned model is able to combine two skills (distributivityand variable evaluation) without being explicitly trained on this combination.Instruction : Expand (3+x)*(y+5). Then replace x by z in the resulting expression.Pre-trained answer : Sure! To expand the expression (3+x)*(y+5), we need to follow the order of operations(PEMDAS): 1. Parentheses: inside the parentheses we have (3+x) and (y+5). 2. Exponents (none in thiscase). 3. Multiplication: multiply (3+x) and (y+5). So the result is: (3+x)*(y+5) = 3x + 5y + x + 5. Now, ifwe replace x by z, we get: (3z + 5y + z + 5).Fine-tuned (full) answer : By the distributivity property: 3*y+3*5+z*y+z*5.Figure 16: Qualitative example. The fine-tuned model is able to combine two skills (distributivityand exponentiation simplification) without being explicitly trained on this combination.Instruction : Expand the expression (a^5+cat)*(a^2).Fine-tuned (full) answer : By the distributivity property: a^7+cat*a^2.Fine-tuned (distributivity) answer : By the distributivity property: a^5*a^2+cat*a^2.Pre-trained answer : To expand the expression ‘(a^5 + cat)*(a^2)‘, we need to follow the order ofoperations (PEMDAS):1. Parentheses: inside the parentheses we have ‘a^5 + cat‘ 2. Exponents: ‘a^5‘ = ‘a * a * a * a * a * a‘ =‘a^6‘ 3. Multiplication: ‘a^6 * a^2‘ = ‘a * a * a * a * a * a * a‘ = ‘a^7‘So, the expanded expression is ‘a^7‘.25Table 16: Compute resources used for our experiments. We report the number of training examples,the training time (for fine-tuning) and the inference time (for evaluation).Experiment Epochs Training examples Training time Inference timeDistributivity 1 2M 32h 12hAll 2 2,32M 74h 3hLlama-3 8B Instruct - - - 8hWizardMath 7B - - - 8hMistral 7B Instruct - - - 8hllemma 7B - - - 8hD.6 Experimental Details and ComputeBelow are the hyperparameters used during our experiments:• One epoch of training for models trained on distributivity only, two epochs otherwise.•Quantized fine-tuning with low-rank adaption (QLoRA Dettmers et al. (2024)) with a rankof256and a dropout of 0.1• Constant learning rate of 10−5•Additionally following the work of Hayou et al. (2024) we use a learning rate ratio of 16between the matrices AandBof the low rank decomposition• AdamW optimizer quantized with 8 bits• Batch size of 8 or 32 depending on the experimentWe report in table 16 the compute ressources used for our experiments. Each experiment was run on4 Nvidia A10 GPUs with 24GB of video memory on an AWS g5.12xl instance.In table 17 we provide statistics on the number of tokens in our training data, per mathematical rule.Table 17: Number of tokens in the training datasetPer operationAll Distributivity Commutativity DivisionExponent-iationVariableEvaluationRemarkableIdentitiesSingleEquationTwoEquationsSystemSolvingMean 74.7 81.1 45.5 58.0 48.8 49.1 69.9 81.8 131.2 106.7Standard deviation 37.0 34.9 7.1 8.4 14.5 14.6 12.7 25.7 36.3 40.226E EvaluationTo evaluate and compare the performance of our fine-tuned model against baseline models, wedeveloped a streamlined pipeline. The primary objective of this pipeline is to assess the correctnessof answers generated by models in response to mathematical prompts. Our approach heavily relieson SymPy for handling symbolic mathematics. The pipeline is summarized in figure 17. First weprovide an overview of the main elements.Figure 17: Pipeline for evaluationExtract Relevant Mathematical Expressions. Models such as LLaMA, Mistral, WizardLM, andLlemma do not produce outputs in a standardized format. Therefore, isolating mathematical formulasfrom their often noisy outputs is generally challenging. The ambiguity and variability of notation andconventions in mathematical expressions, and the handling of nested parenthesis make it impossibleto write comprehensive regular expression patterns. Therefore, we developed a custom algorithm(Figure 18) that parses strings and identifies expressions involving symbols like +,−,/,∗,=,÷,×,∧. These expressions are then converted to be compatible with SymPy.Adapting to SymPy. In our evaluations, we rely on SymPy for symbolic mathematics. However,the extensive vocabulary of Llama-2 includes some unusual strings that SymPy cannot process. Weaddress this issue by mapping problematic tokens to non-problematic ones.Evaluation metrics. To evaluate an output’s correctness, we create custom metrics for each mathe-matical rule. We perform a common check common across all rules using SymPy, namely that themodel’s output and the ground truth answer are equivalent.E.1 Save Data InfoAt the level of the generated dataset, all useful information for later evaluation is saved. Whengenerating a test dataset, the output should be a dictionary for each instance. The dictionariestypically have the following structure:•dict["prompt"]: The prompt of the instance, used to prompt the different models.Example : Instance = " ∗start Expand this expression: {original_expression}. ∗endBy thedistributivity property: {distributive_expression}."Prompt = " ∗start Expand this expression: {original_expression}. ∗end"•dict["original_expression"]: The original expression of the instance.•dict["answer"]: The answer (distributed, commuted, solved equation, etc.) of the instance.Both the original expression and the answer are important for evaluating the correctness of agiven response.•dict["variables"]: The list of variables of the instance. Knowing these variables facilitatesthe extraction of relevant mathematical expressions from the model’s output.Example : Instance = " ∗start Expand this expression: (a+2)*(b-3). ∗endBy the distributivityproperty: a*b-a*3+2*b-2*3."Variables = {a,2, b,−3}•dict["prompt_type"]: The type of the instance (distributivity, commutativity, single equation,etc.)E.2 Generate Models ResponsesThe baseline models considered are the instruct versions of Llemma, Llama-2, Llama-3, Mistral, andWizardLM. Prompting these models involves several key steps:27Models are loaded using the AutoModelForCausalLM andAutoTokenizer classes from the Hug-ging Face transformers library.Prompts are customized based on the specific requirements of each model. For instance, the promptformat for Llama-2 differs from that of WizardLM, with specific patterns adjusted using regularexpressions provided on the Hugging Face page. For example, the prompt for WizardMath 7B is:"Below is an instruction that describes a task. Write a response that appropriatelycompletes the request.###Instruction: cleaned_prompt### Response:"Prompts are processed in batches to optimize performance and resource utilization. The generatedresponses are then cleaned to remove instruction tokens and other artifacts based on the specificmodel’s requirements, ensuring a consistent format for evaluation.E.3 Extract Relevant Mathematical ExpressionsModels such as LLaMA, Mistral, WizardLM, and Llemma do not produce outputs in a standardizedformat. Therefore, isolating mathematical formulas from their generally noisy outputs is verychallenging. The ambiguity and variability of notation and conventions in mathematical expressions,and the handling of nested parenthesis make it impossible to write comprehensive regex patterns.Therefore, we developed a custom algorithm that parses strings and identifies expressions involvingsymbols like +,−,/,∗,=,÷,×,∧. These expressions are then converted to be compatible withSymPy.Our algorithm works as follows: we start by splitting a string on whitespace and examining eachword and its neighbors. If the previous word ends with a mathematical sign, or if the current wordbegins with one, then the current word is part of a mathematical expression. Otherwise, if the currentword ends with a mathematical sign, or if the next word begins with one, then the current word startsa new expression. Otherwise, we check if the current word contains any mathematical symbols. If itdoes and SymPy can process it, it is considered a standalone expression.There are corner cases, such as extracting is-2+3 from The answer is -2 + 3 . Here, we usedomain knowledge to remove outlier terms like is. Additional processing steps handle issues liketreating punctuations, and fixing the multiplication in expressions like abto become a*busing datasetknowledge.The decision tree 18 gives an overview on the algorithm’s reasoning.Figure 18: Decision Tree for mathematical expressions extraction28E.4 Adapt SymPyIn our evaluations, we rely on SymPy for symbolic mathematics. However, the extensive vocabularyof Llama-2 includes many unusual strings that SymPy cannot process. To address this issue, wecreated a mapping between tokens that SymPy cannot process and tokens that SymPy can handle.This mapping ensures a clean bijection with no overlapping, preserving the structure of the originalexpression while making it readable by SymPy, thus enabling effective evaluation of models usingthe full vocabulary of Llama-2.E.5 Evaluation MetricsWe evaluate the model’s mathematical outputs according to the rule that we’re testing. The initialevaluation involves checking if the model’s output and the ground truth answer are equivalent.To achieve this, we implemented a function using SymPy to determine the equivalence of twomathematical expressions. The process is as follows:1.Symbolic Simplification :• Convert the input expressions into SymPy• Compare the simplified expressions. If they are identical, return True .2.Variable Substitution :•If the simplified expressions are not identical, identify all variables in both expressions.• For a specified number of instances (default is 10), randomly assign numerical valuesto these variables.• Substitute these values into the simplified expressions.• Check if the numerical expressions are equal for each set of random values.• If they are equal in all instances, return True ; otherwise, return False .After verifying that the output and the ground truth are equivalent, we ensure that the output isdifferent from the input to confirm that the model is not merely repeating the input. Depending on thespecific rule, we then apply one of the following evaluation tests:Distributivity: Using SymPy expansion function, we check if the model’s output is equivalent to theground truth.•Example Distributivity:We gave Llama-3 the following prompt: Apply the distributive property of multiplicationover addition to the expression l∗(U+s+Z).The model generated the following response to the task:Using the distributive property of multiplication we have: l∗(U+s+Z) =l∗(U+s)+l∗(Z)The model applied the distributive property partially but did not fully expand the terms. Thecorrect answer (ground truth) is:l∗(U+s+Z) =l∗U+l∗s+l∗ZFirst, we extract the model’s mathematical output from its response: l∗(U+s) +l∗(Z).Both the model’s output and the ground truth are converted into symbolic expressions usingthe SymPy library.Finally, we verify if the model correctly applied the distributive property. SymPy will makethe necessary simplifications, and is invariant to order, ensuring that the two expressions willbe equivalent only if both are fully expanded. We compare the sympified correct expressionwith the sympified model’s output. The expanded form ( l∗U+l∗s+l∗Z) differs fromthe model’s output ( l∗(U+s) +l∗(Z)), indicating that the distributive property was notfully applied. Therefore we label it as False.Factorization: We count the number of terms in the original expression and we check that this numberdecreases in the output of the model while maintaining equivalence with the original expression.29•Example Factorzation:We provided Llama-3 with the following prompt: Factorize the expression ξ∗(−Ω) + κ∗(−ζ) +ζ∗ζ+ (−Ω)∗α.The model generated the following response: The given expression can be factored asξ∗(−Ω) + Ω( −κζ−ζ2−α). However, the model introduced incorrect terms in its output.The correct factorization (ground truth) is:ζ∗(−κ+ζ) + Ω∗(−ξ−α)First, we extract the mathematical expression produced by the model. Both the model’soutput and the ground truth are then converted into symbolic expressions using the SymPylibrary.By using SymPy to check whether the two expressions are equivalent, we determine that themodel’s output is mathematically different than the expected output, confirming the error inthe model’s factorization.Commutativity: We check that the two elements we’re supposed to swap are indeed swapped atleast once in the output expression of the model. We do that by collecting the terms in the model’soutput expression expression in their order and compare this order to the one in the input expression.•Example Commutativity: We provided Mistral 7B with the following prompt: Apply thecommutativity property of + on βandcin the expression β+z+c.The model generated the following response:The commutative property of addition has been applied to βandcin the expressionβ+z+c. The new expression is:β+c+zThe model was supposed to swap the positions of βandc. However, the output provided,β+c+z, does not reflect a proper swap. The correct application of the commutativeproperty would result in the expression c+z+β, where βandchave indeed switchedpositions.To detect this error, we compare the order of the terms in the original expression with thosein the model’s output:– Original Order: β(1st), z(2nd), c(3rd)– Order in model’s output: β(1st), c(2nd), z(3rd)Since the order of βandcis unchanged ( βis still before c) in the model’s output, we labelthis answer as False.Remarkable identities: We check that the number of terms in the output is greater or equal to two.This is a sufficient criterion to say that the model effectively expanded the input expression into atleast two sub-terms.Variable evaluation: Simply checking that the model’s output is equivalent to the ground truth issufficient in this case.Division: We consider three properties of the division:ω1⊙1ω2ω3=ω1ω3⊙1ω2ω3, where ⊙1∈ {+,−};ω1∗ω2ω3∗ω4=ω1ω3∗ω2ω4;ω1ω2ω3=ω1ω2∗ω3.•For the first property, we check that the number of terms according to "+-" is two since weexpect to have two terms summing at the end.•For the second property, we check that the number of terms according to "*" is two since weexpect to have only two terms at the end•For the third property, we count the occurrences of the division character ("/") in the input ,and the model’s output . If the count in the output is less than the initial count in theinput , it means that the model successfully got rid of the additional division sign.30Single equation: We consider two skills. The first consists in computing an affine transformation ofan equation, and the second skill consists in "simplifying" an equation. We wrote a function to checkif two equations are equivalent. It moves all terms to the left-hand side and checks that the left-handsides expressions are equivalent using the same function above.•For the first skill, simply checking if the output equation is equivalent to the ground truthequation is sufficient.•For the second skill, we ensure the equation is in standard form by verifying that the outputequation is equivalent to the input equation, and that its right-hand side is zero. Then, wecount the terms in the ground truth, the input, and the model’s output. If the ground truth hasfewer terms than the input, simplification has occurred. If the model’s output does not showa reduction in terms similar to the ground truth, we return False. Otherwise, we return True.Two equations: We test two skills: determining if two equations are equivalent and performing alinear combination of two equations.•For the first skill, if the equations are equivalent, we look for positive keywords like "Yes"in the model’s output, or "No" if they are not. If these keywords are absent, we check formathematical expressions indicating subtraction of two equations and compare this with theground truth.•For the second skill, we check if the output equation, a combination of the two equations, isequivalent to the ground truth equation.Exponentiation: We consider five properties of the exponentiation: ω0= 1; the definition ofexponentiation ωn=ω∗ ··· ∗ ω|{z}ntimesifnis a positive integer, the reciprocal of the latter if nis a negativeinteger; if n, m are signed integers: ωn∗ωm=ωn+m;ωn1∗ωn2= (ω1∗ω2)n;ωnωm=ωn−m. Weevaluate exponentiation properties as follows:•For the exponentiation definition, we count the multiplication operations (character ’*’) inthe ground truth and the model’s output. If these counts are equal, the model correctly usedmultiplication to express the power.• For the first property, we check if the model’s output has 1 in it, returning True if it’s case.•For the second and fourth property, we check if the occurrences of ∧decreased in themodel’s output, indicating a reduction from two powers to one.•For the third property we check that the number of terms in the model’s output with respectto the operation "*" is equal to two. If it is the case, it means that the model effectivelyunderstood that (ω1∗ω2)n=ωn1∗ωn2Partial distributivity metricA pattern of errors is sometimes observed in the fine-tuned model’s output. For instance in thedistributivity rule, the errors are often related to a sign mistake at the end of the output, or thesubstitution of one token with another that is similar to it. The following example illustrates this case:input:(ζ−ψ+ρ)(−T+R)Model’s output:−ζT+ζR+ψT−ψR−ρT+ρ2Ground truth:−ζT+ζR+ψT−ψR−ρT+ρRGiven this pattern, and to provide a more nuanced evaluation of the model’s performance we decidedto introduce another metric for distributivity. It accounts for minor errors, such as sign mistakes ortoken modifications, while tolerating partial correctness. This metric, named partial distributivity , iscalculated as follows:31• Collect the terms in both the output and the ground truth by splitting the expressions at the+and−operators.• Count how many of these terms in the output are present in the ground truth.• Divide this count by the total number of terms in the output.In the example above for instance we have six terms −ζT,+ζR,+ψT,−ψR,−ρT, and +ρ2.Among these terms, 5 are correct, so instead of counting the whole output expression as plain false =0, we count it as 5/6. The formula for partial distributivity metric given by:2·num_correct_termsnum_terms_ground_truth +num_terms_outputFor instance in the case of the test configuration with a restrcited vocabulary, and without digits weobtain the results in table 18.Table 18: Test configuration Restricted vocabulary without digitsTest configuration Restricted vocabulary without digitsModel Full Distributivity Partial DistributivityLlama 2 7B Chat fine-tuned (all) 75.0±2.0 95 .8±0.6Llama-3 8B Instruct 6.5±0.5 47 .2±0.3WizardMath 7B 4.0±1.0 41 .8±2.9Mistral 7B Instruct 7.0±2.0 38 .8±0.3Llama-2 7B chat 0.0±0.0 7 .4±0.1llemma 7B 0.0±0.0 0 .2±0.2The "full distributivity" column in the table above is the same as the one in Table 10.32 |
rROdzn4DSb | Learning Elementary Cellular Automata withTransformersMikhail BurtsevLondon Institute for Mathematical SciencesRoyal Institution, [email protected] Language Models demonstrate remarkable mathematical capabilities butat the same time struggle with abstract reasoning and planning. In this study,we explore whether Transformers can learn to abstract and generalize the rulesgoverning Elementary Cellular Automata. By training Transformers on statesequences generated with random initial conditions and local rules, we show thatthey can generalize across different Boolean functions of fixed arity, effectivelyabstracting the underlying rules. While the models achieve high accuracy in next-state prediction, their performance declines sharply in multi-step planning taskswithout intermediate context. Our analysis reveals that including future states orrule prediction in the training loss enhances the models’ ability to form internalrepresentations of the rules, leading to improved performance in longer planninghorizons and autoregressive generation. Furthermore, we confirm that increasingthe model’s depth plays a crucial role in extended sequential computations requiredfor complex reasoning tasks. This highlights the potential to improve LLM withinclusion of longer horizons in loss function, as well as incorporating recurrenceand adaptive computation time for dynamic control of model depth.1 IntroductionLarge Language Models (LLMs) have become valuable tools in mathematics, demonstrating impres-sive capabilities in problem-solving and reasoning tasks. Notably, OpenAI’s o1 model achieved aranking among the top 500 students in the US in a qualifier for the USA Math Olympiad (AIME) [ 1].Despite these successes, LLMs still face challenges in reasoning [ 2–6] and planning [ 7], particularlywhen required to infer and apply underlying rules from data.These observations raise a question: Do the limitations of LLMs in reasoning stem from the nature oftheir training data, the procedures employed during training, or inherent architectural constraints?Transformers [ 8], which form the backbone of many modern LLMs, are universal approximators [ 9–12] and theoretically capable of simulating Turing machines using intermediate computational steps,making them Turing complete and, by extension, capable of formal reasoning [ 13–16]. Earlierstudies found that transformers can be trained to perform symbolic integration and solving differentialequations [17], as well as symbolic regression [18, 19].In our study, we continue the line of research exploring the trainability of Transformers for mathemat-ical reasoning tasks by focusing on Elementary Cellular Automata (ECAs). The simplicity and clarityof this "toy" problem make it an ideal testbed for assessing the ability of Transformers to abstract andapply logical rules. By demonstrating that Transformers can learn and generalize Boolean functionsof fixed arity inherent in ECAs, we aim to evaluate their ability to infer, generalize, and apply logicalrules solely from observed data, without relying on memorization.38th Conference on Neural Information Processing Systems (NeurIPS 2024).ARule+Orbit - State (RO-S)011...(state 0)...101[SEP]...011..(state T-1)..101[SEP]????...( mask )...????011...(state T)...10101001...(rule)...01010[SEP]TRANSFORMER01001...(rule)...01010Orbit - State+Rule (O-SR)011...(state 0)...101[SEP]...011..(state T-1)..101[SEP]????...( mask )...????[SEP]????...( mask )...????[SEP]011...(state T)...101TRANSFORMEROrbit - Orbit (O-O)011...(state 0)...101[SEP]...011..(state T-1)..101[SEP]????...( mask )...????????...( mask )...????011...(state T)...101011..(state T+k)..101TRANSFORMEROrbit - State (O-S)011...(state 0)...101[SEP]...011..(state T-1)..101[SEP]????...( mask )...????TRANSFORMER011...(state T)...101...... BFigure 1: Learning Elementary Cellular Automata (ECA) with Transformers. A. Examples oftraining samples. Orbit of ECA is a sequence of binary strings of size W= 20 . First k= 10 statesmarked by red rectangle encode Transformer input. B.Given a part of the orbit Transformer withfull attention learns to predict the next state (O-S), the next few steps (O-O), the next state and a rule(O-SR), or predict the next state given a rule and an orbit (RO-S).2 MethodsAnElementary Cellular Automaton (ECA) is a one-dimensional, dynamical system in which spaceand time are discrete. Let r∈N:r≥1be the neighbourhood radius in space represented by aregular lattice of W∈N:W≥2r+ 1identical, locally-interconnected cells with a binary statespace, S={0,1}. The ECA’s global state ,x∈SW, is a lattice configuration specified by thevalues of all the states of all cells in the lattice at a given time. This state evolves deterministically insynchronous, discrete time steps according to a global map gρ:SW→SWdefined by a local ruleρ:S2r+1→S, so[gρ(x)]w=ρ(xw−r, . . . , x w, . . . , x w+r). The sequence of states an ECA passesthrough during its space–time evolution ,OT(x) = [x, gρ(x), gρ(gρ(x)), . . . , goT−1ρ (x)], defines itstrajectory ororbit from an initial condition (configuration) xforT∈N:T≥1. Examples of ECAorbits are visualized on the figure 1A.Let consider four modifications of learning tasks designed to evaluate different aspects of predictivemodeling and rule inference in ECAs (see Fig. 1B).Orbit-State (O-S), given an orbit OT(x) = [x(0), x(1), . . . , x(T−1)]where x(0)∈SW, the objectiveis to predict the state x(T)at time T.Orbit-Orbit (O-O), given an orbit Ok(x)for some k < T predict the subsequent states up to time T,generating OTk(x) = [x(k), . . . , x(T)].Orbit-State and Rule (O-SR), given an orbit OT(x)the objective is to predict the state x(T)at time Tand the local rule ρ.Rule and Orbit-State (RO-S), given an orbit OT(x)and the local rule ρthe objective is to predict thestatex(T)at time T.Our base model is Transformer encoder with full self-attention. It has 4 layers and 8 heads withdmodel = 512 . The input vocabulary of the model consists of tokens: [0], [1], [SEP], and [M].Thestates and the local rule ρare encoded as binary strings. The model receives the orbit as a sequence ofbits representing consecutive states separated by the [SEP] tokens. For the prediction of future statesor the inference of the local rule, the end section of the input sequence is filled with mask tokens [M]corresponding to the positions of the unknown elements.We generated a dataset with the CellPyLib [ 20]1for fixed lattice size W= 20 and neighborhoodradius r= 2. This configuration results in a total of 222r+1≈4.3×109possible boolean functionsdefining local rules. For each sample in the dataset, both the initial state and the local rule ρweregenerated randomly. We then computed the orbit for T= 20 time steps using these parameters. Thetraining dataset consists of 9.5×105and the test of 105samples. Importantly, the local rules includedin the test set are exclusive and not present in the training set. This separation ensures that the model’s1Dataset and a source code for experiments are available at https://github.com/burtsev/TransformerECA .25 10 15Input orbit length, steps0.750.800.850.900.951.00Average per bit accuracyTminA0 1 2 3Orbit look ahead, steps0.700.750.800.850.900.951.00Average per bit accuracyO-S O-O O-SR RO-S B12345678910Orbit rollout, steps0.700.750.800.850.900.951.00Average per bit accuracyO-S O-O O-SR RO-SC1 2 3 4Orbit rollout, steps0.750.800.850.900.951.00Average per bit accuracyO-S ARO-S LAO-O ARO-O LAO-SR ARO-SR LARO-S ARRO-S LADFigure 2: Transformer learns to predict the next state of ECA but struggles to plan ahead.A.Accuracy of the next state prediction for ECA orbit with size 20 generated by Boolean function of5 arguments for different input state lengths. B.Planning accuracy for different training settings (seethe main text for details). C.Accuracy for autoregressive generation of ECA orbit. D.Comparison ofautoregressive (AR) and look ahead (LA) predictions.performance reflects its ability to generalize to unseen rules, rather than simply memorizing thetraining data. To measure the quality of the model’s predictions, we use per-bit accuracy averagedover free runs, which calculates the proportion of bits correctly predicted in the output sequences.3 Results and DiscussionWe first assess whether the samples in our dataset provide sufficient information for the model to learnthe underlying dynamics successfully. The minimal number of time steps Tminneeded to recover thelocal rule ρfrom the observed orbit can be estimated analogous to the coupon collector’s problemTmin= 22r+1(ln 22r+1+γ)/W≈6.47, where γis the Euler-Mascheroni constant ( γ≈0.5772 ).Training of the model for the next state prediction (O-S task) confirms this theoretical estimation byshowing that the accuracy plateaus after 8 time steps (Fig. 2A). Therefore, we choose to use inputorbits of 10 steps O10as our main setting.Successful learning of the O-S task demonstrate that the Transformer model is capable of generalizingnot only over initial conditions for a particular function — commonly the focus in studies oftransformer trainability in CA domain [ 21–35] — but also across different Boolean functions of fixedarity (5 in our case). This indicates that the model has learned to abstract a class of rules.Next, we investigated the Transformer’s ability to plan ahead by predicting future states beyondthe immediate next state. Specifically, we trained the model to predict the state at time x(T+k)forlook-ahead steps k∈ {1,2,3}. As presented in Figure 2B, this task proved to be significantly morechallenging. While the average accuracy for next-state prediction (O-S task) was 0.96, it dropped to0.80 for k= 1and fell below 0.75 for k= 2andk= 3.To determine whether this decline was due to the Transformer’s architecture or the training objective,we explored whether accuracy could be improved by training the model to predict intermediate steps.This approach is analogous to the "chain-of-thought" method [ 36] used in large language models for3in-context learning. We employed the Orbit-Orbit (O-O) task, training the model to predict the nextfour states in parallel. The results, also shown in Figure 2B, indicate that the model retains predictiveabilities when skipping one step but struggles with skipping two or three steps.Moreover, if we hypothesize that during training the Transformer learns to generate an internalrepresentation of the local rule ρand then applies it to predict the next state, augmenting the trainingtask with explicit rule prediction might help the model form this internal rule representation moreeffectively. To test this hypothesis, we employed the Orbit-State and Rule (O-SR) training forplanning ahead without intermediate steps. For next-state prediction, the O-SR model achievedperformance comparable to the O-O model, indicating that predicting future states and inferring therule have similar effects on the model’s learning process. As presented in Figure 2B, for k= 1, theO-SR model attained an accuracy of 0.91 compared to 0.95 for the O-O model. However, for k= 2andk= 3, the O-SR model outperformed the O-O model with accuracies of approximately 0.85versus 0.75, respectively.These results suggest that learning to store a hidden representation of intermediate states (as inthe O-O training) is easier for the model but harder to generalize over multiple time steps. Incontrast, developing a hidden representation of the underlying rule (as in the O-SR training) is morechallenging initially but facilitates better generalization to longer planning horizons. This impliesthat explicitly encouraging the model to infer the generating rule can enhance its ability to makelonger-term predictions by reinforcing the internalization of the system’s dynamics.Finally, we explored the scenario where the local rule ρis explicitly provided to the model, corre-sponding to the Rule and Orbit-State (RO-S) task. Intuitively, this should be the easiest task for theTransformer, as it eliminates the need to infer the rule from the observed data. As shown in Figure 2B,the Transformer indeed learns to apply the given rule for next-state prediction with near-perfectaccuracy for k= 0andk= 1. Surprisingly, however, the performance for look-ahead steps k= 2and3drops to approximately 0.75, similar to the O-O training scenario where the rule is not provided.This unexpected decline hints that even with explicit access to the rule, the model struggles to predictfuture states beyond the two next steps. We hypothesize that this limitation arises from difficulties inlearning effective hidden representations of the intermediate states required for multi-step predictions.Despite having the rule, the Transformer may not adequately capture and propagate the necessarystate information over multiple time steps without explicit intermediate context.1234567891011Number of layers0.750.800.850.900.951.00Average per bit accuracyStep 1Step 2Step 3Step 4Figure 3: Adding layers improvesprediction of ECA orbit. Accuracyof O-O training for different numberof layers.Additionally, we evaluated the performance of the four models— trained under the O-S, O-O, O-SR, and RO-S tasks — whenused to generate continuations of the input orbit O10up toO20by predicting each subsequent state autoregressively (seeFigure 2C). As expected, the success of these models in thistask correlates with their next-token prediction accuracy (referto the first group of bars in Figure 2B). The RO-S modelexhibits the best performance, followed closely by the O-Oand O-SR models, while the O-S model shows significantlyweaker performance.When comparing autoregressive generation (AR) for rolloutsof 2, 3, and 4 steps to planning performance for the samelook-ahead steps (LA) (Figure 2D), we observe that all mod-els perform better in the autoregressive inference mode. Thissuggests that the models are more adept at short-term state-by-state prediction than at planning multiple steps ahead withoutintermediate context. The superior performance in the autore-gressive mode highlights the models’ reliance on immediatepast states to inform their predictions.Our planning results (Figure 2B) demonstrate a sharp declinein performance across all training methods starting from alook-ahead of 2 steps. This decline may be due to the Transformer’s inability to store more thanone ECA state in its hidden representations or because the task requires sequential application ofthe local rule equal to the planning horizon. The O-O training method explicitly provides memory4for intermediate states through tokens and associated hidden vectors, effectively ruling out the firstexplanation.The alternative explanation points to the limitations of sequential computation within the Transformerarchitecture, which is bounded by the number of layers. To test this, we examined how the abilityto predict future states scales with network depth. Figure 3A shows that the O-O model beginsto accurately predict the next state with 2 layers. Predicting two steps ahead requires at least 4layers, three steps necessitate 7 layers, and four steps demand 10 layers. These results suggest thateach additional planning step requires more computational layers, highlighting a direct relationshipbetween the Transformer’s depth and its capacity to model longer sequences of ECA dynamics.4 ConclusionsIn this study, we have demonstrated that Transformer models possess the ability to learn and generalizethe underlying dynamics of Elementary Cellular Automata. By designing specific tasks and trainingregimes, we showed that Transformers can abstract the governing rules and predict future states withnotable accuracy. However, their performance diminishes when required to plan multiple steps aheadwithout intermediate context, highlighting limitations in storing and propagating state informationover longer sequences. Our analysis reveals that including future states or rule prediction in thetraining loss improves the performance of next-state prediction and, as a result, enhances the overallperformance in autoregressive generation. This finding might guide training strategies for LLMsto improve their perplexity and reasoning capabilities. We also confirmed that the model’s depthplays an important role in extended sequential computations required for complex reasoning; thus,recurrence and adaptive computation time are promising directions for dynamic control of the modeldepth. These insights contribute to a deeper understanding of how neural networks can abstract rulesand point toward future research directions in improving their planning and generalization skills.AcknowledgmentsComputational grant for this work was provided by Nebius AI Cloud.References[1]OpenAI. Learning to reason with llms. https://openai.com/index/learning-to-reason-with-llms/ , 2024. Accessed: 2024-09-23.[2]Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, SeanWelleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, et al. Faith and fate: Limits oftransformers on compositionality. Advances in Neural Information Processing Systems , 36,2024.[3]Yuxuan Wan, Wenxuan Wang, Yiliu Yang, Youliang Yuan, Jen-tse Huang, Pinjia He, WenxiangJiao, and Michael R Lyu. A & b== b & a: Triggering logical reasoning failures in large languagemodels. arXiv preprint arXiv:2401.00757 , 2024.[4]Wesley H Holliday and Matthew Mandelkern. Conditional and modal reasoning in largelanguage models. arXiv preprint arXiv:2401.17169 , 2024.[5]João Pedro Gandarela, Danilo S Carvalho, and André Freitas. Inductive learning of logicaltheories with llms: A complexity-graded analysis. arXiv preprint arXiv:2408.16779 , 2024.[6]Philipp Mondorf and Barbara Plank. Liar, liar, logical mire: A benchmark for suppositionalreasoning in large language models. arXiv preprint arXiv:2406.12546 , 2024.[7]Karthik Valmeekam, Kaya Stechly, and Subbarao Kambhampati. Llms still can’t plan; canlrms? a preliminary evaluation of openai’s o1 on planbench, 2024.[8]Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,Łukasz Kaiser, and Illia Polosukhin. Attention is All you Need. In Advances in neuralinformation processing systems , pages 5998–6008, 2017. URL http://papers.nips.cc/paper/7181-attention-is-all-you-need .5[9]George Cybenko. Approximations by superpositions of a sigmoidal function. Mathematics ofControl, Signals and Systems , 2:183–192, 1989.[10] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks areuniversal approximators. Neural networks , 2(5):359–366, 1989.[11] Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank J Reddi, and Sanjiv Kumar.Are transformers universal approximators of sequence-to-sequence functions? arXiv preprintarXiv:1912.10077 , 2019.[12] Clayton Sanford, Daniel J Hsu, and Matus Telgarsky. Representational strengths and limitationsof transformers. Advances in Neural Information Processing Systems , 36, 2024.[13] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Uni-versal transformers. In International Conference on Learning Representations , 2019. URLhttps://openreview.net/forum?id=HyzdRiR9Y7 .[14] Satwik Bhattamishra, Arkil Patel, and Navin Goyal. On the computational power of transformersand its implications in sequence modeling. arXiv preprint arXiv:2006.09286 , 2020.[15] Jorge Pérez, Pablo Barceló, and Javier Marinkovic. Attention is turing-complete. Journal ofMachine Learning Research , 22(75):1–35, 2021.[16] Lena Strobl, William Merrill, Gail Weiss, David Chiang, and Dana Angluin. Transformers asrecognizers of formal languages: A survey on expressivity. arXiv preprint arXiv:2311.00208 ,2023.[17] Guillaume Lample and François Charton. Deep learning for symbolic mathematics. arXivpreprint arXiv:1912.01412 , 2019.[18] Pierre-Alexandre Kamienny, Stéphane d’Ascoli, Guillaume Lample, and François Charton.End-to-end symbolic regression with transformers. Advances in Neural Information ProcessingSystems , 35:10269–10281, 2022.[19] Stéphane d’Ascoli, Pierre-Alexandre Kamienny, Guillaume Lample, and François Charton.Deep symbolic regression for recurrent sequences. arXiv preprint arXiv:2201.04600 , 2022.[20] Luis M. Antunes. Cellpylib: A python library for working with cellular automata. Journal ofOpen Source Software , 6(67):3608, 2021. doi: 10.21105/joss.03608. URL https://doi.org/10.21105/joss.03608 .[21] N Wulff and J A Hertz. Learning cellular automaton dynamics with neural networks. Advancesin Neural Information Processing Systems , 5, 1992.[22] William Gilpin. Cellular automata as convolutional neural networks. Physical Review E , 100(3):032402, 2019.[23] Marcel Aach, Jens Henrik Goebbert, and Jenia Jitsev. Generalization over different cellularautomata rules learned by a deep feed-forward neural network, 2021.[24] Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson, and Michael Levin. Growingneural cellular automata. Distill , 5(2):e23, 2020.[25] Ritu Pande and Daniele Grattarola. Hierarchical neural cellular automata. In Artificial LifeConference Proceedings 35 , volume 2023, page 20. MIT Press One Rogers Street, Cambridge,MA 02142-1209, USA journals-info . . . , 2023.[26] Yani Zhang and Helmut Bölcskei. Cellular automata, many-valued logic, and deep neuralnetworks. arXiv preprint arXiv:2404.05259 , 2024.[27] Gennaro Gala, Daniele Grattarola, and Erik Quaeghebeur. E (n)-equivariant graph neuralcellular automata. arXiv preprint arXiv:2301.10497 , 2023.[28] Mattie Tesfaldet, Derek Nowrouzezahrai, and Chris Pal. Attention-based neural cellular au-tomata. Advances in Neural Information Processing Systems , 35:8174–8186, 2022.6[29] Daniele Grattarola, Lorenzo Livi, and Cesare Alippi. Learning graph cellular automata. Ad-vances in Neural Information Processing Systems , 34:20983–20994, 2021.[30] Alex D Richardson, Tibor Antal, Richard A Blythe, and Linus J Schumacher. Learning spatio-temporal patterns with neural cellular automata. PLOS Computational Biology , 20(4):e1011589,2024.[31] Beomseok Kang, Harshit Kumar, Minah Lee, Biswadeep Chakraborty, and Saibal Mukhopad-hyay. Learning locally interacting discrete dynamical systems: Towards data-efficient andscalable prediction. In Alessandro Abate, Mark Cannon, Kostas Margellos, and Antonis Pa-pachristodoulou, editors, Proceedings of the 6th Annual Learning for Dynamics amp; ControlConference , volume 242 of Proceedings of Machine Learning Research , pages 1357–1369.PMLR, 15–17 Jul 2024. URL https://proceedings.mlr.press/v242/kang24a.html .[32] Jacob M Springer and Garrett T Kenyon. It’s hard for neural networks to learn the game of life.In2021 International Joint Conference on Neural Networks (IJCNN) , pages 1–8. IEEE, 2021.[33] Anton Bibin and Anton Dereventsov. Data-centric approach to constrained machine learning: Acase study on conway’s game of life. arXiv preprint arXiv:2408.12778 , 2024.[34] Veit Elser. Reconstructing cellular automata rules from observations at nonconsecutive times.Physical Review E , 104(3):034301, 2021.[35] Jaime A Berkovich and Markus J Buehler. Lifegpt: Topology-agnostic generative pretrainedtransformer model for cellular automata. arXiv preprint arXiv:2409.12182 , 2024.[36] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le,Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.Advances in neural information processing systems , 35:24824–24837, 2022.7 |
nFU4xCyoe0 | Decomposing Complex Visual Comprehension intoAtomic Visual Skills for Vision Language ModelsHyunsik Chae†, Seungwoo Yoon†, Chloe Yewon Chun†,Gyehun Go†, Yongin Cho†, Gyeongmin Lee†, Ernest K. Ryu⋆†Seoul National University,⋆UCLA, Department of Mathematicshttps://github.com/Atomic-Visual-Skills/AVSAbstractRecent Vision Language Models (VLMs) have demonstrated impressive multi-modal comprehension and reasoning capabilities, but they often struggle withtrivially simple visual tasks. In this work, we introduce the Atomic Visual SkillsBenchmark (A VSBench) to evaluate whether VLMs possess capabilities to under-stand basic geometric features, which we refer to as atomic visual skills. Specif-ically, we systematically categorize the atomic visual skills and handcraft a setof 5,073 diverse questions designed to assess each individual atomic visual skill.Using A VSBench, we evaluate the current leading VLMs and find that they strugglewith most of these atomic visual skills that are obvious to humans.1 IntroductionRecent Vision Language Models (VLMs), also referred to more generally as Multimodal LargeLanguage Models (MLLM), integrate vision components into language models and demonstratean impressive breadth of multimodal comprehension and reasoning capabilities [ 7]. At the sametime, however, VLMs often struggle with trivially easy visual tasks as shown in Figure 1, a puzzlingphenomenon that seems almost contradictory to their remarkable performance [ 11,43]. We proposetwo hypotheses to explain the observed shortcomings of current vision-language models:Hypothesis 1: The comprehension of complex visual diagrams requires the composition of smalleratomic visual skills.Hypothesis 2: Current vision language models are incapable of such atomic visual skills.In this work, we introduce the Atomic Visual Skills Benchmark (A VSBench) to test Hypothesis 2 .A VSBench is designed to rigorously evaluate VLMs’ ability to comprehend fundamental geometricfeatures, which we refer to as atomic visual skills . We systematically categorize 36 atomic visualskills that encompass diagrams arising in high school-level geometry and handcraft a set of 5,073diverse questions designed to assess the understanding of the individual atomic visual skills.We then evaluate the state-of-the-art VLMs on A VSBench, and the results clearly support Hypothesis 2 .While our problems are designed to be trivial to humans, VLMs struggle; state-of-the-art models likeGemini-1.5-pro and GPT-4o score around 70%-75% on problems with the “easy” categorization,score around 60% on the “medium” problems, and 30% on “hard” problems. The confirmation ofHypothesis 2 also lends support to Hypothesis 1 , and suggests a promising direction of future work oftraining vision language models specifically on atomic visual skills to improve their performance incomprehending complex visual diagrams.Question : In the image, there is a slice of yellow cheese with holes. Which hole forms a perfect circle in the cheese? Choose from: v, w, x, y, z.Ground Truth : xGPT-4o: The hole labeled \"y\" forms a perfect circle inside the cheese.Skill: InteriorQuestion : How many times does the red line segment intersect with the black line segments in the image?Ground Truth : 1GPT-4o: The red line intersects with the black lines 9 times in the image.Skill: IntersectionQuestion : In the image, inside a circle there are arrows labelled as P, Q, X, U, V, W, Z, and Y. Only one of them is pointing inward, while others are pointing outwards. Guess the arrow pointing inward. Ground Truth : QGPT-4o: The arrow labeled \"W\" is pointing inward, while the others are pointing outward.Skill : DirectionQuestion : In the image, there are different shapes labelled as: a, b, c, x, y, and z. Guess the only shape that looks different from others.Ground Truth : bGPT-4o: The shape labeled \"z\" is different from the others. While all the other shapes (a, b, c, x, y) are oriented horizontally, the shape labeled \"z\" is oriented vertically.Skill: CongruenceFigure 1: Examples of A VSBench problems and responses by GPT-4o. Other state-of-the-art modelsexhibit similar failures. These examples demonstrate a deficiency in the VLMs’ understanding ofbasic geometric concepts.2 Atomic Visual Skills Benchmark (A VSBench)Many visual reasoning tasks in existing benchmarks, such as the ones listed in Section A, arecomposite tasks that can be broken down into more elementary components. This observation leadsus to define a set of atomic visual skills based on the following criteria: (i) each skill is intuitive andtrivial for adult humans, (ii) each skill cannot be decomposed further, or doing so would be unnatural,and (iii) the list of atomic visual skills should comprehensively cover the abilities required forcomprehending geometric diagrams arising in high-school level mathematics. While this definition isnot a fully rigorous one, we found it to be sufficiently clear and substantive for our work.Using these criteria, we identified 36 atomic visual skills, including the ability to understand conceptssuch as angle ,boundary ,orthogonality ,curvature , and direction . The complete list andfurther illustrations are provided in D.For adult humans, these skills are trivially simple and require little to no reasoning to perform.Therefore, we use the term comprehension instead of reasoning to emphasize our belief that theseskills do not require much reasoning or thinking to perform, for both humans and VLMs. This beliefis partially supported by Findings 3 of Section 3.1.We then constructed the Atomic Visual Skills Benchmark (A VSBench) to evaluate VLMs’ ability toperform the 36 atomic visual skills. A VSBench, as summarized in Figure 2, comprises 5,073 newhandcrafted image-question-answer triplets with the following characteristics:Abs.Pos.Shape.Under.OCRColorInteriorCard.Dir.AreaPoint.Under.BoundaryConvexityLine.Under.SymbolLengthSharpnessCardinalRel.Pos.Obj.OverlapTextureOrdinalCurvatureAdjacencyDirectionWidthCoord.Under.Intersect.Orient.Connect.Rot.Sym.AngleOrtho.CongruenceSimilarityReflectionTangencyParallelRotationEasyMediumHardFigure 2: List of 36 atomic visual skills and the number of easy, medium, and hard problems for eachskill. The difficulty is judged by the authors. We provide a total of 5,073 new handcrafted problems.2Perfect AccuracyGPT-4o +CoTGPT-4oGemini-1.5-Pro +CoTGemini-1.5-Pro LLaVA-OV (7B)Phi-3.5 (4B)DeepSeek (7B)InternVL (8B) LLaVA (13B)Math-LLaVA (13B)LLaVA (7B)Table-LLaVA (7B) Random ChanceVLMs020406080100Average Accuracy (%)100.063.6 62.658.5 58.440.137.834.3 33.431.929.627.725.919.3EasyMediumHardFigure 3: Evaluation results on A VSBench. +CoT implies the performance of the model on the rightwith chain-of-thought (CoT) prompting [ 22]. The area ratios of each colored section are aligned withthe actual ratio of problem counts. Details about the models including their full name are in E. Fullquantitative results are illustrated in Table 5.•Originality. All images and questions are newly generated, ensuring that they are free from datacontamination concerns.•Diversity. Although we focus on the set of only 36 skills, the problems feature diverse expressionsand formats, as illustrated by the sample problems in C.•Skill isolation. Each question targets a specific atomic skill, minimizing the overlap with otherskills. Recognizing the impossibility of achieving complete isolation, our method incorporatesdiverse tasks to mitigate the influence of any task or their relevant overlapping skills. For instance,to minimize the influence of other skills while evaluating the cardinal skill, we asked aboutcardinals of various concepts and objects, from colors to points, lines, and other figures.•Focus on high school geometry. We focus on the visual skills required to solve high school-level geometry problems for the following reasons: (i) the scope of the high school mathematicscurriculum is more or less clearly defined, (ii) (as our results of Section 3 show) these atomic visualskills are sufficiently challenging for current VLMs, (iii) the range of skills is broad enough to beapplicable to other visual comprehension tasks, such as interpreting charts, tables, and scientific ormathematical figures.3 Current vision language models struggle with atomic visual skillsWe evaluate three types of VLMs on A VSBench: (i) state-of-the-art proprietary models: GPT-4o[40,39] and Gemini-1.5-pro [ 49], (ii) popular mid-sized open-weight models: LLaV A-Next (7B,13B) [ 30], LLaV A-OneVision (7B) [ 26], Phi-3.5-Vision (4B) [ 1], InternVL2 (8B) [ 10], Deepseek-VL(7B) [ 31], and (iii) VLMs specifically trained for geometry or other visual data: Math-LLaV A (13B)[47], Table-LLaV A (7B) [60]. Further details of model versions are provided in Section E.The evaluation protocol consists of three steps. First, we provide the VLM with the image-questionpair and solicit a response. As we further discuss later, we also explore the effect of the chain-of-thought (CoT) prompting [ 54,22]. Second, we extract the answer from the VLM’s response usingGPT-4o mini [ 40]. Third, we ask GPT-4o mini to score the answer by comparing the extracted answerwith the answer key. We award 1point for a correct answer and 0points otherwise without any partialcredit. More details on our evaluation protocol are provided in F.3.1 Experimental results and findingsFigure 3 presents the results comparing the selected VLMs and the baseline corresponding to randomguessing. Details including exact values are provided in G. On “easy” problems, models with about10B parameters score between 32.5% and 51.0% while closed-source models, including GPT-4oand Gemini-1.5-pro, achieve over 70%, far above random chance (22.4%). On “medium” problems,models with about 10B parameters score between 23.8% and 37.8%, slightly above random chance3Abs.Pos.Shape.Under.OCRColorInteriorCard.Dir.AreaPoint.Under.BoundaryConvexityLine.Under.SymbolLengthSharpnessCardinalRel.Pos.Obj.OverlapTextureOrdinalCurvatureAdjacencyDirectionWidthCoord.Under.Intersect.Orient.Connect.Rot.Sym.AngleOrtho.CongruenceSimilarityReflectionTangencyParallelRotationSkills020406080100Average Accuracy (%)GPT-4oGemini-1.5-ProLLaVA-OV (7B)LLaVA (13B)Random ChanceFigure 4: Accuracies of a leading model, 3 outstanding models, and random chance on each skill.The skills are ordered in descending order of accuracy, averaged over all models.(19.1%). Closed-source models achieve between 58.6% and 64.6%. For “hard” problems, mostopen models score close to random chance (11.7%). The closed-source models GPT-4o (32.3%) andGemini-1.5-pro (26.9%) scored significantly better than random chance but clearly struggled.Findings 1: Models share strengths and weaknesses. Figure 4 presents the accuracies of selectedmodels on each skill. The performances across skills varied significantly. For example, most VLMsperformed well on OCR,Absolute Position , and Shapes , but performed poorly on tangency ,parallel , and angle . Interestingly, the different models largely shared the same set of skills theydid well on and the same set they found challenging.Findings 2: Domain-specific models are not better. Surprisingly, Math-LLaV A [ 47] and Table-LLaV A [ 60], which are VLMs specifically trained for geometry or visual data, did not perform betterthan general VLMs of similar size, on almost any skills within A VSBench as the results of Table 5show.Findings 3: Chain-of-thought is not helpful in enhancing atomic visual skills. We also evaluatedthe best-performing models—GPT-4o and Gemini-1.5-pro—with chain-of-thought (CoT) prompting[22] on A VSBench. Surprisingly, CoT did not help for most skills, and for some skills, it evenworsened the performance, as shown in Table 5. This contrasts with prior work, which found CoT tobe beneficial for certain visual reasoning tasks [ 32,53]. We attribute this difference to our hypothesisthat the atomic visual skills of A VSBench require simple “comprehension” and, therefore, do notbenefit from the additional “reasoning” steps afforded by CoT prompting. More concrete comparison,see G.4 ConclusionWe present the Atomic Visual Skills Benchmark (A VSBench), a benchmark designed to rigorouslyevaluate VLMs’ ability to perform atomic visual skills in a decomposed manner. We then show thatcurrent state-of-the-art VLMs struggle with such atomic visual skills.The failure of VLMs to carry out such simple atomic visual tasks raises the question: How is it thatVLMs are sometimes successful at performing complex visual tasks? For this, we hypothesize thatthe existing impressive performance on complex tasks is due to overfitting or unimodal shortcuts.Indeed, recent studies such as [ 11,18,57,50] report that VLMs tend to depend on language shortcuts,as we further reference and discuss in Section A of the appendix.Recall that our Hypothesis 1 posits that the atomic visual skills are necessary subcomponents forcomprehending complex visual diagrams. In future work, we plan to train and fine-tune VLMsdirectly on the atomic visual skills and ascertain Hypothesis 1 .4References[1]M. Abdin, J. Aneja, H. Awadalla, A. Awadallah, A. A. Awan, N. Bach, A. Bahree, A. Bakhtiari,J. Bao, H. Behl, A. Benhaim, M. Bilenko, J. Bjorck, S. Bubeck, M. Cai, Q. Cai, V . Chaudhary,D. Chen, D. Chen, W. Chen, Y .-C. Chen, Y .-L. Chen, H. Cheng, P. Chopra, X. Dai, M. Dixon,R. Eldan, V . Fragoso, J. Gao, M. Gao, M. Gao, A. Garg, A. Del Giorno, A. Goswami, S. Gu-nasekar, E. Haider, J. Hao, R. J. Hewett, W. Hu, J. Huynh, D. Iter, S. A. Jacobs, M. Javaheripi,X. Jin, N. Karampatziakis, P. Kauffmann, M. Khademi, D. Kim, Y . J. Kim, L. Kurilenko, J. R.Lee, Y . T. Lee, Y . Li, Y . Li, C. Liang, L. Liden, X. Lin, Z. Lin, C. Liu, L. Liu, M. Liu, W. Liu,X. Liu, C. Liu, P. Madan, A. Mahmoudzadeh, D. Majercak, M. Mazzola, C. C. T. Mendes,A. Mitra, H. Modi, A. Nguyen, B. Norick, B. Patra, D. Perez-Becker, T. Portet, R. Pryzant,H. Qin, M. Radmilac, L. Ren, G. de Rosa, C. Rosset, S. Roy, O. Ruwase, O. Saarikivi, A. Saied,A. Salim, M. Santacroce, S. Shah, N. Shang, H. Sharma, Y . Shen, S. Shukla, X. Song, M. Tanaka,A. Tupini, P. Vaddamanu, C. Wang, G. Wang, L. Wang, S. Wang, X. Wang, Y . Wang, R. Ward,W. Wen, P. Witte, H. Wu, X. Wu, M. Wyatt, B. Xiao, C. Xu, J. Xu, W. Xu, J. Xue, S. Yadav,F. Yang, J. Yang, Y . Yang, Z. Yang, D. Yu, L. Yuan, C. Zhang, C. Zhang, J. Zhang, L. L. Zhang,Y . Zhang, Y . Zhang, Y . Zhang, and X. Zhou. Phi-3 technical report: A highly capable languagemodel locally on your phone. arXiv:2404.14219 , 2024.[2]Z. Allen-Zhu and Y . Li. Physics of language models: Part 3.2, knowledge manipulation.arXiv:2309.14402 , 2023.[3]S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visualquestion answering. International Conference on Computer Vision , 2015.[4]S. Arora and A. Goyal. A theory for emergence of complex skills in language models.arXiv:2307.15936 , 2023.[5]J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry,Q. Le, and C. Sutton. Program synthesis with large language models. arXiv:2108.07732 , 2021.[6]L. Berglund, M. Tong, M. Kaufmann, M. Balesni, A. C. Stickland, T. Korbak, and O. Evans.The reversal curse: LLMs trained on "A is B" fail to learn "B is A". International Conferenceon Learning Representations , 2024.[7]F. Bordes, R. Y . Pang, A. Ajay, A. C. Li, A. Bardes, S. Petryk, O. Mañas, Z. Lin, A. Mahmoud,B. Jayaraman, M. Ibrahim, M. Hall, Y . Xiong, J. Lebensold, C. Ross, S. Jayakumar, C. Guo,D. Bouchacourt, H. Al-Tahan, K. Padthe, V . Sharma, H. Xu, X. E. Tan, M. Richards, S. Lavoie,P. Astolfi, R. A. Hemmat, J. Chen, K. Tirumala, R. Assouel, M. Moayeri, A. Talattof, K. Chaud-huri, Z. Liu, X. Chen, Q. Garrido, K. Ullrich, A. Agrawal, K. Saenko, A. Celikyilmaz, andV . Chandra. An introduction to vision-language modeling. arXiv:2405.17247 , 2024.[8]J. Cao and J. Xiao. An augmented benchmark dataset for geometric question answering throughdual parallel text encoding. International Conference on Computational Linguistics , 2022.[9]J. Chen, T. Li, J. Qin, P. Lu, L. Lin, C. Chen, and X. Liang. UniGeo: Unifying geometrylogical reasoning via reformulating mathematical expression. Computer Vision and PatternRecognition , 2022.[10] Z. Chen, W. Wang, H. Tian, S. Ye, Z. Gao, E. Cui, W. Tong, K. Hu, J. Luo, Z. Ma, et al. Howfar are we to GPT-4v? closing the gap to commercial multimodal models with open-sourcesuites. arXiv:2404.16821 , 2024.[11] X. Fu, Y . Hu, B. Li, Y . Feng, H. Wang, X. Lin, D. Roth, N. A. Smith, W.-C. Ma, and R. Krishna.BLINK: Multimodal large language models can see but not perceive. arXiv:2404.12390 , 2024.[12] O. Golovneva, Z. Allen-Zhu, J. Weston, and S. Sukhbaatar. Reverse training to nurse the reversalcurse. arXiv:2403.13799 , 2024.[13] Y . Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the V in VQA matter:Elevating the role of image understanding in visual question answering. Computer Vision andPattern Recognition , 2017.5[14] D. Guo, Q. Zhu, D. Yang, Z. Xie, K. Dong, W. Zhang, G. Chen, X. Bi, Y . Wu, Y . K. Li, F. Luo,Y . Xiong, and W. Liang. DeepSeek-Coder: When the large language model meets programming– the rise of code intelligence. arXiv:2401.14196 , 2024.[15] D. Gurari, Q. Li, A. J. Stangl, A. Guo, C. Lin, K. Grauman, J. Luo, and J. P. Bigham. VizWizgrand challenge: Answering visual questions from blind people. Computer Vision and PatternRecognition , 2018.[16] M. Hanna, O. Liu, and A. Variengien. How does GPT-2 compute greater-than?: Interpretingmathematical abilities in a pre-trained language model. arXiv:2305.00586 , 2023.[17] T. He, D. Doshi, A. Das, and A. Gromov. Learning to Grok: Emergence of in-context learningand skill composition in modular arithmetic tasks. arXiv:2406.02550 , 2024.[18] C.-Y . Hsieh, J. Zhang, Z. Ma, A. Kembhavi, and R. Krishna. SugarCrepe: Fixing hackablebenchmarks for vision-language compositionality. arXiv:2306.14610 , 2023.[19] K. Kafle, B. Price, S. Cohen, and C. Kanan. DVQA: Understanding data visualizations viaquestion answering. Computer Vision and Pattern Recognition , 2018.[20] M. Kazemi, H. Alvari, A. Anand, J. Wu, X. Chen, and R. Soricut. GeomVerse: A systematicevaluation of large models for geometric reasoning. arXiv:2312.12241 , 2023.[21] A. Kembhavi, M. Salvato, E. Kolve, M. Seo, H. Hajishirzi, and A. Farhadi. A diagram is wortha dozen images. arXiv:1603.07396 , 2016.[22] T. Kojima, S. S. Gu, M. Reid, Y . Matsuo, and Y . Iwasawa. Large language models are zero-shotreasoners. arXiv:2205.11916 , 2022.[23] B. Lake and M. Baroni. Generalization without systematicity: On the compositional skillsof sequence-to-sequence recurrent networks. International Conference on Machine Learning ,2018.[24] N. Lee, K. Sreenivasan, J. D. Lee, K. Lee, and D. Papailiopoulos. Teaching arithmetic to smalltransformers. International Conference on Learning Representations , 2024.[25] M. Lewis, N. V . Nayak, P. Yu, Q. Yu, J. Merullo, S. H. Bach, and E. Pavlick. Does CLIP bindconcepts? probing compositionality in large image models. arXiv:2212.10537 , 2022.[26] B. Li, Y . Zhang, D. Guo, R. Zhang, F. Li, H. Zhang, K. Zhang, Y . Li, Z. Liu, and C. Li.LLaV A-OneVision: Easy visual task transfer. arXiv:2408.03326 , 2024.[27] Z. Lin, X. Chen, D. Pathak, P. Zhang, and D. Ramanan. Revisiting the role of language priorsin vision-language models. arXiv:2306.01879 , 2024.[28] Z. Lin and K. Lee. Dual operating modes of in-context learning. arXiv:2402.18819 , 2024.[29] H. Liu, C. Li, Y . Li, B. Li, Y . Zhang, S. Shen, and Y . J. Lee. LLaV A-NeXT: Improved reasoning,OCR, and world knowledge, 2024.[30] H. Liu, C. Li, Q. Wu, and Y . J. Lee. Visual instruction tuning. Neural Information ProcessingSystems , 2023.[31] H. Lu, W. Liu, B. Zhang, B. Wang, K. Dong, B. Liu, J. Sun, T. Ren, Z. Li, H. Yang, Y . Sun,C. Deng, H. Xu, Z. Xie, and C. Ruan. DeepSeek-VL: Towards real-world vision-languageunderstanding. arXiv:2403.05525 , 2024.[32] P. Lu, H. Bansal, T. Xia, J. Liu, C. Li, H. Hajishirzi, H. Cheng, K.-W. Chang, M. Galley, andJ. Gao. MathVista: Evaluating mathematical reasoning of foundation models in visual contexts.International Conference on Learning Representations , 2024.[33] Z. Ma, J. Hong, M. O. Gul, M. Gandhi, I. Gao, and R. Krishna. CREPE: Can vision-languagefoundation models reason compositionally? Computer Vision and Pattern Recognition , 2023.6[34] A. Masry, X. L. Do, J. Q. Tan, S. Joty, and E. Hoque. ChartQA: A benchmark for questionanswering about charts with visual and logical reasoning. Findings of the Association forComputational Linguistics , 2022.[35] N. Methani, P. Ganguly, M. M. Khapra, and P. Kumar. PlotQA: Reasoning over scientific plots.Conference on Applications of Computer Vision , 2020.[36] S. Min, X. Lyu, A. Holtzman, M. Artetxe, M. Lewis, H. Hajishirzi, and L. Zettlemoyer. Re-thinking the role of demonstrations: What makes in-context learning work? arXiv:2202.12837 ,2022.[37] M. Okawa, E. S. Lubana, R. Dick, and H. Tanaka. Compositional abilities emerge multiplica-tively: Exploring diffusion models on a synthetic task. Neural Information Processing Systems ,2023.[38] S. Ontanón, J. Ainslie, V . Cvicek, and Z. Fisher. Making transformers solve compositionaltasks. arXiv:2108.04378 , 2021.[39] OpenAI. GPT-4 technical report. arXiv:2303.08774 , 2024.[40] OpenAI. GPT-4o system card. https://openai.com/research/gpt-4o-system-card ,Aug 2024.[41] R. Paiss, A. Ephrat, O. Tov, S. Zada, I. Mosseri, M. Irani, and T. Dekel. Teaching CLIP to countto ten. International Conference on Computer Vision , 2023.[42] O. Press, M. Zhang, S. Min, L. Schmidt, N. A. Smith, and M. Lewis. Measuring and narrowingthe compositionality gap in language models. arXiv:2210.03350 , 2022.[43] P. Rahmanzadehgervi, L. Bolton, M. R. Taesiri, and A. T. Nguyen. Vision language models areblind. arXiv:2407.06581 , 2024.[44] R. Ramesh, E. S. Lubana, M. Khona, R. P. Dick, and H. Tanaka. Compositional capabilities ofautoregressive transformers: A study on synthetic, interpretable tasks. International Conferenceon Machine Learning , 2024.[45] B. Rozière, J. Gehring, F. Gloeckle, S. Sootla, I. Gat, X. E. Tan, Y . Adi, J. Liu, R. Sauvestre,T. Remez, J. Rapin, A. Kozhevnikov, I. Evtimov, J. Bitton, M. Bhatt, C. C. Ferrer, A. Grattafiori,W. Xiong, A. Défossez, J. Copet, F. Azhar, H. Touvron, L. Martin, N. Usunier, T. Scialom, andG. Synnaeve. Code Llama: Open foundation models for code. arXiv:2308.12950 , 2023.[46] J. Shen, Y . Yuan, S. Mirzoyan, M. Zhang, and C. Wang. Measuring vision-language stem skillsof neural models. International Conference on Learning Representations , 2024.[47] W. Shi, Z. Hu, Y . Bin, J. Liu, Y . Yang, S.-K. Ng, L. Bing, and R. K.-W. Lee. Math-LLaV A: Boot-strapping mathematical reasoning for multimodal large language models. arXiv:2406.17294 ,2024.[48] J. Song, Z. Xu, and Y . Zhong. Out-of-distribution generalization via composition: a lens throughinduction heads in transformers. arXiv:2408.09503 , 2024.[49] Gemini Team. Gemini: A family of highly capable multimodal models. arXiv:2312.11805 ,2024.[50] T. Thrush, R. Jiang, M. Bartolo, A. Singh, A. Williams, D. Kiela, and C. Ross. Winoground:Probing vision and language models for visio-linguistic compositionality. Computer Vision andPattern Recognition , 2022.[51] S. Tong, E. Brown, P. Wu, S. Woo, M. Middepogu, S. C. Akula, J. Yang, S. Yang, A. Iyer,X. Pan, A. Wang, R. Fergus, Y . LeCun, and S. Xie. Cambrian-1: A fully open, vision-centricexploration of multimodal LLMs. arXiv:2406.16860 , 2024.[52] S. Tong, Z. Liu, Y . Zhai, Y . Ma, Y . LeCun, and S. Xie. Eyes wide shut? exploring the visualshortcomings of multimodal LLMs. Computer Vision and Pattern Recognition , 2024.7[53] X. Wang, Z. Hu, P. Lu, Y . Zhu, J. Zhang, S. Subramaniam, A. R. Loomba, S. Zhang, Y . Sun,and W. Wang. SciBench: Evaluating college-level scientific problem-solving abilities of largelanguage models. International Conference on Machine Learning , 2024.[54] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V . Le, andDenny Zhou. Chain-of-Thought prompting elicits reasoning in large language models. NeuralInformation Processing Systems , 2024.[55] Z. Xu, Z. Shi, and Y . Liang. Do large language models have compositional ability? aninvestigation into limitations and scalability. ICLR Workshop on Mathematical and EmpiricalUnderstanding of Foundation Models , 2024.[56] M. Yuksekgonul, F. Bianchi, P. Kalluri, D. Jurafsky, and J. Zou. When and why vision-languagemodels behave like bags-of-words, and what to do about it? International Conference onLearning Representations , 2023.[57] R. Zhang, D. Jiang, Y . Zhang, H. Lin, Z. Guo, P. Qiu, A. Zhou, P. Lu, K. Chang, P. Gao, et al.MathVerse: Does your multi-modal LLM truly see the diagrams in visual math problems?arXiv:2403.14624 , 2024.[58] H. Zhao, S. Kaur, D. Yu, A. Goyal, and S. Arora. Can models learn skill composition fromexamples? ICML Workshop on LLMs and Cognition , 2024.[59] T. Zhao, T. Zhang, M. Zhu, H. Shen, K. Lee, X. Lu, and J. Yin. VL-Checklist: Evaluatingpre-trained vision-language models with objects, attributes and relations. arXiv:2207.00221 ,2022.[60] M. Zheng, X. Feng, Q. Si, Q. She, Z. Lin, W. Jiang, and W. Wang. Multimodal table under-standing. arXiv:2406.08100 , 2024.8A Prior worksVLM benchmarks and language shortcuts. Existing VLM benchmarks evaluate models on theirability to solve diverse vision-language problems from general real-world tasks [ 3,13,15], tasksthat require specific skills such as high-school geometry [ 32,20,9,8], analyzing charts and tables[34,60,35], and other scientific visual data [ 19,21]. However, most VLM benchmarks do not containa mechanism for verifying whether a correct solution is based on correctly comprehending the visualinformation, allowing the models to sometimes rely on linguistic biases to find a solution [ 7]. Lin etal. [27] revealed that by simply avoiding implausible or less fluent sentences, blind language modelscan distinguish the correct description of an image from wrong ones on CREPE [33], VL-Checklist[59], and ARO[ 56]. Mathverse [ 57] observed that, when solving geometry problems, VLMs relymostly on textual inputs without correctly interpreting diagrams.Some recent work has started to seek unbiased ways to measure visual capabilities. Winoground [ 50]prevents choosing image captions based on the plausibility of the sentence structure, by providingtwo images with same objects or concepts but with different relationships. Blink [ 11] and CV-Bench[51] present novel vision-oriented tasks with minimized effects of linguistic biases.Compositional reasoning. There has been intensive recent research on the compositional capa-bilities of Language Models [ 4,55,17,48,44,58,23,38,42]. VLMs have additionally showncompositional capabilities in visual tasks [ 25,37,33,59,56]. However, such studies left the visualportion with less attention, thus vulnerable to linguistic shortcuts such as removing grammaticallywrong setnences or choosing more realistic sentences as answers. To mitigate this issue, Sugar-Crepe [ 18] generated sentences with ChatGPT to provide incorrect captions of given images, withdifferent compositional structures while as realistic as the ground truths.Research on atomic skills of LLMs. To understand the capabilities of LLMs, there has beenprior work on studying LLMs in simple idealized experiments. This includes research on in-contextlearning [ 28,36], arithmetic (addition and multiplication) [ 38,16,24], fact search and reverse factsearch [2, 6, 12], and programming [5, 45, 14].However, there have been far fewer studies of this kind for vision language models. Paiss et al. [ 41]focused on counting objects in image and suggested CountBench. Shen et al. [ 46] suggest a skill-based approach to evaluating VLMs, but their list of skills is not atomic. CV-Bench [ 51] evaluates 4vision-centric skills: spatial relationship, object count, depth order, and relative distance. MMVP[52] challenges VLMs to understand 9 visual patterns.BlindTest [ 43] observed failures of VLMs with 7 simple tasks focusing on fundamental geometricfeatures, some of which share similar approaches with A VSBench. While these tasks are novel andeffective, they lack diversity in color, shape, or word choice. Their Task 1 for instance, uses onlyone red and one blue line, each with exactly one sharp turn. For Task 3, they adopted only threetypes of strings and a red circle to generate visual context. Moreover, these 7 tasks are insufficient tocollectively evaluate the full spectrum of visual capabilities, leading to limited scope in evaluationobjectives. In contrast, our A VSBench offers a systematic, comprehensive framework for evaluatinga comprehensive set of visual skills of VLMs with an extensive dataset that is rich in color, shape,and other variations.B Problem difficulty categorizationThe problems are categorized into three difficulty levels: easy, medium, and hard. Problems catego-rized as easy or medium should be quickly solvable by humans, whereas hard questions, althoughmore time-consuming, are designed to be clear and easily verifiable. We clarify that the difficultylevels were determined by the authors, so there is a degree of subjectivity to the categorization.9C Sample problemsWe present 99 sample problems to provide the readers with the general characteristics of our A VS-Bench dataset, along with the responses of Gemini-1.5-pro[ 49], GPT-4o[ 40], and LLaV A-Next-13B[ 29]. For readability, we report a summary rather than the full text of the model response. SeeFigure 5 for examples of full model responses.Ground Truth 1GPT-4o Response : The red line intersects with the black lines 9 times in the image. Extracted Answer : 9Score : 0Question How many times does the red line segment intersect with the black line segments in the image? Gemini-1.5-pro Response : The red line intersects the black lines ** 9** times. \n Extracted Answer : 9Score : 0LLaV A-Next-13B Response : The red line intersects with the black lines at three points. Extracted Answer : 3Score : 0Figure 5: Full responses of VLMs and scoring by GPT-4o mini.10Question Choose the most appropriate color to fill in the box marked with ‘?’ in the image. The answer is one of ‘a’, ‘b’, ‘c’, or ‘d’. Ground Truth a Gemini a GPT aLLaV A bQuestion In the image, all but one shape is congruent to the others, meaning they can be perfectly overlapped by moving, rotating, and flipping. Choose the one shape that looks different. Answer from: a, b, c, d, e, f.Ground Truth c Gemini d GPT dLLaV A eQuestion In the image, there are different shapes labeled as a, b, c, x, y, and z. Identify the one shape that looks different from the others. Ground Truth y Gemini z GPT yLLaV A bQuestion In the image, there are different shapes labeled as a, b, c, x, y, and z. Guess which shape is different from the others. Ground Truth b Gemini z GPT zLLaV A xQuestion In the image, there are triangles labeled with numbers or letters. Which numbered triangle is congruent to triangle X? Ground Truth 4 Gemini 3 GPT 1LLaV A 1Question I gave you a graph as an image. How many connected components are there? Your answer should be a single number. For example, if there are 8 connected components, your answer should be "8". Ground Truth 3 Gemini 4 GPT 2LLaV A 4Question In the image, there are 9 nodes connected to each other. Guess which numbered node is not connected to any other nodes. Ground Truth 4 Gemini 4 GPT 4LLaV A 1Question Which color's circle is connected to the red circle through black lines in the image? Choose from: pink, orange, yellow, green, blue, purple, black. Ground Truth Green Gemini Blue GPT Green LLaV A Yellow(2/3), Pink(1/3) Question In the image, there are nodes labeled A through E and 1 through 3, connected to each other by straight lines. Among A, B, C, D, and E, which node is not directly connected to node 2? Ground Truth A Gemini E GPT ALLaV A AFigure 6: Sample problems from A VSBench and responses from Gemini-1.5-pro, GPT-4o, and LLaV A-Next-13B, 1/11.11Question You can see disjoint green rectangles in the image. Which green rectangle has its exact boundary marked in black? Answer from a, b, c, d, e, f, g. Ground Truth g Gemini c GPT fLLaV A dQuestion How many vertices are there in the polygon provided? Ground Truth 9 Gemini 10 GPT 10LLaV A 6 Question How many straight lines are in the image? Ground Truth 5 Gemini 6 GPT 7LLaV A 4Question I divided the photo on the left into pieces, as shown on the right. How many pieces are there after the split? Ground Truth 5 Gemini 5 GPT 5LLaV A 4Question In the image, there are red, blue, green, purple, and yellow shapes, with points A and B near each shape. For which color's shape is the shortest path from A to B counterclockwise? Ground Truth yellow Gemini red GPT redLLaV A purple Question In the image, there are six crosses with arrows. Among them, one arrow is rotating in the opposite direction compared to the other five. Find the one that rotates in a different orientation from the others. Answer with the letter that denotes it. Ground Truth E Gemini F GPT CLLaV A Bottom Right Question In the image, there are five arrows on a shape. Each arrow points either clockwise or counterclockwise. Which arrow is NOT pointing the same orientation with the arrow A? Answer with single letter. Ground Truth E Gemini B GPT BLLaV A CQuestion When you start from the black heart in the image, following which curve makes you run counterclockwise? Choose from: red, yellow, green, blue, purple. Ground Truth Purple Gemini Green GPT Purple LLaV A RedQuestion There is a gradually changing colors in the shapes on the first row. What will be the color best fits in a shape labeled “?” ? Choose one from “A”, “B”, “C”, “D" and "E". Ground Truth B Gemini B GPT DLLaV A DFigure 7: Sample problems from A VSBench and responses from Gemini-1.5-pro, GPT-4o, and LLaV A-Next-13B, 2/11.12Question In the image, there are black, red, orange, green, blue, and purple regions, with some regions sharing sides. Choose all the regions that share at least one side with the red region and list their colors. Ground Truth Green Gemini Green, Blue, Orange GPT Green, Orange LLaV A Black, Green, Blue, Purple Question Among a, b, c, d, e, and f in the image, which pair of lines forms the only obtuse angle? Consider only the internal angle. Ground Truth b Gemini d GPT eLLaV A a and b Question In the image, there are rectangles formed by a pair of horizontal lines and a pair of vertical lines, labeled by numbers from 1 to 7. Guess the smallest rectangle. Ground Truth 6 Gemini 2 GPT 4LLaV A 1Question In the image, there are five different shapes and a thick black line on them. Which color's shape possesses the largest area above the black line? Choose from: yellow, green, blue, purple, pink. Ground Truth Green Gemini Yellow GPT Blue LLaV A Yellow Question Let's say there are farms called a, b, c, d, and e, as shown in the image. Black curves represent their fences. From which farm are the sheep unable to escape? Ground Truth b Gemini b GPT bLLaV A aQuestion There are six gray shapes labeled with alphabets in the image. Which shape has only its boundary marked as black? Answer from a, b, c, d, e, f." Ground Truth b Gemini f GPT cLLaV A dQuestion In the image, a square is divided into cells 1, 2, 3, 4, and 5. What is the name of the boundary between cells 4 and 5? Choose from BG, CH, FG, AG, and DG, and write your answer in alphabetical order. For example, the boundary between cells 1 and 2 is 'BG'. Ground Truth FG Gemini FH GPT GHLLaV A DHQuestion In this problem, we say two areas are adjacent to each other if one is directly above, below, to the left, to the right, or diagonal to the other. Answer with all the colors of the areas that are not adjacent to box A: red, orange, yellow, light green, green, light blue, blue, purple, gray, and white. Ground Truth Yellow, Purple Gemini Yellow, Light Blue GPT Gray, Blue LLaV A All of them Question Choose the only acute angle in the image from the following: AOP, AOQ, AOR, AOS, AOT, AOU. Ground Truth AOR Gemini AOR GPT AOP LLaV A AOQ Figure 8: Sample problems from A VSBench and responses from Gemini-1.5-pro, GPT-4o, and LLaV A-Next-13B, 3/11.13Question Among the three options in the given picture, choose all the options where points A and B are connected by a black line. Ground Truth 1 Gemini Option 1 GPT Option 1, 3 LLaV A Option 1, 3 Question In the image, there are shapes of different colors. Identify the color of the only convex shape. Choose from: red, purple, orange, green, or blue. Ground Truth Orange Gemini Orange GPT Orange LLaV A Blue Question In the image, there is a 9 by 9 grid with some blocks colored. Among blue, purple, red, green, and yellow, whose area's shape is convex? Ground Truth Purple Gemini Purple GPT Purple LLaV A BlueQuestion The picture describes six dots labeled as A, B, C, D, E and F on the x-y coordinate. Which dot has the same x-coordinate as dot A? <1> dot B <2> dot F <3> dot E <4> none of the above Ground Truth 2 Gemini 4 GPT 1(2/3), 2(1/3) LLaV A dot B Question What are the x and y coordinates of point C? The answer should be in the form of C(x, y). The x and y coordinates of point C are integers. Ground Truth C(-6, -2) Gemini C(-6, -2) GPT C(-6, -1) LLaV A C(-7, -6) Question In the image, there are two circles and a curve. Which color curve is most likely to be part of a circle? Choose from: red, orange, blue, purple, green, yellow, black. Ground Truth Blue Gemini Blue GPT Blue LLaV A Blue Question The image shows a graph of a function with labeled points: A, B, C, and D. Choose the point where the function is most sharply bent. State the label of the point. Ground Truth A Gemini A GPT DLLaV A CQuestion In the image, inside a circle, there are arrows labeled as P, Q, X, V , W, and Z. Only one of them is pointing inward, while the others are pointing outward. Guess the arrow pointing inward. Ground Truth Q Gemini Z GPT WLLaV A WQuestion In the image, there are four paths. After following the arrows to the end of each path, one of the routes points in a different direction from the others. Identify the label of the route that points in the different direction. Choose from 'A', 'B', 'C', or 'D'.Ground Truth B Gemini D GPT ALLaV A BFigure 9: Sample problems from A VSBench and responses from Gemini-1.5-pro, GPT-4o, and LLaV A-Next-13B, 4/11.14Question In the image, there are a cloud, a sun, a moon, a lightning, and a three-headed arrow. Identify the shape that is NOT pointed at by the arrow. Choose from cloud, sun, moon, and lightning. Ground Truth Cloud Gemini Cloud GPT Moon LLaV A Lightning Question In the image, there are dots in various colors and an arrow. Guess the color of the dot the arrow is pointing at. Choose from: red, orange, blue, green, purple, yellow, brown, gray, black. Ground Truth Yellow Gemini No dot GPT Purple LLaV A Not Possible to Determine Question In the picture, arrows A, B, C, and D are drawn. Among arrows A, B, C, and D, which one is pointing in a different direction from the other three arrows? Ground Truth B Gemini C GPT C(2/3), D(1/3) LLaV A DQuestion In the image, there is a pentagon ABCDE and 5 points of different colors on line AD. Which colored point is most likely the intersection of line AD and line EC? Choose from: red, blue, green, yellow, or purple. Ground Truth Yellow Gemini Purple GPT Green LLaV A RedQuestion How many times does the red line segment intersect with the black line segments in the image? Ground Truth 1 Gemini 9 GPT 9LLaV A 3Question In the image, there are five vertices: 1, 2, 3, 4, and 5, connected by some edges. Among the following choices, which edge exists in the image? Choose from the following options: a) edge(1,2) b) edge(2,3) c) edge(3,4) d) edge(4,5) e) edge(5,1) Ground Truth a Gemini c, a, d GPT bLLaV A a, b, c Question There are five points—A, B, C, D, and E—in the given picture. Some of these points are connected by line segments, while others are not. Choose all the pairs from the options where a line segment exists between the two points, and answer with the appropriate options. [Options] (1) A and B (2) A and C (3) A and D (4) A and E (5) B and C (6) B and D (7) B and E (8) C and D (9) C and E (10) D and E. Ground Truth 1, 5, 7, 8 Gemini 1, 5, 6, 8, 9, 10 GPT 1, 4, 5, 6, 7, 8 LLaV A 1, 2, 3, 4 Question Read this rotated image. Ground Truth Everland Gemini Everyday GPT EVERYONE LLaV A PURPOSE Question Read the six-letter word displayed in the picture. Ground Truth Helium Gemini STREAM GPT HELLO! LLaV A Not Recognizable Figure 10: Sample problems from A VSBench and responses from Gemini-1.5-pro, GPT-4o, and LLaV A-Next-13B, 5/11.15Question What are the two characters marked with a blue circle in the phrase in the image? Answer with the two characters. Ground Truth EW Gemini E W GPT E, W LLaV A N YQuestion Which of the alphabets is not present in the image: “A”, “U”, “E”, “F”, “W”, “N”, “Y”? Ground Truth N Gemini N GPT NLLaV A UQuestion Read the text in the given image. Ground Truth Sejong Gemini salojas GPT Buolas LLaV A buoias Question There are shapes of different colors labeled with numbers. Counting from left to right, what is the color of the third circle? Choose from: pink, blue, yellow, green. Ground Truth Pink Gemini Green GPT Pink LLaV A Green Question There are several letters in the image. What is the letter that is fourth from the above? You should also consider its case. For example, if 'z' is fourth from the above, your answer should be "z". Ground Truth s Gemini s GPT sLLaV A qQuestion There are four shapes labeled 1, 2, 3, and 4 in the given picture. Each shape is semi-transparent and has a single color. Choose all the pairs where overlapping occurs, and answer with the label of the option: (a) 1 and 2 (b) 1 and 3 (c) 1 and 4 (d) 2 and 3 (e) 2 and 4 (f) 3 and 4. Ground Truth d Gemini d GPT a, d, e, f LLaV A dQuestion In the image, there are three overlapping shapes, two of which are a trapezoid and a two-headed arrow. Identify the last shape from the following options: a triangle, a rectangle, a pentagon, a hexagon, or a circle. Ground Truth Pentagon Gemini Hexagon GPT Pentagon LLaV A Rectangle Question In the image, W, X, Y , and Z each represent a green shape and a group of blue shapes drawn together. Which image shows the only case where the green and blue shapes do not overlap? Ground Truth Y Gemini X GPT XLLaV A YQuestion Which color's line is parallel to line l? Choose from: blue, orange, green, purple, red, brown. Ground Truth Red Gemini Brown GPT Brown LLaV A Purple Figure 11: Sample problems from A VSBench and responses from Gemini-1.5-pro, GPT-4o, and LLaV A-Next-13B, 6/11.16Question In the image, there are several line segments with distinct colors. Given that there is a unique line segment that is parallel to the yellow line, what is the color of the line? Choose from brown, green, blue, and black. Ground Truth Green Gemini Brown GPT Green LLaV A Green Question Which color's text is written parallel to the black line in the image? Choose from: red, blue, green, orange, or purple. Ground Truth Green Gemini None GPT Green LLaV A BlueQuestion Among the six red lines labeled 1, 2, 3, 4, and 5 respectively, choose all the red lines that are parallel to the blue line in the given image. Ground Truth 1, 3, 4 Gemini 2, 5 GPT 1, 3, 4, 5 LLaV A 1, 5Question In the image, there are lines labeled S, T, U, X, Y , and Z, and a black curve. Which line intersects the curve twice? Ground Truth Y Gemini Z GPT TLLaV A UQuestion Among the six areas in the given picture, find the four areas that contain one or more black points and answer with their labels(numbers). Ground Truth 1, 4, 5, 6 Gemini 1, 4, 5, 6 GPT 1, 2, 5, 6 LLaV A 1, 3, 4, 6 Question In the image, there are curves labeled as: a, b, c, x, y, and z. Guess the only curve with a sharp point. Ground Truth z Gemini y GPT yLLaV A zQuestion In the image, there are curves labeled as: a, b, c, x, y, and z. Guess the only curve that is smooth everywhere. Ground Truth x Gemini z GPT cLLaV A aQuestion In the image, there are multiple squares with a number or a letter labeled inside. Name the only label that is written at the bottom of its corresponding square. Ground Truth 6 Gemini H GPT 1LLaV A GQuestion In the image, there are four arrows located in each corner. Where is the one with a different color? Choose from [upper right, upper left, lower right, lower left]. Ground Truth Lower Right Gemini Lower Right GPT Lower Left LLaV A Upper Right Figure 12: Sample problems from A VSBench and responses from Gemini-1.5-pro, GPT-4o, and LLaV A-Next-13B, 7/11.17Question In the image, there is a slice of yellow cheese with holes. Which hole forms a perfect circle in the cheese? Choose from: v, w, x, y, z. Ground Truth x Gemini w GPT yLLaV A VQuestion Which color's area is totally contained inside the blue area? Choose just one from: red, green, pink, gray, purple, yellow, brown. Ground Truth Yellow Gemini Gray GPT Purple LLaV A Green Question In this picture, there is a circle and 9 points. The center of the circle refers to the point from which all points on the circle are equidistant. Which point among E, F, G, H, I, J, K, L, M is most likely to be the center of the circle in the picture? Ground Truth M Gemini M GPT MLLaV A IQuestion In the image, there are squares of the same size, colored red, yellow, black, green, blue, orange, pink, and purple. Identify the color of the only square at a different height. Ground Truth Orange Gemini Orange GPT Orange LLaV A Orange Question In the image, point P is the midpoint of the line segment BD. Similarly, choose the point that is most likely to be the midpoint of the line segment AC. The answer should be one of E, F, G, or H. Ground Truth E Gemini F GPT FLLaV A EQuestion In the image, there are squares of the same size, colored red, yellow, black, green, blue, orange, pink, and purple. Identify the color of the square closest to the top. Ground Truth Blue Gemini Blue GPT Black LLaV A Blue Question There are points labeled 1, 2, 3, 4, and 5 in the picture. Choose the phrase that describes the image correctly. Point 2 is on the (left/right/upper/lower/upper left/upper right/lower left/lower right) side of point 4. Ground Truth Upper Gemini Upper Left GPT Upper Left LLaV A Upper Left Question Among black, blue, red, green, and yellow, choose the color of the pair of points that is not line-symmetric about the black line. Ground Truth Yellow Gemini Yellow GPT Yellow LLaV A Red and Yellow Question In the image, there are a black triangle on the left of the line and five triangles on the right of the line. Choose the triangle on the right that is symmetric to the black triangle with respect to the line. Choose your answer from “red”, “blue”, “green”, “yellow”, “purple”. For example, if black and red triangle are symmetric with respect to the line, your answer should be “red”. Ground Truth Blue Gemini Green GPT Purple LLaV A BlueFigure 13: Sample problems from A VSBench and responses from Gemini-1.5-pro, GPT-4o, and LLaV A-Next-13B, 8/11.18Question In the image, there are four arrows and three line segments of different colors. The black arrow is the original arrow, and the other arrows are the results of reflections with respect to the line segment that matches the color of the resulting arrow. Among the three colors, one is incorrectly reflected. Which color is not correctly reflected? Choose from blue, gray, and purple. Ground Truth Gray Gemini Purple GPT Blue LLaV A Purple Question In the image, there is a black line and four pairs of curves (each curve has the same color as its partner), each representing a reflection of a figure. Which pair failed to correctly represent a reflection with respect to the black line? Answer with the color of the pair: blue, green, black, or red.Ground Truth Red Gemini Red GPT Blue LLaV A Blue Question In the image, there is a 9 by 9 grid with some blocks colored. Among blue, red, orange, green, and yellow, which color blocks are line-symmetric about the black line? Ground Truth Blue Gemini Blue, Red, Orange GPT Green, Blue, Red LLaV A Blue and Red Question Choose the word in parentheses that correctly describes the image: In the image, line segment (AD/DC/CB) has the same length as line segment AB.Ground Truth DC Gemini DC GPT ADLLaV A DCQuestion In the image, there are curves labeled a, b, c, x, y, and z. Which curve is the shortest? Ground Truth y Gemini y GPT yLLaV A cQuestion In the image, there is a black line and 5 different shapes on it. Which shape is the tallest? Choose from: heart, square, triangle, oval, circle. Ground Truth Circle Gemini Circle GPT Heart LLaV A Circle Question Guess which color's line is the only line orthogonal to the bold black line in the image. Choose from: blue, green, red, orange, purple, brown. Ground Truth Red Gemini Orange GPT Green LLaV A Blue Question In the image, there are six points A, B, C, D, E, and O and some line segments. Several pairs of line segments meet at one of the points and form a right angle. Find three points where the right angles are located. List the letters denoting the correct points in the format of ‘X, Y , Z’, in alphabetical order. Ground Truth C, D, E Gemini C, D, E GPT B,E,O LLaV A A, B, C, A, E, O, B, D, D Question In the image, there are four choices from A to D, each paired with two line segments. Which choice represents a right angle? Answer with a single letter. Ground Truth A Gemini B GPT CLLaV A DFigure 14: Sample problems from A VSBench and responses from Gemini-1.5-pro, GPT-4o, and LLaV A-Next-13B, 9/11.19Question In the image, there exists a line segment perpendicular to the line EA. Choose the line segment from the following choices: 1) AB 2) BD 3) BC 4) FB Answer with the number of the choice. Ground Truth 3 Gemini 2 GPT 2LLaV A 2Question In the image, there are lines with different colors. Which color line is NOT orthogonal to the black line? Choose from: red, blue, orange, green, purple, pink, brown, yellow. Ground Truth Orange Gemini Red GPT RedLLaV A Purple Question In the image, which green shape is least likely to be a rotation of X about the center of the circle? Choose from: a, b, c, d, or e. Ground Truth c Gemini d GPT dLLaV A bQuestion Which of the shapes labeled 1, 2, 3, or 4 cannot be made by rotating shape A? Provide the corresponding label. Ground Truth 3 Gemini 4 GPT 2LLaV A 1Question If you rotate the black shape around the center of the circle in the image, which color’s shape will fit perfectly with it, as shown in the sample above? Ground Truth Orange Gemini Orange GPT RedLLaV A Purple Question In the image, there is a yellow shape and five other shapes. Choose the shape that is symmetric with the yellow shape with respect to point G. Choose your answer from A, B, C, D, or E. Ground Truth B Gemini E GPT CLLaV A CQuestion Among shapes A, B, C, D, E, and F, which one does NOT have any 3-fold rotational symmetry? Ground Truth A Gemini F GPT CLLaV A DQuestion Choose all the options that correctly describe the given purple shape. (1) It is a quadrilateral. (2) It is a square. (3) It is a rectangle. (4) It is a trapezoid. (5) It is a rhombus. (6) It is a parallelogram. Ground Truth 1 Gemini 1 GPT 1LLaV A 6Question Choose all the shapes you can see in this picture and answer with their labels: (1) triangle (2) square (3) pentagon (4) hexagon (5) arrow (6) heart (7) star (8) circle. Ground Truth 1, 2, 3, 4, 5, 7 Gemini 1, 4, 5, 7 GPT 1, 2, 3, 4, 5, 7 LLaV A 1, 2, 3, 4, 5, 6, 7, 8 Figure 15: Sample problems from A VSBench and responses from Gemini-1.5-pro, GPT-4o, and LLaV A-Next-13B, 10/11.20Question In the given picture, there are line segments which become the sides of some shape. What is the type of the shape in the picture? Choose one the following and answer in a single word: triangle, rectangle, parallelogram, pentagon, hexagon, circle, star, or heart. Ground Truth Hexagon Gemini Hexagon GPT Pentagon LLaV A Parallelogram Question In the image, there are figures of different sizes labeled A, B, C, X, Y , and Z. Which figure is the only one with a different aspect ratio? Ground Truth C Gemini Y GPT WLLaV A YQuestion Among the points A, B, C, and D in the given picture, identify the only point where the green curve and the orange line meet but are not tangent. Ground Truth C Gemini D GPT BLLaV A CQuestion Among A, B, C, and D in the given picture, which rectangle has the same texture as the purple rectangle? The answer should be a single letter. Ground Truth D Gemini D GPT DLLaV A AQuestion Lines labeled A, B, C, D, E, F, G, H, I, J, and K are drawn in various styles. List all the letters that represent lines with the same style as line K. Ground Truth D, F, H Gemini D, F, H GPT D, F, H LLaV A A, B, C, D, E, F, G, H, I, J Question There are two white papers in each option. Among a, b, c, d, e, f, and g, which option has the narrowest gap between the two white papers? Ground Truth g Gemini g GPT gLLaV A eQuestion Which color's road is the widest? Choose from: yellow, blue, green, pink, gray. Ground Truth Yellow Gemini Gray GPT Gray LLaV A Yellow Question In the image, which line is tangent to the orange curve at point O among the green, blue, black, and yellow line segments? State the color of the line segment in lowercase. Ground Truth Yellow Gemini Black GPT Black LLaV A Green Question In the image, there are 9 alphabet letters, labeled A to I, along with one red curve and one blue curve. List all the alphabet letters that are located between the red curve and the blue curve. Ground Truth D, F, I Gemini A, C GPT A, C, D LLaV A B, C, D, E, F, G, H Figure 16: Sample problems from A VSBench and responses from Gemini-1.5-pro, GPT-4o, and LLaV A-Next-13B, 11/11.21D Descriptions of the atomic visual skillsIn this section, we provide detailed definitions of the 36 atomic visual skills, together with a corre-sponding problem sample from A VSBench.1.Angle is a skill to understand how an angle is visually represented. Angle is the primaryfactor in how a polygon looks, how two or more objects are related, and many othersituations.2.Direction is an ability to recognize linear direction in an image. It is a fundamental skillin human vision, supporting representation of linearity and multi-dimensional relations.3.Boundary is a skill to understand the ends of objects or areas, and to detect visual repre-sentation of edges. The skill is used in distinguishing between distinct objects, or detectingboundaries between spaces.224.Cardinal is a field about counting distinct objects or specified concepts. Mastery ofcardinals implies measuring quantities or dealing with multiple objects. Especially, it shouldtake into account everything that satisfies given conditions, giving a difference from the skillof understanding Ordinals .5.Congruence is a skill of detecting objects with the exact same scale and shape, and under-standing their correspondence. Congruence is a primary component of visualizing varioussymmetries including translation, rotation or flipping. Congruence is distinguished fromother equivalence because it requires the objects to be equal at all levels of measurement.6.Convexity is a skill of understanding convexity of given shapes. The skill is also closelyrelated to detecting bumps or indentations and understanding convex and concave functions.237.Intersection is a mastery of detecting intersections of lines and curves. The skill isnecessary for interpreting relationships among 1-dimensional objects, and also amonghigher dimensional objects from 1-dimensional representations of their boundaries.8.Line is a skill to detect line segments and understand their roles in the image. This skill is afundamental unit in understanding various objects as polygons, graphs and diagrams.9.OCRis a skill to detect and read characters from visual inputs.2410.Ordinal is a skill to count certain objects or concepts in a given order. Mastery of this skillrequires not just counting but also focusing on specific portions and order of targets, givinga difference from Cardinal Understanding.11.Overlap skill is about correctly recognizing two or more objects sharing a common area.The skill is crucial in understanding overlapping shapes or complex shapes such as diagrams.12.Interior is a skill of distinguishing between interior and exterior of the target area. Thisskill is essential in perceiving different areas.2513.Relative Position is an ability to identify positional relationships between objects thatcannot be simply described such as inside, outside, or moved in a certain direction. Thisskill requires comprehension of complex relationships such as “positioned in between,” or“at the same side of.”14.Reflection is a field of recognizing linear symmetries. It requires detecting the axis ofreflection and induced correspondence of objects.15.Length is a skill to handle lengths of different objects. It involves comparing differentlengths and measuring distances of objects.2616.Rotation is an ability to identify changes in positions and angles induced by rotation, anddetecting the axis of rotation.17.Rotational Symmetry is a field of symmetric representations with respect to rotations.The skill involves understanding invariant geometric features under specified rotations.18.Symbol is a skill to detect symbols, understand their roles in the image, and combine themwith other visual information to attain the correct interpretation of the image.2719.Texture is a skill to understand textures of objects in the image. The skill is essentialas texture is another main component of visual representation of objects, and is used todistinguish different objects with same shapes, such as line styles.20.Width is a skill to understand thickness and width of objects or areas. The skill is essentialin measuring area or proportion of images together with length understanding.21.Adjacency is a skill to recognize when two or more objects are next to each other. Theskill is crucial in understanding features induced by close positions such as forming clusters.2822.Absolute Position is a skill to correctly understand where the objects are represented asa part of the visual input, independently of other objects. This involves recognizing objectsposited at corners of an image, or comparing heights of objects represented in the image.23.Area is a skill to handle 2-dimensional volumes, including comparing areas.24.Cardinal Direction is a skill to understand primary directions including up, down, left,right, or diagonals. This involves recognizing North, South, West, and East directions.2925.Orthogonality is a skill to identify orthogonal relations of objects in the image, includinga right angle formed by two lines. Understanding orthogonality is fundamental in geometry,design, and engineering.26.Tangency is a skill to detect tangent objects. This skill focuses on geometric representationof tangent curves or boundaries, and is different from understanding adjacency that ratherfocuses on positional information.27.Connectedness is a skill to identify connected components and detect links betweenobjects. This is crucial in understanding interactions and distinguishing distinct components.3028.Parallel is a skill to recognize parallel lines or curves. This is essential in identifyingfundamental objects like squares.29.Similarity is a skill to understand equivalence of geometric representations independentof scale. It also involves understanding of rescaling or comparing aspect ratios.30.Color is an ability to perceive, distinguish different colors, and understand the change insaturation and brightness.3131.Coordinate is a skill to recognize and acquire correct information upon coordinate systems.We provide and acquire information about different systems such as polar coordinates.32.Point is a fundamental capability to detect points and understand their roles in the image.It also involves understanding nodes in different graphs.33.Shape is a skill to understand details of shapes and compare different shapes independentlyof positions or tilts. It also involves identifying popular shapes such as triangles, rectangles,circles, and stars.3234.Curvature is an ability to measure and compare curvatures of different curves. Thisinvolves distinguishing between straight lines and wavy curves, and detecting bends in ashape.35.Sharpness is a skill to detect pointy parts of a shape. This is essential in understanding therepresentations of non-smooth objects such as points of a function that are not differentiable.36.Orientation is a skill to correctly distinguish clockwise and counterclockwise tendenciesinduced by not only rotations but also other movements that result in clockwise and coun-terclockwise directional change. The name originated from the mathematical definition oforientation in differential geometry.33E Model versionsWe evaluated closed-source models ChatGPT [ 40], Gemini [ 49] and open-weight models LLaV A-NeXT [ 29], LLaV A-OneVision [ 26], Math-LLaV A [ 47], Table-LLaV A [ 60], Phi-3.5-Vision [ 1],InternVL2 [ 10], DeepSeek-VL [ 31]. Tables 1 and 2 describe further details about the model sizesand versions. For closed-source models, we used the commercial APIs. All models’ temperatureswere set to 0.Table 1: Versions of closed-source modelsModel Name VersionChatGPT gpt-4o-2024-05-13gpt-4o-mini-2024-07-18 (For scoring)Gemini gemini-1.5-pro-001Table 2: Versions and model sizes of open-weight modelsVersion Model Size(s)LLaV A-NeXT 7B, 13BLLaV A-OneVision 7BMath-LLaV A 13BTable-LLaV A 7BPhi-3.5-Vision-Instruct 4BInternVL2 8BDeepSeek-VL 7BF Further details on evaluation processWe used GPT-4o mini to extract answers from model responses and to judge correctness. Few-shotin-context learning prompts we provided to GPT-4o mini as described in Tables 3 and 4. To verifythe reliability of this pipeline, we randomly selected 128problems from our dataset and comparedthe scores from GPT-4o mini with human annotations. Reassuringly, GPT-4o mini and the humanannotators agreed on the scoring of the 128problems. We attribute this high level of reliability, inpart, to the straightforward and clear design of our questions and answers.34Element PromptSystem prompt Imagine you are an intelligent teacher. Thoroughly read the providedinstruction to ensure a solid understanding of the information provided.Task description Please read the following example. Then extract the answer from themodel response and type it at the end of the prompt. If the questionrequires a full sentence with a correct word filled in, please provide theword only.{examples }Question: { question }Model response: { model response }Extracted Answer:Examples Question: There is a single rectangle with multiple color layers in theimage. What is the color of the boundary of the rectangle? The answershould be one of ‘red’, ‘yellow’, ‘green’, or ‘blue’.Model response: The color of the boundary of the circle is red.Extracted answer: redQuestion: How many line segments are in the image? Answer shouldbe a number.Model response: There are 4 dashed line segments in the image.Extracted answer: 4Question: Choose the word in parentheses that correctly describes theimage. Rewrite the sentence with the chosen word.In the image, shape (A/B) has sides curved inward. (Unit: $)Model response: In the image, shape B has sides curved inward.Extracted answer: BQuestion: Choose the phrase in parentheses that correctly describes theimage. Rewrite the sentence with the chosen phrase.In the given image, the green arrow (is longer than/has the same lengthas/is shorter than) the black arrow.Model response: In the given image, the green arrow is longer than theblack arrow.Extracted answer: is longer thanQuestion: In this image, choose the path which is a single line segmentbetween points A and B from the following options. Provide youranswer as a single uppercase letter: (A) the purple path (B) the blue path(C) the green path (D) the red pathModel response: BExtracted answer: BQuestion: Choose the most appropriate color to fill in the box markedwith ‘?’ in the image. The answer is one of ‘a’, ‘b’, ‘c’, or ‘d’.Model response: The correct color to fill in the box marked with ’?’ is(a) blue. The colors are following a gradient pattern from red, to a morepurple hue, and finally to blue. The logical next color in the sequencewould be blue, as it extends the progression seen in the previous squares.Extracted answer: aQuestion: There is a book in the image. What is the color of the book inthe image? Choose answer from the number of the option and give youranswer in “1”, “2”, “3”, or “4”. (1) red (2) yellow (3) blue (4) greenModel response: The color of the guitar in the image is (2) yellow.Extracted answer: 2Table 3: System prompt, task description, and examples used to prompt GPT-4o mini for answer extraction.35Element PromptSystem prompt Imagine you are an intelligent teacher. Thoroughly read the providedinstruction to ensure a solid understanding of the information provided.Task description The [Standard Answer] is the correct answer to the question, and the[Model Answer] is the answer generated by a model for that question.Thoroughly read both the [Standard Answer] and the [Model Answer].Assess the consistency of the information provided in these tworesponses.Although you do not know the specific question, you can still assess theconsistency between the two responses by checking for logical conflictsif both responses are assumed to be correct.If the [Model Answer] is consistent with the [Standard Answer], pleaseanswer ‘1’. Otherwise, answer ‘0’.When the [Standard Answer] is provided as a list, answer ‘1’ if the[Model Answer] is consistent with at least one item on the list.Otherwise, answer ‘0’.Below are the examples of the correct consistency judgment.{examples }Now, below are two answers to a question. What is your judgment?[Standard Answer] { standard answer }[Model Answer] { extracted answer }Judgment:Examples [Standard Answer] a[Model Answer] aJudgment: 1[Standard Answer] 1[Model Answer] 4Judgment: 0[Standard Answer] circle[Model Answer] the circleJudgment: 1[Standard Answer] 4[Model Answer] shape 4Judgment: 1[Standard Answer] line segment B and C[Model Answer] B, CJudgment: 1[Standard Answer] ac[Model Answer] caJudgment: 0[Standard Answer] 2[Model Answer] twoJudgment: 1[Standard Answer] three[Model Answer] 3Judgment: 1[Standard Answer] [‘ac’, ‘ca’][Model Answer] caJudgment: 1Table 4: System prompt, task description, and examples used to prompt GPT-4o mini for judgment.36G Results on 36 skills and Further AnalysisIn this section, we provide full details of Section 3 and further analysis of Findings 3.Table 5 presents our full evaluation results on A VSBench. “Random Chance” represents the expectedaccuracy of randomly choosing multiple-choice questions and scoring 0 from all non-multiple-choice questions. The subcolumn named “overall” indicates the accuracy across all problems of itscorresponding skill. The column named “ TOTAL " describes the overall accuracy across all problemsin A VSBench, involving all skills.As mentioned in Findings 3, CoT did not provide meaningful performance gains when used withGPT-4o and Gemini-1.5-pro. Gemini-1.5-pro had only a 0.1%gain from applying CoT, and GPT-4ohad a 1%gain. However, CoT worsened performances of skills including OCR,Length , and Symbol .As in the case of Figure 17, by inspecting the responses of GPT-4o with and without CoT prompting,we observe that the additional reasoning steps afforded by CoT are not helpful in comprehendingvisual inputs.Ground Truth bGPT-4o The shape labeled \" z\" is different from the others. While all the other shapes (a, b, c, x, y) are arranged horizontally, the shape labeled \"z\" is arranged vertically. GPT-4o + CoT Let's analyze the shapes step by step:\n\n1. **Shape x**: Consists of three circles in a horizontal line.\n2. **Shape y**: Consists of three circles in a horizontal line.\n3. **Shape a**: Consists of three circles in a horizontal line.\n4. **Shape c**: Consists of three circles in a horizontal line.\n5. **Shape b**: Consists of three circles in a horizontal line.\n6. **Shape z**: Consists of three circles in a vertical line.\n\nAll shapes except for shape z consist of three circles in a horizontal line. Shape z is the only one with circles arranged vertically.\n\nTherefore, the shape that is different from the others is ** shape z **.Question In the image, there are different shapes labeled as a, b, c, x, y, and z. Guess which shape is different from the others. Figure 17: Responses of GPT-4o on a problem from A VSBench with and without chain-of-thought (CoT) prompt.37TOTAL OCR Absolute position Adjacency Angle Area Boundaryeasy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overallRandom Chance 22.4 19.1 11.7 19.2 4.1 6.9 0.8 4.6 20.8 19.0 12.5 18.2 4.7 10.6 2.6 7.0 25.1 20.2 16.8 22.8 28.7 29.8 13.9 26.1 22.6 17.5 15.4 19.1Closed Source ModelsGPT-4o [40] 75.4 61.6 32.3 62.5 97.3 96.1 47.1 87.4 100.0 96.3 30.0 81.9 79.6 71.7 29.4 68.9 56.8 37.0 17.6 47.3 90.8 68.1 42.9 71.6 95.0 66.7 41.2 72.3GPT-4o (+CoT) 75.5 64.6 30.9 63.5 94.5 96.1 44.1 85.8 100.0 96.3 35.0 83.1 83.7 81.1 29.4 74.8 63.1 24.1 17.6 47.3 87.7 79.7 54.3 77.5 98.3 70.4 44.1 75.7Gemini-1.5-pro [49] 71.8 57.4 26.9 58.3 93.2 90.8 32.4 80.9 100.0 96.3 20.0 79.5 71.4 79.2 23.5 68.1 60.4 18.5 23.5 44.5 93.8 58.0 22.9 64.5 91.7 81.5 32.4 74.3Gemini-1.5-pro (+CoT) 70.8 58.6 27.0 58.4 90.4 92.1 29.4 79.8 100.0 85.2 25.0 77.1 81.6 79.2 11.8 70.6 61.3 27.8 23.5 47.8 92.3 59.4 40.0 68.0 95.0 79.6 26.5 73.6Open Source ModelsLLaV A-NeXT (7B) [29] 36.4 23.8 15.0 27.6 68.5 46.1 8.8 48.1 66.7 59.3 15.0 51.8 16.3 11.3 5.9 12.6 28.8 20.4 17.6 25.3 50.8 33.3 5.7 34.3 31.7 44.4 26.5 35.1LLaV A-NeXT (13B) 41.1 28.6 16.4 31.8 79.5 53.9 8.8 55.7 75.0 74.1 10.0 59.0 28.6 32.1 11.8 27.7 22.5 27.8 11.8 23.1 73.8 43.5 11.4 48.5 35.0 40.7 14.7 32.4LLaV A-OneVision (7B) [26] 51.0 37.8 18.1 40.0 79.5 69.7 23.5 65.0 94.4 85.2 15.0 72.3 36.7 20.8 5.9 25.2 28.8 22.2 17.6 25.8 78.5 40.6 20.0 50.9 65.0 42.6 8.8 43.9Table-LLaV A (7B) [60] 32.5 24.3 13.5 25.9 53.4 26.3 5.9 33.3 47.2 63.0 20.0 45.8 20.4 11.3 5.9 14.3 22.5 25.9 11.8 22.5 41.5 26.1 11.4 29.0 25.0 61.1 26.5 38.5Math-LLaV A (13B) [47] 37.7 27.1 15.7 29.6 54.8 38.2 8.8 39.3 50.0 70.4 15.0 48.2 16.3 24.5 17.6 20.2 17.1 18.5 11.8 17.0 67.7 33.3 17.1 43.2 33.3 38.9 11.8 30.4Phi-3.5-Vision-Instruct (4B) [1] 49.0 34.8 16.5 37.7 78.1 51.3 8.8 54.1 86.1 70.4 10.0 62.7 38.8 47.2 11.8 38.7 22.5 33.3 23.5 25.8 86.2 39.1 25.7 54.4 51.7 38.9 11.8 37.8InternVL2 (8B) [10] 43.0 31.6 13.5 33.3 65.8 52.6 8.8 49.7 30.6 63.0 15.0 37.3 32.7 30.2 17.6 29.4 28.8 18.5 0.0 23.1 60.0 30.4 11.4 37.9 45.0 40.7 0.0 33.1DeepSeek-VL (7B) [31] 45.9 30.2 15.1 34.3 69.9 50.0 17.6 51.9 91.7 88.9 25.0 74.7 20.4 22.6 5.9 19.3 32.4 22.2 17.6 28.0 66.2 42.0 14.3 45.6 56.7 42.6 0.0 38.5Cardinal Cardinal Direction Color Congruence Connectedness Convexity Coordinateeasy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall5.4 3.1 0.0 3.3 27.6 14.9 13.5 20.6 19.9 18.8 13.3 18.6 25.1 25.2 15.5 23.1 8.1 11.5 8.0 9.7 37.7 22.7 17.8 30.6 16.4 17.6 9.3 16.3Closed Source Models84.4 66.7 22.7 64.0 86.8 93.8 25.0 80.5 90.4 85.7 46.9 82.7 81.5 26.9 31.8 49.8 71.2 67.1 25.0 60.7 82.3 50.0 69.2 70.6 71.7 52.9 21.4 56.988.3 73.1 31.8 70.1 86.8 93.8 41.7 82.9 91.3 84.5 46.9 82.7 79.0 26.9 20.5 46.3 53.8 48.6 28.6 46.7 85.5 55.9 38.5 70.6 67.9 65.7 21.4 62.084.4 62.4 36.4 65.0 92.1 78.1 41.7 79.3 87.8 79.8 40.6 78.4 67.9 34.6 25.0 45.8 59.6 55.7 21.4 50.7 66.1 41.2 53.8 56.9 77.4 57.1 21.4 61.383.1 59.1 25.0 60.7 92.1 75.0 33.3 76.8 85.2 81.0 56.2 79.7 66.7 34.6 18.2 43.8 69.2 60.0 25.0 56.7 61.3 44.1 53.8 55.0 83.0 61.4 7.1 64.2Open Source Models54.5 23.7 6.8 31.3 52.6 28.1 25.0 39.0 58.3 32.1 12.5 42.4 28.4 21.8 18.2 23.6 36.5 5.7 14.3 18.0 48.4 32.4 30.8 41.3 28.3 20.0 14.3 22.666.2 25.8 9.1 36.9 57.9 37.5 50.0 48.8 64.3 41.7 21.9 50.2 29.6 16.7 11.4 20.7 50.0 21.4 21.4 31.3 50.0 35.3 23.1 42.2 22.6 24.3 14.3 22.670.1 39.8 6.8 43.9 60.5 50.0 25.0 51.2 81.7 61.9 18.8 65.8 30.9 25.6 9.1 24.1 50.0 28.6 17.9 34.0 54.8 35.3 38.5 46.8 37.7 24.3 14.3 28.541.6 15.1 6.8 22.9 44.7 25.0 8.3 31.7 49.6 28.6 18.8 37.7 17.3 29.5 11.4 20.7 32.7 11.4 25.0 21.3 41.9 35.3 15.4 36.7 15.1 21.4 0.0 16.854.5 29.0 6.8 33.6 52.6 31.2 25.0 40.2 64.3 27.4 15.6 44.2 25.9 19.2 11.4 20.2 34.6 8.6 17.9 19.3 51.6 23.5 15.4 38.5 22.6 25.7 21.4 24.162.3 34.4 9.1 39.3 73.7 53.1 0.0 54.9 75.7 57.1 15.6 60.6 29.6 24.4 18.2 25.1 25.0 34.3 21.4 28.7 43.5 44.1 46.2 44.0 28.3 34.3 21.4 30.750.6 18.3 6.8 27.6 50.0 40.6 25.0 42.7 65.2 39.3 12.5 48.5 29.6 16.7 27.3 24.1 36.5 22.9 21.4 27.3 54.8 23.5 7.7 39.4 24.5 35.7 14.3 29.266.2 31.2 13.6 40.2 63.2 37.5 33.3 48.8 68.7 38.1 12.5 49.8 30.9 15.4 13.6 21.2 28.8 31.4 25.0 29.3 51.6 32.4 30.8 43.1 18.9 27.1 0.0 21.2Curvature Direction Interior Intersection Length Line Overlapeasy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall26.2 27.3 16.7 24.2 23.3 17.2 11.8 17.9 23.7 15.0 17.9 19.2 28.8 24.9 8.2 24.6 31.4 21.5 17.1 24.5 20.5 12.9 5.3 16.6 24.4 26.9 2.3 21.6Closed Source Models84.6 65.5 67.6 71.8 86.8 51.3 16.7 55.1 93.9 79.7 33.3 80.4 64.3 50.0 25.0 52.4 72.9 69.4 40.0 64.8 76.4 60.5 90.0 72.0 92.9 71.2 29.2 72.789.7 65.5 64.7 72.5 86.8 60.5 12.5 59.4 92.4 84.4 44.4 83.1 61.9 51.4 33.3 53.2 70.8 65.3 36.0 61.5 75.0 79.1 80.0 76.8 89.3 76.3 25.0 72.782.1 41.4 38.2 52.7 81.6 51.3 16.7 53.6 92.4 81.2 33.3 80.4 61.9 50.0 25.0 51.6 66.7 65.3 44.0 61.5 80.6 55.8 70.0 71.2 83.9 71.2 12.5 66.284.6 39.7 38.2 52.7 89.5 50.0 8.3 53.6 95.5 79.7 33.3 81.1 64.3 54.2 25.0 54.8 64.6 57.1 32.0 54.9 79.2 79.1 80.0 79.2 83.9 69.5 25.0 67.6Open Source Models12.8 29.3 20.6 22.1 39.5 26.3 12.5 27.5 53.0 35.9 11.1 40.5 38.1 26.4 16.7 29.4 45.8 36.7 24.0 37.7 26.4 18.6 20.0 23.2 10.7 20.3 12.5 15.123.1 29.3 17.6 24.4 39.5 14.5 29.2 23.9 48.5 37.5 22.2 40.5 19.0 19.4 25.0 19.8 45.8 40.8 20.0 38.5 37.5 37.2 10.0 35.2 35.7 33.9 8.3 30.266.7 34.5 17.6 39.7 68.4 31.6 29.2 41.3 71.2 56.2 38.9 60.8 33.3 20.8 0.0 23.0 52.1 49.0 16.0 43.4 61.1 37.2 50.0 52.0 41.1 33.9 12.5 33.117.9 22.4 14.7 19.1 28.9 15.8 12.5 18.8 48.5 32.8 33.3 39.9 26.2 23.6 25.0 24.6 45.8 32.7 12.0 33.6 18.1 11.6 10.0 15.2 12.5 13.6 4.2 11.520.5 25.9 20.6 22.9 36.8 25.0 12.5 26.1 43.9 31.2 16.7 35.1 31.0 20.8 16.7 23.8 45.8 42.9 20.0 39.3 29.2 18.6 10.0 24.0 44.6 23.7 16.7 30.961.5 29.3 17.6 35.9 55.3 32.9 12.5 35.5 59.1 35.9 38.9 46.6 45.2 23.6 16.7 30.2 54.2 55.1 16.0 46.7 54.2 32.6 30.0 44.8 41.1 28.8 8.3 30.243.6 24.1 8.8 26.0 50.0 26.3 8.3 29.7 68.2 43.8 11.1 50.7 33.3 25.0 16.7 27.0 45.8 51.0 8.0 40.2 40.3 30.2 30.0 36.0 46.4 39.0 16.7 38.156.4 36.2 17.6 37.4 60.5 22.4 16.7 31.9 59.1 46.9 16.7 48.6 38.1 31.9 16.7 32.5 39.6 40.8 8.0 33.6 50.0 20.9 20.0 37.6 48.2 16.9 12.5 28.8Ordinal Orientation Orthogonality Parallel Point Reflection Relative Position Rotationeasy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall easy medium hard overall23.3 16.5 2.7 15.4 27.6 33.0 26.0 29.2 24.4 13.5 13.7 18.3 24.3 22.0 18.7 22.4 24.7 20.3 3.9 18.8 26.7 25.2 17.6 24.8 28.3 26.4 18.0 26.3 32.9 27.9 15.5 24.8Closed Source Models89.1 78.2 30.3 70.1 42.4 42.9 30.0 40.5 60.0 48.4 21.4 48.8 41.1 34.9 16.1 34.0 92.5 87.1 25.0 75.9 53.2 37.1 20.0 41.2 83.3 50.0 40.0 65.5 37.1 21.1 18.0 23.997.8 85.5 30.3 76.1 50.8 50.0 40.0 48.8 62.9 51.6 21.4 51.2 50.7 51.2 6.5 41.5 96.2 96.8 33.3 82.4 39.0 39.2 20.0 36.7 88.1 46.9 30.0 65.5 22.9 31.6 10.0 21.878.3 60.0 12.1 54.5 50.8 23.8 30.0 38.0 60.0 38.7 35.7 47.5 34.2 53.5 3.2 33.3 96.2 96.8 50.0 86.1 46.8 20.6 20.0 30.7 61.9 31.2 30.0 46.4 40.0 17.5 16.0 22.589.1 85.5 21.2 70.9 39.0 38.1 40.0 38.8 62.9 29.0 21.4 42.5 27.4 67.4 19.4 37.4 96.2 90.3 54.2 85.2 35.1 23.7 16.0 27.1 57.1 34.4 30.0 45.2 31.4 10.5 10.0 15.5Open Source Models28.3 12.7 9.1 17.2 27.1 23.8 35.0 27.3 28.6 12.9 7.1 18.8 11.0 18.6 12.9 13.6 45.3 22.6 0.0 28.7 22.1 7.2 32.0 16.1 31.0 28.1 0.0 26.2 22.9 17.5 16.0 18.343.5 18.2 15.2 26.1 28.8 35.7 50.0 34.7 17.1 16.1 28.6 18.8 12.3 9.3 19.4 12.9 41.5 35.5 4.2 31.5 28.6 20.6 16.0 23.1 47.6 31.2 0.0 35.7 8.6 14.0 20.0 14.852.2 21.8 12.1 29.9 33.9 26.2 35.0 31.4 37.1 12.9 7.1 22.5 5.5 18.6 16.1 11.6 47.2 54.8 4.2 39.8 35.1 38.1 16.0 34.2 50.0 40.6 20.0 42.9 40.0 22.8 24.0 27.534.8 25.5 3.0 23.1 32.2 26.2 30.0 29.8 31.4 29.0 0.0 25.0 12.3 20.9 16.1 15.6 34.0 32.3 8.3 27.8 27.3 14.4 4.0 18.1 38.1 18.8 10.0 27.4 40.0 24.6 22.0 27.530.4 16.4 3.0 17.9 37.3 31.0 50.0 37.2 25.7 29.0 0.0 22.5 21.9 20.9 25.8 22.4 45.3 38.7 0.0 33.3 32.5 18.6 16.0 23.6 38.1 28.1 20.0 32.1 51.4 33.3 18.0 32.454.3 38.2 21.2 39.6 32.2 42.9 40.0 37.2 28.6 12.9 28.6 22.5 27.4 30.2 16.1 25.9 43.4 45.2 0.0 34.3 44.2 29.9 4.0 32.2 42.9 40.6 30.0 40.5 54.3 3.5 20.0 21.841.3 40.0 6.1 32.1 28.8 35.7 40.0 33.1 48.6 6.5 14.3 26.2 17.8 30.2 16.1 21.1 49.1 51.6 0.0 38.9 35.1 29.9 20.0 30.7 40.5 25.0 10.0 31.0 25.7 26.3 10.0 20.428.3 25.5 6.1 21.6 32.2 19.0 45.0 29.8 25.7 22.6 21.4 23.8 2.7 16.3 19.4 10.2 52.8 71.0 4.2 47.2 36.4 13.4 16.0 22.6 40.5 43.8 10.0 38.1 11.4 15.8 12.0 13.4Rotational Symmetry Shape Sharpness Similarity Symbol Tangency Texture Widtheasy medium hard total easy medium hard total easy medium hard total easy medium hard total easy medium hard total easy medium hard total easy medium hard total easy medium hard total28.0 24.3 15.1 24.5 11.7 6.7 17.9 9.7 21.7 17.8 12.7 18.6 24.6 18.7 13.3 19.8 21.0 18.6 3.8 17.1 27.7 24.2 20.8 25.9 23.2 15.8 2.3 18.1 25.0 22.3 13.5 21.9Closed Source Models61.0 69.0 21.4 61.1 98.1 83.8 50.0 85.7 85.5 83.8 25.9 72.2 86.0 47.4 27.3 57.4 76.0 78.6 14.3 66.9 24.7 24.6 26.7 24.8 83.7 71.4 33.3 73.1 72.2 60.0 39.1 61.168.3 77.5 7.1 66.7 98.1 83.8 64.3 87.1 88.7 75.7 18.5 69.8 69.8 56.1 9.1 52.5 74.0 78.6 0.0 63.8 30.1 21.1 13.3 25.5 81.4 71.4 42.9 73.1 75.9 60.0 39.1 62.446.3 64.8 7.1 52.4 98.1 74.3 50.0 80.7 83.9 83.8 18.5 69.8 67.4 45.6 9.1 46.7 76.0 80.4 0.0 65.4 36.6 36.8 40.0 37.0 70.9 69.4 47.6 67.3 57.4 51.2 21.7 49.043.9 62.0 14.3 50.8 96.2 78.4 57.1 82.9 77.4 73.0 25.9 65.1 62.8 43.9 13.6 45.1 80.0 83.9 0.0 68.5 31.2 28.1 13.3 28.5 70.9 69.4 42.9 66.7 55.6 48.8 21.7 47.1Open Source Models34.1 12.7 7.1 19.0 57.7 23.0 28.6 36.4 33.9 16.2 7.4 23.0 20.9 10.5 13.6 14.8 40.0 32.1 4.8 30.7 29.0 19.3 20.0 24.8 34.9 18.4 4.8 25.6 31.5 27.5 34.8 29.931.7 2.8 14.3 13.5 67.3 45.9 21.4 51.4 46.8 24.3 7.4 31.7 37.2 14.0 4.5 20.5 38.0 35.7 9.5 32.3 24.7 14.0 26.7 21.2 38.4 32.7 0.0 31.4 46.3 26.2 30.4 33.856.1 32.4 7.1 37.3 92.3 66.2 42.9 73.6 53.2 37.8 25.9 42.9 23.3 8.8 36.4 18.9 64.0 76.8 14.3 61.4 28.0 10.5 20.0 21.2 36.0 40.8 4.8 33.3 46.3 41.2 21.7 40.148.8 14.1 7.1 24.6 38.5 25.7 21.4 30.0 32.3 21.6 14.8 25.4 16.3 14.0 9.1 13.9 38.0 42.9 4.8 34.6 33.3 15.8 26.7 26.7 31.4 18.4 0.0 23.1 33.3 35.0 21.7 32.531.7 8.5 35.7 19.0 69.2 37.8 35.7 49.3 41.9 21.6 29.6 33.3 9.3 12.3 4.5 9.8 26.0 51.8 4.8 33.9 23.7 24.6 26.7 24.2 31.4 10.2 9.5 21.8 35.2 43.8 8.7 35.746.3 12.7 0.0 22.2 82.7 45.9 35.7 58.6 46.8 27.0 3.7 31.7 37.2 21.1 18.2 26.2 42.0 55.4 9.5 42.5 29.0 28.1 20.0 27.9 55.8 12.2 4.8 35.3 40.7 31.2 17.4 32.565.9 12.7 28.6 31.7 80.8 50.0 28.6 59.3 50.0 24.3 18.5 35.7 16.3 12.3 18.2 14.8 36.0 48.2 0.0 35.4 34.4 21.1 20.0 28.5 37.2 38.8 4.8 33.3 29.6 32.5 17.4 29.341.5 4.2 28.6 19.0 80.8 45.9 42.9 58.6 75.8 29.7 3.7 46.8 32.6 21.1 18.2 24.6 52.0 53.6 9.5 45.7 30.1 12.3 26.7 23.6 41.9 20.4 0.0 29.5 44.4 30.0 13.0 32.5Table 5: Full details of evaluation results on A VSBench. Each value represents the accuracy of the model of its row, on problems with thedifficulty and for the skill of its column.38 |
mxX8WdPCx9 | On Memorization of Large Language Models inLogical ReasoningChulin Xie‡Yangsibo Huang†,¶Chiyuan Zhang†Da Yu†Xinyun Chen†Bill Yuchen Lin§Bo Li‡Badih Ghazi†Ravi Kumar††Google‡University of Illinois Urbana-Champaign¶Princeton University§Allen Institute for AIAbstractLarge language models (LLMs) show good performance on some complicatedreasoning tasks, yet could also make the most basic reasoning mistakes. Thiscontrasting behavior is puzzling when it comes to understanding the mechanismsbehind LLMs’ reasoning capabilities. One hypothesis is that the increasingly highand nearly saturated performance on common reasoning benchmarks could bedue to the memorization of similar benchmark problems accidentally leaked intothe training data. In this paper, we systematically investigate this problem witha measurement of memorization in reasoning tasks inspired by human behaviors,and a dynamically generated logical reasoning benchmark based on Knights andKnaves puzzles. We found that LLMs could interpolate the training puzzles(achieving ∼100% accuracy) after fine-tuning, yet fail when those puzzles areslightly perturbed, suggesting that the models heavily rely on memorization tosolve those training puzzles. On the other hand, we show that LLMs learn toreason while interpolating the training set. At higher level of memorization, themodel not only solves more unseen test puzzles, but also solves them relativelyrobustly under perturbation. This phenomenon suggests that LLMs exhibit acomplex interplay between memorization and genuine reasoning abilities, andreveals an interesting direction for future research. Our code and data are availableathttps://memkklogic.github.io/ .1 IntroductionModern Large Language Models (LLMs) show impressive reasoning capabilities that allow themto solve a wide range of problems including commonsense reasoning and mathematical reasoning.In the meantime, LLMs also make mistakes on some of the most basic problems (e.g., comparingwhich number is bigger – 13.11 or 13.8 [ 17], and counting the number of sisters that Alice’s brotherhave [21]).Their contrast of both superhuman reasoning capabilities and dumb mistakes is puzzling when itcomes to understanding how exactly LLMs perform reasoning tasks. This question is important bothscientifically and practically: understanding how LLMs reason could shed light on our understandingof learning and generalization behaviors of LLMs; and is crucial for real-world applications whererobust reasoning is required to mitigate safety and trustworthiness concerns [25, 15, 26].One hypothesis is that LLMs could be relying on memorization when solving those reasoning tasks,especially when measured by popular benchmark datasets that could be accidentally leaked intothe massive internet-crawled pre-training datasets. Previous work [ 5,24] show that LLMs couldindeed memorize the training data. However, most of those studies focus on analyzing memorizationfrom the perspective of privacy [6] or copyright [13, 27] concerns. Other papers focus on designingdynamic benchmarks [ 32,23,22,11] or alternative evaluation protocols [ 30,31,28,23] that couldmitigate the issue of benchmark saturation potentially due to memorization. In this paper, we takeNeurIPS 2024 Workshop on Mathematical Reasoning and AI.a direct approach to quantify memorization in reasoning tasks and analyze the interplay betweenmemorization and reasoning. Specifically, we summarize our contributions below:•To quantify memorization in reasoning tasks, we define a memorization metric based on the notionsof interpolation and the performance inconsistency under local perturbation that are inspired byhuman behaviors.•To facilitate the measurement, we propose a new logical reasoning benchmark based on the Knightsand Knaves (K&K) [ 12] puzzles, that support the automatic generation of new puzzles withdifferent difficulty levels and local perturbations of existing puzzles.•We show that K&K puzzles are challenging and only the most advanced LLMs could consistentlysolve them. The generally low accuracy observed across most off-the-shelf models indicates thatK&K puzzles are likely uncommon in internet-based training data. However, our analysis suggeststhat certain models exhibit signs of memorization to solve the puzzles.•By fine-tuning on K&K samples, we confirm that modern LLMs are capable of memorizing a largecollection of puzzles and their solutions when seen during training. Interestingly, when measuringaccuracy on the unseen test puzzles, we found that the models’ reasoning capabilities grow withthe amount of memorization as the models interpolate [3,20,2,1] the training set1. Additionally,these enhanced reasoning abilities transfer across different levels of puzzle difficulty.2 How to Measure Memorization in Reasoning Tasks2.1 Memorization Metrics for Reasoning TasksMemorization of LLMs has been studied in various contexts such as privacy, copyright [ 6,13,27], andsolving knowledge intensive tasks [ 3,10]. In this paper, we are specifically interested in measuringthe level of memorization when solving reasoning tasks. This kind of behaviors can be observedon humans. For example, when preparing for an exam / interview, people may not be able to fullydigest the underlying principles due to various reasons or constraints. But when (luckily) facing thesame problem one has prepared for, they would still be able to solve it. The key characteristics of thistype of memorization are: (A) high accuracy on observed problems; (B) low accuracy on unseen butsimilar problems, due to the lack of deep understanding.Based on this intuition, for a dataset Dof reasoning puzzles, we measure the following two quantities:1.To measure (A), we use the accuracy Acc(f;D)to measure the percentage of the puzzles inDthatfcan solve. We are especially interested in measuring on the set of observed puzzles ,i.e. the training set, Acc(f;Tr). We say finterpolates [3,20,2,1] the training puzzles ifAcc(f;Tr)≈100% .2.To measure (B), we measure a consistency ratio CR(f;D)between the number of consistentlysolved puzzles after some local perturbations , and the number of solved puzzles (without pertur-bation). We are interested in local perturbations that makes minimal change to the puzzle andmaintain the difficulty level (to be specified in § 2.2).We combine the two factors to define a Local Inconsistency based Memorization Score :LiMem (f;D) =Acc(f;D)·(1−CR(f;D)).When there is no ambiguity, we simply call it the memorization score. LiMem (f;D)∈[0,100]% anda larger score provide a stronger evidence of memorization. In our empirical study, we say fsolvesDvia memorization ifLiMem (f;D)>10%; otherwise we say fsolvesDvia reasoning . Specifically,a high LiMem (f;Tr)matches the characteristic behavior of human memorizing observed puzzles, andin this case we say fmemorized the training puzzles . Furthermore, we also measure LiMem (f;Tst)on test examples, to study if the generalization accuracy is due to reasoning or memorization.In order to effectively measure memorization score LiMem (f;D), we need a principled way to (1)perform local perturbation that changes the problem while maintaining its difficulty level; (2) computethe new answer after perturbation, which should be different from the original answer. Towards thisgoal, we design and implement a functional dataset based on the Knights and Knaves puzzles [12].1Interpolating is a term in learning theory to indicate fitting 100% accuracy on training set.22.2 Knights and Knaves Logical Reasoning BenchmarkKnights and Knaves (K&K) is a type of logic puzzle where some characters can only answer questionstruthfully, and others only falsely. The goal is to infer each person’s identity. For example: A veryspecial island is inhabited only by knights and knaves. Knights always tell the truth, and knavesalways lie. You meet 2 inhabitants: Samuel, and Isabella. Samuel told you that Isabella is a knight.Isabella said that Samuel is a knave and Isabella is a knight. So who is a knight and who is a knave?The ground-truth answer is that (1) Samuel is a knave and (2) Isabella is a knave.Based on the K&K puzzle, we design a dynamic benchmark that supports generating new problemsand perturbing existing problems. Our library automatically solves the K&K puzzles and generatessolutions for evaluation and training. Specifically, our benchmark consists of two components:The abstract problem sampler generates random K&K puzzles in an abstract format (see details in§ B). Specifically, it takes as input the problem specification (N, D, W )that determines the difficultylevel. It then generates a problem with Npersons, and for each person, a statement that consists of arandom tree of maximum width Wand depth D. The leaf nodes can be a claim that a specific personis lying (i.e., knaive) or telling the truth (i.e., knight) , whereas the branching node can be and,or,not,if, and if-and-only-if . The problem sampler also has two subcomponents: the Solver finds all possiblesolutions to a given puzzle, which is used to guarantee that we generate only problems with a uniquesolution; the Perturber which, given a problem, generates a locally perturbed version, by replacing aleaf node or an entire statement of a random person’s statement with a different one. Perturber onlykeeps the perturbed problems that have a different solution than the original problem. Comparisonbetween the original sample and the leaf/statement-perturbed samples is provided in Tab. 1.The natural language generator takes an abstract K&K problem and formats it in natural language.The formatting is template-based, but we support a variety of different (common and uncommon)person names, role names (e.g., knight & knaves, angels & devils), and different styles of makingeach person’s claim.We create disjoint sets of ntraintraining problems and ntesttesting problems for each N-person task.Here, ntest= 100 ,ntrain= 1,000forN > 2, and ntrain= 200 for2-person tasks due to limitedcombinations. Then, we generate perturbed versions for each problem.3 Quantifying LLM Memorization of Reasoning Tasks3.1 Off-the-shelf Models2 3 4 5 6 7 8# pplLlama-3-8BGemma-2-9bPhi-3-miniPhi-3-mediumNuminaMath-7B-CoTDeepseek-Math-7bGPT-4o-miniGPT-4oClaude-3.5-sonnet0.28 0.11 0.04 0.02 0.04 0.00 0.000.30 0.16 0.09 0.06 0.05 0.02 0.050.36 0.25 0.15 0.12 0.03 0.07 0.040.44 0.34 0.16 0.14 0.04 0.07 0.030.28 0.13 0.12 0.05 0.01 0.00 0.000.35 0.21 0.08 0.06 0.02 0.00 0.000.63 0.42 0.34 0.17 0.09 0.10 0.010.68 0.57 0.49 0.32 0.23 0.21 0.110.70 0.63 0.51 0.31 0.22 0.10 0.06Acc(f;Tst)0.00.10.20.30.40.50.60.7Figure 1: Under 0-shot direct prompting, testaccuracy of off-the-shelf models drops signifi-cantly with increasing puzzle complexity.We use the K&K benchmark (§ 2.2) to evaluate 8 mod-els that are shown to perform competitively on com-mon reasoning benchmarks. We utilize zero-shot directprompting with task-specific instructions for open-endedquestion-answering. To assess the correctness, we im-plement keyword matching: a response is consideredcorrect if each person’s ground truth identity appearsin the conclusion part of the model’s output (see moredetails in § C). As shown in Fig. 1, our K&K benchmarkposes a challenging logical reasoning task for all themodels. Even for the easiest problems involving only2persons, the best models still achieve <70% accu-racy. And the performance drops significantly as thecomplexity increases (the best accuracy is only 11% for8-person problems).To quantify LLMs’ memorization of the logical reasoning task, we employ the metrics proposed inthe previous section. Since the training data for the off-the-shelf models is unknown, we will delaythe measurement of the interpolation to fine-tuned models in § 3.2 and focus on the memorizationscore under local perturbation LiMem (f;Tst)here. As shown in Fig. 1, the test accuracy is relativelylow for most cases, suggesting K&K-related problems are probably rare in the Internet and in thetraining sets of these models. However, we also note that some specific models have large gaps underlocal perturbation as shown in Fig. 5, such as GPT4o and Claude-3.5-Sonnet on 3-person problemsunder logical statement perturbation, indicating signs of memorization when solving these puzzles.3cleanstatementleafperturb type0.60.81.0consistent accuracy3-ppl FT GPT4o-minicleanstatementleafperturb type0.40.60.85-ppl FT GPT4o-minisplittraintestnepoch35clean leafstatementperturb type0.40.60.8consistent accuracy3-ppl FT Llama3-8Bclean leafstatementperturb type0.20.40.65-ppl FT Llama3-8Bsplittraintestnepoch2050Figure 2: Accuracy of finetuned models drops underdifferent perturbations. The drops on test set can besmaller than training set.0.0 0.5 1.0Acc(f;Tr)0.00.5Acc(f;Tst)0.2 0.4LiMem(f;Tr) leaf pert.0.00.5GPT4o-mini3-ppl FT 5-ppl FT 8-ppl FT0.0 0.5 1.0Acc(f;Tr)0.000.250.50Acc(f;Tst)0.0 0.2 0.4LiMem(f;Tr) leaf pert.0.000.250.50Llama3-8B3-ppl FT 5-ppl FT 8-ppl FTFigure 3: Test acc of FTed models increase with trainacc, despite that the memorization becomes strongerwith larger LiMem (f;Tr)under leaf perturbation.3.2 Fine-tuned ModelsHere, we study the model’s memorization capacity when directly fine-tuned on K&K problems. Wetake Llama3-8B and GPT4o-mini and run supervised fine-tuning (SFT) on a set of K&K trainingproblems disjoint from the test set. Specifically, during SFT, the model observes the concatenation ofthe question and the answer for each problem, but the loss is only computed on the answer part.LLMs interpolate K&K training problems . We fine-tune 50 epochs for Llama3-8B and GPT4o-mini for 5 epochs (due to budget constraints) via the OpenAI Finetune API (see details in § C). FromFig. 2 (clean), we observe high Acc(f;Tr), and GPT4o-mini fine-tuned on 3-person puzzles reachinterpolation ( Acc(f;Tr) = 100% ).Interpolating LLMs have large LiMem (f;Tr). In Fig. 2, we report the consistent accuracyAcc(f;Tr)·CRunder perturbation, defined as the ratio of samples correctly solved in both theiroriginal and perturbed forms. We observe significant gaps under math problem perturbations (e.g.,statement and leaf) on training samples, suggesting that models have large LiMem (f;Tr)and mayrely on memorization to solve the training samples.4 Large Language Interpolators Learn to Reason2 3 4 5 6 7 8# ppl for testing8543# ppl for training-0.10 0.24 0.35 0.27 0.33 0.32 0.300.21 0.24 0.46 0.29 0.30 0.31 0.210.21 0.25 0.34 0.25 0.22 0.22 0.230.26 0.37 0.30 0.19 0.12 0.10 0.13# epoch: 5−0.4−0.20.00.20.4(a) GPT4o-mini CoT FT2 3 4 5 6 7 8# ppl for testing85432# ppl for training0.17 0.29 0.22 0.31 0.31 0.18 0.250.25 0.35 0.37 0.40 0.37 0.24 0.260.20 0.39 0.37 0.27 0.31 0.18 0.280.32 0.35 0.34 0.29 0.29 0.05 0.150.28 0.05 0.07 0.06 0.07 -0.05 0.07# epoch: 5−0.4−0.20.00.20.4 (b) GPT4o-mini Direct FT2 3 4 5 6 7 8# ppl for testing8765432# ppl for training0.11 0.32 0.29 0.25 0.17 0.10 0.10-0.03 0.34 0.30 0.27 0.16 0.11 0.100.19 0.33 0.32 0.24 0.20 0.11 0.120.16 0.37 0.31 0.25 0.13 0.13 0.110.24 0.39 0.29 0.23 0.09 0.10 0.08-0.01 0.27 0.20 0.26 0.14 0.11 0.08-0.10 0.12 0.03 0.07 0.02 0.03 0.04# epoch: 52 3 4 5 6 7 8# ppl for testing8765432# ppl for training0.25 0.37 0.30 0.24 0.20 0.11 0.130.20 0.41 0.40 0.29 0.17 0.14 0.120.34 0.41 0.43 0.34 0.21 0.15 0.090.25 0.45 0.42 0.28 0.20 0.11 0.120.39 0.40 0.44 0.25 0.16 0.09 0.060.41 0.38 0.41 0.26 0.17 0.11 0.130.11 0.08 0.06 0.06 0.02 0.01 0.02# epoch: 50−0.4−0.20.00.20.4−0.4−0.20.00.20.4 (c) Llama3-8B Direct FTFigure 4: Test accuracy improvement on N-people problems for LLMs fine-tuned on M-people problems,compared to the unfine-tuned model, under 0-shot direct prompting. Most grid values are above 0, indicatingtransferability and enhanced reasoning abilities across unseen tasks. Results for more epochs are in ??.The studies in § 3 show that both off-the-shelf and fine-tuned models exhibit some level of memoriza-tion in solving K&K reasoning tasks. However, does it mean that those models do not have reasoningcapabilities at all? It turns out that the models seem to do both, and the reasoning capability actuallyimproves as the memorization level increases. Next, we present evidence that support this hypothesis.The generalization performance increases with memorization level . As shown in Fig. 3, theaccuracy of fine-tuned models on the test set continues to increase as Acc(f;Tr)increases, despitethatLp∆on training samples also increases (i.e., stronger memorization).TheLiMem (f;Tst)on test samples are smaller than LiMem (f;Tr)on train samples in Fig. 2,particularly for more challenging cases (e.g., 5-person puzzles). This suggests that models are morelikely to use reasoning when solving unseen test samples.The fine-tuned model generalizes across different difficulty levels . By fine-tuning on the M-personproblem and testing on the N-person problem, we study LLMs’ transferability. The N×Mtestaccuracy improvement grid in Fig. 4 shows that 1) training on any M-person problem generallyenhances accuracy on unseen N-person test problems for any N, indicating enhanced reasoningability on both easier and harder problems; 2) extending the training epochs generally yields betterresults, particularly for Llama3-8B; 3) test accuracy improvement is larger when N≤6, andimproving performance on more challenging tasks remains possible but more difficult.45 ConclusionIn this paper, we designed a K&K puzzle-based logical reasoning benchmark and local perturbation-based metrics to quantify LLMs’ memorization in reasoning tasks. Our results reveal an intriguinginterplay between memorization and reasoning: while models heavily rely on memorization to solvechallenging K&K puzzles, models trained to have a higher level memorization also solve more unseentest puzzles, and solve them relatively robustly (in contrast to the memorized training puzzles).References[1]Peter L Bartlett, Andrea Montanari, and Alexander Rakhlin. Deep learning: a statisticalviewpoint. Acta Numerica , 2021.[2]Mikhail Belkin. Fit without fear: remarkable mathematical phenomena of deep learning throughthe prism of interpolation. Acta Numerica , 2021.[3]Mikhail Belkin, Daniel J Hsu, and Partha Mitra. Overfitting or perfect fitting? Risk bounds forclassification and regression rules that interpolate. In NeurIPS , 2018.[4]Stella Biderman, Usvsn Prashanth, Lintang Sutawika, Hailey Schoelkopf, Quentin Anthony,Shivanshu Purohit, and Edward Raff. Emergent and predictable memorization in large languagemodels. NeurIPS , 2024.[5]Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, andChiyuan Zhang. Quantifying memorization across neural language models. In ICLR , 2023.[6]Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-V oss, Kather-ine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting trainingdata from large language models. In USENIX Security , 2021.[7]Xinyun Chen, Ryan Andrew Chi, Xuezhi Wang, and Denny Zhou. Premise order matters inreasoning with large language models. In ICML , 2024.[8]Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jian, Bill Yuchen Lin, PeterWest, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck,Xiang Ren, Allyson Ettinger, Zaïd Harchaoui, and Yejin Choi. Faith and fate: Limits oftransformers on compositionality. NeurIPS , 2024.[9]Panagiotis Giadikiaroglou, Maria Lymperaiou, Giorgos Filandrianos, and Giorgos Stamou.Puzzle solving using reasoning of large language models: A survey. In IJCAI , 2024.[10] Valentin Hartmann, Anshuman Suri, Vincent Bindschaedler, David Evans, Shruti Tople, andRobert West. SoK: memorization in general-purpose large language models. arXiv preprintarXiv:2310.18362 , 2023.[11] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang,Armando Solar-Lezama, Koushik Sen, and Ion Stoica. LiveCodeBench: Holistic and contam-ination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974 ,2024.[12] Philip N Johnson-Laird and Ruth MJ Byrne. Meta-logical problems: Knights, knaves, and rips.Cognition , 1990.[13] Antonia Karamolegkou, Jiaang Li, Li Zhou, and Anders Søgaard. Copyright violations andlarge language models. In EMNLP , 2023.[14] Mehran Kazemi, Quan Yuan, Deepti Bhatia, Najoung Kim, Xin Xu, Vaiva Imbrasaite, andDeepak Ramachandran. BoardgameQA: A dataset for natural language reasoning with contra-dictory information. In NeurIPS , 2024.[15] Andrew Lee, Xiaoyan Bai, Itamar Pres, Martin Wattenberg, Jonathan K Kummerfeld, and RadaMihalcea. A mechanistic understanding of alignment algorithms: A case study on DPO andtoxicity. In ICML , 2024.5[16] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, ChrisCallison-Burch, and Nicholas Carlini. Deduplicating training data makes language modelsbetter. In ACL, 2022.[17] Bill Yuchen Lin. Math Olympiad becomes easier for AI; Common sense is still hard., 2024.[18] Bill Yuchen Lin, Ronan Le Bras, and Yejin Choi. ZebraLogic: benchmarking the logicalreasoning ability of language models, 2024.[19] Philipp Mondorf and Barbara Plank. Liar, liar, logical mire: A benchmark for suppositionalreasoning in large language models. arXiv preprint arXiv:2406.12546 , 2024.[20] Vidya Muthukumar, Kailas V odrahalli, Vignesh Subramanian, and Anant Sahai. Harmlessinterpolation of noisy data in regression. IEEE Journal on Selected Areas in Information Theory ,2020.[21] Marianna Nezhurina, Lucia Cipolina-Kun, Mehdi Cherti, and Jenia Jitsev. Alice in wonderland:Simple tasks showing complete reasoning breakdown in state-of-the-art large language models.arXiv preprint arXiv:2406.02061 , 2024.[22] Manley Roberts, Himanshu Thakur, Christine Herlihy, Colin White, and Samuel Dooley. To thecutoff... and beyond? A longitudinal perspective on LLM data contamination. In ICLR , 2023.[23] Saurabh Srivastava, Anto PV , Shashank Menon, Ajay Sukumar, Alan Philipose, Stevin Prince,Sooraj Thomas, et al. Functional benchmarks for robust evaluation of reasoning performance,and the reasoning gap. arXiv preprint arXiv:2402.19450 , 2024.[24] Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. Memorizationwithout overfitting: Analyzing the training dynamics of large language models. NeurIPS , 2022.[25] Eric Wallace, Kai Xiao, Reimar Leike, Lilian Weng, Johannes Heidecke, and Alex Beutel.The instruction hierarchy: Training LLMs to prioritize privileged instructions. arXiv preprintarXiv:2404.13208 , 2024.[26] Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, PrateekMittal, Mengdi Wang, and Peter Henderson. Assessing the brittleness of safety alignment viapruning and low-rank modifications. In ICML , 2024.[27] Boyi Wei, Weijia Shi, Yangsibo Huang, Noah A Smith, Chiyuan Zhang, Luke Zettlemoyer, KaiLi, and Peter Henderson. Evaluating copyright takedown methods for language models. InNeurIPS Datasets and Benchmark , 2024.[28] Ruijie Xu, Zengzhi Wang, Run-Ze Fan, and Pengfei Liu. Benchmarking benchmark leakage inlarge language models. arXiv preprint arXiv:2404.18824 , 2024.[29] Feng Yao, Yufan Zhuang, Zihao Sun, Sunan Xu, Animesh Kumar, and Jingbo Shang. Datacontamination can cross language barriers. arXiv preprint arXiv:2406.13236 , 2024.[30] Zhongshen Zeng, Pengguang Chen, Shu Liu, Haiyun Jiang, and Jiaya Jia. MR-GSM8K: a meta-reasoning benchmark for large language model evaluation. arXiv preprint arXiv:2312.17080 ,2023.[31] Hugh Zhang, Jeff Da, Dean Lee, Vaughn Robinson, Catherine Wu, Will Song, Tiffany Zhao,Pranav Raja, Dylan Slack, Qin Lyu, et al. A careful examination of large language modelperformance on grade school arithmetic. arXiv preprint arXiv:2405.00332 , 2024.[32] Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, and Xing Xie. Dyval:Graph-informed dynamic evaluation of large language models. In ICLR , 2024.6AppendicesA Related Work 8B Data Generation Details 8C Experiments Details 9D Additional Experimental Results 97A Related WorkMemorization and benchmark contamination in LLMs Previous research has explored trainingdata memorization in the context of privacy and copyright [ 6], focusing on how LMs may uninten-tionally reproduce text by generating near-verbatim outputs from their training data [ 16,4,5]. Ourstudy broadens the concept of memorization to the reasoning context, by evaluating whether LLMscan recall solutions to training questions but struggle to solve their variants during testing.Such memorization patterns appear in the off-the-shelf LLMs on popular math reasoning benchmarks,indicating potential benchmark contamination (i.e., included in the training data). For example,LLMs perform exceptionally well on benchmarks such as GSM8K, MATH, and MMLU, but theirperformance drops significantly when faced with benchmark variants. These include human-curatedproblems of similar difficulty [ 31], functional variants systematically generated via programs [ 23],rephrased versions [ 28], translated versions [ 29], or problems set beyond a specific date cutoff [ 22,11].Logical reasoning benchmarks To evaluate logical reasoning capabilities in LLMs, syntheticbenchmarks have been developed. These benchmarks enable scalable generation of samples withvarying configurations and difficulty levels [ 9] to study LLM reasoning in a controlled setup. Forinstance, DyVal [ 32] dynamically generates evaluation samples with controllable complexity basedon directed acyclic graphs, which covers reasoning tasks including deductive, boolean, and abductivereasoning. The authors demonstrate that fine-tuning Llama2-13B-chat on these synthetic samplesenhances its performance on other reasoning benchmarks. Chen et al. [ 7] focus on propositional logicproblems involving definite clauses. They synthetically generate variations with different premiseorders, such as forward, backward, and shuffled. Their study shows that aligning the premise orderwith the proof order improves LLMs’ accuracy in solving these problems. Dziri et al. [ 8] explorethe limitations of transformers in tasks requiring compositional reasoning, including multiplication,Einstein’s Puzzle (a constraint satisfaction problem), and dynamic programming problems. They findthat while GPT-3 fine-tuned on their training samples can solve in-distribution problems, it fails togeneralize to out-of-distribution tasks with increased problem sizes. Extending from Einstein’s Puzzle,ZebraLogic [ 18] is introduced to require reasoning through reduction, absurdum, and eliminationto solve constraint satisfaction problems. The study shows that off-the-shelf models struggle withcomplex puzzles involving large problem sizes. BoardgameQA[ 14] presents a question-answeringdataset characterized by contradictory facts and rules in the questions. To solve this task, the authorsfind that fine-tuning BERT-large and T5-XXL on their training dataset with proofs outperformsfew-shot prompting using PaLM with chain-of-thought (CoT) prompting. Alice in Wonderland [ 21]is a type of reasoning task in the format of light quiz-style problems such as “Alice has Nbrothersand she also has Msisters. How many sisters does Alice’s brother have?” Advanced models oftenfail to reason and count accurately in such simple scenarios.The closest benchmark to us is TruthQuest [ 19], which builds on the classic K&K puzzles but onlyfocuses on evaluation samples involving 3-6 people puzzles. In contrast, our proposed frameworkextends this line of research by generating a more comprehensive set of K&K puzzles with moredifficulty levels and various perturbation types. We evaluate state-of-the-art models on more chal-lenging K&K puzzles (e.g., 8-people) and analyze their memorization behaviors (via fine-tuning) andgeneralization capabilities under local perturbations.B Data Generation DetailsDuring our data construction, we use the maximum width W= 2and depth D= 2, and the numberof persons in the puzzle N= 2,3,4,5,6,7,8.Tab. 1 presents the example knight (truth-teller) and knave (liar) scenario involving two people: Liamand Aria, with corresponding logical statements, and converted English statements, questions, andanswers. It also shows three versions of the problems: an original example, a leaf-perturbed version,and a statement-perturbed version. Specifically, (1) leaf perturbation changes a “leaf” in the logicaltree - a single truth value. In this case, it flipped Liam’s status in Aria’s statement from truth-tellerto liar. (2) Statement perturbation changes the entire structure of a statement. Here, it changedAria’s statement from an equivalence to a contradictory conjunction. Both perturbations result inchanging the answer. The leaf perturbation creates a subtle change in one statement that flips thelogical outcome, while the statement perturbation changes the entire one statement.8Table 1: 2-person puzzle generation with the knight (telling-truth) and knave (lying) and comparison betweenoriginal sample, leaf-perturbed sample, and statement-perturbed sample.Type Example Leaf Perturbed Example Statement Perturbed Exampleperson Liam (person index 0), Aria (person index 1)logical statementLiam: (‘not’, (‘lying’, 1))Aria: (‘ ⇔’, (‘telling-truth’, 1),(‘telling-truth’, 0))Liam: (‘not’, (‘lying’, 1))Aria: (‘ ⇔’, (‘telling-truth’, 1),(‘lying’, 0))Liam: (‘not’, (‘lying’, 1))Aria: (‘and’, (‘lying’, 0),(‘telling-truth’, 0))English statement"Aria is not a knave" - Liam. Ariaasserted: "Aria is a knight if andonly if Liam is a knight"."Aria is not a knave" - Liam. Ariaasserted: "Aria is a knight if andonly if Liam is a knave"."Aria is not a knave" - Liam. Ariaasserted: "Liam is a knave andLiam is a knight"question"A very special island is inhabitedonly by knights and knaves.Knights always tell the truth, andknaves always lie. You meet 2inhabitants: Liam, and Aria. Aria isnot a knave - Liam. Aria asserted:Aria is a knight if and only if Liamis a knight. So who is a knight andwho is a knave?""A very special island is inhabitedonly by knights and knaves.Knights always tell the truth, andknaves always lie. You meet 2inhabitants: Liam, and Aria. Aria isnot a knave - Liam. Aria asserted:Aria is a knight if and only if Liamis a knave. So who is a knight andwho is a knave?""A very special island is inhabitedonly by knights and knaves.Knights always tell the truth, andknaves always lie. You meet 2inhabitants: Liam, and Aria. Aria isnot a knave - Liam. Aria asserted:Liam is a knave and Liam is aknight. So who is a knight and whois a knave?"answer(1) Liam is a knight (2) Aria is aknight(1) Liam is a knave (2) Aria is aknave(1) Liam is a knave (2) Aria is aknaveC Experiments DetailsEvaluation We utilize zero-shot direct prompting with task-specific instructions for open-endedquestion-answering. We employ the following prompt:Prompt for 0-shot evaluationYour task is to solve a logical reasoning problem. You are given set of statements from whichyou must logically deduce the identity of a set of characters.You must infer the identity of each character. At the end of your answer, you must clearlystate the identity of each character by following the format:CONCLUSION:(1) ...(2) ...(3) ...### Question: {question}### Answer:In our evaluation process, we use greedy decoding with temperature t= 0 for all models and amaximum token length of 2048.To assess the correctness, we implement keyword matching: a response is considered correct if eachperson’s ground truth identity appears in the conclusion part of the model’s output.Fine-tuning For Llama3-8B fine-tuning, we employed the following standard hyperparameterswith a batch size of 4, gradient accumulation steps of 8, and 5e-5 learning rate. We finetune for amaximum of 100 epochs.For GPT4o-mini fine-tuning, we utilized the default hyperparameters provided by the OpenAI fine-tuning API. The model was fine-tuned for 5 epochs to achieve high accuracy within reasonablebudget.D Additional Experimental Results92 3 4 5 6 7 8# pplLlama-3-8BGemma-2-9bPhi-3-miniPhi-3-mediumNuminaMath-7B-CoTDeepseek-Math-7bGPT-4o-miniGPT-4oClaude-3.5-sonnet0.28 0.11 0.04 0.02 0.04 0.00 0.000.30 0.16 0.09 0.06 0.05 0.02 0.050.36 0.25 0.15 0.12 0.03 0.07 0.040.44 0.34 0.16 0.14 0.04 0.07 0.030.28 0.13 0.12 0.05 0.01 0.00 0.000.35 0.21 0.08 0.06 0.02 0.00 0.000.63 0.42 0.34 0.17 0.09 0.10 0.010.68 0.57 0.49 0.32 0.23 0.21 0.110.70 0.63 0.51 0.31 0.22 0.10 0.06Acc(f;Tst)2 3 4 5 6 7 8# ppl0.27 0.10 0.04 0.02 0.04 0.00 0.000.28 0.16 0.09 0.06 0.04 0.02 0.040.22 0.21 0.13 0.09 0.03 0.06 0.030.27 0.24 0.14 0.10 0.01 0.07 0.030.16 0.13 0.11 0.05 0.01 0.00 0.000.22 0.19 0.07 0.06 0.02 0.00 0.000.24 0.26 0.19 0.14 0.07 0.08 0.000.19 0.30 0.17 0.21 0.14 0.15 0.090.24 0.33 0.25 0.23 0.13 0.08 0.06LiMem(f;Tst) perturbed statement2 3 4 5 6 7 8# ppl0.26 0.11 0.03 0.02 0.04 0.00 0.000.30 0.16 0.09 0.05 0.04 0.02 0.040.24 0.24 0.13 0.12 0.03 0.06 0.040.27 0.28 0.12 0.10 0.03 0.04 0.020.23 0.12 0.10 0.05 0.01 0.00 0.000.22 0.17 0.06 0.05 0.02 0.00 0.000.29 0.25 0.20 0.11 0.06 0.08 0.010.20 0.20 0.22 0.18 0.14 0.13 0.090.16 0.31 0.24 0.18 0.11 0.08 0.06LiMem(f;Tst) perturbed leaf0.00.10.20.30.40.50.60.70.000.050.100.150.200.250.300.000.050.100.150.200.250.30Figure 5: Test accuracy Acc(f;Tst)of off-the-shelf models under 0-shot direct prompting drops with increasingpuzzle complexity (left). LiMem (f;Tst)on test examples under statement perturbation (middle) and leafperturbation (right) is large for specific models, indicating signs of memorization in solving these puzzles.2 3 4 5 6 7 8# pplLlama-3-8BPhi-3-miniPhi-3-mediumNuminaMath-7B-CoTDeepseek-Math-7b0.24 0.10 0.05 0.03 0.02 0.00 0.010.32 0.38 0.21 0.11 0.04 0.02 0.010.57 0.40 0.29 0.24 0.10 0.07 0.060.23 0.06 0.06 0.02 0.01 0.01 0.000.36 0.14 0.04 0.02 0.02 0.01 0.00Acc(f;Tst)2 3 4 5 6 7 8# ppl0.22 0.08 0.05 0.03 0.02 0.00 0.010.23 0.30 0.16 0.11 0.04 0.02 0.010.24 0.21 0.23 0.18 0.08 0.07 0.050.17 0.06 0.03 0.02 0.01 0.01 0.000.24 0.10 0.04 0.02 0.02 0.01 0.00LiMem(f;Tst) perturbed statement2 3 4 5 6 7 8# ppl0.20 0.09 0.05 0.03 0.02 0.00 0.010.23 0.29 0.15 0.10 0.04 0.02 0.010.28 0.22 0.19 0.17 0.07 0.06 0.040.17 0.05 0.06 0.02 0.01 0.01 0.000.24 0.12 0.04 0.02 0.02 0.01 0.00LiMem(f;Tst) perturbed leaf0.00.10.20.30.40.50.000.050.100.150.200.250.300.000.050.100.150.200.25Figure 6: Acc(f;Tst)andLiMem (f;Tst)of off-the-shelf models under 0-shot CoT prompting, where we adda chain-of-thought trigger phrase “ Let’s think step by step” in the end of the prompt.2 3 4 5 6 7 8# pplLlama-3-8BPhi-3-miniPhi-3-mediumNuminaMath-7B-CoTDeepseek-Math-7b0.00 0.01 0.00 0.00 0.00 0.00 0.000.45 0.20 0.16 0.11 0.02 0.03 0.030.54 0.31 0.18 0.10 0.07 0.05 0.060.31 0.12 0.12 0.06 0.06 0.01 0.010.32 0.19 0.10 0.03 0.01 0.01 0.01Acc(f;Tst)2 3 4 5 6 7 8# ppl0.00 0.01 0.00 0.00 0.00 0.00 0.000.28 0.19 0.15 0.09 0.01 0.02 0.030.37 0.24 0.17 0.10 0.03 0.03 0.050.27 0.11 0.11 0.06 0.05 0.01 0.010.26 0.17 0.08 0.03 0.01 0.01 0.01LiMem(f;Tst) perturbed statement2 3 4 5 6 7 8# ppl0.00 0.01 0.00 0.00 0.00 0.00 0.000.37 0.19 0.14 0.11 0.02 0.02 0.030.30 0.24 0.16 0.09 0.06 0.02 0.050.30 0.12 0.10 0.06 0.06 0.01 0.000.25 0.17 0.10 0.03 0.01 0.01 0.01LiMem(f;Tst) perturbed leaf0.00.10.20.30.40.50.00.10.20.30.00.10.20.3Figure 7: Acc(f;Tst)andLiMem (f;Tst)of off-the-shelf models under 1-shot direct prompting, where weprovide one demonstration consisting of one question and its answer.2 3 4 5 6 7 8# pplLlama-3-8BPhi-3-miniPhi-3-mediumNuminaMath-7B-CoTDeepseek-Math-7b0.14 0.02 0.02 0.01 0.01 0.00 0.000.33 0.18 0.08 0.07 0.02 0.03 0.010.45 0.28 0.21 0.08 0.04 0.05 0.080.27 0.09 0.08 0.01 0.04 0.00 0.000.34 0.07 0.06 0.01 0.00 0.00 0.01Acc(f;Tst)2 3 4 5 6 7 8# ppl0.14 0.02 0.02 0.01 0.01 0.00 0.000.25 0.14 0.06 0.06 0.02 0.03 0.010.31 0.24 0.12 0.06 0.02 0.05 0.080.25 0.08 0.08 0.01 0.04 0.00 0.000.21 0.07 0.06 0.01 0.00 0.00 0.01LiMem(f;Tst) perturbed statement2 3 4 5 6 7 8# ppl0.14 0.01 0.02 0.01 0.01 0.00 0.000.26 0.18 0.08 0.06 0.01 0.03 0.010.37 0.20 0.16 0.05 0.02 0.04 0.060.23 0.09 0.08 0.01 0.04 0.00 0.000.24 0.07 0.06 0.01 0.00 0.00 0.01LiMem(f;Tst) perturbed leaf0.00.10.20.30.40.000.050.100.150.200.250.300.00.10.20.3Figure 8: Acc(f;Tst)andLiMem (f;Tst)of off-the-shelf models under 1-shot CoT prompting, where weprovide one demonstration consisting of a question, its corresponding CoT reasoning steps, and the answer.10 |
l5FDMofecw | OpenMathInstruct-2: Accelerating AI for Math withMassive Open-Source Instruction DataShubham [email protected] [email protected] [email protected] KisacaninNVIDIAInstitute for AI R&D of SerbiaFaculty of Technical Sciences,University of Novi [email protected] [email protected] [email protected] reasoning continues to be a critical challenge in large language model(LLM) development, with a significant performance gap between closed-sourceand open-source efforts, largely due to differences in training data quality and scale.The emergence of frontier open-weight LLMs offers new opportunities to generatehigh-quality, commercially permissible synthetic data to help bridge this gap. In thispaper, we investigate the recently released Llama3.1 family of models to improveopen-source math reasoning through synthetically generated supervised finetuning(SFT) data. We conduct ablation studies to optimize design choices for the dataset,such as solution format and teacher model selection, which enhance SFT perfor-mance. We also investigate SFT’s robustness to incorrect solutions and find that atlarge data scales, the model can be robust to as much as 20% noise, suggesting thatthe simple answer-matching heuristic is sufficient for SFT data selection. Basedon these insights, we create the OpenMathInstruct-2 dataset which consists of14M question-solution pairs ( >600K unique questions), making it nearly eighttimes larger than any previous such dataset. Finetuning the Llama-3.1-8B-Baseusing OpenMathInstruct-2 outperforms Llama3.1-8B-Instruct on MATH byan absolute 14.6% (51.9 →66.5), demonstrating the effectiveness of the dataset.As part of our open-source efforts, we will release the code, the finetuned models,and the OpenMathInstruct-2 dataset under a commercially permissive license.11 IntroductionSynthetic data has emerged as a key technique for building large language models due to its cost-effectiveness and scalability [Meta-AI, 2024, NVIDIA, 2024, DeepSeek-AI, 2024b]. In particular,synthetic data is well suited for mathematical reasoning where the performance improvements withsynthetic data scaling are yet to saturate [Zeng et al., 2024, Chan et al., 2024, Yang et al., 2024].However, access to this progress is limited because the current largest math datasets remain closed-source [Zeng et al., 2024, Yang et al., 2024]. The closed nature of these datasets introduces twomajor issues. First, concerns over data leakage erode trust in reported benchmark results [Aiyappa1Data and models are available at https://huggingface .co/collections/nvidia/openmath-2-66fb142317d86400783d2c7bCode is available at https://github .com/Kipok/NeMo-SkillsMATH-AI@ NeurIPS 2024.SolutionAugmentationQuestion-SolutionAugmentationQuestion-SolutionAugmentationSolutionAugmentationMATHGSM8KDecontaminationwith Test SetsOpenMathInstruct-2(14M)9.9M2.1M2.5M0.5M8.9M2.1MFigure 1: Overview of the OpenMathInstruct-2 construction pipeline.et al., 2023]. For e.g., Zhang et al. [2024] show a drop of more than 10% for popular LLMs on anunpublished test set which is distributionally similar to the popular grade school math benchmarkGSM8K [Cobbe et al., 2021]. Second, it prevents practitioners from fully understanding the impactof data composition and algorithmic choices [Soldaini et al., 2024].Among open-source alternatives, the recent NuminaMath dataset [Li et al., 2024] has the largestcollection of questions collected from diverse sources. However, its restrictive license—likely due tothe use of GPT-4o in data processing and synthesis—limits its broader use. Similarly, other popularmath instruction tuning datasets, such as MetaMathQA [Yu et al., 2024] and MathInstruct [Yueet al., 2024], have also relied on GPT models for data synthesis, which prohibits their usage innon-commercial settings. A notable exception is the OpenMathInstruct-1 [Toshniwal et al., 2024]dataset, one of the biggest open-source math reasoning datasets, where solutions are synthesized usingopen-weight models. However, OpenMathInstruct-1 has two key limitations. Firstly, its questiondiversity is constrained, since all the questions in the dataset are drawn from the training sets ofMATH [Hendrycks et al., 2021] and GSM8K [Cobbe et al., 2021]. Secondly, at the time of itsrelease, there was a sizable gap in the math reasoning capabilities of open and closed-source models.As a result, the dataset underrepresents more challenging problems compared to its GPT-basedcounterparts [Gou et al., 2024].The emergence of frontier open-weight models [Meta-AI, 2024, DeepSeek-AI, 2024b] has made itpossible to create high-quality, commercially permissible math reasoning datasets. In this paper, weuse the recently released Llama3.1 family of models to generate synthetic math instruction tuning(SFT) data, and evaluate the quality of the math reasoning data by finetuning the Llama-3.1-8B-Basemodel. To create the dataset, we conduct careful ablation studies using the MATH dataset to determinedesign choices that impact the final SFT performance. The highlights of our findings include:•Chain-of-Thought (CoT) Solution Format : Excessive verbosity can be detrimental to theSFT performance. Our proposed CoT format outperforms Llama’s CoT format while being40% shorter in solution length (see Figure 2 for a sample solution).•Choice of Teacher Model : The SFT performance mirrors the teacher model’s perfor-mance even when controlling for the SFT data size. Specifically, we find that finetuningLlama3.1-8B-Base on solutions generated by Llama3.1-405B-Instruct outperformssolutions generated by the model itself even when controlling for the SFT data size.•Robustness of SFT : At sufficient data scale ( ≥256K), the SFT performance suffers minimalto no degradation with as much as 20% incorrect solutions. Additionally, removing solutionswith incorrect intermediate steps, as determined by LLM-as-a-Judge or the NemotronReward model [NVIDIA, 2024], also yields no performance benefit.•Impact of Question Diversity : Controlling for SFT data size, we find that question diversityhas a huge positive impact on SFT performance.Based on the above findings, we create OpenMathInstruct-2 with data synthesized using Llama-3.1-405B-Instruct. The dataset uses the MATH and GSM8K training set questions and uses the LLM to(a) synthesize solutions to the original training set questions and (b) create new question-solutionpairs similar to the training set questions. To ensure there is no test set contamination among the2QuestionBen throws four identical darts. Each hits one of fouridentical dartboards on the wall. After throwing the fourdarts, he lists the number of darts that hit each board,from greatest to least. How many different lists arepossible?Llama CoT## Step 1: Understand the constraints of the problemThere are four identical darts and four identicaldartboards. The task is to list the number of darts thathit each board from greatest to least.## Step 2: Enumerate the possibilitiesGiven the constraints, we can start by listing thepossibilities systematically, starting with the highestnumber of darts on one board and then distributing theremaining darts across the other boards in decreasingorder.## Step 3: List all possible distributions1. All four darts hit one board: 4, 0, 0, 02. Three darts hit one board, and one dart hits another:3, 1, 0, 03. Two darts hit one board, and two darts hit another: 2,2, 0, 04. Two darts hit one board, one dart hits another, and onedart hits yet another: 2, 1, 1, 05. One dart hits each board: 1, 1, 1, 1## Step 4: Count the different possibilitiesThere are 5 different ways to distribute the darts aslisted above.The final answer is: 5OpenMath CoTThe number of darts that hit each boardmust be a non-negative integer, and thesenumbers must add to 4. Since the dartboardsare identical, we can list the numbers fromgreatest to least.The possibilities are:4 + 0 + 0 + 0 = 4 + 0 + 0 = 4 + 0 = 43 + 1 + 0 + 0 = 4 + 0 + 0 = 4 + 0 = 42 + 2 + 0 + 0 = 4 + 0 + 0 = 4 + 0 = 42 + 1 + 1 + 0 = 3 + 1 + 0 = 4 + 0 = 41 + 1 + 1 + 1 = 2 + 1 + 1 = 3 + 1 = 4Counting these, there are 5possibilities.Figure 2: Comparing solutions in the Llama CoT format vs. the OpenMath CoT format for a samplequestion.synthesized questions, we perform thorough decontamination using the lm-sys pipeline, followed bymanual inspection [Yang et al., 2023]. Figure 1 provides an overview of the entire dataset constructionpipeline. The final dataset consists of 14M question-solution pairs with 600K unique questions.Thus, OpenMathInstruct-2 is about 8 times bigger than the previous biggest standalone open-sourcedataset [Toshniwal et al., 2024].The quality of OpenMathInstruct-2 is illustrated by the strong performance of the finetuned models.TheOpenMath2-Llama3.1-8B model, which is the Llama3.1-8B-Base model finetuned withOpenMathInstruct-2, outperforms Llama3.1-8B-Instruct by an absolute 14.6% on MATH withjust SFT. With a performance of 66.5 on MATH, OpenMath2-Llama3.1-8B is one of the strongestsub-10B open-source models. We will release all our fine-tuned models, code, the OpenMathInstruct-2 dataset, and a dataset explorer.2 Experimental SetupTraining Details. For all the experiments, except when training on the full dataset, we train theLlama3.1-8B-Base model for four epochs and save a checkpoint at the end of every epoch. Forthe full dataset, we train the model for about 2.2 epochs (60K steps) and save six equally spacedcheckpoints. The final checkpoint is created by averaging all the saved checkpoints. A globalbatch size of 512 is used along with the AdamW optimizer [Loshchilov and Hutter, 2019] with alearning rate of 5e-6 and a weight decay of 1e-2. All experiments are performed using the NeMotoolkit [Kuchaiev et al., 2019].Evaluation Setup. We evaluate the final finetuned model on popular math reasoning benchmarks,namely GSM8K, MATH, AMC 2023, and AIME 2024. The finetuned model is evaluated in thezero-shot setting with greedy decoding.3Table 1: Comparison of our OpenMath2-Llama model with other sub-20B open-weight and open-source models without tool usage. Open-weight models finetuned with publicly released data areconsidered as open-source for the purposes of this table.Model GSM8K MATH AMC’23 AIME’24OpenWeightDeepSeek-Coder-V2-Lite-Instruct [DeepSeek-AI, 2024a] 86.4 61.8 - 0/30Qwen2.5-Math-7B-Instruct [Yang et al., 2024] 95.2 83.6 25/40 5/30Llama3.1-8B-Instruct [Meta-AI, 2024] 84.5 51.9 9/40 2/30OpenSourceNuminaMath-7B-CoT [Li et al., 2024] 75.4 55.2 11/40 0/30OpenMath2-Llama3.1-8B (ours) 92.7 66.5 16/40 2/300 1 2 5 14Size of SFT Dataset (in million)203040506070MATH Test Accuracy (in %)Llama3.1-8B-BaseLlama3.1-8B-InstructOpenMath2-Llama3.1-8B+15.9Figure 3: MATH Test Accuracy as a function of the SFT data size.3 ResultsTo understand the impact of data scaling, we downsample the full dataset to 1M, 2M, and 5M-sizedinstruction tuning datasets using fair downsampling [Toshniwal et al., 2024]. Figure 3 plots theperformance on the MATH test set with the increase in SFT data size. With even the 1M downsam-pled version of OpenMathInstruct-2, the final model easily outperforms Llama3.1-8B-Instruct .Finally, we observe a consistent gain with an increase in data size, and even at 14M dataset size, wesee no signs of saturation in performance gains.Table 1 presents the results for top-performing, sub-20B, open-weight and open-source mod-els (without tool use). The OpenMath2-Llama3.1-8B model, which is finetuned on the fullOpenMathInstruct-2 dataset, outperforms or matches Llama3.1-8B-Instruct on all the mathreasoning benchmarks. Among the open-source models, we outperform the recently releasedNuminaMath-7B-CoT on all benchmarks as well. Finally, among all the presented models, theOpenMath2-Llama3.1-8B is second only to the Qwen2.5-Math-7B-Instruct , which has beentrained on more than a trillion synthetically generated math reasoning tokens, and starts with a basemodel, Qwen2.5-Math , which is about 35% better than Llama3.1-8B-Base .24 ConclusionWe introduce OpenMathInstruct-2, a math instruction tuning dataset with 14M question-solution pairsand more than 600K unique questions. The dataset is created using Llama3.1-405B-Instructmodel and released with a commercially permissive license. Compared to previous work,OpenMathInstruct-2 is about eight times larger than the previous biggest open-source dataset for2We are unsure of the n-gram based data contamination protocol followed by Qwen2.5-Math given itsobvious weakness in detecting paraphrases. In our own decontamination setup, which we borrow from Yanget al. [2023], we find paraphrases of test set questions that are identified by our pipeline but which n-grammatching will miss out on.4math reasoning. To support the open-source efforts, we will publicly release all the finetuned models,code, and the OpenMathInstruct-2 dataset.ReferencesRachith Aiyappa, Jisun An, Haewoon Kwak, and Yong-yeol Ahn. Can we trust the evaluation onChatGPT? In 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023) , 2023.Xin Chan, Xiaoyang Wang, Dian Yu, Haitao Mi, and Dong Yu. Scaling Synthetic Data Creation with1,000,000,000 Personas, 2024.Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and JohnSchulman. Training Verifiers to Solve Math Word Problems. arXiv preprint arXiv:2110.14168 ,2021.DeepSeek-AI. DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelli-gence, 2024a.DeepSeek-AI. DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts LanguageModel, 2024b.Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Minlie Huang, Nan Duan, andWeizhu Chen. ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving. InICLR , 2024.Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,and Jacob Steinhardt. Measuring Mathematical Problem Solving With the MATH Dataset. InNeurIPS Datasets and Benchmarks , 2021.O. Kuchaiev, J. Li, H. Nguyen, O. Hrinchuk, R. Leary, B. Ginsburg, S. Kriman, S. Beliaev,V . Lavrukhin, J. Cook, et al. NeMo: a toolkit for building AI applications using neural modules. InSystems for ML Workshop, NeurIPS , 2019.Jia Li, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi Huang, KashifRasul, Longhui Yu, Albert Q Jiang, Ziju Shen, et al. NuminaMath: The largest public dataset inAI4Maths with 860k pairs of competition math problems and solutions, 2024.Ilya Loshchilov and Frank Hutter. Decoupled Weight Decay Regularization. In ICLR , 2019.Meta-AI. The Llama 3 Herd of Models, 2024.NVIDIA. Nemotron-4 340B Technical Report, 2024.Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur,Ben Bogin, Khyathi Chandu, Jennifer Dumas, Yanai Elazar, Valentin Hofmann, Ananya Jha,Sachin Kumar, Li Lucy, Xinxi Lyu, Nathan Lambert, Ian Magnusson, Jacob Morrison, NiklasMuennighoff, Aakanksha Naik, Crystal Nam, Matthew Peters, Abhilasha Ravichander, KyleRichardson, Zejiang Shen, Emma Strubell, Nishant Subramani, Oyvind Tafjord, Evan Walsh, LukeZettlemoyer, Noah Smith, Hannaneh Hajishirzi, Iz Beltagy, Dirk Groeneveld, Jesse Dodge, andKyle Lo. Dolma: an Open Corpus of Three Trillion Tokens for Language Model PretrainingResearch. In ACL, 2024.Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman.OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset. In NeurIPS Datasets andBenchmarks , 2024.An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu,Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu,Xingzhang Ren, and Zhenru Zhang. Qwen2.5-Math Technical Report: Toward MathematicalExpert Model via Self-Improvement, 2024.Shuo Yang, Wei-Lin Chiang, Lianmin Zheng, Joseph E. Gonzalez, and Ion Stoica. RethinkingBenchmark and Contamination for Language Models with Rephrased Samples, 2023.5Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, ZhenguoLi, Adrian Weller, and Weiyang Liu. MetaMath: Bootstrap Your Own Mathematical Questions forLarge Language Models. In ICLR , 2024.Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.MAmmoTH: Building math generalist models through hybrid instruction tuning. In ICLR , 2024.Liang Zeng, Liangjun Zhong, Liang Zhao, Tianwen Wei, Liu Yang, Jujie He, Cheng Cheng, Rui Hu,Yang Liu, Shuicheng Yan, Han Fang, and Yahui Zhou. Skywork-Math: Data Scaling Laws forMathematical Reasoning in Large Language Models – The Story Goes On, 2024.Hugh Zhang, Jeff Da, Dean Lee, Vaughn Robinson, Catherine Wu, Will Song, Tiffany Zhao, PranavRaja, Dylan Slack, Qin Lyu, Sean Hendryx, Russell Kaplan, Michele Lunati, and Summer Yue. ACareful Examination of Large Language Model Performance on Grade School Arithmetic, 2024.6 |
kXXsqGiVnl | How Transformers Reason: A Case Study on aSynthetic Propositional Logic ProblemGuan Zhe Hong∗1Nishanth Dikkala2Enming Luo2Cyrus Rashtchian2Xin Wang2Rina Panigrahy21Purdue University2Google [email protected] ,{nishanthd, enming, cyroid, wanxin, rinap}@google.comAbstractLarge language models (LLMs) have demonstrated remarkable performance intasks that require reasoning abilities. Motivated by recent works showing evidenceof LLMs being able to plan and reason on abstract reasoning problems in con-text, we conduct a set of controlled experiments on a synthetic propositional logicproblem to provide a mechanistic understanding of how such abilities arise. In par-ticular, for a decoder-only Transformer trained solely on our synthetic dataset, weidentify the specific mechanisms by which a three-layer Transformer solves the rea-soning task. In particular, we identify certain “planning” and “reasoning” circuitswhich require cooperation between the attention blocks to in totality implement thedesired reasoning algorithm. To expand our findings, we then study a larger model,Mistral 7B. Using activation patching, we characterize internal components thatare critical in solving our logic problem. Overall, our work systemically uncoversnovel aspects of small and large transformers, and continues the study of how theyplan and reason.1 IntroductionLanguage models using the transformer architecture [Vaswani et al., 2017] have shown remarkablecapabilities on many natural language tasks [Brown et al., 2020, Radford et al., 2019b]. Trained withcausal language modeling wherein the goal is next-token prediction on huge amounts of text, thesemodels exhibit deep language understanding and generation skills. An essential milestone in thepursuit of models which can achieve a human-like artificial intelligence, is the ability to performhuman-like reasoning and planning in complex unseen scenarios. While some recent works usingprobing analyses have shown that the activations of the deeper layers of a transformer contain richinformation about certain mathematical reasoning problems [Ye et al., 2024], the question of whatmechanisms inside the model enables such abilities remains unclear.While the study of how transformers reason in general remains a daunting task, in this work, we aimto improve our mechanistic understanding of how a Transformer reason through simple propositionallogic problems. For concreteness’ sake, consider the following problem:Rules : A or B implies C. D implies E. Facts : A is true. B is false. D is true.Question : what is the truth value of C?An answer with minimal proof is “A is true. A or B implies C; C is true.”The reasoning problem, while simple-looking on the surface, requires the model to perform severalactions that are essential to more complex reasoning problems, all without chain of thought (CoT).∗Work done as a student researcher at Google Research.38th Conference on Neural Information Processing Systems (NeurIPS 2024).Before writing down any token, the model has to first discern the rulewhich is being queried: in thiscase, it is “A or B implies C”. Then, it needs to rely on the premise variables A and B to the locatethe relevant facts , and find “A is true” and “B is false”. Finally, it needs to decide that “A is true” isthe correct one to invoke in its answer due to the nature of disjunction. It follows that, to write downthe first token “A”, the model already has to form a “mental map” of the variable relations, valueassignments and query! Therefore, we believe that this is close to the minimal problem to examinehow a model internalizes and plans for solving a nontrivial mathematical reasoning problem whereapparent ambiguities in the problem specification cannot be resolved trivially.To understand the internal mechanisms of how a transformer solves problems resembling the minimalform above, we perform two flavors of experiments. The first is on shallow transformers trained purelyon the synthetic propositional logic problems. This enables a fine-grained analysis in a controlledsetting. The other set of experiments are on a pre-trained LLM (Mistral-7B), where we primarily relyon activation patching to uncover necessary circuits for solving the reasoning problem, includingspecialized roles of certain components. At a high level, we make the following discoveries based onour two fronts of analysis:1.We discover that small transformers, trained purely on the synthetic problem, utilize certain“routing embeddings ” to significantly alter the information flow of the deeper layers when solvingdifferent sub-categories of the reasoning problem. We also characterize the different reasoningpathways: we find that problems querying for reasoning chains involving logical operatorstypically require greater involvement of all the layers in the model.2.We uncover properties of the circuit which the pretrained LLM Mistral-7B-v0.1 employs to solvethe minimal version of the reasoning problem. We find four families of attention heads, whichhave surprisingly specialized roles in processing different sections of the context: queried-rulelocating heads, queried-rule mover heads, fact-processing heads, and decision heads. We findevidence suggesting that the model follows the natural reasoning path of “QUERY →RelevantRule→Relevant Fact(s) →Decision”.We discuss related works and scope of this work in detail in Appendix A.2 Problem settingIn this section, we present the core properties of the synthetic propositional logic problem whichshall be the data model of this paper. We delay finer details and more examples of the problem toAppendix B.2.1 Data model: a propositional logic problemOur problem follows an implicit causal structure, as illustrated in Figure 1. The structure consists oftwo distinct chains: One containing a logical operator at the end of the chain, and the other forming apurely linear chain.K DV ELogical operator AP T SFigure 1: Synthetic data model. The causal struc-ture has two chains: one with a logical operator(LogOp) at the end and the other being purely alinear causal chain. This example is the length-3case.We require the model to generate a minimalreasoning chain, consisting of “relevant facts”,proper rule invocations, and intermediate truthvalues, to answer the truth-value query. Con-sider an example constructed from the causalgraph in Figure 1, written in English:• Rules : K implies D. D or E implies A. Vimplies E. T implies S. P implies T.• Facts : K is true. P is true. V is false.• Query : A.• Answer : K is true. K implies D; D is true.D or E implies A; A is true.In this example, the QUERY token A is the ter-minating node of the OR chain. Since any trueinput to an OR gate (either D or E) results in2A being true, the minimal solution chooses only one of the starting nodes from the OR chain toconstruct its argument: in this case, node K is chosen.3 Mechanisms of planning and reasoning: a case study of the length-3problemIn this section, we discuss the internal mechanisms of a small transformer trained purely on thesynthetic problem. While there are many parts of the answer of the transformer which can lead tointeresting observations, due to space limitations, we primarily focus on the model’s “mental process”for producing the most important part of the answer, namely the first token . To further justify thischoice, we find that on our problem, a model’s full-answer accuracy strongly correlates with itsaccuracy of the first answer token, as detailed in Figure 4 in the Appendix.Architecture choice . We study a decoder-only attention-only transformer closely resembling theform of GPT-2 [Radford et al., 2019a]. We discuss training and architecture details in the Appendix.We select the smallest transformer that can achieve 100% accuracy (or sufficiently close to it) toinitiate our analysis, a 3-layer 3-head variant.3.1 Empirical observationsWe begin our analysis with mechanisms that are universal to how the model plans and reasons forpredicting the first token. Then we describe the mechanisms that only arise when the model needs todeal with specific situations. We discuss the main observations here, and leave the quantitative detailsto Appendix D.Mental notes at the QUERY position . The QUERY token is likely the most important token in thecontext: it determines which chain is being queried. The transformer makes use of this token in itsanswer in an intriguing way.Observation 1: chain-type disentanglement at QUERY . We observe that, the second layer’s attentionblock exhibit disentanglement in its output direction dependent on whether it is the linear chain that isbeing queried. Intriguingly, the third layer’s attention heads place greater than 90% of their attentionweights on the QUERY position on average when the linear chain is queried.Based on Observation 1, we hypothesize that given a chain type (linear or LogOp), there exists certaindirections at the second attention block which somehow change the behavior of the third attentionblock: attracting its attention to QUERY when it is the linear chain, and pushing its attention awayfrom QUERY when it is the LogOp chain. We confirm the existence and role of this “routing” signal.Observation 2: existence of an abstract “routing signal” . We compute the average of the secondattention block’s output on 1k samples whose QUERY is for the linear chain, which we denote ashroute . There are two interesting properties of this embedding direction:1.(Linear →LogOp intervention) We generate 500 test samples where QUERY is for the linearchain. Subtracting the embedding hroute from the second attention block’s output results in themodel outputting the correct first token for the LogOp chain of the problem 100% of the time onthe test samples. In other words, the “mode” in which the model reasons is flipped from “linear”to “LogOp”.2.(LogOp →linear intervention) We generate 500 test samples where QUERY is for the LogOpchain. Adding hroute to the second attention block’s output causes the three attention heads inlayer 3 to focus on the QUERY position: greater than 99% of the attention weights are on thisposition averaged over the test samples. In this case, however, the model does not output thecorrect starting node for the linear chain on more than 90% of the test samples.It follows that there indeed exists an abstract embedding direction inside the transformer whichsignificantly changes the information flow depending on the chain type being queried .Linear chain . At this point, it is clear to us that, when QUERY is for the linear chain, the thirdlayer mainly serves a simple “message passing” role at the QUERY position. A natural questionarises: does the input to the third layer truly contain the information to determine the first token ofthe answer, namely the starting node of the linear chain? The answer is yes.3Query Key/value Output Query: EA or B implies C Answer: Queried-rule locating heads (9,25;26), (12,9), (14,24;26) D is true Queried-rule mover head (13,11), (15,8) D implies EA is true B is false Fact-processing heads (16,12;14), (17,25) Decision head (19,8) Attention head type (Layer idx, head idx) Component Legend Figure 2: High-level properties of Mistral-7B’s reasoning circuit. The (chunks of) input tokens are onthe left, which are passed into the residual stream and processed by the attention heads. We illustratethe information flow manipulated by the different types of attention heads we identified to be vital tothe reasoning task.Observation 3: linearly-decodable linear-chain answer at layer 2 . We train an affine classifier withthe same input as the third attention block, with the target being the start of the linear chain; thetraining samples only query for the linear chain, and we generate 5k of them. We obtain a testaccuracy above 97% for this classifier (on 5k test samples), confirming that layer 2 already has theanswer at the QUERY position.LogOp chain: partial answer in layers 1 & 2 + refinement in layer 3 . To predict the correctstarting node of the LogOp chain, the model employs the following strategy:1.The first two layers encode the LogOp and only a “partial answer”. More specifically, we findevidence that (1) when the LogOp is an AND gate, layers 1 and 2 tend to pass the node(s) withFALSE assignment to layer 3, (2) when the LogOp is an OR gate, layers 1 and 2 tend to passnode(s) with TRUE assignment to layer 3.2.The third layer, combining information of the two starting nodes of the LogOp chain, and theinformation in the layer-2 residual stream at the ANSWER position, output the correct answer.We delay the full set of evidence for the above claims to Appendix D.2.4 The reasoning circuit in Mistral-7BWe now turn to examine how a pretrained LLM, namely Mistral-7B solves this reasoning problem.We choose this LLM as it is amongst the smallest accessible model which achieves above 70%accuracy on (a minimal version of) our problem. We present a hypothesis for the reasoning circuitinside the model for predicting the crucial first token of the length-2 problem in Figure 2, and provideevidence relying on a popular technique in mechanistic interpretability, activation patching.We describe the main properties of the reasoning circuit inside the model for this prediction task inFigure 2. At a high level, there are several intriguing properties of the reasoning circuit of the LLM:21. Compared to the attention blocks, the MLPs are relatively unimportant to correct prediction.2. There is a sparse set of attention heads that are found to be central to the reasoning circuit:•(Queried-rule locating head) Attention heads (9,25;26), (12,9), (14,24;26) locate the queriedrule using the QUERY token, and stores this information at the QUERY position.•(Queried-rule mover head) Attention heads (13,11), (15,8) move QUERY and the queried-ruleinformation from the QUERY position to the “:” position.•(Fact processing heads) Attention heads (16,12;14), (17,25) locate the relevant facts, andmove information to the “:” position.2We use (l, h)to denote an attention head. When referencing multiple heads in the same layer, we write(l, h1;h2;...;hn)for brevity.4•(Decision head) Attention head (19,8), relying on the aggregated information, makes adecision on which token to output.4.1 Circuit analysisWe only discuss high-level intuitions and results here due to space limitations, and delay the full setof experiments and their interpretations to Appendix E.Intuitively speaking, to support our hypothesis for the reasoning circuit employed by Mistral-7B tosolve the reasoning problem, we rely on activation patching to discover the attention heads which havethe greatest influence on the model’s output distribution (recall that the MLPs are not as importantin this problem). We combine such “causal-mediation” evidence with inspections on these heads’attention patterns. This leads to the set of evidence that is (partially) visualized in Figure 3 below.Queried-rule locating Fact processing Queried-rule mover Decision (a) Query (b) Typical attention pattern (c) Value Figure 3: Patching of query and value activations of all attention heads in (a) and (c); we found thatintervening the key activations only yield trivial scores, so we do not report them here. We show in (b)the typical attention patterns of a representative set of the attention heads which are identified to beimportant in the intervention experiments shown in (a) and (c). There are several distinct observationswhich can be made in (b). Queried-rule locating head (12,9): observe that it correctly locates thequeried rule which ends with Q. Queried-rule mover head (13,11): the only token position which itfocuses on is the QUERY token Q. Fact processing head (16,14): attention concentrates in the factsection. Decision head (19,8): attention focused on the correct first answer token K.5 ConclusionWe studied the reasoning mechanisms of both small transformers and LLMs on a synthetic proposi-tional logic problem. We analyzed a shallow decoder-only attention-only transformer trained purelyon this problem as well as a pretrained Mistral-7B LLM. We uncovered interesting mechanismsthe small and large transformers adopt to solve the problem. For the small models, we found theexistence of “routing” signals that significantly alter the model’s reasoning pathway depending onthe sub-category of the problem instance. For Mistral-7B, we found four families of attention headsthat implement the reasoning pathway of “QUERY →Relevant Rule →Relevant Facts →Decision”.These findings provide valuable insights into the inner workings of LLMs on mathematical reasoningproblems.ReferencesJannik Brinkmann, Abhay Sheshadri, Victor Levoso, Paul Swoboda, and Christian Bartelt. Amechanistic analysis of a transformer trained on a symbolic multi-step reasoning task. In Lun-WeiKu, Andre Martins, and Vivek Srikumar, editors, Findings of the Association for ComputationalLinguistics ACL 2024 , pages 4082–4102, Bangkok, Thailand and virtual meeting, August 2024.Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-acl.242. URL https://aclanthology.org/2024.findings-acl.242 .Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal,Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel5Herbert-V oss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler,Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, ScottGray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, IlyaSutskever, and Dario Amodei. Language models are few-shot learners. In NeurIPS , 2020.Xinyun Chen, Ryan A. Chi, Xuezhi Wang, and Denny Zhou. Premise order matters in reasoning withlarge language models, 2024. URL https://arxiv.org/abs/2402.08939 .Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, SeanWelleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, et al. Faith and fate: Limits oftransformers on compositionality. Advances in Neural Information Processing Systems , 36, 2024.Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, AmandaAskell, Yuntao Bai, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Deep Ganguli,Zac Hatfield-Dodds, Danny Hernandez, Andy Jones, Jackson Kernion, Liane Lovitt, KamalNdousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and ChrisOlah. A mathematical framework for transformer circuits. Transformer Circuits Thread , 2021.https://transformer-circuits.pub/2021/framework/index.html.Jiahai Feng and Jacob Steinhardt. How do language models bind entities in context? In The TwelfthInternational Conference on Learning Representations , 2024. URL https://openreview.net/forum?id=zb3b6oKO77 .Jiahai Feng, Stuart Russell, and Jacob Steinhardt. Monitoring latent world states in language modelswith propositional probes, 2024. URL https://arxiv.org/abs/2406.19501 .Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Wenfei Zhou, JamesCoady, David Peng, Yujie Qiao, Luke Benson, Lucy Sun, Alex Wardle-Solano, Hannah Szabo,Ekaterina Zubova, Matthew Burtell, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, AnsongNi, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Alexander R. Fabbri, Wojciech Kryscinski,Semih Yavuz, Ye Liu, Xi Victoria Lin, Shafiq Joty, Yingbo Zhou, Caiming Xiong, Rex Ying,Arman Cohan, and Dragomir Radev. Folio: Natural language reasoning with first-order logic,2024. URL https://arxiv.org/abs/2209.00840 .Michael Hanna, Ollie Liu, and Alexandre Variengien. How does gpt-2 compute greater-than?interpreting mathematical abilities in a pre-trained language model. In Proceedings of the 37thInternational Conference on Neural Information Processing Systems , NIPS ’23, Red Hook, NY ,USA, 2024. Curran Associates Inc.Peter Hase, Mohit Bansal, Been Kim, and Asma Ghandeharioun. Does localization inform editing?surprising differences in causality-based localization vs. knowledge editing in language models.InProceedings of the 37th International Conference on Neural Information Processing Systems ,NIPS ’23, Red Hook, NY , USA, 2024. Curran Associates Inc.Stefan Heimersheim and Neel Nanda. How to use and interpret activation patching, 2024. URLhttps://arxiv.org/abs/2404.15255 .Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXivpreprint arXiv:2103.03874 , 2021.Man Luo, Shrinidhi Kumbhar, Ming shen, Mihir Parmar, Neeraj Varshney, Pratyay Banerjee, SomakAditya, and Chitta Baral. Towards logiglue: A brief survey and a benchmark for analyzing logicalreasoning capabilities of language models, 2024. URL https://arxiv.org/abs/2310.00836 .Thomas McGrath, Matthew Rahtz, Janos Kramar, Vladimir Mikulik, and Shane Legg. The hydraeffect: Emergent self-repair in language model computations, 2023. URL https://arxiv.org/abs/2307.15771 .Kevin Meng, David Bau, Alex J Andonian, and Yonatan Belinkov. Locating and editing factualassociations in GPT. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho,editors, Advances in Neural Information Processing Systems , 2022. URL https://openreview.net/forum?id=-h6WAS6eE4 .6Jack Merullo, Carsten Eickhoff, and Ellie Pavlick. Circuit component reuse across tasks in transformerlanguage models. In The Twelfth International Conference on Learning Representations , 2024.URL https://openreview.net/forum?id=fpoAYV6Wsk .Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, and Yasuhiro Sogawa. Learning deductivereasoning from synthetic corpus based on formal logic. In Andreas Krause, Emma Brunskill,Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett, editors, Proceedings ofthe 40th International Conference on Machine Learning , volume 202 of Proceedings of MachineLearning Research , pages 25254–25274. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/morishita23a.html .Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan,Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli,Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, LianeLovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish,and Chris Olah. In-context learning and induction heads. Transformer Circuits Thread , 2022.https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html.Nisarg Patel, Mohith Kulkarni, Mihir Parmar, Aashna Budhiraja, Mutsumi Nakamura, NeerajVarshney, and Chitta Baral. Multi-logieval: Towards evaluating multi-step logical reasoningability of large language models, 2024. URL https://arxiv.org/abs/2406.17169 .Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Languagemodels are unsupervised multitask learners. 2019a. URL https://api.semanticscholar.org/CorpusID:160025533 .Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Languagemodels are unsupervised multitask learners. OpenAI blog , 1(8):9, 2019b.Abulhair Saparov and He He. Language models are greedy reasoners: A systematic formal analysisof chain-of-thought. In The Eleventh International Conference on Learning Representations , 2023.URL https://openreview.net/forum?id=qFVVBzXxR2V .Abulhair Saparov, Richard Yuanzhe Pang, Vishakh Padmakumar, Nitish Joshi, Seyed Mehran Kazemi,Najoung Kim, and He He. Testing the general deductive reasoning capacity of large languagemodels using ood examples. In Proceedings of the 37th International Conference on NeuralInformation Processing Systems , NIPS ’23, Red Hook, NY , USA, 2024. Curran Associates Inc.S Seals and Valerie Shalin. Evaluating the deductive competence of large language models. In KevinDuh, Helena Gomez, and Steven Bethard, editors, Proceedings of the 2024 Conference of theNorth American Chapter of the Association for Computational Linguistics: Human LanguageTechnologies (Volume 1: Long Papers) , pages 8614–8630, Mexico City, Mexico, June 2024.Association for Computational Linguistics. doi: 10.18653/v1/2024.naacl-long.476. URL https://aclanthology.org/2024.naacl-long.476 .Chandan Singh, Jeevana Priya Inala, Michel Galley, Rich Caruana, and Jianfeng Gao. Rethinkinginterpretability in the era of large language models, 2024. URL https://arxiv.org/abs/2402.01761 .Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. ProofWriter: Generating implications, proofs,and abductive statements over natural language. In Chengqing Zong, Fei Xia, Wenjie Li, andRoberto Navigli, editors, Findings of the Association for Computational Linguistics: ACL-IJCNLP2021 , pages 3621–3634, Online, August 2021. Association for Computational Linguistics. doi:10.18653/v1/2021.findings-acl.317. URL https://aclanthology.org/2021.findings-acl.317.Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, LukaszKaiser, and Illia Polosukhin. Attention is all you need. In NIPS , 2017.Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, andStuart Shieber. Investigating gender bias in language models using causal mediation analy-sis. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances inNeural Information Processing Systems , volume 33, pages 12388–12401. Curran Associates,7Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/92650b2e92217715fe312e6fa7b90d82-Paper.pdf .Kevin Ro Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt.Interpretability in the wild: a circuit for indirect object identification in GPT-2 small. InThe Eleventh International Conference on Learning Representations , 2023. URL https://openreview.net/forum?id=NpsVSN6o4ul .Zhengxuan Wu, Atticus Geiger, Thomas Icard, Christopher Potts, and Noah Goodman. Interpretabilityat scale: Identifying causal mechanisms in alpaca. In Thirty-seventh Conference on Neural Infor-mation Processing Systems , 2023. URL https://openreview.net/forum?id=nRfClnMhVX .Anton Xue, Avishree Khare, Rajeev Alur, Surbhi Goel, and Eric Wong. Logicbreaks: A frameworkfor understanding subversion of rule-based inference, 2024. URL https://arxiv.org/abs/2407.00075 .Sohee Yang, Elena Gribovskaya, Nora Kassner, Mor Geva, and Sebastian Riedel. Do large languagemodels latently perform multi-hop reasoning? In Lun-Wei Ku, Andre Martins, and VivekSrikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for ComputationalLinguistics (Volume 1: Long Papers) , pages 10210–10229, Bangkok, Thailand, August 2024.Association for Computational Linguistics. doi: 10.18653/v1/2024.acl-long.550. URL https://aclanthology.org/2024.acl-long.550 .Tian Ye, Zicheng Xu, Yuanzhi Li, and Zeyuan Allen-Zhu. Physics of language models: Part 2.1,grade-school math and the hidden reasoning process, 2024. URL https://arxiv.org/abs/2407.20311 .Matej Ze ˇcevi ́c, Moritz Willig, Devendra Singh Dhami, and Kristian Kersting. Causal parrots: Largelanguage models may talk causality but are not causal. Transactions on Machine Learning Research ,2023. ISSN 2835-8856. URL https://openreview.net/forum?id=tv46tCzs83 .Fred Zhang and Neel Nanda. Towards best practices of activation patching in language models:Metrics and methods. In ICLR , 2024.Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, and Guy Van Den Broeck. Onthe paradox of learning to reason from data. In Proceedings of the Thirty-Second InternationalJoint Conference on Artificial Intelligence , IJCAI ’23, 2023. ISBN 978-1-956792-03-4. doi:10.24963/ijcai.2023/375. URL https://doi.org/10.24963/ijcai.2023/375 .8NeurIPS Paper Checklist1.ClaimsQuestion: Do the main claims made in the abstract and introduction accurately reflect the paper’scontributions and scope?Answer: [Yes]Justification: We indeed summarize the main results and contributions of this work in the abstractand introduction sections.Guidelines:•The answer NA means that the abstract and introduction do not include the claims made inthe paper.•The abstract and/or introduction should clearly state the claims made, including the contribu-tions made in the paper and important assumptions and limitations. A No or NA answer tothis question will not be perceived well by the reviewers.•The claims made should match theoretical and experimental results, and reflect how muchthe results can be expected to generalize to other settings.•It is fine to include aspirational goals as motivation as long as it is clear that these goals arenot attained by the paper.2.LimitationsQuestion: Does the paper discuss the limitations of the work performed by the authors?Answer: [Yes]Justification: We explicitly discuss all the limitations of this work in the conclusion section, fromarchitecture choice, to data model limitations, to limitations in our analysis.Guidelines:•The answer NA means that the paper has no limitation while the answer No means that thepaper has limitations, but those are not discussed in the paper.• The authors are encouraged to create a separate "Limitations" section in their paper.•The paper should point out any strong assumptions and how robust the results are to vi-olations of these assumptions (e.g., independence assumptions, noiseless settings, modelwell-specification, asymptotic approximations only holding locally). The authors shouldreflect on how these assumptions might be violated in practice and what the implicationswould be.•The authors should reflect on the scope of the claims made, e.g., if the approach was onlytested on a few datasets or with a few runs. In general, empirical results often depend onimplicit assumptions, which should be articulated.•The authors should reflect on the factors that influence the performance of the approach. Forexample, a facial recognition algorithm may perform poorly when image resolution is low orimages are taken in low lighting. Or a speech-to-text system might not be used reliably toprovide closed captions for online lectures because it fails to handle technical jargon.•The authors should discuss the computational efficiency of the proposed algorithms and howthey scale with dataset size.•If applicable, the authors should discuss possible limitations of their approach to addressproblems of privacy and fairness.•While the authors might fear that complete honesty about limitations might be used by review-ers as grounds for rejection, a worse outcome might be that reviewers discover limitations thataren’t acknowledged in the paper. The authors should use their best judgment and recognizethat individual actions in favor of transparency play an important role in developing normsthat preserve the integrity of the community. Reviewers will be specifically instructed to notpenalize honesty concerning limitations.3.Theory Assumptions and ProofsQuestion: For each theoretical result, does the paper provide the full set of assumptions and acomplete (and correct) proof?Answer: [NA]9Justification: This is not a theory work.Guidelines:• The answer NA means that the paper does not include theoretical results.•All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.• All assumptions should be clearly stated or referenced in the statement of any theorems.•The proofs can either appear in the main paper or the supplemental material, but if they appearin the supplemental material, the authors are encouraged to provide a short proof sketch toprovide intuition.•Inversely, any informal proof provided in the core of the paper should be complemented byformal proofs provided in appendix or supplemental material.• Theorems and Lemmas that the proof relies upon should be properly referenced.4.Experimental Result ReproducibilityQuestion: Does the paper fully disclose all the information needed to reproduce the mainexperimental results of the paper to the extent that it affects the main claims and/or conclusionsof the paper (regardless of whether the code and data are provided or not)?Answer: [Yes]Justification: We detail our experimental procedures, including architecture choice, trainingprocedure, hyperparameters and metrics used in the Appendix.Guidelines:• The answer NA means that the paper does not include experiments.•If the paper includes experiments, a No answer to this question will not be perceived well bythe reviewers: Making the paper reproducible is important, regardless of whether the codeand data are provided or not.•If the contribution is a dataset and/or model, the authors should describe the steps taken tomake their results reproducible or verifiable.•Depending on the contribution, reproducibility can be accomplished in various ways. Forexample, if the contribution is a novel architecture, describing the architecture fully mightsuffice, or if the contribution is a specific model and empirical evaluation, it may be necessaryto either make it possible for others to replicate the model with the same dataset, or provideaccess to the model. In general. releasing code and data is often one good way to accomplishthis, but reproducibility can also be provided via detailed instructions for how to replicate theresults, access to a hosted model (e.g., in the case of a large language model), releasing of amodel checkpoint, or other means that are appropriate to the research performed.•While NeurIPS does not require releasing code, the conference does require all submissionsto provide some reasonable avenue for reproducibility, which may depend on the nature ofthe contribution. For example(a)If the contribution is primarily a new algorithm, the paper should make it clear how toreproduce that algorithm.(b)If the contribution is primarily a new model architecture, the paper should describe thearchitecture clearly and fully.(c)If the contribution is a new model (e.g., a large language model), then there should eitherbe a way to access this model for reproducing the results or a way to reproduce the model(e.g., with an open-source dataset or instructions for how to construct the dataset).(d)We recognize that reproducibility may be tricky in some cases, in which case authors arewelcome to describe the particular way they provide for reproducibility. In the case ofclosed-source models, it may be that access to the model is limited in some way (e.g.,to registered users), but it should be possible for other researchers to have some path toreproducing or verifying the results.5.Open access to data and codeQuestion: Does the paper provide open access to the data and code, with sufficient instructions tofaithfully reproduce the main experimental results, as described in supplemental material?Answer: [No]Justification: Our experiments rely on internal code, which can be difficult to release.10Guidelines:• The answer NA means that paper does not include experiments requiring code.•Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•While we encourage the release of code and data, we understand that this might not bepossible, so “No” is an acceptable answer. Papers cannot be rejected simply for not includingcode, unless this is central to the contribution (e.g., for a new open-source benchmark).•The instructions should contain the exact command and environment needed to run toreproduce the results. See the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•The authors should provide instructions on data access and preparation, including how toaccess the raw data, preprocessed data, intermediate data, and generated data, etc.•The authors should provide scripts to reproduce all experimental results for the new proposedmethod and baselines. If only a subset of experiments are reproducible, they should statewhich ones are omitted from the script and why.•At submission time, to preserve anonymity, the authors should release anonymized versions(if applicable).•Providing as much information as possible in supplemental material (appended to the paper)is recommended, but including URLs to data and code is permitted.6.Experimental Setting/DetailsQuestion: Does the paper specify all the training and test details (e.g., data splits, hyperparameters,how they were chosen, type of optimizer, etc.) necessary to understand the results?Answer: [Yes]Justification: We present the experimental settings in the main text and the appendix, includingdescriptions of the synthetic dataset, training procedure, hyperparameter choices, metrics used,etc.Guidelines:• The answer NA means that the paper does not include experiments.•The experimental setting should be presented in the core of the paper to a level of detail thatis necessary to appreciate the results and make sense of them.•The full details can be provided either with the code, in appendix, or as supplemental material.7.Experiment Statistical SignificanceQuestion: Does the paper report error bars suitably and correctly defined or other appropriateinformation about the statistical significance of the experiments?Answer: [Yes]Justification: We report error bars when needed.Guidelines:• The answer NA means that the paper does not include experiments.•The authors should answer "Yes" if the results are accompanied by error bars, confidenceintervals, or statistical significance tests, at least for the experiments that support the mainclaims of the paper.•The factors of variability that the error bars are capturing should be clearly stated (for example,train/test split, initialization, random drawing of some parameter, or overall run with givenexperimental conditions).•The method for calculating the error bars should be explained (closed form formula, call to alibrary function, bootstrap, etc.)• The assumptions made should be given (e.g., Normally distributed errors).•It should be clear whether the error bar is the standard deviation or the standard error of themean.•It is OK to report 1-sigma error bars, but one should state it. The authors should preferablyreport a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normalityof errors is not verified.11•For asymmetric distributions, the authors should be careful not to show in tables or figuressymmetric error bars that would yield results that are out of range (e.g. negative error rates).•If error bars are reported in tables or plots, The authors should explain in the text how theywere calculated and reference the corresponding figures or tables in the text.8.Experiments Compute ResourcesQuestion: For each experiment, does the paper provide sufficient information on the computerresources (type of compute workers, memory, time of execution) needed to reproduce theexperiments?Answer: [Yes]Justification: We discuss such details in the Appendix.Guidelines:• The answer NA means that the paper does not include experiments.•The paper should indicate the type of compute workers CPU or GPU, internal cluster, orcloud provider, including relevant memory and storage.•The paper should provide the amount of compute required for each of the individual experi-mental runs as well as estimate the total compute.•The paper should disclose whether the full research project required more compute than theexperiments reported in the paper (e.g., preliminary or failed experiments that didn’t make itinto the paper).9.Code Of EthicsQuestion: Does the research conducted in the paper conform, in every respect, with the NeurIPSCode of Ethics https://neurips.cc/public/EthicsGuidelines ?Answer: [Yes]Justification: The research conforms to the Code of Ethics of Neurips.Guidelines:• The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.•If the authors answer No, they should explain the special circumstances that require a deviationfrom the Code of Ethics.•The authors should make sure to preserve anonymity (e.g., if there is a special considerationdue to laws or regulations in their jurisdiction).10.Broader ImpactsQuestion: Does the paper discuss both potential positive societal impacts and negative societalimpacts of the work performed?Answer: [Yes]Justification: We discuss potential broader impact in section F.Guidelines:• The answer NA means that there is no societal impact of the work performed.•If the authors answer NA or No, they should explain why their work has no societal impact orwhy the paper does not address societal impact.•Examples of negative societal impacts include potential malicious or unintended uses (e.g.,disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deploy-ment of technologies that could make decisions that unfairly impact specific groups), privacyconsiderations, and security considerations.•The conference expects that many papers will be foundational research and not tied toparticular applications, let alone deployments. However, if there is a direct path to anynegative applications, the authors should point it out. For example, it is legitimate to point outthat an improvement in the quality of generative models could be used to generate deepfakesfor disinformation. On the other hand, it is not needed to point out that a generic algorithmfor optimizing neural networks could enable people to train models that generate Deepfakesfaster.12•The authors should consider possible harms that could arise when the technology is beingused as intended and functioning correctly, harms that could arise when the technology isbeing used as intended but gives incorrect results, and harms following from (intentional orunintentional) misuse of the technology.•If there are negative societal impacts, the authors could also discuss possible mitigationstrategies (e.g., gated release of models, providing defenses in addition to attacks, mechanismsfor monitoring misuse, mechanisms to monitor how a system learns from feedback over time,improving the efficiency and accessibility of ML).11.SafeguardsQuestion: Does the paper describe safeguards that have been put in place for responsible releaseof data or models that have a high risk for misuse (e.g., pretrained language models, imagegenerators, or scraped datasets)?Answer: [NA]Justification: The paper poses no such risk.• The answer NA means that the paper poses no such risks.•Released models that have a high risk for misuse or dual-use should be released with necessarysafeguards to allow for controlled use of the model, for example by requiring that users adhereto usage guidelines or restrictions to access the model or implementing safety filters.•Datasets that have been scraped from the Internet could pose safety risks. The authors shoulddescribe how they avoided releasing unsafe images.•We recognize that providing effective safeguards is challenging, and many papers do notrequire this, but we encourage authors to take this into account and make a best faith effort.12.Licenses for existing assetsQuestion: Are the creators or original owners of assets (e.g., code, data, models), used in thepaper, properly credited and are the license and terms of use explicitly mentioned and properlyrespected?Answer: [NA]Justification: The paper does not use existing assets.Guidelines:• The answer NA means that the paper does not use existing assets.• The authors should cite the original paper that produced the code package or dataset.• The authors should state which version of the asset is used and, if possible, include a URL.• The name of the license (e.g., CC-BY 4.0) should be included for each asset.•For scraped data from a particular source (e.g., website), the copyright and terms of serviceof that source should be provided.•If assets are released, the license, copyright information, and terms of use in the packageshould be provided. For popular datasets, paperswithcode.com/datasets has curatedlicenses for some datasets. Their licensing guide can help determine the license of a dataset.•For existing datasets that are re-packaged, both the original license and the license of thederived asset (if it has changed) should be provided.•If this information is not available online, the authors are encouraged to reach out to theasset’s creators.13.New AssetsQuestion: Are new assets introduced in the paper well documented and is the documentationprovided alongside the assets?Answer: [NA]Justification: The paper does not release new assetsGuidelines:• The answer NA means that the paper does not release new assets.•Researchers should communicate the details of the dataset/code/model as part of their sub-missions via structured templates. This includes details about training, license, limitations,etc.13•The paper should discuss whether and how consent was obtained from people whose asset isused.•At submission time, remember to anonymize your assets (if applicable). You can either createan anonymized URL or include an anonymized zip file.14.Crowdsourcing and Research with Human SubjectsQuestion: For crowdsourcing experiments and research with human subjects, does the paperinclude the full text of instructions given to participants and screenshots, if applicable, as well asdetails about compensation (if any)?Answer: [NA]Justification: The paper does not involve crowdsourcing nor research with human subjects.Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Including this information in the supplemental material is fine, but if the main contribution ofthe paper involves human subjects, then as much detail as possible should be included in themain paper.•According to the NeurIPS Code of Ethics, workers involved in data collection, curation, orother labor should be paid at least the minimum wage in the country of the data collector.15.Institutional Review Board (IRB) Approvals or Equivalent for Research with HumanSubjectsQuestion: Does the paper describe potential risks incurred by study participants, whether suchrisks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (oran equivalent approval/review based on the requirements of your country or institution) wereobtained?Answer: [NA]Justification: The paper does not involve crowdsourcing nor research with human subjectsGuidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Depending on the country in which research is conducted, IRB approval (or equivalent) maybe required for any human subjects research. If you obtained IRB approval, you shouldclearly state this in the paper.•We recognize that the procedures for this may vary significantly between institutions andlocations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelinesfor their institution.•For initial submissions, do not include any information that would break anonymity (ifapplicable), such as the institution conducting the review.14Appendix / supplemental materialA Related works, and scope of this workMechanistic interpretability . Our work falls in the area of mechanistic interpretability, which aimsto understand the mechanisms that enable capabilities of the LLM; such studies involve uncoveringcertain “circuits” in the network [Elhage et al., 2021, Olsson et al., 2022, Meng et al., 2022, Vig et al.,2020, Feng and Steinhardt, 2024, Wu et al., 2023, Wang et al., 2023, Hanna et al., 2024, Merulloet al., 2024, McGrath et al., 2023, Singh et al., 2024, Feng et al., 2024]. While the definition of a“circuit” varies across different works, in this paper, our definition is similar to the one in Wang et al.[2023]: it is a collection of model components (attention heads, neurons, etc.) with the “edges” in thecircuit indicating the information flow between the components in the forward pass; the “excitation”of the circuit is the input tokens.Evaluation of reasoning abilities of LLMs . Our work is also related to the line of work whichfocus on empirically evaluating the reasoning abilities of LLMs across different types of tasks [Xueet al., 2024, Chen et al., 2024, Patel et al., 2024, Morishita et al., 2023, Seals and Shalin, 2024,Zhang et al., 2023, Saparov and He, 2023, Saparov et al., 2024, Luo et al., 2024, Han et al., 2024,Tafjord et al., 2021, Hendrycks et al., 2021, Dziri et al., 2024, Yang et al., 2024]. While these studiesprimarily benchmark their performance on sophisticated tasks, our work focuses on understanding“how” transformers reason on logic problems accessible to fine-grained analysis.Analysis of how LLMs reason . There are far fewer studies that focus on providing fine-grainedanalysis of how LLMs reason. To the best of our knowledge, only a handful of works, such asBrinkmann et al. [2024], Xue et al. [2024], Ze ˇcevi ́c et al. [2023], Ye et al. [2024], share similargoals of understanding how transformers perform multi-step reasoning through detailed empiricalor theoretical analysis. However, none studies the [Variable relationships]+[Variable value assign-ment]+[Query] type problem in conjunction with analysis on both small transformers trained purelyon the synthetic problem, and large language models trained on a large corpus of internet data.Activation patching . At its core, activation patching, a.k.a. causal mediation analysis [Vig et al.,2020, Meng et al., 2022, Hase et al., 2024, Heimersheim and Nanda, 2024, Zhang and Nanda, 2024],uses causal interventions for uncovering the internal mechanisms or “circuits” of LLMs that enablethem to perform certain tasks. Typically, the LLM is run on pairs of “source” and “destination”prompts, and we search for components inside the model that “recover” the model’s behavior on thesource prompts by replacing parts of the model’s activation with “source activations” when runningon the destination prompt. The opposite “destination →source” intervention can also be adopted.Scope of this work . We define the scope of our analysis as follows. First, in the shallow transformerexperiments, we focus on the variant which only has self-attention layers in addition to layernormalization, positional encoding, embedding and softmax parameters. While we could havealso included MLP layers, we choose not to because the no-MLP models already achieve 100%accuracy on the problem, and adding MLPs would unnecessarily complicate the analysis. As a secondway to focus the scope of paper, in the Mistral-7B experiments, we do notseek to uncover everymodel component that participates in solving the reasoning problem. We focus more on finding andanalyzing the components that are necessary to the model’s reasoning circuit, and necessary towardsimplementing the reasoning pathway as described before. By doing so, we can fully justify thenecessity of these key components, without guessing about the roles of less-impactful sub-circuits.B Propositional logic problem and examplesIn this section, we provide a more detailed description of the propositional logic problem we study inthis paper, and list representative examples of the problem.At its core, the propositional logic problem requires the reasoner to (1) distinguish which chain typeis being queried (LogOp or linear), and (2) if it is the LogOp chain being queried, the reasoner mustknow what truth value the logic operator outputs based on the two input truth values.Below we provide a comprehensive list of representative examples of our logic problem at length 2(i.e. each chain is formed by one rule). We use [Truth values] to denote the relevant input truth valueassignments (i.e. relevant facts) to the chain being queried below.151. Linear chain queried, [True]• Rules : A or B implies C. D implies E .• Facts : A is true. B is true. D is true .• Question : what is the truth value of C?• Answer : D true. D implies E; E True.2. Linear chain queried, [False]• Rules : A or B implies C. D implies E .• Facts : A is true. B is true. D is false .• Question : what is the truth value of C?• Answer : D false. D implies E; E undetermined.3. LogOp chain queried, LogOp = OR, [True, True]• Rules : AorB implies C. D implies E.• Facts :A is true. B is true. D is true.• Question : what is the truth value of C?• Answer : B true. A or B implies C; C True.Remark. In this case, the answer “A true. A or B implies C; C True” is also correct.4. LogOp chain queried, LogOp = OR, [True, False]• Rules : AorB implies C. D implies E.• Facts :A is true. B is false. D is true.• Question : what is the truth value of C?• Answer : A true. A or B imples C; C True.5. LogOp chain queried, LogOp = OR, [False, False]• Rules : AorB implies C. D implies E.• Facts :A is false. B is false. D is true.• Question : what is the truth value of C?• Answer : A false B false. A or B implies C; C undetermined.6. LogOp chain queried, LogOp = AND, [True, True]• Rules : AandB implies C. D implies E.• Facts :A is true. B is true. D is true.• Question : what is the truth value of C?• Answer : A true B true. A and B implies C; C True.7. LogOp chain queried, LogOp = AND, [True, False]• Rules : AandB implies C. D implies E.• Facts :A is true. B is false. D is true.• Question : what is the truth value of C?• Answer : B false. A and B implies C; C undetermined.8. LogOp chain queried, LogOp = AND, [False, False]• Rules : AandB implies C. D implies E.• Facts :A is false. B is false. D is true.• Question : what is the truth value of C?• Answer : A false. A and B implies C; C undetermined.Remark. In this case, the answer “B false. A and B implies C; C undetermined” is also correct.The length-3 case is a simple generalization of this set of examples, so we do not cover those exampleshere.16C Learner characteristics, and training detailsC.1 Transformer definitionThe architecture definition follows that of GPT-2 closely.Define input x= (x1, x2, ..., x T)∈NT, a sequence of tokens with length T. It is converted into asequence of (trainable) token embeddings Xtoken = (e(x1),e(x2), ...,e(xT))∈Rdin×T. Addingto it the (trainable) positional embeddings P= (p1,p2, ...,pT)∈Rdin×T, we form the zero-th layerembedding of the transformer X0= (e(x1) +p1, ...,e(xT) +pT). The input is processed by theattention blocks as follows.Let the model have Llayers and Hheads. For layer index l∈[L]and head index j∈[H], atten-tion head Al,jis computed by Al,j(Xl−1) =Scausalh ̃Xl−1QTl,jKl,j ̃XTl−1i/√dH ̃Xl−1VTl,with ̃Xl−1=LayerNorm (Xl−1),S(·)being the softmax operator, causal [·]the causal mask opera-tor. The output of the attention block is Al=Xl−1+Concat [Al,1(Xl−1), ...,Al,H(Xl−1)]WTO,l,withWO,lthe square projection matrix (with bias). Finally, we apply an affine classifier (withsoftmax) f(x) =S( ̃XL,TWTclass+bclass)to predict the next word.In this paper, we set the hidden space embedding to 768.C.2 Training detailsIn all of our experiments, we set the learning rate to 10−4, and weight decay to 10−4. For modelswith depth less than 6, we use a batch size of 512, and train the model for 60k iterations; for modelswith depth greater than or equal to 6, we use a batch size of 256, and train for 80k iterations. Weuse the AdamW optimizer in PyTorch, with 5k iterations of linear warmup, following by cosineannealing to a learning rate of 0. Each model is trained on a single V100 GPU; the full set of modelstake around 2 - 3 days to finish training.D Section 3 Experimental setup and observationsProblem specification . In each logic problem instance, the proposition variables are randomlysampled from a pool of 80 variables (tokens). The truth values in the fact section are also randomlychosen. The linear chain is queried 20% of the time; the LogOp chain is queried 80% of the time.Accuracy of different variants of the model for the length-3 problem . We show in Figure 4 belowthat the 3-layer 3-head variant is the smallest model which achieves ̃100% accuracy on the problem.Figure 4: Accuracy of full reasoning and first token for several models on the length-3 problem.17D.1 Routing signal in the second attention blockObservation 1: chain-type disentanglement at QUERY . We construct 200 samples, with thefirst half querying the linear chain, the second half querying the logical-operator chain. We recordthe second-layer’s self-attention block output on these samples, and compute the cosine similaritybetween each pair. We show the result in Figure 5.Figure 5: Disentanglement based on whether QUERY is for the linear chain, observed at the secondself-attention block.Observation 2a: existence of an abstract “routing signal” . We make the following experimentalobservations.1.We generate 1,000 samples whose QUERY is for the linear chain, and compute the average outputembedding of layer-2 self-attention block (post projection matrix). We denote this embedding ashroute .2.(Linear →LogOp intervention) Sample a set of 500 validation samples, all of which query forthelinear chain . In the forward pass of the model on every validation sample, we subtract thehroute to the output of the second attention block — note that this “corrupted” signal from thesecond layer is also received by the third layer. We observe that the model’s first token predictionis 100% of the time the correct first token for the LogOp chain.3.(LogOp →linear intervention) We repeat the above experiment the other way around. We sampleanother 500 validation samples, but in this case they all query the LogOp chain. On every sample,during the forward pass we addhroute to the output of the second attention block. We record theattention weight of the third layer’s attention blocks at the ANSWER token position. We thenaverage these attention weights for each head. We find that all three attention heads place greaterthan 99% attention weight on the QUERY position on average, behaving exactly like when thesample naturally queries for the linear chain.Observation 2b: linearly-decodable linear-chain answer at layer 2 . We simply frame the learningproblem as a linear classification problem. The input vector of the classifier is the same as the inputto the layer-3 self-attention block, equivalently the layer-2 residual-stream embedding. The outputspace is the set of proposition variables (80-dimensional vector). We train the classifier on 5k trainingsamples (all whose QUERY is for the linear chain) using the AdamW optimizer, with learning rate18set to 5×10−3and weight decay of 10−2. We verify that the trained classifier obtains an accuracygreater than 97% on an independently sampled test set of size 5k (all whose QUERY is for the linearchain too).D.2 Answer for the LogOp ChainEvidence 3a: Distinct behaviors of affine predictors at different layers . We train two affine classifiersat two positions inside the model (each with 10k samples): Wresid,l =2at layer-2 residual stream,andWattn,l =3at layer-3 attention-block output, both at the position of ANSWER, with the targetbeing the correct first token. In training, if there are two correct answers possible (e.g. OR gate,starting nodes are both TRUE or both FALSE), we randomly choose one as the target; in testing, wedeem the top-1 prediction “correct” if it coincides with one of the answers. We observe the followingpredictor behavior on the test samples:1.Wattn,l =3predicts the correct answer 100% of the time.2.Wresid,l =2always predicts one of the variables assigned FALSE (in the fact section) if LogOpis the AND gate, and predicts one assigned TRUE if LogOp is the OR gate.Evidence 3b: linearly decodable LogOp information from first two layers . We train an affine classifierat the layer-2 residual stream to predict the LogOp of the problem instance, over 5k samples (andtested on another 5k samples). The classifier achieves greater than 98% accuracy. We note thattraining this classifier at the layer-1 residual stream also yields above 95% accuracy.Evidence 3c: identification of LogOp-chain starting nodes at layer 3 . Attention heads (3,1) and (3,3),when concatenated, produce embeddings which we can linearly decode the two starting nodes ofthe LogOp chain with test accuracy greater than 98%. We also find that they focus their attention inthe rule section of the context (as shown in Figure 6). Due to causal attention, this means that theydetermine the two starting nodes from the LogOp-relevant rules.Remark . The above pieces of observations suggest the “partial information →refinement” process.3To further validate that the embedding from the first two layers are indeed causally linked to thecorrect answer at the third layer, we perform an activation patching experiment.Evidence 3d: layer-2 residual stream at ANSWER is important to correct prediction . We verify thatlayer-3 attention does rely on information in the layer-2 residual stream (at the ANSWER position):•Construct two sets of samples D1andD2, each of size 10k: for every sample X1,n∈ D 1andX2,n∈ D 2, the context of the two samples are exactly the same, except the LogOp isflipped, i.e. if X1,nhas disjunction, then X2,nhas the conjunction operator. If layer 3 of themodel has noreliance on the Resid l=2(layer-2 residual stream) for LogOp information at theANSWER position, then when we run the model on any X2,n, patching Resid l=2(Xn,2)withResid l=2(Xn,1)at ANSWER should notcause significant change to the model’s accuracy ofprediction. However, we observe the contrary: the accuracy of prediction degrades from 100% to70.87%, with standard deviation 3.91% (repeated over 3 sets of experiments).Observation: LogOp-relevant reasoning at the third layer . We show that the output from attentionheads (3,1) and (3,3) (before the output/projection matrix of the layer-3 attention block), namelyA3,1(X2)andA3,3(X2), when concatenated, contain linearly decodable information about the twostarting nodes of the LogOp chain. We frame this as a multi-label classification problem, detailed asfollows:1.We generate 5k training samples and 5k test samples, each of whose QUERY is for the LogOpchain. For every sample, we record the target as a 80-dimension vector, with every entry set to 0except for the two indices corresponding to the two proposition variables which are the startingnodes of the LogOp chain.2.Instead of placing softmax on the final classifier of the transformer, we use the Sigmoid function.Moreover, instead of the Cross-Entropy loss, we use the Binary Cross-Entropy loss (namely the3In fact, the observations suggest that layer 3 performs a certain “matching” operation. Take the OR gate asan example. Knowing which of the three starting nodes (for LogOp and linear chain) are TRUE, and which twonodes are the starting nodes for the LogOp chain are sufficient to determine the first token! This exact algorithm,however, is not fully validated by our evidence; we leave this as part of our future work.19torch.nn.functional.binary_cross_entropy_with_logits in PyTorch, which directlyincludes the Sigmoid for numerical stability).3.We train an affine classifier, with its input being the concatenated Concat [A3,1(X2),A3,3(X2)](a 512-dimensional vector) on every training sample, and with the targets and training loss definedabove. We use a constant learning rate of 0.5×10−3, and weight decay of 10−2. The optimizeris AdamW in PyTorch.4.We assign a “correct” evaluation of the model on a test sample only if it correctly outputs the twotarget proposition variable as the top-2 entries in its logits. We observe that the classifier achievesgreater than 98% once it converges.Figure 6: Attention statistics, averaged over 500 samples, all of which query for the LogOp chain.The x-axis is simply an example prompt that helps illustrate where the attention is really placed at.Observe that only attention head (3,2) pays significant attention to the fact section. The other twoheads focus on the rule section.Reminder: due the the design of the problem, the rule, fact and query sections all have consistentlength for every sample!D.3 Extra remarksObservation 3 supplement: linearly-decodable linear-chain answer at layer 2 . We simply framethe learning problem as a linear classification problem. The input vector of the classifier is the sameas the input to the layer-3 self-attention block, equivalently the layer-2 residual-stream embedding.The output space is the set of proposition variables (80-dimensional vector). We train the classifier on5k training samples (all whose QUERY is for the linear chain) using the AdamW optimizer, withlearning rate set to 5×10−3and weight decay of 10−2. We verify that the trained classifier obtainsan accuracy greater than 97% on an independently sampled test set of size 5k (all whose QUERY isfor the linear chain too).Remarks on truth value determination . Evidence suggests that determining the truth value of thesimple propositional logic problem is easy for the model, as the truth value of the final answer islinearly decodable from layer-2 residual stream (with 100% test accuracy, trained on 10k samples)when we give the model the context+chain of thought right before the final truth value token. Thisis expected, as the main challenge of this logic problem is not about determining the query’s truth20value, but about the model spelling out the minimal proof with careful planning. When abundant CoTtokens are available, it is natural that the model knows the answer even in its second layer.E Mistral 7B: Experimental DetailsE.1 Problem formatIn our Mistral-7B experiments, the input samples have the following properties:1. We give the model 6 (randomly chosen) in-context examples before asking for the answer.2.The problem is length-2: only one rule involving the OR gate, and one linear-chain rule. Moreover,the answer is always true. In particular, the truth values of the two premise nodes of the OR chainalways have one FALSE and one TRUE.3. The proposition variables are all (single-token) capital English letters.The design decision in the first point is to ensure fairness to the LLM which was not trained on ourspecific logic problem. As for the last two point, we restrict the problem in this fashion mainly toensure that the first answer token is unique , which improves the tractability of the analysis. Note thatthese restrictions do not take away the core challenge of this problem: the LLM still needs to processall the context information without CoT to determine the correct first token .An example problem is presented below.Rules: Z or F implies B. D implies C.Facts: D is true. Z is true. F is false.Question: state the truth value of C.Answer: D is true. D implies C; C is true.Rules: U implies Y. G or I implies Q.Facts: I is true. U is true. G is false.Question: state the truth value of Y.Answer: U is true. U implies Y; Y is true.Rules: G or Z implies E. U implies K.Facts: U is true. G is true. Z is false.Question: state the truth value of E.Answer: G is true. G or Z implies E; E is true.Rules: G implies U. Y or A implies V.Facts: Y is true. G is true. A is false.Question: state the truth value of V.Answer: Y is true. Y or A implies V; V is true.Rules: U implies W. H or B implies L.Facts: B is false. U is true. H is true.Question: state the truth value of W.Answer: U is true. U implies W; W is true.Rules: F or A implies Y. E implies I.Facts: A is false. F is true. E is false.Question: state the truth value of Y.Answer: F is true. F or A implies Y; Y is true.Rules: B or F implies D. S implies T.Facts: S is true. F is true. B is false.Question: state the truth value of T.Answer:Remark. To ensure fairness to the LLM, we balance the number of in-context examples which queriesthe OR chain and the linear chain: each has 3 in-context examples. The order in which the in-contextexamples are presented (i.e. the order in which the examples with OR or linear-chain answer) israndom.21E.2 Causal mediation analysisWe provide evidence in this part of the paper primarily relying on a popular technique in mechanisticinterpretability: causal mediation analysis . Our methodology is roughly as follows:1.Suppose we are interested in the role of the activations of certain components of the LLM in acertain (sub-)task. For a running example, say we want to understand what role the attentionheads play in processing and passing QUERY information to the “:” position for inference. Letus denote the activations as Al,h;t(X), representing the activation of head hin layer l, at tokenposition t.2.Typically, the analysis begins by constructing two sets of prompts which differ in subtle ways. Anatural construction in our example is as follows: define sets of samples DorigandDalt, whereXorig,n andXalt,n have exactly the same context, except in Xorig,n , QUERY is for the LogOpchain, while in Xalt,n, QUERY is for the linear chain. Moreover, denote the correct targetsyorig,n andyalt,n respectively.3.We run the LLM on DorigandDalt, caching the attention-head activations. We also obtain thelogits of the model. We can compute the model’s logit differences∆orig,n =logit(Xorig,n )[yorig,n ]−logit(Xorig,n )[yalt,n].For a high-accuracy model, ∆orig,n should be large for most n’s, since it must be able to clearlytell that on an Xorig,n , it is the LogOp chain which is being queried, not the linear chain.4. We now perform intervention for all n, l, h andt:(a)Run the model on Xorig,n , however, replacing the original activation Al,h;t(Xorig,n )bythe altered Al,h;t(Xalt,n). Now let the rest of the run continue.4Let us denote the logitsobtained in this intervened run as logit→alt;(l,h,t)(Xorig,n ).(b) Now compute the intervened logit difference∆orig→alt,n;(l,h,t)=logit→alt;(l,h,t)(Xorig,n )[yalt,n]−logit→alt;(l,h,t)(Xorig,n )[yorig,n ].5.Now average the ∆orig→alt,n;(l,t)’s over nfor every l, handt(recall that nis the sample index).6.This procedure helps us identify components that are significant in processing and passing theQUERY information for inference. Intuitively, an activation that result in a positive and large∆orig→alt,n;(l,t)play a significant role in this subtask, because this activation helps “altering” themodel’s “belief” from “QUERY is for the LogOp chain” to “QUERY is for the linear chain”.7.Remark : due to the symmetry of this running example, it is perfectly sensible to performalt→orig interventions too, by mirroring the above procedures.Each of our experiments are done on 60 samples unless otherwise specified — we repeat someexperiments (especially the attention-head patching experiments) to ensure statistical significancewhen necessary.Calibrated metric . Please note that in this work, we adopt a calibrated/normalized version of theintervened logit difference (aimed at keeping the score’s magnitude between 0 and 1). In particular,we compute the following metric for head (l, h)at token position t:1NPn∈[N]∆orig→alt,n;(l,h,t)−∆orig∆alt−∆orig. (1)where ∆orig =1NPn∈[N]logit(Xn)[yalt,n]−logit(Xn)[yorig,n ], and ∆alt =1NPn∈[N]logit(X′n)[yalt,n]−logit(X′n)[yorig,n ]. The closer to 1 this score is, the stronger themodel’s belief is altered.E.3 QUERY-based patching: discovering the important attention headsOur analysis relies on QUERY-based patching, following the same procedure as detailed in sub-section E.2. In this set of experiments, we discover the main attention heads responsible for processingthe context and performing inference as introduced in the beginning of this Section.4Please note that layers l+ 1toLare influenced at and after token position t, and technically speaking, nowoperate “out of distribution”.22(a) Residual stream patching (b) Attention block patching (c) MLP patchingFigure 7: Query patching at the level of residual streams, attention blocks and MLPs. Highly localizedprocessing of QUERY is observed: a sharp transition occurs at layer 13 in (a), and in (b), only asparse set of attention blocks play a major role in this subtask. Furthermore, (c) shows that the MLPsplay a limited role in this subtask (besides MLP0). (Please zoom in for the details)(a) Single-head patching (b) Head group patching (12,9) (13,11) (19,8) (16,12;14) (16,0) (17,25) (14,24;26) (9,24 - 27) (15,8) Figure 8: Attention head patching, highlighting the ones with the highest intervened logit difference;x-axis is the head index. (a) shows single-head patching, and (b) shows a coarser-grained headpatching in groups. In (b), we only highlight the head groups that are not captured well by (a).Why is QUERY-based patching important to reasoning circuit discovery? To answer thisquestion, there are two points to emphasize first. (1) We know that to solve the reasoning problem,the QUERY token is critical to initiating the reasoning chain: without it, the rules and facts arecompletely useless; with it, the reasoner can then proceed to identify the relevant rules and factsto predict the answer. (2) The prompt pairs differ only by the QUERY token. Based on (1) and(2), we know that if performing the aforementioned QUERY-based causal intervention on a modelcomponent leads to a large intervened logit difference (i.e. it alters the model’s “belief”), then thiscomponent must be integral to the reasoning circuit, because the component is now identified to beQUERY-sensitive andhas causal influence on (parts of) the model’s reasoning actions.High-level interventions . We begin by presenting higher-level patching results in Figure 7, where weintervene at the level of residual streams, attention blocks, and MLPs. We can draw a few insightsfrom these results:1.A sharp transition of “QUERY processing” occurs from layer 12 to layer 13 (indexed from 0) inFigure 7(a) and (b).2.Figure 7(b) shows that a small set of attention blocks are observed to be significant for the “beliefaltering” action, namely those in layers 9, 12, 13, 14, 16 and 19.3.The MLPs, shown in Figure 7(c), play little role in this circuit, except for MLP-0. However, MLP-0 had been observed to act more as a “nonlinear token embedding” than a complex high-levelprocessing unit [Wang et al., 2023]. In the rest of this section, we primarily devote our analysis tothe attention heads, and leave the exact role of the MLPs to future work.Attention-head interventions . Figure 7 helps us locate a small set of attention blocks which areimportant to the task. However, these results alone still leave us with far too many components to23Queried-rule locating Fact processing Queried-rule mover Decision (a) Query (b) Typical attention pattern (c) Value Figure 9: Patching of query and value activations in (a) and (c); we found that intervening the keyactivations only yield trivial scores, so we do not report them here. We show in (b) the typicalattention patterns of a representative set of the attention heads which are identified to be importantin the intervention experiments shown in (a) and (c). There are several distinct observations whichcan be made in (b). Queried-rule locating head (12,9): observe that it correctly locates the queriedrule which ends with Q. Queried-rule mover head (13,11): the only token position which it focuseson is the QUERY token Q. Fact processing head (16,14): attention concentrates in the fact section.Decision head (19,8): attention focused on the correct first answer token K.examine in detail. Therefore, we run a set of finer-grained experiments, intervening on the attentionheads (over the relevant context). The results are shown in Figure 8. We find that, interestingly, onlya very small set of attention heads are central to the “belief altering” of the LLM. More specifically,in Figure 8(a), only attention heads (12,9), (13,11), (14,24;26), (16,0;12;14), (17,25), (19,8) areobserved with relatively high intervened logit differences.We note that Grouped-Query Attention used by Mistral-7B adds subtlety to the analysis5: patching asingle head might not yield a high logit difference since other heads in the same group (which possiblyperform a similar function) could overwhelm the patched head and maintain the model’s previous“belief”. Therefore, we also run a coarser-grained experiment which simultaneously patches theattention heads sharing the same key and value activations, shown in Figure 8(b). This experimentreveals that heads belonging to the group (9, 24 - 27) also have high intervened logit difference.E.4 Causal interventions on the sub-components of attention headsWe aim to understand why the attention heads identified in the last sub-section are important. For now,we continue with QUERY altering in the prompt pairs. Through intervening on the sub-componentsof each attention head, namely their value, key, and query, and through examining details of theirattention weights, we find that there are roughly four types of attention heads. We show the results inFigure 9 (repeated here in the Appendix for convenience):1.Queried-rule locating head. Attention head (12,9)’s query activation has a large intervenedlogit difference according to Figure 3(a), therefore, its query and attention patterns are QUERY-dependent and contribute to altering the model’s “belief”. Furthermore, at the QUERY position,we find that on average , its attention weight is above 90% at the “conclusion” variable of therule being queried. In other words, it is responsible for locating the queried rule, and storing thatrule’s information at the QUERY position.62.Queried-rule mover head. Attention head (13,11)’s value activations have large intervened logitdifference, and intriguingly, its query and key activations do notshare that tendency. This alreadysuggests that its attention pattern performs a fixed action on both the original and altered prompts,and only the value information is sensitive to QUERY . Furthermore, within the relevant context(excluding the 6 in-context examples given), head (13,11) assigns above 50% attention weight tothe QUERY position, and its attention weight at QUERY is about 10 times larger than the second5In Mistral-7B-v0.1, each attention layer has 8 key and value activations, and 32 query activations. Therefore,heads (l, h×4)to(l, h×4 + 3) share the same key and value activation.6Heads (9,25;26), (14,24;26) exhibit similar tendencies, albeit with smaller intervened logit differences.24largest one on average. Recalling the role of layer 12, we find evidence that head (13,11) movesthe QUERY and queried-rule information to the “:” position.73.Fact processing heads. Attention heads (16,12), (16,14) and (17,25)’s query activations havelarge intervened logit differences. Within the relevant context, they place greater than 56%, and70% of their attention respectively in the fact section of the context (starting from “Fact” andending on “.” before “Question”).4. Decision head. Attention head (19,8)’s query activations have large intervened logit differences.Its attention pattern suggests that it is a “decision” head: within the relevant context, when themodel is correct , the head’s top-2 attention weights are always on the correct starting node of thequeried rule and the correct variable in the fact section, and the two token positions occupy morethan 60% of its total attention in the relevant context on average. In other words, it already hasthe answer.E.5 Attention patterns of QUERY-sensitive attention headsIn this subsection, we provide finer details on the attention patterns of the attention heads wediscovered in Section E.3. Note that the attention weights percentage we present in this sectionare calculated by dividing the observed attention weight at a token position by the total amount ofattention the head places in the relevant context, i.e. the portion of the prompt which excludes the 6in-context examples.Queried-rule locating heads . Figure 10 presents the average attention weight the queried-rulelocating heads place on the “conclusion” variable and the period “.” immediately after the queriedrule at the QUERY token position (i.e. the query activation of the heads come from the residualstream at the QUERY token position) — (12,9) is an exception to this recording method, where weonly record its weight on the conclusion variables alone, and already observe very high weight onaverage. The heads (12,9), (14,24), (14,26), (9,25), (9,26) indeed place the majority of their attentionon the correct position consistently across the test samples. The reason for counting the period afterthe correct conclusion variable as “correctly” locating the rule is that, it is known that LLMs tend touse certain “register tokens” to record information in the preceding sentence.Figure 10: Average attention weights of the queried-rule locating heads, along with the standarddeviations. The weights are calculated by dividing the actual attention weight placed on the correct“conclusion” variable of the rule and the period “.” immediately after, by the total amount of attentionplaced in the relevant context (i.e. the prompt excluding the 6 in-context examples). Head (12,9) is anexception: we only record its attention right on the conclusion variable, and still observe 93.0±9.4%“correctly placed” attention on average .We can observe that head (12,9) has the “cleanest” attention pattern out of the ones identified, placingon average 93.0±9.4%of it attention on the correct conclusion variable alone. The more dilutedattention patterns of the other heads likely contribute to their weaker intervened logit difference scoreshown in Section E.3 in the main text.7Heads (15,8), (16,0) also appear to belong to this type, albeit with smaller intervened logit difference.25Queried-rule mover heads . Figure 11 shows the attention weight of the queried-rule mover heads.While they do not place close to 100% attention on the QUERY location consistently (when the queryactivation comes from the residual stream from token “:”, right before the first answer token), thetop-1 attention weight consistently falls on the QUERY position, and the second largest attentionweight is much smaller. In particular, head (13,11) places 54.2±12.5%attention on the QUERYposition on average, while the second largest attention weight in the relevant context is 5.2±1.1%on average (around 10 times smaller; this ratio is computed per sample and then averaged ).An extra note about head (16,0): it does notprimarily act like a “mover” head, as its attention statisticssuggest that it processes the mixture of information from the QUERY position and the “:” position.Therefore, while we present its statistics along with the other queried-rule mover heads here since itdoes allocate significant attention weight on the QUERY position on average, we do not list it as suchin the circuit diagram of Figure 2.Figure 11: Average attention weights of the queried-rule mover heads, along with the standarddeviations. The raw attention patterns are obtained at token position “:” (i.e. the query activationcomes from the residual stream at the “:” position), right before the first answer token, and the exactattention weight (indicated by the blue bars) is taken at the QUERY position; for head (16,0), wealso obtain its attention weight at the “:” position, as we found that it also allocates a large amountof attention weight to this position in addition to the QUERY position. Note : for (15,8), we foundthat it only acts as a “mover” head when the linear chain is being queried, so we are only reportingits attention weight statistics in this specific scenario; the other heads do not exhibit this interestingbehavior, so we report those heads’ statistics in all query scenarios.Fact processing heads . Figure 12 below shows the attention weights of the fact processing heads;the attention patterns are obtained at the “:” position, right before the first answer token, and we sumthe attention weights in the Fact section (starting at the first fact assignment, ending on the last “.” inthis section of the prompt). It is clear that they place significant attention on the Fact section of therelevant context.Remark. There is only one “decision head” which we identified, i.e. head (19,8). Since there is nofurther subtleties with how we recorded its attention weights or peculiar behaviors of the attentionpatterns observed, we do not elaborate further on it in the Appendix.F Potential Broader ImpactThis paper presents work whose goal is to advance the field of Machine Learning, particularly thearea of mechanistic interpretability. There are many potential societal consequences of our work,none which we feel must be specifically highlighted here26Figure 12: Average attention weights of the fact processing heads, along with the standard deviations.The weights are calculated by dividing the actual attention weight placed in the Fact section by thetotal amount of attention placed in the relevant context (i.e. the part of the prompt excluding the 6in-context examples).27 |
jErJ8kansp | STEM-P OM: Evaluating Language ModelsMath-Symbol Reasoning in Document ParsingJiaru ZouUniversity of Illinois at Urbana-ChampaignChampaign, [email protected] WangUniversity of Illinois at Urbana-ChampaignChampaign, [email protected] ThakurUniversity of Illinois at Urbana-ChampaignChampaign, [email protected] KaniUniversity of Illinois at Urbana-ChampaignChampaign, [email protected] in large language models (LLMs) have spurred research into enhancingtheir reasoning capabilities, particularly in math-rich STEM documents. WhileLLMs can generate equations or solve math-related queries, their ability to fullyunderstand and interpret abstract mathematical symbols in long, math-rich docu-ments remains limited. In this paper, we introduce STEM-P OM, a comprehensivebenchmark dataset designed to evaluate LLMs’ reasoning abilities on math symbolswithin contextual scientific text. The dataset, sourced from real-world ArXiv docu-ments, contains over 2K math symbols classified as main attributes of variables,constants, operators, and unit descriptors, with additional sub-attributes includingscalar/vector/matrix for variables and local/global/discipline-specific labels forboth constants and operators. Our extensive experiments show that state-of-the-artLLMs achieve an average of 20-60% accuracy under in-context learning and 50-60% accuracy with fine-tuning, revealing a significant gap in their mathematicalreasoning capabilities. STEM-P OMfuels future research of developing advancedMath-AI models that can robustly handle math symbols.1 IntroductionLarge language models (LLMs) have demonstrated exceptional reasoning abilities across numerousfields [ 17,13,12,25,15]. With the increasing shift towards applying LLMs to complex tasks[6,23,39], the need for supplementary data beyond the general pre-trained datasets has becomeincreasingly important. Among these, mathematical reasoning tasks [ 10,18] have recently drawn theattention of several researchers [ 19,2,45,28] (see Backgrounds in Appendix B). In particular, Part-of-Math Tagging [ 43], the mathematical analog to part-of-speech tagging [ 36] where mathematicaltokens are classified according to a given taxonomy of attributes, continues to gain interest but lacksthe foundational datasets that can support advanced NLP tasks [ 43,38,37]. In addition, integratingmathematical language into NLP models remains a substantial challenge [ 3,29], especially in therealm of document parsing [8, 24, 44]. Traditional semantic parsing methods like LateXML [31] orarXMLiv [ 22] often fall short when applied to math-rich documents, where precision and structuredsyntax are paramount [ 14,32,41]. These methods struggle to accurately perform pattern matchingbetween abstract mathematical symbols and their corresponding XML tag notations. Similarly,recent advanced LLMs, such as ChatGPT [ 26], also face difficulties in understanding and reasoningwith abstract mathematical symbols due to their contextual polymorphism [ 35] (as Figure 3 shown).38th Conference on Neural Information Processing Systems (NeurIPS 2024).Figure 1: The overall pipeline for constructing the STEM-P OMdataset. We extract math symbolswith corresponding text information to formulate the dataset. Each math symbol is initially classifiedinto one of four primary categories based on its definition. Then, the symbol is further categorizedinto secondary categories by the context in which it appears or by the symbol’s dimensionality. AnLLM is evaluated via the first-level and second-level classification tasks.For example, in the linear equation: y=mx+p,yis categorized as a variable. Whereas in thecross-entropy loss function: L(x, y) =−PNi=1xilog(yi), the symbol yrepresents the fixed targetlabels, which is considered a constant for a given dataset. Without the corresponding contextualinformation of a mathematical symbol, LLMs are unable to distinguish between different attributesof the symbol and cannot effectively process related mathematical reasoning tasks. Thus, taggingmath symbols within domain-specific contexts is essential for language models.In this paper, we introduce a novel benchmark dataset, STEM-P OM, designed to evaluate thereasoning capabilities of language models on mathematical symbols across different domains. TheSTEM-P OMdataset consists of 2,109 instances extracted from a random sampling of over 10,000arXiv manuscripts, which are math-rich documents spanning domains such as Mathematics, Physics,Chemistry, and more. We provide a mathematical symbol for each dataset instance, its order inthe document, its main and sub-level attributes, and the corresponding text or expression from theoriginal arXiv paper containing the symbol. Each mathematical symbol in the dataset is classifiedaccording to two levels of attributes [ 42]. The first-level attribute categorizes the symbol as variable,constant, operator, or unit descriptor. The second-level attribute further classifies the symbol into oneof six types based on its first-level category: scalar, vector, matrix, local, global, or discipline-specific.Figure 1 illustrates the dataset’s category distribution. To further enrich the STEM-P OMdatasetwith additional arXiv manuscripts and other math-rich document resources, we also design theSTEM-PoM Labeler , a feasible method for assisting dataset generation by automatically searching,extracting, and recording hand-labeled mathematical symbols and their corresponding context fromlong-text documents.We conduct thorough experiments on the STEM-P OMdataset to assess the mathematical reasoningabilities of seven open- and closed-source language models, including LSTM [ 11], Mixtral-8x7B[20], Llama2-13B [ 40], Llama3-80B [ 9], Claude-3.5-sonnet [ 4], GPT-3.5, and GPT-4o [ 1] withvarious prompting and fine-tuning strategies. The experimental results indicate that STEM-P OMdistinguishes the performance levels across different LLMs, offering insights into the mathematicalsymbol reasoning abilities of these models. In addition, we investigate and analyze the influence ofcontext length on the ability of language models to understand mathematical symbols.2 STEM-P OM Dataset2.1 Data AnnotationSource of Mathematical Symbols. The first crucial step in constructing the dataset is selectinghigh-quality mathematical symbols. For STEM-P OM, we primarily collect these symbols from twosources: 1. Public math-symbol datasets , where we directly utilize candidate mathematical symbols2Statistic NumberTotal Symbols 2,109Unit Descriptor 129Constant 384- Local 171- Global 121- Discipline Specific 92Operator 363- Local 181- Global 105- Discipline Specific 77Variable 1,233- Scalar 601- Vector 599- Matrix 33Avg symbols per article 4.7Avg tokens per sentence 31.8Avg tokens per math symbol 1.07Table 1: STEM-P OM Dataset Statistics Figure 2: Discipline Distribution from Source ArXivfrom the mathematical token definition extraction benchmark, MTDE [ 14]. 2. Raw ArXiv papers[7], where we apply the STEM-PoM Labeler to identify and extract mathematical symbols from theArXiv dataset. We include a detailed description of each source dataset in Appendix A.2.Dataset Construction. After obtaining the mathematical symbols, we categorize each symbolinto different attributes and assign corresponding information to construct the STEM-P OM dataset.Specifically, we first extract the file name and symbol order for each mathematical symbol. Then,for each symbol, we extract the contexts in which the symbol appears, using several predefinedlengths. Following this, we manually classify each symbol into four primary categories: Variable,Constant, Operator, and Unit Descriptor. For Variable, Constant, and Operator, we further categorizethem into sub-level categories. The variable is classified as Vector, Scalar, or Matrix, while Constantand Operator are categorized as Local, Global, or Discipline-Specific. Table 2 outlines the overalldataset structure. We manually examine each entry of the dataset thoroughly to ensure its robustnessand correctness. We provide a detailed explanation of the dataset structure in Appendix A.3 andthe definitions of each level’s attributes in Appendix A.4. Additionally, the STEM-PoM Labeler isdescribed in Appendix A.5.2.2 Dataset StatisticsWe summarize the key statistics of our dataset in this section. Table 1 presents the categorical statis-tics, including the math symbols along with their first- and second-level attributes. The distributionof Variables, Constants, Operators, and Unit Descriptors is 58.5%, 18.2%, 17.2%, and 6.1%, respec-tively. In addition, Figure 2 illustrates the discipline distribution of the source arXiv papers. Ourdataset covers mathematical symbols from various fields, including Mathematics, Physics, Chemistry,Economics, Computer Science, etc.File Name Symbol Order Symbol Main Attribute Sub Attribute Related Contents9509/adap-org9509001.html 0 f Constant Global ...1/ fnoise was discovered...9509/adap-org9509001.html 1 ∆ Operator Global ...can be quantified by studying the displacement ∆X9509/adap-org9509001.html 2 X Unit Descriptor - ...can be quantified by studying the displacement ∆X9509/adap-org9509001.html 3 t Variable Scalar ..after t steps, we can...... ... ... ... ... ...Table 2: STEM-P OM Dataset Structure3Models Context Length Overall Variable Constant Operator Unit DescriptorLSTMOne Sentence 18.7% 24.5% 13.2% 10.3% 27.1%Ten Sentences 22.6% 28.1% 16.8% 15.5% 30.2%Full Manuscript - - - - -Llama2-13BOne Sentence 36.8% 24.1% 39.3% 41.4% 42.7%Ten Sentences 42.7% 35.6% 39.8% 46.9% 48.5%Full Manuscript 45.9% 38.2% 42.8% 50.1% 52.4%Mistral-8x7BOne Sentence 47.3% 38.5% 41.7% 52.9% 56.2%Ten Sentences 49.8% 41.8% 45.9% 58.6% 56.7%Full Manuscript 53.6% 45.7% 48.9% 61.4% 58.2%Llama3-80BOne Sentence 48.9% 41.3% 44.6% 48.5% 61.5%Ten Sentences 53.0% 44.8% 48.8% 54.7% 63.7%Full Manuscript 51.7% 42.7% 43.2% 55.2% 65.8%Claude3.5-SonnetOne Sentence 63.7% 58.6% 62.5% 65.7% 67.8%Ten Sentences 65.9% 61.3% 64.3% 67.9% 70.2%Full Manuscript 66.7% 62.9% 65.8% 68.6% 69.3%GPT-3.5One Sentence 56.8% 51.5% 53.5% 59.4% 62.4%Ten Sentences 58.7% 54.5% 53.6% 61.3% 65.1%Full Manuscript 60.6% 57.2% 56.6% 63.2% 65.2%GPT-4oOne Sentence 64.9% 60.5% 64.2% 64.9% 70.1%Ten Sentences 67.4% 63.7% 66.1% 66.4% 73.5%Full Manuscript 68.5% 64.2% 67.8% 68.1% 73.8%Table 3: First-level classification accuracy with various context lengths.Models Variable Constant Operator(Vanilla) Scalar Vector Matrix Local DS Global Local DS GlobalLSTM 13.8% 15.1% 17.2% 19.2% 17.8% 22.2% 16.6% 11.3% 14.6%Llama2-13B 27.3% 24.4% 21.8% 33.6% 31.5% 33.6% 32.4% 28.3% 32.7%Mistral-8x7B 36.9% 35.8% 21.6% 34.8% 31.2% 37.8% 36.4% 34.8% 35.7%Llama3-80B 38.2% 34.1% 26.7% 37.6% 35.2% 36.1% 39.1% 32.3% 40.2%Claude3.5-Sonnet 53.2% 49.7% 55.8% 55.9% 53.1% 49.6% 56.3% 52.2% 55.9%GPT-3.5 44.5% 45.8% 48.3% 48.5% 42.9% 44.3% 48.4% 43.5% 49.7%GPT-4o 54.6% 51.3% 58.6% 58.4% 54.1% 56.2% 60.5% 57.3% 58.5%Table 4: Second-level classification accuracy with full manuscript input (Ten-sentence input forLSTM). We abbreviate "Discipline Specific" as "DS".3 Experiments3.1 SetupModels. To thoroughly evaluate our dataset across models with varying parameter sizes, we utilizethe following models: LSTM framework [ 11], Llama-2-13B [ 40], Mixtral-8x7B-v0.1 [ 20], andGPT-3.5-turbo-0125 [1].Evaluation Metrics. We apply the Precision Accuracy as our metric for the mathematical symbolclassification task, the metric can be formulated as: Precision Accuracy =Number of correct predictionsTotal number of samplesTraining & Inference Details. We evaluate several models under both pre-training and fine-tuningsettings. Specifically, we train an LSTM model with varying layers and apply the LoRA method[16,47], a PEFT technique, to GPT-3.5. We evaluate other models in the in-context learning setting.Appendix C provides the training and model parameter details.43.2 First-Level Classification Results.Table 3 presents the accuracy results for different models across varying context lengths. The resultshows that the small-parameter-size language model such as the LSTM struggles with lower accuracy,achieving between 18.7% and 22.6%. In contrast, larger models, such as Llama2-13B and Mistral-8x7B, show marked improvements as context length increases, with Mistral-8x7B reaching up to53.6% on the full manuscript. In addition, Claude3.5-Sonnet achieves comparable performancewith GPT-4o across all context lengths, with accuracy consistently above 63.7% and up to 66.7%.GPT-based models exhibit stronger performance overall, with GPT-3.5 achieving between 56.8%and 60.6%. GPT-4o further improves across all context lengths, outperforming other models withan overall accuracy of 68.5% with the full manuscript input. We observe that the performance gapbetween models remains consistent as context length increases. For instance, GPT-4o outperformsLlama3-80B by 16.0%, 14.4%, and 16.8% for context lengths of one sentence, ten sentences, and thefull manuscript, respectively. This consistent performance gap suggests that larger models with morepre-trained knowledge, such as GPT-4o and Claude3.5-Sonnet, exhibit superior scalability with longercontexts. These models are able to more effectively leverage extended context lengths to distinguishbetween mathematical symbols and other nuanced elements in the input prompts. On the other hand,the overall performance gain from increasing context length is more pronounced in smaller models,such as Llama2-13B and Mistral-8x7B, which have less pre-trained knowledge. These modelsbenefit more from extended context as they rely on additional information to compensate for theirlimited pre-training. Larger models like GPT-4o and Claude3.5-Sonnet, which come with extensivepre-trained knowledge, show relatively smaller performance gains as context length increases.3.3 Second-Level Classification Results.Table 4 shows second-level classification accuracy with full manuscript input. In this experiment,we assume that the model got the first-level classification correct. From the results, LSTM performspoorly, with an accuracy as low as 11.3% for predicting the DS. Larger models, like Llama2-13B and Mistral-8x7B, improve performance, especially in classifying Constants (up to 37.8%).Llama3-80B shows moderate improvements, with 40.2% accuracy for Global Operators, indicatingreasonable capabilities in operator classification tasks. Claude3.5-Sonnet and GPT-3.5 show furtherimprovements, particularly in Global Constants and Operators classification. GPT-3.5 achieves48.5% accuracy for Local Constants and 49.7% for Global Operators. Lastly, GPT-4o provides thehighest accuracy overall, reaching 60.5% for Local Operators and 58.6% for Matrix classification.By horizontally comparing the same model performance on different sub-attribute classifications, wefind that the attribute Constants are generally easier to classify compared to Variables and Operatorsacross all sizes of models, as seen by the overall higher accuracy in Constant-related tasks. However,Matrix and DS classification continue to present challenges, even for the largest models, indicatingthat certain structures or content types within manuscripts remain difficult to categorize accurately atthe sub-attribute level.Overall, performance across all models on both first-level and second-level classification tasks showsa clear trend of improvement with increasing context length, highlighting the importance of contextfor accurately classifying mathematical symbols. Additionally, both small and large-size languagemodels show a relatively higher accuracy in identifying Unit Descriptors and Operators comparedto Variables and Constants, indicating that symbols with more distinct contextual or syntacticalpatterns are easier for models to classify. Through the above results, we aim to gain insights into theextent to which different category attributes of mathematical symbols influence LLMs’ understandingof math-rich documents by correctly classifying the symbols in real-world scenarios. We leaveadditional experiments in Appendix D.4 ConclusionIn this paper, we introduce STEM-P OM, a comprehensive benchmark for evaluating languagemodels’ mathematical reasoning abilities to classify math symbols from scientific texts. The datasetincludes over 2,000 math instances sourced from ArXiv papers. Extensive experiments show thatthe best-performing model, achieves only 73.8% and 60.5% for first and second-level Part-of-Math Tagging classification accuracy, highlighting the challenge of extracting and categorizingmathematical symbols from large text corpora.5References[1]Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia LeoniAleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4technical report. arXiv preprint arXiv:2303.08774 , 2023.[2]Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui Zhang, and Wenpeng Yin. Largelanguage models for mathematical reasoning: Progresses and challenges. arXiv preprintarXiv:2402.00157 , 2024.[3]Fatimah Alshamari and Abdou Youssef. Astudy into math document classification using deeplearning, 2020.[4] Anthropic. Claude 3.5 sonnet, 2024. Accessed: 2024-10-14.[5]Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. A neural probabilistic language model.Advances in neural information processing systems , 13, 2000.[6]Tom B Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165 , 2020.[7]Colin B Clement, Matthew Bierbaum, Kevin P O’Keeffe, and Alexander A Alemi. On the useof arxiv as a dataset. arXiv preprint arXiv:1905.00075 , 2019.[8]Rebecca Dridan and Stephan Oepen. Document parsing: Towards realistic syntactic analysis.InProceedings of The 13th International Conference on Parsing Technologies (IWPT 2013) ,pages 127–133, 2013.[9]Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle,Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herdof models. arXiv preprint arXiv:2407.21783 , 2024.[10] Lyn D English. Mathematical reasoning: Analogies, metaphors, and images . Routledge, 2013.[11] Alex Graves and Alex Graves. Long short-term memory. Supervised sequence labelling withrecurrent neural networks , pages 37–45, 2012.[12] Muhammad Usman Hadi, Qasem Al Tashi, Abbas Shah, Rizwan Qureshi, Amgad Muneer,Muhammad Irfan, Anas Zafar, Muhammad Bilal Shaikh, Naveed Akhtar, Jia Wu, et al. Largelanguage models: a comprehensive survey of its applications, challenges, limitations, and futureprospects. Authorea Preprints , 2024.[13] Muhammad Usman Hadi, Rizwan Qureshi, Abbas Shah, Muhammad Irfan, Anas Zafar, Muham-mad Bilal Shaikh, Naveed Akhtar, Jia Wu, Seyedali Mirjalili, et al. A survey on large languagemodels: Applications, challenges, limitations, and practical usage. Authorea Preprints , 2023.[14] Emma Hamel, Hongbo Zheng, and Nickvash Kani. An evaluation of nlp methods to ex-tract mathematical token descriptors. In International Conference on Intelligent ComputerMathematics , pages 329–343. Springer, 2022.[15] Xinyi He, Jiaru Zou, Yun Lin, Mengyu Zhou, Shi Han, Zejian Yuan, and Dongmei Zhang.Conline: Complex code generation and refinement with online searching and correctness testing.arXiv preprint arXiv:2403.13583 , 2024.[16] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang,Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXivpreprint arXiv:2106.09685 , 2021.[17] Jie Huang and Kevin Chen-Chuan Chang. Towards reasoning in large language models: Asurvey. arXiv preprint arXiv:2212.10403 , 2022.[18] Bat-Sheva Ilany, Bruria Margolin, et al. Language and mathematics: Bridging between naturallanguage and mathematical language in solving problems in mathematics. Creative Education ,1(03):138, 2010.6[19] Shima Imani, Liang Du, and Harsh Shrivastava. Mathprompter: Mathematical reasoning usinglarge language models. arXiv preprint arXiv:2303.05398 , 2023.[20] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, ChrisBamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand,et al. Mixtral of experts. arXiv preprint arXiv:2401.04088 , 2024.[21] Diederik P Kingma. Adam: A method for stochastic optimization. arXiv preprintarXiv:1412.6980 , 2014.[22] Michael Kohlhase et al. arxmliv project. https://kwarc.info/projects/arXMLiv/ , 2024.Accessed: 2024-09-17.[23] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Largelanguage models are zero-shot reasoners. Advances in neural information processing systems ,35:22199–22213, 2022.[24] Tak Cheung Lam, Jianxun Jason Ding, and Jyh-Charn Liu. Xml document parsing: Operationaland performance characteristics. Computer , 41(9):30–37, 2008.[25] Yuanchun Li, Hao Wen, Weijun Wang, Xiangyu Li, Yizhen Yuan, Guohong Liu, Jiacheng Liu,Wenxing Xu, Xiang Wang, Yi Sun, et al. Personal llm agents: Insights and survey about thecapability, efficiency and security. arXiv preprint arXiv:2401.05459 , 2024.[26] Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He,Antong Li, Mengshen He, Zhengliang Liu, et al. Summary of chatgpt-related research andperspective towards the future of large language models. Meta-Radiology , page 100017, 2023.[27] Daniel W Lozier. Nist digital library of mathematical functions. Annals of Mathematics andArtificial Intelligence , 38:105–119, 2003.[28] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, HaoCheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematicalreasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255 , 2023.[29] Jordan Meadows and André Freitas. A survey in mathematical language processing. arXivpreprint arXiv:2205.15231 , 2022.[30] Tomas Mikolov. Efficient estimation of word representations in vector space. arXiv preprintarXiv:1301.3781 , 2013.[31] B Miller. Latexml the manual. Web document , 2011.[32] Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. Openwebmath: An opendataset of high-quality mathematical web text. arXiv preprint arXiv:2310.06786 , 2023.[33] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.Language models are unsupervised multitask learners. OpenAI blog , 1(8):9, 2019.[34] Ronald Rosenfeld. Two decades of statistical language modeling: Where do we go from here?Proceedings of the IEEE , 88(8):1270–1278, 2000.[35] Jan Frederik Schaefer and Michael Kohlhase. Towards an annotation standard for stem docu-ments: Datasets, benchmarks, and spotters. In International Conference on Intelligent ComputerMathematics , pages 190–205. Springer, 2023.[36] Helmut Schmid. Part-of-speech tagging with neural networks. arXiv preprint cmp-lg/9410018 ,1994.[37] Ruocheng Shan and Abdou Youssef. Towards math terms disambiguation using machinelearning. In Intelligent Computer Mathematics: 14th International Conference, CICM 2021,Timisoara, Romania, July 26–31, 2021, Proceedings 14 , pages 90–106. Springer, 2021.7[38] Ruocheng Shan and Abdou Youssef. Using large language models to automate annotation andpart-of-math tagging of math equations. In International Conference on Intelligent ComputerMathematics , pages 3–20. Springer, 2024.[39] Yuan Sui, Jiaru Zou, Mengyu Zhou, Xinyi He, Lun Du, Shi Han, and Dongmei Zhang. Tap4llm:Table provider on sampling, augmenting, and packing semi-structured data for large languagemodel reasoning. arXiv preprint arXiv:2312.09039 , 2023.[40] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Openfoundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 , 2023.[41] Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun RLoomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-levelscientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635 ,2023.[42] Wikipedia. Glossary of mathematical symbols, 2023. Accessed: 2024-09-17.[43] Abdou Youssef. Part-of-math tagging and applications. In International Conference on Intelli-gent Computer Mathematics , pages 356–374. Springer, 2017.[44] Dongxiang Zhang, Lei Wang, Luming Zhang, Bing Tian Dai, and Heng Tao Shen. The gap ofsemantic parsing: A survey on automatic math word problem solvers. IEEE transactions onpattern analysis and machine intelligence , 42(9):2287–2305, 2019.[45] Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, AojunZhou, Pan Lu, Kai-Wei Chang, Peng Gao, et al. Mathverse: Does your multi-modal llm trulysee the diagrams in visual math problems? arXiv preprint arXiv:2403.14624 , 2024.[46] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min,Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXivpreprint arXiv:2303.18223 , 2023.[47] Jiaru Zou, Mengyu Zhou, Tao Li, Shi Han, and Dongmei Zhang. Promptintern: Saving inferencecosts by internalizing recurrent prompt during large language model fine-tuning. arXiv preprintarXiv:2407.02211 , 2024.8A STEM-P OM Dataset Supplementary MaterialsA.1 Frequency Analysis on math symbolsFigure 3: Total frequency (show-up times) of the top-50 mathematical symbols in the STEM-P OM.This illustrates the contextual polymorphism of a single mathematical symbol, i.e. it belongs tomultiple different attribute categories depending on the related context or mathematical expression.A.2 Source DatasetMTDE [14] contains approximately 10,000 entries of mathematical symbol names along with theirdefined contexts. Each entry includes a ’short’ definition and a ’long’ definition. A short definition is asingle-word definition, while a long definition consists of one or more words. The data was collectedthrough random sampling from mathematical and scientific arXiv preprint manuscripts, coveringa broad range of disciplines such as Physics, Computer Science, and Biology. For pre-processing,we ensured that the candidate data was generated via a corpus crawler and subsequently pruned andcleaned manually.ArXiv Paper Dataset [22] contains 1.7 million arXiv articles, spanning a wide range of disciplines,including Mathematics, Physics, Chemistry, Economics, and Computer Science. We randomlysample 10,000 articles from this raw dataset and manually ensure that each manuscript is math-rich,containing numerous mathematical expressions and symbols. For pre-processing, we utilize theSTEM-PoM Labeler to extract these symbols along with their surrounding context, ensuring that thedata is representative of real-world mathematical usage across various scientific fields. Additionally,9the extracted symbols and contexts are systematically cleaned and structured to facilitate furtherclassification and analysis.A.3 Dataset Definitions in Table 2File Name: This attribute serves as a reference point, indicating the source of the file. Specifically, itdenotes the arXiv article from which the dataset extracts its contents.Symbol Order: This component records the sequence in which mathematical symbols appear withinthe article. By capturing the ordinal position of each symbol, we facilitate a structured analysis of thesymbols’ progression and their contextual relationships within the document.Symbols: This field encapsulates the mathematical symbols themselves, predominantly consisting ofGreek letters, albeit inclusive of additional characters. The precise documentation of these symbols isparamount for the subsequent analytical processes.Main and Sub Attributes: These attributes categorize each mathematical symbol into specificclasses, delineating a hierarchical structure within the dataset. This classification scheme is vital forunderstanding the symbols’ roles and relationships within the mathematical discourse.Related Contents: This segment comprises the words or sentences surrounding each symbol,embodying a critical resource for our model training. The contextual information surrounding eachsymbol is indispensable, as it imbues our models with a deeper understanding of each symbol’sapplication and significance within the mathematical narrative.A.4 First-Level and Second-Level Attributes DefinitionConstant: A value that does not change in a mathematical expression.Local Constant: Constant that is specific to a particular system or model, such as the gravitationalconstant in a simulation of a specific planetary system.Global Constant: Constant that is applicable in all contexts, like the speed of light in a vacuum.Discipline-specified Constant: Constant that applies to particular fields of study, for instance,Planck’s constant in quantum mechanics.Operator: A symbol that operates on one or more operands.Local Operators: Operator that is applied in a localized or specific context within a discipline, like aself-defined operation in matrix processing.Global Operators: Operators that is used broadly across different disciplines, like the addition ormultiplication operator.Discipline-specified Operators: Operator that is unique to certain fields, such as the Hamiltonianoperator in quantum physics.Variable: A symbol that represents an unknown or changeable quantity in a mathematical expression.Scalar: A quantity that has only magnitude, no direction.Vector: A quantity that has both magnitude and direction.Matrix: A rectangular array of numbers or symbols arranged in rows and columns.A.5 STEM-PoM LabelerDuring the dataset construction, a pivotal step involves the meticulous annotation of each mathematicalsymbol with corresponding tags. This process, inherently labor-intensive and repetitive, necessitatesa systematic approach to mitigate the workload and facilitate collaboration among the research teammembers. To address these challenges, we developed a labeling pipeline designed to streamline thedataset construction process. The UI design is shown in figure 4. The functionalities are delineatedbelow:10Figure 4: The UI Design of STEM-PoM LabelerFile Reading: We initiate the data importing operation progress by importing files from the desig-nated arXiv folder, ensuring a structured and accessible repository of mathematical documents forsubsequent processing.Symbol Identification and Contextualization: For each file, we enumerate and display essentialinformation: the current file being processed, the total number of symbols within, the sequencenumber of the current symbol, the graphical representation of the symbol, and the contextual contentsurrounding the symbol. This feature aids in providing a comprehensive overview and facilitatesaccurate symbol annotation.Annotation Interface: We then present a user-friendly interface offering a set of predefined taggingoptions for each symbol. Through the designed interface, we easily select the most appropriate tagfrom these options, standardizing the labeling process and enhancing the consistency of the dataset.Data Recording: Upon the selection of a tag for a symbol, We record this association by appendinga new line to the dataset, capturing the symbol along with its assigned tag. This systematic datarecording ensures the integrity and scalability of the MTCE dataset.Dataset Evaluation: After constructing the dataset, we manually evaluate the quality and applicabilityof the annotated data. Specifically, we process the evaluation process through the following steps:Consistency Check, Inter-annotator Agreement, Statistical Analysis, and Benchmark Testing.B BackgroundsPart-of-Math (PoM) Tagging : The part-of-math tagging task draws inspiration from similartagging tasks such as part-of-speech tagging [ 36]. In the PoM context, the goal is to label individualmathematical tokens or expressions in math formulas with their corresponding mathematical roles.Such a task is essential for enabling a deeper semantic understanding of mathematical content bymachines. Several datasets or benchmarks have been developed for the part-of-tagging task, butthere also remain several challenges. Abdou [ 43] collects mathematical content, such as formularepresentation and tagging for specific mathematical formula translations and verifications, includingconverting formulae into semantic LaTeX or testing with tools like CAS (Computer Algebra Systems).However, this focus on structured and narrow formula translations does not align with the broader,more diverse text-based tasks required to assess NLP models, due to the lack of scalability featuresin the collected math symbols. Ruocheng [ 37,38] recently evaluated the potential of leveraging11LLMs for automated annotation and Part-of-Math tagging of math symbols. However, their PoMtagging was conducted on the Digital Library of Mathematical Functions (DLMF) [ 27]. Since thesource of math symbols is only one manuscript, the mathematical tokens collected only have a singleclassification type and are self-consistent. In contrast, our dataset incorporates the inherent messinessof published literature across several STEM subjects, where these domain-specific math symbols canhave multiple classifications or meanings depending on the discipline and related context information.Large Language Models: Pre-trained large language models (LLMs) have become a cornerstone inmodern NLP [ 34,46]. These models, which assign probabilities to word sequences by decomposingthe probability of a sequence into the product of conditional probabilities of subsequent tokens,have evolved significantly over time. Early approaches were based on N-gram models, but with theadvent of distributed word embeddings [ 5,30], neural language models gained prominence. Thescalability and performance improvements introduced by these models, along with the availabilityof vast textual data, have enabled the unsupervised pre-training of LLMs. These models, oftenreferred to as foundation models [ 33,23], can then be fine-tuned on smaller, task-specific datasetsto adapt them for various downstream applications. For STEM-P OM, we apply one traditionalsequence-based NLP model, LSTM [11], and several recent LLMs for our dataset evaluation.C Additional Experiment SetupsTraining Details In our experiments, we train an LSTM with varying numbers of layers for themathematical symbol classification tasks. For LLMs, we choose GPT-3.5 and apply a commonparameter-efficient fine-tuning (PEFT) method, LoRA [ 16], to evaluate the model precision perfor-mance. We split our dataset into 80%/10%/10% for training/validation/testing sets.Model Parameters For the LSTM model, we use different layer sizes from {128, 256, 512, 1024}.The hidden state size is set to 256, the learning rate is set from {0.1, 0.01, 0.001}, the training epochis 5, and the batch size is 16. We utilize the Adam optimizer [ 21]. For GPT-3.5 fine-tuning, we usethe GPT-3.5-turbo-0125 model version and set the training epoch to 3. For LoRA fine-tuning, we setthe LoRA rank to 32, batch size to 32, weight decay to 0.01, dropout to 0.1, and learning rate to 1e−4.D Additional ExperimentsD.1 Fine-tuning on STEM-P OMContext Length Overall Variable Constant Operator Unit DescriptorVanilla InferenceOne Sentence 56.8% 51.5% 53.5% 59.4% 62.4%Ten Sentences 58.7% 54.5% 53.6% 61.3% 65.1%Full Manuscript 60.6% 57.2% 56.6% 63.2% 65.2%LoRA Fine-tunedOne Sentence 67.4% 64.8% 67.5% 71.4% 66.1%Ten Sentences 66.9% 65.4% 66.6% 71.3% 64.5%Full Manuscript 62.2% 58.4% 62.2% 65.1% 63.2%Table 5: First-level classification with various context lengths on GPT-3.5 and fine-tuned GPT-3.5.Table 5 shows the comparison result on main attributes between fine-tuned and directly vanilla-referenced GPT3.5. Notably, the fine-tuned GPT-3.5 model achieves an accuracy of 67.4% in theone-sentence context. However, its performance diminishes as the context length increases, with anoticeable drop to 66.9% for ten sentences and further down to 62.2% for full manuscript-lengthcontext. The decreasing trend suggests that while the fine-tuning process improves performance forshorter contexts, the model’s ability to handle longer contexts is hindered.The vanilla inference results also show a similar pattern of improvement with context length, butthe gap between fine-tuned and vanilla inference narrows as the context length grows. For instance,the difference in overall accuracy between fine-tuned and vanilla models is 10.6% for one-sentencecontexts but only 1.6% for full manuscripts.12Overall, the diminishing return for fine-tuned models with longer contexts indicates that fine-tuningamplifies sensitivity to the introduction of noisy or less relevant information when longer contexts areinvolved. The observation also could point to challenges in the fine-tuning process for long-contextLLMs, which require more refined techniques to handle context length effectively.D.2 Ablation StudyModel size(layers) Variable Constant Operator Unit Descriptor128 24.5% 13.2% 10.3% 27.1%256 28.7% 17.9% 15.7% 32.5%512 34.2% 23.2% 24.9% 40.0%1024 46.5% 35.9% 44.2% 51.3%Table 6: LSTM first-level classification accuracy based on different model sizesContext Length Variable Constant Operator Unit DescriptorOne Sentence 24.5% 13.2% 10.3% 27.1%Five Sentence 26.3% 15.6% 14.1% 29.2%Ten Sentence 28.1% 16.8% 15.5% 30.2%Table 7: LSTM first-level classification accuracy based on different input context lengths.Model Performance vs Model Size Table 6 presents the classification accuracy of an LSTM modelfor first-level classification across different model sizes, ranging from 128 to 1024 layers. Note thatwe set the input context length to be one sentence. The results show a clear positive correlationbetween the model size and classification accuracy across all four categories. For the smallest model(128 layers), the accuracy ranges from 10.3% for the Operator class to 27.1% for the Unit Descriptorclass. As the model size increases, the performance improves notably, with the largest model (1024layers) achieving a relatively high-performance gain in accuracy, ranging from 35.9% for the Constantclass to 51.3% for the Unit Descriptor class. The most substantial improvements are observed in theOperator category, where accuracy increases from 10.3% for 128 layers to 44.2% for 1024 layers.These results suggest that larger model sizes are more effective in capturing complex patterns.Model Performance vs Data Input Lengths Table 7 displays the classification accuracy of anLSTM model across varying input context lengths across four categories. A trend of increasingaccuracy can be observed as the input length increases. For instance, in the Variable category, theaccuracy increases from 24.5% for one sentence to 28.1% for ten sentences. Similarly, for theConstant category, accuracy rises from 13.2% for one sentence to 16.8% for ten sentences. TheOperator category shows a modest increase from 10.3% to 15.5% as the input length expands. Finally,for the Unit Descriptor category, accuracy grows from 27.1% to 30.2%. These results suggest thatlonger input data contributes to improved classification accuracy.13 |
j7DZWSc8qu | Inference Scaling Laws:An Empirical Analysis of Compute-OptimalInference for LLM Problem-SolvingYangzhen Wu1∗, Zhiqing Sun2, Shanda Li2, Sean Welleck2, Yiming Yang21Institute for Interdisciplinary Information Sciences, Tsinghua University2School of Computer Science, Carnegie Mellon [email protected]{zhiqings, shandal, swelleck, yiming}@cs.cmu.eduAbstractWhile the scaling laws of large language models (LLMs) training have been exten-sively studied, optimal inference configurations of LLMs remain underexplored.We study inference scaling laws andcompute-optimal inference , focusing on thetrade-offs between model sizes and generating additional tokens with differentinference strategies. As a first step towards understanding and designing compute-optimal inference methods, we studied cost-performance trade-offs for inferencestrategies such as greedy search, majority voting, best-of- n, weighted voting, andtwo different tree search algorithms, using different model sizes and computebudgets. Our findings indicate smaller models (e.g., Llemma-7B) can outperformlarger models given the same computation budgets, and that smaller models pairedwith advanced inference algorithms yield Pareto-optimal cost-performance trade-offs. For instance, the Llemma-7B model, equipped with our novel tree searchalgorithm, consistently outperforms Llemma-34B with standard majority voting onthe MATH benchmark across all FLOPs budgets. We hope these findings contributeto a broader understanding of inference scaling laws for LLMs.21 IntroductionScaling laws of neural networks [Hestness et al., 2017, Rosenfeld et al., 2019] have been establishedacross a range of domains, including language modeling [Kaplan et al., 2020, Hoffmann et al., 2022,OpenAI, 2023], image modeling [Henighan et al., 2020, Yu et al., 2022, Peebles and Xie, 2023],video modeling [Brooks et al., 2024], reward modeling [Gao et al., 2023], and board games [Jones,2021]. These studies have demonstrated how model performance is influenced by both the size of themodel and the amount of training computation. However, there is limited knowledge on how varyingthe compute during inference affects model performance after the model has been trained.To improve the task performance of large language models (LLMs), inference techniques typicallyinvolve additional computation as a performance maximization step at inference time [Nye et al.,2021, Wei et al., 2022, Wang et al., 2022b, Yao et al., 2023, Chen et al., 2024b]. The computationalcost of these techniques must be taken into account for compute-optimal inference . For example,a Monte Carlo Tree Search (MCTS) method [Jones, 2021] may improve task performance, butpotentially require much more compute than simply sampling solutions multiple times. Generallyspeaking, we need a comprehensive understanding of how various inference-time methods (e.g.,best-of- n, majority voting [Wang et al., 2022a]) trade off between performance and cost. To improve∗Work done during the visit at Carnegie Mellon University.2Project Page: https://thu-wyz.github.io/inference-scaling/38th Conference on Neural Information Processing Systems (NeurIPS 2024) Workshop on MATH-AI.our understanding, this paper presents a thorough empirical evaluation with careful analysis overvarious configurations of representative LLMs and inference algorithms.Specifically, we explore how to select an optimal size for the language model and an effectiveinference strategy (e.g., greedy search, majority voting, best-of- n, weighted voting, and their tree-search variants) to maximize performance (i.e., accuracy) with a given compute budget. We controlthe inference computation (FLOPs) of a fixed model by generating more tokens through the languagemodel3, sampling further candidate solutions, and ranking them with a reward model. We analyze theperformance of fine-tuned models of various sizes given different inference FLOPs on mathematicalreasoning benchmarks (e.g., GSM8K test set [Cobbe et al., 2021a] and MATH500 test set [Hendryckset al., 2021, Lightman et al., 2023b]). Our experiments cover several model families, includinggeneral-purpose LLMs (e.g., Pythia [Biderman et al., 2023] & Mistral [Jiang et al., 2023]) as well asmath-specialized ones (e.g., Llemma [Azerbayev et al., 2023]).Our results on Pythia (Fig. 1) illustrate how performance scales with increased inference computeacross various model sizes. Typically, increasing the compute budget leads to higher accuracy untilthe accuracy reaches saturation. As the compute budget increases, smaller models initially performbetter than larger ones, but once the accuracy of the smaller models saturates, the larger modelshave favorable performance. The right panel of Figure 1 demonstrates that the optimal model sizefor inference varies with different levels of computation. However, in real-world deployment, theavailable computation is typically much lower than the point where the accuracy of relatively smallmodels saturates and larger models begin to show their advantage (as shown in Figure 2, where the7B model outperforms the 34B model until 128 Llemma 7B solutions are sampled). This indicatesthat relatively smaller models could be more compute-optimal for inference.We have also found that the commonly-used MCTS method does not perform well with weightedvoting, as it often yields many unfinished solutions, hence having less effective votes. To address thisissue, we propose a novel tree search algorithm, REward BAlanced SEarch (REBASE ), which pairswell with weighted voting and achieves a Pareto-optimal trade-off between accuracy and inferencecompute. The key idea of REBASE is to use a node-quality reward to control node expansion, whicheliminates the need for explicit rollouts while ensuring enough candidate solutions for voting.In our experiments, REBASE consistently outperforms sampling and MCTS methods across allsettings, models, and tasks. Importantly, we find that REBASE with a smaller language modeltypically achieves a Pareto-optimal trade-off. For instance, we show that the Llemma-7B model canachieve competitive accuracy to a Llemma-34B model while using 2×less FLOPs when evaluatingon MATH500 (Fig. 2) or GSM8K (Fig. 3). Moreover, Llemma-7B with REBASE outperformsLlemma-34B with standard majority voting across allcompute budgets. Our results show the valueof using smaller models with advanced inference-time algorithms, and the benefits of new algorithmsfor achieving better returns on inference-time compute.1.1 Problem FormulationWe explore the following question: Given a fixed FLOPs budget, how should one select an optimalmodel size for the policy model, and an effective inference strategy to maximize performance (i.e.,accuracy)? .To address this, we represent the problem-solving error rate E(N, T;S)as a function of the numberof model parameters N, the number of generated tokens Tand the inference strategy S. Thecomputational budget Cis a deterministic function FLOPs (N, T;S), based on NandT. Our goal isto minimize Eunder the test-time compute constraint FLOPs (N, T,S) =C:(Nopt(C), Topt(C);S) = arg min(N,T,S)s.t. FLOPs (N,T,S)=CE(N, T;S)where Nopt(C)andTopt(C)denote the optimal allocation of a computational budget C.Here, the inference computation (FLOPs) for a fixed model can be modulated by generating moretokens with the policy model, e.g., by sampling additional candidate solutions and subsequentlyranking them using a reward model. We primarily consider sampling and tree-search approaches withreranking or Majority V oting as the means to consume more tokens.3Following Uesato et al. [2022], we refer to the main language model generating outputs as the policy model.It can be paired with a reward model, which scores outputs from the policy model to facilitate inference.22 8 32 128 512 2048Inference FLOPs per question (×1011)203040506070T est error on GSM8KInference scaling (Weighted Majority)410M1.4B2.8B6.9B12B0.5 1 2 4 8 16Model size (B)3040506070T est error on GSM8KInference scaling (Weighted Majority)11.512.012.513.013.514.0log(FLOPs)Figure 1: The inference computation scaling laws of Pythia exhibited in error rate on the GSM8Ktest set. We evaluate Pythia model using various sizes and various numbers of sampled solutionsfor majority voting. The leftpanel shows the error rate for each model size decreases steadily whenthe computation increases and converges at the end. The right panel shows the model performancesgiven inference FLOPs budgets. In particular, the three stars highlight the optimal model size under241,244, and 247FLOPs, indicating that the optimal model size can vary given different budgets.Both the xandyaxes are shown in logscale.1.2 Inference StrategiesWe examine sampling-based and tree-search inference strategies.In sampling methods, majority voting (a.k.a. self-consistency [Wang et al., 2022a]) samples multiplereasoning paths, selecting the most frequent answer. When using a reward model, best-of-N selectsthe highest-scoring path, while weighted majority voting selects the answer with the highest weightedscore, based on reward model values.Tree-search methods, recently adapted for LLMs [Yao et al., 2023, Zhang et al., 2023, Zhou et al.,2023, Liu et al., 2024, Choi et al., 2023, Chen et al., 2024a, Tian et al., 2024, Chen et al., 2024a],often pair with value models to guide exploration. Monte Carlo Tree Search (MCTS) is common(Appendix C reviews MCTS), but it tends to be resource-heavy, requiring many more generatedtokens than simpler methods. A more efficient inference strategy is needed, and comparisons oftree-search and sampling methods on computational cost are essential. We present our novel treesearch method REBASE in 3.1.2 Inference Scaling LawsIn order to compare the compute budgets of different models, we plot the figures with the number ofFLOPs used per question during inference. We compute the inference FLOPs based on the standardformula from [Kaplan et al., 2020].Scaling law of compute-optimal inference for model size. Figure 1 shows the relationshipbetween inference compute and error rate for different model sizes. The error rate first decreasessteadily and then starts to saturate. Initially, sampling many times from smaller models is compute-optimal. At larger compute budgets the larger models are preferable, since the performance of smallmodels has saturated. As highlighted in the right panel of Figure 1, the optimal model size variesbased on the inference budget. We performed a regression analysis on inference FLOPs Cand modelsizesNto establish a relationship between a given computational budget and its optimal model size.The resulting equation, log10(C) = 1 .19 log10(N) + 2.03, lets us estimate the optimal inferencemodel size for a specific compute budget.Llemma-7B model achieves competitive accuracy to Llemma-34B model with lower computebudget. Fig. 2 & 3 shows the relationship between error rate and inference FLOPs for Llemma7B and Llemma 34B using different inference strategies. Llemma-7B requires around 2×less totalFLOPs than Llemma-34B to achieve comparable accuracy. This held across inference strategies(sampling strategies, MCTS, REBASE ) and tasks (MATH, GSM8K). This result suggests that, withthe same training dataset and model family, generating more tokens with a suitable inference strategyusing a smaller model can have more favorable cost-performance tradeoffs than using a larger model.34 16 64 256 1024Inference FLOPs per question (×1012)505560657075T est error on MATHInference scaling (Weighted Majority)Sampling (7B)Sampling (34B)MCTS (7B)MCTS (34B)REBASE (7B)REBASE (34B)4 16 64 256 1024Inference FLOPs per question (×1012)5560657075T est error on MATHInference scaling (Best-of-N)Sampling (7B)Sampling (34B)MCTS (7B)MCTS (34B)REBASE (7B)REBASE (34B)Figure 2: The inference computation scaling comparisons across model sizes . The left/right panelshows the problem-solving error rate on MATH based on Weighted Majority/Best-of-N.2 4 8 16 32 64 128 256Inference FLOPs per question (×1012)121620242832T est error on GSM8KInference scaling (Weighted Majority)Sampling (7B)Sampling (34B)REBASE (7B)REBASE (34B)2 4 8 16 32 64 128 256Inference FLOPs per question (×1012)121620242832T est error on GSM8KInference scaling (Best-of-N)Sampling (7B)Sampling (34B)REBASE (7B)REBASE (34B)Figure 3: The inference computation scaling comparisons across model sizes . The left/right panelshows the problem-solving error rate on GSM8K based on Weighted Majority/Best-of-N. MCTS isnot included in the comparison because of its poor compute-accuracy trade-off.3 Compute-Optimal Inference3.1 Reward Balanced Search (REBASE)The REBASE tree search method inherits the exploitation and pruning properties of tree search,while using the reward model alone to estimate the nodes’ qualities without additional computationfor estimating values by sampling children. The details are provided below:Notations. We view the fine-tuned LLM as a policy πθwhich generates the solution step by step.Given a question xand the first ksteps of a solution r1···rk, the(k+ 1)-th step is sampled fromπθ(·|xr1···rk).REBASE effectively generates a solution tree during inference, in which the rootnode the question xand other nodes corresponds to solution steps. When generating solution trees,we generate children of rkby sampling from πθ(·|xr1···rk). Here we slightly abuse notations anduse the corresponding question/solution step to denote a node. The reward of a node rkis generatedby the PRM: R(rk) :=R(qr1···rk).Initialization. Given the question x, balance temperature Tb>0, and sampling number of solutionsN, we sample Ninstances of the first step for the question, yielding all the nodes of depth 1 in thesearch tree. We let the sampling budget of depth 0, B0, toNat initialization.Reward modeling and update. In the i-th iteration, the PRM assigns the rewards to all the nodesat depth i. After that, the algorithm examines whether the solutions up to depth iare complete.Supposing there are Cicompleted solutions, we update the sampling budget using Bi←Bi−1−Ci.IfBi= 0, the process ends, and we obtain Nsolutions.4Exploration balancing and expansion. For all of the nodes njwith reward R(nj)in the depth iofthe tree, we calculate the expansion width of the njas:Wj=RoundBiexp (R(nj)/Tb)Pkexp (R(nk)/Tb).Then we sample Wjchildren for njfor all the nodes in depth i, and start the next iteration.Intuitively, when the balance temperature Tbis small, this method encourages more exploitationwhich put much more compute budget on the nodes with high score, when Tbis large, it encouragesexploration where nodes with high score and low score are exlopred equally. In our experiment, wehave found Tbin the range of (0.1,0.3)works well for our process reward model.3.2 Comparing REBASE to Other BaselinesREBASE is Pareto-optimal. REBASE consistently achieves the best cost-performance tradeoffs,outperforming the sampling-based methods in all settings when fixing the model and the evaluationtask (Fig. 2, 3, 4, and 5). For example, in Figure 2, REBASE is the compute-optimal strategy at allinference compute budgets, with 7B typically the optimal model size. On the other hand, MCTSunderperforms the sampling-based methods at each compute budget, likely due to its costly rollouts(Figure 2) compared to the efficient use of the reward model in REBASE.Table 1 shows that REBASE achieves better accuracy with a lower compute budget compared tosampling-based weighted voting. With the 7B model, REBASE achieves higher accuracy with 7times less compute. This finding is novel, and differs from previous tree search methods that typicallyimprove the performance at the cost of higher computational expense compared to sampling-basedvoting [Chen et al., 2024a, Xie et al., 2023].Weaker models gain more from tree search. For example, our proposed REBASE leads to5.3%,3.3%, and 2.6%performance gains on MATH for Mistral-7B, Llemma-7B, Llemma-34B,respectively. The order of accuracy increase is inversely related to the model’s corresponding greedysearch accuracy on those datasets. This suggests that weaker models, as indicated by their lowergreedy search accuracy, benefit more from tree search methods like REBASE.REBASE saturates later than sampling with higher accuray. From Fig. 2 and Fig. 3, we observethatREBASE saturates later than the sampling methods, with lower final error rates. This is theevidence that REBASE improves the reasoning paths, due to the selection and pruning mechanism inthe intermediate steps, REBASE discards the low quality partial paths and exploring more on goodones, results in a higher probability of sampling high-quality reasoning paths.4 Conclusions and LimitationsConclusions. In this work, we conducted a comprehensive empirical analysis of inference scalinglaw and compute-optimal inference for problem-solving with language models. We examined thescaling effect of computation during inference across different model sizes and found that whileincreased computation generally leads to higher performance, the optimal model size varies with theavailable compute budget. When the computation budget is limited, smaller models can be preferable.Additionally, we introduce our novel tree search method, REBASE, which is more compute-optimalthan both sampling methods and Monte Carlo Tree Search (MCTS). REBASE typically achieveshigher accuracy while using several times less computation than sampling methods. Our resultsunderscore the potential of deploying smaller models equipped with sophisticated inference strategieslike REBASE to enhance problem-solving accuracy while maintaining computational efficiency.Limitations. Our empirical analysis specifically targets mathematical problem-solving. Investigat-ing the inference scaling laws and compute-optimal inference strategies for tasks beyond mathematicalproblem-solving would be a valuable direction for future research. Additionally, we mainly evaluatethe proposed REBASE on the GSM8K and MATH500 datasets. We speculate that the REBASEalgorithm, which assumes access only to a function that assigns scores to nodes, will be effective intasks beyond those studied here.5ReferencesDavid H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmannmachines. Cognitive science , 9(1):147–169, 1985.Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert QJiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model formathematics. arXiv preprint arXiv:2310.10631 , 2023.Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O’Brien, Eric Hal-lahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al.Pythia: A suite for analyzing large language models across training and scaling. arXiv preprintarXiv:2304.01373 , 2023.Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr,Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh.Video generation models as world simulators. 2024. URL https://openai.com/research/video-generation-models-as-world-simulators .Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal,Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models arefew-shot learners. Advances in Neural Information Processing Systems , 33:1877–1901, 2020.Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. Alphamath almost zero: process supervisionwithout process, 2024a.Ziru Chen, Michael White, Raymond Mooney, Ali Payani, Yu Su, and Huan Sun. When is tree searchuseful for llm planning? it depends on the discriminator. arXiv preprint arXiv:2402.10890 , 2024b.Sehyun Choi, Tianqing Fang, Zhaowei Wang, and Yangqiu Song. Kcts: Knowledge-constrained treesearch decoding with token-level hallucination detection, 2023.Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, AdamRoberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM:Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311 , 2022.Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and JohnSchulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 ,2021a.Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solvemath word problems. arXiv preprint arXiv:2110.14168 , 2021b.Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. InInternational Conference on Machine Learning , pages 10835–10866. PMLR, 2023.Alex Graves. Sequence transduction with recurrent neural networks, 2012.Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine,and Dawn Song. The false promise of imitating proprietary llms. arXiv preprint arXiv:2305.15717 ,2023.Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, DawnSong, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. InThirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track(Round 2) , 2021.Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun,Tom B. Brown, Prafulla Dhariwal, Scott Gray, et al. Scaling laws for autoregressive generativemodeling. arXiv preprint arXiv:2010.14701 , 2020.6Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad,Md Patwary, Mostofa Ali, Yang Yang, and Yanqi Zhou. Deep learning scaling is predictable,empirically. arXiv preprint arXiv:1712.00409 , 2017.Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, ElizaRutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al.Training compute-optimal large language models. arXiv preprint arXiv:2203.15556 , 2022.Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot,Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al.Mistral 7b. arXiv preprint arXiv:2310.06825 , 2023.Andy L Jones. Scaling scaling laws with board games. arXiv preprint arXiv:2104.03113 , 2021.Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child,Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models.arXiv preprint arXiv:2001.08361 , 2020.Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ra-masesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitativereasoning problems with language models. arXiv preprint arXiv:2206.14858 , 2022.Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Makinglanguage models better reasoners with step-aware verifier. In Anna Rogers, Jordan Boyd-Graber,and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association forComputational Linguistics (Volume 1: Long Papers) , pages 5315–5333, Toronto, Canada, July2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.291. URLhttps://aclanthology.org/2023.acl-long.291 .Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, JanLeike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprintarXiv:2305.20050 , 2023a.Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike,John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step, 2023b.Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale gen-eration: Learning to solve and explain algebraic word problems. In Proceedings of the 55thAnnual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages158–167, 2017.Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, and AsliCelikyilmaz. Don’t throw away your value model! generating more preferable text with value-guided monte-carlo tree search decoding, 2024.Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan, Pengfei Liu, Yang You, and Hongxia Yang.Let’s reward step by step: Step-level reward model as the navigators for reasoning, 2023.Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, DavidBieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Show your work:Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114 ,2021.OpenAI. Gpt-4 technical report, 2023.William Peebles and Saining Xie. Scalable diffusion models with transformers. In Proceedings ofthe IEEE/CVF International Conference on Computer Vision , pages 4195–4205, 2023.Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.arXiv preprint arXiv:2009.03393 , 2020.Jonathan S Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. A constructive predictionof the generalization error across scales. arXiv preprint arXiv:1909.12673 , 2019.7Virginia Teller. Speech and language processing: an introduction to natural language processing,computational linguistics, and speech recognition, 2000.Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian Yu, Haitao Mi, and Dong Yu. Toward self-improvement of llms via imagination, searching, and criticizing. arXiv preprint arXiv:2404.12253 ,2024.Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, AntoniaCreswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process- andoutcome-based feedback. arXiv preprint arXiv:2211.14275 , 2022.Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Y Wu, and ZhifangSui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. CoRR,abs/2312.08935 , 2023.Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdh-ery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models.International Conference on Learning Representations (ICLR 2023) , 2022a.Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, andHannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions.arXiv preprint arXiv:2212.10560 , 2022b.Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou.Chain-of-thought prompting elicits reasoning in large language models. NeurIPS , 2022.Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, Min-Yen Kan, Junxian He, and Qizhe Xie.Self-evaluation guided beam search for reasoning, 2023.Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and KarthikNarasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXivpreprint arXiv:2305.10601 , 2023.Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan,Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789 , 2(3):5, 2022.Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, ZhenguoLi, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions forlarge language models. arXiv preprint arXiv:2309.12284 , 2023.Shun Zhang, Zhenfang Chen, Yikang Shen, Mingyu Ding, Joshua B. Tenenbaum, and Chuang Gan.Planning with large language models for code generation, 2023.Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, and Yu-Xiong Wang. Languageagent tree search unifies reasoning acting and planning in language models, 2023.84 16 64 256 1024Infer. FLOPs per question (×1012)4550556065707580T est error on MATHLlemma-7BSampling W.M.Sampling BoNREBASE W.M.REBASE BoN1632641282565121024Infer. FLOPs per question (×1012)45505560657075T est error on MATHLlemma-34BSampling W.M.Sampling BoNREBASE W.M.REBASE BoN48163264128256512Infer. FLOPs per question (×1012)50556065707580T est error on MATHMistral-7BSampling W.M.Sampling BoNREBASE W.M.REBASE BoNFigure 4: MATH inference scaling across inference strategies and models (lower is better). Thetested models are Llemma-7B (left), Llemma-34B (middle), & Mistral-7B (right). In the legend,W.M. and BoN refer to weighted majority and best-of- n, respectively.2 4 8 16 32 64Infer. FLOPs per question (×1012)10152025303540T est error on GSM8KLlemma-7BSampling W.M.Sampling BoNREBASE W.M.REBASE BoN16 32 64 128 256Infer. FLOPs per question (×1012)101214161820222426T est error on GSM8KLlemma-34BSampling W.M.Sampling BoNREBASE W.M.REBASE BoN2 4 8 16 32 64Infer. FLOPs per question (×1012)8101214161820222426T est error on GSM8KMistral-7BSampling W.M.Sampling BoNREBASE W.M.REBASE BoNFigure 5: GSM8K inference scaling across inference strategies and models (lower is better). Thetested models are Llemma-7B (left), Llemma-34B (middle), & Mistral-7B (right). In the legend,W.M. and BoN refer to weighted majority and best-of- n, respectively.A Additional ResultsIn this section, we present the results of Llemma models and Mistral model to see how the REBASEperforms on different models. Fig. 4 and 5 shows the scaling behaviors of Llemma models andMistral model on MATH and GSM8k datasets. Table 1 shows the advantage of REBASE in specificcompute settings.Table 1: REBASE with a lower compute budget has better accuracy than sampling with a highercompute budget. We use weighted voting to aggregate candidates for both sampling and REBASE .# SAMPLES FLOP S MATH500MISTRAL -7BSAMPLING 256 8.70×101442.8REBASE 32 1.36×101445.0LLEMMA -7BSAMPLING 256 10.0×101445.5REBASE 32 1.48×101446.8LLEMMA -34BSAMPLING 64 12.1×101446.7REBASE 32 7.08×101449.29B Related WorksMathematical Reasoning with LLMs. Large language models have made significant progressin recent years, and have exhibited strong reasoning abilities [Brown et al., 2020, Hoffmann et al.,2022, Chowdhery et al., 2022, Lewkowycz et al., 2022]. Mathematical problem solving is a keytask for measuring LLM reasoning abilities [Cobbe et al., 2021a, Hendrycks et al., 2021]. Linget al. [2017] first developed the method of producing step by step solutions that lead to the finalanswer. Later, [Cobbe et al., 2021b] extended the work by training a verifier for evaluating andranking solutions. Subsequent research (e.g., Lewkowycz et al. [2022]) has shown the performancebenefits of inference-time techniques such as majority voting [Wang et al., 2022a] and weightedmajority voting [Li et al., 2023]. We choose problem solving in mathematics as the task to studycompute-optimal strategies since it allows us to accurately evaluate problem solving ability.Inference Strategies of LLM Problem Solving. A variety of inference strategies have beendeveloped to generate sequences with a trained model. Deterministic methods such as greedydecoding and beam search [Teller, 2000, Graves, 2012] find highly probable sequences, often yieldinghigh quality results but without diversity. Sampling algorithms (e.g., temperature sampling [Ackleyet al., 1985]) can produce a diverse set of results which are then aggregated to achieve higher accuracy(e.g., via majority voting [Wang et al., 2022a]). Recent methods combine search algorithms withLLMs, including breadth-first or depth-first search [Yao et al., 2023], Monte-Carlo Tree Search(MCTS) [Zhang et al., 2023, Zhou et al., 2023, Liu et al., 2024, Choi et al., 2023], and guided beamsearch [Xie et al., 2023]. All of these methods show that using search at inference time can leadto performance gains in various tasks. However, the trade-off for the improved performance is theuse of compute to perform the search. Analyzing the resulting cost-performance trade-offs remainsunderstudied. In this paper, we systematically analyze the trade-off between compute budget andproblem-solving performance, and propose a tree search method that is empirically Pareto-optimal.Process Reward Models. Process reward models (PRMs) have emerged as a technique to improvethe reasoning and problem-solving capabilities of LLMs. These models assign rewards to theintermediate steps of the LLM generated sequences. PRMs have been shown effective in selectingreasoning traces with a low error rate, and for providing rewards in reinforcement learning-stylealgorithms [Uesato et al., 2022, Polu and Sutskever, 2020, Gudibande et al., 2023]. Ma et al. [2023]applies a PRM to give rewards on the intermediate steps and guide the multi-step reasoning process.The PRM can be either trained on human labeled data [Lightman et al., 2023a] or model-labeledsynthetic data [Wang et al., 2023]. In our work, we use the PRM as the reward model for selectinggenerated solutions, and for selecting which partial solutions to explore in tree search.C MCTS DetailsIn this section, we present additional background on the Monte Carlo Tree Search (MCTS) algorithm.The MCTS process can be formulated as the following steps:Selection. The process begins at the root node. Here, the algorithm recursively selects the childnode that offers the highest Upper Confidence Bound applied to Trees (UCT) value, continuing untila node is reached that has not been expanded. The UCT is calculated using the formulaUCT (s) =Q(s) +Csln (N(Parent( s)))N(s),where Q(s)denotes the quality score of node s,N(s)is the number of visits to node s,Parent( s)denotes the parent node of s, andCis a constant determining the level of exploration.Expansion and evaluation. Upon reaching a non-terminal node s, the node is expanded bygenerating multiple child nodes. Each child node cis then evaluated using a value function V(c),which predicts the potential quality of continuing the sequence from node c.Backpropagation. After evaluation, the algorithm updates the UCT values and the visit countsfor all nodes along the path from the selected node back to the root. For any node nin this path, the10Table 2: Fine-tuning Hyper-parameters: LR refers to the learning rate, BS refers to the batch size.Pythia, Llemma-7B and LLemma-34B are the generators we use in our experiments, RM is short forReward Model. We only use problems from GSM8K to train the Pythia models.Model # Epoch Dataset BS LR Max Seq Length DtypePythia-410M 1 MetaMath (GSM8K) 128 8E-5 768 FP32Pythia-1.4B 1 MetaMath (GSM8K) 128 4E-5 768 FP32Pythia-2.8B 1 MetaMath (GSM8K) 128 3E-5 768 FP32Pythia-6.9B 1 MetaMath (GSM8K) 128 2E-5 768 FP32Pythia-12B 1 MetaMath (GSM8K) 128 1E-5 768 FP32Llemma-7B 1 MetaMath 128 8E-6 1024 FP32Llemma-34B 1 MetaMath 128 8E-6 768 FP32Llemma-34B RM 2 Math-Shepherd 128 1E-5 768 BF16updates are made as follows:N(n)←N(n) + 1,Q(n)←(N(n)−1)Q(n) +V(s)N(n).D Hyper-parametersFinetuning All the hyperparameters for model fine-tuning can be found in Table 2. We preprocessthe MetaMath [Yu et al., 2023] Dataset to make the solutions in a stepwise format.Inference For all the inference strategies, the temperature of the LLM is set to 1.0. Max tokensfor the output is 1024 and max tokens for one step is 256. For REBASE, we chose the balancetemperature (the softmax temperature in the REBASE algorithm) as Tb= 0.1. For MCTS, we setCin the UCT value as 1 and we expand 4,8,16children for the root, 2 children for other selectednodes with total 32, 64, 128 expansions respectively.11 |
hR4Hskr4GX | Constraint-Based Synthetic Data Generation for LLMMathematical ReasoningTimofey Fedoseev1,2, , Dimitar I. Dimitrov2,3, Timon Gehr3, Martin Vechev2,31École Polytechnique2INSAIT, Sofia University "St. Kliment Ohridski"3ETH Zurich{timofei.fedoseev}@polytechnique.edu1{dimitar.iliev.dimitrov}@insait.ai2{timon.gehr, martin.vechev}@inf.ethz.ch3AbstractMathematical reasoning with large language models (LLMs) is an emerging re-search area. A recent breakthrough is the use of off-the-shelf tools LLMs aretrained to utilize to offload complex tasks they cannot perform independently. Un-fortunately, this approach is limited to popular tools, as many specialized tools lackthe data to train these models on. Motivated by our observation that the currenttools used with LLMs are insufficient for solving counting problems, in this work,we explore the problem of using Satisfiability Modulo Theories (SMT) solverswith LLMs. Namely, we leverage the SMT grammar to generate synthetic dataconsisting of problem statements and their solutions represented as Python codeinteracting with the Z3 API. Our experiments show that fine-tuning LLMs onthis dataset substantially enhances their ability to generate accurate Z3 constraintencodings and improves their overall mathematical problem-solving capabilities.1 IntroductionMathematical reasoning is a key emerging area of competence for LLMs. Instead of asking forthe solution of a mathematical problem directly, a common technique is to instead ask for programcode that in turn solves the problem, offloading already easily automatable parts of the mathematicalreasoning to off-the-shelf libraries [ 1,2,3,4]. This works particularly well for popular libraries (suchas SymPy [ 3]), because large amounts of high-quality training data can be found online. Commonapproaches are to train LLMs on solutions obtained from stronger LLMs [ 5], and/or to bootstrapfrom a large number of samples, discarding any incorrect solutions [ 6]. However, not for everyreasoning tool, training data is initially similarly abundant. Therefore, more elaborate data generationapproaches may be helpful in order to more readily tap into the potential of less popular tools.Motivated by the observation that existing methods based on SymPy often struggle to solve countingproblems, we investigate an approach based on Z3Py [ 7], which excels at counting problems with asmall enough answer. As expected due to the relatively lower popularity of Z3Py, even state-of-the-artLLMs such as GPT-4o are not yet proficient users of Z3Py.In this work, we make the following contributions:•A method for generating large-scale, high-quality synthetic data for Z3Py that can be usedto enhance the mathematical problem-solving capabilities of current LLMs.•An additional Z3Py synthetic dataset obtained through filtering of problems from existingdatasets with traditional rejection sampling of solutions from LLMs.•A quantitative evaluation on GPT-4o and GPT-4o-mini against their fine-tuned variantsusing the two approaches for data generation, showing that our new training data generation38th Conference on Neural Information Processing Systems (NeurIPS 2024).method improves the model performance on the Z3Py formalization task more than rejectionsampling solutions of existing problems alone.•A qualitative evaluation, showing the models fine-tuned on our Z3Py data can solve problemsthat they were unable to solve without fine-tuning, even if allowed to use arbitrary libraries.2 Background: Enumerating Solutions with Z3PySatisfiability modulo theories (SMT) generalizes the Boolean satisfiability problem. Constraints caninvolve various different types of mathematical objects. An SMT solver takes in a set of constraintsand determines whether the constraints can be satisfied, producing a solution (“model”) if possible.The Z3 theorem prover is a state-of-the-art open-source SMT solver. Z3Py is the Z3 API forPython. We solve counting problems with Z3Py using the technique of blocking : We iteratively solveconstraints, adding constraints that the solution should not be one of the ones found previously. Thisway, the solver steps through all solutions. Once all solutions have been found, the solver returns thatthe constraints are unsatisfiable. This way we in particular obtain the number of possible solutions.In some cases, state-of-the-art LLMs are able to generate Z3Py code that solves a problem specifiedin natural language. E.g., we prompted GPT-4o with “Please give me code using Z3Py that countsthe number of ways to cover a 4x4 chessboard with 2x1 and 1x2 dominos.”, and shortened the result:1fromz3import*2solver = Solver()34n = 45board = [(x, y) forxinrange(n) foryinrange(n)]6dom = [(Bool(f"h _{x}_{y}"), [(x, y), (x+1, y)]) forxinrange(n-1) foryinrange(n)] \7+ [(Bool(f"v _{x}_{y}"), [(x, y), (x, y+1)]) forxinrange(n) foryinrange(n-1)]8solver.add(And([Sum([v forv, pos indom if(x, y) inpos]) == 1 forx, y inboard]))910count = 011whilesolver.check() == sat:12model = solver.model()13count += 114solver.add(Or([var != model[var] forvar,_indom]))15print(count) # prints 36 (the correct answer)The code first generates pairs of Z3Py Boolean variables and pairs of squares covered, for each wayto place a domino. Then, it adds constraints that say that every square of the board should be coveredby exactly one domino. Finally, the solutions are enumerated using the blocking technique.Unfortunately, despite succeeding in simple cases like this one, LLMs often fail to give a correctencoding, particularly for more involved problem statements. In this work, we show how to fine-tuneLLMs on the task of producing correct code to solve combinatorics problems using Z3Py.3 Datasets Used for Fine-tuningWe now present our methodology for generating datasets of problems with correct Z3Py solutions.3.1 Synthetic Problem GenerationTo address the lack of training data for Z3py, we generate fully synthetic pairs of counting problemswith corresponding Z3py solutions. We focus on four classes of ground sets: Sequences, permutationsof the set {1,2, . . . , n }, numbers in base k, and subsets of the set {1,2, . . . , n }. For each object, wedefine a set of supported constraints, where each constraint includes a natural language descriptionand corresponding Z3py code. Constraints can be applied to the object itself or to an integer parameterderived from the object, such as the sum of the sequence or the number of inversions in a permutation.When generating a problem, we sample a ground set and a set of constraints, where each constrainthas a probability of 1/2of being negated. These constraints are then combined to form a problemand a Z3Py solution. We run the solution and retain only the problems that finish within the timelimit, returning a non-zero answer. Below is an example of a generated synthetic problem:2A subset of the set {1,2, . . . , 6}(no two elements in the subset are consecutiveintegers) and (no three elements in the subset form an arithmetic progression )and(the subset sum is not divisible by 10). Count the number of valid objects.Here is the relevant part of the corresponding Z3py solution generated by our dataset generator:1fromz3import*2subset = [Bool(f'subset _{i}') foriinrange(6)]3subset_sum = Sum([If(subset[i], i + 1, 0) foriinrange(6)])4constraint _1 = And([Not(And(subset[i], subset[i + 1])) foriinrange(6 - 1)])5constraint _2 = And([Not(And([subset[i], subset[j], subset[k]])) \6 foriinrange(6) forjinrange(i + 1, 6) \7 forkinrange(j + 1, 6) ifj - i == k - j])8constraint _3 = subset _sum % 10 == 09constraint _4 = Not(constraint _3)10constraint _5 = And(constraint _2, constraint _4)11constraint _6 = And(constraint _1, constraint _5)12solver = Solver()13solver.add(constraint _6)3.2 Rephrasing ProblemsWe found that fine-tuning on raw synthetic problems leads to severe overfitting to the specific formatof those problems (almost perfect scores on our validation synthetic problems), while degrading theperformance on real problems from the NuminaMath dataset [ 8]. To address this issue and make theproblem statements sound more natural, we rephrase the problems using a language model.To minimize the number of incorrect rephrasings, we use a technique similar to rejection sampling.For each problem, we prompt the language model to generate a naturally sounding problem statement.We then prompt it again to generate a new Z3py solution from the rephrased statement. We verifythat the original and generated solutions produce the same integer answer. If they do, we count boththe rephrasing and the new solution as correct.In our experiments, we rephrased each problem twice and sampled three solutions for each rephrasingat temperature 0.5. For example, for the problem above, we generated the following rephrasing.Consider the set {1, 2, 3, 4, 5, 6}. How many subsets can be formed such thatno two elements in the subset are consecutive integers, no three elements form anarithmetic progression, and the sum of the elements is not divisible by 10?We constructed training sets containing 2624 and 2883 rephrased synthetic problems, for GPT-4o-mini and GPT-4o, respectively. We generated rephrasings and solutions for GPT-4o-mini and GPT-4oindependently to avoid confounding our results with knowledge transfer between the models.3.3 NuminaMath DatasetWe applied our approach to problems from the NuminaMath dataset [ 8]. We excluded the synthetic-math portion, as it contains rephrasings (which may be incorrect) of other problems. We retainedonly problems with integer answers and used the GPT-4o-mini model to filter for counting problems(which are potentially solvable using our Z3py approach). For validation, we selected the amc-aimeportion of the dataset, resulting in 145problems after filtering. The remaining 8935 filtered problemswere used to construct a baseline training set.For each problem in the training set, we generated 3 Z3py solutions for rejection sampling, using GPT-4o-mini and GPT-4o, respectively. This process left us with 2624 training problems for GPT-4o-miniand2883 training problems for GPT-4o.4 EvaluationWe fine-tuned both GPT-4o and GPT-4o-mini for one epoch using a standard learning rate on threedatasets: (i) Problems filtered from the NuminaMath dataset, (ii) synthetic problems, and (iii) a mixof NuminaMath and synthetic problems.3Z3 solutions We evaluated a total of ten models: four before fine-tuning and six after fine-tuning.To assess their performance, we generated 32samples with a temperature of 0.7for each of the 145problems in the validation set. We ran the default and fine-tuned versions of GPT4o-mini and GPT4owith a few-shot prompt specifically asking for solutions written in Z3py. We include two exampleproblems with solutions in all prompts. We additionally ran the default models with a Z3py referenceprepended to the prompt. A solution was considered correct if it executed properly using Z3py andreturned the correct answer. We first compare the models in the pass@k metric [ 9]. For each k,ranging from 1to32, we calculated the expected number of problems solved at least once given thatwe subsample the results on each problem using ksamples. For each problem solved xtimes out of32trials, the probability of at least one success in kattempts is given by:P(solved ) = 1−32−xk32kSumming the probabilities across problems provides the expected number of solved problems per k.We note that for GPT-4o, fine-tuning on synthetic data outperforms fine-tuning on the NuminaMathdataset, and fine-tuning on a mix of the two outperforms both. For GPT-4o-mini, the results are lessclear, as fine-tuning on synthetic data seems to produce the best performance. We also observe thatGPT-4o-mini fine-tuned on synthetic data performs better than GPT-4o.Model Score (maj@32)gpt4o-mini 29gpt4o-mini Z3py reference 23gpt4o-mini NuminaMath 45gpt4o-mini Synthetic 49gpt4o-mini NuminaMath + Synthetic 44gpt4o 45gpt4o Z3py reference 47gpt4o NuminaMath 59gpt4o SyntheticData 59gpt4o NuminaMath + Synthetic 624To enable a fair comparison in a setting where the answers are unavailable, we also calculate thenumber of correctly solved problems for each model using the majority vote across the 32samples.The relative performance of the models aligns with the pass@k metric, supporting the intuition that amodel which solves a problem correctly more frequently is more likely to have a majority of correctsamples.General problem solving Additionally, we prompted the models to generate any Python code,potentially using SymPy, Z3Py, or any other library. On our validation data set, all fine-tuned mod-els managed to solve additional tasks that were not solved without our fine-tuning: The strongestdifference is on GPT-4o, where fine-tuning with NuminaMath yielded 7 additional solutions onour validation set. Of those solutions, 5 used Z3Py. For GPT-4o-mini, NuminaMath and Numina-Math+Synthetic both yielded 4 extra problems solved, where all of the solutions used Z3Py.5 Conclusion and Future WorkWe presented an algorithm for synthesizing training data for fine-tuning LLMs to solve mathematicalproblem statements using SMT solvers. Our experimental results demonstrate that fully syntheticrandom problem statements can be helpful for tasks with less readily available training data. Fur-thermore, we showed that fine-tuning on SMT-based solutions improves the general problem solvingcapabilities of LLMs. Future work includes more advanced synthetic problem statements, usingfine-tuned models to bootstrap harder datasets from existing problem statements, as well as applyingthe approach to other less well-known libraries for mathematical reasoning in addition to Z3Py.5AcknowledgmentsThis research was partially funded by the Ministry of Education and Science of Bulgaria (support forINSAIT, part of the Bulgarian National Roadmap for Research Infrastructure).References[1]Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Minlie Huang, Nan Duan,and Weizhu Chen. Tora: A tool-integrated reasoning agent for mathematical problem solving,2024. URL https://arxiv.org/abs/2309.17452 .[2]Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil,Ryan Prenger, and Anima Anandkumar. Leandojo: Theorem proving with retrieval-augmentedlanguage models, 2023. URL https://arxiv.org/abs/2306.15626 .[3]Edward Beeching, Shengyi Costa Huang, Albert Jiang, Jia Li, Benjamin Lipkin, Zihan Qina,Kashif Rasul, Ziju Shen, Roman Soletskyi, and Lewis Tunstall. Numinamath 7b tir. https://huggingface.co/AI-MO/NuminaMath-7B-TIR , 2024.[4]Ai achieves silver-medal standard solving international mathematical olympiad problems. https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/ .Accessed: 2024-09-26.[5]Shuo Yin, Weihao You, Zhilong Ji, Guoqiang Zhong, and Jinfeng Bai. Mumath-code: Combiningtool-use large language models with multi-perspective data augmentation for mathematicalreasoning, 2024. URL https://arxiv.org/abs/2405.07551 .[6]Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. Star: Bootstrapping reasoningwith reasoning, 2022. URL https://arxiv.org/abs/2203.14465 .[7]Leonardo de Moura and Nikolaj Bjørner. Z3: An efficient smt solver. In C. R. Ramakrishnan andJakob Rehof, editors, Tools and Algorithms for the Construction and Analysis of Systems , pages337–340, Berlin, Heidelberg, 2008. Springer Berlin Heidelberg. ISBN 978-3-540-78800-3.[8]Jia LI, Edward Beeching, Lewis Tunstall, Ben Lipkin, Roman Soletskyi, Shengyi CostaHuang, Kashif Rasul, Longhui Yu, Albert Jiang, Ziju Shen, Zihan Qin, Bin Dong,Li Zhou, Yann Fleureau, Guillaume Lample, and Stanislas Polu. Numinamath.[https://github.com/project-numina/aimo-progress-prize](https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina _dataset.pdf) , 2024.[9]Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, JaredKaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri,Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan,Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian,Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert,Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-V oss, William Hebgen Guss, Alex Nichol, AlexPaino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders,Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa,Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, BobMcGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. Evaluatinglarge language models trained on code, 2021. URL https://arxiv.org/abs/2107.03374 .6A Appendix: Prompts Used in ResearchRephrasing for Synthetic ProblemsRewrite the following problem as if it were a question in a middle school math competition.Ensure that the rewritten question is equivalent to the original while sounding natural.First, analyze the constraints in the initial problem. Be very careful,some constraints can contain double negatives.Provide the final statement formatted as follows: ‘‘‘statement‘‘‘.Z3 Solver (Fine-Tuning & Evaluation)You will be provided with a combinatorial problem related to enumeration.Your task is to write Python code using the Z3 library to solve the problemand print the correct answer.Requirements:1. Z3 Solver: Use the Z3 library to model the problem as a constraint satisfaction problem.2. Correctness: Ensure that your code accurately represents all the constraints of the problem.3. Efficiency: Structure your code to efficiently find and count all valid solutions.4. Output: The program should print only the integer answer -- no additional text orformatting should be included in the output.Guidelines:1. Constraints: Define all necessary variables and constraints clearly.2. Problem Breakdown: Before writing the code, break down the problem into smallercomponents and identify the constraints for each component.3. Symmetry Breaking: Implement symmetry-breaking techniques where necessary to avoidcounting equivalent solutions multiple times.4. Commenting: Add comments to your code to explain the logic and purpose of each part.Problem FilteringYou are given the following problem:\n\n{problem} \n\nDetermine if it is an enumeration problem. By “enumeration,” we mean a complete,ordered listing of all the items in a collection.More precisely, the problem must satisfy each of the following conditions:1. The problem deals with a clearly and explicitly defined,finite set of objects (like permutations, combinations, bounded sequences, graphs, etc.).Problems involving real numbers, functions, infinite sets, etc.,are not considered enumeration problems.2. There must be clear and explicit constraints on individual objects,expressible in first-order logic, that define a subset of the ground set.All problems mentioning constraints on subsets of the ground set must be discarded.3. The main and only objective of the problem must beto count the number of elements in the subset defined by the constraints.All problems mentioning minimization or maximization must be discarded.4. A natural solution must involve going through all the elements of the ground setand checking if they meet the constraints.5. The problem must be challenging to solve analytically; i.e.,it should not be possible to solve it by a simple formula or direct calculation.6. The problem statement must not contain any URLs or diagrams.Do not try to solve the problem. Carefully review each of the listed conditions.All words in the conditions are important.7Please provide a clear answer by putting “Yes” (without quotes) in \boxed{}if the problem satisfies the criteria, and “No” in \boxed{} otherwise.If you are not sure, answer “No.”Example Problem 1**Problem**:Consider the following undirected graph G with 5 vertices:• Vertices: V = \{1, 2, 3, 4, 5\}• Edges: E = \{(1,2), (1,3), (2,3), (3,4), (4,5)\}We need to assign colors to the vertices using colors A, B, and C, such that:1. Adjacent vertices must have different colors.2. Vertex 1 must be colored with color A.3. The number of vertices colored with color B must be exactly 2.How many valid colorings of the graph satisfy these constraints?**Solution **:Step 1: Define Variables:• For each vertex, define an integer variable representing its color.• Colors are encoded as integers: A = 0, B = 1, C = 2.Step 2: Specify Constraints:1. Color Assignment Constraints:• Each vertex’s color variable can take on values 0, 1, or 2.2. Adjacency Constraints:• For each edge (u, v), ensure that the colors of u and v are different.• Specific Vertex Color Constraint:• Vertex 1 must be colored with color A (0).• Cardinality Constraint:• The number of vertices colored with color B (1) must be exactly 2.Step 3: Model Counting:• Use Z3 to find all possible assignments that satisfy the constraints.• Count the number of valid colorings.**Python Code Using Z3 **‘‘‘pythonfrom z3 import *# Create a solver instancesolver = Solver()# Define colorscolors = {’A’: 0, ’B’: 1, ’C’: 2}# Define variables for each vertexv1 = Int(’v1’)v2 = Int(’v2’)v3 = Int(’v3’)v4 = Int(’v4’)v5 = Int(’v5’)vertices = [v1, v2, v3, v4, v5]# Each variable can be 0 (A), 1 (B), or 2 (C)for v in vertices:8solver.add(Or(v == 0, v == 1, v == 2))# Adjacent vertices must have different colorsedges = [(v1, v2), (v1, v3), (v2, v3), (v3, v4), (v4, v5)]for (u, v) in edges:solver.add(u != v)# Vertex 1 must be colored with color A (0)solver.add(v1 == colors[’A’])# The number of vertices colored with color B (1) must be exactly 2num_B = Sum([If(v == colors[’B’], 1, 0) for v in vertices])solver.add(num _B == 2)# Function to generate all solutions efficientlydef all_smt(s, initial _terms):def block _term(s, m, t):s.add(t != m.eval(t, model _completion=True))def fix_term(s, m, t):s.add(t == m.eval(t, model _completion=True))def all_smt_rec(terms):if sat == s.check():m = s.model()yield mfor i in range(len(terms)):s.push()block_term(s, m, terms[i])for j in range(i):fix_term(s, m, terms[j])yield from all _smt_rec(terms[i:])s.pop()yield from all _smt_rec(list(initial _terms))# Perform model countingtotal_colorings = 0for_in all_smt(solver, vertices):total_colorings += 1print(total _colorings)Example Problem 2**Problem**Consider seating 5 people around a circular table. Two of these people are identical twinsand cannot be distinguished from each other.The other three individuals are Alice, Bob, and Charlie.Arrangements that can be obtained from each other by rotation of the table are considered identical(i.e., seating arrangements are counted up to rotational symmetry).How many distinct seating arrangements are possible under these conditions?**Solution **Step 1: Break Rotational Symmetry• Since rotating the table doesn’t create a new arrangement,we can fix one person’s seat to eliminate this symmetry. We’ll fix Alice at seat 0.• Seats: 0 (Alice), 1, 2, 3, 4• Remaining Guests: Bob, Charlie, Twin 1, Twin 2Step 2: Define Variables9• We need variables to represent who is sitting in each seat (excluding the fixed seat 0).• Create integer variables for each seat (from seats 1 to 4).• Each variable will represent the guest sitting in that seat.Step 3: Encode the Guests• Since the twins are identical, we’ll represent them with the same identifier.• Guest Identifiers:• Bob: 1• Charlie: 2• Twin (both twins share this identifier): 3Step 4: Add Constraints1. Domain Constraints:• Each seat variable can take values from the set {1, 2, 3},corresponding to Bob, Charlie, and Twin.2. Uniqueness Constraints:• Bob and Charlie must occupy one seat each, while the two twinsare indistinguishable but must occupy exactly two seats.• Ensure that Bob and Charlie are each seated exactly once.• Ensure that exactly two seats are occupied by the twins.Step 5: Model CountingUse Z3 to find all possible seating arrangements that satisfy the constraints.Count the number of valid seating arrangements.**Python Code Using Z3 **‘‘‘pythonfrom z3 import *# Create a solver instancesolver = Solver()# Define guest identifiersGUESTS = {’Bob’: 1, ’Charlie’: 2, ’Twin’: 3}# Fixed seating: Alice is at seat 0# Variables for seats 1 to 4seat_vars = [Int(f’seat _{i}’) for i in range(1, 5)]# Domain constraints: Each seat must have a guest (Bob, Charlie, or Twin)for var in seat _vars:solver.add(Or([var == GUESTS[guest] for guest in GUESTS]))# Uniqueness constraints for Bob and Charliesolver.add(Sum([If(var == GUESTS[’Bob’], 1, 0) for var in seat _vars]) == 1)solver.add(Sum([If(var == GUESTS[’Charlie’], 1, 0) for var in seat _vars]) == 1)# Total number of twins must be 2solver.add(Sum([If(var == GUESTS[’Twin’], 1, 0) for var in seat _vars]) == 2)# Function to generate all seating arrangementsdef all_smt(s, initial _terms):def block _term(s, m, t):s.add(t != m.eval(t, model _completion=True))def fix_term(s, m, t):s.add(t == m.eval(t, model _completion=True))def all_smt_rec(terms):if sat == s.check():m = s.model()10yield mfor i in range(len(terms)):s.push()block_term(s, m, terms[i])for j in range(i):fix_term(s, m, terms[j])yield from all _smt_rec(terms[i:])s.pop()yield from all _smt_rec(list(initial _terms))# Perform model countingtotal_colorings = 0for_in all_smt(solver, seat _vars):total_colorings += 1print(total _colorings)‘‘‘B Appendix: Example of a Problem Solved After Fine-TuningProblem StatementFind the number of functions $f(x)$ from $\{1, 2, 3, 4, 5\}$ to $\{1, 2, 3, 4, 5\}$that satisfy $f(f(x)) = f(f(f(x)))$ for all $x$ in $\{1, 2, 3, 4, 5\}$.Generated SolutionTo solve this problem using the Z3 library, we need to model the constraints on the function \( f \)such that \( f(f(x)) = f(f(f(x))) \) for all \( x \) in \(\{1, 2, 3, 4, 5\}\).Here’s how you can implement this:‘‘‘pythonfrom z3 import *# Create a solver instancesolver = Solver()# Define the domain and rangedomain = range(1, 6)# Define a function f: {1, 2, 3, 4, 5} -> {1, 2, 3, 4, 5}f = Function(’f’, IntSort(), IntSort())# Add constraints that f maps elements to the domain {1, 2, 3, 4, 5}for x in domain:solver.add(And(f(x) >= 1, f(x) <= 5))# Add the constraint f(f(x)) = f(f(f(x))) for all xfor x in domain:solver.add(f(f(x)) == f(f(f(x))))# Function to generate all solutionsdef all_smt(s, initial _terms):def block _term(s, m, t):s.add(t != m.eval(t, model _completion=True))def fix_term(s, m, t):s.add(t == m.eval(t, model _completion=True))def all_smt_rec(terms):if sat == s.check():11m = s.model()yield mfor i in range(len(terms)):s.push()block_term(s, m, terms[i])for j in range(i):fix_term(s, m, terms[j])yield from all _smt_rec(terms[i:])s.pop()yield from all _smt_rec(list(initial _terms))# Perform model countingtotal_functions = 0for_in all_smt(solver, [f(x) for x in domain]):total_functions += 1print(total _functions)‘‘‘12 |
gt6prlTEGL | HARDM ATH: A Benchmark Dataset for ChallengingProblems in Applied MathematicsJingxuan Fan∗, Sarah Martinson∗, Erik Y. Wang∗, Kaylie Hausknecht∗,Jonah Brenner, Danxian Liu, Nianli Peng, Corey Wang, Michael P. BrennerSchool of Engineering and Applied SciencesHarvard UniversityCambridge, MA 02138, USAAbstractAdvanced applied mathematics problems are not well-represented in existingbenchmarking datasets used to evaluate Large Language Models (LLMs). Toaddress this, we introduce HARDM ATH, the Harvard Approximate ReasoningDataset for Mathematics—a dataset of 1,466 difficult problems inspired by HarvardUniversity’s graduate course on asymptotic methods. The dataset contains a diverseset of challenging applied mathematics problems with worked solutions that employvarious analytical approximation methods. Developing such solutions typicallyrequires multiple modes of analysis—including mathematical reasoning, the use ofcomputational tools, and subjective judgment—making this a challenging problemfor LLMs. We establish a framework that auto-generates an arbitrarily large numberof ‘hard’ applied mathematics problems with approximate analytical solutions thatinclude validity checks against numerical ground-truths. We evaluate frontierLLMs on HARDM ATH-MINI , a sub-sampled test set of 366 problems, as wellas on 40 word problems formulated in applied science contexts. Even leadingclosed-source models like GPT-4 achieve only 43.8% overall accuracy with few-shot Chain-of-Thought prompting, and all models demonstrate significantly lowerperformance compared to results on existing mathematics benchmark datasets. Weadditionally conduct a detailed error analysis to gain insights into the failure casesof LLMs. These results demonstrate limitations of current LLM performance onadvanced graduate-level asymptotic math problems and underscore the importanceof datasets like HARDM ATH to advance mathematical abilities of LLMs.Keywords approximation, asymptotic analysis, benchmark dataset, LLM evaluation,1 IntroductionMany scientific and engineering problems involve mathematical equations, such as integrals, ordinarydifferential equations (ODEs), and partial differential equations (PDEs), that rarely have closed-formsolutions. Traditional mathematics courses and most Large Language Model (LLM) benchmarkdatasets focus on problems with exact, analytical solutions. However, these benchmarks overlooka large class of math problems often arising in applied sciences that require approximate solutions,which are essential for gaining insights into complex systems. Numerical solutions to such problemscan be useful, but they lack the explanatory power offered by approximate analytical methods, e.g.asymptotic and applied analysis.∗Equal contributionAll data and code used for this paper can be found at: https://github.com/sarahmart/HARDMath.38th Conference on Neural Information Processing Systems (NeurIPS 2024).To address this gap, we introduce HARDM ATH, the Harvard Approximate Reasoning Datasetfor Mathematics. This benchmark dataset is designed to evaluate LLMs on their ability to solveapplied mathematics problems that require approximation techniques. HARDM ATH contains1,466 problems inspired by Harvard University’s graduate course on asymptotic methods; it coverspolynomials, ODEs, and integrals that often arise in real scientific and engineering contexts but thatcannot be solved exactly. The dataset emphasizes problems that require advanced mathematicalreasoning and approximations, offering a more challenging and diverse testbed for LLMs comparedto existing datasets, which mostly focus on simpler, symbolically solvable calculations [1, 2, 3, 4].Rather than sourcing problems from textbooks or standardized tests, we develop a codebase forautomatically generating problems and step-by-step solutions. Our dataset includes a larger set forfine-tuning and two test sets for evaluating LLMs’ mathematical reasoning on approximation methods.Here, we evaluate the accuracy of LLMs on our dataset and study their common error modes. Wefind that current LLMs perform poorly overall on these problems and demonstrate significant roomfor improvement.2 Related work2.1 Mathematics DatasetsMost mathematics datasets for evaluating or training LLMs focus on elementary arithmetic or wordproblems. Notable examples include MATH (12,500 high school competition-style problems) [ 3],GSM8K (8,500 multistep grade-school problems) [ 4], and ODYSSEY -MATH (387 hand-curatedproblems across various difficulty levels) [ 5]. While these datasets are valuable for assessing basicLLM math performance, most are limited in scope and complexity.Recent efforts targeting more advanced problems are often manually sourced. Datasets likeJEEB ENCH [6] and a subset of MATHBENCH [1] include some college-level topics, such asODEs and multivariable calculus. GHOSTS includes more advanced problems from graduate-leveltexts on functional analysis, topology, and probability theory [ 7], while ARB features formal mathproblems from qualifying exams at Harvard and Berkeley [ 8]. However, these datasets often (1) arelimited in size and scalability, (2) focus on formal mathematics, or (3) cull problems from textbooksprotected by copyrights. Notably, none of the existing datasets (Table 1) focus on advanced appliedmathematics. HARDM ATH fills this gap by presenting a large corpus of problems that requireapproximation techniques from asymptotics to be solved. HARDM ATH is also highly scalable witha codebase for data generation. Since these problems cannot be formalized using tools like Leanor solved with symbolic computation software, they present the ideal domain for evaluating howLLMs integrate natural language reasoning and code-based tools to solve out-of-training sample mathproblems.Table 1: Comparison of HARDM ATH with related datasets. Note that for all datasets excludingMATH , we report the number of relevant problems at a comparable difficulty to our dataset (e.g.,THEORY -KNOWLEDGE -COLLEGE inMATHBENCH , and GRAD-TEXT andHOLES -IN-PROOFSfrom GHOSTS .)HARDM ATH is the largest graduate-level dataset.Dataset Size Data Generation DifficultyMATH [3] 12.5K Manual High SchoolMATHBENCH -T[1] 632 Manual, Algorithmic UndergraduateJEEB ENCH [6] 236 Manual High SchoolGHOSTS [7] 190 Manual GraduateARB [8] 34 Manual GraduateHARDM ATH (Ours) 1.4K Algorithmic Graduate2.2 Recent interest in advanced mathematics reasoningAs LLMs continue to improve, there has been growing interest in developing more challengingbenchmarks. A notable example is the recent open challenge, Humanity’s Last Exam , which aims tocreate the world’s most difficult public AI benchmark, requesting questions that "only exceptional2individuals can answer correctly" and do not involve "straightforward calculation/computation"[9]. Similarly, frontier models have been advancing quickly, and many are explicitly focused onquantitative and scientific reasoning, such as OpenAI’s recent o1 series. In line with our motivationfor developing HARDM ATH to better track the progress of LLMs, OpenAI argues that "recentfrontier models do so well on MATH andGSM8K that these benchmarks are no longer effective atdifferentiating models" [10].3 DatasetsHARDM ATH contains four problem classes with seven distinct problem types covering nondimen-sionalization, polynomial root-finding, ODEs, and integrals, as well as 40 handwritten word problemsdesigned to place the problems in applied scientific contexts (see Appendix A.1 for problem details).The main dataset (1,060 problems) is suitable for model development, while the HARDM ATH-minievaluation set (366 problems) is used for benchmarking LLM performance. Fig. 1 shows a breakdownof the datasets by problem type.(a)HARDM ATH-MINI dataset (b)HARDM ATH datasetFigure 1: Breakdowns of the HARDM ATH-MINI (left) and the HARDM ATH (right) datasets.Solutions to all HARDM ATH problems share a common reasoning framework; the Method of Domi-nant Balance simplifies problems by focusing on terms that ‘dominate’ the solution’s behavior andcan significantly simplify the equation [ 11]. Solution methods also involve combining sophisticatedcomputational and analytical techniques, such as self-consistency checks and the use of numericalmethods. To solve these problems, subjective decisions about solution regimes to consider, terms toinclude, and approximation methods must be made with rigorous justification, which is challengingfor current LLMs.Implementation of this reasoning framework is realized through a robust data generation process.The data generation code uses SymPy [12] and SciPy [13] to implement mathematical procedures forgenerating approximate analytical solutions tailored to each problem class. Problems are generatedrandomly by combining sets of random coefficients, functional forms, and initial conditions. Solutionsare generated algorithmically, with key steps described in explanatory texts. The main results areembedded in the LaTeX \boxed{} command, following conventions from other mathematics datasets(e.g. MATH [3]). Each problem type includes: 1) L ATEX-formatted problem statements, 2) L ATEX-formatted solution steps, 3) accuracy demonstrations comparing analytical and numerical solutions,and 4) metadata descriptors of the problem and solution types (Appendix A.1).We evaluate solutions by calculating the relative error between analytical and numerical results atselected evaluation points. Problems are included in the dataset only if their solutions are within 10%of the numerical ground-truth, ensuring that all problems in HARDM ATH maintain high accuracy.For polynomial root correction problems, we further check that the corrections improve on theoriginal approximation.34 Evaluation4.1 Model choice and evaluation protocolsWe evaluate several leading LLMs on HARDM ATH-MINI , a subset of 366 problems representativeofHARDM ATH (Fig. 1). Closed-source LLMs evaluated include GPT-3.5 [ 14,15,16], GPT-4 [ 17]and o1-mini [ 18], open-source LLMs include Llama3 [ 19] and CodeLlama [ 20]. All models aretested in zero- and few-shot settings with Chain-of-Thought (CoT) prompting, which encouragescomplex reasoning capabilities by providing intermediate steps in sample answers [ 21]. Prompts andhyper-parameters are detailed in Appendix A.3.4.We focus our evaluation on the four key problem types in HARDM ATH:Nondim (symbolic andnumerical nondimensionalization), Roots (polynomial root-finding), ODEs (nonlinear ODEs), andIntegrals (traditional and Laplace integrals). Models are evaluated for accuracy and common er-ror modes using zero- and few-shot CoT prompting. Prompts contain example question-solutionpairs, problem setup and formatting hints (Appendix A.3.1). Following Hendrycks et al. [3], auto-matic assessment compares the final model-generated answer (A.3.1) to the true solution (both inLATEX\boxed{} environments), using SymPy -based [ 12] equivalence checks and numerical evalua-tions. We also develop a procedural grading system using GPT-4o to (1) provide intermediate stepgrading for multi-step solutions, and (2) assess models’ ability to make approximation judgments,which allow for a range of self-consistent solutions. Rubrics are adapted from grading guidelines ofthe course that inspired the HARDM ATH problems (Appendix A.3.2 ). Human grading on a subsetof LLM responses shows good alignment with GPT-based grading, with average score adjustmentssummarized in Appendix A.3.3.4.2 Model performance and error mode analysisHere, we report the accuracy of models across problem types and prompting settings (Table 2,Appendix A.4, Fig. 3). Few-shot CoT prompting enhances model performance across the board,particularly for o1-mini and GPT-4, which demonstrate the most substantial improvements, consistentwith findings from [ 21] (Fig. 3a). The performance increase associated with prompting varies byproblem-type; gains tend to saturate quickly on more challenging problems such as ODEs (AppendixA.4, Fig. 4). Notably, OpenAI’s new o1-mini, though with much smaller parameter size, outperformsother models at all tested shot levels, confirming its optimization for STEM reasoning [18].o1-mini with 5-shot CoT achieves the highest overall accuracy of 62.3%, while Llama3-8b achieves20.2%, the highest among open-source models. In contrast, Llama3-8b performs significantly betteron existing datasets, achieving 30.0% on MATH (4-shot CoT) and 79.6% on GSM-8K (8-shotCoT)[ 19]), compared to its 20.2% on HARDM ATH-MINI . GPT-4 also shows strong performanceonMATH (72.2%, 0-shot CoT), GSM-8K (92.0%, 5-shot CoT) [ 22,17], and a recently releasedadvanced mathematical dataset MINI GHOSTS (average score of 4.15 out of 5). Yet, GPT-4 achievesonly 43.8% on HARDM ATH-MINI . Similarly, o1-mini demonstrates 90.0% accuracy on MATH-500with 0-shot CoT [ 18], but achieves only 62.3% accuracy on HARDMath-mini with 5-shot CoT.This reveals a significant performance increase compared to other models on some (e.g. Nondim )but not all problem types. This suggests that the HARDM ATH benchmark presents problems thatremain challenging and unfamiliar to even the most performant LLMs developed for advanced STEMreasoning.We also evaluate model responses across varying levels of correctness, allowing us to identifycommon error patterns. When breaking down performance by correct, partially correct, and incorrectresponses, we observe that few-shot prompting improves performance to different degrees acrossproblem types (Fig. 2). LLM solutions to harder problems, like ODEs andIntegrals , are rarely fullycorrect, but receive more partial credit with increasing CoT shots. In contrast, for simpler problemslikeRoots , advanced models (o1-mini and GPT-4) get more fully correct responses with increasingCoT shots (Fig. 2, 5). Fig. 6 compares GPT-4’s responses at 0 vs. 5 shot CoT on Roots , showing thatthe most common error mode—incorrectly setting up dominant balances—gets significantly reduced.Instead, errors shift to more nuanced issues, such as missing cases or failing to calculate complexroots (examples in Appendix A.4.2). This indicates that CoT improves the model’s application ofdominant balance techniques, enabling it to overcome simple mistakes.4Table 2: Evaluation Accuracy (percentage) on the HARDM ATH evaluation set.Model ALL Nondim Roots ODEs IntegralsClosed-source modelsGPT-3.5 (0 shot) 6.04 5.05 17.2 1.39 3.33GPT-3.5 (1 shot CoT) 14.2 6.11 29.3 6.94 18.2GPT-3.5 (5 shot CoT) 24.6 24.3 35.0 16.2 23.1GPT-4 (0 shot) 14.0 6.04 33.7 7.87 14.9GPT-4 (1 shot CoT) 37.6 36.5 52.8 15.9 40.5GPT-4 (5 shot CoT) 43.8 48.6 57.3 21.7 41.4o1-mini (0 shot CoT) 29.8 38.1 24.3 10.2 32.5o1-mini (5 shot CoT) 62.3 84.5 62.1 30.6 46.5Open-source modelsLlama3-8b (0 shot) 3.67 0.50 11.5 4.63 2.52Llama3-8b (5 shot CoT) 20.2 17.9 17.1 12.0 28.1CodeLlama-13b (0 shot) 1.94 0.00 8.73 1.85 0.50CodeLlama-13b (5 shot CoT) 9.79 8.41 13.1 9.7 9.57Figure 2: Breakdown of model accuracy percentages for o1-mini, GPT-4 and Llama3 by promptingtypes and problem types.Finally, to assess how well LLMs can solve these problems when situated in realistic research contexts,we evaluate GPT-4 (the best performing stable model) on a set of word problems covering all problemtypes (Appendix A.2). This yields an overall accuracy of 28.1%. Overall, this analysis highlightsthe value of HARDM ATH as a challenging benchmark for evaluating mathematical capabilities ofLLMs on advanced approximate analytical mathematics.5 ConclusionWe introduce HARDM ATH, a new dataset covering several problem types from an advancedasymptotics course that can be used to benchmark LLMs’ mathematical capabilities and performmodel developments. This dataset consists of 1060 examples, and we additionally include 366 verifiedexamples in HARDM ATH-MINI and 40 verified ‘problems in context’ that we use to evaluate variousleading LLMs. HARDM ATH is unique as there do not exist large-scale mathematical datasetscovering problems of similar difficulty from applied mathematics, and because HARDM ATH’sproblems and solutions are algorithmically generated, meaning that one could produce datasets ofarbitrary size using our framework.Our evaluation highlights that while few-shot CoT prompting significantly improves model perfor-mance, especially for models like o1-mini and GPT-4, the overall accuracy on HARDM ATH-MINIproblems remains much lower compared to other existing benchmarks. This suggests that our datasetposes unique and challenging tasks that go beyond the boundaries of current LLM capabilities,particularly in approximation-oriented mathematical reasoning.Future work will fine-tune LLMs on HARDM ATH to improve performance. Additionally, whilewe have evaluated several frontier models, we plan to extend our evaluations to more LLMs as theybecome available. This expanded evaluation should provide more detailed insights into performancedisparities across different models, further advancing our understanding of LLMs’ capabilities inhandling complex asymptotic reasoning.5References[1]Hongwei Liu, Zilong Zheng, Yuxuan Qiao, Haodong Duan, Zhiwei Fei, Fengzhe Zhou, WenweiZhang, Songyang Zhang, Dahua Lin, and Kai Chen. Mathbench: Evaluating the theory andapplication proficiency of llms with a hierarchical mathematics benchmark. arXiv preprintarXiv:2405.12209 , 2024.[2]Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and HannanehHajishirzi. Mathqa: Towards interpretable math word problem solving with operation-basedformalisms. In Proceedings of NAACL-HLT , pages 2357–2367, 2019.[3]Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, DawnSong, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. In35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasetsand Benchmarks . NeurIPS, 2021.[4]Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christo-pher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprintarXiv:2110.14168 , 2021. URL https://arxiv.org/pdf/2110.14168v1 .[5]Netmind.AI. Odyssey-math. https://github.com/protagolabs/odyssey-math/tree/main , 2024. Accessed: April 22, 2024.[6]Daman Arora, Himanshu Singh, et al. Have llms advanced enough? a challenging problemsolving benchmark for large language models. In Proceedings of the 2023 Conference onEmpirical Methods in Natural Language Processing , pages 7527–7543, 2023.[7]Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz,Philipp Petersen, and Julius Berner. Mathematical capabilities of chatgpt. Advances in NeuralInformation Processing Systems , 36, 2024.[8]Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexan-der Kranias, John J Nay, Kshitij Gupta, and Aran Komatsuzaki. Arb: Advanced reasoningbenchmark for large language models. arXiv preprint arXiv:2307.13692 , 2023.[9]Dan Hendrycks and Alexandr Wang. Submit your toughest questions for humanity’s last exam,2024. URL https://www.safe.ai/blog/humanitys-last-exam . Accessed: 2024-10-01.[10] OpenAI. Introducing openai o1-preview, 2024. URL https://openai.com/index/introducing-openai-o1-preview/ . Accessed: 2024-10-01.[11] Carl M Bender and Steven A Orszag. Advanced mathematical methods for scientists andengineers I: Asymptotic methods and perturbation theory . Springer Science & Business Media,2013.[12] Aaron Meurer, Christopher P. Smith, Mateusz Paprocki, Ond ˇrejˇCertík, Sergey B. Kirpichev,Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K. Moore, Sartaj Singh, Thilina Rath-nayake, Sean Vig, Brian E. Granger, Richard P. Muller, Francesco Bonazzi, Harsh Gupta, ShivamVats, Fredrik Johansson, Fabian Pedregosa, Matthew J. Curry, Andy R. Terrel, Št ˇepán Rou ˇcka,Ashutosh Saboo, Isuru Fernando, Sumith Kulal, Robert Cimrman, and Anthony Scopatz.Sympy: symbolic computing in python. PeerJ Computer Science , 3:e103, January 2017. ISSN2376-5992. doi: 10.7717/peerj-cs.103. URL https://doi.org/10.7717/peerj-cs.103 .[13] Pauli Virtanen, Ralf Gommers, Travis E Oliphant, Matt Haberland, Tyler Reddy, David Courna-peau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, et al. Scipy 1.0:fundamental algorithms for scientific computing in python. Nature methods , 17(3):261–272,2020.[14] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving languageunderstanding by generative pre-training. 2018.[15] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Languagemodels are unsupervised multitask learners. 2019. URL https://api.semanticscholar.org/CorpusID:160025533 .6[16] Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin,Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton,Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano,Jan Leike, and Ryan Lowe. Training language models to follow instructions with humanfeedback, 2022.[17] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia LeoniAleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4technical report. arXiv preprint arXiv:2303.08774 , 2023.[18] OpenAI. Openai o1-mini, 2024. URL https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/ . Accessed: Septem-ber 30, 2024.[19] Meta AI. Meta llama 3, 2024. URL https://ai.meta.com/blog/meta-llama-3/ . Ac-cessed: 2024-06-03.[20] Meta. Code llama: Ai for coding, 2023. URL https://about.fb.com/news/2023/08/code-llama-ai-for-coding/ . Accessed: 2024-06-03.[21] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi,Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large languagemodels, 2023.[22] OpenAI. Simple evals. https://github.com/openai/simple-evals?tab=readme-ov-file#user-content-fn-1-43aa11412dfb93b343474c8d56f8882f , 2024.Accessed: 2024-06-03.[23] John H Evans. Dimensional analysis and the buckingham pi theorem. American Journal ofPhysics , 40(12):1815–1822, 1972.[24] Ian Stewart. Galois Theory . CRC Press, Taylor & Francis Group, Boca Raton, FL, 4th edition,2015. ISBN 978-1-4822-4583-7. Version Date: 20150112.7A AppendixA.1 Implementation and method details for data generationThe following subsections detail the process used to generate the problems and solutions for eachproblem type.A.1.1 Nondimensionalization of polynomialsNondimensionalization is a technique to simplify equations by reducing the number of parameters[23]. In HARDM ATH, the first type of polynomial used for nondimensionalization demonstrationcontains symbolic coefficients and is of the forma1xn1+a2xn2+a3, n 1> n 2>0. (1)Nondimensionalization converts this to the form εyn1+yn2+ 1.The second type contains numericalcoefficients and are of the form±a1xn1±a2xn2±a3, n 1> n 2which can be simplified to εyn1±yn2±1given a specific numerical value of ε. Here, integernumerical values for the coefficients a1,a2,a3are randomly chosen from [−10,10].The first nondimensionalization sub-type is generalized by varying the integer values for the degreesn1andn2within the range 0< n 2< n 1<10, while keeping a1,a2,a3>0symbolic. Solutions tothese problems express the dimensionless parameter εin terms of these three coefficients.Sample Symbolic Nondimensionalization Problem and Full SolutionProblem: Nondimensionalize the polynomiala1x10+a2x9+a3into one of the form εy10+y9+ 1.Express εas a function of a1,a2, and a3.Solution: We begin with the substitutionx=y9ra3a2This gives the expressiona1y10a3a2109+a3y9+a3Divide by the coefficient remaining in front of the constant, leaving us with the nondimen-sionalized polynomial with coefficients in terms of a1,a2, and a3:a1y10a3a2109a3+y9+ 1.By inspection, we can see thatε=a1a3a2109a3.The second subtype implements integer numerical values for the coefficients a1,a2,a3that are arerandomly chosen from [−10,10].8Sample Numeric Nondimensionalization Problem and Full SolutionProblem: Nondimensionalize the polynomialP(x) = 2 x7+ 8x2+ 5into a polynomial of the form εy7±y2±1. Solve for ε.Solution: For now, we ignore the numeric values of the coefficients and instead call thema1, a2, a3. Our polynomial is then:a1x7+a2x2+a3.Use the substitutionx=yra3a2,which gives the expressiona1y7a3a272+a3y2+a3.Divide all terms by the coefficient remaining in front of the constant term, giving us thenondimensionalized polynomial with coefficients in terms of a1, a2, a3:a1y7a3a272a3+y2+ 1Substituting in the known numeric values for a1, a2, a3(using their absolute values as wehave already accounted for sign), we get:25√10y71024+y2+ 1From inspection of this nondimensionalized equation, we can now identify ε:ε=25√101024=⇒ε≈0.08.A.1.2 Polynomial root-findingExact formulas exist for quadratic, cubic, and quartic equations, but deriving them for quintic orhigher-order polynomials is not possible [ 24].HARDM ATH includes approximate root-findingexamples for higher order polynomials of the form εxn1±xn2±1. The goal is to solve for roots interms of εusing the method of dominant balance for small and large positive εregimes.As with the nondimensionalization problems, degrees in the polynomial are randomly generated withmaximum order ten and 0< n 2< n 1. See a full problem and solution below.Sample Polynomial Root-finding Problem and Full SolutionProblem: Consider the polynomialP(x) =εx6−x5+ 1.Find first order approximations for all roots of the polynomials in the limit of small positive εand large positive ε.Solution: We begin by equating the polynomial to zero to solve for the roots: P(x) = 0 .This problem can be rewritten in the form A+B+C= 0, where: A=εx6;B=−x5;C= 1.This problem has no analytical solutions, so we find approximate solutions to the roots byconsidering the three possible dominant balances. For each dominant balance, we find the9roots of the resulting equation and evaluate whether each balance is self-consistent for smallor large positive ε.We start with the balance A+B= 0, assuming that |C|is negligible when compared to |A|and|B|. Solving this for xin terms of εthen gives us 1 non-zero root:εx6−x5= 0=⇒x=1ε.To verify that these roots are consistent with the assumption that |A|,|B| ≫ |C|,we substitutethese found roots back into the terms A,B, andCand compare their magnitudes. Using thismethod, we find that it is true that these roots are valid for small ε, while validity for large εis false.Therefore, these roots are valid in the limit of small positive εonly.Next we examine the balance B+C= 0, assuming that |A|is negligible when compared to|B|and|C|. Solving this for xin terms of εgives us 5 non-zero roots:1−x5= 0=⇒x=1,−14+√54−ip2√5 + 104,−14+√54+p−10−2√54,−√54−14−ip10−2√54,−√54−14+ip10−2√54.To verify that these roots are consistent with the assumption that |B|,|C| ≫ |A|,we substitutethese found roots back into A,B, and Cand compare their magnitudes. Using this method,we find that it is true that these roots are valid for small ε, while validity for large εis false.Therefore, these roots are valid in the limit of small positive εonly.Finally, we examine the balance A+C= 0, assuming that |B|is negligible when comparedto|A|and|C|. Solving this for xin terms of εgives us 6 non-zero roots:εx6+ 1 = 0=⇒x=−6r−1ε,6r−1ε,6q−1ε−1−√3i2,6q−1ε−1 +√3i2,6q−1ε1−√3i2,6q−1ε1 +√3i2.To verify that these roots are consistent with the assumption that |A|,|C| ≫ |B|,we substitutethese found roots back into A,B, and Cand compare their magnitudes. Using this method,we find that it is false that these roots are valid for small ε, while validity for large εis true.Therefore, these roots are valid in the limit of large positive εonly.By the Fundamental Theorem of Algebra, a polynomial of degree 6.0 has exactly 6.0 roots.Wehave found 6.0 roots that are valid in the limit of small positive εand 6.0 roots valid in thelimit of large positive ε. Our method therefore provides a complete solution to the problem,finding the correct number of roots in each εregime.The roots of P(x)for large positive εare−6r−1ε,6r−1ε,6q−1ε−1−√3i2,6q−1ε−1 +√3i2,6q−1ε1−√3i2,6q−1ε1 +√3i210and the roots of P(x)for small positive εare1ε,1,−14+√54−ip2√5 + 104,−14+√54+p−10−2√54,−√54−14−ip10−2√54,−√54−14+ip10−2√54A.1.3 Polynomial root correction termsThe use of two-term dominant balances—such as in the previous problem type—neglects termsand introduces an error. We can calculate a correction term δto reduce this error via the following:suppose the true roots x∗of a polynomial are given by x∗(ε) =x(ε)+δ,where xis our approximationto the root (as found in Appendix A.1.2)and δis the error term. Plugging the roots x∗(ε) =x(ε) +δinto the polynomial allows one to use a Taylor expansion of δaround xand solve for the correctionδ—see the worked solution below.We also check that resulting solutions have δ <xand exclude solutions that do not meet this criterion.Sample Numeric Nondimensionalization Problem and Full SolutionProblem: Consider the polynomialP(x) =εx3−x+ 1.Find approximate expressions for all roots of the polynomial in the limit of small positive εand large positive ε. Use a series expansion to calculate improved formulae for these roots toorder 1 i.e. calculate O(1)corrections for each root.Solution: Note: The root calculation in this problem follow the same method as thosedemonstrated in the A.3, so they has been omitted here. We include only correction termcalculations for the sake of brevity.We now need to calculate correction terms for these roots to give us better approximations.We consider the ansatz that the root is given by x+δ, where the correction term δis the sumof higher order terms of εthat we initially neglected in our approximation x. By definition,δ <x.We plug this ansatz into the polynomial and perform a series expansion in δ. We keepterms only up to O(1)inδ. Then, we set the expression equal to 0 and solve for δ.Regime 1: valid for small εRoot 1: −q1εx+δ=−r1ε+δSubstitute this into P(x)forxand equate to 0:−δ+ε δ−r1ε!3+r1ε+ 1 = 0 .We then expand this expression to getδ3ε−3δ2εr1ε+ 2δ−ε1ε32+r1ε+ 1 = 0and represent it as a series of O(1)inδ, discarding higher order δterms2δ−ε1ε32+r1ε+ 1≈0.11We can then solve the expression for the correction δtoO(1), and getδ≈ε1ε322−q1ε2−12.Root 2:q1εx+δ=r1ε+δSubstitute this into P(x)forxand equate to 0:−δ+ε δ+r1ε!3−r1ε+ 1 = 0 .We then expand this expression to getδ3ε+ 3δ2εr1ε+ 2δ+ε1ε32−r1ε+ 1 = 0and represent it as a series of O(1)inδ, discarding higher order δterms2δ+ε1ε32−r1ε+ 1≈0.We can then solve the expression for the correction δtoO(1), and getδ≈ −ε1ε322+q1ε2−12.Regime 2: valid for small εRoot 1: 1x+δ= 1 + δSubstitute this into P(x)forxand equate to 0:−δ+ε(δ+ 1)3= 0.We then expand this expression to getδ3ε+ 3δ2ε+ 3δε−δ+ε= 0and represent it as a series of O(1)inδ, discarding higher order δtermsδ(3ε−1) +ε≈0.We can then solve the expression for the correction δtoO(1), and getδ≈ −ε3ε−1.Regime 3: valid for large εRoot 1:3q−1εx+δ=3r−1ε+δSubstitute this into P(x)forxand equate to 0:−δ+ε δ+3r−1ε!3−3r−1ε+ 1 = 0 .12We then expand this expression to getδ3ε+ 3δ2ε3r−1ε+ 3δε−1ε23−δ−3r−1ε= 0and represent it as a series of O(1)inδ, discarding higher order δtermsδ 3ε−1ε23−1!−3r−1ε≈0.We can then solve the expression for the correction δtoO(1), and getδ≈3q−1ε3ε−1ε23−1.Root 2:3√−1ε(−1−√3i)2x+δ=3q−1ε−1−√3i2+δSubstitute this into P(x)forxand equate to 0:−δ+εδ+3q−1ε−1−√3i23−3q−1ε−1−√3i2+ 1 = 0 .We then expand this expression to getδ3ε−3δ2ε3q−1ε2−3√3iδ2ε3q−1ε2−3δε−1ε232+3√3iδε−1ε232−δ+3q−1ε2+√3i3q−1ε2= 0and represent it as a series of O(1)inδ, discarding higher order δtermsδ−3ε−1ε232+3√3iε−1ε232−1+3q−1ε2+√3i3q−1ε2≈0.We can then solve the expression for the correction δtoO(1), and getδ≈3q−1ε1 +√3i3ε−1ε23−3√3iε−1ε23+ 2.Root 3:3√−1ε(−1+√3i)2x+δ=3q−1ε−1 +√3i2+δSubstitute this into P(x)forxand equate to 0:−δ+εδ+3q−1ε−1 +√3i23−3q−1ε−1 +√3i2+ 1 = 0 .13We then expand this expression to getδ3ε−3δ2ε3q−1ε2+3√3iδ2ε3q−1ε2−3δε−1ε232−3√3iδε−1ε232−δ+3q−1ε2−√3i3q−1ε2= 0and represent it as a series of O(1)inδ, discarding higher order δtermsδ−3ε−1ε232−3√3iε−1ε232−1+3q−1ε2−√3i3q−1ε2≈0.We can then solve the expression for the correction δtoO(1), and getδ≈3q−1ε1−√3i3ε−1ε23+ 3√3iε−1ε23+ 2.A.1.4 Nonlinear ordinary differential equationsWe generate nonlinear third-order ODEs for which there do no exist exact analytical solutions andprovide approximate formulae for small and large xregimes, where the small xregime is near x= 0and the large xregime typically involves the solution diverging (example in Appendix A.1.4). Themethod is robust for higher-order problems, but for simplicity we include only third-order ODEs ofthe formy′′′=f1(x)(y′′)a+f2(x)(y′)b+f3(x)yc+f4(x),where f1(x), f2(x), f3(x), f4(x)are rational functions with integer coefficients. The initial condi-tions are randomly selected integers from [0,3]. The dataset excludes problems with a function of xas a dominant term because of the difficulty of deriving power law expressions in these cases.Approximate solutions at small xcan be derived using a Taylor series expansion (up to the thirdorder) around x= 0. Solving ODEs in the large xregime involves determining the two largest terms,assuming a divergence at some large x∗, and solving the dominant balance between these terms tocreate a power law approximation of the formy(x) =A(x∗−x)p.ODE Problem and SolutionProblem: Consider the following third-order ordinary differential equation:y′′′=−y24x4+ 6x2+ 3+y′2−y′′5x3−2x2−x+ 2−112x2−cos (x) + 11with initial conditions at x = 0:y(0) = 1 .00y′(0) = 0 .00y′′(0) = 0 .00Find analytical expressions that approximate the solution of y(x) at small and large x.Solution:The dominant balance in the large x regime is given byd3dx3y=ddxy2.14We recognize that the solution of this ODE will diverge at finite xand that divergencestypically follow a power law of the formy=α(x−x∗)p,where x∗is the divergence point. The divergence point can be determined by estimated byexamining the numerical solution generated by code.Plugging in the dominant terms we found previously yields the following equation:αp(p−2) (p−1) (x−11.45)p−3=α2p2(x−11.45)2p−2.After substituting the derivatives, the equation is reorganized to collect terms with respectto(x−x∗). This leads to an equation where the coefficients and powers of (x−x∗)areequated on both sides. Simplifying the equation gives us two separate equations, one for thecoefficients and another for the powers of (x−x∗). There is now a system of equations,where the coefficients’ equation isαp(p−2) (p−1) = α2p2and the powers’ equation is:p−3 = 2 p−2.Solving this system of equations provides the values of αandp. A valid solution is identifiedifαandpare both nonzero. Here, the solution for αandpis found to be:α=−6, p =−1With these values, the analytical approximation for the solution at large x(near the divergencepoint) is given byy=−6(x−11.45)−1.The approximate solution at small xcan also be solved used dominant balance, but one cantake advantage of the initial conditions and form a Taylor series instead around x= 0, whichis given byy(x)≈y(0) + y′(0)x+y′′(0)2!x2+y′′′(0)3!x3.Plugging in the initial conditions, we get the following expression at small x:y(x) = 1−13180x3Thus, with rounding for clarity, the solution is given byy(x) = 1−13180x3, y=−6(x−11.45)−1.A.1.5 Traditional integralsWe consider integrals of the form I(ε) =Ra01ε+P(x)dx,where P(x)is an arbitrary polynomial. Thedataset provides approximations of each integral in three regimes (small, intermediate, and large ε).The polynomial P(x)is randomly generated to consist of up to ten terms, where each term is a powerfunction of xwith an integer power randomly sampled from 1 and 20 and an integer coefficientsampled from 1 to 10. The integration bound a∈[0,100] is also randomly selected. This formensures that the integral does not oscillate.The height is approximated as the maximum value of the integrand, which is1ε, and the width can beestimated as the distance over which the integrand decreases from its maximum value by a factor of2, which implies that the width xobeys the equation1ε+P(x)=12ε⇒P(x) =ε.In the regime of small ε, the term with the smallest degree and εare the dominant terms, and in theregime of intermediate ε, the term with the largest degree and εare dominant. There exists one more15solution regime when the width of the integral exceeds the limits of integration, or when εis "verylarge." In this case, the integral is approximated by L/ε, where Lis the integration range.Sample Integral Problem and Full SolutionProblem:Consider the integral I(ε) =R56.0001ε+2.0x6.0+2.0x9.0+5.0x11.0+5.0x13.0dx.Develop analyticalformulas that approximate I(ε)for different regimes of ε.Solution: The integral is of the form I(ε) =R5601ε+P(x)dxwhere P(x)is a polynomial.Thus, its value can be estimated as the product between a height and a width.Since the integrand is maximized at x= 0, the height can be set to1ε.For small ε, we define the width as the point where the integrand becomes half of itsmaximum height. This corresponds to solving for xgiven P(x) =ε. Applying dominantbalance, considering the term in P(x)with the smallest degree, the width is approximatedas12.0∗ε1/6.0. Therefore, the analytical approximation of the integral for small εisI(ε) =0.8909ε0.8333.For an intermediate regime where εis large, we also define the width based on the term withthe largest degree. The width is approximated as15.0∗ε1/13.0. Therefore, the analyticalapproximation of the integral for large εisI(ε) =0.7647ε0.8333.If the width of the integral exceeds the range of integration, we consider one more regime forvery large ε. The width is then just the range of integration, so in this regime, the integral canbe approximated asLε. Therefore, the analytical approximation of the integral for very largeεisI(ε) =56ε.Altogether, the solutions at small, large, and very large εare0.89ε0.83,0.76ε0.83,56ε.A.1.6 Laplace integralsWe consider integrals of the form I(x) =Rbag(t)e±xf(t)dt,which can be approximated usingLaplace’s Method when xis very large because the integral’s value is dominated by the region aroundt0[11].Laplace integrals of the form I(x) =Rbag(t)e±xf(t)dtassume that f(t)>0, is never a constant, andhas an absolute minimum at a point t0either in the interior of or on the bounds of the interval [a, b].Depending on the where the minimum is, the approximate solution is eitherI(x)≈g(t0)e±xf(t0)s2πx|f′′(t0)|orI(x)≈g(t0)e±xf(t0)x|f′′(t0)|.The set of possible Laplace integrals I(x)in our dataset are parameterized by four parameters:the bounds [a, b],g(t),f(t), and the sign in front of x. To generate the dataset, the bounds foreach problem were randomly sampled from the [−1,−0.9, . . .0.9,1], and the sign was uniformlysampled from {−1,1}. The functions f(t)andg(t)were generated by randomly selecting a linearcombination of polynomials up to fifth order and basic trigonometric functions.Our solution uses SymPy under the hood to find the minima of f(t)(or the dual annealing algorithmif SymPy fails to return the minima).Laplace Integral Problem and SolutionProblem: Consider the integralI(x) =Z0.3−0.9(−1.6t2−0.5 sin ( t)−1.9)e+x(−2.5t4−0.8t3+1.4t2)dt (2)16Develop an analytical formula for I(x)that is accurate as x→ ∞ .Solution:The integral is of the formI(x) =Zbag(t)e+xf(t)dt (3)where a=−0.9,b= 0.3,g(t) =−1.6t2−0.5 sin ( t)−1.9, and f(t) =−2.5t4−0.8t3+1.4t2. This means we can use Laplace’s method to develop an analytical approximation inthe limit that x→ ∞ . In this limit, the integral will be dominated by the integrand nearthe maximum of f(t)within the bounds [−0.9,0.3]. So, to simplify the integral, we willexpand the integrand around this maximum. In this case, we can find the maximum off(t) =−2.5t4−0.8t3+ 1.4t2on the interval analytically. We begin by looking for criticalpoint(s) tcritoff(t)by solving f′(t) =−10.0t3−2.4t2+ 2.8t= 0 fort. This gives usthattcrit= [−0.66,0]. To find the maximum on this interval, we evaluate f(t)at the criticalpoint(s) tcritand the bounds −0.9and0.3. We take the tthat gives the largest value. Here,this maximum t0= [−0.66]. Since the integral is dominated by the value of the integrandnear -0.66, we Taylor expand the integrand around this point.I(x) =Zba(g(−0.66) + ( t+ 0.66)g′(−0.66) + ...)∗e+x(f(−0.66)+( t+0.66)f′(−0.66)+(t+0.66)22f′′(−0.66)+...)dt(4)Butf′(−0.66) = 0 by definition, so we can remove this term from the exponent. We canthen approximateI(x)≈Zbag(−0.66)e+x(f(−0.66)+(t+0.66)22f′′(−0.66))dt, (5)which equalsg(−0.66)e+xf(−0.66)Zbae+x((t+0.66)22f′′(−0.66))dt (6)We perform the change of variables u=qx|f′′(−0.66)|2(t+ 0.66), rewriting the integral asg(−0.66)e+xf(−0.66)Zqx|f′′(−0.66)|2(b+0.66)qx|f′′(−0.66)|2(a+0.66)s2x|f′′(−0.66)|e−u2dt (7)Since x→ ∞ , we approximate this asg(−0.66)e+xf(−0.66)s2x|f′′(−0.66)|Z∞−∞e−u2dt (8)Solving the integral and evaluating, we find thatI(x)≈ −1.21rπxe0.37x(9)A.2 Word problems in contextOne motivation for creating HARDM ATH is to help LLMs recognize and solve problems whereapproximation techniques are needed. To evaluate how LLMs perform on such problems in realisticscenarios, we sample a subset of examples from each problem type and convert these into wordproblems with a fictional context, creating a dataset of 40 such problems and solutions (see examplein the box below). This approach mirrors a format commonly found in textbooks, where a fictionalsetting provides additional context for the LLM to parse.17Although this dataset is smaller than our hand-verified evaluation set, it is large enough to evaluatethe effect of additional context in the problem statement on LLM accuracy.2. Sample Word Problem with ContextThe density of fish at different points along a certain path in a lake can be modeled as(ε+x2+x5)−1,where xrepresents the distance from the shore in kilometers (ranging from0 to 100 km), and εrepresents environmental factors that affect the fish density. To studythe total presence of fish along the path, develop an approximate analytical formula for I(ε)given below:I(ε) =Z10001ε+x2+x5dx.18A.3 Evaluation setupA.3.1 Prompts for response generationTable 3: Problem type specific hints by Question and Answer TypeQuestionTypeAnswer Type Task instructionNondim-symbolicSymPy Please answer the question requiring an answer in aSymPy convertible formula containing variables and mathoperation expressions and provide the final answer, e.g.,x3,xyinside a Latex boxed format \boxed{} .Nondim-numericalFloat (2) Please answer the question requiring a floating-point num-ber with two decimal places and provide the final value,e.g., 0.80, 3.12, inside a Latex box \boxed{} .PolynomialRootsSymPy List Please answer the question requiring a Python list contain-ing SymPy convertible formulas of variable εand mathoperation expressions and provide the final list, e.g., [ ε3,1ε] inside a Latex boxed format \boxed{} .ODEs SymPy List Please answer the question requiring a Python list contain-ing SymPy convertible formula of y=f(x)and providethe final list, e.g., [y= 1−x3, y=−6/(x−5)], inside aLatex boxed format \boxed{} .Integrals SymPy Please answer the question requiring an answer in aSymPy convertible formula containing formulas of vari-able xand math operation expressions and providethe final answer, e.g., x3inside a Latex boxed format\boxed{} .19A.3.2 Prompts for gradingTable 4: LLM-based grading prompts by Question and Answer TypeQuestiontypeAnswer type Task instructionPolynomialRootsSymPy List Please take this response {response} and this answerkey{answer key} and grade the response based on thefollowing criteria: 1) Check both the small and large εsolutions. 2) For each solution, give full credit if it com-pletely matches the elements in the answer key; give par-tial credit proportional to the number of matching rootsbetween the response and the answer key; give no creditif it is completely wrong. 3) For both partial and no creditbriefly state the error reason. 4) Average the scores for thesmall and large epsilon solutions to obtain a final scorebetween 0 and 1. 5) Give the final grading as a float inLatex boxed format \boxed{} .ODEs SymPy List Please take this response {response} and this solution{answer key} and grade the response based on the fol-lowing criteria: 1) Check both the small and large εsolu-tions. 2) For small regime solution, only give full creditif it matches the formula in the answer key exactly; giveno credit if it is doesn’t match the form. For large regimesolution, give full credit if it matches the formula in theanswer key exactly; give partial credit if it doesn’t matchbut the numerical evaluation is not far from solution at thisregime; give no credit if neither satisfies 3) Average thescores for the small and large epsilon solutions to obtain afinal score between 0 and 1. 4) Give the final grading as afloat in Latex boxed format \boxed{} .Integrals (tra-ditional)SymPy List Please take this response {response} and this solution{answer key} and grade the response based on the fol-lowing criteria: 1) Check both the small and large εsolu-tions. 2) For each solution, give full credit if it matchesthe formula in the answer key; give no credit if it is com-pletely wrong and briefly state the reason for the error. 3)Average the scores for the small and large epsilon solu-tions to obtain a final score between 0 and 1. 4) Give thefinal grading as a float in Latex boxed format \boxed{} .Integrals(Laplace)SymPy Please take this response {response} and this solution{answer key} and grade the response based on the fol-lowing criteria: 1) Check the large xfinal solution. 2) Givefull credit if it matches the formula in the answer key; givehalf credit if the {response} get to the checkpoint whereit correctly identifies t0where fattains its maximum andattempt performing Taylor’s expansion around it but thefinal answer is wrong; give no credit if it is completelywrong. 3) For both partial and no credit briefly state theerror reason. 4) Give the final grading as a float in Latexboxed format \boxed{} .20A.3.3 GPT grading human verificationModel Roots ODEs IntegralsGPT3.5 (0) 0 0 0GPT3.5 (1) 0 -0.09 -0.02GPT3.5 (5) +0.02 +0.07 +0.02GPT4 (0) 0 -0.02 0GPT4 (1) 0 -0.04 -0.02GPT4 (5) +0.07 -0.07 -0.15o1-mini (0) +0.04 +0.05 0o1-mini (5) +0.05 +0.05 0Llama3-8b (0) 0 0 -0.02Llama3-8b (5) -0.07 -0.02 -0.02Codellama3-14b (0) 0 -0.02 0Codellama3-14b (5) 0 -0.02 0Table 5: Average adjusted points using human judgment from GPT-based grading. Rows with scoreadjustments of 0.1 or more are highlighted in pink.A.3.4 Model hyper-parametersTable 6: Generating parameters for various LLMs.Model Generation SetupGPT-3.5 model = gpt-3.5-turbo, temperature = 0, max_tokens = 4000GPT-4 model = gpt-4-turbo, temperature = 0, max_tokens = 4000o1-mini model = o1-mini, temperature = 0, max_tokens = 4000Llama3 model = llama3:8b, temperature = 0CodeLlama model = codellama:13b, temperature = 0A.3.5 Computing resourceEvaluations of open-source models on HARDM ATH are conducted on a high-performance computecluster with a single Tesla V100 GPU (16GB vram). Evaluation on one problem type typically takesless than 1 hour. Evaluations of open-source models on HARDM ATH are conducted on the O2 HighPerformance Compute Cluster, supported by the Research Computing Group, at Harvard MedicalSchool. See https://it.hms.harvard.edu/our-services/research-computing for moreinformation. Evaluation on one problem type typically takes less than 1 hour with a single TeslaV100 GPU (16GB vram). We would like to thank the HMS Research Computing Consultant Groupfor their consulting services, which facilitated the computational analyses detailed in this paper.21A.4 Extended experimental resultsA.4.1 Extended evaluation results(a) Model accuracy increases with shot numbers.(b) Model accuracy breakdown by problem sub-types for all models with 5-shot CoT prompting.Figure 3: Model evaluation accuracy breakdown by shot number and problem sub-types. (a)evaluation accuracy for all models increases with shot numbers for CoT prompting with o1-mini andGPT-4 showing the most obvious improvements; (b) evaluation accuracy breakdown for all modelson all problem sub-types under the 5-shot CoT condition.Figure 4: Model performance scaling upon few-shot prompting (0, 1, 5, 10) for problem type NondimandODEs22Figure 5: Percentage of correct, partial, and incorrect responses for GPT-3.5 and CodeLlama usingCoT.Figure 6: GPT-4 error modes for problem type Roots at 0 vs. 5 shot CoT promptingA.4.2 Extended error mode analysisProblem 136 Grading Response: Roots (error mode: incorrect dominant balance setup)Question: Consider the polynomial P(x) =εx6+x4−1.Find approximate expressionsfor all roots of the polynomials in the limit of small positive εand large positive ε. Only asingle term approximation to the root is required.Grading for Large Positive ε:Model Response: For large εWhen εis large, the term εx6dominates the polynomial formost x, except for potentially small xwhere x4might still be relevant. However, focusingon the dominant term gives: P(x)≈εx6.Setting this equal to zero for an approximation(acknowledging that we’re looking for non-trivial roots of the overall polynomial, but focusingon dominance to guide our approximation), suggests that we don’t get meaningful roots fromthis term alone since x6= 0 doesn’t provide roots that satisfy the original polynomial’srequirement for non-zero roots.Reference Solution : Finally, we examine the balance A+C= 0 , assuming that |B|isnegligible when compared to |A|and|C|. Solving this for xin terms of εgives us 6 non-zeroroots: εx6−1 = 0 implies x=−1±√3i6q1ε2,1±√3i6q1ε2,−6r1ε,6r1ε.Toverify the roots found for consistency with our initial assumption, we check that |A|,|C|>>|B|holds for this root by substituting our found root expression for xback into A,B, andCand comparing their magnitudes. Using this method, we find that validity for small εisFalse and validity for large εis True.23Grading : The response wrongly sets up the dominant balance as A= 0and does not providea concrete list of roots for large ε. Therefore, score for large positive εis0.0Problem 133 Grading Response: Roots (error mode: drop imaginary roots)Question: Consider the polynomial P(x) =εx3+x2−1.Find approximate expressionsfor all roots of the polynomials in the limit of small positive εand large positive ε. Only asingle term approximation to the root is required.Grading for Large Positive ε:Model Response: To find the approximate expressions for all roots of the polynomialP(x) =εx3+x2−1in the limits of small and large ε, we will consider the two casesseparately. For Large εWhen εis large, the term εx3dominates the polynomial. Thus, thepolynomial can be approximated by ignoring the x2term, leading to εx3−1≈0. Solvingforxgives us an approximation for the root: x≈1ε13. The other roots in this limit arecomplex and given the scope, we’ll focus on this real root approximation.Reference Solution : Finally, we examine the balance A+C= 0, assuming that |B|is negligi-ble when compared to |A|and|C|. Solving this for xin terms of εgives us 3 non-zero roots:εx3−1 = 0 implies x=−1−√3i3q1ε2,−1 +√3i3q1ε2,3r1ε.To verify theroots found for consistency with our initial assumption, we check that |A|,|C|>>|B|holdsfor this root by substituting our found root expression for xback into A,B, andCand com-paring their magnitudes. Using this method, we find that validity for small εis False and va-lidity for large εis True. Therefore, these roots are valid in the limit of large positive εonly.Grading : The response makes additional assumptions to drop the two other complex rootsfor large εregime without the question asking it to do so. Therefore, score for large positive εis0.5A.4.3 Extended example GPT-4o grading responsesGrading Response: ODEs (incorrect)To grade the response, we will compare the provided solutions in the response with the groundtruth solutions in both the small xand large xregimes.Small xRegimeResponse: y(x) = 1−5e−xGround Truth: y(x) = 1−5x36The response and the ground truth solution for small xare significantly different. The groundtruth solution is a polynomial in x, while the response includes an exponential term. They donot match at all, so the solution for the small xregime is completely wrong.Large xRegimeResponse: y(x) ="Assumed rapid growth form due to y′4term."Ground Truth: y=3√30(x−2.68)13+ (1.84)The response does not provide a specific analytical expression for the large xregime, whilethe ground truth does provide a specific form. The response mentions rapid growth due toy′4, but without a clear analytical form, this response doesn’t match the ground truth solution.This indicates that the solution for the large xregime is also completely wrong.Given the evaluation criteria, both solutions do not match or are incomplete compared to theground truth. Therefore, each solution gets no credit.24The final grading is 0.0.Box 2: Example grading response for a ODEs problem that gets zero point.Grading Response: Integrals (correct)The model response approximates the integral for small εas161/4ε−3/4and for large εas181/18ε−17/18.Now, let’s compare these results with the ground truth solutions:For small ε:- The ground truth solution is I(ε) =0.64ε0.75. - The model’s result is161/4ε−3/4.Both results for small εare essentially the same, with:161/4≈0.64For large ε:- The ground truth solution is I(ε) =0.89ε0.94. - The model’s result is181/18ε−17/18.Both results for large εare essentially the same, with:181/18≈0.89Since the model’s response closely matches the ground truth solutions in both regimes (smalland large ε) and provides essentially the same approximations, I would give full credit 1.0Box 3: Example grading response for a Integrals sub-type simple integral problem that gets fullpoints.Grading Response: Integrals (partial)Answer Key1.Setup and Formulation : it 1) correctly identifies the integral form; 2) uses Laplace’smethod; 3) identifies critical points and uses Taylor expansion around the point of interest; 4)performs a change of variables to simplify the integral.2.Key Steps and Approximations : it 1) expands the integrand around the minimum of f(t);2) approximates the integral by focusing on the dominant exponential term; 3) provides afinal analytical approximation for I(x)asx→ ∞ .3. Final Result :I(x)≈ −0.004e0.89xxModel Answer1.Setup and Formulation : it 1) correctly identifies the integral form; 2) recognizes that theexponential term will dominate for large x; 3) focuses on the region where the integrand ismost significant (near t= 0.7).252.Key Steps and Approximations : it 1) uses the method of steepest descents heuristicallywithout detailed Taylor expansion; 2) approximates by evaluating the polynomial part att= 0.7and considering the exponential decay; 3) recognizes the dominant behavior of theexponential term but does not perform a detailed variable change or expansion.3. Final Result :I(x)≈ −0.017·ex·2.6·0.73Comparison1.Similarities : 1) understanding of dominance: both the answer key and the student correctlyidentify that the behavior of the integral for large xis dominated by the exponential term;2) identifying significant points: both recognize that t= 0.7is the critical point wherethe integrand contributes significantly; 3) approximate behavior: Bbth solutions focus onapproximating the behavior of the integral for large x, emphasizing the exponential term’sdominance.2.Differences : 1) mathematical rigor: the answer key employs a more rigorous mathematicalapproach, including Taylor expansion and change of variables, providing a more preciseand detailed solution; 2) heuristic approach: the student takes a more heuristic approach,focusing on the dominant exponential term without detailed mathematical expansions orvariable changes; 3) accuracy of final expression: the answer key provides a more accuratefinal expression with specific constants derived from the detailed process, while the student’sresult, though on the right track, has a different coefficient due to the heuristic method. Iwould give partial credit 0.5Box 4: Example grading response for a Integrals sub-type Laplace integral problem highlighting theerror of failing to develop Taylor’s expansion.26 |
gdxUJvPng5 | Reasoning and Tools for Human-Level ForecastingElvis Hsieh∗Preston Fu∗Jonathan Chen∗UC Berkeley{htelvis92,prestonfu,jonchen25}@berkeley.eduAbstractLanguage models (LMs) trained on web-scale datasets are largely successful dueto their ability to memorize large amounts of training data, even if only presentin a few examples. These capabilities are often desirable in evaluation on taskssuch as question answering but raise questions about whether these models canexhibit genuine reasoning or succeed only at mimicking patterns from the trainingdata. This distinction is particularly salient in forecasting tasks, where the answeris not present in the training data, and the model must reason to make logicaldeductions. We present Reasoning and Tools for Forecasting (RTF), a frameworkof reasoning-and-acting (ReAct) agents that can dynamically retrieve updatedinformation and run numerical simulation with equipped tools. We evaluate ourmodel with questions from competitive forecasting platforms and demonstratethat our method is competitive with and can outperform human predictions. Thissuggests that LMs, with the right tools, can indeed think and adapt like humans,offering valuable insights for real-world decision-making.1 IntroductionForecasting is an essential tool today, playing a critical role in government, corporate, and personaldecision-making. Weather forecasting provides essential information for agriculture, natural disasterpreparedness for governments, and travel plans for individuals. During the COVID-19 pandemic,lockdown policies were largely determined by forecasts, which were required to be sufficientlyaccurate due to their global impact [ 9]. Forecasting methodologies fall into two main categories [ 23]:statistical and judgmental. Statistical forecasting leverages time-series modeling and excels withabundant data under stable conditions. Conversely, judgmental forecasting, which we refer to simplyas “forecasting,” typically relies on human expertise, integrating historical data, domain knowledge,and intuition to make predictions, and is particularly useful when data are sparse or conditions arevolatile. By nature, forecasting requires not only accuracy but also the ability to continuously adapt todynamic data streams. This is where traditional LMs often struggle: timely data updates may causepredictions to change considerably and past data to be irrelevant.2 Related WorkInformation retrieval Reliable and accurate predictions are largely dependent on the informationavailable to the predictor. This is especially the case of LMs, which are trained on data preceding aknowledge cutoff and have been shown to perform better with information retrieval [20].Language models model the likelihood pθ(yi|x, y<i)for input sequences xand target sequences y.Retrieval-augmented generation (RAG) [ 12] proposes augmenting this approach with non-parametricmemory, i.e. retrieving the top- ktext documents zviapη(z|x)and conditioning the generator on∗Equal contribution38th Conference on Neural Information Processing Systems (NeurIPS 2024).Figure 1: RTF: High-level agent oversees low-level agents, each equipped with distinct toolkitsand data/document to accomplish various tasks,including API calling and Python simulation.Ques tionWil l MS TR close abo v e $1525 on April 30, 2024?W e need t o e v aluat e a r ange o f f act ors , including his t oric al s t ock pric e dat a , r ec ent tr ends ,W e need t o e v aluat e a r ange o f f act ors , including his t oric al s t ock pric e dat a , r ec ent tr ends ,LM r easoningDo I need a t ool? Y es .ThoughtPython _REPLActioncurrent_price = 1200 time_horizon = 0.5 # 6 monthsvolatility = 0.60 daily_volatility = volatility / (252 ** 0.5)expected_price = current_price * (1 + daily_volatility * (time_horizon * 252) ** 0.5) Obser v a tionL o w -le v el e x ecutionHigh-le v el R eAct plannerAg gr eg a t eFinal f or ecas tTable 1: Performance of different models withthe same prompt on forecasting questions. “BaseLM” refers to {GPT-4o, 4, 3.5, Llama 3 }. “Acc”is accuracy, and “Std” is ensemble standard devi-ation.Method Brier ↓Acc % ↑Std↓Crowd 0.172 73.8RTF Median of 3 0.169 72.4 0.092RTF Mean of 3 0.170 73.9 0.092RTF Sampled 0.180 71.6Halawi et al. [10] GPT-4o 0.177 68.7GPT-4o 0.210 65.5Base LM Mean 0.218 62.9 0.150Base LM Median 0.228 61.3 0.150Llama 3 0.256 56.2GPT-3.5 0.261 53.5GPT-4 0.265 54.8the retrieved passages, pθ(yi|x, z, y <i). In a forecasting context, RAG enables us to search forrelevant documents zthat may contain timely information about the forecasting task xnot presentin the training data. Therefore, we propose to equipping our forecasting agent with tools to retrieveup-to-date information beyond its knowledge cutoff.Prior approaches to LLM forecasting LMs have shown strong, interpretable reasoning capabilities[3,2,1], and have seen adoption across a wide variety of domains, even in cognitively demandingtasks. LMs have shown success across a large number of benchmarks, but previous work has shownthat LMs predictly emit memorized training data [ 6,8]. As a result, it seems plausible that success onthese benchmarks merely corresponds to memorization. On the other hand, forecasting has provento be a difficult task for LLMs. For example, Zou et al. [27] proposed a benchmark to use neuralnetworks to automate prediction in prediction markets; the authors found that while language modelscan be trained to improve their performance on forecasting tasks, their accuracy remains significantlybelow those of human experts.Current methods aim to improve the accuracy of LLM forecasting by fine-tuning and scratchpadprompting [ 16,10,24] or ensembling [ 5,18] in an attempt to approach human-level forecasting.However, fine-tuning requires extensive computational resources and dataset preparation, resultingin a costly process that limits scalability, especially for specialized applications such as forecasting.Concurrent work [ 17] benchmarks LLMs’ forecasting capabilities using the GleanGen predictionmarket, an internal tool at Google. However, this approach did not accurately reflect real humancrowd prediction distributions, and it relied on PaLM2 [ 4], which performs worse than GPT models.We propose a zero-shot tool-usage LLM framework without costly fine-tuning and tedious scratchpad-format prompting.Ensembles Leveraging multiple LLM agents has demonstrated strong performance on a varietyof tasks, and improve performance beyond that of a single agent [ 22,13]. Schoenegger et al. [18]has shown that taking ensemble sizes up to 36 outperforms any individual forecasting agent. Recentwork in tool learning has implemented task planning and execution with separate agents [ 21,19].We propose bridging this gap with a hierarchical structure to facilitate cooperation between high-level reasoning and low-level execution agents, and demonstrate that a small ensemble suffices forhuman-level performance without costly fine-tuning and tedious scratchpad format prompting.23 Reasoning and Tools for ForecastingForecasting is a complex task solving environment, for which we would like to where we leverage afrozen LM pθas reasoning. Successful forecasting agents rely on the most up-to-date information,and accordingly operate as agents that collect observations ot∈ O and take actions at∈ A .The observation space Ois natural language, as collected from the prompt itself or informationon the internet. The agent’s actions are distributed according to at∼π(at|ct), where ct=(o1,a1, . . . ,ot−1,at−1)is the context to the agents.Our proposed approach πsatisfies the following criteria:•It issimple, scalable, and time-invariant . As we consider different datasets of forecastingquestions or language models at least as capable as the current state-of-the-art, we wouldlike our approach to work at least as well.•It can produce comprehensive responses through zero-shot prompting from factual informa-tion, which can be used to reliably support decision-making in downstream scenarios.•These responses should be consistent , i.e. they should correctly synthesize the up-to-dateinformation the model collects.It’s shown that CoT prompting, even with in-context examples, can iteratively hallucinate to produceincorrect responses on complex tasks [ 25]. CoT satisfies (i) but neither (ii) nor (iii). We find thatCoT’s lack of interaction with the environment (i.e. sole reliance on its training data) limits itsreasoning abilities and over-emphasizes irrelevant information. Yao et al. [25] proposes ReAct forthis setting: A={search ,lookup ,finish }, and observations otfrom search andlookup arecollected from O ⊆ Wikipedia web API. The context is then augmented a thought ˆ at∼pθ(ˆ at|ct)that composes information about the existing context. This method has shown to significantly enhancethe model’s ability to refine its responses continuously, reducing the likelihood of erroneous outputsdue to lacking critical context information. Vanilla ReAct satisfies (i); as part of our framework, weshow that it can additionally satisfy (ii) and (iii).Hierarchical planning We define πby an aggregate of a collection of hierarchical ReAct agentswith tools for real-time data retrieval and simulation, expanding π’s observations otcollected fromO ⊆ Google Search API and Python interpreter.We propose hierarchical ReAct planning, where a LM agent acts as a high-level planner for handlingabstract logic and forecasting principles based on the outputs collected from the low-level agents(Figure 1). When LLMs handle API directly with individual agents, it can consume a large portion ofthe context window. We delegate the reasoning and API calling to specialized agents to enhancesefficiency, conserves tokens, and allows for more complex operations. The high-level agent interactswith the low-level agent by invoking it as a tool. We wrap API tools with another ReAct agent toform the low-level agent, which significantly increases API call success rates due to its self-correctionmechanism [25]. Both classes of agents are implemented with GPT-4o backbones.4 Experiments4.1 SetupModels and data Jin et al. [11], Zou et al. [26] have proposed forecasting benchmarks to assessmodels’ forecasting abilities, simulating forecasting by leveraging that models are only trained up toa cutoff date. However, these benchmarks, consisting of questions that resolved in 2022, are nowoutdated for evaluating the performance of models such as GPT-4o due to answer leakage in trainingdata (knowledge cutoff October 2023; see Appendix A.1).We curated the dataset on April 15, 2024, when we scraped the platform for questions resolvingwithin the next two weeks and corresponding human crowd predictions. We then filtered out vaguequestions, and ran every prediction method on these questions, enabling a fair comparison betweeneach method and the human crowd. To prevent answer leakage from the Google API, we set thesearch range to prior to this date. Our final dataset consisted of 201 questions spanning across 9diverse categories (see Apendiex B).3None of our baselines have direct access to prediction market data, and empirically we found thatthis information was never scraped via Google search. That is, the prediction given by the ensembleof agents relies on only the agents themselves, with no human crowd influence. (By contrast, ifdeployed in the real world, this approach could benefit from incorporating the current human crowdperformance as an input to the prediction due to the wisdom-of-crowds effect. Indeed, we observe inour experiments that human crowds are fairly well-calibrated.)Performance metrics Ournforecasting questions have true outcomes oi∈ {0,1}and probabilisticforecasts fi∈[0,1]. We evaluate our forecasts using Brier scores [ 7], i.e.1nPni=1(fi−oi)2, andaccuracy, i.e.1nPni=11{1{fi>0.5}=oi}.2 3In case LMs decline to give numerical answers, thequestion is dropped over all methods when evaluating scores.Baselines In Table 1, we compare RTF ensemble to multiple baselines: (a) crowd scores givenby the current traded values on Manifold Markets (see Appendix A.2), (b) scratchpad prompting,ensemble, and fine-tuning [10], and (c) base models from different providers.4.2 Results and ObservationsTable 1 demonstrates that RTF significantly improves over CoT and scratchpad with fine-tuning.We also achieve comparable Brier score (0.169 vs. 0.172) and superior accuracy (73.9% vs. 73.8%)compared to human predictors using the median and mean of our ensemble, respectively.We also demonstrate that ensembles for RTF yield better performance than individual agents (Brier0.169 vs. 0.180). However, this is not the case for base LMs (Brier 0.218 vs. 0.210 for GPT-4o).Base LMs tend to produce higher-variance outputs (standard deviation in ensemble size 4 of 0.150)compared to our better-calibrated ReAct agents (standard deviation in ensemble size 3 of 0.092),which satisfied (iii) as defined in Section 3.Ensembles only contribute to the final performance if each ensemble member is already sufficientlycalibrated. Indeed, Brier scores given by randomly sampling our ReAct ensemble outputs, “ReactSampled” in the table, achieved a score of 0.180, far better than was achieved by any of the basemethods (which, aside from GPT-4o, perform worse than guessing 0.5 every time by Brier score).Ablation study To demonstrate the effectiveness of our introduced components, we conduct theablation study. We showed each component is necessary for the fully functioning RTF framework.•ReAct: RTF itself without adequate guidance from ReAct struggles to properly use the toolsprovided by our low-level agents, which leads to misguided lines of reasoning that cascadedownstream. This is consistent with the observation (B) in [ 25], where groundedness andtrustworthiness come at the cost of higher reasoning error rates.•Hierarchical Planning: Empirically, without the cooperation of high- and low-level agents,a single agent fails to call APIs and perform necessary reasoning, as it exhausted available to-kens on API schemas. In our experiments, the single-agent approach frequently encounteredtime-out errors or exceeded rate limits when handling complex queries.Qualitative analysis While the baselines systematically evaluate multiple considerations, they donot consider interactions between these considerations. Empirically, we find in our samples thatthe prompting style we present is useful in generating a wide variety of arguments and providingreasonable estimates for how to weight each of those arguments. On the other hand, we see that thissame prompt GPT-4o directly does this calibration in a sequential manner to update its final estimate,which may result in over- or under-estimate based on the recency of its considerations. In general, wefind that RTF yield human-like reasoning trajectories, showing the robustness of interactive decisionmaking, supporting goal (ii) from Section 3 (see Appendix D).Calibration index In Table 2, we evaluate our methods by calibration index, which comparesbinned forecast probabilities to observed outcomes. A well-calibrated model means that if a forecast2The optimal strategy to minimize Brier scores is to forecast fi=P(oi= 1) , so this scoring metric isunbiased. It is typical to compare Brier scores to 0.25, which can be achieved by fi= 0.5for all i.3Accuracy denotes whether fiandoiare on the same side of 0.5.4Table 2: Calibration index for different methods.Method Crowd ReAct Mean ReAct Median ReAct GPT-4o GPT-4 Llama 3Calibration Index 0.0101 0.0129 0.0137 0.0164 0.0194 0.0290 0.0301predicts an event with a certain probability, the event should occur approximately that fraction of thetime over many predictions.We calculate the calibration index asCI=1NPKk=1Nk(fk−ok)2,where Nis the total number of forecasts, Nkis the number of forecasts in bin k,fkis the meanforecast probability in bin k, andokis the observed probability with which events occur in bin k. Weselect bins as the K-quantiles of the forecasts.Comparing GPT-4o and React Mean, we see a significant decrease in calibration index (0.0194 vs.0.0129), which shows that ensembling with ReAct not only increases forecasting accuracy, but alsomore accurately measures the specific magnitudes with which events occur.5 ConclusionWe present Reasoning and Tools for Forecasting, a framework to leverage LMs’ reasoning capabilitiesby interacting with the latest information. It is competitive with the predictive capabilities of humanforecasters on forecasting platforms. The RTF synthesizes information through a structured decision-making process, ensuring that the predictions are both current and relevant. Additionally, whileprevious work has shown that ensembling can improve prediction accuracy, a carefully calibratedsmaller set of models is often more cost-effective than larger ensembles. By advancing LMs’ abilitiesto reason and dynamically interact with new data, RTF offers a robust tool for real-world decision-making for tasks like forecasting.Limitations The evaluation dataset is based on prediction market data and popular questions ratherthan domain-specific questions. This facilitates a comparison with crowd prediction performance, butmay not fully capture the nuances of more specialized domains. In addition, the Google Search APIretrieves a list of URL links, titles, and text snippets, but lacks nuanced, context-specific understanding.As we can observe qualitatively (see Appendix D.2), interpreting incoherent observations from theAPI can be challenging.Acknowledgments We appreciate the inspiration from Prof. Jacob Steinhardt’s amazing forecastingclass at UC Berkeley. We thank OpenAI for granting API credits.References[1]Introducing the next generation of Claude. URL https://www.anthropic.com/news/claude-3-family .[2] Hello GPT-4o. URL https://openai.com/index/hello-gpt-4o/ .[3] Introducing OpenAI o1, Sept. 2024. URL https://openai.com/o1/ .[4]R. Anil, A. M. Dai, and O. Firat. Palm 2 technical report, 2023. URL https://arxiv.org/abs/2305.10403 .[5]A. Bassamboo, R. Cui, and A. Moreno. Wisdom of crowds: Forecasting using predictionmarkets. In Technical Report . Working paper, Kellogg School of Management, NorthwesternUniversity, 2018.[6]S. Biderman, U. Prashanth, L. Sutawika, H. Schoelkopf, Q. Anthony, S. Purohit, and E. Raff.Emergent and predictable memorization in large language models. Advances in Neural Infor-mation Processing Systems , 36, 2024.5[7]G. W. Brier. Verification of forecasts expressed in terms of probability. Monthly weather review ,78(1):1–3, 1950.[8]N. Carlini, D. Ippolito, M. Jagielski, K. Lee, F. Tramer, and C. Zhang. Quantifying memorizationacross neural language models, 2023. URL https://arxiv.org/abs/2202.07646 .[9]M. Dubé, A. Kaba, T. Cronin, S. Barnes, T. Fuselli, and V . Grant. COVID-19 pandemicpreparation: using simulation for systems-based learning to prepare the largest healthcareworkforce and system in Canada. Advances in Simulation , 5:22, Aug. 2020. ISSN 2059-0628.doi: 10.1186/s41077-020-00138-w. URL https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7432586/ .[10] D. Halawi, F. Zhang, C. Yueh-Han, and J. Steinhardt. Approaching human-level forecastingwith language models, 2024.[11] W. Jin, R. Khanna, S. Kim, D.-H. Lee, F. Morstatter, A. Galstyan, and X. Ren. Forecastqa: Aquestion answering challenge for event forecasting with temporal text data, 2021.[12] P. Lewis, E. Perez, A. Piktus, F. Petroni, V . Karpukhin, N. Goyal, H. Küttler, M. Lewis, W. tauYih, T. Rocktäschel, S. Riedel, and D. Kiela. Retrieval-augmented generation for knowledge-intensive nlp tasks, 2021.[13] Z. Liu, Y . Zhang, P. Li, Y . Liu, and D. Yang. Dynamic llm-agent network: An llm-agentcollaboration framework with agent team optimization, 2023. URL https://arxiv.org/abs/2310.02170 .[14] M. Markets. Maniswap, 2022. URL https://manifoldmarkets.notion.site/Maniswap-ce406e1e897d417cbd491071ea8a0c39 .[15] Metaculus. Wisdom of the Crowd vs. the Best of the Best of theBest, Apr. 2023. URL https://www.metaculus.com/notebooks/15760/wisdom-of-the-crowd-vs-the-best-of-the-best-of-the-best/ .[16] M. Nye, A. J. Andreassen, G. Gur-Ari, H. Michalewski, J. Austin, D. Bieber, D. Dohan,A. Lewkowycz, M. Bosma, D. Luan, et al. Show your work: Scratchpads for intermediatecomputation with language models. arXiv preprint arXiv:2112.00114 , 2021.[17] S. Pratt, S. Blumberg, P. K. Carolino, and M. R. Morris. Can language models use forecastingstrategies?, 2024. URL https://arxiv.org/abs/2406.04446 .[18] P. Schoenegger, I. Tuminauskaite, P. S. Park, and P. E. Tetlock. Wisdom of the silicon crowd:Llm ensemble prediction capabilities rival human crowd accuracy, 2024.[19] Z. Shi, S. Gao, X. Chen, Y . Feng, L. Yan, H. Shi, D. Yin, P. Ren, S. Verberne, and Z. Ren.Learning to use tools via cooperative and interactive agents, 2024. URL https://arxiv.org/abs/2403.03031 .[20] K. Shuster, S. Poff, M. Chen, D. Kiela, and J. Weston. Retrieval augmentation reduceshallucination in conversation, 2021.[21] Y . Song, W. Xiong, D. Zhu, W. Wu, H. Qian, M. Song, H. Huang, C. Li, K. Wang, R. Yao,et al. Restgpt: Connecting large language models with real-world restful apis. arXiv preprintarXiv:2306.06624 , 2023.[22] Y . Talebirad and A. Nadiri. Multi-agent collaboration: Harnessing the power of intelligent llmagents, 2023. URL https://arxiv.org/abs/2306.03314 .[23] R. Webby and M. O’Connor. Judgemental and statistical time series forecasting: a review ofthe literature. International Journal of Forecasting , 12(1):91–118, 1996. ISSN 0169-2070. doi:https://doi.org/10.1016/0169-2070(95)00644-3. URL https://www.sciencedirect.com/science/article/pii/0169207095006443 . Probability Judgmental Forecasting.[24] Q. Yan, R. Seraj, J. He, L. Meng, and T. Sylvain. Autocast++: Enhancing world event predictionwith zero-shot ranking-based context retrieval, 2024.6[25] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y . Cao. React: Synergizingreasoning and acting in language models, 2023.[26] A. Zou, T. Xiao, R. Jia, J. Kwon, M. Mazeika, R. Li, D. Song, J. Steinhardt, O. Evans, andD. Hendrycks. Forecasting future world events with neural networks, 2022.[27] A. Zou, T. Xiao, R. Jia, J. Kwon, M. Mazeika, R. Li, D. Song, J. Steinhardt, O. Evans, andD. Hendrycks. Forecasting future world events with neural networks, 2022. URL https://arxiv.org/abs/2206.15474 .7A Models and Knowledge AccuracyA.1 ModelsTable 3: Models we evaluatedModel Knowledge Cutoff Evaluation CostGPT-4o Oct 2023 $0.005/1K tokensGPT-4-Turbo Apr 2023 $0.01/1K tokensGPT-3.5-Turbo Sep 2021 $0.0005/1K tokensLlama-3-7B Mar 2023 One GPUWe list the details of models we evaluated in Table 3, where the cutoffs have been retrieved fromthe model cards. For GPT models, we run them via the OpenAI API. We host Llama-3-7B on asingle NVIDIA TITAN RTX 24GB via Ollama for roughly 0.5 GPU-hours. All other approaches arerun through the OpenAI API, for roughly 1 hour per naive baseline, 6 hours for our reproduction of[10], and 2.5 hours for our proposed method. For GPT models, we use temperature 0.5 for all theexperiments.[10] finds that GPT-3.5 and GPT-4 do not have leakage due to post-training. We find that the same istrue of GPT-4o and Llama-3-7B: prompting with “Answer this question without searching the web:Who was appointed to the Governor-General of Australia in 2024?” yielded a statement about itscutoff date, whereas the correct answer was given when prompted for the answer in 2019.A.2 Crowd PredictionsOn Manifold Markets, players make bets on the outcome of various events where the prices of betsare determined by a current aggregate of crowd predictions, which are prices in [0,1]. As bets aremade, the prices are adjusted by their automated market-makers [ 14]. As shown in [ 15], the crowdprediction is a strong baseline and consistently outperform top forecaster in the prediction market.B DatasetB.1 QuestionsOur final dataset consisted of 201 questions from Manifold Markets. These question were all resolvedafter April 15, 2024, which was the knowledge cutoff date for our low-level agent supporting theGoogle Search API. We include a subset of the dataset for reference.From Manifold Markets, we initially filtered for questions that resolve between April 16, 2024 andMay 15, 2024, inclusive. Then, to improve the quality of our questions, we filtered the questionthrough prompting GPT-4 to return “Yes” if (i) it is possible to respond to the question with a yes orno answer and (ii) the question asks about an external event from the perspective of the majority ofaskers and respondents, and “No” otherwise. Finally, after the markets have resolved, we re-collectdata using the API to extract the answers and compute Brier scores and accuracies. In the future,researchers can use our questions data for forecasting using LMs and information retrieval tools withcutoff dates before April 15, 2024 (see Section 4.1).B.2 Knowledge Evaluation by CategoryWe show the diversity of our dataset in Table 4, with categories determined by GPT-3.5. Due tothe popularity of Economics & Business and Politics & Governance questions on the forecastingplatform, we have higher proportions of data in those categories.8Table 4: Category frequenciesCount CategoryEconomics & Business 68Politics & Governance 34Science & Tech 29Arts & Recreation 29Sports 16Security & Defense 13Healthcare & Biology 5Environment & Energy 4Social Sciences 3Total 201C Forecasting Principles GuidanceOur system prompt requested that the output satisfy elementary forecasting principles. Due toits length, the full prompt will be released along with the codebase and dataset. The forecastingprinciples are as follows:•You may find relevant time series data online. If the data is recent, you can use zeroth-order orfirst-order approximations (but for longer time horizons, recall that these may break down due tosaturation effects).•Don’t ignore base rates for low-probability events that haven’t happened, but rather attempt to estimatethose probabilities to inform your estimate.•Attempt to approximate the same quantity in multiple different ways. You are welcome to split yourreasoning into multiple sections, where in each section you describe how you reached an estimateindependently. If your estimates in these sections are very far off, reason through why the estimates areso different, potentially update those estimates, and theorize how confident you are in each estimate.Then, combine those forecasts using a weighted or trimmed mean or median.•Some data can be modeled well using a common distribution, e.g., normal, log-normal, power law, orPoisson distribution (but this is by no means a guarantee). If there’s a good reason to believe this is thecase, you should use information about those distributions.D Prompts and WorkflowsDue to length, we have only included one full sample below. In general, we observe that RTF yieldsmore robust CoT-style outputs, whereas base LMs use more linear reasoning that may result inoutputs unnecessarily biased towards specific considerations.D.1 PromptWill ETH close above 3700?Background : NoneResolution criteria : NoneClosure time : April 30, 2024For reference, this prompt resolved as NO (0).D.2 RTF Output[ Entering new AgentExecutor chain ...]To forecast the probability that ETH ( Ethereum ) will close above $3700 on April 30, 2024 , I’ll follow astructured approach :1. ** Reference Classes and Base Rates **:- Historical price trends of Ethereum .- Historical performance of similar cryptocurrencies .- General market conditions and trends in the cryptocurrency market .92. ** Recent News and Distinctive Features **:- Recent developments in Ethereum technology .- Regulatory news affecting cryptocurrencies .- Broader economic conditions impacting the cryptocurrency market .3. ** Statistical Modeling **:- Use historical price data to model potential future price movements .- Consider volatility and distribution patterns typical for Ethereum .### Step -by - Step Analysis#### 1. Reference Classes and Base RatesFirst , let ’s look at the historical price data of Ethereum . We want to understand how often Ethereum hasclosed above $3700 in the past and under what conditions it has done so.Thought : Do I need to use a tool ? YesAction : Google Search SnippetsAction Input : " historical price data of Ethereum "To forecast the probability that ETH ( Ethereum ) will close above $3700 on April 30, 2024 , I’ll follow astructured approach :1. ** Reference Classes and Base Rates **:- Historical price trends of Ethereum .- Historical performance of similar cryptocurrencies .- General market conditions and trends in the cryptocurrency market .2. ** Recent News and Distinctive Features **:- Recent developments in Ethereum technology .- Regulatory news affecting cryptocurrencies .- Broader economic conditions impacting the cryptocurrency market .3. ** Statistical Modeling **:- Use historical price data to model potential future price movements .- Consider volatility and distribution patterns typical for Ethereum .### Step -by - Step Analysis#### 1. Reference Classes and Base RatesFirst , let ’s look at the historical price data of Ethereum . We want to understand how often Ethereum hasclosed above $3700 in the past and under what conditions it has done so.Thought : Do I need to use a tool ? YesAction : Google Search SnippetsAction Input : " historical price data of Ethereum "Ethereum Price History . Daily . Weekly . Monthly . 3/28 ... Historical Data . Frequently Asked Questions ( FAQs )... What Does Ethereum ’s Market Cap History Reveal About ... Jan 8, 2023 ... Price history of ETH : OnAugust 7, 2015 , the first Ethereum transaction was carried out using the transaction hash of block46147. Since its ... Ethereum ’s ( ETH ) highest price in late 2021 was much higher than any other priceseen in a chart that looks at the previous 5 years as well as 2023. Track the latest Ethereum price ,market cap , trading volume , news and more with CoinGecko ’s live ETH price chart and popular ...Features real - time ( live ) charts , ethereum blockchain , news and videos . Learn about ETH , the currenthashrate , crypto trading and more . Dec 18, 2016 ... There is no canonical ETH or BTC price , eachexchange has its own . Exchange may or may not collect historical data , and granularity of such data... Sep 1, 2017 ... Ethereum price history API ? I am working on a personal project to develop graphsof the 7 day , 1 month , 3 month , etc . price history of ethereum . Jan 30, 2022 ... ... ethereum - get ...ETH price as an integer on an IoT device like the ESP8266 3 Read all historical price data of aChainlink price feed in ... Mar 21, 2023 ... Historical tick and granular OHLC price data for Ethereumtokens based on DEX swaps . Low resolution ( can only get hourly or daily price data ... Ethereum PriceHistory . Crypto prices are volatile in the sense that they can explode to the upside or blow up yourlife savings . Since its launch , ETH has made ...] The search results indicate that there are multiplesources for historical Ethereum price data . For a thorough analysis , we ’ll need to pull specifichistorical data points . Let ’s proceed by gathering detailed historical price data for Ethereum .#### 1. Reference Classes and Base Rates ( Continued )To get a sense of the base rate , we need to know how often Ethereum has closed above $3700 in the past andunder what conditions .Thought : Do I need to use a tool ? YesAction : Google Search SnippetsAction Input : " Ethereum historical price data 2021 2022 2023"The search results indicate that there are multiple sources for historical Ethereum price data . For athorough analysis , we ’ll need to pull specific historical data points . Let ’s proceed by gatheringdetailed historical price data for Ethereum .#### 1. Reference Classes and Base Rates ( Continued )To get a sense of the base rate , we need to know how often Ethereum has closed above $3700 in the past andunder what conditions .Thought : Do I need to use a tool ? YesAction : Google Search SnippetsAction Input : " Ethereum historical price data 2021 2022 2023"Ethereum ’s ( ETH ) highest price in late 2021 was much higher than any other price seen in a chart that looksat the previous 5 years as well as 2023. Jan 8, 2023 ... Ethereum was first conceptualized in 2013.Ether ( ETH ) is the native cryptocurrency of Ethereum . On November 9, 2021 , Ethereum briefly attained... Frequently Asked Questions ( FAQs ). How Has Ethereum ’s Price History Influenced the CryptocurrencyMarket ? Nov 18, 2022 ... As one of the more innovative crypto projects , Ethereum has seen a historiclow of about 42 cents and a high of about $4800 in November 2021... Jan 9, 2022 ... A friend of minerecently asked me about finding historical open , high , low , and close ( OHLC ) data for cryptocurrencypairs . Jul 27, 2023 ... Source : https :// coinmarketcap . com / currencies / ethereum / historical - data / ...Over the past two years ( July 9, 2021 to July 9, 2023) ... 2022 , when ETH ... ETH Historical Price ;10$3 ,747.13 minus $3 ,875.64. $3 ,664.40 minus $3 ,959.28. $4 ,878.26 22.8%. Nov 10, 2021 ( over 2 years ).$0 .433 869871.6%. Oct 20... Historical annualised electricity consumption ... 2021 2022 2023 2024 040 80 120 160 200 240 280 ... The electricity cost parameter plays a crucial role in ... 2021 to 2023.Bitcoin took less than a month in 2021 to smash its 2020 price record , surpassing $40 ,000 by Jan . 7,2021. By mid -April , Bitcoin prices reached ... May 29, 2023 , 20.80 , 20.97 , 20.30 , 20.59 , 20.59 ,228 ,295 ,637. Related Tickers . BTC - USD Bitcoin USD . 68 ,466.40. -1.09%. ETH - USD Ethereum USD . 3 ,843.49.-0.85%.] The search results indicate that Ethereum reached its all - time high in November 2021 , withprices briefly surpassing $4800 . For a thorough analysis , we need to look at historical data pointsfrom 2021 , 2022 , and 2023 to understand the frequency and conditions under which Ethereum has closedabove $3700 .### Historical Price Data Analysis#### 1. Reference Classes and Base Rates ( Continued )Let ’s consider the following data points :- ** All - Time High ( ATH ) **: Ethereum reached an ATH of approximately $4800 in November 2021.- **2021**: Ethereum frequently traded above $3700 during the latter part of the year , especially aroundits ATH .- **2022**: The cryptocurrency market experienced significant volatility , and Ethereum ’s price droppedbelow $3700 for a considerable period .- **2023**: The price of Ethereum has been fluctuating , with occasional spikes but generally trading below$3700 .Based on this historical data , we can establish a base rate for Ethereum closing above $3700 . However , weneed to adjust for recent trends and market conditions .#### 2. Recent News and Distinctive Features- ** Technological Developments **: Ethereum ’s transition to Ethereum 2.0 and the implementation of Proof ofStake ( PoS ) could influence its price positively .- ** Regulatory Environment **: Increased regulation of cryptocurrencies could introduce uncertainty andaffect market sentiment .- ** Macroeconomic Conditions **: Broader economic factors , such as inflation and interest rates , can impactinvestment in cryptocurrencies .#### 3. Statistical ModelingTo model Ethereum ’s potential future price movements , we need to consider its historical volatility anddistribution patterns . Cryptocurrencies often exhibit high volatility , and their price movements canbe modeled using distributions like log - normal or power law .### Combining EstimatesLet ’s combine our estimates from different approaches :1. ** Base Rate Estimate **:- During the peak of the 2021 bull market , Ethereum frequently traded above $3700 . Assuming a 30% chancebased on historical data .2. ** Recent Trends and News **:- Positive technological developments and potential regulatory impacts . Adjusting the base rate slightlyupwards to 35%.3. ** Statistical Modeling **:- Considering the high volatility and potential for significant price movements , we can use a log - normaldistribution to model future prices . However , without precise modeling tools , we estimate a 40%chance based on volatility patterns .### Final Probability EstimateLet ’s combine these estimates using a weighted average approach :- Base Rate : 30%- Recent Trends and News : 35%- Statistical Modeling : 40%Weighted Average Calculation :\[\ text { Final Probability } = \ frac {30\% + 35\% + 40\%}{3} = 35\%\]Thought : Do I need to use a tool ? NoFinal Answer : 0.35 The search results indicate that Ethereum reached its all - time high in November 2021 ,with prices briefly surpassing $4800 . For a thorough analysis , we need to look at historical datapoints from 2021 , 2022 , and 2023 to understand the frequency and conditions under which Ethereum hasclosed above $3700 .### Historical Price Data Analysis#### 1. Reference Classes and Base Rates ( Continued )Let ’s consider the following data points :- ** All - Time High ( ATH ) **: Ethereum reached an ATH of approximately $4800 in November 2021.- **2021**: Ethereum frequently traded above $3700 during the latter part of the year , especially aroundits ATH .- **2022**: The cryptocurrency market experienced significant volatility , and Ethereum ’s price droppedbelow $3700 for a considerable period .- **2023**: The price of Ethereum has been fluctuating , with occasional spikes but generally trading below$3700 .Based on this historical data , we can establish a base rate for Ethereum closing above $3700 . However , weneed to adjust for recent trends and market conditions .#### 2. Recent News and Distinctive Features- ** Technological Developments **: Ethereum ’s transition to Ethereum 2.0 and the implementation of Proof ofStake ( PoS ) could influence its price positively .11- ** Regulatory Environment **: Increased regulation of cryptocurrencies could introduce uncertainty andaffect market sentiment .- ** Macroeconomic Conditions **: Broader economic factors , such as inflation and interest rates , can impactinvestment in cryptocurrencies .#### 3. Statistical ModelingTo model Ethereum ’s potential future price movements , we need to consider its historical volatility anddistribution patterns . Cryptocurrencies often exhibit high volatility , and their price movements canbe modeled using distributions like log - normal or power law .### Combining EstimatesLet ’s combine our estimates from different approaches :1. ** Base Rate Estimate **:- During the peak of the 2021 bull market , Ethereum frequently traded above $3700 . Assuming a 30% chancebased on historical data .2. ** Recent Trends and News **:- Positive technological developments and potential regulatory impacts . Adjusting the base rate slightlyupwards to 35%.3. ** Statistical Modeling **:- Considering the high volatility and potential for significant price movements , we can use a log - normaldistribution to model future prices . However , without precise modeling tools , we estimate a 40%chance based on volatility patterns .### Final Probability EstimateLet ’s combine these estimates using a weighted average approach :- Base Rate : 30%- Recent Trends and News : 35%- Statistical Modeling : 40%Weighted Average Calculation :\[\ text { Final Probability } = \ frac {30\% + 35\% + 40\%}{3} = 35\%\]Thought : Do I need to use a tool ? NoFinal Answer : 0.35][ Finished chain .]prompt : 2024 -04 -30: Will ETH close above 3700?response : 0.35 , index : 6D.3 GPT-4o OutputTo forecast whether ETH ( Ethereum ) will close above $3700 by April 30, 2024 , we need to consider severalfactors , including historical price data , market trends , and current events that could influence theprice . Let ’s break down the analysis step -by - step .### Step 1: Historical Data and Base RatesFirst , we need to look at the historical price data for ETH to understand its past performance andvolatility . We can use this data to establish a base rate for how often ETH has closed above $3700 inthe past .#### Historical Price Data- ETH has been highly volatile , with significant price swings over the past few years .- ETH reached an all - time high of around $4800 in November 2021.- Since then , the price has fluctuated , with periods of both significant gains and losses . Let ’s assume wehave the following historical data for ETH closing prices over the past few years : - 2021: ETHclosed above $3700 on 50 days .- 2022: ETH closed above $3700 on 20 days .- 2023 (up to October ): ETH closed above $3700 on 10 days . Given this data , we can calculate the base ratefor ETH closing above $3700 .#### Base Rate Calculation Total number of days in the dataset : 365 (2021) + 365 (2022) + 273 (2023 up toOctober ) = 1003 days Total number of days ETH closed above $3700 : 50 (2021) + 20 (2022) + 10 (2023) =80 days Base rate = 80 / 1003 approximate to 0.08 (8%)### Step 2: Current Market Trends and NewsNext , we need to consider the current market trends and any recent news that could impact the price of ETH .#### Market Trends- The cryptocurrency market has been experiencing increased institutional interest and adoption .- Regulatory developments could either positively or negatively impact the market .- Macroeconomic factors , such as inflation and interest rates , could influence investor sentiment . ####Recent News - Ethereum ’s transition to a proof -of - stake ( PoS ) consensus mechanism ( Ethereum 2.0)could have a significant impact on its price .- Any major security breaches or technological advancements could also affect the price . ### Step 3:Adjusting the Base Rate Based on the current market trends and recent news , we need to adjust ourbase rate . #### Positive Factors- Increased institutional interest and adoption could drive the price higher .- The successful implementation of Ethereum 2.0 could boost investor confidence .12#### Negative Factors- Regulatory crackdowns could negatively impact the price .- Macroeconomic factors such as rising interest rates could lead to reduced investment in riskier assetslike cryptocurrencies .### Step 4: Combining EstimatesWe can combine our base rate with adjustments for current trends and news to arrive at a final probabilityestimate .#### Adjusted Probability- Base rate : 8%- Positive adjustment : +5% ( for increased adoption and Ethereum 2.0)- Negative adjustment : -3% ( for potential regulatory and macroeconomic risks )Final adjusted probability = 8% + 5% - 3% = 10%### Step 5: Final Probability EstimateAfter considering all factors , my best prediction for the probability of ETH closing above $3700 by April30, 2024 , is: **0.10 (10%) **This estimate balances the historical base rate with current market trends and potential risks , aiming tominimize the Brier score by being well - calibrated and resolved .13 |
fn0pQWkFsP | SBI-RAG: Enhancing Math Word Problem Solving forStudents through Schema-Based Instruction andRetrieval-Augmented GenerationPrakhar DixitDepartment of Computer ScienceUniversity of Maryland Baltimore [email protected] OatesDepartment of Computer ScienceUniversity of Maryland Baltimore [email protected] students struggle with math word problems (MWPs), often finding it difficultto identify key information and select the appropriate mathematical operations.Schema-based instruction (SBI) is an evidence-based strategy that helps studentscategorize problems based on their structure, improving problem-solving accuracy.Building on this, we propose a Schema-Based Instruction Retrieval-AugmentedGeneration (SBI-RAG) framework that incorporates a large language model (LLM).Our approach emphasizes step-by-step reasoning by leveraging schemas to guidesolution generation. We evaluate its performance on the GSM8K dataset, compar-ing it with GPT-4 and GPT-3.5 Turbo, and introduce a "reasoning score" metricto assess solution quality. Our findings suggest that SBI-RAG enhances reason-ing clarity and facilitates a more structured problem-solving process potentiallyproviding educational benefits for students.1 IntroductionProficiency in solving math word problems (MWPs) is not only measured by students’ ability toarrive at the correct solution but also by their capacity to follow a structured, step-by-step reasoningprocess [ 8]. This approach is vital for developing critical thinking and mathematical reasoningabilities, which are essential for tackling complex word problems effectively [ 18]. Unfortunately,many students struggle with word problems, often failing to identify key information or select theappropriate operations despite understanding the underlying mathematical concepts. This difficultyis a significant barrier to academic success, as highlighted by a survey from the EdWeek ResearchCenter, which reported that nearly 50% of students can read a word problem’s text but fail to graspthe mathematical question being asked [ 24]. Consequently, poor problem-solving skills in MWPscan lead to academic challenges and even failure in school.Word problems, as a core component of the mathematics curriculum, serve an important functionby fostering logical analysis, mental abilities, and creative thinking. Educators and researchers haveexplored various methods to improve students’ proficiency in solving these problems. One suchmethod is Schema-Based Instruction (SBI) [ 27,7], an evidence-based approach widely used in thefield of Mathematics that helps students classify word problems based on their underlying structure orschema. SBI has been shown to enhance students’ ability to identify relevant information and applythe appropriate mathematical operations for problem-solving. In addition to educational approacheslike SBI, Intelligent Tutoring Systems (ITSs) have emerged as valuable tools in addressing challengesassociated with MWPs. ITSs leverage artificial intelligence (AI) and interactive interfaces to providepersonalized, step-by-step guidance. Examples of ITSs designed for word problem-solving includeAnimalWatch [ 2], MathCAL [ 4], PAT (Pump Algebra Tutor) [ 15] and HINTS [ 35]. These systemshave proven effective in supporting learners by offering feedback, hints, and individualized learningpaths. However, many ITSs rely on rule-based algorithms and lack the transformative potentialof more recent AI advancements, like those in Natural Language Processing (NLP) [ 19] and the38th Conference on Neural Information Processing Systems (NeurIPS 2024).development of Large Language Models (LLMs), such as ChatGPT [ 3], LLaMA 2 [ 28] and Gemini[26].LLMs exhibit emergent abilities, such as understanding linguistic nuances, making logical inferences,and decomposing tasks into simpler steps, which can be harnessed to scaffold students’ learningin math-related tasks [ 12]. However, while LLMs like GPT-4 and others can generate intermediatesteps through approaches like Chain-of-Thought (CoT) prompting [ 33], this happens primarily atthe prompting level and requires the user to have knowledge of how to effectively structure thethoughts. CoT prompting is highly dependent on precise prompt engineering; poorly designed orunclear prompts can result in irrelevant or inefficient reasoning steps. Additionally, CoT promptingcan sometimes generate illogical chains of thought, exposing gaps in the reasoning process [ 22].Creating effective CoT prompts can be time-consuming and complex, particularly for advanced tasks.In this paper, we propose a Schema-Based Instruction Retrieval-Augmented Generation (SBI-RAG)framework incorporating a Large Language Model (LLM) to assist in solving MWPs. Our systemfirst utilizes a schema classifier, trained on DistilBERT [ 23], to predict the appropriate schema Sifor a given problem P. The identified schema is then used to generate a schema-specific prompt,which retrieves relevant context from a pre-defined document set. The retrieved context, schema, andproblem are passed to an LLM (Ollama Llama 3.1), which generates a detailed, step-by-step solution.Our findings suggest that the schema-guided RAG approach facilitates a more structured problem-solving process, which we believe will lead to improved reasoning and deeper student understandingof MWPs. This framework also forms a pathway for future work, where we can incorporate feedbackfrom teachers and students to refine and adapt the system further, thereby improving reasoning andcritical thinking skills. By leveraging this system in classroom settings, we can iteratively enhance itseffectiveness as both a teaching and learning tool. In summary, our contributions include: a schemaclassifier trained to predict the relevant schema type and subcategory given a math word problem;structured prompt generation based on the predicted schema/subcategory that uses RAG to includeschema-relevant content; a new evaluation metric (the step-by-step reasoning score) to evaluate thequality of the LLM’s reasoning steps; and an LLM-as-a-judge evaluation [36].2 ApproachAs seen in Figure 1, our approach is divided into four main parts: 1) Schema Classifier, 2) PromptCreation, 3) Context Retrieval, and 4) Answer and Response Generation. The training and datasetdetails are described in Appendix C and D, respectively.A schema is a structured framework that represents a generalized method for solving a specific type ofproblem [ 25]. In the context of MWPs, schemas help categorize problems based on their underlyingstructure, making it easier to determine which mathematical operations to use [ 9]. For example,MWPs can often be grouped into two major schemas: Additive and Multiplicative [20].Each schema can be further divided into sub-categories. For instance, the Additive schema caninclude Additive Change (where a value is increased or decreased), Additive Difference (problemsthat focus on the difference between two values), and Additive Total (where two or more quantitiesare combined to get a total) [ 31]. Similarly, the Multiplicative schema can include MultiplicativeComparison (where one quantity is compared to another using multiplication), Multiplicative EqualGroups (where the total is divided into equal parts), and Multiplicative Ratios/Proportions (problemsthat involve finding ratios or proportional relationships) [30].These schemas provide a structured framework for problem-solving, helping both the language modeland learners identify the type of problem and apply the appropriate operations.Building a Schema Classifier: We develop a schema classifier that performs supervised learningto predict the relevant schema ( Si) and sub-category ( Sci) for a given problem ( P). This classifieris built using a DistilBERT model, which has been fine-tuned on a custom dataset of schema-basedinstruction problems.Each problem in the dataset is labelled with its associated schema and sub-category, helping theclassifier learn the relationships between different types of word problems and their correspondingschemas. Specifically, the input problem is tokenized and processed by the DistilBERT model, whichthen outputs the most suitable schema ( Si) and sub-category ( Sci).2The schema classifier is essential because it ensures that the correct problem-solving framework isapplied to each problem. This step forms the foundation for schema-driven problem solving, guidingthe language model to select the appropriate reasoning method.Prompt Creation: Once the schema classifier has predicted the relevant schema ( Si) and sub-category ( Sci), the next stage is prompt creation. This involves generating a structured prompt thatinstructs the system on how to apply the identified schema to solve the problem. The generatedprompt is formulated as follows:"Using the {schema} schema and {sub_category} sub-category, solve the fol-lowing problem: {question}"This prompt guides the language model by ensuring that the problem-solving approach adheres to theappropriate schema.Context Retrieval : After the schema-specific prompt is created, relevant context is retrieved tosupport the problem-solving process. This retrieval is done using a Retrieval-Augmented Generation(RAG) framework [16], which combines document retrieval with prompt-based generation.The generated prompt is embedded, and a cosine similarity search [ 21] is performed within a vectorstore to identify the most relevant documents [ 34]. The vector store contains documents that serveas knowledge resources for the problem-solving process. These documents include explanations ofvarious problem-solving strategies, examples of similar problems, and definitions of key mathematicalconcepts. For instance, a document might explain how to apply the Additive Change schema orprovide sample problems involving ratios and proportions.The document store is particularly useful because it supplies the model with additional knowledge thathelps ensure the problem is solved in a structured, context-driven manner. The retrieved documentsare ranked based on their similarity to the prompt, ensuring that the most relevant information is usedto enhance the problem-solving process [11].Answer and Response Generation: In the final stage, the retrieved context, problem, and schema-specific prompt are passed to the Llama 3.1 LLM [ 29] for generating the answer. The input isstructured by combining the context, schema, and problem, allowing the model to produce a step-by-step solution that incorporates schema-driven reasoning. This ensures that each part of theproblem-solving process is addressed in a structured manner, guiding the learner through the solutiontransparently. The response incorporates relevant contextual information, ensuring that the generatedsolution is both accurate and aligned with the instructional methodology. This schema-informed andcontext-enhanced process improves the transparency and effectiveness of the solution.Figure 1: Illustration of SBI-RAG Architecture3 EvaluationFor evaluating the utility of our approach, we focus on the step-by-step reasoning provided by thegenerated responses, rather than solely on accuracy. Our goal is to ensure that the reasoning processis clear, logical and follows schema-driven methodologies, which helps improve understanding insolving MWPs [ 25]. To address this, we introduce a new metric, the reasoning score, to measurethe quality of the reasoning in the generated solutions. We also evaluate the performance of ourschema classifier and analyze both the training and validation losses, ensuring that it generalizeswell to unseen data. We also make use of the LLM-as-a-Judge approach [ 36] to get feedback andevaluate our response from LLMs like GPT-4 and GPT-3.5 Turbo. This approach is a scalable andexplainable method for approximating human preferences [ 14], which are otherwise costly to obtain.3All experiments were run using Google’s Colab environment with an NVIDIA L4 GPU. More detailson the evaluation and metrics used are given in Appendices D, E, F, and G.Schema Classifier Results: The schema classifier was trained to identify two schema categoriesand three sub-categories: Additive Change, Additive Difference, Additive Total, MultiplicativeComparison, Multiplicative Equal Groups, and Multiplicative Ratios/Proportions. As seen in Figure2 and Figure 3, it achieved high precision, recall, and F1 scores, with an overall accuracy of 97%.The training and validation losses show consistent convergence, indicating effective learning withoutoverfitting, ensuring reliable schema predictions across various problem types.Figure 2: Confusion matrix for theschema classifierFigure 3: Training and validationlosses for the schema classifierReasoning Evaluation We evaluated the reasoning quality by comparing responses generated usingour Schema-Based RAG approach against responses from LLMs like GPT-4 and GPT-3.5 Turbo. Theresponses generated by our system, which incorporates schema-based reasoning, achieved higherscores in reasoning quality. Specifically, the best reasoning scores for SBI-RAG, GPT-4, and GPT-3.5Turbo were 0.588, 0.491, and 0.290, respectively. Paired sample t-tests showed that the differencesbetween the SBI-RAG and the GPT models were significantly different at the 0.05 level (see AppendixE). These results suggest that schema-based reasoning can enhance the overall quality of reasoning,particularly in educational contexts, when compared to responses generated by LLMs alone.LLM-as-a-Judge Results: We implemented the LLM-as-a-Judge approach [ 36] to evaluate thequality of reasoning in the responses generated by both our Schema-Based RAG system and thebaseline LLMs. This method allows for an objective, scalable evaluation by approximating humanjudgment through the use of LLMs. Our LLM-as-a-Judge process involves scoring responses basedon clarity, logical progression, and completeness. Results showed that the Schema-Based RAGapproach consistently outperformed GPT-4 and GPT-3.5 Turbo in terms of reasoning quality.Formore details refer to Appendix G.4 ConclusionDespite the promising results of our Schema-Based Instruction Retrieval-Augmented Generation(SBI-RAG) framework for improving math word problem reasoning, some limitations exist. Thisstudy relies on the LLM-as-a-Judge method, lacking direct human evaluation from educators orstudents, which would provide more informative feedback. The success of the RAG frameworkhinges on the relevance and quality of retrieved documents, which may vary and impact the generatedsolutions. The evaluation focuses on arithmetic word problems (GSM8K). More complex problemdatasets are needed to assess the framework’s generalizability. Finally, extending the framework todifferent subjects or educational levels may present challenges, requiring further adaptation. Theselimitations highlight areas for future research, particularly in improving schema coverage, expandingdataset diversity, and incorporating human evaluations.In conclusion, we proposed a Schema-Based Retrieval-Augmented Generation framework thatenhances reasoning and understanding in solving math word problems. Our approach, combiningschema-based instruction with large language models, outperformed existing LLM responses in4quality and step-by-step reasoning. This framework provides a strong foundation for improvingproblem-solving in education, with future work focused on refining the system with user feedback.Additionally, this work could have applications in enhancing the reasoning capabilities of LLMsthemselves.References[1]Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and HannanehHajishirzi. Mathqa: Towards interpretable math word problem solving with operation-basedformalisms. arXiv preprint arXiv:1905.13319 , 2019.[2]Carole R Beal. Animalwatch: An intelligent tutoring system for algebra readiness. In In-ternational handbook of metacognition and learning technologies , pages 337–348. Springer,2013.[3]Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal,Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, ArielHerbert-V oss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, MateuszLitwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, AlecRadford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.[4]Kuo-En Chang, Yao-Ting Sung, and Shiu-Feng Lin. Computer-assisted learning for mathemati-cal problem solving. Computers & Education , 46(2):140–151, 2006.[5]Sankalan Pal Chowdhury, Vilém Zouhar, and Mrinmaya Sachan. Scaling the authoring ofautotutors with large language models. arXiv preprint arXiv:2402.09216 , 2024.[6]Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and JohnSchulman. Training verifiers to solve math word problems, 2021.[7]Sara Cothren Cook, Lauren W Collins, Lisa L Morin, and Paul J Riccomini. Schema-basedinstruction for mathematical word problem solving: An evidence-based review for studentswith learning disabilities. Learning Disability Quarterly , 43(2):75–87, 2020.[8]Angela L Duckworth and David Scott Yeager. Measurement matters: Assessing personal quali-ties other than cognitive ability for educational purposes. Educational researcher , 44(4):237–251, 2015.[9]Lynn S Fuchs, Douglas Fuchs, Karin Prentice, Carol L Hamlett, Robin Finelli, and Susan JCourey. Enhancing mathematical problem solving among third-grade students with schema-based instruction. Journal of Educational Psychology , 96(4):635, 2004.[10] Arthur C Graesser, Katja Wiemer-Hastings, Peter Wiemer-Hastings, Roger Kreuz, Tutoring Re-search Group, et al. Autotutor: A simulation of a human tutor. Cognitive Systems Research ,1(1):35–51, 1999.[11] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. Retrievalaugmented language model pre-training. In International conference on machine learning ,pages 3929–3938. PMLR, 2020.[12] Joy He-Yueya, Gabriel Poesia, Rose E Wang, and Noah D Goodman. Solving math word prob-lems by combining language models with symbolic solvers. arXiv preprint arXiv:2304.09102 ,2023.[13] Shashank Mohan Jain. Hugging face. In Introduction to transformers for NLP: With the huggingface library and models to solve problems , pages 51–67. Springer, 2022.[14] Jiaming Ji, Mickel Liu, Josef Dai, Xuehai Pan, Chi Zhang, Ce Bian, Boyuan Chen, RuiyangSun, Yizhou Wang, and Yaodong Yang. Beavertails: Towards improved safety alignment of llmvia a human-preference dataset. Advances in Neural Information Processing Systems , 36, 2024.5[15] Kenneth R Koedinger and John R Anderson. Illustrating principled design: The early evolutionof a cognitive tutor for algebra symbolization. Interactive Learning Environments , 5(1):161–179,1998.[16] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, NamanGoyal, Heinrich Küttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, andDouwe Kiela. Retrieval-augmented generation for knowledge-intensive nlp tasks, 2021.[17] Zhenwen Liang, Jipeng Zhang, Lei Wang, Wei Qin, Yunshi Lan, Jie Shao, and XiangliangZhang. Mwp-bert: Numeracy-augmented pre-training for math word problem solving, 2022.[18] Liping Ma. Knowing and teaching elementary mathematics: Teachers’ understanding offundamental mathematics in China and the United States . Routledge, 2010.[19] Prakash M Nadkarni, Lucila Ohno-Machado, and Wendy W Chapman. Natural language pro-cessing: an introduction. Journal of the American Medical Informatics Association , 18(5):544–551, 2011.[20] Sarah R Powell and Lynn S Fuchs. Effective word-problem instruction: Using schemas tofacilitate mathematical reasoning. Teaching exceptional children , 51(1):31–42, 2018.[21] Faisal Rahutomo, Teruaki Kitasuka, Masayoshi Aritsugi, et al. Semantic cosine similarity. InThe 7th international student conference on advanced science and technology ICAST , volume 4,page 1. University of Seoul South Korea, 2012.[22] Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, DipanjanDas, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, and David Reitter. Measuring attribution innatural language generation models. Computational Linguistics , 49(4):777–840, 2023.[23] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled versionof bert: smaller, faster, cheaper and lighter, 2020.[24] Sarah Schwartz. Why word problems are such a struggle for students—andwhat teachers can do. https://www.edweek.org/teaching-learning/why-word-problems-are-such-a-struggle-for-students-and-what-teachers-can-do/2023/05 , 2023. Accessed: 2024-09-17.[25] Susan Stoddard. Schema-based instruction: Culturally linguistically diverse secondary studentswith emotional disorders solving math word problems . PhD thesis, Northern Arizona University,2019.[26] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highlycapable multimodal models. arXiv preprint arXiv:2312.11805 , 2023.[27] The IRIS Center. High-quality mathematics instruction: What teachers should know. https://iris.peabody.vanderbilt.edu/module/math/ , 2017. Accessed: 2024-09-17.[28] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, LukasBlecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes,Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, AnthonyHartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, MadianKhabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, ThibautLavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta,Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiao-qing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, ZhengYan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, AurelienRodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundationand fine-tuned chat models, 2023.[29] Raja Vavekanand and Kira Sam. Llama 3.1: An in-depth analysis of the next-generation largelanguage model, 2024.6[30] Gérard Vergnaud. Multiplicative structures. in (eds.) r. lesh and m. landau: Acquisition ofmathematics concepts and processes, 1983.[31] Gérard Vergnaud. A classification of cognitive tasks and operations of thought involved inaddition and subtraction problems. In Addition and subtraction , pages 39–59. Routledge, 2020.[32] Yan Wang, Xiaojiang Liu, and Shuming Shi. Deep neural solver for math word problems.InProceedings of the 2017 conference on empirical methods in natural language processing ,pages 845–854, 2017.[33] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi,Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large languagemodels, 2023.[34] Minjia Zhang and Yuxiong He. Grip: Multi-store capacity-optimized high-performance nearestneighbor search for vector search engine. In Proceedings of the 28th ACM InternationalConference on Information and Knowledge Management , pages 1673–1682, 2019.[35] Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. Progressive-hint prompt-ing improves reasoning in large language models. arXiv preprint arXiv:2304.09797 , 2023.[36] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica.Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.A Appendix / supplemental materialB Related WorkB.1 Intelligent Tutor SystemIntelligent Tutoring Systems (ITS) [ 10] have been widely adopted to provide personalized learningexperiences for students. ITS systems aim to emulate human tutors by guiding learners throughproblem-solving activities and providing timely feedback. One such system, HINTS [35], focuses onhelping students navigate through mathematical problem-solving by offering hints and scaffolding toimprove their understanding and success rates. The system is designed to foster incremental learningthrough step-by-step hints tailored to the learners’ needs, improving problem-solving skills over time.B.2 MWPTutorMWPTutor [5], a system developed for solving Math Word Problems (MWPs), integrates schema-based instruction into its design. It provides structured guidance to students by breaking down wordproblems into solvable chunks using predefined schemas for addition, subtraction, multiplication, anddivision. The tutor system helps students plan their solutions and execute them using step-by-stepguidance while providing immediate feedback. MWPTutor incorporates interactive elements likehighlighting important information in word problems and constructing solution trees to visualize thesolution path. These features improve students’ understanding of both the problem and the solutionprocess.B.3 MWP-BERTRecent advancements in Math Word Problem (MWP) [17] solving have leveraged large pre-trainedlanguage models such as BERT. MWP-BERT , a numeracy-augmented model, addresses a signifi-cant challenge in MWP solving—efficient numerical reasoning. Traditional models struggle withrepresenting numbers accurately, often substituting real numbers with symbolic placeholders, whichoverlooks crucial numerical properties. MWP-BERT introduces a novel pre-training schema thatincorporates numerical reasoning into the language model. It enhances the model’s ability to general-ize over arithmetic and algebraic problems by embedding numeracy information such as magnitudeand number types into contextualized word representations. This approach has outperformed manyconventional MWP solvers, especially in arithmetic MWP datasets like Math23k [ 32] and MathQA[1], by accurately capturing the logic of number manipulation within word problems.7B.4 HINTSTheHINTS [35] system emphasizes providing incremental and context-sensitive support to studentsworking on math problems. This tutor-like system offers "hints" that gradually lead students towardthe correct solution without directly giving them the answer. This method allows students to developtheir problem-solving strategies while avoiding the frustration of being stuck. The system alsorecords students’ problem-solving paths to provide personalized feedback, allowing for the analysisof specific challenges faced during different stages of the solution process.B.5 MathCalMathCAL [4] is another example of a computer-assisted learning system designed to supportmathematical problem-solving. It operates by dividing the problem-solving process into four distinctstages: understanding the problem, making a plan, executing the plan, and reviewing the solution.Each stage provides specific assistance tailored to the learner’s needs, with tools such as schemarepresentations and solution trees helping students visualize and articulate their solution process .The empirical evaluation of MathCAL demonstrated its effectiveness in improving problem-solvingperformance, particularly among students with lower baseline abilities. By breaking down complexproblems into manageable steps, MathCAL reduces cognitive load and promotes deeper understandingof mathematical concepts.C DatasetsC.1 Schema Based Instruction DatasetThe Schema-Based Instruction (SBI) Dataset consists of a total of 360 math word problems (MWPs),categorized based on their underlying schemas. These problems are distributed equally across sixdistinct categories, with approximately 60 problems in each sub-category (as seen in figure 4) . Thecategories include Additive Change, Additive Difference, Additive Total, Multiplicative Comparison,Multiplicative Equal Groups, and Multiplicative Ratios/Proportions.Each problem is labeled with a schema (Additive or Multiplicative) and its corresponding sub-category, allowing the system to learn and predict the appropriate schema for a given problem. Thisbalanced distribution ensures that the model receives equal representation from each schema type,preventing overfitting to any specific category and promoting generalization across diverse problemtypes.This dataset is used to learn the relationship between a given MWP and its corresponding schema. Byusing this dataset, a schema classifier is trained to accurately predict the appropriate schema for eachproblem. This classifier plays a crucial role in facilitating schema-based retrieval and guiding thegeneration of step-by-step solutions, ensuring clarity and structure in the problem-solving process.C.2 GSM8K DatasetGSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problemscreated by human problem writers. The dataset is segmented into 7.5K training problems and 1Ktest problems. These problems take between 2 and 8 steps to solve, and solutions primarily involveperforming a sequence of elementary calculations using basic arithmetic operations to reach the finalanswer. A bright middle school student should be able to solve every problem. It can be used formulti-step mathematical reasoning [6].D Training Details and ImplementationThe code for this implementation can be found on GitHub: https://github.com/pdx97/SBI-RAG_Neurips2024 .D.1 Dataset and PreprocessingThe dataset used for training consists of schema-based instruction (SBI) problems, where eachproblem is labeled with a schema and a sub-category. These labels are combined into a single label8Figure 4: Overview of SBI DatasetFigure 5: GSM8K dataset example problems9for multi-class classification. The dataset is split into training and testing sets, with 75% of the dataused for training and 25% for testing.The dataset is tokenized using the distilbert-base-uncased tokenizer from Hugging Face, andthe text is converted into input tensors consisting of input_ids andattention_masks . Labelencoding is applied to the combined schema and sub-category labels using LabelEncoder fromsklearn . The resulting dataset is then formatted for PyTorch, with columns for input IDs, attentionmasks, and labels.D.2 Model Architecture for Schema ClassifierWe used the DistilBERT model for schema classification, loaded from Hugging Face’stransformers library. The model is pre-trained and fine-tuned on our custom SBI dataset. Thenumber of output labels is set to the number of unique schema and sub-category combinations in thedataset.D.3 Training ProcessThe training was conducted using the Trainer API from Hugging Face [ 13] with the configurationshown in Table 1.Hyperparameter ValueLearning rate 2×10−5Batch size 16Number of epochs 20Optimizer AdamW with weight decay of 0.01Evaluation strategy Model evaluation at the end of each epochLogging Evaluation results logged every 10 stepsTable 1: Training Hyperparameters for Schema-Based ClassifierD.4 Evaluation and ResultsAs seen in Figures 4 and 5, we evaluated the schema classifier using accuracy, precision, recall,F1-scores, and a confusion matrix. The classifier achieved an overall accuracy of 97%. The trainingand validation losses show consistent convergence, indicating effective learning without overfitting,ensuring reliable schema predictions across various problem types.D.5 Context Retrieval ImplementationOnce the schema and sub-category are predicted, the next step involves retrieving relevant contextfor solving the problem. The document source is loaded from a URL ( https://iris.peabody.vanderbilt.edu/module/math/cresource/q2/p06/ ) [27] using the WebBaseLoader . Theloaded text is split into chunks of 1000 characters with an overlap of 200 characters to ensurecompleteness of context during retrieval.D.6 Vector Store and EmbeddingsWe use Ollama embeddings to create document embeddings for context retrieval. The embeddingsare stored in a Chroma vector store. During the retrieval process, the problem and schema-specificprompt are embedded, and a similarity search is performed to retrieve the most relevant documents.Re-ranking of documents is performed based on cosine similarity between the embedded questionand the retrieved documents.D.7 Response Generation with Ollama Llama 3.1For generating a solution to the problem, we pass the schema, sub-category, and retrieved context tothe Llama 3.1 model. The prompt is constructed in a structured format, and the model generates adetailed solution based on the provided context.10D.8 Advanced Re-ranking for Document RetrievalAn advanced re-ranking mechanism is implemented using cosine similarity between question embed-dings and document embeddings. This ensures that the most contextually relevant documents areused for generating the final answer.E Statistical SignificanceTo test the statistical significance, a paired sample t-test, also known as a dependent sample t-test, wasconducted to compare the reasoning performance of SBI-RAG with two language models, GPT 3.5Turbo and GPT 4.0. The paired sample t-test is important because it compares the means of two setsof measurements taken from the same subjects or related units. In this case, the same set of problemswas evaluated using both SBI-RAG and the GPT models, meaning the samples are dependent. Byusing this approach, we can account for the relationship between the scores, reducing variability andmaking the comparison more accurate.The results showed that SBI-RAG reasoning scores were statistically higher than both GPT 3.5 Turboand GPT 4.0. For the comparison with GPT 3.5 Turbo, the t-test gave a t-statistic of 5.87 and ap-value of 0.00012, which is much lower than the 0.05 threshold. Similarly, for the comparison withGPT 4.0, the t-test gave a t-statistic of 3.69 and a p-value of 0.00248, also well below 0.05.These results confirm that SBI-RAG outperforms both GPT 3.5 Turbo and GPT 4.0 in reasoning tasks.With p-values much lower than 0.05, we can confidently reject the null hypothesis, which assumedno difference in performance, and conclude that SBI-RAG consistently achieves higher reasoningscores.F Reasoning Score Metric and ImplementationReasoning Score MetricFigure 6: Reasoning Score SBI-RAG vs GPT-4Figure 7: Reasoning Score SBI-RAG vs GPT3.5 TurboThe reasoning score is calculated by checking both the presence of key steps and the logical flowbetween them. We first define a set of key steps and concepts relevant to solving the problem, such asoperations ("+", "*", "-"), schema-related terms ("Additive", "Multiplicative"), and problem-specificconcepts ("ratios", "proportions"). We then count how many of these key steps appear in the generatedresponse. In addition to counting the presence of steps, we calculate a delta score, which checks thelogical flow between steps.For example, consider the problem:Each bird eats 12 beetles per day, each snake eats 3 birds per day, and each jaguareats 5 snakes per day. If there are 6 jaguars in a forest, how many beetles are eateneach day?11In this problem, the key steps include calculating how many snakes are eaten by the jaguars, howmany birds are eaten by the snakes, and how many beetles are eaten by the birds. The delta scoreevaluates whether the transitions between these entities are correctly captured in the reasoning, forexample:• The transition from "jaguars" to "snakes" (i.e., each jaguar eats 5 snakes per day).• The transition from "snakes" to "birds" (i.e., each snake eats 3 birds per day).• The transition from "birds" to "beetles" (i.e., each bird eats 12 beetles per day).The final reasoning score is computed by combining the step-matching score with the delta scoreto account for both completeness and logical progression. The score is further adjusted by a clarityfactor, which depends on the length and clarity of the explanation. A higher clarity factor indicates amore detailed and structured response. For instance, in this problem, a well-reasoned response wouldclearly explain how the total number of snakes eaten by jaguars leads to the calculation of the totalnumber of beetles eaten by birds.G LLM-as-a-Judge Results and Task DefinitionLLM-as-a-Judge is a reference-free evaluation method that leverages large language models (LLMs)to score and evaluate the quality of generated responses. This approach is particularly useful whenhuman evaluation is costly or impractical to scale. By directly prompting an LLM to assess thereasoning, clarity, and structure of an answer, we can measure how well the response aligns withhuman preferences. Our study follows this methodology to evaluate the quality of schema-basedreasoning responses in math word problems (MWPs).Task Design : We designed the evaluation prompt based on guidelines from Hugging Face’s LLM-as-a-Judge model (as seen in Figure 8). Our customized prompt asked the LLM to act as a judge andevaluate responses from an educational perspective. The system was tasked to rate each responseon a scale of 0 to 10, where 0 meant the response was not helpful at all, and 10 meant the responsewas complete and thoroughly addressed the question. The rating also considered the clarity andeducational effectiveness of the responses.The task was defined using the following prompt template:You will be given a user_question and system_answer couple.Your task is to provide a ’total rating’ scoring how well the system_answeranswers the user concerns expressed in the user_question.Give your answer as a float on a scale of 0 to 10, where 0 means that the system_answeris not helpful at all, and 10 means that the answer completely and helpfully addressesthe question.Provide your feedback as follows:Feedback:::Total rating: (your rating, as a float between 0 and 10)Now here are the question and answer.Question: {question}Answer: {answer}Feedback:::Total rating:Evaluation Results : As shown in Figure 11 and Figure 12, two responses were evaluated based on amath word problem: "James spends 40 years teaching. His partner has been teaching for 10 yearsless. How long is their combined experience?" .Response 1 provided a solution that followed a schema-based approach, utilizing the Additiveschema and the Total sub-category. It offered a clear and detailed step-by-step explanation, guiding12Figure 8: LLM-as-a-Judge Task Instructions for Evaluating Responses.the reader through each part of the process. The response emphasized the use of schema-drivenreasoning to help break down the problem and apply the correct operations, making it highly suitablefor educational purposes. The structured reasoning and clarity of explanation were acknowledgedby the judge, who rated this response highly, giving it a score of 9.5/10 for its thoroughness andeducational value.Response 2 also arrived at the correct solution but lacked the same depth of explanation. It skippedseveral intermediate steps and did not provide a schema-based breakdown of the problem, making itless effective from an educational standpoint. While it was concise and accurate, it did not fully guidethe learner through the reasoning process, which reduced its value for students needing additionalsupport. Consequently, the judge assigned this response a score of 8.5/10 , noting that while it wascorrect, it could benefit from more detailed reasoning and a clearer breakdown of steps.Additionally, For a given Question (As seen in Figure9), our evaluation using the LLM-as-a-Judgeapproach assessed responses based on three key sub-metrics: Clarity ,Logical Progression , andCompleteness as shown in Figure 10. The Total rating was then calculated based on these sub-metrics,which are defined as follows:13Figure 9: Sample question with Response 1 from SBI-RAG and Response 2 from GPT-4•Clarity: Assesses how clearly the response conveys the solution. A high clarity scoreindicates ease of understanding, appropriate language use, and avoidance of unnecessaryjargon or complexity that could confuse the reader.•Logical Progression: Evaluates the logical flow of the response. A high score here indicatesthat each step follows naturally from the previous one, forming a coherent sequence thateffectively guides the reader through the problem-solving process.•Completeness: Measures whether the response fully addresses all aspects of the question.A complete response includes all necessary steps, explanations, and justifications requiredto reach the solution.Response Clarity (0-10) Logical Progression (0-10) Completeness (0-10) Total Rating (0-10)Response 1 9.0 9.0 9.0 9.0Response 2 8.0 7.5 7.0 7.5Table 2: Evaluation Scores for Response 1 and Response 2The judge was able to provide feedback explaining why each response received its respective score(asseen in Fig 13 . This structured feedback highlighted the strengths of schema-based reasoning infostering better understanding and logical problem-solving, especially when compared to answersthat merely focused on arriving at the correct solution without explaining intermediate steps.14Figure 10: Overall Scores by LLM-as-a-JudgeFigure 11: Response 1 is using SBI-RAG and Response 2 is using GPT 3.5 turbo.15Figure 12: Response 1 and Response 2 evaluationOur results demonstrate the effectiveness of using LLM-as-a-Judge for assessing educational content.The schema-driven responses generated by our system scored higher in terms of educational effec-tiveness and reasoning quality, emphasizing the potential of schema-based approaches in improvinglearning outcomes in math word problems.Figure 13: Feedback of Response 1 and Response 2By utilizing this method, we can approximate human preferences and make informed decisions abouthow schema-based approaches can enhance student learning experiences in classrooms. Future workcould extend this by incorporating human feedback and further refining the evaluation process.16NeurIPS Paper ChecklistThe checklist is designed to encourage best practices for responsible machine learning research,addressing issues of reproducibility, transparency, research ethics, and societal impact. Do not removethe checklist: The papers not including the checklist will be desk rejected. The checklist shouldfollow the references and follow the (optional) supplemental material. The checklist does NOT counttowards the page limit.Please read the checklist guidelines carefully for information on how to answer these questions. Foreach question in the checklist:• You should answer [Yes] , [No] , or [NA] .•[NA] means either that the question is Not Applicable for that particular paper or therelevant information is Not Available.• Please provide a short (1–2 sentence) justification right after your answer (even for NA).The checklist answers are an integral part of your paper submission. They are visible to thereviewers, area chairs, senior area chairs, and ethics reviewers. You will be asked to also include it(after eventual revisions) with the final version of your paper, and its final version will be publishedwith the paper.The reviewers of your paper will be asked to use the checklist as one of the factors in their evaluation.While "[Yes] " is generally preferable to "[No] ", it is perfectly acceptable to answer "[No] " provided aproper justification is given (e.g., "error bars are not reported because it would be too computationallyexpensive" or "we were unable to find the license for the dataset we used"). In general, answering"[No] " or "[NA] " is not grounds for rejection. While the questions are phrased in a binary way, weacknowledge that the true answer is often more nuanced, so please just use your best judgment andwrite a justification to elaborate. All supporting evidence can appear either in the main paper or thesupplemental material, provided in appendix. If you answer [Yes] to a question, in the justificationplease point to the section(s) where related material for the question can be found.IMPORTANT, please:•Delete this instruction block, but keep the section heading “NeurIPS paper checklist" ,•Keep the checklist subsection headings, questions/answers and guidelines below.•Do not modify the questions and only use the provided macros for your answers .1.ClaimsQuestion: Do the main claims made in the abstract and introduction accurately reflect thepaper’s contributions and scope?Answer: [Yes]Justification: Our goal is to understand how schema-based instruction can lead to moreeffective learning in students, and the remainder of the paper expands on that theme. We donot claim to have direct evidence with students, but do experiments with automated metrics.Guidelines:•The answer NA means that the abstract and introduction do not include the claimsmade in the paper.•The abstract and/or introduction should clearly state the claims made, including thecontributions made in the paper and important assumptions and limitations. A No orNA answer to this question will not be perceived well by the reviewers.•The claims made should match theoretical and experimental results, and reflect howmuch the results can be expected to generalize to other settings.•It is fine to include aspirational goals as motivation as long as it is clear that these goalsare not attained by the paper.2.LimitationsQuestion: Does the paper discuss the limitations of the work performed by the authors?Answer: [Yes]17Justification: There is a discussion of limitations at the end of the paper.Guidelines:•The answer NA means that the paper has no limitation while the answer No means thatthe paper has limitations, but those are not discussed in the paper.• The authors are encouraged to create a separate "Limitations" section in their paper.•The paper should point out any strong assumptions and how robust the results are toviolations of these assumptions (e.g., independence assumptions, noiseless settings,model well-specification, asymptotic approximations only holding locally). The authorsshould reflect on how these assumptions might be violated in practice and what theimplications would be.•The authors should reflect on the scope of the claims made, e.g., if the approach wasonly tested on a few datasets or with a few runs. In general, empirical results oftendepend on implicit assumptions, which should be articulated.•The authors should reflect on the factors that influence the performance of the approach.For example, a facial recognition algorithm may perform poorly when image resolutionis low or images are taken in low lighting. Or a speech-to-text system might not beused reliably to provide closed captions for online lectures because it fails to handletechnical jargon.•The authors should discuss the computational efficiency of the proposed algorithmsand how they scale with dataset size.•If applicable, the authors should discuss possible limitations of their approach toaddress problems of privacy and fairness.•While the authors might fear that complete honesty about limitations might be used byreviewers as grounds for rejection, a worse outcome might be that reviewers discoverlimitations that aren’t acknowledged in the paper. The authors should use their bestjudgment and recognize that individual actions in favor of transparency play an impor-tant role in developing norms that preserve the integrity of the community. Reviewerswill be specifically instructed to not penalize honesty concerning limitations.3.Theory Assumptions and ProofsQuestion: For each theoretical result, does the paper provide the full set of assumptions anda complete (and correct) proof?Answer: [NA]Justification: This is an empirical paper with no theoretical components.Guidelines:• The answer NA means that the paper does not include theoretical results.•All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.•All assumptions should be clearly stated or referenced in the statement of any theorems.•The proofs can either appear in the main paper or the supplemental material, but ifthey appear in the supplemental material, the authors are encouraged to provide a shortproof sketch to provide intuition.•Inversely, any informal proof provided in the core of the paper should be complementedby formal proofs provided in appendix or supplemental material.• Theorems and Lemmas that the proof relies upon should be properly referenced.4.Experimental Result ReproducibilityQuestion: Does the paper fully disclose all the information needed to reproduce the main ex-perimental results of the paper to the extent that it affects the main claims and/or conclusionsof the paper (regardless of whether the code and data are provided or not)?Answer: [Yes]Justification: There is an evaluation section in the main paper, and many more details areprovided in the appendix. Together these make it possible to reproduce the results.Guidelines:18• The answer NA means that the paper does not include experiments.•If the paper includes experiments, a No answer to this question will not be perceivedwell by the reviewers: Making the paper reproducible is important, regardless ofwhether the code and data are provided or not.•If the contribution is a dataset and/or model, the authors should describe the steps takento make their results reproducible or verifiable.•Depending on the contribution, reproducibility can be accomplished in various ways.For example, if the contribution is a novel architecture, describing the architecture fullymight suffice, or if the contribution is a specific model and empirical evaluation, it maybe necessary to either make it possible for others to replicate the model with the samedataset, or provide access to the model. In general. releasing code and data is oftenone good way to accomplish this, but reproducibility can also be provided via detailedinstructions for how to replicate the results, access to a hosted model (e.g., in the caseof a large language model), releasing of a model checkpoint, or other means that areappropriate to the research performed.•While NeurIPS does not require releasing code, the conference does require all submis-sions to provide some reasonable avenue for reproducibility, which may depend on thenature of the contribution. For example(a)If the contribution is primarily a new algorithm, the paper should make it clear howto reproduce that algorithm.(b)If the contribution is primarily a new model architecture, the paper should describethe architecture clearly and fully.(c)If the contribution is a new model (e.g., a large language model), then there shouldeither be a way to access this model for reproducing the results or a way to reproducethe model (e.g., with an open-source dataset or instructions for how to constructthe dataset).(d)We recognize that reproducibility may be tricky in some cases, in which caseauthors are welcome to describe the particular way they provide for reproducibility.In the case of closed-source models, it may be that access to the model is limited insome way (e.g., to registered users), but it should be possible for other researchersto have some path to reproducing or verifying the results.5.Open access to data and codeQuestion: Does the paper provide open access to the data and code, with sufficient instruc-tions to faithfully reproduce the main experimental results, as described in supplementalmaterial?Answer: [Yes]Justification: The primary datasets, GSM8K, is open source. The data and models used forthe schema classifier will be made available via GitHub if the paper is accepted. We’ll alsomake the code available there.Guidelines:• The answer NA means that paper does not include experiments requiring code.•Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•While we encourage the release of code and data, we understand that this might not bepossible, so “No” is an acceptable answer. Papers cannot be rejected simply for notincluding code, unless this is central to the contribution (e.g., for a new open-sourcebenchmark).•The instructions should contain the exact command and environment needed to run toreproduce the results. See the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•The authors should provide instructions on data access and preparation, including howto access the raw data, preprocessed data, intermediate data, and generated data, etc.•The authors should provide scripts to reproduce all experimental results for the newproposed method and baselines. If only a subset of experiments are reproducible, theyshould state which ones are omitted from the script and why.19•At submission time, to preserve anonymity, the authors should release anonymizedversions (if applicable).•Providing as much information as possible in supplemental material (appended to thepaper) is recommended, but including URLs to data and code is permitted.6.Experimental Setting/DetailsQuestion: Does the paper specify all the training and test details (e.g., data splits, hyper-parameters, how they were chosen, type of optimizer, etc.) necessary to understand theresults?Answer: [Yes]Justification: All of those details are in the appendix.Guidelines:• The answer NA means that the paper does not include experiments.•The experimental setting should be presented in the core of the paper to a level of detailthat is necessary to appreciate the results and make sense of them.•The full details can be provided either with the code, in appendix, or as supplementalmaterial.7.Experiment Statistical SignificanceQuestion: Does the paper report error bars suitably and correctly defined or other appropriateinformation about the statistical significance of the experiments?Answer: [Yes]Justification: We ran tests of significance on the metrics comparing SBI-RAG with GPT 3.5and 4.0Guidelines:• The answer NA means that the paper does not include experiments.•The authors should answer "Yes" if the results are accompanied by error bars, confi-dence intervals, or statistical significance tests, at least for the experiments that supportthe main claims of the paper.•The factors of variability that the error bars are capturing should be clearly stated (forexample, train/test split, initialization, random drawing of some parameter, or overallrun with given experimental conditions).•The method for calculating the error bars should be explained (closed form formula,call to a library function, bootstrap, etc.)• The assumptions made should be given (e.g., Normally distributed errors).•It should be clear whether the error bar is the standard deviation or the standard errorof the mean.•It is OK to report 1-sigma error bars, but one should state it. The authors shouldpreferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesisof Normality of errors is not verified.•For asymmetric distributions, the authors should be careful not to show in tables orfigures symmetric error bars that would yield results that are out of range (e.g. negativeerror rates).•If error bars are reported in tables or plots, The authors should explain in the text howthey were calculated and reference the corresponding figures or tables in the text.8.Experiments Compute ResourcesQuestion: For each experiment, does the paper provide sufficient information on the com-puter resources (type of compute workers, memory, time of execution) needed to reproducethe experiments?Answer: [Yes]Justification: The paper describes the machine on which the experiments were run withapproximate timingGuidelines:20• The answer NA means that the paper does not include experiments.•The paper should indicate the type of compute workers CPU or GPU, internal cluster,or cloud provider, including relevant memory and storage.•The paper should provide the amount of compute required for each of the individualexperimental runs as well as estimate the total compute.•The paper should disclose whether the full research project required more computethan the experiments reported in the paper (e.g., preliminary or failed experiments thatdidn’t make it into the paper).9.Code Of EthicsQuestion: Does the research conducted in the paper conform, in every respect, with theNeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ?Answer: [Yes]Justification: We’ve read the code of ethics and believe that we are conformanGuidelines:•The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.•If the authors answer No, they should explain the special circumstances that require adeviation from the Code of Ethics.•The authors should make sure to preserve anonymity (e.g., if there is a special consid-eration due to laws or regulations in their jurisdiction).10.Broader ImpactsQuestion: Does the paper discuss both potential positive societal impacts and negativesocietal impacts of the work performed?Answer: [Yes]Justification: The positive impacts are clearly discussed, and we see no negative impacts ofhaving LLMs explain their reasoning steps for math word problems.Guidelines:• The answer NA means that there is no societal impact of the work performed.•If the authors answer NA or No, they should explain why their work has no societalimpact or why the paper does not address societal impact.•Examples of negative societal impacts include potential malicious or unintended uses(e.g., disinformation, generating fake profiles, surveillance), fairness considerations(e.g., deployment of technologies that could make decisions that unfairly impact specificgroups), privacy considerations, and security considerations.•The conference expects that many papers will be foundational research and not tiedto particular applications, let alone deployments. However, if there is a direct path toany negative applications, the authors should point it out. For example, it is legitimateto point out that an improvement in the quality of generative models could be used togenerate deepfakes for disinformation. On the other hand, it is not needed to point outthat a generic algorithm for optimizing neural networks could enable people to trainmodels that generate Deepfakes faster.•The authors should consider possible harms that could arise when the technology isbeing used as intended and functioning correctly, harms that could arise when thetechnology is being used as intended but gives incorrect results, and harms followingfrom (intentional or unintentional) misuse of the technology.•If there are negative societal impacts, the authors could also discuss possible mitigationstrategies (e.g., gated release of models, providing defenses in addition to attacks,mechanisms for monitoring misuse, mechanisms to monitor how a system learns fromfeedback over time, improving the efficiency and accessibility of ML).11.SafeguardsQuestion: Does the paper describe safeguards that have been put in place for responsiblerelease of data or models that have a high risk for misuse (e.g., pretrained language models,image generators, or scraped datasets)?21Answer: [NA]Justification: The only model is one that classifies math word problems according to theirrelevant schemas. We can see no opportuities for miuse of this model.Guidelines:• The answer NA means that the paper poses no such risks.•Released models that have a high risk for misuse or dual-use should be released withnecessary safeguards to allow for controlled use of the model, for example by requiringthat users adhere to usage guidelines or restrictions to access the model or implementingsafety filters.•Datasets that have been scraped from the Internet could pose safety risks. The authorsshould describe how they avoided releasing unsafe images.•We recognize that providing effective safeguards is challenging, and many papers donot require this, but we encourage authors to take this into account and make a bestfaith effort.12.Licenses for existing assetsQuestion: Are the creators or original owners of assets (e.g., code, data, models), used inthe paper, properly credited and are the license and terms of use explicitly mentioned andproperly respected?Answer: [Yes]Justification: All open source models and data are clearly cited.Guidelines:• The answer NA means that the paper does not use existing assets.• The authors should cite the original paper that produced the code package or dataset.•The authors should state which version of the asset is used and, if possible, include aURL.• The name of the license (e.g., CC-BY 4.0) should be included for each asset.•For scraped data from a particular source (e.g., website), the copyright and terms ofservice of that source should be provided.•If assets are released, the license, copyright information, and terms of use in thepackage should be provided. For popular datasets, paperswithcode.com/datasetshas curated licenses for some datasets. Their licensing guide can help determine thelicense of a dataset.•For existing datasets that are re-packaged, both the original license and the license ofthe derived asset (if it has changed) should be provided.•If this information is not available online, the authors are encouraged to reach out tothe asset’s creators.13.New AssetsQuestion: Are new assets introduced in the paper well documented and is the documentationprovided alongside the assets?Answer: [Yes]Justification: The small dataset used to train the schema classifier is described clearly in theappendix.Guidelines:• The answer NA means that the paper does not release new assets.•Researchers should communicate the details of the dataset/code/model as part of theirsubmissions via structured templates. This includes details about training, license,limitations, etc.•The paper should discuss whether and how consent was obtained from people whoseasset is used.•At submission time, remember to anonymize your assets (if applicable). You can eithercreate an anonymized URL or include an anonymized zip file.2214.Crowdsourcing and Research with Human SubjectsQuestion: For crowdsourcing experiments and research with human subjects, does the paperinclude the full text of instructions given to participants and screenshots, if applicable, aswell as details about compensation (if any)?Answer: [NA]Justification: This paper did not involve human subjects of any kind.Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Including this information in the supplemental material is fine, but if the main contribu-tion of the paper involves human subjects, then as much detail as possible should beincluded in the main paper.•According to the NeurIPS Code of Ethics, workers involved in data collection, curation,or other labor should be paid at least the minimum wage in the country of the datacollector.15.Institutional Review Board (IRB) Approvals or Equivalent for Research with HumanSubjectsQuestion: Does the paper describe potential risks incurred by study participants, whethersuch risks were disclosed to the subjects, and whether Institutional Review Board (IRB)approvals (or an equivalent approval/review based on the requirements of your country orinstitution) were obtained?Answer: [NA]Justification: This paper did not involve human subjects of any kind.Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Depending on the country in which research is conducted, IRB approval (or equivalent)may be required for any human subjects research. If you obtained IRB approval, youshould clearly state this in the paper.•We recognize that the procedures for this may vary significantly between institutionsand locations, and we expect authors to adhere to the NeurIPS Code of Ethics and theguidelines for their institution.•For initial submissions, do not include any information that would break anonymity (ifapplicable), such as the institution conducting the review.23 |
f6263OEVBv | miniCTX : Neural Theorem Proving with(Long-)ContextsJiewen Hu Thomas Zhu Sean WelleckCarnegie Mellon UniversityAbstractReal-world formal theorem proving often depends on a wealth of context, includingdefinitions, lemmas, comments, file structure, and other information. We introduceminiCTX , which tests a model’s ability to prove formal mathematical theorems thatdepend on new context not encountered in training. miniCTX contains theoremssourced from real Lean projects and textbooks, each associated with a context thatcan span tens of thousands of tokens. Models are tasked with proving a theoremgiven access to code from the theorem’s repository, which contains context thatis needed for the proof. As a baseline for miniCTX , we tested fine-tuning andprompting methods that condition theorem proving on preceding context. Bothapproaches substantially outperform traditional methods that rely solely on stateinformation. We found that this ability to use context is not captured by previousbenchmarks such as miniF2F . Alongside miniCTX , we offer NTP-TOOLKIT forautomatically extracting and annotating theorem proving data, making it easy toadd new projects into miniCTX to ensure that contexts are not seen during training.miniCTX offers a challenging and realistic evaluation of neural theorem provers.1 IntroductionFormal theorem proving in interactive theorem provers (ITPs) provides a testbed for evaluating thereasoning capabilities of large language models (LLMs). Theorem proving capabilities can thendirectly translate to automation for mathematicians, such as via tools that complete or formalizeproofs [ 1–4]. However, despite their promise, we see a gap between the evaluation of current languagemodel-based provers and the complexity of real-world theorem proving.Our motivating observation is that theorems and proofs depend on various forms of context , suchas newly-defined definitions and lemmas. For instance, to prove results about a square, one mightfirst formalize a definition of a rectangle, prove some results about rectangles, then specialize themto a newly-defined square [ 5] (Figure 1). However, existing methods for training and evaluatingLLM-based theorem provers often fail to incorporate the full range of contextual information availablein real-world projects. For example, benchmarks often focus on proving standalone competitionproblems (e.g., miniF2F [6]) or theorems from a library that the model has trained on (e.g., Mathlib [ 7,8]), and state-of-the-art LLM-based provers are trained to accept only a proof state as input, makingthem unaware of new theorems and definitions [ 9–11]. While some existing work, including premiseselection techniques [ 12,13,8] and datasets like CoqGym [ 14], have explored theorem proving basedon information beyond the current state, they often focus on a subset of the available information.They primarily focus on providing relevant premises, such as lemmas, to assist in proof construction.Building on these foundations, we propose miniCTX : a benchmark that seeks to expand the scope ofcontext used in theorem proving. We extend beyond traditional premise selection explored in priorbenchmarks (e.g., [ 8,14]) by incorporating a more comprehensive set of contextual elements. Thisincludes premises, but also prior proofs, comments, notation, and structural components like importsand declarations. By doing so, miniCTX aims to drive the development of methods that understand38th Conference on Neural Information Processing Systems (NeurIPS 2024).Table 1: Comparison of theorem proving benchmarks across several key features.Benchmark Language Premise Full Context Multi-source Temporal SplitminiF2F [6] Multiple ✗ ✗ ✗ ✗ProofNet [15] Lean ✗ ✓ ✓ ✗LeanDojo [8] Lean ✓ ✗ ✗ ✗LeanStep [7] Lean ✓ ✗ ✓ ✗CoqGym [14] Coq ✓ ✗ ✓ ✗PISA [16] Isabelle ✗ ✗ ✓ ✗miniCTX (Ours) Lean ✓ ✓ ✓ ✓and work with context that occurs in complex, real-world theorem proving tasks. Additionally,considering the common use of pre-trained language models we mitigate potential data contaminationby continually and automatically updating miniCTX with new Lean projects, so that evaluatedtheorems are not seen during training. Our key contributions are:miniCTX Benchmark: We introduce miniCTX , the first benchmark designed specifically to evaluatetheorem proving in real-world settings where proofs depend on in-file definitions, lemmas, andcontext from formal projects. miniCTX presents a unique challenge by requiring models to reasonover long contexts and handle dependencies that arise in real-world theorem proving tasks.NTP-TOOLKIT :To facilitate the automatic updating of miniCTX , we developed the NTP-TOOLKIT ,which automatically extracts relevant theorems and contexts from Lean projects. Additionally, weprovide a Lean REPL wrapper that enables simpler evaluation on miniCTX .Baseline Evaluations: We evaluate miniCTX on several existing baseline models, including differentfine-tuning and prompting strategies, as well as premise selection. We also propose file-tuning, astrong baseline method for training models using full file contexts, where both the theorem statementsand their surrounding context are provided during training. This approach establishes a robustbaseline for future work on context-dependent theorem proving.Figure 1: Many state of the art provers are trained on a static dataset of theorems and proofs, thenevaluated on standalone problems such as competition problems (left). We argue that neural proversmust also operate in the realistic context-dependent setting, in which results depend on workingwith new mathematical objects and their facts, notations, and the structural elements of the project(imports, variables, etc.) (right).2 Theorem proving with contextFormal theorem proving involves two stages: defining mathematical objects and facts relevant to thedesired result, then stating and proving the result itself. Many current language model-based proversfocus on the proving process and are trained on static datasets that only use a fixed set of definitions.As a result, they lack the ability to recognize new definitions or lemmas at test time (Figure 1).2Context-dependent proving. We study context-dependent theorem proving , where the goal is fora model to generate proofs yfor new theorems x, based on a context cthat includes backgroundinformation such as definitions, lemmas, or natural language comments. Formally, the problem ismaximize ME(x,c)∼pEy∼M(·|x,c)v(x, c, y ), (1)where (x, c)∼pdenotes a (theorem, context) pair from a context distribution p,Mis a model thatproduces a proof y, andvreturns 1 if the proof is correct and 0 otherwise.We choose Lean [ 17] as the verifier v, because of the large body of recent theorems in Lean that canbe used as evaluation data, and the abundance of proving methods in Lean that we use as baselines.We treat a Lean repository as the distribution p. Each context cis a subset of the repository, includingnew definitions, lemmas, notations, imports, and comments that are relevant to the theorem.3miniCTX : a benchmark for theorem proving with contextWe develop miniCTX , a Lean 4 theorem proving benchmark of theorems that depend on newly-defined lemmas, definitions, and proofs from within a project. miniCTX is currently based on 376theorems from four projects: (1) Prime Number Theorem ( Prime ) [18], (2) Polynomial Freman-Ruzsa Conjecture ( PFR ) [19], (3) an introductory text on theorem proving ( HTPI ) [20], (4) recentresults from the standard mathematical library ( Mathlib ) [21] (motivation and details in §D), (5) highenergy physics formalization in HepLean ( HEP ) [22], and (6) scientific computing formalizations(SciLean ) [23].Each theorem in miniCTX consists of the theorem statement, preceding file contents up to the theoremstatement, and metadata, in JSON (see §E.1).1. Theorem statement,2. Preceding file contents up to the theorem statement,3.Metadata, including file name, commit and time which the theorem was added, positionand length of the theorem and proof, and the number and types of premises used in thehuman-written proof.Using our benchmark, users can easily reconstruct the complete context for each theorem, includingboth in-file and cross-file context. The in-file context is provided directly by preceding file contents,while the cross-file context can be reconstructed using the metadata, which includes information onimported modules. We open-source the dataset and evaluation code.3.1 Key features and challengesminiCTX introduces several key features that distinguish it from other theorem proving benchmarks,addressing challenges that have not been tackled by previous benchmarks:Real-world theorem proving. Unlike popular benchmarks (e.g., miniF2F [ 6], ProofNet [ 15],FIMO [ 24]) that focus on isolated competition problems, real-world research-level theorem proving isheavily dependent on rich mathematical contexts. Therefore, miniCTX includes real-world, complextheorems from a variety of ongoing Lean projects, such as Prime Number Theorem (Prime) andPolynomial Freiman-Ruzsa Conjecture (PFR). They rigorously test a model’s ability in real-worldformalization projects. This diversity contrasts with the LeanDojo benchmark [ 8], which focusessolely on Mathlib, enabling miniCTX to better test a model’s generalization in different settings.Contextual evaluation. Proving a theorem often depends on new definitions, lemmas, or othercontextual information, which a model may not have seen during training. miniCTX includes theoremsalong with this new context. During evaluation, the model is expected to leverage the provided newcontext to help prove the theorem.Beyond previous datasets like LeanDojo [ 8] and CoqGym [ 14], which include relevant definitions andtheorems, miniCTX includes additional useful contextual information that may make some theoremseasier to prove compared to standalone theorems. For instance, Lean source code can have naturallanguage comments that may help constrain the space of possible proofs. Moreover, some proofswithin a file often have analogous patterns or structure, which may make subsequent theorems easierto prove (see §E.2). These additional forms of context occur in the real-world process of formalization,yet their use in neural theorem proving is underexplored.3Automatically updating the benchmark. Most modern neural theorem provers use a large languagemodel as a backbone. Therefore, it is crucial to ensure that evaluation content is not seen during(pre-)training, a problem not addressed by previous benchmarks. miniCTX ’s format is amenable toperiodically updating the benchmark with new projects to ensure that proofs are not seen by languagemodels trained prior to a particular date. Future periodic updates will be automatically extracted fromnew Lean projects and commits using NTP-TOOLKIT (§??). See Figure 2 for an illustration.GPT-4o, DeepSeek training cutoff miniCTX compiled with new Lean theorems Evaluate using GPT-4o, DeepSeek Future LLMs training cutoff miniCTX-v2 compiled with newer Lean theorems Evaluate using future LLMs ...Previous benchmarks compiled with Lean theorems (Mathlib, miniF2F) 2023 2024 2025 New Lean theorems New Lean theorems Lean theorems Figure 2: miniCTX is automatically updated with Leanprojects to stay ahead of LLM training cutoff dates, makingit a suitable benchmark for real-world theorem proving forpre-trained models. Figure 3: State-tactic vs. file tuning.4 Experiments4.1 BaselinesWe evaluate several baselines on miniCTX , demonstrating the importance of context in real-worldtheorem proving. Our investigation reveals several open challenges that we discuss in §A. See §B fora detailed description of the motivation, baselines, data extraction, and evaluation setup, and §H forfull results and more detailed analysis. The baselines are as follows:Prompting LLMs. We first test the ability of a state of the art API-based model, GPT-4o, to generatethe complete proof in one pass (pass@1) given the theorem statement, with several few-shot examplesprovided for guidance. We additionally test whether adding context in the form of preceding filecontents improves the proof rate of GPT-4o.State-tactic prompting. Another common approach to theorem proving using language models is tolet the model generate a tactic given the current proof state [ 7–9,25]. Therefore, we test the state-tactic prompting setting, which prompts a model specialized for mathematical tasks, Llemma-7b [ 26],to output a tactic given a proof state. At test time, the model generates one tactic at a time, and weuse a best-first search to construct full proofs [7–9, 1].State-tactic tuning. We follow this state-tactic framework and fine-tune a state-tactic tuned modelfrom DeepSeek-Coder-1.3b [ 27] to input proof states and output tactics, trained on human-writtentactics in Mathlib, the main mathematical library in Lean, extracted by NTP-TOOLKIT .File-tuning. We then test whether supplying context, in the form of preceding file contents, to themodel improves performance. Similar to state-tactic tuning, we fine-tune a 1.3b model to generate atactic based on (proof state, context) pairs, resulting in the file-tuned model.Premise selection. To better simulate a complete context and evaluate on project-level generalization,we apply premise selection to extract relevant premises from imported files within the same repository.We use the premise retriever provided by LeanDojo [ 8] to identify the top 20 most relevant definitionsor lemmas from imported modules and append them to the in-file context.4.2 ResultsContext-dependent methods improve theorem proving. Table 2 shows baseline performancesonminiCTX . We see a dramatic improvement for the file-tuned model (trained on full file context)over the state-tactic model (trained only on proof states) (35.94% vs. 19.53%). Similarly, providingthe preceding file context, which includes definitions and lemmas, to GPT-4o results in dramaticimprovement compared to using just the proof state (27.08% vs. 11.72%). Figure 4 shows theperformance of state-tactic tuned model and file-tuned model on problems with in-file dependenciescompared to those without. These findings highlight the importance of providing models with rich4Table 2: Performance comparison (%) of different models on miniF2F andminiCTX .miniF2F miniCTXMethod Test Prime PFR PFR cross Mathlib HTPI HEP SciLean Avg.GPT-4o (full proof) — 7.06 1.85 6.98 14.00 13.33 31.15 6.52 11.72+ context — 31.76 5.56 34.88 26.00 17.78 49.18 17.39 27.08+ context + premise — 29.41 7.41 39.53 — 15.56 44.26 21.74 26.82State-tactic prompting 28.28 20.00 5.56 0.00 16.00 0.00 31.15 19.57 14.58State-tactic tuning 32.79 17.65 5.56 0.00 22.00 11.11 52.46 19.57 19.53File tuning 33.61 40.00 5.56 44.19 34.00 15.56 60.66 45.65 35.94+ premise — 42.35 11.11 16.28 — 8.89 50.82 32.61 30.21Standaloneproblems(miniF2F)No in-filedependency(miniCTX)With in-filedependency(miniCTX)0.00.10.20.30.40.50.6Performance (%)Performance on theorems by in-file dependencyState-tactic tuningFile-tuningLow dependencyon all premisesHigh dependencyon in-file premisesHigh dependency oncross-file premisesHigh dependencyon all premises0.00.10.20.30.40.50.6Performance (%)Performance on miniCTX theorems by dependency levelGPT-4oGPT-4o + contextGPT-4o + context + premiseFigure 4: Model performance by dependency on premises. For each theorem in miniCTX , we recordas metadata whether its human-written proof depends on other definitions or theorems in the samefile (“in-file”) or in other files (“cross-file”), and test the performance of baselines on each type.contextual information beyond the immediate proof state, also demonstrating that miniCTX is able tomeasure this ability of context-dependent proving.Premise selection improves performance on high cross-file dependency splits. The results inTable 2 indicate that premise selection has a mixed impact on model performance. For the GPT-4o,premise selection improves performance on high cross-file dependency splits, such as PFR, PFR cross,and SciLean. This suggests that premise selection helps capture the cross-file context, enablingGPT-4o to make better use of cross-file information. However, for the file-tuned model, premiseselection does not consistently improve results, and even performs worse on the PFR crosssplit, whichwas designed to evaluate the effective use of cross-file premises. Also shown in Figure 4, GPT-4obenefits significantly from premise selection on problems with high cross-file dependencies, butdegrades in other cases. This suggests that the retrieved premises differ significantly from the in-filecontext. Therefore, developing methods that effectively support the integration of cross-file context(e.g., premise selection) alongside in-file context remains an interesting open research direction forimproving performance on the miniCTX benchmark.Evaluation on miniF2F .We evaluate baselines on miniF2F , a standard benchmark based on com-petition problems that do not require context. The file-tuned model improves very little beyond thestate-tactic model (33.61% vs. 32.79%), showing that the dramatic difference in context-dependentproving abilities seen on miniCTX cannot be captured by miniF2F .Additional analysis. Further analysis shows that file-tuning delivers greater gains on problems withstronger dependencies on new lemmas. Both definitions and theorems are crucial in the context, andmodels show ability to learn proof structure from previous lemmas. See § ??for more details.5 ConclusionWe studied the realistic setting of proving theorems that depend on new information and projectconstraints, and formulated an evaluation framework for testing generalization using real Leanprojects. We built miniCTX , and found that the predominant method for training neural theoremprovers fails to enable context dependent proving. Our file tuning method provides a strong startingpoint for the new challenges opened by our investigation into theorem proving with context.5References[1]Sean Welleck and Rahul Saha. Llmstep: Llm proofstep suggestions in lean. arXiv preprintarXiv:2310.18457 , 2023.[2]Peiyang Song, Kaiyu Yang, and Anima Anandkumar. Towards large language models as copilots fortheorem proving in Lean. arXiv preprint arXiv: Arxiv-2404.12534 , 2024.[3] Sean Welleck. Llmlean. https://github.com/cmu-l3/llmlean , 2024.[4]Ayush Agrawal, Siddhartha Gadgil, Navin Goyal, Ashvni Narayanan, and Anand Tadipatri. Towards amathematics formalisation assistant using large language models. arXiv preprint arXiv:2211.07524 , 2022.[5]Alex Kontorovich. Prime number theorem and; rectangle.lean. https://github.com/AlexKontorovich/PrimeNumberTheoremAnd/blob/main/PrimeNumberTheoremAnd/Rectangle.lean , 2024.[6]Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. miniF2F: a cross-system benchmark for formalolympiad-level mathematics. In International Conference on Learning Representations , 2022.[7]Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward Ayers, and Stanislas Polu. Proof artifact co-trainingfor theorem proving with language models. In International Conference on Learning Representations ,2022.[8]Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, RyanPrenger, and Anima Anandkumar. LeanDojo: Theorem proving with retrieval-augmented language models.InNeural Information Processing Systems (NeurIPS) , 2023.[9] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving, 2020.[10] Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, Yichuan Ma, JiaweiHong, Kuikun Liu, Ziyi Wang, et al. Internlm-math: Open math large language models toward verifiablereasoning. arXiv preprint arXiv:2402.06332 , 2024.[11] Huajian Xin, Z. Z. Ren, Junxiao Song, Zhihong Shao, Wanjia Zhao, Haocheng Wang, Bo Liu, LiyueZhang, Xuan Lu, Qiushi Du, Wenjun Gao, Qihao Zhu, Dejian Yang, Zhibin Gou, Z. F. Wu, Fuli Luo, andChong Ruan. Deepseek-prover-v1.5: Harnessing proof assistant feedback for reinforcement learning andmonte-carlo tree search. arXiv preprint arXiv:2408.08152 , 2024.[12] Albert Qiaochu Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygó ́ zd ́ z, PiotrMiło ́s, Yuhuai Wu, and Mateja Jamnik. Thor: Wielding hammers to integrate language models andautomated theorem provers. Advances in Neural Information Processing Systems , 35:8360–8373, 2022.[13] Maciej Mikuła, Szymon Antoniak, Szymon Tworkowski, Albert Qiaochu Jiang, Jin Peng Zhou, ChristianSzegedy, Łukasz Kuci ́nski, Piotr Miło ́s, and Yuhuai Wu. Magnushammer: A transformer-based approachto premise selection. arXiv preprint arXiv:2303.04488 , 2023.[14] Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants, 2019.[15] Zhangir Azerbayev, Bartosz Piotrowski, Hailey Schoelkopf, Edward W Ayers, Dragomir Radev, andJeremy Avigad. Proofnet: Autoformalizing and formally proving undergraduate-level mathematics. arXivpreprint arXiv:2302.12433 , 2023.[16] Albert Qiaochu Jiang, Wenda Li, Jesse Han, and Yuhuai Wu. Lisa: Language models of isabelle proofs.https://aitp-conference.org/2021/abstract/paper_17.pdf , 2021.[17] Leonardo de Moura and Sebastian Ullrich. The lean 4 theorem prover and programming language. InAutomated Deduction–CADE 28: 28th International Conference on Automated Deduction, Virtual Event,July 12–15, 2021, Proceedings 28 , pages 625–635. Springer, 2021.[18] Alex Kontorovich. Prime number theorem and. https://github.com/AlexKontorovich/PrimeNumberTheoremAnd , 2024.[19] Terence Tao. Pfr. https://github.com/teorth/pfr , 2023.[20] Heather Macbeth. The mechanics of proof. https://github.com/hrmacbeth/math2001 , 2023.[21] Mathlib Community. The lean mathematical library. In Proceedings of the 9th ACM SIGPLAN InternationalConference on Certified Programs and Proofs , POPL ’20. ACM, January 2020.6[22] Joseph Tooby-Smith. Heplean: Digitalising high energy physics. arXiv preprint arXiv:2405.08863 , 2024.[23] Tomáš Sk ˇrivan. Scientific computing in lean. https://github.com/lecopivo/SciLean , 2021.[24] Chengwu Liu, Jianhao Shen, Huajian Xin, Zhengying Liu, Ye Yuan, Haiming Wang, Wei Ju, ChuanyangZheng, Yichun Yin, Lin Li, et al. Fimo: A challenge formal dataset for automated theorem proving. arXivpreprint arXiv:2309.04295 , 2023.[25] Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury Hayat, ThibautLavril, Gabriel Ebner, and Xavier Martinet. Hypertree proof search for neural theorem proving. Advancesin neural information processing systems , 35:26337–26349, 2022.[26] Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen Marcus McAleer,Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model formathematics. In The Twelfth International Conference on Learning Representations , 2024.[27] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi,Y . Wu, Y . K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. Deepseek-coder: When the large languagemodel meets programming – the rise of code intelligence, 2024.[28] Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya Sutskever.Formal mathematics statement curriculum learning, 2022.[29] Daniel J. Velleman. How to Prove It: A Structured Approach . Cambridge University Press, 3 edition, 2019.[30] Lean Prover Community. repl. https://github.com/leanprover-community/repl , 2024.7AppendixA Discussion and future challengesIn addition to general improvements in performance, we comment on some specific open challenges.Making better use of long-contexts. Our file-tuning method simply truncates contexts to be withina token budget (1024), which can discard useful contextual information. We found gains in providingGPT-4o 8,000 tokens of context compared to not providing it context, but its absolute performancewas still low. There are several possible strategies that can be explored in future work, includingfeeding in the entire context, retrieval, or mixtures of the two.Repository-level context. We focused on evaluating in-file context in this paper. As shown in§H.1, many problems require using context outside of the current file. Although we incorporatedpremise selection as a means of leveraging cross-file context, our experiments indicate that it does notconsistently improve performance, even on datasets with high cross-file dependencies. This suggestsa need to further investigate how to better integrate premise selection with in-file context. miniCTXprovides sufficient metadata to reconstruct the entire environment, allowing for comprehensiveinvestigation into premise selection and other potential methods for leveraging cross-file context.Challenging proofs. Using context through file tuning did not improve performance on the challeng-ing PFR proofs. Moreover, performance is relatively low (19%) on proofs that had a human-writtenproof of longer than five lines (see §H.2). Proving these kinds of theorems remains an open problem.Working with constraints. As shown in Table 6, model performance drops when the proof cannotuse powerful automation tactics. Models have a tendency to invoke these powerful tactics, andstruggle with more explicit step-by-step proofs. Improving performance in this setting of miniCTX isan interesting future direction.B Experiment DetailsB.1 Motivation for BaselinesWriting a proof can be seen as a sequential process (x1, y1),(x2, y2), . . . ofstates xtandtacticsyt. A state contains what is left to prove (the goal), and available information (the hypotheses ).Atactic transitions the proof to a new state. If the state contains no remaining goals, the proof iscomplete. Concretely, a user applies tactics by writing Lean code, Lean keeps track of the state, andthe development environment shows the state and the written code.The traditional approach to training a language model for theorem proving is to train a model on(state, tactic) pairs, i.e., train it to predict the next step of a proof (i.e., the tactic), given the stateprovided by the proof assistant (i.e., the proof state) [ 9,7,25,8]. A drawback to this approach is thatat test time, the model is not aware of new context outside of the proof state, such as new lemmas. Wewill see later on that models trained with this state-tactic approach fail at context-dependent proving.As a stronger baseline for context-dependent proving, we present file-tuning , a simple recipe thattrains with (preceding file context, proof state, next-tactic) tuples instead of training with (proof state,next-tactic) pairs (Figure 3). This lets the model use new definitions, theorems, or other informationthat are defined prior to the current tactic at training or at test time. In practice, file-tuning requiresextracting contexts and proof states from Lean, which is done by NTP-TOOLKIT .B.2 Data extractionWe ran NTP-TOOLKIT ’s next-tactic extraction on a snapshot of Mathlib, yielding 307,049 examplesavailable at l3lab/ntp-mathlib . We then ran NTP-TOOLKIT ’s instruction tuning script on theseexamples, yielding file-tuning examples and state-tactic examples. For the file-tuning examples, as aninitial method for handling the long Lean files, we either truncate the middle of an input file so thatthe file contents is 1024 tokens, or take only the preceding 1024 tokens, with the strategy selected atrandom for each example. The state-tactic examples are at l3lab/ntp-mathlib-instruct-st . Theunion of file-tuning and state-tactic examples are at l3lab/ntp-mathlib-instruct-context , splitinto 583k train, 15k dev, and 15k test examples.8B.3 Baseline SetupsFile-tuning. Next, we fine-tune a language model on the union of file-tuning and state-tacticexamples. We use the DeepSeek-Coder-1.3b language model [ 27] based on its performance on codegeneration tasks and its size, which allowed us to fine-tune with our computational budget. We fine-tune the model for 3 epochs and select the model based on held-out validation perplexity evaluatedevery 4,000 steps. The model is available at l3lab/ntp-mathlib-context-deepseek-coder-1.3b .State-tactic tuning. We fine-tune a similar model on l3lab/ntp-mathlib-instruct-st using thesame hyperparameters. The model is available at l3lab/ntp-mathlib-st-deepseek-coder-1.3b .B.4 Evaluation setupWe evaluate models for the task of tactic-based theorem proving: given a theorem statement, a modelgenerates one tactic at a time while receiving states from the proof assistant. We use a standardbest-first search strategy [ 9,7,28,8,1] which prioritizes partial proofs based on the model’s averagelog probabilities. This search method is parameterized by the number of generated tactics per iterationS, and the maximum number of iterations T. We use the setting from [ 26,1] (S= 32 , andT= 100 ).We evaluate five types of baselines: (1) pass@1 full proof generation using GPT-4o: we prompt themodel with only the theorem statement and require it to generate a complete proof (see Appendix(§G.2) for details of the prompts and few-shot examples); (2) pass@1 full proof generation within-file context using GPT-4o: we supplement the theorem statement with up to 8000 tokens of in-filecontext; (3) the file-tuning model described in (§4.1); (4) the state-tactic model described in (§4.1);and (5) a state-tactic prompting model: we prompt a pre-trained language model with (state, tactic)examples, using Llemma-7b [26].C AnalysisWe analyze the baseline models on miniCTX further along several axes, including the kinds ofcontextual dependencies, the difficulty, and the content made available in the context.File-tuning especially helps on problems with infile dependencies. We use the miniCTX metadatato categorize theorems based on their in-file dependencies. Figure 6 shows the performance ofstate-tactic tuned model and file-tuned model on problems with in-file dependencies compared tothose without. We also show miniF2F as an additional reference point for problems without in-filedependencies. The file-tuned model shows a marked improvement over the state-tactic tuned model,especially in problems that have dependencies on context. We conclude that file-tuning specificallyhelps in the realistic setting of theorem proving with new definitions and theorems in context.Premise selection helps but may interfere with in-file context. We use miniCTX metadata tocategorize problems based on their cross-file dependencies, evaluating the impact of premise selectionacross the entire dataset. As shown in Figure 4, GPT-4o benefits significantly from premise selectionon problems with high cross-file dependencies, showing improved performance when leveragingrelevant premises from imported files. However, we also observe that premise selection can interferewith in-file context, leading to inconsistent results, particularly when the available in-file context isrelatively short. This suggests that adding cross-file premises may sometimes disrupt the model’sability to focus on the in-file information. Further analysis of this interference is included in §H.3.This highlights the need for more sophisticated integration strategies that can balance both in-file andcross-file contexts effectively.Models can learn from previous proofs in the file context. To determine the contribu-tion of different components in the in-file context, we conducted an ablation study on thePFR.ForMathlib.Entropy.Basic file, which contains numerous co-related lemmas and rich natu-ral language comments, making it an ideal candidate to investigate the influence of different contextcomponents. In this ablation, we systematically removed specific parts of the in-file context andevaluated the model’s ability to generate proofs under these modified conditions. As shown in Table 3,both the file-tuned model and GPT-4o benefit from the inclusion of previous proofs in the file context.This indicates that models are capable of learning proof strategies from existing proofs in the file andeffectively applying them to new problems (see §H.4 for more examples).9Table 3: Ablation study on different context components for theorem proving.Environment Definitions Lemma Lemma Natural Language File-tuning GPT-4oStatement Proof Comments✗ ✗ ✗ ✗ ✗ 14.12% 8.24%✓ ✗ ✗ ✗ ✗ 25.88% 2.35%✓ ✓ ✗ ✗ ✗ 24.71% 9.41%✓ ✓ ✓ ✗ ✗ 27.06% 22.35%✓ ✓ ✓ ✓ ✗ 32.94% 34.12%✓ ✓ ✓ ✗ ✓ 28.24% 23.53%✓ ✓ ✓ ✓ ✓ 35.29% 31.76%Table 4: Overview of problem statistics in miniF2F andminiCTX .Split Problems Avg. Context Length (tokens) Avg. Proof StepsminiF2F [6] Valid/Test 488 153* 3.0†miniCTXPrime 87 10,630 3.6PFR 54 17,495 27.7Mathlib 50 14,440 6.1HTPI 185 39,050 10.7†All 376 26,106 10.9*Only counting library imports and definitions.†Excluding theorems without proofs.Natural language comments contribute in certain settings. Our ablation also explored the effectof natural language comments in the in-file context. Though the impact was not dramatic, commentswritten in natural language were found to be helpful in certain settings. In scenarios where proofswere excluded from the context, adding comments resulted in slight performance gains for bothmodels. For the file-tuned model, these gains were further amplified when proofs were includedalongside comments, demonstrating the value of combining formal context with explanatory naturallanguage. However, for GPT-4o, the presence of comments when proofs were included led to a slightdecrease in performance, suggesting that effective context selection may vary depending on the modelarchitecture and underlying training characteristics.File-tuning improves across all difficulty levels and context lengths. Finally, Appendix §H.2 showsperformance on problems categorized by the length of the human-written proof (when available),which we take as a rough proxy of the problem difficulty. The file-tuned model improved on allthree difficulty categories. Appendix §H.2 also shows that file-tuning had improved accuracy acrosscontext lengths, particularly for problems with longer contexts. Longer contexts may imply moredependencies, suggesting that these problems can benefit more from file-tuning.Models rely on common symbolic automation. To demonstrate an additional kind of context-dependence, we perform an additional analysis on Math2001 [ 20], which is another Lean textbooksetting.1In particular, the textbook code disables powerful automation tactics including simpandlinarith to promote manual reasoning, akin to traditional textbook exercises. For example,Math2001 includes numerous arithmetic problems that are trivial with automation tactics (e.g.,linarith ) but are challenging for models to explicitly prove with step-by-step reasoning (e.g.,viacalc ). In Table 6 we evaluate models with the automation disabled, and observe substantialperformance drops, confirming the reliance on automation tactics. We also find that the state-tactictuned model relies on simp for unseen definitions, making it performing similarly well to thefile-tuned model on theorems that only rely on new definitions (§H.6).DminiCTX SourcePrime Number Theorem. PrimeNumberTheoremAnd [ 18] is a project started in January 2024 thatformalizes the prime number theorem in Lean as well as related concepts, such as residue calculuson rectangles in C. We find the files Rectangle.lean andResidueCalcOnRectangles.leansuitable for our purpose of testing context-dependent theorem proving, especially when we usepreceding file content as context, as each file is self-contained within the project and contains newdefinitions (rectangles, squares) and many interdependent lemmas. See §E.2 for an illustration of such1See Appendix F.1 for further details on Math2001. Due to licensing we do not include it in miniCTX .10lemmas. In addition, most theorems from ResidueCalcOnRectangles.lean rely on the definitionsfrom Rectangle.lean , which serves as a perfect example of cross-file dependencies. We extracted87 theorems from these files. Assuming that a model was trained prior to January 2024, this splitguarantees the evaluation of project-level, context-level, and theorem-level generalization.PFR. PFR [ 19] is a project started in November 2023 that formalizes a proof of the PolynomialFreiman–Ruzsa (PFR) conjecture. We included 54 theorems from PFR. We find that proofs oftheorems in PFR tend to be much more monolithic and longer in length than those in Mathlib or otherlibraries. PFR also defines custom mathematical concepts and notations (such as Ruzsa distance) anda proof typically depends on many lemmas in PFR outside the current file. All of the theorems wereadded to PFR after November 2023. Assuming that the model was trained prior to this date, this splitguarantees the evaluation of project-level, context-level, and theorem-level generalization.Recent Mathlib Commits. Lean’s mathematical library, Mathlib [ 21], is a community-maintainedLean repository including mathematical concepts as well as programming APIs and common tactics.It is the single largest Lean library that users contribute to, and is therefore representative of theproduction environment in which neural theorem provers are deployed. Mathlib is a long-standingproject, and it is common practice to train language model-based provers on Mathlib. It is thereforelikely that Mathlib source files have been observed during training. However, Mathlib is frequentlyupdated, with new definitions, theorems, and refactorings occurring on a daily basis. Hence we cantest theorem-level generalization by tasking a model with proving newly added theorems, given acontext that may or may not have been observed during training.We included 50 theorems added to Mathlib in April 2024, by filtering recent Mathlib commits to onesthat only add new theorems. Many of the theorems added are simple lemmas that depend on earlierones (e.g., ones seen during training). As Mathlib generally refactors new theorems by breaking downlong theorems to shorter lemmas, most new theorems are not difficult to prove, and give a realisticrepresentation of where neural theorem provers are used. Assuming that the model was trained priorto April 2024, the Mathlib split guarantees the evaluation of theorem-level generalization.HTPI. HTPI contains the Lean code for the book How to Prove It (HTPI) [ 29], which explainsa systematic approach to constructing mathematical proofs with Lean. It covers various topics,including elementary logic, number theory, and proving techniques like mathematical induction,along with their implementation in Lean. As supplementary material to the textbook, the problems inHTPI are formulated in a similar fashion: the files typically start with basic definitions and lemmasthat might be used throughout the entire file, followed by exercises and several example problems.2Therefore, models can utilize definitions, lemmas, and proof structures from example problems tosolve similar exercises. Intuitively, the model must understand and apply the context provided withineach file, making it an effective benchmark for testing context-aware theorem-proving models.EminiCTX ExamplesHere we give some examples of the miniCTX and its sources to illustrate the format of the data andhow and why we collect certain theorems.E.1 Example EntryAn entry in the miniCTX dataset consists of the theorem statement, preceding file contents, andmetadata information. For example, given the following theorem s_eq_pow_two in context:import Mathlib.Data.Real.Basic/-!# Square functionWe define the squaring function `s :R→R`to be `s x := x * x `.-/2The main chapter and exercises are separated in the original project: HTPILeanPackage. We manuallymerged them for evaluation in the fork: HTPILeanPackage4.7.11def s (x : R) :R:= x * xlemma s_eq_pow_two {x : R} : s x = x ^ 2 := byrw [s, pow_two]We collect its data into miniCTX , formatted in JSON as follows:{# Preceding file content"srcContext": "import␣Mathlib.Data.Real.Basic\\n\\n/-!\\n#␣Square␣function\\nWe␣define␣the␣squaring␣function␣ `s␣:␣\\u211d␣\\u2192␣\\u211d `␣to␣be␣ `s␣x␣:=␣x␣*␣x`.\\n-/\\n\\ndef␣s␣(x␣:␣\\u211d)␣:␣\\u211d␣:=␣x␣*␣x\\n\\n",# Theorem statement"theoremStatement": "lemma␣s_eq_pow_two␣{x␣:␣\\u211d}␣:␣s␣x␣=␣x␣^␣2",# Fully qualified theorem name"theoremName": "s_eq_pow_two",# Temporal metadata"fileCreated": "(git␣commit)","theoremCreated": "(git␣commit)",# Source metadata"file": "MyProject/Square.lean","module": "MyProject.Square","positionMetadata": {# Line number the theorem is on"lineInFile": 10,# Number of tokens before the theorem"tokenPositionInFile": 152,# Number of premises (definitions, theorems) before the theorem"theoremPositionInFile": 1},# Dependency metadata"dependencyMetadata": {# Number of definitions or lemmas defined in this file that the theorem uses"inFilePremises": true,"numInFilePremises": 1,# Number of definitions or lemmas defined in this repository that the theoremuses (including in-file ones)"repositoryPremises": true"numRepositoryPremises": 1,# Number of total premises (in file, repository, or otherwise)"numPremises": 2,# Modules imported in the current file"importedModules": ["Mathlib.Data.Real.Basic", ...]},# Proof metadata"proofMetadata": {"hasProof": true,"proof": "by\n␣␣rw␣[s,␣pow_two]","proofType": "tactic","proofLengthLines": 2,"proofLengthTokens": 20}}In additional to individual entries, we also record the version (git commit) of the repository.12E.2 Prime Number Theorem exampleWe collect theorems from the Rectangle.lean file in PrimeNumberTheoremAnd. The followingexcerpt from Rectangle.lean demonstrates the scenario that often arises in a theorem provingenvironment where context is critical to producing a proof:import Mathlib.Analysis.Complex.CauchyIntegralimport Mathlib.Analysis.Complex.Convexopen Complex Set Topologyopen scoped Intervalvariable {z w : C} {c : R}/-%%\begin{definition}\label{Rectangle}\lean{Rectangle}\leanokA Rectangle has corners $z$ and $w \in \C$.\end{definition}%%-//-- A `Rectangle `has corners `z`and `w`. -/def Rectangle (z w : C) : Set C:= [[z.re, w.re]] ×C[[z.im, w.im]]namespace Rectanglelemma symm : Rectangle z w = Rectangle w z := bysimp [Rectangle, uIcc_comm]lemma symm_re : Rectangle (w.re + z.im * I) (z.re + w.im * I) = Rectangle z w := bysimp [Rectangle, uIcc_comm]When proving the final lemma symm_re , a model can benefit much from the preceding file contents,which include (1) the existing imports from Mathlib, variable declarations, and open namespacesthat provide a syntactic context for this theorem, (2) the new definition Rectangle in the context,which the model has not seen in training, (3) natural language and LaTeX documentation of the fileandRectangle definition, (4) the analogous (in this case identical) proof of the preceding theoremsymm . We demonstrate that performance on Rectangle.lean is indeed much higher when precedingfile contents are given as context to a model.For future data added to miniCTX that specifically test the preceding file contents as context, we willensure it is standalone like Rectangle.lean , i.e. it does not import any other unseen files from thesame repository, so the preceding file contents already contain all important information relevant tothe proof.F Additional datasetsIn addition to problems in miniCTX , we also evaluated other datasets that are not included due tocopyright reasons.F.1 Math2001Math2001 [ 20] contains the Lean code for the book The Mechanics of Proof by Heather Macbeth, anintroductory text on mathematical theorem proving with accompanying Lean code. Each chapter ofThe Mechanics of Proof covers an introductory topic and walks through how to write the associatedmathematics in Lean, along with exercises. The topics include proofs by calculation, proofs withstructure, parity and divisibility, logic, induction, number theory, functions, sets, and relations. Aunique aspect of Math2001 is that it disables common Lean automation for pedagogical purposes.For example, a student must write out an equality proof in detail, with each step justified. It alsodefines new tactics and definitions separate from the common Lean libraries. Typically a file in thetextbook will show examples of such proofs, followed by exercises for a student to complete. Wecan view this as a form of contextual adaptation: a model must prove the theorem according to theconstraints of the textbook. Math2001 has 41 files that include examples and exercises. We selected 113Models Math2001GPT-4o (full proof) 11.76%GPT-4o (+ context) 43.13%State-tactic prompting 31.37%State-tactic tuning 27.45%File tuning 41.18%Table 5: Performance comparison of different models on Math2001.Automation File (%) State-tactic (%)Enabled 41.18 11.76Disabled 27.45 7.84Table 6: Performance on the Math2001 split with and without access to standard automation.to 2 theorems from each file (depending on the length of the file), for a total of 50 theorems. Of these,31 have no proof in the Math2001 repository, hence testing theorem-level generalization.Context-aware models surpass state-based models Table 5 shows the performance comparisonof different models. Both the GPT-4o model, which includes context in the input, and the file-tunedmodel perform significantly better than the other models. This demonstrates the importance of contextinformation in context-dependent textbook-style problems.Models rely on common symbolic automation. The Math2001 split originally disables powerfulautomation tactics including simp andnlinarith to promote manual reasoning, akin to traditionaltextbook exercises. In Table 6 we evaluate models with the automation disabled, and observesubstantial performance drops, confirming a heavy reliance of current models on these automationtactics. An examination of the training corpus further revealed a general dependency on automatedtactics within real Lean projects, indicating that our models have learned to rely on these tactics.G NTP-TOOLKIT and file-tuning detailsG.1 Data extractionNTP-TOOLKIT contains a general-purpose data extraction tool that extracts examples from an arbitraryLean 4 repository and formats them into examples that can be used to compile miniCTX , as wellas for language-model fine-tuning. The tool is implemented in Lean based on Kim Morrison’slean-training-data .Specifically, NTP-TOOLKIT takes in a configuration file with one or more Lean repositories specified.Each repository is transformed into next-tactic andfull proof examples stored in JSON Lines files.The next-tactic data is suitable for making file-tuning examples of the form (context, state, next-tactic):{"state": # tactic state ,"nextTactic": # pretty-printed next tactic,"srcUpToTactic": # source code in the file up to the tactic invocation,"decl": # declaration without proof (e.g., statement of a theorem),"declUpToTactic": # source code in the declaration up to the tactic invocation,"declId": # unique identifier of the declaration}The full proof data is suitable for making evaluation examples of the form (context, theorem, proof):{"srcUpToDecl": # source code in the file up to the declaration,"decl": # declaration without proof (e.g., statement of a theorem),"declId": # unique identifier of the declaration,"proof": # proof14}Full proof data is also suitable for training a model to directly generate a full proof, and NTP-TOOLKITalso provides Lean source with proof states interleaved, both of which we do not explore in this work.G.2 Input-output formatting.Below we show the inputs and outputs for file-tuning and state-tactic tuning. In the paper we refer tothe natural language description at the beginning of the input as an “instruction”, and refer to a set ofinputs and outputs as described below as “instruction-tuning data”.G.2.1 File tuning.Given an example containing a state, next-tactic, and preceding file contents ( srcUpToTactic ), thedata is formatted as:Input :/- You are proving a theorem in Lean 4.You are given the following information:- The file contents up to the current tactic, inside [CTX]...[/CTX]- The current proof state, inside [STATE]...[/STATE]Your task is to generate the next tactic in the proof.Put the next tactic inside [TAC]...[/TAC]-/[CTX]{srcUpToTactic}[/CTX][STATE]{state}[/STATE][TAC]Output :{nextTactic}[/TAC]G.2.2 State-tactic tuning.Given an example containing a state and next-tactic, the data is formatted as:Input :/- You are proving a theorem in Lean 4.You are given the following information:- The current proof state, inside [STATE]...[/STATE]Your task is to generate the next tactic in the proof.Put the next tactic inside [TAC]...[/TAC]-/[STATE]{state}[/STATE][TAC]15Output :{nextTactic}[/TAC]G.2.3 GPT-4o promptFor full proof generation task with only theorem statement, we use the following prompt:Your task is to generate complete proofs for problems stated in Lean4. You may use anytactics available in Mathlib, but no additional context, definitions, or theorems from theproblem’s file will be provided. Focus on crafting proofs using general knowledge andtechniques applicable in Lean4. Here are some examples:lemma deriv_scale {f : CS (n + 1) E} : (f.scale R).deriv = R−1·f.deriv.scale R := byext v ; by_cases hR : R = 0 <;> simp [hR, scale]·simp [deriv, smul] ; exact deriv_const _ _·exact ((f.hasDerivAt (R−1·v)).scomp v (by simpa using (hasDerivAt_idv).const_smul R−1)).derivtheorem mul_dvd_mul_left (a : α) (h : b | c) : a * b | a * c := byobtain ⟨d, rfl ⟩:= huse drw [mul_assoc]/- Now here is your exercise. There is no need to restate the problem. Ifneeded, think through the proof using comments. -/{theorem statement}For full proof generation task with additional infile context, we use the following prompt:16Your task is to generate complete proofs for problems stated in Lean4. For each problem, youwill be provided with the context from the file in which the theorem is stated. This contextincludes useful external libraries, along with important definitions and theorems that arerelevant to the proof. You are encouraged to use any tactics, definitions, lemmas, or theoremsdefined within this context to construct your proof. Please pay careful attention to indentationand formatting to ensure that the proof adheres to Lean4 syntax standards. Here are someexamples:#Context:import Mathlib.Analysis.Calculus.Deriv.Supportimport Mathlib.Analysis.Distribution.SchwartzSpaceimport Mathlib.Order.Filter.ZeroAndBoundedAtFilteropen Real Complex MeasureTheory Filter Topology BoundedContinuousFunctionSchwartzMap BigOperatorsvariable {E : Type*} [NormedAddCommGroup E] [NormedSpace RE] {{n : N}}@[ext] structure CS (n : N) (E : Type*) [NormedAddCommGroup E] [NormedSpaceRE] wheretoFun : R→Eh1 : ContDiff Rn toFunh2 : HasCompactSupport toFunnoncomputable def scale (g : CS n E) (R : R) : CS n E := byby_cases h : R = 0·exact ⟨0, contDiff_const, by simp [HasCompactSupport, tsupport] ⟩·refine ⟨fun x => funscale g R x, ?_, ?_ ⟩·exact g.h1.comp (contDiff_const.smul contDiff_id)·exact g.h2.comp_smul (inv_ne_zero h)/- Truncated -//- Now here is your exercise. There is no need to restate the problem. Ifneeded, think through the proof using comments. -/#Context:{}#Problem:{}{theorem statement}H Additional results and analysisH.1 Dependency distributionDepends onin-file definitionsDepends onin-file theoremsDepends onin-repo. definitionsDepends onin-repo. theorems020406080100Percentage of theorems (%)Proportion of different dependenciesFigure 5: Percentage of different dependencies in the human-written proof of theorems in miniCTX17H.2 Performance by proof length and context lengthStandaloneproblems (miniF2F)No in-filedependencyDefinitiondependencyTheoremdependencyAll dependencies010203040Performance (%)Performance on theorems by dependency typeState-tactic tuningFile-tuningFigure 6: Performance by dependency type. For each theorem in miniCTX , we record as metadatawhether its human-written proof depends on other definitions or theorems in the same file, and testthe performance of baselines on each type. File-tuned models substantially outperform state-tactictuned models on theorems with definition and/or theorem dependencies. 2 25 > 5Proof length (lines)0204060Performance (%)Performance by proof lengthState-tactic tuningFile-tuning 3000 3000 8000 > 8000In-file context (tokens)0204060Performance (%)Performance by in-file context lengthState-tactic tuningFile-tuningFigure 7: Performance of two baselines on different difficulty levels and context lengths, as measuredby the length of human-written proof in lines and the size of the preceding file contents in tokens.File-tuning substantially improves theorem-proving abilities across all cases, but especially when thetheorem is easier and the context is longer.H.3 Interference between in-file context and retrieved premisesIn our experiments, we attempted to supply both in-file context (in the form of preceding code) andpremise context (in the form of retrieved premises) to GPT-4o for proving a theorem. In Figure 8, wepresent an analysis of the impact of the length of retrieved premises on the resulting proof successrate.Longer retrieved premises hurt performance. The results indicate that problems with a lowerpremise-to-context length ratio tend to have higher success rates. Specifically, successful problemsoften feature relatively shorter premises as proportion of the full context length. This suggests thatmodels are better able to utilize and focus on relevant in-file context when the cross-file premisesare proportionally smaller. Conversely, when the length of the premises becomes relatively largecompared to the full context, it may overwhelm or distract the model, reducing its ability to effec-tively utilize the in-file information. This finding highlights the importance of ensuring a balancedintegration of premises with the in-file context to maintain model focus and improve proof generationperformance.180.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8T otal size of retrieved premises, as proportion of promptPerformance by size of retrieved premisesSuccessFailureFigure 8: Impact of length of retrieved premises on GPT-4o model performance. A higher premise-to-context length ratio is correlated with lower success rates, suggesting that too much premise incontext overwhelms the model.H.4 Example of learning proofs from contextThe file-tuned model is able to utilize the proofs in the context. Here is an example of the modelmaking minimal modification to the proofs from the context:Input :...theorem Set.left_not_mem_uIoo {a b : R} : a /∈Set.uIoo a b :=fun⟨h1, h2 ⟩ 7→(left_lt_sup.mp h2) (le_of_not_le (inf_lt_left.mp h1))theorem Set.right_not_mem_uIoo {a b : R} : b /∈Set.uIoo a b := byOutput :rintro ⟨h1, h2 ⟩exact (right_lt_sup.mp h2) (le_of_not_le (inf_lt_right.mp h1))H.5 Example of using theorems from contextThe file-tuned model is able to utilize the stated theorems in the context. Here is an example of themodel using the previously defined theorem in the proof:Input :19. . .lemma RectSubRect {x 0x1x2x3y0y1y2y3:R} (x 0_le_x 1: x 0≤x1)(x1_le_x 2: x 1≤x2)(x2_le_x 3: x 2≤x3) (y 0_le_y 1: y 0≤y1) (y 1_le_y 2: y 1≤y2) (y 2_le_y 3: y 2≤y3) :Rectangle (x 1+ y 1* I) (x 2+ y 2* I)⊆Rectangle (x 0+ y 0* I) (x 3+ y 3*I) := byrw [rect_subset_iff, mem_Rect, mem_Rect]refine ⟨⟨?_, ?_, ?_, ?_ ⟩, ?_, ?_, ?_, ?_ ⟩all_goals simpa using by linarithlemma RectSubRect '{z0z1z2z3:C} (x 0_le_x 1: z 0.re≤z1.re) (x 1_le_x 2:z1.re≤z2.re)(x2_le_x 3: z 2.re≤z3.re) (y 0_le_y 1: z 0.im≤z1.im) (y 1_le_y 2: z 1.im≤z2.im)(y2_le_y 3: z 2.im≤z3.im) :Rectangle z 1z2⊆Rectangle z 0z3:= byOutput :simpa using RectSubRect x 0_le_x 1x1_le_x 2x2_le_x 3y0_le_y 1y1_le_y 2y2_le_y 3H.6 Example of using unseen definitionsThe state-tactic tuned model is able to utilize the unseen definitions that appear in the proof state. Inthe following example Set.uIoo is a newly defined definition, which is never seen for state-tactictuned model:Input :theorem uIoo_of_le { α: Type*} [Lattice α] {a b : α} (h : a ≤b) : Set.uIooa b = Ioo a b := by[STATE]1 goalzw:Cc:Rα: Type u_1inst: Lattice αab:αh: a≤b⊢uIoo a b = Ioo a b[/STATE]Output :simp [uIoo, h]This leads the state-tactic tuned model to perform similarly well as the file-tuned model on problemsthat only rely on new definitions, and not new theorems (see Figure 6).20I Dataset hosting and maintenanceminiCTX is released on HuggingFace: l3lab/miniCTX , distributed under the Apache 2.0 license.Data extraction tool NTP-TOOLKIT is released on GitHub: cmu-l3/ntp-toolkit , under the MITlicense. We note that the underlying data for the individual splits of miniCTX are also released underthe Apache 2.0 license. We include the licensing information in the dataset repository. We plan toregularly update and maintain the dataset to include examples from new projects.J NTP-T OOLKIT guidelineWe introduced NTP-TOOLKIT in §??. With the NTP-TOOLKIT , users can extract and annotate newtheorems and proofs from any valid Lean project, in miniCTX format. The extracted data can beused either as updates to miniCTX , or as training data (for which we also provide instruction tuningutilities). We also develop a lightweight evaluation framework for easy evaluation on miniCTX .J.1 PreliminaryThe evaluation code relies heavily on the Lean REPL [ 30], which operates within the projectenvironment. Therefore, it is essential that the project builds without any errors. Additionally, theversion of Lean used in the project should match the version supported by the REPL. While the LeanREPL supports versions ≥4.3.0, for the best experience with data extraction and evaluation, werecommend evaluating projects that use Lean version 4.7.0 (all miniCTX theorems are in 4.7.0). Weplan to continuously update NTP-TOOLKIT to support newer versions.J.2 Using the NTP-TOOLKITThe NTP-TOOLKIT is designed to easily extract and annotate theorem proving data from Lean projects,by simply providing the project URL. To use the NTP-TOOLKIT for data extraction, follow thesesteps:1.Installation: Clone the NTP-TOOLKIT repository from GitHub to your local machine, andcheckout the Lean version tag corresponding to the extracted project (e.g., v4.7.0 ). Ensurethat you have the required dependencies installed, as listed in the repository’s READMEfile.2.Configuration: Supply GitHub URL, commit hash, and root modules of your Lean projectin a JSON configuration file. Make sure that your project is using a compatible version ofLean. NTP-TOOLKIT will extract data from all modules imported by the root modules.3.Data extraction: Run the data extraction script provided by the toolkit. Specify the--full_proof_training_data and--premises options to extract miniCTX -style data,which will be stored in an minictx.jsonl output file. Specify the --declarationsoption to additionally extract the premises in each module, for premise retrieval. Thefull_proof_training_data outputs can be additionally used for fine tuning (assumingthe extracted data is dated before the current temporal split of miniCTX ).For detailed commands and additional options, please refer to the README file in the NTP-TOOLKITrepository.J.3 miniCTX EvaluationWe provide a comprehensive evaluation pipeline in the miniCTX-eval repository, supporting bothtactic-prediction and full-proof generation tasks. Users should place the extracted JSONL file fromtheNTP-TOOLKIT into the data folder. To run an evaluation task, execute the task script by specifyingthe dataset path, the corresponding project path, and the path to the Lean REPL. This setup ensuresthat the evaluation is conducted within the correct environment and with the necessary data inputs.21 |
et2T8SKF1O | Library Learning Doesn’t: The Curious Case of theSingle-Use “Library”Ian Berlot-AttwellUniversity of TorontoVector [email protected] RudziczDalhousie UniversityVector [email protected] SiUniversity of TorontoVector [email protected] in Large Language Models (LLMs) have spurred a wave of LLM librarylearning systems for mathematical reasoning. These systems aim to learn a reusablelibrary of tools , such as formal Isabelle lemmas [Paulson, 1994] or Python programsthat are tailored to a family of tasks. Many of these systems are inspired by thehuman structuring of knowledge into reusable and extendable concepts [Ellis et al.,2021], but do current methods actually learn reusable libraries of tools?We study two library learning systems for mathematics which both reported in-creased accuracy: LEGO-Prover [Wang et al., 2024a] and TroVE [Wang et al.,2024b]. We find that function reuse is extremely infrequent on miniF2F [Zhenget al., 2022] and MATH [Hendrycks et al., 2021]. Our followup ablation experi-ments suggest that, rather than reuse, self-correction and self-consistency are theprimary drivers of the observed performance gains. Our code and data are availableathttps://github.com/ikb-a/curious-case .1 IntroductionMathematical progress is made by building with, and building upon, the tools of those who camebefore. Consequently, it is no surprise that there is research interest in developing systems thatcan automatically learn such reusable mathematical tools. Recently, LLMs have enabled new tool-learning methods with improved performance [Wang et al., 2024a,b, Zhang et al., 2024a, Yuan et al.,2024] – but are these systems truly learning generalized, reusable knowledge or is performanceimproved through other mechanisms? In this work, we study two prior systems: LEGO-Prover whichaims to learn reusable formal Isabelle lemmas, and TroVE which aims to learn reusable Pythonfunctions. For both, our analysis of the model’s behaviour reveals that direct reuse is negligible.Furthermore, we perform two ablation studies supporting our position that function reuse plays alimited role in these systems’ improved mathematical reasoning.2 Related WorkLLM library learning, i.e., creating and reusing tools, depends on LLMs’ ability to use tools. Priorevaluations of tool-use (typically assuming tools as REST APIs) [Qu et al., 2024] included real-worldqueries [Yan et al., 2024], dedicated test environments [Li et al., 2023], and metrics ranging fromLLM-as-a-judge [Guo et al., 2024] to tracking task-checkpoint completion [Lu et al., 2024].38th Conference on Neural Information Processing Systems (NeurIPS 2024).Table 1: Lemma reuse in LEGO-Prover released logs. Note that lemma reuse is very uncommon ,andno lemma reused twice . For each split, we report the number of problems solved, the number ofunique lemmas occurring in the PROVER ’s input prompts, the number of lemmas reused verbatimonce, or more than once, and the number of lemmas whose name is reused once, or more than once.A lemma is reused Ntimes if it appears in N+ 1solutions (i.e., the initial use, and then Nreuses).Verbatim reused Name reusedSplit Problems Solved Lemmas in Prompts 1 2+ 1 2+valid+GPT 127 374 0 0 1 0valid+Human 135 265 0 0 1 0test+GPT 111 255 0 0 2 0test+Human 122 339 1 0 2 0In contrast, the evaluation of library learning systems has been limited. Accuracy is the metric ofchoice [Wang et al., 2024a,b, Zhang et al., 2024a, Yuan et al., 2024], but cannot capture the extentor quality of reuse: an excellent library is useless to a weak reasoner, and a powerful reasoner canignore a useless library and derive results from first principles. Prior attempts to evaluate librarylearning have been limited to static measures of individual functions such as cyclomatic complexity[McCabe, 1976, Zhang et al., 2024a] and abstract syntax tree depth [Wang et al., 2024b], or haveanswered specific questions such as the ease of human verification [Wang et al., 2024b], accuracyunder domain transfer [Zhang et al., 2024a, Qian et al., 2023], or performance in the sub-problem ofrefactoring ground truth solutions[Lin et al., 2024].In this study, we evaluate two library learning systems for mathematical reasoning: LEGO-Prover,and TroVE (see Sections 2.1 and 2.2). For a review of library learning systems, see Appendix A.2.1 LEGO-Prover: Purpose & ArchitectureLEGO-Prover consumes a set of proposed theorems to produce corresponding formal Isabelle[Paulson, 1994] proofs. It was evaluated on the miniF2F [Zheng et al., 2022] dataset: each problemwas attempted 100 times, and the system obtained feedback from the Isabelle verifier after eachattempt. LEGO-Prover was designed to perform library learning. Using the term skills in placeoftools , Wang et al. [2024a] claimed that “LEGO-Prover enables LLMs to utilize existing skillsretrieved from the library” and “[m]odular and reusable skills are constantly added to the libraryto enable tackling increasingly intricate mathematical problems.” LEGO-Prover performs librarylearning via two LLM systems: 1) The PROVER which uses the library to create proofs, and 2) theEVOLVER which iteratively refines the library. They communicate through shared databases, such astherequest db which stores proposed lemmas to be proven and added to library.2.2 TroVE: Purpose & ArchitectureTroVE is a “method for inducing a toolbox of reusable functions to use in solving programmatictasks,” designed to receive a stream of word problems without a ground truth or verifier [Wang et al.,2024b]. For each problem, it attempts to produce a Python program that prints the correct solution.TroVE’s mathematical reasoning was evaluated with the MATH dataset Hendrycks et al. [2021].Each problem is considered once: an LLM generates 15 solutions, and the best is selected based onself-consistency (i.e., majority vote) [Wang et al., 2023]. In generation, 5 solutions ignore the libraryand directly generate a program ( SKIPmode), 5 create a reusable helper function for inclusion in thelibrary (C REATE mode), and 5 use a function from the library (I MPORT mode).3 Analysis of LEGO-ProverWe begin by analyzing the publicly released LEGO-Prover evaluation log files1[Wang et al., 2024a].These logs are a subset of the unreleased PROVER logs corresponding to the final attempts on the1https://github.com/wiio12/LEGO-Prover/blob/357672c7751cd0c84aff6bf72a3d1bf97614e81d/result/lego_result.zip20 10 20 30 40 50Solver Attempt02040% of Problems SolvedLEGO-ProverZero reuse ablationFigure 1: LEGO-Prover performance on a subset of the miniF2F validation split. The ablated modelcannot reuse lemmas and performs similarly. The shaded region is one standard deviation, capturingvariations in LLM output and race conditions.successfully solved problems. Note that LEGO-Prover was evaluated on 4 data splits, and learnedover 20,000 lemmas overall [Wang et al., 2024a].We find that only 1,233 lemmas ( ∼6%) are used in the final solving step (i.e., are inputs to thePROVER ). Of these, exactly one lemma is reused by the PROVER , and it is reused once (i.e.,appears verbatim in two solutions). As the PROVER may be adjusting a lemma (e.g., paraphrasing,commenting, etc...) we repeat the analysis, checking only for the lemma’s name. Again, lemma reuseis rare, and no lemma is reused more than once (i.e., no lemma has its name appear in 3 or moresolutions). See Table 1 for details. For an example of verbatim vs. name use, see Appendix B.Given these findings, there are only two possibilities by which LEGO-Prover may be performingreuse: 1) indirect reuse (e.g., the learned tools are useful, reusable exemplars, rather than directlyused in the final solution), or 2) direct reuse occurs in the E VOLVER .Instead, we hypothesize that reuse is not significantly boosting performance. We propose that self-correction [Pan et al., 2023] via the request db is the main mechanism of action. Note that the PROVERpopulates the request db by: 1) adding lemmas that the LLM suggests may be helpful sub-steps, and 2)adding lemmas from solution attempts that Isabelle could not verify. The EVOLVER uses the requestdbto modify existing tools to “aid in solving requests”, and to “resolv[e] decomposed sub-goals”using the library [Wang et al., 2024a]. Thus, the performance gains may be due to a combinationof chain-of-thought [Wei et al., 2022] (through the PROVER ’s proposal of helpful lemmas for theEVOLVER to solve) and self-correction (through the E VOLVER ’s retrying of failed lemmas).To test whether any form of reuse is increasing performance, we ablate LEGO-Prover to removecross-problem sharing: each theorem is solved with its own independent state and databases. E.g.,in place of a global request db , each problem now has its own independent request db . We evaluateon a random size 12 subset of the validation split and use 50 attempts per problem. We perform ourablation using OpenAI’s GPT-4o-mini as the original results were published using now deprecatedversions of GPT-3.5-Turbo; see Appendix E for full details of the ablation. Running 2 trials, we findthat the ablation’s performance is strong, solving only 1 question less than the baseline (see Figure 1).Studying the problems solved by only the baseline, we find that only the simplest of the input lemmasare possibly used (namely a2≥0andax2+bx+c= 0⇒c=−(ax2+bx); see Appendix C). It isunclear as these facts are not treated as lemmas, and are given different justifications. This suggeststhat: 1) the LLM may be too weak if it needs examples of basic facts 2) the LLM struggles at reuseas it does not copy the given, verified, proofs.4 Analysis of TroVEAs TroVE logs were not released, we re-ran TroVE on MATH, achieving accuracy within ±2%(absolute) of reported (see Appendix, Table 3). Note that the TroVE library also learns importstatements; we ignore these in our analysis for two reasons. Firstly, our interest is in whether thesystem learns and reuses non-trivial tools, unlike statements such as “ import math ” and “ fromsympy import symbols ”. Secondly, as TroVE includes the entire library as part of the IMPORTprompt, and import statements are innately simple, it is impossible to determine whether an importstatement is included in the LLM output due to reuse, or the LLM’s innate knowledge.3Table 2: TroVE performance on MATH for the ablation and the baseline. Mean and standard deviationover 5 trials are reported. The variations arise from LLM output. †indicates that mean ablationperformance is significantly strictly higher than the baseline’s, at the Bonferroni-corrected 0.05 level,using a 2-sample 1-sided Welch’s t-test (note, this test assumes approximate normality).Accuracy on MATH test splitModel count geo inte numTroVE Reproduced 0.236 ±0.008 0.058±0.004 0.120 ±0.006 0.258 ±0.007No Reuse Ablation 0.250±0.000†0.050±0.000 0.134±0.014 0.290±0.014†Analyzing the logs, we find that TroVE’s final libraries only contain 15 learned functions, havinglearned functions for only 3 of the 7 MATH subject test splits: counting, number, and pre-algebra.No functions are learned in the algebra, geometry, intermediate algebra, or pre-calculus splits. Of the15 learned functions, only 2 are reused in a correct solution: is_perfect_square(n) is reused inone correct solution and is_prime(num) is reused in two correct solutions.Given 3 successful reuses in 3,201 test questions, we believe that TroVE’s improvements over the base-lines are not due to function reuse. Instead, we believe that ensembling and self-consistency are respon-sible. To test this, we ablate the model by disabling IMPORT mode, but maintaining the 15 solutionattempts: we generate 8 solutions ignoring the library (i.e., SKIPmode) and 7 attempting to create ahelper function (i.e., CREATE mode). As in the original work we use CodeLlama-7b-Instruct-hf[Rozière et al., 2023]; see Appendix F for the full ablation details. Ablating IMPORT mode preventsreuse as the library never appears in the model’s input, thus also preventing library learning of importstatements. As to why this ablation could still be performant, prior work established the benefitsof self-consistency and increased sampling [Brown et al., 2024], and it’s known that library-lesstool-creation can boost performance by forcing abstract reasoning [Yuan et al., 2024].We evaluate our ablated model on the intermediate_algebra test split (reportedly the largest per-formance gain over non-reuse baselines), and the geometry, number, and count test splits. On theintermediate_algebra, number, and count splits, our ablation exceeds the baseline’s performance, withthe improvement being statistically significant on two splits (See Table 2). On only the geometry splitdoes the base model perform slightly better, though the learned libraries only contains import state-ments. From this we can conclude that library learning import statements can be slightly beneficial,but only for certain domains. Typically, TroVE’s library learning degrades its performance.5 ConclusionsIn this study, we find that both TroVE and LEGO-Prover do not directly reuse the tools they learn.Furthermore, the results of our ablations suggest that their performance gains cannot be solelyattributed to indirect reuse either.We intend that this paper be a call for the better understanding of the limitations of current librarylearning systems, and for improved evaluation. We show that accuracy is misleading in isolation: thesystem’s reuse behaviour is paramount, and careful ablation is critical. Both papers studied madesensible claims as the created systems were deliberately designed for library learning and were testedagainst ablations that were not unreasonable – however they also relied heavily on accuracy as ametric instead of directly observing the systems’ use of the library, and both chose ablations that inhindsight were too aggressive. It is clear that, particularly for ablations of library learning systems,minimal changes are preferable, and considerable thought should be put into other possible causes ofimprovements. There is a clear need for a broadly applicable framework for the evaluation of librarylearning specifically; this framework must rely on more than task accuracy and ablations to evaluatelibrary learning and reuse.Finally, considering library learning for mathematics in general: are LLMs capable learning tools andperforming direct, verbatim reuse? Given that the observed improvements do not come from directreuse, would direct reuse actually improve systems for mathematical reasoning, or is it overly brittlemaking soft reuse desirable? These important questions follow from our findings, and should informthe design of future research into library learning systems.46 Limitations & Broader ImpactDue to resource constraints, our ablation studies could be more thorough. Most obviously, we onlystudy two models, and on two datasets. The LEGO-Prover ablation is not ideal, as library learningis disadvantaged by operating on a subset of the questions; this was necessary due to resourceconstraints. Another limitation is that LEGO-Prover’s databases are pre-loaded with the full datasetof problems; consequently, the EVOLVER s are exposed to other problem statements – note, however,that the impact on testing reuse is minimal. Firstly, the PROVER cannot attempt to solve any of theseother problems, thus the request db cannot gain pending lemmas related to other problems. Secondly,under the ablated model, tasks cannot share lemmas – any performance gains would come fromhaving access to other sample problems instead of reuse.While we demonstrate that the performance gains in mathematical reasoning seen by TroVE andLEGO-Prover cannot be attributed to the direct learning and reuse of tools, there is a very importantbutsubtly different question which remains unanswered: whether these systems are at all capable oflibrary learning. It is possible that these systems have the capacity to learn reusable functions andlemmas, but the datasets do not provide the opportunity. Manually inspecting the MATH dataset,our tentative conclusion is that the dataset is intrinsically not amenable to function learning withPython – we suspect the questions are too diverse, with the shared components already being capturedby standard libraries. How this could be more formally demonstrated remains an important openquestion that is beyond the scope of this work.This work has no immediate societal impact, rather, it highlights current limitations and challengesassumptions in this field. However, deploying tool-learning systems may carry a security risk fromexecuting LLM-generated code (we sandboxed TroVE). More generally, library learning systems areself-improving through code generation, an approach that has raised concerns [Zelikman et al., 2023].Unexpected behaviours may develop, thus requiring sandboxing and monitoring, at the very least.Acknowledgments and Disclosure of FundingResources used in preparing this research were provided, in part, by the Province of Ontario,the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/partnerships/ . Generous support was also provided by the MicrosoftAccelerating Foundation Models Research (AFMR) program.We would also like to thank Zhiruo Wang, Zhaoyu Li, William Cunningham, and our anonymousreviewers for their time and conversations that helped in various ways to shape and improve thiswork. Finally, the lead author would like to thank Frank Rudzicz for years of guidance and support,and Xujie Si for both encouraging this work as being of interest to the mathematical reasoningcommunity, and for providing critical resources without which it could not have been possible. Thankyou everyone for helping make this work possible.ReferencesLawrence C. Paulson. Isabelle - A Generic Theorem Prover , volume 828 of Lecture Notes inComputer Science . Springer, 1994. ISBN 3-540-58244-4. doi: 10.1007/BFB0030541. URLhttps://doi.org/10.1007/BFb0030541 .Kevin Ellis, Catherine Wong, Maxwell I. Nye, Mathias Sablé-Meyer, Lucas Morales, Luke B. Hewitt,Luc Cary, Armando Solar-Lezama, and Joshua B. Tenenbaum. DreamCoder: bootstrappinginductive program synthesis with wake-sleep library learning. In Stephen N. Freund and Eran Yahav,editors, PLDI ’21: 42nd ACM SIGPLAN International Conference on Programming LanguageDesign and Implementation, Virtual Event, Canada, June 20-25, 2021 , pages 835–850. ACM,2021. doi: 10.1145/3453483.3454080. URL https://doi.org/10.1145/3453483.3454080 .Haiming Wang, Huajian Xin, Chuanyang Zheng, Zhengying Liu, Qingxing Cao, Yinya Huang, JingXiong, Han Shi, Enze Xie, Jian Yin, Zhenguo Li, and Xiaodan Liang. LEGO-Prover: Neuraltheorem proving with growing libraries. In The Twelfth International Conference on LearningRepresentations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net, 2024a. URLhttps://openreview.net/forum?id=3f5PALef5B .5Zhiruo Wang, Graham Neubig, and Daniel Fried. TroVE: Inducing verifiable and efficient tool-boxes for solving programmatic tasks. In Forty-first International Conference on MachineLearning, ICML 2024, Vienna, Austria, July 21-27, 2024 . OpenReview.net, 2024b. URLhttps://openreview.net/forum?id=DCNCwaMJjI .Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. miniF2F: a cross-system benchmark for formalOlympiad-level mathematics. In The Tenth International Conference on Learning Representations,ICLR 2022, Virtual Event, April 25-29, 2022 . OpenReview.net, 2022. URL https://openreview.net/forum?id=9ZPegFuFTFv .Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, DawnSong, and Jacob Steinhardt. Measuring Mathematical Problem Solving With the MATH Dataset.NeurIPS , 2021.Shaokun Zhang, Jieyu Zhang, Jiale Liu, Linxin Song, Chi Wang, Ranjay Krishna, and QingyunWu. Offline training of language model agents with functions as learnable weights. In Forty-firstInternational Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024 .OpenReview.net, 2024a. URL https://openreview.net/forum?id=2xbkWiEuR1 .Lifan Yuan, Yangyi Chen, Xingyao Wang, Yi Fung, Hao Peng, and Heng Ji. CRAFT: CustomizingLLMs by creating and retrieving from specialized toolsets. In The Twelfth International Conferenceon Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net, 2024.URL https://openreview.net/forum?id=G0vdDSt9XM .Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, andJi-Rong Wen. Tool learning with large language models: A survey. CoRR , abs/2405.17935, 2024.doi: 10.48550/ARXIV .2405.17935. URL https://doi.org/10.48550/arXiv.2405.17935 .Fanjia Yan, Huanzhi Mao, Charlie Cheng-Jie Ji, Tianjun Zhang, Shishir G. Patil, Ion Stoica, andJoseph E. Gonzalez. Berkeley function calling leaderboard. https://gorilla.cs.berkeley.edu/blogs/8_berkeley_function_calling_leaderboard.html , 2024.Minghao Li, Yingxiu Zhao, Bowen Yu, Feifan Song, Hangyu Li, Haiyang Yu, Zhoujun Li, FeiHuang, and Yongbin Li. API-Bank: A comprehensive benchmark for tool-augmented LLMs. InHouda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference onEmpirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10,2023 , pages 3102–3116. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.EMNLP-MAIN.187. URL https://doi.org/10.18653/v1/2023.emnlp-main.187 .Zhicheng Guo, Sijie Cheng, Hao Wang, Shihao Liang, Yujia Qin, Peng Li, Zhiyuan Liu, MaosongSun, and Yang Liu. StableToolBench: Towards stable large-scale benchmarking on tool learning oflarge language models. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Findings ofthe Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting,August 11-16, 2024 , pages 11143–11156. Association for Computational Linguistics, 2024. URLhttps://aclanthology.org/2024.findings-acl.664 .Jiarui Lu, Thomas Holleis, Yizhe Zhang, Bernhard Aumayer, Feng Nan, Felix Bai, Shuang Ma, ShenMa, Mengyu Li, Guoli Yin, Zirui Wang, and Ruoming Pang. ToolSandbox: A stateful, conversa-tional, interactive evaluation benchmark for LLM tool use capabilities. CoRR , abs/2408.04682,2024. doi: 10.48550/ARXIV .2408.04682. URL https://doi.org/10.48550/arXiv.2408.04682 .T.J. McCabe. A complexity measure. IEEE Transactions on Software Engineering , SE-2(4):308–320,1976. doi: 10.1109/TSE.1976.233837.Cheng Qian, Chi Han, Yi Ren Fung, Yujia Qin, Zhiyuan Liu, and Heng Ji. CREATOR: Toolcreation for disentangling abstract and concrete reasoning of large language models. In HoudaBouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for ComputationalLinguistics: EMNLP 2023, Singapore, December 6-10, 2023 , pages 6922–6939. Associationfor Computational Linguistics, 2023. doi: 10.18653/V1/2023.FINDINGS-EMNLP.462. URLhttps://doi.org/10.18653/v1/2023.findings-emnlp.462 .6Xiaohan Lin, Qingxing Cao, Yinya Huang, Zhicheng Yang, Zhengying Liu, Zhenguo Li, andXiaodan Liang. ATG: Benchmarking automated theorem generation for generative languagemodels. In Kevin Duh, Helena Gómez-Adorno, and Steven Bethard, editors, Findings of theAssociation for Computational Linguistics: NAACL 2024, Mexico City, Mexico, June 16-21, 2024 ,pages 4465–4480. Association for Computational Linguistics, 2024. doi: 10.18653/V1/2024.FINDINGS-NAACL.279. URL https://doi.org/10.18653/v1/2024.findings-naacl.279.Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V . Le, Ed H. Chi, Sharan Narang, AakankshaChowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in languagemodels. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali,Rwanda, May 1-5, 2023 . OpenReview.net, 2023. URL https://openreview.net/forum?id=1PL1NIMMrw .Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, and William YangWang. Automatically correcting large language models: Surveying the landscape of diverseself-correction strategies. CoRR , abs/2308.03188, 2023. doi: 10.48550/ARXIV .2308.03188. URLhttps://doi.org/10.48550/arXiv.2308.03188 .Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi,Quoc V . Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large languagemodels. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh,editors, Advances in Neural Information Processing Systems 35: Annual Conference on NeuralInformation Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 -December 9, 2022 , 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html .Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, YossiAdi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton,Manish Bhatt, Cristian Canton-Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez,Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, andGabriel Synnaeve. Code Llama: Open foundation models for code. CoRR , abs/2308.12950, 2023.doi: 10.48550/ARXIV .2308.12950. URL https://doi.org/10.48550/arXiv.2308.12950 .Bradley C. A. Brown, Jordan Juravsky, Ryan Saul Ehrlich, Ronald Clark, Quoc V . Le, ChristopherRé, and Azalia Mirhoseini. Large Language Monkeys: Scaling inference compute with repeatedsampling. CoRR , abs/2407.21787, 2024. doi: 10.48550/ARXIV .2407.21787. URL https://doi.org/10.48550/arXiv.2407.21787 .Eric Zelikman, Eliana Lorch, Lester Mackey, and Adam Tauman Kalai. Self-taught optimizer (STOP):Recursively self-improving code generation. CoRR , abs/2310.02304, 2023. doi: 10.48550/ARXIV .2310.02304. URL https://doi.org/10.48550/arXiv.2310.02304 .Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large language models astool makers. In The Twelfth International Conference on Learning Representations, ICLR 2024,Vienna, Austria, May 7-11, 2024 . OpenReview.net, 2024. URL https://openreview.net/forum?id=qV83K9d5WB .Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, andAnima Anandkumar. V oyager: An open-ended embodied agent with large language models. Trans.Mach. Learn. Res. , 2024, 2024c. URL https://openreview.net/forum?id=ehfRiF0R3a .Weihao Tan, Wentao Zhang, Xinrun Xu, Haochong Xia, Ziluo Ding, Boyu Li, Bohan Zhou, JunpengYue, Jiechuan Jiang, Yewen Li, Ruyi An, Molei Qin, Chuqiao Zong, Longtao Zheng, YujieWu, Xiaoqiang Chai, Yifei Bi, Tianbao Xie, Pengjie Gu, Xiyun Li, Ceyao Zhang, Long Tian,Chaojie Wang, Xinrun Wang, Börje F. Karlsson, Bo An, Shuicheng Yan, and Zongqing Lu.Cradle: Empowering foundation agents towards general computer control, 2024. URL https://arxiv.org/abs/2403.03186 .Zhiyong Wu, Chengcheng Han, Zichen Ding, Zhenmin Weng, Zhoumianze Liu, Shunyu Yao, TaoYu, and Lingpeng Kong. OS-Copilot: Towards generalist computer agents with self-improvement.CoRR , abs/2402.07456, 2024. doi: 10.48550/ARXIV .2402.07456. URL https://doi.org/10.48550/arXiv.2402.07456 .7Haiteng Zhao, Chang Ma, Guoyin Wang, Jing Su, Lingpeng Kong, Jingjing Xu, Zhi-Hong Deng,and Hongxia Yang. Empowering large language model agents through action learning. CoRR ,abs/2402.15809, 2024. doi: 10.48550/ARXIV .2402.15809. URL https://doi.org/10.48550/arXiv.2402.15809 .Zhenfang Chen, Rui Sun, Wenjun Liu, Yining Hong, and Chuang Gan. GENOME: Generativeneuro-symbolic visual reasoning by growing and reusing modules. In The Twelfth InternationalConference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenRe-view.net, 2024. URL https://openreview.net/forum?id=MNShbDSxKH .Chun-Yi Kuan, Chih-Kai Yang, Wei-Ping Huang, Ke-Han Lu, and Hung-yi Lee. Speech-Copilot:Leveraging large language models for speech processing via task decomposition, modularization,and program generation. CoRR , abs/2407.09886, 2024. doi: 10.48550/ARXIV .2407.09886. URLhttps://doi.org/10.48550/arXiv.2407.09886 .Min Zhang, Jianfeng He, Shuo Lei, Murong Yue, Linhan Wang, and Chang-Tien Lu. Can LLMfind the green circle? investigation and human-guided tool manipulation for compositional gen-eralization. In IEEE International Conference on Acoustics, Speech and Signal Processing,ICASSP 2024, Seoul, Republic of Korea, April 14-19, 2024 , pages 11996–12000. IEEE, 2024b.doi: 10.1109/ICASSP48485.2024.10446355. URL https://doi.org/10.1109/ICASSP48485.2024.10446355 .Gabriel Grand, Lionel Wong, Matthew Bowers, Theo X. Olausson, Muxin Liu, Joshua B. Tenenbaum,and Jacob Andreas. LILO: Learning interpretable libraries by compressing and documentingcode. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna,Austria, May 7-11, 2024 . OpenReview.net, 2024. URL https://openreview.net/forum?id=TqYbAWKMIe .Larry A. Rendell. Toward a unified approach for conceptual knowledge acquisition. AI Mag. , 4(4):19–27, 1983. URL https://ojs.aaai.org/index.php/aimagazine/article/view/413 .Ray J. Solomonoff. A formal theory of inductive inference. Part I. Inf. Control. , 7(1):1–22, 1964.doi: 10.1016/S0019-9958(64)90223-2. URL https://doi.org/10.1016/S0019-9958(64)90223-2 .Yoshua Bengio and Nikolay Malkin. Machine learning and information theory concepts towardsan AI mathematician. CoRR , abs/2403.04571, 2024. doi: 10.48550/ARXIV .2403.04571. URLhttps://doi.org/10.48550/arXiv.2403.04571 .Zhaoyu Li, Jialiang Sun, Logan Murphy, Qidong Su, Zenan Li, Xian Zhang, Kaiyu Yang, andXujie Si. A survey on deep learning for theorem proving. CoRR , abs/2404.09939, 2024. doi:10.48550/ARXIV .2404.09939. URL https://doi.org/10.48550/arXiv.2404.09939 .Jin Peng Zhou, Yuhuai Wu, Qiyang Li, and Roger Baker Grosse. REFACTOR: Learning to extracttheorems from proofs. In The Twelfth International Conference on Learning Representations, ICLR2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net, 2024. URL https://openreview.net/forum?id=fgKjiVrm6u .Elias Stengel-Eskin, Archiki Prasad, and Mohit Bansal. ReGAL: Refactoring programs to discovergeneralizable abstractions. In Forty-first International Conference on Machine Learning, ICML2024, Vienna, Austria, July 21-27, 2024 . OpenReview.net, 2024. URL https://openreview.net/forum?id=FovMAzXUpj .AppendixA Extended Related WorkCurrent LLM-based library learning systems tend to fall into two main camps: systems designed forgeneral word problem solving, typically including mathematical reasoning and typically generatingPython functions (e.g., Cai et al. [2024], Yuan et al. [2024], Wang et al. [2024b]), and agentic systems8designed to interact with a specific, complex environment (e.g., Wang et al. [2024c], Tan et al. [2024],Wu et al. [2024], Zhang et al. [2024a], Zhao et al. [2024]).Generally, such systems access the library via in-context learning (ICL); some place the entire libraryin the context [Wang et al., 2024b, Zhang et al., 2024a], whereas others first use a semantic-similarityretrieval step to allow for larger libraries. Yuan et al. [2024] in particular uses a retrieval system thatincorporates a LLM-generated description of the tool to be retrieved; LEGO-Prover behaves similarlyby having several phases where the system alternates between proposing useful tools to be added tothe library, attempting to create these tools, and possibly retrieving these tools.These systems are typically bottom-up (iteratively developing a library over time), though a handful oftop-down approaches exist. These top-down approaches instead decompose a high-level descriptionof the tasks into reusable modules [Chen et al., 2024, Kuan et al., 2024, Zhao et al., 2024, Zhanget al., 2024b]; to the best of the authors’ knowledge this approach is yet to be applied to mathematicalreasoning.These LLM-based systems typically attempt to produce reusable tools via ICL: prompting the LLMto generate “reusable functions”. In comparison, an older family of library learning work (e.g.,Dreamcoder [Ellis et al., 2021] and LILO [Grand et al., 2024]) instead frame library learning as amatter of compression. In principle a function that compresses a set of solutions must be broadlyapplicable, and in practice a high-level function reduces the symbolic search space for programinduction. More generally, compression has been of long standing interest in the field of artificialintelligence. Rendell [1983] defined conceptual knowledge as the ability to compress a raw spaceof possibilities into useful classes, and there are long-standing connections between compressionand inductive reasoning. Framing inductive reasoning as the task of capturing the underlying patternin a provided substring for the purposes of prediction, Solomonoff [1964] formalized induction asBayesian reasoning under a prior favouring low Kolmogorov complexity. In other words, formalizingthe concept of Occam’s razor – that the simplest solution, that which can be highly compressed into ashort description, is more likely. For a recent treatise on the value of compression, specifically withinthe area of mathematical reasoning, see Bengio and Malkin [2024].Turing our attention to mathematics, deep learning in general and LLMs in particular have foundbroad application in theorem proving [Li et al., 2024]. Considering library learning specifically, avery closely related branch of work considers the problem of refactoring a collection of ground-truth solutions into reusable components. ATG [Lin et al., 2024] and REFACTOR [Zhou et al.,2024] train models to extract reusable formal lemmas from a provided set of ground-truth formalproofs. Similarly, ReGAL [Stengel-Eskin et al., 2024] refactors ground-truth Python solutions forthe MATH dataset into a reusable library. These systems are valuable and may represent a betterfirst step towards reusable knowledge, but their dependence on ground-truth solutions prevents themfrom being conventional library learning systems. In comparison, LEGO-Prover attempts to learnreusable lemmas and produce formal proofs from only formal problem statements, and informalnatural language proofs – furthermore, Wang et al. [2024a] demonstrated that the latter could beautomatically generated by ChatGPT with only a small degradation in system performance.B Example of Verbatim Use versus Name Use by LEGO-ProverFigure 2 is an example of verbatim use where an input lemma to the PROVER is used verbatim in theoutputted solution.In contrast, Figure 3 is an example of name use, where the name of the input lemma appears in thesolution. In this case, the contents of the lemma are similar, but have significant differences. Notethat an instance of verbatim use would, necessarily, also be an instance of name use.A lemma is reused Ntimes if it is used N+ 1times – i.e., if the lemma is used in N+ 1solutions.C LEGO-Prover Solutions not Found by Reuse-Free AblationWe performed two runs of the original model, in both cases it outperformed the ablation by solvingone additional problem. We present the found proofs and input lemmas in Figures 4 and 6. Forimproved legibility, we also provide a typeset approximation in Figures 5 and 7. In addition to the9PROVER input PROVER output[... System prompt ...]Here some useful skill for reference:###### useful skill 1: ######```isabellelemma step3: fixes x y ::real shows "x^2 + y^2 + (x^2 * y^2) + 1 \<ge> 1"proof - have "x^2 + y^2 + (x^2 * y^2) + 1 - 1 = x^2 + y^2 + (x^2 * y^2)" by simp then have "x^2 + y^2 + (x^2 * y^2) + 1 - 1 \<ge> 0" by simp then show ?thesis by simpqed```###### useful skill 2: ######```isabellelemma algebra_sqineq_2xyxysq: fixes x y :: real shows "x^2 + y^2 \<ge> 2 * x * y"proof - have "(x - y)^2 \<ge> 0" by simp then have "x^2 - 2 * x * y + y^2 \<ge> 0" by (simp add: algebra_simps power2_diff) then have "x^2 + y^2 \<ge> 2 * x * y" by simp then show ?thesis by simpqed```[... Additional skills and example problems ...]## ProblemsWhat is the least possible value of $(xy-1)^2+(x+y)^2$ for real numbers $x$ and $y$? $\textbf{(A)} ~0\qquad\textbf{(B)} ~\frac{1}{4}\qquad\textbf{(C)} ~\frac{1}{2} \qquad\textbf{(D)} ~1 \qquad\textbf{(E)} Show that it is \textbf{(D)} ~1.## Informal proofStep 1: Expand the expression $(xy-1)^2+(x+y)^2$ to obtain $x^2+2xy+y^2+x^2y^2-2xy+1$.Step 2: Simplify the expression to $x^2+y^2+x^2y^2+1$.Step 3: Observe that all squares are nonnegative, so the minimum value of the expression is 1.Step 4: Show that the minimum value of 1 can be achieved when $x=y=0$.## Formal statement```isabelletheorem amc12a_2021_p7: fixes x y ::real shows "1 \<le> ((x * y) - 1)^2 + (x + y)^2" apply (auto simp:algebra_simps power2_eq_square)```## Proof```isabelletheory amc12a_2021_p7 imports Complex_Mainbegin(* lemma 1. A lemma that shows the distribution properties of real numbers will be helpful for step 1, which requires this lemma to expand the equation *)lemma step1: fixes x y :: real shows "(x * y - 1)^2 + (x + y)^2 = x^2 + 2 * x * y + y^2 + x^2 * y^2 - 2 * x * y + 1" by (simp add: power2_eq_square algebra_simps)(* lemma 2. A lemma that shows the properties of quadratic equation will be useful for step 2, which requires this lemma to simplify the equation *)lemma step2: fixes x y ::real shows "x^2 + 2 * x * y + y^2 + x^2 * y^2 - 2 * x * y + 1 = x^2 + y^2 + x^2 * y^2 + 1"proof - have "x^2 + 2 * x * y + y^2 + (x^2 * y^2) - 2 * x * y + 1 = x^2 + y^2 + (x^2 * y^2) + 1" by (simp add: algebra_simps) then show ?thesis by simpqed(* lemma 3. A lemma that shows the properties of quadratic equation will be useful for step 3, which requires this lemma to conclude the equation *)lemma step3: fixes x y ::real shows "x^2 + y^2 + (x^2 * y^2) + 1 \<ge> 1"proof - have "x^2 + y^2 + (x^2 * y^2) + 1 - 1 = x^2 + y^2 + (x^2 * y^2)" by simp then have "x^2 + y^2 + (x^2 * y^2) + 1 - 1 \<ge> 0" by simp then show ?thesis by simpqed(* formal statement copy from the input *)theorem amc12a_2021_p7: fixes x y ::real shows "1 \<le> ((x * y) - 1)^2 + (x + y)^2" apply (auto simp:algebra_simps power2_eq_square) using step1 step2 step3 by autoend```This is the proof for the given problem.Figure 2: Example of verbatim reuse by the LEGO-Prover. The input lemma is reproduced exactly inthe P ROVER ’s output.10PROVER input PROVER output[... System prompt ...]Here some useful skill for reference:###### useful skill 1: ######```isabelletheorem algebra_2rootspoly_apatapbeq2asqp2ab: fixes a b :: complex shows "(a+a) * (a+b) = 2 * a^2 + 2 * (a*b)"proof - have "(a+a) * (a+b) = (a+a)*a + (a+a)*b" by (simp add: distrib_left) also have "... = 2*a^2 + 2*a*b" by (metis calculation distrib_left mult.commute mult_2 numeral_2_eq_2 power2_eq_square) finally show ?thesis by simpqed```###### useful skill 2: ######```isabellelemma multi_distrib_complex: fixes a b c d :: complex shows "(a + b) * (c + d) = a * c + a * d + b * c + b * d" by (simp add: distrib_left distrib_right)```###### useful skill 3: ######```isabellelemma algebra_1_generalized : fixes a b :: nat shows "a^2 + 2*a*b + b^2 = (a+b)^2" by (smt (z3) ab_semigroup_add_class.add_ac(1) add.assoc add.commute add.left_commute power2_sum)```[... Additional skills and example problems ...]## ProblemsSimplify $(a-1)(a+1)(a+2) - (a-2)(a+1).$ Show that it is a^3 + a^2.## Informal proofStep 1: Expand the expression $(a-1)(a+1)(a+2) - (a-2)(a+1)$ by multiplying the binomials.Step 2: Simplify the expression to obtain $(a^2-1)(a+2)-(a-2)(a+1)$.Step 3: Expand further to get $(a^3 + 2a^2 - a - 2) - (a^2 -a -2)$.Step 4: Combine like terms to simplify the expression to $a^3 + a^2$.Step 5: Conclude that the simplified expression is equal to $a^3 + a^2$.## Formal statement```isabelletheorem mathd_algebra_568: fixes a :: real shows "(a-1) * (a+1) * (a+2) - (a-2) * (a+1) = a^3 + a^2"```## Proof```isabelletheory mathd_algebra_568 imports Complex_Mainbegin(* Lemma 1: Distributive property of multiplication to simplify the main equation *)lemma algebra_1_generalized : fixes a b c :: real shows "(a - b) * (a + c) = a^2 + a * c - b * a - b * c" by (simp add: algebra_simps)(* formal statement copy from the input *)theorem mathd_algebra_568: fixes a :: real shows "(a-1) * (a+1) * (a+2) - (a-2) * (a+1) = a^3 + a^2"proof - (* Step 1: Expand the expression $(a-1)(a+1)(a+2) - (a-2)(a+1)$ by multiplying the binomials. *) have c0: "(a-1) * (a+1) * (a+2) - (a-2) * (a+1) = (a^2 - 1) * (a+2) - (a-2) * (a+1)" by (simp add: algebra_simps) (* Step 2: Simplify the expression to obtain $(a^2-1)(a+2)-(a-2)(a+1)$. *) have c1: "(a^2 - 1) * (a+2) - (a-2) * (a+1) = (a^3 + 2*a^2 - a - 2) - (a^2 - a - 2)" using algebra_1_generalized[of "a^2" 1 2] algebra_1_generalized[of "a" 2 1] by (simp add: algebra_simps) (* Step 3: Expand further to get $(a^3 + 2a^2 - a - 2) - (a^2 -a -2)$ *) have c2: "(a^3 + 2*a^2 - a - 2) - (a^2 - a - 2) = a^3 + a^2" by simp (* Step 4: Combine like terms to simplify the expression to $a^3 + a^2$. *) show ?thesis using c0 c1 c2 by simpqedend```Figure 3: Example of name reuse by the LEGO-Prover. Only the name of the input lemma needs tobe reproduced exactly in the output. In this case, the body of the input lemma has been significantlyadjusted. Note Figure 2 is also an example of name reuse, as the input lemma’s name appears in thesolution (in that particular case, along with the rest of the lemma).11Table 3: TroVE performance on MATH. For comparison with Wang et al. [2024b], all reportednumbers are best over 5 trials. Variation between trials arises from the stochastic sampling of theunderlying LLM.Best-of-5 accuracy on MATH test splitModel count geo inte numTroVE, Reported 0.26 0.08 0.11 0.25TroVE Reproduced (ours) 0.24 0.06 0.13 0.27TroVE, Reported C REATE -only ablation 0.14 0.06 0.05 0.16No Reuse Ablation (ours) 0.25 0.05 0.15 0.31Table 4: LEGO-Prover hyperparametersHyperparameter valueSolution attempts per problem (num_attempts) 50Number of P ROVER processes (num_prover) 3Number of E VOLVER processes (num_evolver) 8Temperature (temperature) 0.7observations in the main paper, it should be noted that there is redundancy among the retrievedlemmas – deduplication and retrieval of lemmas remain areas for improvement.D TroVE MATH reproductionSee table 3 for the best-of-five accuracies reported by TroVE, and achieved by our reproduction oftheir results.E LEGO-Prover Hyperparameters and Experiment DetailsAt the time of publication, the LEGO-Prover logs released by Wang et al. [2024a] andused in our analysis are available at https://github.com/wiio12/LEGO-Prover/blob/357672c7751cd0c84aff6bf72a3d1bf97614e81d/result/lego_result.zip .LEGO-Prover is built on OpenAI’s GPT-3.5-Turbo and the 2022 release of the Isabelleproof assistant, specifically using its abilities as a proof verifier. Note that due tothe deprecation of the LLMs originally used by LEGO-Prover ( gpt-3.5-turbo-0301 ,gpt-3.5-turbo-0613 , gpt-3.5-turbo-16k , gpt-3.5-turbo-16k-0613 ,gpt-3.5-turbo-16k ,gpt-3.5-turbo-16k-0613 ), we upgrade the underlying LLM fromGPT-3.5-Turbo to GPT-4o-mini.We use the default LEGO-Prover hyperparameters, except for the number of retry attempts which,following Wang et al. [2024a]’s ablations, we reduce to 50. See Table 4 for details.Note that the LEGO-Prover is initialized with a seed library of tools, and our ablation retains thisinitialization. The core claim we aim to disprove is that the model’s performance gains predominantlycome from reusable lemmas, and our ablation prevents any cross-task reuse.The specific 12 problems chosen uniformly at random for our ablation studyare: aime_1991_p6.json, algebra_2varlineareq_xpeeq7_2xpeeq3_eeq11_xeqn4.json,amc12a_2008_p15.json, amc12a_2013_p8.json, amc12a_2021_p7.json, amc12b_2002_p3.json,amc12b_2003_p9.json, mathd_algebra_31.json, mathd_algebra_109.json, mathd_algebra_116.json,mathd_numbertheory_149.json, and numbertheory_sqmod4in01d.jsonNote that LEGO-Prover requires both the problem statement, and an informal natural lan-guage proof for conversion. We use the same human-generated informal proofs as Wanget al. [2024a]. The authors bundled said informal proofs inside of the miniF2F .json fileslisted above, available for download from https://github.com/wiio12/LEGO-Prover/tree/12Input Lemmas Final Proof###### useful skill 1: ######lemma quadratic_root_substitution: fixes a b c k x :: real assumes "a * x^2 + b * x + c = 0" shows "c = - (a * x^2 + b * x)"proof - obtain lhs where eq: "lhs = a * x^2 + b * x + c" using assms by simp have "lhs = 0" using assms by (metis eq) thus ?thesis by (simp add: eq)qed###### useful skill 2: ######lemma sqrt_limit_general: fixes x :: real assumes "n > 0" "k > 0" "k = sqrt(x + k)" shows "x = k^2 - k"proof - have "k^2 = x + k" using assms(3) by (smt (verit) assms(2) less_eq_real_def real_sqrt_le_iff real_sqrt_pow2_iff real_sqrt_zero) then show ?thesis by autoqed###### useful skill 3: ######lemma sqrt_difference: fixes a b :: real assumes "a >= 0" "b >= 0" shows "sqrt a - sqrt b = (a - b) / (sqrt a + sqrt b)"proof - have "sqrt a - sqrt b = (sqrt a + sqrt b) * (sqrt a - sqrt b) / (sqrt a + sqrt b)" by (metis add.left_cancel add_cancel_right_right add_le_same_cancel2 assms(1) assms(2) cancel_comm_monoid_add_class.diff_cancel dual_order.eq_iff eq_divide_eq mult.commute nonzero_mult_div_cancel_right real_sqrt_eq_zero_cancel_iff real_sqrt_ge_0_iff real_sqrt_le_0_iff real_sqrt_zero) also have "... = (a - b) / (sqrt a + sqrt b)" by (smt (verit) assms(1) assms(2) divide_cancel_right real_sqrt_le_mono real_sqrt_less_mono real_sqrt_mult_self square_diff_square_factored) finally show ?thesis by (simp add: field_simps)qed###### useful skill 4: ######lemma sqrt_limit_generalized: fixes x :: real assumes "k = sqrt(x + k)" "k > 0" shows "x = k^2 - k"proof - have "k^2 = x + k" using assms by (metis add.commute real_sqrt_ge_0_iff real_sqrt_ge_zero real_sqrt_gt_0_iff real_sqrt_pow2 verit_comp_simplify1(3) zero_le_mult_iff zero_le_square) hence "x = k^2 - k" by simp thus ?thesis by autoqedtheory mathd_algebra_116 imports Complex_Mainbegin(* lemma 1: A lemma to calculate the square of a difference *)lemma square_of_difference: fixes a b :: real shows "(a - b)^2 = a^2 - 2 * a * b + b^2"proof - have "a - b = a + -b" by simp then show ?thesis by sosqed(* formal statement copy from the input *)theorem mathd_algebra_116: fixes k x :: real assumes h0 : "x = (13 - sqrt 131) / 4" and h1 : "2 * x^2 - 13 * x + k = 0" shows "k = 19/4"proof - (* Step 1: Substitute x into the quadratic equation to express it in terms of k *) have "2 * x^2 - 13 * x + k = 0" using h1 by simp then have "k = 13 * x - 2 * x^2" by simp (* Step 2: Calculate 2 * x^2 *) have "x^2 = ((13 - sqrt 131) / 4)^2" using h0 by blast then have "x^2 = (13 - sqrt 131)^2 / 16" by (simp add: power2_eq_square) then have "2 * x^2 = 2 * (13 - sqrt 131)^2 / 16" by simp then have "2 * x^2 = (13 - sqrt 131)^2 / 8" by simp (* Step 3: Expand (13 - sqrt 131)^2 *) have "(13 - sqrt 131)^2 = 169 - 26 * sqrt 131 + 131" using square_of_difference[of 13 "sqrt 131"] by simp then have "(13 - sqrt 131)^2 = 300 - 26 * sqrt 131" by simp (* Step 4: Substitute back to find 2 * x^2 *) have "2 * x^2 = (300 - 26 * sqrt 131) / 8" by (metis \<open>(13 - sqrt 131)\<^sup>2 = 300 - 26 * sqrt 131\<close> \<open>2 * x\<^sup>2 = (13 - sqrt 131)\<^sup>2 / 8\<close>) (* Step 5: Calculate 13 * x *) have "13 * x = 13 * ((13 - sqrt 131) / 4)" using h0 by simp then have "13 * x = (169 - 13 * sqrt 131) / 4" by simp (* Step 6: Adjust the expression for k *) have "k = (169 - 13 * sqrt 131) / 4 - (300 - 26 * sqrt 131) / 8" using `k = 13 * x - 2 * x^2` by (metis \<open>13 * x = (169 - 13 * sqrt 131) / 4\<close> \<open>2 * x\<^sup>2 = (300 - 26 * sqrt 131) / 8\<close> diff_add_cancel diff_diff_eq diff_diff_eq2 diff_zero divide_numeral_1 eq_diff_eq eq_iff_diff_eq_0 frac_eq_eq mult.commute mult.left_commute mult_2_right mult_eq_0_iff mult_numeral_1_right nonzero_mult_div_cancel_right power2_eq_square right_diff_distrib' zero_neq_numeral) (* Step 7: Combine the fractions *) have "k = (169 - 13 * sqrt 131) / 4 - (300 - 26 * sqrt 131) / 8" by (metis \<open>k = (169 - 13 * sqrt 131) / 4 - (300 - 26 * sqrt 131) / 8\<close>) then have "k = (2 * (169 - 13 * sqrt 131) - (300 - 26 * sqrt 131)) / 8" by (simp add: field_simps) (* Step 8: Simplify the combined expression *) have "2 * (169 - 13 * sqrt 131) = 338 - 26 * sqrt 131" by simp then have "k = (338 - 26 * sqrt 131 - 300 + 26 * sqrt 131) / 8" by (smt (verit) \<open>k = (2 * (169 - 13 * sqrt 131) - (300 - 26 * sqrt 131)) / 8\<close>) then have "k = (338 - 300) / 8" by simp thus ?thesis by simpqedendFigure 4: LEGO-Prover input lemmas (left) and found proof (right). The proof proves that ∀k∈R:ifx= (13−√131)/4and2x2−13x+k= 0thenk= 19/4. See Figure 5 for a typeset approximation,and commentary of LEGO-Prover’s use (and failure to use) the input lemmas.13Input Lemmas Output ProofUseful skill 1:∀a, b, c, k, x ∈R:ax2+bx+c= 0⇒c=−(ax2+bx)Demonstrates: assms, simp, (metis eq),(simp add: eq)Useful skill 2:∀x∈R:n >0, k > 0k=√x+k⇒x=k2−kDemonstrates: assms, (smt (verit)assms(2) less eqrealdef real sqrtleiffrealsqrtpow2 iff real sqrtzero), autoUseful skill 3:∀a, b∈R:√a−√b= (a−b)/(√a+√b)Demonstrates: (metis add.left canceladdcancel right right add lesame cancel2assms(1) assms(2) can-celcomm monoid addclass.diff canceldual order.eq iff eq divide eqmult.commute nonzero mult divcancel rightrealsqrteqzero cancel iff real sqrtge0iffrealsqrtle0iff real sqrtzero),(smt (verit) assms(1) assms(2) di-vide cancel right real sqrtlemonorealsqrtlessmono real sqrtmult selfsquare diffsquare factored), (simp add:field simps)Useful skill 4:∀x∈R:k=√x+k, k > 0⇒x=k2−kDemonstrates: assms, (metis add.commuterealsqrtge0iff real sqrtgezerorealsqrtgt0iff real sqrtpow2verit comp simplify1(3) zero lemult iffzero lesquare), simp, autoDefine Lemma square ofdifference:∀a, b∈R: (a−b)2=a2−2ab+b2Proof of Lemma:a−b=a+ (−b)Method: simpLemma square ofdifference follows using method sosDefine theorem mathd algebra 116:∀k∈R:Assume x= (13 −√131)/4Assume 2 x2−13x+k= 0Then: k= 19/4Proof:2x2−13x+k= 0k= 13x−2x2Method: simpx2= ((13 −√131)/4)2Method: blastx2= (13 −√131)2/16Method: (simp add: power2 eqsquare)2x2= 2(13 −√131)2/162x2= (13 −√131)2/8Method: simp(13−√131)2= 169 −26√131 + 131Method: using lemma square ofdifference and sos(13−√131)2= 300 −26√131Method: simp2x2= (300 −26√131)/8Method: (metis \<open>(13 - sqrt 131) \<ˆsup>2 = 300 - 26 * sqrt131\<close>\<open>2 * x\<ˆsup>2 = (13 - sqrt 131) \<ˆsup>2 / 8\<close>)13x= 13((13 −√131)/4)13x= (169 −13√131)/4Method: simpk= (169 −13√131)/4−(300−26√131)/8Method: (metis \<open>13 * x = (169 - 13 * sqrt 131) / 4 \<close>\<open>2* x\<ˆsup>2 = (300 - 26 * sqrt 131) / 8 \<close>diffaddcanceldiffdiffeq diff diffeq2 diff zero divide numeral 1 eq diffeq eq iffdiffeq0fraceqeq mult.commute mult.left commute mult 2right mult eq0iffmult numeral 1right nonzero mult divcancel right power2 eqsquareright diffdistrib’ zero neqnumeral)k= (169 −13√131)/4−(300−26√131)/8Method: (metis \<open>k = (169 - 13 * sqrt 131) / 4 - (300 - 26 * sqrt 131) /8\<close>)k= (2(169 −13√131)−(300−26√131))/8Method: (simp add: field simps)2(169−13√131) = 338 −26√131Method: simpk= (338 −26√131−300 + 26√131)/8Method: (smt (verit) \<open>k = (2 * (169 - 13 * sqrt 131) - (300 - 26 * sqrt131)) / 8 \<close>)k= (338 −300)/8Method: simp. Theorem follows.1Figure 5: A typset approximation of LEGO-Prover input lemmas (left) and found proof (right). Theproof proves that ∀k∈R:ifx= (13 −√131)/4and 2x2−13x+k= 0 thenk= 19/4. SeeFigure 4 for the original Isabelle lemmas and proof. Note that skill 1 may have been indirectly usedin rewriting 2x2−13x+k= 0ask= 13x−2x2. Skills 2, 3 and 4 do not seem to be used directlyor indirectly; furthermore skills 2 and 4 are the same lemma. Their use of smt and metis may haveencouraged the prover to use these same tools.14Input Lemmas Final ProofHere some useful skill for reference:###### useful skill 1: ######lemma nonneg_square: fixes a :: real shows "a^2 \<ge> 0"proof - have "a^2 = a * a" by (simp add: power2_eq_square) thus ?thesis by (simp add: mult_nonneg_nonneg)qed###### useful skill 2: ######lemma square_non_negative: fixes a :: real shows "a^2 >= 0"proof - have "a^2 = a * a" by sos then show ?thesis by autoqed###### useful skill 3: ######lemma square_sum_nonneg: fixes x y :: real shows "x^2 + y^2 \<ge> 0" by (simp)###### useful skill 4: ######lemma min_sum_of_squares: fixes x y :: real shows "x^2 + y^2 >= 0"proof - have "x^2 >= 0" by auto moreover have "y^2 >= 0" by auto ultimately have "x^2 + y^2 >= 0 + 0" by (simp add: add_nonneg_nonneg) thus "x^2 + y^2 >= 0" by simpqedtheory amc12a_2021_p7 imports Complex_Main begin(* Lemma: The expression can be expressed in terms of squares to show its non-negativity. *)lemma expression_nonneg: fixes x y :: real shows "((x * y) - 1)^2 + (x + y)^2 >= 1"proof - (* Step 1: Define the expression we want to minimize as \( E = (xy - 1)^2 + (x + y)^2 \). *) let ?E = "((x * y) - 1)^2 + (x + y)^2" (* Step 2: Expand the expression \( E \): \( (xy - 1)^2 + (x + y)^2 \) *) have "((x * y) - 1)^2 = (x^2 * y^2 - 2 * x * y + 1)" by sos have "(x + y)^2 = x^2 + 2 * x * y + y^2" by sos (* This simplifies to: \( E = x^2y^2 + x^2 + y^2 + 1 \) *) have "?E = (x^2 * y^2 - 2 * x * y + 1) + (x^2 + 2 * x * y + y^2)" by sos (* Thus, we have: \( E = x^2 * y^2 + x^2 + y^2 + 1 \) *) have "?E = x^2 * y^2 + x^2 + y^2 + 1" by sos (* Step 3: Recognize that \( x^2y^2 \), \( x^2 \), and \( y^2 \) are all non-negative. *) have "x^2 * y^2 >= 0" by auto have "x^2 >= 0" by auto have "y^2 >= 0" by auto (* Therefore, the minimum value of \( E \) occurs when each of these terms is minimized. *) (* The minimum occurs when \( x = 0 \) and \( y = 0 \). *) have "?E >= 1" by sos (* Step 4: Set \( x = 0 \) and \( y = 0 \) to achieve the minimum: \( E(0, 0) = 1 \). *) have "?E = 1" when "x = 0" and "y = 0" using that by auto (* Step 5: Conclude that the least possible value of \( E \) is \( 1 \). *) thus ?thesis by sosqed(* formal statement copy from the input *)theorem amc12a_2021_p7: fixes x y ::real shows "1 \<le> ((x * y) - 1)^2 + (x + y)^2" apply (auto simp:algebra_simps power2_eq_square) using expression_nonneg by sos endFigure 6: LEGO-Prover input lemmas (left) and found proof (right). The proof proves that ∀x, y∈R: 1≤(xy−1)2+ (x+y)2. See Figure 7 for a typeset approximation, and commentary ofLEGO-Prover’s use (and failure to use) the input lemmas.15Input Lemmas Output ProofUseful skill 1: ∀a∈R:a2≥0Demonstrates: (simp add: power2 eqsquare), (simp add:mult nonneg nonneg)Useful skill 2: ∀a∈R:a2≥0Demonstrates: sos, autoUseful skill 3: ∀x, y∈R:x2+y2≥0Demonstrates: simpUseful skill 4: ∀x, y∈R:x2+y2≥0Demonstrates: auto, (simp add: add nonneg nonneg), simpDefine Lemma expression nonneg:∀x, y∈R: (xy−1)2+ (x+y)2≥1Proof of Lemma:LetE= (xy−1)2+ (x+y)2(xy−1)2= (x2y2−2xy+ 1)(x+y)2=x2+ 2xy+y2E= (x2y2−2xy+ 1) + ( x2+ 2xy+y2)E=x2y2+x2+y2+ 1Method: sosx2y2≥0x2≥0y2≥0Method: autoE≥1Method: sosE= 1 when x, y= 0Method: autoLemma expression nonneg follows using method sosDefine theorem amc12a 2021 p7:∀x, y∈R: 1≤(xy−1)2+ (x+y)2Proof:Follows Lemma.Method: sos, applying (auto simp:algebra simpspower2 eqsquare)2Figure 7: Typeset approximation of LEGO-Prover input lemmas (left) and found proof (right).See Figure 6 for the original Isabelle lemmas and proof. The proof proves that ∀x, y∈R: 1≤(xy−1)2+ (x+y)2. Skills 1 and 2 are the same; the fact that x2≥0is used, though the exactproof differs from the lemmas. Skills 3 & 4 are also the same, though they do not seem to be used.357672c7751cd0c84aff6bf72a3d1bf97614e81d/data/full_data/valid at the time of pub-lication.Note that the mean and standard deviation in Figure 1 are calculated using Python 3.8.9, numpy1.22.2, numpy.mean() andnumpy.std() .Our experiments were run on an internal cluster, running one trial at a time. Each trial used 180GB of RAM, 50 CPU cores, OpenAI credits, and ran within 24 hours. We upper bound the totalcompute time required to run our LEGO-Prover experiments at 96 hours. The full project requiredmore compute than the experiments reported as one trial failed due to an out-of-memory error. Basedon Wang et al. [2024a]’s estimate of $300 per trial, we estimate the cost in OpenAI credits of ourexperiments to be $7.38 per trial as we run half the number of attempts and one twentieth the numberof questions. Under this estimate, the total cost of all our experiments is ∼$30.Our code is modified from the released LEGO-Prover code base, available at https://github.com/wiio12/LEGO-Prover [Wang et al., 2024b], released under an MIT License. Evaluation isdone using the miniF2F Zheng et al. [2022] dataset, available at https://github.com/openai/miniF2F/tree/main , which was released under the Apache License Version 2.0.Our code is documented and released, alongside the generated LEGO-Prover logs. It is a minormodification to the existing code base, and there is no training stage or new limitations. The code isreleased under the same license as the parent repository.F TroVE Hyperparameters and Experiment DetailsTroVE uses CodeLlama-7b-Instruct-hf [Rozière et al., 2023] interacting with the Python3interpreter. We use the hyperparameters specified in the paper, outlined in Table 5. The samehyperparameters are used for the ablation, and our reproduction of baseline TroVE.The mean and standard deviation of our 5 experiment runs are reported in Table 2. They are calculatedusing Python 3.8.9, numpy 1.22.2, numpy.mean() andnumpy.std() . The 2-sided t-test reported the16Table 5: TroVE hyperparametersHyperparameter valueLibrary trim frequency (trim_steps) 500Solution execution timeout in seconds (exec_timeout) 100top-p (top_p) 0.95Samples per prompt (num_return_sequences) 5Temperature (temperature) 0.6Max decode length (max_new_tokens) 512same table is performed using the same version of Python, scipy 1.8.1, scipy.stats.ttest_ind() ,with the settings equal_var=False andalternative=’less’ .Our experiments were run on an internal cluster, running up to 4 trials at once. Each trial used 1 NvidiaA40 GPU, 64 GB of RAM, 16 CPU cores, and ran within 12 hours. Smaller datasets completed morequickly. We upper bound the total compute time required to run our TroVE experiments at 480 hours.The full project required more compute than the experiments reported as we also tried running TroVEwith quantized CodeLlama, CodeLlama 13B and 70B, and GPT-4o-mini.Our code is modified from the released TroVE code base, available at https://github.com/zorazrw/trove [Wang et al., 2024b], which was released under the CC-BY-SA-4.0 license. Evalua-tion is done using the MATH Hendrycks et al. [2021] dataset, available at https://github.com/hendrycks/math , which was released under an MIT License.Our code is documented and released, alongside the generated TroVE logs. It is a minor modificationto the existing code base, and there is no training stage or new limitations. The code is released underthe same license as the parent repository.F.1 Additional TroVE experimentsWe also ran baseline TroVE using the larger CodeLlama 13B model, and found similar results withvery little direct function use. The key difference with the 7B model was that a single function waslearned for the geometry split, but it was never reused in a correct solution.We also attempted to run baseline TroVE using the 70B model, however we discarded the resultsas the LLM’s ethical safeguards were frequently tripped (e.g., giving reasons such as “it is notappropriate or ethical to provide assistance with academic assignments or graded exercises”).17NeurIPS Paper Checklist1.ClaimsQuestion: Do the main claims made in the abstract and introduction accurately reflect thepaper’s contributions and scope?Answer: [Yes]Justification: We analyze LEGO-Prover logs and ablate the model in Section 3, and weanalyze the TroVE logs and ablate the model in Section 4. In both cases we find little directreuse, and our ablation performs similarly.Guidelines:•The answer NA means that the abstract and introduction do not include the claimsmade in the paper.•The abstract and/or introduction should clearly state the claims made, including thecontributions made in the paper and important assumptions and limitations. A No orNA answer to this question will not be perceived well by the reviewers.•The claims made should match theoretical and experimental results, and reflect howmuch the results can be expected to generalize to other settings.•It is fine to include aspirational goals as motivation as long as it is clear that these goalsare not attained by the paper.2.LimitationsQuestion: Does the paper discuss the limitations of the work performed by the authors?Answer: [Yes]Justification: See Section 6. Primary limitations are scope (2 models and 2 datasets), andresource constraints on the ablations.Guidelines:•The answer NA means that the paper has no limitation while the answer No means thatthe paper has limitations, but those are not discussed in the paper.• The authors are encouraged to create a separate "Limitations" section in their paper.•The paper should point out any strong assumptions and how robust the results are toviolations of these assumptions (e.g., independence assumptions, noiseless settings,model well-specification, asymptotic approximations only holding locally). The authorsshould reflect on how these assumptions might be violated in practice and what theimplications would be.•The authors should reflect on the scope of the claims made, e.g., if the approach wasonly tested on a few datasets or with a few runs. In general, empirical results oftendepend on implicit assumptions, which should be articulated.•The authors should reflect on the factors that influence the performance of the approach.For example, a facial recognition algorithm may perform poorly when image resolutionis low or images are taken in low lighting. Or a speech-to-text system might not beused reliably to provide closed captions for online lectures because it fails to handletechnical jargon.•The authors should discuss the computational efficiency of the proposed algorithmsand how they scale with dataset size.•If applicable, the authors should discuss possible limitations of their approach toaddress problems of privacy and fairness.•While the authors might fear that complete honesty about limitations might be used byreviewers as grounds for rejection, a worse outcome might be that reviewers discoverlimitations that aren’t acknowledged in the paper. The authors should use their bestjudgment and recognize that individual actions in favor of transparency play an impor-tant role in developing norms that preserve the integrity of the community. Reviewerswill be specifically instructed to not penalize honesty concerning limitations.3.Theory Assumptions and ProofsQuestion: For each theoretical result, does the paper provide the full set of assumptions anda complete (and correct) proof?18Answer: [NA]Justification: This work is empirical.Guidelines:• The answer NA means that the paper does not include theoretical results.•All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.•All assumptions should be clearly stated or referenced in the statement of any theorems.•The proofs can either appear in the main paper or the supplemental material, but ifthey appear in the supplemental material, the authors are encouraged to provide a shortproof sketch to provide intuition.•Inversely, any informal proof provided in the core of the paper should be complementedby formal proofs provided in appendix or supplemental material.• Theorems and Lemmas that the proof relies upon should be properly referenced.4.Experimental Result ReproducibilityQuestion: Does the paper fully disclose all the information needed to reproduce the main ex-perimental results of the paper to the extent that it affects the main claims and/or conclusionsof the paper (regardless of whether the code and data are provided or not)?Answer: [Yes]Justification: Hyperparameters are reported in Appendices E and F, the TroVE and LEGO-Prover codebases are publicly available as are the MATH and miniF2F datasets, our ablationsare described in Sections 3 and 4, and we release our code, logs, and log analysis code.As to the underlying LLMs, TroVE uses open source CodeLlama, and our LEGO-Proverablation runs on a much smaller dataset to reduce the OpenAI API costs.Guidelines:• The answer NA means that the paper does not include experiments.•If the paper includes experiments, a No answer to this question will not be perceivedwell by the reviewers: Making the paper reproducible is important, regardless ofwhether the code and data are provided or not.•If the contribution is a dataset and/or model, the authors should describe the steps takento make their results reproducible or verifiable.•Depending on the contribution, reproducibility can be accomplished in various ways.For example, if the contribution is a novel architecture, describing the architecture fullymight suffice, or if the contribution is a specific model and empirical evaluation, it maybe necessary to either make it possible for others to replicate the model with the samedataset, or provide access to the model. In general. releasing code and data is oftenone good way to accomplish this, but reproducibility can also be provided via detailedinstructions for how to replicate the results, access to a hosted model (e.g., in the caseof a large language model), releasing of a model checkpoint, or other means that areappropriate to the research performed.•While NeurIPS does not require releasing code, the conference does require all submis-sions to provide some reasonable avenue for reproducibility, which may depend on thenature of the contribution. For example(a)If the contribution is primarily a new algorithm, the paper should make it clear howto reproduce that algorithm.(b)If the contribution is primarily a new model architecture, the paper should describethe architecture clearly and fully.(c)If the contribution is a new model (e.g., a large language model), then there shouldeither be a way to access this model for reproducing the results or a way to reproducethe model (e.g., with an open-source dataset or instructions for how to constructthe dataset).(d)We recognize that reproducibility may be tricky in some cases, in which caseauthors are welcome to describe the particular way they provide for reproducibility.In the case of closed-source models, it may be that access to the model is limited insome way (e.g., to registered users), but it should be possible for other researchersto have some path to reproducing or verifying the results.195.Open access to data and codeQuestion: Does the paper provide open access to the data and code, with sufficient instruc-tions to faithfully reproduce the main experimental results, as described in supplementalmaterial?Answer: [Yes]Justification: As explained in the previous question on reproducibility, we release our codealong with the logs analyzed. Furthermore, the core TroVE and LEGO-Prover code bases arealready publicly available, and can be easily modified to implement the ablations described.Guidelines:• The answer NA means that paper does not include experiments requiring code.•Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•While we encourage the release of code and data, we understand that this might not bepossible, so “No” is an acceptable answer. Papers cannot be rejected simply for notincluding code, unless this is central to the contribution (e.g., for a new open-sourcebenchmark).•The instructions should contain the exact command and environment needed to run toreproduce the results. See the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•The authors should provide instructions on data access and preparation, including howto access the raw data, preprocessed data, intermediate data, and generated data, etc.•The authors should provide scripts to reproduce all experimental results for the newproposed method and baselines. If only a subset of experiments are reproducible, theyshould state which ones are omitted from the script and why.•At submission time, to preserve anonymity, the authors should release anonymizedversions (if applicable).•Providing as much information as possible in supplemental material (appended to thepaper) is recommended, but including URLs to data and code is permitted.6.Experimental Setting/DetailsQuestion: Does the paper specify all the training and test details (e.g., data splits, hyper-parameters, how they were chosen, type of optimizer, etc.) necessary to understand theresults?Answer: [Yes]Justification: Hyperparameters are in Sections E and F, there is no training data, and theTroVE test set is the same as Wang et al. [2024b], and the LEGO-Prover test set a subsetof that used in Wang et al. [2024a]. The exact problems used in the subset are listed in thesame section as the hyperparameters.Guidelines:• The answer NA means that the paper does not include experiments.•The experimental setting should be presented in the core of the paper to a level of detailthat is necessary to appreciate the results and make sense of them.•The full details can be provided either with the code, in appendix, or as supplementalmaterial.7.Experiment Statistical SignificanceQuestion: Does the paper report error bars suitably and correctly defined or other appropriateinformation about the statistical significance of the experiments?Answer: [Yes]Justification: For the LEGO-Prover ablation, error regions of 1 standard deviation aredisplayed in Figure 1, the caption states that the source of variation is the LLM output andrace conditions within the system; the method used to compute mean and standard deviation(numpy) is stated in Appendix E. For the TroVE ablation, we report the mean and standarddeviation in Table 2. The best-of-five accuracy is reported in the Appendix, Table 3) so20that our values are comparable to those reported in Wang et al. [2024b]. Both tables statethat variation arises from sampling from the LLM. The method used to compute mean andstandard deviation (numpy) is stated in Appendix F.Guidelines:• The answer NA means that the paper does not include experiments.•The authors should answer "Yes" if the results are accompanied by error bars, confi-dence intervals, or statistical significance tests, at least for the experiments that supportthe main claims of the paper.•The factors of variability that the error bars are capturing should be clearly stated (forexample, train/test split, initialization, random drawing of some parameter, or overallrun with given experimental conditions).•The method for calculating the error bars should be explained (closed form formula,call to a library function, bootstrap, etc.)• The assumptions made should be given (e.g., Normally distributed errors).•It should be clear whether the error bar is the standard deviation or the standard errorof the mean.•It is OK to report 1-sigma error bars, but one should state it. The authors shouldpreferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesisof Normality of errors is not verified.•For asymmetric distributions, the authors should be careful not to show in tables orfigures symmetric error bars that would yield results that are out of range (e.g. negativeerror rates).•If error bars are reported in tables or plots, The authors should explain in the text howthey were calculated and reference the corresponding figures or tables in the text.8.Experiments Compute ResourcesQuestion: For each experiment, does the paper provide sufficient information on the com-puter resources (type of compute workers, memory, time of execution) needed to reproducethe experiments?Answer: [Yes]Justification: Outlined in Appendix E for the LEGO-Prover experiments, and Appendix Ffor the TroVE experiments.Guidelines:• The answer NA means that the paper does not include experiments.•The paper should indicate the type of compute workers CPU or GPU, internal cluster,or cloud provider, including relevant memory and storage.•The paper should provide the amount of compute required for each of the individualexperimental runs as well as estimate the total compute.•The paper should disclose whether the full research project required more computethan the experiments reported in the paper (e.g., preliminary or failed experiments thatdidn’t make it into the paper).9.Code Of EthicsQuestion: Does the research conducted in the paper conform, in every respect, with theNeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ?Answer: [Yes]Justification: There are no human subjects, to the best of our knowledge there are no dataconcerns, or immediate societal impact or harms (the possible future risks from deployingtool-learning systems, and the precautions that should be taken in future research in self-improving systems are outlined in Section 6), and to the best of our knowledge our work isreproducible and legal.Guidelines:•The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.•If the authors answer No, they should explain the special circumstances that require adeviation from the Code of Ethics.21•The authors should make sure to preserve anonymity (e.g., if there is a special consid-eration due to laws or regulations in their jurisdiction).10.Broader ImpactsQuestion: Does the paper discuss both potential positive societal impacts and negativesocietal impacts of the work performed?Answer: [Yes]Justification: We do not anticipate any immediate societal impact or harms, but we dodiscuss the possible future risks from deploying tool-learning systems, and the precautionsthat should be taken in future research in self-improving systems in Section 6.Guidelines:• The answer NA means that there is no societal impact of the work performed.•If the authors answer NA or No, they should explain why their work has no societalimpact or why the paper does not address societal impact.•Examples of negative societal impacts include potential malicious or unintended uses(e.g., disinformation, generating fake profiles, surveillance), fairness considerations(e.g., deployment of technologies that could make decisions that unfairly impact specificgroups), privacy considerations, and security considerations.•The conference expects that many papers will be foundational research and not tiedto particular applications, let alone deployments. However, if there is a direct path toany negative applications, the authors should point it out. For example, it is legitimateto point out that an improvement in the quality of generative models could be used togenerate deepfakes for disinformation. On the other hand, it is not needed to point outthat a generic algorithm for optimizing neural networks could enable people to trainmodels that generate Deepfakes faster.•The authors should consider possible harms that could arise when the technology isbeing used as intended and functioning correctly, harms that could arise when thetechnology is being used as intended but gives incorrect results, and harms followingfrom (intentional or unintentional) misuse of the technology.•If there are negative societal impacts, the authors could also discuss possible mitigationstrategies (e.g., gated release of models, providing defenses in addition to attacks,mechanisms for monitoring misuse, mechanisms to monitor how a system learns fromfeedback over time, improving the efficiency and accessibility of ML).11.SafeguardsQuestion: Does the paper describe safeguards that have been put in place for responsiblerelease of data or models that have a high risk for misuse (e.g., pretrained language models,image generators, or scraped datasets)?Answer: [NA]Justification: We present ablations of already publicly available models (LEGO-Proverand TroVE), neither of which we believe has a higher risk for misuse than the constituentpublicly available LLM.Guidelines:• The answer NA means that the paper poses no such risks.•Released models that have a high risk for misuse or dual-use should be released withnecessary safeguards to allow for controlled use of the model, for example by requiringthat users adhere to usage guidelines or restrictions to access the model or implementingsafety filters.•Datasets that have been scraped from the Internet could pose safety risks. The authorsshould describe how they avoided releasing unsafe images.•We recognize that providing effective safeguards is challenging, and many papers donot require this, but we encourage authors to take this into account and make a bestfaith effort.12.Licenses for existing assets22Question: Are the creators or original owners of assets (e.g., code, data, models), used inthe paper, properly credited and are the license and terms of use explicitly mentioned andproperly respected?Answer: [Yes]Justification: The creators of TroVE [Wang et al., 2024b], LEGO-Prover [Wang et al.,2024a], the MATH dataset [Hendrycks et al., 2021], and miniF2F [Zheng et al., 2022] areall cited in the abstract. The URLs and licenses are stated in Appendices E and F.Guidelines:• The answer NA means that the paper does not use existing assets.• The authors should cite the original paper that produced the code package or dataset.•The authors should state which version of the asset is used and, if possible, include aURL.• The name of the license (e.g., CC-BY 4.0) should be included for each asset.•For scraped data from a particular source (e.g., website), the copyright and terms ofservice of that source should be provided.•If assets are released, the license, copyright information, and terms of use in thepackage should be provided. For popular datasets, paperswithcode.com/datasetshas curated licenses for some datasets. Their licensing guide can help determine thelicense of a dataset.•For existing datasets that are re-packaged, both the original license and the license ofthe derived asset (if it has changed) should be provided.•If this information is not available online, the authors are encouraged to reach out tothe asset’s creators.13.New AssetsQuestion: Are new assets introduced in the paper well documented and is the documentationprovided alongside the assets?Answer: [Yes]Justification: Our code is documented and released, alongside the log files used in ouranalysis. As new assets are minor modifications to existing code bases with no training ornew limitations, we simply state as much in Appendices E and F; the code will be releasedunder the same license as the parent repositories.Guidelines:• The answer NA means that the paper does not release new assets.•Researchers should communicate the details of the dataset/code/model as part of theirsubmissions via structured templates. This includes details about training, license,limitations, etc.•The paper should discuss whether and how consent was obtained from people whoseasset is used.•At submission time, remember to anonymize your assets (if applicable). You can eithercreate an anonymized URL or include an anonymized zip file.14.Crowdsourcing and Research with Human SubjectsQuestion: For crowdsourcing experiments and research with human subjects, does the paperinclude the full text of instructions given to participants and screenshots, if applicable, aswell as details about compensation (if any)?Answer: [NA]Justification: No crowdsourcing or human subjects was done.Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Including this information in the supplemental material is fine, but if the main contribu-tion of the paper involves human subjects, then as much detail as possible should beincluded in the main paper.23•According to the NeurIPS Code of Ethics, workers involved in data collection, curation,or other labor should be paid at least the minimum wage in the country of the datacollector.15.Institutional Review Board (IRB) Approvals or Equivalent for Research with HumanSubjectsQuestion: Does the paper describe potential risks incurred by study participants, whethersuch risks were disclosed to the subjects, and whether Institutional Review Board (IRB)approvals (or an equivalent approval/review based on the requirements of your country orinstitution) were obtained?Answer: [NA]Justification: There were no human study participants.Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Depending on the country in which research is conducted, IRB approval (or equivalent)may be required for any human subjects research. If you obtained IRB approval, youshould clearly state this in the paper.•We recognize that the procedures for this may vary significantly between institutionsand locations, and we expect authors to adhere to the NeurIPS Code of Ethics and theguidelines for their institution.•For initial submissions, do not include any information that would break anonymity (ifapplicable), such as the institution conducting the review.24 |
esbIrV8N12 | Synchronizing Verbal Responses and Board Writingfor Multimodal Math Instruction with LLMsYuan-Hao Jiang1,2,4,5,Ruijia Li6,7,Yuang Wei1,2,3,5,Rui Jia1,2,5,Xiaobao Shao1,Hanglei Hu1,Bo Jiang1,2,∗1School of Computer Science and Technology, East China Normal University2Lab of Artificial Intelligence for Education, East China Normal University3School of Computing, National University of Singapore4Graduate School, Shanghai Jiao Tong University5Shanghai Institute of Artificial Intelligence for Education, East China Normal University6Faculty of Education, East China Normal University7Institute of Artificial Intelligence, China TelecomAbstractThe advancement of large language models (LLMs) has greatly facilitated mathinstruction, with the generated textual content serving as verbal responses to ad-dress student inquiries. However, in instructional settings, teachers often provideboth verbal responses and board writing (BW) simultaneously to enhance students’knowledge construction. To address this, we introduce MathBoard, a multimodallarge language model (MLLM) designed for elementary mathematics education,which progressively generates BW. Our study focuses on the provision of BW tolearners, aiming to reduce their cognitive load effectively. Furthermore, MathBoardcan be integrated with other approaches that enhance mathematical reasoningcapabilities. An empirical study involving 34 pre-service teachers demonstratedthat the multimodal interactions facilitated by MathBoard were more highly ac-cepted and impactful across various dimensions compared to text-only interactions,significantly promoting learners’ social construction of knowledge.aLearner QuestionInput QueryReasoning StepsCorrect Answer ReasoningVerbal Responses GeneratebLearner QuestionInput AskHuman teachersReasoning StepsCorrect Answer Problem-SolvingVerbal ResponsesProvideBoard WritingcLearner QuestionInput QueryMath WhiteboardReasoning StepsCorrect AnswerVerbal ResponsesBoard WritingdCurrent BW Content Conversation HistoryReasoningInquiryCross-ModalReasoningReasoningSynchronisationBWUpdateLLMsFigure 1: The cross-modal reasoning process of MathBoard in solving mathematical problems andits user interface design. In (a), (b), and (c), the reasoning details of LLMs, human teachers, andMathBoard in assisting students with solving mathematical problems are presented, respectively. (d)also illustrates the user interface of MathBoard.∗Corresponding Author: [email protected] Conference on Neural Information Processing Systems (NeurIPS 2024).1 IntroductionIn recent years, LLMs have shown immense potential in natural language processing and haveplayed significant roles across multiple disciplines, particularly in mathematics( 1;2). LLMs canautomatically generate exercises, provide instructional support, and deliver personalized feedback forstudents( 3;4). For specific educational needs, models like EduChat offer personalized, equitable,and empathetic services through fine-tuning( 5), while LoRA fine-tuning strategies facilitate theautomation of educational data annotation( 6). The MinT model focuses on enhancing logicalreasoning and generalization abilities( 7). Looking ahead, further efforts to improve the sustainabilityand interpretability of LLMs will be essential for enhancing their trustworthiness and reliability ineducational contexts(8; 9).The concept of shared whiteboards has proven effective in improving efficiency in collaborativeteams( 10). Recent research has focused on integrating LLMs into whiteboard collaboration envi-ronments to promote creative cooperation and problem-solving through Blackboard Writing (BW)technology. For instance, the AI-AB framework provides an interactive whiteboard platform thatfacilitates idea exchange between humans and LLMs(11). Related works include the Visual Sketch-pad, which allows LLMs to add auxiliary lines when solving mathematical problems( 12), and theWhiteboard-of-Thought project, which demonstrates how LLMs can improve their OCR performanceon whiteboards by enhancing reasoning abilities( 13). While these studies primarily focus on generat-ing Python code to improve LLMs’ reasoning capabilities, they do not directly serve as visual teachingaids for human learners. Therefore, we propose further exploration into the generative capabilities ofLLMs to create cross-modal learning resources, potentially transforming human-computer interactionmodels and providing learners with a more personalized and intuitive learning experience.The primary contribution of this study is the development of MathBoard, powered by LLMs, whichsynchronously generates both Verbal Responses and Board Writing, thereby offering learners across-modal mathematics learning experience. However, a current limitation of MathBoard is itsapplicability solely to elementary-level math instruction, which requires further refinement in futurework. This study seeks to address the following research questions:•How can the generative capabilities of LLMs be leveraged to provide cross-modal guidancein mathematics learning?• Is the proposed cross-modal teaching method more acceptable and engaging for learners?•Does the integration of Board Writing in mathematics instruction foster learners’ socialconstruction of knowledge?2 Related Work2.1 Multimodal Large Language Models for EducationLLMs have seen widespread application in education ( 14;15;16). With the rapid advancementsin Multimodal Large Language Models (MLLMs), numerous educational case studies have high-lighted their effectiveness and potential utility ( 17;18;19). For instance, MLLMs are capable ofgenerating multimodal writing suggestions through diverse channels, including text, visuals, andaudio, thereby aiding learners in enhancing their writing proficiency ( 20). Additionally, MLLMs canintegrate multimodal data collected during classroom activities to produce more precise transcriptions,facilitating post-class study or reference ( 21), as well as to assess student engagement and evaluatethe effectiveness of educational resources and environments ( 22). MLLMs also have the potentialto provide interpretable information in education( 23;24). Notably, given MLLMs’ advanced capa-bilities in processing multimedia information, they hold significant promise for supporting visuallyimpaired learners in acquiring knowledge and understanding the world around them ( 25). Althoughthe deployment of MLLMs necessitates increased data exchange( 26;27), which may pose potentialsecurity risks, techniques such as Federated Learning offer a viable means to mitigate these concerns(25; 28; 29).22.2 MLLMs for Math LearningIn mathematical problem-solving, reasoning skills are essential( 30;31;32). Additionally, given thatmathematical problems often include charts and data, the ability to process multimodal informationis also necessary( 33;34;35). While the integration of multimodal data inputs can provide MLLMswith richer information and greater problem-solving potential, research has demonstrated that manyMLLMs struggle to accurately interpret charts within the problem-solving context, leading to theineffective utilization of multimodal information (36).To improve MLLMs’ comprehension of such data, one effective approach is the use of text-basedquestion-answer pairs to redraw geometric figures, thereby enhancing their understanding of geomet-ric problems ( 37). This approach essentially transforms multimodal data into pure textual information,making it more accessible for MLLMs. Moreover, several other strategies have been employed toboost MLLMs’ problem-solving capabilities: the introduction of skill example repositories ( 38),fine-tuning models using chart data embedded in mathematical problems ( 39). Additionally, design-ing reasoning path retrieval methods suitable for multimodal mathematical problems is crucial forMLLMs. These methods include tree-based multimodal reasoning path searches ( 40) and guidedextraction of key information tailored for solving lengthy mathematical problems ( 41). These methodshave improved MLLMs’ understanding and problem-solving abilities in mathematics, but we believeit is even more crucial to integrate these reasoning results effectively into mathematics educationand tutoring( 42). Therefore, an approach that complements these studies is still needed to providelearners with a multimodal learning experience.3 MathBoardMany existing studies focus on enhancing the reasoning capabilities of LLMs in solving mathematicalproblems. However, our research emphasizes improving the learning experience and reducingcognitive load by utilizing a visualized BW. To this end, we developed the MathBoard. In real-worldclassrooms, mathematics instructors frequently provide verbal explanations in tandem with BWillustrations to guide students through problem-solving processes. For example, a teacher mightsay, “Notice that we need to borrow from the tens place to the ones place. This changes the tensdigit from 4 to 3, and the ones digit from 3 to 13, like this.” Simultaneously, the teacher would drawan arrow on the whiteboard from the tens to the ones place, alter the 4 in the tens place to 3, andupdate the 3 in the ones place to 13. While current methodologies predominantly focus on enhancingLLMs’ reasoning capabilities, they lack mechanisms for progressive BW generation. We illustratethis process in Figure 1, where Figures 1(a), 1(b), and 1(c) present the detailed reasoning pathwaysemployed by LLMs, human teachers, and the proposed MathBoard, respectively, during mathematicalproblem-solving.In detail, MathBoard first generates the reasoning process for a given mathematical problem, pro-ducing both the reasoning steps and the correct solution while simultaneously querying the currentBW content and conversation history. These components are then used for cross-modal reasoning. Ifresponding to the problem for the first time, the system creates a new BW; otherwise, it updates theexisting BW, enabling a progressive generation of the visual content. This iterative process resultsin a synchronized update of both the BW and the verbal response, which together help learnersindependently resolve the mathematical problem. It is important to clarify that the proposed methodis orthogonal to existing approaches aimed at enhancing the reasoning capabilities of LLMs. Theseexisting methods can be effectively applied during the initial reasoning process conducted by Math-Board, facilitating the attainment of more accurate answers and a more detailed reasoning steps.Subsequently, MathBoard can integrate these components for the ensuing cross-modal BW reasoning.Additionally, Figure 1(d) presents the user interface of the MathBoard, comprising three mainsections: the Board Writing Area, Chat Area, and Input Question Area. Learners input mathematicalproblems in the Input Question Area and interact with the MathBoard in the Chat Area. With eachsystem response, both the verbal response and the updated BW are synchronously provided, withthe verbal response displayed in the Chat Area and the BW update rendered in the Board WritingArea. Learners can continue interacting with MathBoard via the Chat Area until the problem is fullyresolved. Detailed information regarding the case study of MathBoard can be found in Appendix A.34 Design of experimentsThe study used the Educational Technology Acceptance & Satisfaction Model (ETAS-M)( 43) todesign a questionnaire, assessing system performance, including improvements in learning efficiency,speed of task completion, and ease of understanding complex concepts. The study also discussed theaccuracy of the information provided by the system, the design of the operation interface, and stability,as well as the role of the system in promoting student interaction, group activity participation, andimprovement in understanding. Further details regarding the experiment can be found in Appendix B.5 Results5.1 Reliability and Validity AnalysisThe reliability analysis of the subjects’ scale data yielded a Cronbach’s alpha of 0.947 for the entirescale, which consists of 30 questions, indicating good internal consistency and suggesting that thesubjects’ understanding of the scale was consistent. The Cronbach’s alpha values decreased afterthe deletion of all question items except for the dialog rounds interaction data, indicating that noquestions needed to be eliminated. Furthermore, to analyze the overall validity of the scale, theagreement between each item and the total was assessed using Pearson’s correlation coefficient, all ofwhich were positively correlated. Generally, a correlation coefficient greater than 0.6 is consideredhigh, greater than 0.4 is moderate, and greater than 0.2 is low. In this set of 30 questions, a total of 20items showed high correlation, and 6 items showed moderate correlation, indicating that the scale hashigh internal consistency and both analytical reliability and validity.5.2 Evaluation of MathBoardTo investigate the actual pedagogical effectiveness and subject acceptance of the scheme proposed inthis study, data from two groups of experimental subjects on ten dimensions were cross-analyzed.Group A is the control group, which uses only text interaction for math learning, and Group Bis the experimental group that uses cross-modal MathBoard learning. The analysis results showthat Group B scored higher than Group A on all dimensions, indicating that the visual presentationand communication approach enhances students’ willingness to participate in group activities andconstruct knowledge in authentic contexts. To further explore the effectiveness of the program,independent samples t-tests were conducted on the dimensions of the experimental and controlgroups. The results show that the social constructivism dimension reached statistical significance(p=0.010), indicating that the system can significantly promote students’ willingness to communicateand can be used as an auxiliary tool for students’ group activities and team discussions during theirstudies.5.3 Acceptance Variability AnalysisTo further investigate whether the acceptance of the cross-modal interactive tutoring scheme proposedin this study varies among groups with different characteristics, data on the teaching experience andgender of the subjects were collected. This was done to explore and analyze whether these variablesinfluence the teaching effectiveness of the platform. Regarding the gender variable, independentsamples t-tests were conducted on the scores of male and female subject groups across differentdimensions. The results indicate that there are no significant differences between the two gendergroups on any dimension, suggesting that the platform’s effectiveness is consistent across differentgenders, with no gender bias present. Additionally, for teaching experience, a one-way ANOV A wasconducted with teaching experience (1-7) as the independent variable. It was found that there are nosignificant differences across different teaching experience groups on any dimension. This suggeststhat both experienced and less experienced groups show no significant difference in acceptance ofthe platform. In summary, it can be concluded that the platform does not produce biased effects ondifferent subject groups.4Figure 2: Comparison of platform acceptance6 Discussion and conclusionThis study, through comparative analysis, found that Group B, which used cross-modal learningtools, performed better than Group A across various learning dimensions. This result supports thecross-modal Learning Theory, which posits that the combination of visual and textual elementscan enhance learners’ information processing and memory retention capabilities. Additionally, theSocial Constructivism theory also explains Group B’s superior performance, emphasizing the roleof social interaction and cultural tools in knowledge construction. The study also pointed out thatalthough cross-modal learning tools have significant advantages in promoting communication andcollaboration, their effects may not be as pronounced in other areas, such as information quality orsystem quality.The study offers recommendations for educational practice, highlighting the importance of integratingcross-modal learning tools into instructional design to enhance student engagement and motivation.It also suggests that educational policymakers consider investing in cross-modal learning technologywhen allocating resources and support the promotion of these tools through teacher training andcurriculum development. These tools can not only supplement traditional teaching methods but alsoprovide students with a richer learning experience.Although the study’s results are enlightening, there are some limitations, such as the small samplesize that may affect the generalizability of the findings, and the study mainly focused on short-termlearning outcomes. Future research should expand the sample size, explore the long-term effects ofcross-modal learning tools in different subjects and educational environments, and how to promotethe effective integration and application of these tools through educational policies and teacherprofessional development. Through these efforts, a better understanding of the potential of cross-modal learning tools can be achieved, and they can be utilized to enhance educational quality and thelearning experience.7 AcknowledgmentsThis work was partially supported by the National Natural Science Foundation of China, under Grant62477012, and the Natural Science Foundation of Shanghai, under Grant 23ZR1418500, and theSpecial Foundation for Interdisciplinary Talent Training in "AI Empowered Psychology / Education"of the School of Computer Science and Technology, East China Normal University, under the Grant2024JCRC-03, and the Doctoral Research and Innovation Foundation of the School of ComputerScience and Technology, East China Normal University, under the Grant 2023KYCX-03.5References[1]R. Li, Y . Wang, C. Zheng, Y .-H. Jiang, and B. Jiang, “Generating Contextualized MathematicsMultiple-Choice Questions Utilizing Large Language Models,” in Artificial Intelligence inEducation. Posters and Late Breaking Results, Workshops and Tutorials, Industry and InnovationTracks, Practitioners, Doctoral Consortium and Blue Sky , A. M. Olney, I.-A. Chounta, Z. Liu,O. C. Santos, and I. I. Bittencourt, Eds. Cham: Springer Nature Switzerland, Jul. 2024, pp.494–501.[2]Y .-H. Jiang, “Multi-Agent System for Math Learning: Contextualized Mathematics Multiple-Choice Question Generation with Agentic Workflow,” in 2nd Global Summit On ArtificialIntelligence . East Windsor: Health Sciences Publishing Institute, Aug. 2024, p. 22.[3]R. Li, Y .-H. Jiang, Y . Wang, H. Hu, and B. Jiang, “A Large Language Model-Enabled Solution for the Automatic Generation of Situated Multiple-Choice MathQuestions,” in Conference Proceedings of the 28th Global Chinese Conference onComputers in Education (GCCCE 2024) . Chongqing, China: Global Chinese Conferenceon Computers in Education, Jun. 2024, pp. 130–136. [Online]. Available: http://gccce2024.swu.edu.cn/GCCCE2024_gongzuofanglunwenji2024-06-23A.pdf#page=148[4]Y . Zhou, M. Zhang, Y .-H. Jiang, N. Liu, and B. Jiang, “A Study on EducationalData Analysis and Personalized Feedback Report Generation Based on Tags andChatGPT,” in Conference Proceedings of the 28th Global Chinese Conference onComputers in Education (GCCCE 2024) . Chongqing, China: Global Chinese Conferenceon Computers in Education, Jun. 2024, pp. 108–115. [Online]. Available: http://gccce2024.swu.edu.cn/GCCCE2024_gongzuofanglunwenji2024-06-23A.pdf#page=126[5]E. Andy, “ChatGPT has entered the classroom: how LLMs could transform education,” Nature ,vol. 623, no. 7987, pp. 474–477, 2023. [Online]. Available: https://www.nature.com/articles/d41586-023-03507-3[6]H. Hu, Y .-H. Jiang, and R. Li, “Finetuning Large Language Models to AutomaticallyClassify Cognitive Skills in Mathematical Problems,” in Conference Proceedings of the 28thGlobal Chinese Conference on Computers in Education (GCCCE 2024) . Chongqing, China:Global Chinese Conference on Computers in Education, Jun. 2024, pp. 145–152. [Online].Available: http://gccce2024.swu.edu.cn/GCCCE2024_gongzuofanglunwenji2024-06-23A.pdf#page=163[7]Z. Liang, D. Yu, X. Pan, W. Yao, Q. Zeng, X. Zhang, and D. Yu, “MinT: BoostingGeneralization in Mathematical Reasoning via Multi-View Fine-Tuning,” Jul. 2023. [Online].Available: http://arxiv.org/abs/2307.07951[8]X. Li, S. Guo, J. Wu, and C. Zheng, “An interpretable polytomous cognitivediagnosis framework for predicting examinee performance,” Information Processing& Management , vol. 62, no. 1, p. 103913, Jan. 2025. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0306457324002723[9]T.-Y . Liu, Y .-H. Jiang, Y . Wei, X. Wang, S. Huang, and L. Dai, “Educational Practices andAlgorithmic Framework for Promoting Sustainable Development in Education by IdentifyingReal-World Learning Paths,” Sustainability , vol. 16, no. 16, p. 6871, Jan. 2024. [Online].Available: https://www.mdpi.com/2071-1050/16/16/6871[10] S. Mailles-Viard Metz, P. Marin, and E. Vayre, “The shared online whiteboard: Anassistance tool to synchronous collaborative design,” European Review of AppliedPsychology , vol. 65, no. 5, pp. 253–265, Sep. 2015. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S116290881500064X[11] J. He, S. Houde, G. E. Gonzalez, D. A. Silva Moran, S. I. Ross, M. Muller, and J. D. Weisz, “AIand the Future of Collaborative Work: Group Ideation with an LLM in a Virtual Canvas,” inProceedings of the 3rd Annual Meeting of the Symposium on Human-Computer Interaction forWork , ser. CHIWORK ’24. New York, NY , USA: Association for Computing Machinery, Jun.2024, pp. 1–14. [Online]. Available: https://dl.acm.org/doi/10.1145/3663384.36633986[12] Y . Hu, W. Shi, X. Fu, D. Roth, M. Ostendorf, L. Zettlemoyer, N. A. Smith, and R. Krishna,“Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models,”Jul. 2024. [Online]. Available: http://arxiv.org/abs/2406.09403[13] S. Menon, R. Zemel, and C. V ondrick, “Whiteboard-of-Thought: Thinking Step-by-Step AcrossModalities,” Jun. 2024. [Online]. Available: http://arxiv.org/abs/2406.14562[14] X. Zhuang, H. Wu, X. Shen, P. Yu, G. Yi, X. Chen, T. Hu, Y . Chen, Y . Ren, Y . Zhang, Y . Song,B. Liu, and M. Lan, “TOREE: Evaluating Topic Relevance of Student Essays for ChinesePrimary and Middle School Education,” in Findings of the Association for ComputationalLinguistics ACL 2024 , L.-W. Ku, A. Martins, and V . Srikumar, Eds. Bangkok, Thailandand virtual meeting: Association for Computational Linguistics, Aug. 2024, pp. 5749–5765.[Online]. Available: https://aclanthology.org/2024.findings-acl.342[15] Y . Wu, Y .-H. Jiang, Y . Chen, and W. Zhang, “Multi-Agent Systems Supported byLarge Language Models: Technical Pathways, Educational Applications, and FutureProspects,” Open Education Research , vol. 30, no. 5, pp. 63–75, 2024. [Online]. Available:https://doi.org/10.13966/j.cnki.kfjyyj.2024.05.007[16] Y .-H. Jiang, J. Shi, Y . Tu, Y . Zhou, W. Zhang, and Y . Wei, “For Learners: AI Agent is All YouNeed,” in Enhancing Educational Practices: Strategies for Assessing and Improving LearningOutcomes , ser. Education in a Competitive and Globalizing World, Y . Wei, C. Qi, Y .-H. Jiang,and L. Dai, Eds. New York, NY , USA: Nova Science Publishers, Oct. 2024, pp. 21–46.[Online]. Available: https://doi.org/10.52305/RUIG5131[17] A. Bewersdorff, C. Hartmann, M. Hornberger, K. Seßler, M. Bannert, E. Kasneci, G. Kasneci,X. Zhai, and C. Nerdel, “Taking the Next Step with Generative Artificial Intelligence: TheTransformative Role of Multimodal Large Language Models in Science Education,” Sep. 2024.[Online]. Available: http://arxiv.org/abs/2401.00832[18] G.-G. Lee, L. Shi, E. Latif, Y . Gao, A. Bewersdorff, M. Nyaaba, S. Guo, Z. Wu, Z. Liu,H. Wang, G. Mai, T. Liu, and X. Zhai, “Multimodality of AI for Education: Towards ArtificialGeneral Intelligence,” Dec. 2023. [Online]. Available: http://arxiv.org/abs/2312.06037[19] T. Gao, P. Chen, M. Zhang, C. Fu, Y . Shen, Y . Zhang, S. Zhang, X. Zheng, X. Sun, L. Cao,and R. Ji, “Cantor: Inspiring Multimodal Chain-of-Thought of MLLM,” Apr. 2024. [Online].Available: http://arxiv.org/abs/2404.16033[20] N. Singh, G. Bernal, D. Savchenko, and E. L. Glassman, “Where to Hide a StolenElephant: Leaps in Creative Writing with Multimodal Machine Intelligence,” ACMTrans. Comput.-Hum. Interact. , vol. 30, pp. 68:1–68:57, Sep. 2023. [Online]. Available:https://dl.acm.org/doi/10.1145/3511599[21] M. Wang, Y . Wang, T.-T. Vu, E. Shareghi, and G. Haffari, “Exploring the Potential ofMultimodal LLM with Knowledge-Intensive Multimodal ASR,” Jun. 2024. [Online]. Available:http://arxiv.org/abs/2406.10880[22] G.-G. Lee and X. Zhai, “Realizing Visual Question Answering for Education: GPT-4V as aMultimodal AI,” May 2024. [Online]. Available: http://arxiv.org/abs/2405.07163[23] H. Abu-Rasheed, C. Weber, and M. Fathi, “Experimental Interface for Multimodal and LargeLanguage Model Based Explanations of Educational Recommender Systems,” Jan. 2024.[Online]. Available: http://arxiv.org/abs/2402.07910[24] Y . Tai, W. Fan, Z. Zhang, and Z. Liu, “Link-Context Learning for Multimodal LLMs,” 2024, pp.27 176–27 185. [Online]. Available: https://openaccess.thecvf.com/content/CVPR2024/html/Tai_Link-Context_Learning_for_Multimodal_LLMs_CVPR_2024_paper.html[25] A. Bala, “Multimodal LLM using Federated Visual Instruction Tuning for Visually Impaired,”Master’s thesis, University at Buffalo, New York, USA, May 2024.7[26] M. A. Rahman, L. Alqahtani, A. Albooq, and A. Ainousah, “A Survey on Security and Privacyof Large Multimodal Deep Learning Models: Teaching and Learning Perspective,” in 202421st Learning and Technology Conference (L&T) , Jan. 2024, pp. 13–18. [Online]. Available:https://ieeexplore.ieee.org/abstract/document/10469434[27] S. Liu, W. Pu, C. Xu, Z. Huang, Q. Li, H. Wang, C. Lin, and C. Shen, “A ComprehensiveSurvey of Multimodal Large Language Models: Concept, Application and Safety,” Oct. 2024.[Online]. Available: https://www.researchsquare.com/article/rs-5270567/v1[28] S. Küchemann, K. Avila, Y . Dinc, C. Hortmann, N. Revenga Lozano, V . Ruf, N. Stausberg,S. Steinert, F. Fischer, M. Fischer, E. Kasneci, G. Kasneci, T. Kuhr, G. Kutyniok, S. Malone,M. Sailer, A. Schmidt, M. Stadler, J. Weller, and J. Kuhn, Are Large Multimodal FoundationModels all we need? On Opportunities and Challenges of these Models in Education , Jan. 2024.[29] L. Sun, J. Wu, Y . Xu, and Y . Zhang, “A federated learning and blockchain framework forphysiological signal classification based on continual learning,” Information Sciences , vol. 630,pp. 586–598, Jun. 2023. [Online]. Available: https://www.sciencedirect.com/science/article/abs/pii/S0020025523001767[30] V . Shah, D. Yu, K. Lyu, S. Park, J. Yu, Y . He, N. R. Ke, M. Mozer, Y . Bengio, S. Arora,and A. Goyal, “AI-Assisted Generation of Difficult Math Questions,” Oct. 2024. [Online].Available: http://arxiv.org/abs/2407.21009[31] J. Ahn, R. Verma, R. Lou, D. Liu, R. Zhang, and W. Yin, “Large Language Modelsfor Mathematical Reasoning: Progresses and Challenges,” Sep. 2024. [Online]. Available:http://arxiv.org/abs/2402.00157[32] W. Liu, H. Hu, J. Zhou, Y . Ding, J. Li, J. Zeng, M. He, Q. Chen, B. Jiang, A. Zhou,and L. He, “Mathematical Language Models: A Survey,” Feb. 2024. [Online]. Available:http://arxiv.org/abs/2312.07622[33] P. Lu, H. Bansal, T. Xia, J. Liu, C. Li, H. Hajishirzi, H. Cheng, K.-W. Chang, M. Galley,and J. Gao, “MathVista: Evaluating Mathematical Reasoning of Foundation Models in VisualContexts,” Jan. 2024. [Online]. Available: http://arxiv.org/abs/2310.02255[34] Y . Huang, W. Zhang, L. Feng, X. Wu, and K. C. Tan, “How Multimodal Integration Boost thePerformance of LLM for Optimization: Case Study on Capacitated Vehicle Routing Problems,”Mar. 2024. [Online]. Available: http://arxiv.org/abs/2403.01757[35] Z. Liang, T. Yang, J. Zhang, and X. Zhang, “UniMath: A Foundational andMultimodal Mathematical Reasoner,” Dec. 2023, pp. 7126–7133. [Online]. Available:https://aclanthology.org/2023.emnlp-main.440[36] R. Zhang, D. Jiang, Y . Zhang, H. Lin, Z. Guo, P. Qiu, A. Zhou, P. Lu, K.-W. Chang, P. Gao, andH. Li, “MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual MathProblems?” Aug. 2024. [Online]. Available: http://arxiv.org/abs/2403.14624[37] J. Gao, R. Pi, J. Zhang, J. Ye, W. Zhong, Y . Wang, L. Hong, J. Han, H. Xu, Z. Li, and L. Kong,“G-LLaV A: Solving Geometric Problem with Multi-Modal Large Language Model,” Dec. 2023.[Online]. Available: http://arxiv.org/abs/2312.11370[38] A. Didolkar, A. Goyal, N. R. Ke, S. Guo, M. Valko, T. Lillicrap, D. Rezende, Y . Bengio,M. Mozer, and S. Arora, “Metacognitive Capabilities of LLMs: An Exploration in MathematicalProblem Solving,” May 2024. [Online]. Available: http://arxiv.org/abs/2405.12205[39] Y . Han, C. Zhang, X. Chen, X. Yang, Z. Wang, G. Yu, B. Fu, and H. Zhang, “ChartLlama: AMultimodal LLM for Chart Understanding and Generation,” Nov. 2023. [Online]. Available:http://arxiv.org/abs/2311.16483[40] J. Kang, X. Z. Li, X. Chen, A. Kazemi, Q. Sun, B. Chen, D. Li, X. He, Q. He, F. Wen, J. Hao,and J. Yao, “MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time,”Jun. 2024. [Online]. Available: http://arxiv.org/abs/2405.162658[41] X. Xu, T. Xiao, Z. Chao, Z. Huang, C. Yang, and Y . Wang, “Can LLMs Solve longer MathWord Problems Better?” May 2024. [Online]. Available: http://arxiv.org/abs/2405.14804[42] P. Lu, “Advancing Mathematical Reasoning with Language Models: A Multimodal andKnowledge-Intensive Perspective,” Ph.D. dissertation, UCLA, 2024. [Online]. Available:https://escholarship.org/uc/item/678864d8[43] P. Wessa and S. Poelmans, “MODELING EDUCATIONAL TECHNOLOGY ACCEPTANCEAND SATISFACTION,” EDULEARN09 Proceedings , pp. 5882–5889, 2009. [Online].Available: https://library.iated.org/view/WESSA2009MOD9A Case StudyThe following content presents the details of MathBoard in the case of subtraction, illustrating acomplete dialogue workflow. Due to space limitations, we only showcase the content of MathBoard’sBoard Writing (BW) for the key steps. In practice, each verbal response is generated synchronouslywith the BW. Figures 3 and 4 display the first and second halves of this case, respectively.34??MathBoard- ?27 given away apples:�...Emma already has 43 apples:������������...How many apples does Emma have now?� Counting apples72-Well, can you help me understand this problem? I just started learning subtraction, and I'm not very good at solving these kinds of problems yet. Thank you!Of course, Bob! Let's break it down together. We'll go step by step. Subtraction is like taking away some apples from a bunch you already have. Let's see what happens when we subtract the apples Emma gave to Jack.Let me think... I'm having trouble with this apple subtraction problem. I can't subtract 7 from 3. What should I do?Great question, Bob! When you need to subtract a bigger number from a smaller one, you can borrow from the next digit. Let me show you how to do that.Hello, Bob! Thank you for your kind words! I'm here to help you with your math problem. Do you have any questions about the problem we're working on?Hello! I am Bob, and I’m currently in elementary school.What a coincidence! This morning, my mom also gave me an apple to eat. Apples are really delicious!That's awesome, Bob! Apples are indeed delicious and healthy! Now, let's think about the apples in our math problem. How many apples do you think Emma has left after giving some to Jack?I am MathBoard, your math learning assistant. I will help you solve any questions you have about this math problem.Figure 3: Details of the first half of MathBoard in a subtraction case.B Experimental SetupThis study comprehensively evaluates the MathBoard teaching aid based on the Educational Technol-ogy Acceptance & Satisfaction Model (ETAS-M)( 43). The research process is designed rigorously,initially recruiting 60 pre-service teachers from normal colleges through recruitment notices. Partici-pants have diverse backgrounds, including undergraduate, master’s, and doctoral students, coveringvarious stages of education. The Richter Scale was used to assess the participants’ teaching experi-10Okay, I see. Can you tell me where we need to borrow from?We need to borrow from the tens place because 7 is greater than 3. I'll show you how to borrow so we can subtract correctly.So the result in the ones place is 6! Am I right?That's correct! After borrowing and subtracting, the result in the ones place is indeed 6. Let's update the board to reflect this.34?MathBoard� Counting apples72-6317-61\\313Thank you, let's go back to the original question. Now I understand that the result in the ones place is 6! What should we do next step?Great job with the ones place! Now we need to subtract the tens place. Since we borrowed 1 from the tens place, we need to adjust it and then subtract the tens digits. Let’s move to that step.The answer for the tens place is 4 minus 2, which equals 2. Did I calculate that correctly?You're very close, but remember that we had to borrow from the tens place earlier. When we borrowed, the 4 in the tens place became 3. So the correct calculation is 3 minus 2, which equals 1. Let's update that.Sorry... I made a mistake. I see that you crossed out 43. We've finished with the ones place, and now for the tens place. If the digit in the tens place is 1, then the final result would be 16. Am I right?Exactly! You did it! �When you subtracted 7 from 13 in the ones place, you got 6. Then, in the tens place, after borrowing, you correctly subtracted 2 from 3, giving you 1. So the final result of 43 minus 27 is indeed 16! Great job figuring it out!I did it! I think I've figured out how to do subtraction, especially when it comes to borrowing. Thank you!You're very welcome, Bob! � I'm so proud of you for figuring out how to do subtraction with borrowing. Keep practicing, and you'll get even better. If you ever have more questions or need help with anything else, just ask. Happy learning! �Figure 4: Details of the second half of MathBoard in a subtraction case.11Performance ExpectancyRelative AdvantageUsefulnessFacilitatingConditionsInformation QualitySystem QualityEffortExpectancyEase of UsePedagogicalParadigmSocial Constructivism,Peer Support, etc...Experience & AgeGenderShift + InteractionIntention to Use*Actual Use*Exam Scores*Educational Technology Acceptance & Satisfaction Model (ETAS-M)This relationship is redundant*optimal weights have to be appliedFigure 5: The ETAS-M, designed based on the UTAUT model, was created by Wessa P.( 43). It takesinto account the influences from performance expectancy, facilitating conditions, effort expectancy,and the pedagogical paradigm, and posits that these factors affect intention to use and actual use,ultimately impacting exam scores.ence, with 25% of the participants having extensive teaching experience. All participants voluntarilyjoined the study and signed informed consent forms. The research adheres to ethical standardsand has been approved by the ethics committee. The details of ETAS-M are provided in Figure5. It is important to note that, although ETAS-M identifies gender as a potential factor influencingoutcomes, our experiments did not reveal any significant differences between genders. To gathergender information from participants, we provided an text box in the questionnaire, allowing them toself-identify their gender freely rather than selecting from predefined categories.During the experiment, participants received training on how to use the system and solve mathematicalproblems with MathBoard. Researchers recorded detailed interaction data with the system, includingthe number of dialogue rounds and problem-solving efficiency. This data helps to deeply understandthe practical effects of the teaching aid. The results showed that 90% of the participants came from ateacher-type professional background, and 30% had a professional background related to mathematics.Ultimately, 34 participants completed the entire experimental process and provided effective data.We developed MathBoard based on ChatGPT-4o, which is provided by OpenAI under its terms ofservice, and its use is governed by those terms. The experiments were conducted on a device with anAMD Ryzen 9 7945HX processor and 16GB of RAM.C LimitationAlthough this study provides valuable insights into multimodal learning in elementary mathematicseducation and demonstrates the effectiveness of the MathBoard system in reducing cognitive load andpromoting social construction, several limitations should be acknowledged and addressed in futureresearch. The following section outlines these limitations.12First, the study is limited by a relatively small sample size. The findings are based on a sampleof 34 pre-service teachers, which may restrict the generalizability of the results. Future studiesshould consider using a larger and more diverse sample to gain more comprehensive insights into theeffectiveness of the proposed system. Furthermore, the current system is designed specifically forelementary mathematics, which limits its scalability to higher education and other subjects. Futureresearch should explore how this system can be adapted and applied to broader educational contexts.For instance, developing different board-writing generation methods for various subjects or use casescould significantly enhance its scalability.In this study, we observed that multimodal learning supported by LLMs can enhance learners’social construction, contributing to improved learning outcomes. However, the long-term effectsand mechanisms of LLM-supported multimodal learning on learners’ development remain unclearand warrant further investigation. For example, while multimodal information reduces cognitiveload for learners, it may enhance metacognitive activities and improve learning outcomes for some.Conversely, other learners may experience good results when using the MathBoard system butstruggle to perform independently once the system is removed, due to a sudden increase in cognitiveload. This could lower test scores and foster dependency on the system. These hypotheses areintriguing and deserve further exploration.MathBoard represents an innovative exploration of LLM-supported multimodal learning. In futureresearch, we intend to extend the foundational framework of MathBoard to other grade levels, subjects,and educational fields to enhance its applicability in broader educational contexts. Moreover, theissues of data privacy and the ethical implications of using large language models in educationare critical and require further discussion. Given the sensitivity of educational data, future studiesshould focus on ensuring privacy protection and addressing ethical considerations when employingsuch technologies in the classroom. We look forward to further innovations and the advancementof LLM-supported multimodal learning, bringing us closer to realizing the vision of large-scale,personalized education.13 |
eeZG97VjYa | Give me a hint: Can LLMs take a hint to solve mathproblems?Vansh Agrawal, Pratham Singla, Amitoj Singh Miglani, Shivank Garg, Ayush MangalVision and Language GroupIndian Institute of Technology, Roorkee{vansh_a@ph,pratham_s@me,amitoj_sm@ph,shivank_g@mfs,amangal@cs}.iitr.ac.inAbstractWhile state-of-the-art LLMs have shown poor logical and basic mathematical rea-soning, recent works try to improve their problem-solving abilities using promptingtechniques. We propose giving "hints" to improve the language model’s perfor-mance on advanced mathematical problems, taking inspiration from how humansapproach math pedagogically. We also test robustness to adversarial hints anddemonstrate their sensitivity to them. We demonstrate the effectiveness of ourapproach by evaluating various diverse LLMs, presenting them with a broad set ofproblems of different difficulties and topics from the MATH dataset and comparingagainst techniques such as one-shot, few-shot, and chain of thought prompting.Our code is available at https://github.com/vlgiitr/LLM-Math1 IntroductionThe ability to reason and logically solve complex mathematical problems is essential for progressin nearly every field, whether it be modeling complex environments, developing new algorithms,or engineering new devices. Recent works have explored Large Language models’ mathematicalcapabilities and found them lacking in logic and basic mathematical reasoning [ 1] [2]. Improvingthese reasoning capabilities is important in a future where LLMs can be used to assist humans inincreasingly complex mathematical tasks or act like AI math teachers. In this work, taking inspirationfrom how humans are taught math pedagogically, we propose "hinting" LLMs by giving subtleguidance or clues as a method to improve problem-solving capabilities on mathematical tasks andthen compare its effectiveness against other prompting methods, such as one-shot [ 3], few-shot [ 3],and chain of thought prompting [ 4]. We evaluate our approach using the MATH dataset [ 5], consistingof math problems categorized into distinct classes based on subtopics such as algebra, probability,geometry, etc., with different levels of complexity (1-5). We use a diverse set of LLMs, includingbase language models, instruction fine-tuned models, models specifically tuned for mathematicaltasks, and closed source models such as GPT-4o-mini [6] and Gemini Flash [7] for our evaluation.We further examine the robustness of these models to adversarial prompts, misleading hints, and cluesof varying levels. We investigate how sensitive the models are to incorporating these incorrect hints ascontext, which may degrade performance, versus their ability to reject such misleading information.Through these experiments, we seek to contribute to the ongoing research on improving the currentstate-of-the-art language model’s reasoning capabilities and their practical applications in solvingmathematical tasks [8].1.1 Background and Related WorkVarious prompting methods have been shown to increase the accuracy of LLMs in solving complexproblems that require understanding and reasoning [9]. A few popular methods being -38th Conference on Neural Information Processing Systems (NeurIPS 2024).1.One shot prompting [ 3]:Giving a single example problem and its final answer to the modelin-context to learn from.2.Few shot prompting [ 3]:Providing multiple in-context example problems and their finalanswers instead of one.3.Chain of thought prompting (CoT) [ 4]:Providing a detailed step-by-step solution to thein-context example to provide intermediate reasoning steps to solve a problem.However, these methods have not been extensively explored in the context of solving more compli-cated mathematical problems. Further, their generalization capabilities, to apply learned knowledgeto a broader domain of questions (e.g., algebraic equations, geometry problems) rather than specificproblems (e.g., solving a specific algebraic equation, finding roots for a quadratic polynomial),have not been sufficiently researched [ 10]. Previous math benchmarks[ 10] [11] show that math is avaluable ability to test an LLM’s reasoning and problem-solving capabilities. While there has beenwork on hinting [ 12] [13] as a prompting technique, they have not been robust and diverse enough tosee if these techniques are generalizable to various types of models and what the effect and sensitivityof these models to adversarial hinting can be.2 MethodWe first prompt the Gemini 1.5 flash model to generate hints for our dataset by passing it the questionand final answer. These hints are then provided in context to the model and the target problem to helpthe model reason about the task. We believe this aligns more with how humans solve math problemsby getting hints instead of the complete solution for an example problem, as in Chain of Thought, orjust the final answer, as in one/few-shot approaches.To test the adversarial robustness of these models to hints, we provided either an adversarial misleadinghint or a hint from a random question to observe how sensitive the models are to our hints. Similarversions for one/few shot and CoT prompting are also generated as shown in Figure 1. More detailsabout the prompting process can be found in Appendix A.We evaluate a diverse ensemble of LLMs ranging from base models to instruct fine-tuned modelsand math-finetuned LLMs, including nine open-source models and two closed-source models (Moredetails in Appendix C). We use the MATH dataset [ 5], which contains problems of seven differentclasses (e.g., algebra, geometry, etc.) of varying difficulty levels from 1 (easy) to 5 (hard), moredetails about the difficulty levels are given in Appendix B.3 Experiments and Results3.1 SetupWe evaluate a set of 11 models for our experiments, using open-source implementations wherepossible and public API offerings for closed-source models. We evaluate 17 different types ofprompting techniques, ranging from baseline to hinting to adversarial, along with one/few shot andCoT as given in Appendix A. The problem set consisted of all seven topics from the MATH dataset[5], with 100 problems for each topic. Exact details about the data split are given in Appendix B. Wereport the comparison by checking the fraction of questions that the model got correct.Model InputModel OutputThe answer is 5Model OutputThe answer is 4Model Input Model InputA: ?Model OutputThe answer is 7Q1: ...A1: ...Q2: ...A2: ......Baseline Hinting Baseline Few ShotModel InputModel OutputThe answer is 3One ShotModel InputModel OutputThe answer is 4Q: When you divide a binary number...Step 1 :...Step 2: ...Q:If P_b × P_b = 31_b, where P andb represent two distinct digits 0-9and P is one less than b, what is thevalue of the base b?A : ?Chain of ThoughtQ:If P_b × P_b = 31_b, where P and brepresent two distinct digits 0-9 and Pis one less than b, what is the value ofthe base b?Hint: Convert base b number tobase 10 and setup equation formultiplication.A : ?Q:If P_b × P_b = 31_b, where Pand b represent two distinct digits0-9 and P is one less than b, whatis the value of the base b?Q:If P_b × P_b = 31_b, where P andb represent two distinct digits 0-9and P is one less than b, what is thevalue of the base b?A : ?Q : When you divide a binary number...A: ...Q:If P_b × P_b = 31_b, where P and brepresent two distinct digits 0-9 and Pis one less than b, what is the value ofthe base b?A : ?Figure 1: A comparison of various prompting techniques2HintingBaseFew-shot One-shotCoT0.30.40.50.60.560.540.510.50.48Average ScoreBase CoTAdv-Hint Rand-Hint0.30.40.50.60.540.480.47 0.46Average ScoreFigure 2: Left: Comparing different prompting techniques, hinting boosts performance and CoTperforms worst, as explained in section 3.2. Right: Effect of adversarial hints: Adversarial hintinggreatly reduces performances, as explained in section 3.33.2 Evaluating Hint based promptingWe observe that hinting improves the performance of models, as shown in Figure 2. We attributethe poorer performance of other approaches to a lack of generalization. The improvement of hintingover chain of thought, can be explained as the latter restricts the model’s generalization by forcing itto follow the entire reasoning steps of a solution, which might not be the same for all the problems.On the other hand, hints only provide certain helpful directions, giving the model more freedom togeneralize better to different problems.Approaches like the chain of thought [ 4] give step-by-step solutions, which might not generalize toother problems restricting the search space of these models. This can also be due to a "snowball"effect [ 5], in which intermediate-generated steps with mistakes can derail the model from thelogical direction to the right answer. One-shot and few-shot perform relatively better as they do notrestrict the steps, however, the model still gets confined to a narrower subdomain, with few-shotshowing more generalisation due to more examples and hence, better performance. Hinting leads tobetter performance as it helps the model reason about question-specific knowledge more than otherprompting techniques and gives initial steps to ensure the right direction without over-fitting to anexact solution.3.3 Evaluating Adversarial HintingWe find that giving Adversarial hints drastically reduces the model’s performance, dropping it belowCoT, which performed the worst in our non-adversarial approaches, as shown in Fig 2 and Table 1.We also apply this to few-shot[ 3] and one-shot[ 3] settings and find that adversarial hinting affects theperformance in those cases as well. The inferior performance of CoT and its prompting variants canbe attributed to over-fitting various factors within irrelevant information that influence the model’ssensitivity to distractions based on lexical similarity. [14].A : ?Model InputQ:Number of ways to place 2 itemson a 8*8 board such that they arein the same row/column.Model OutputThe answer is 448Model OutputThe answer is 56Model Input Model InputQ:Find r if 3(r - 7) = 4(2 - 2r) +4Step 1: ...Step 2: ......Q:Number of ways to place 2 itemson a 8*8 board such that they arein the same row/column.A : ?Model OutputThe answer is 16Q:Number of ways to place 2 itemson a 8*8 board such that they arein the same row/column.Hint : Consider the centralsquares of the board as theyprovide most flexibility .A : ?Baseline Prompting Chain of Thought Adversarial HintingModel InputModel OutputThe answer is 128Q:Number of ways to place 2 itemson a 8*8 board such that they arein the same row/column.Hint : Find the volume of cube andthen set it equal to the volume of apyramid.A : ?Random HintingFigure 3: Adversarial and random hinting strategies3Prompt Technique Baseline Few-shot One-shot CoTBaseline 0.5355 0.5122 0.4973 0.4815with hint 0.5615 0.5171 0.4675 -adversarial prompting 0.4674 0.4968 0.4742 0.4761random prompting 0.4628 0.4711 0.4753 0.5073Table 1: Comparison of various prompting techniques with adversarial hinting. Base CoT alreadyentails in-context steps and hence doesn’t involve hinting. We observe a drop in performance due toboth, adversarial and random hinting, indicating a sensitivity to hinting.Model category Baseline Hinted Adv hint Rand hintSmall models (<=2B) 0.3314 0.3686 0.3143 0.2314Base models 0.5385 0.5104 0.4898 0.2871Instruct tuned models 0.5471 0.5601 0.4985 0.4971Math Fine-tuned models 0.6180 0.6485 0.5581 0.5828Closed-Source models 0.6685 0.7186 0.6557 0.6928Table 2: A comparison of various model types. We find that hinting performs better for all modelsexcept the base models, which in general are worse at instruction following. We also notice a drasticdrop in performance for all model types when given adversarial hints3.4 Model-wise performanceWhile comparing the performance of different models, we observe that closed-source models exhibitthe highest performance, followed by models fine-tuned for mathematical tasks, instruct-tunedmodels, and finally, base models, as shown in table 2. Additionally, variations in model size impactperformance, with smaller models generally performing worse. However, performance also dependson the extent of fine-tuning; for example, the Qwen-2-Math-Instruct model[ 15] achieves comparableresults to GPT-4o-mini[ 6] as shown in Appendix E. Among the 7B and 8B models, Qwen-2-Math-Instruct[ 15] performs best, while Mistral-Instruct[ 16] ranks the lowest. Our observations furtherreveal that base models struggle to incorporate and utilize hints effectively, performing worse than thebaseline, likely due to their limited ability to follow instructions[ 17]. We list our exhaustive results inAppendix D.Further, we also evaluated hinting multimodal models on visual and mathematical tasks and found asimilar improvement due to hinting and deterioration due to adversarial hinting. We provide moredetails in Appendix C.1.4 ConclusionIn this work, we have evaluated the mathematical reasoning abilities of various models and approaches.Our results indicate that providing hints is more effective than giving direct answers because it guidesthe model to the correct solution instead of restricting its search space like few-shot[ 3], one-shot[ 3],and chain of thought[ 4] which lack generalization. However, CoT may outperform hinting in somecases where the example and target problem are very similar. Hence, for math problems withunknown solutions, it is better to provide the user’s knowledge about the problem as hints to improveits reasoning and problem-solving capabilities than a full solution for a similar problem, and hintingis the natural way in which humans solve a reasoning task as well, as only intermediate directionsto approach a problem are required, the rest can be reasoned by humans themselves. Althoughextracting hints requires and extra inference, it is compensated by the increase in performance overother prompting techniques, which are only useful when dealing with similar problems. Finally, wesee the sensitivity to random and adversarial prompting[ 18] techniques demonstrated performanceloss due to adversarial/random hints in few-shot[3], one-shot[3], and chain-of-thought[4] settings.45 Limitations and Future WorkOur work mainly focuses on prompting LLMs with problems as textual input. Testing these techniquesin multi-modal models like VLMs, where geometrical problems can be passed as image inputs, isa possible future direction. Further, due to computational limitations, our experiments involved asmall subset of problems to test the models. These techniques can also be evaluated on larger samplesizes to increase generalizability. Additionally, It is yet to be tested whether hinting can increaseperformance in other general tasks as well.1.Our techniques are yet to be tested on larger models like the Llama 3.1 70B, 450B[ 19], Fal-con 180B[ 20], OPT 175B[ 21], etc., and other closed source models like Claude Sonnet[ 22],Bard[23], etc due to computational limitations.2.We compare the generated answers with the solutions using LLMs, which can introduce adegree of error.3.These techniques can be further evaluated on the entire MATH dataset and other datasetssuch as GSM-8k[24], etc. to ensure a more exhaustive analysis.References[1]Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solvesimple math word problems? In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer,Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, andYichao Zhou, editors, Proceedings of the 2021 Conference of the North American Chapter of theAssociation for Computational Linguistics: Human Language Technologies , pages 2080–2094,Online, June 2021. Association for Computational Linguistics.[2]Jingyuan Ma, Damai Dai, and Zhifang Sui. Large language models are unconscious of unrea-sonability in math problems. arXiv preprint arXiv:2403.19346 , 2024.[3]Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, ArielHerbert-V oss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler,Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, ScottGray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, IlyaSutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle,M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural InformationProcessing Systems , volume 33, pages 1877–1901. Curran Associates, Inc., 2020.[4]Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le,Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.Advances in neural information processing systems , 35:24824–24837, 2022.[5]Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, DawnSong, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset.arXiv preprint arXiv:2103.03874 , 2021.[6] OpenAI Team et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 , 2023.[7]Gemini Team et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokensof context, 2024.[8]Ankit Satpute, Noah Gießing, André Greiner-Petter, Moritz Schubotz, Olaf Teschke, AkikoAizawa, and Bela Gipp. Can llms master math? investigating large language models on mathstack exchange. In Proceedings of the 47th International ACM SIGIR Conference on Researchand Development in Information Retrieval , pages 2316–2320, 2024.[9]Shubham Vatsal and Harsh Dubey. A survey of prompt engineering methods in large languagemodels for different nlp tasks. arXiv preprint arXiv:2407.12994 , 2024.[10] Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui Zhang, and Wenpeng Yin. Largelanguage models for mathematical reasoning: Progresses and challenges. arXiv preprintarXiv:2402.00157 , 2024.5[11] Hongwei Liu, Zilong Zheng, Yuxuan Qiao, Haodong Duan, Zhiwei Fei, Fengzhe Zhou, WenweiZhang, Songyang Zhang, Dahua Lin, and Kai Chen. Mathbench: Evaluating the theory andapplication proficiency of llms with a hierarchical mathematics benchmark. arXiv preprintarXiv:2405.12209 , 2024.[12] Yujun Mao, Yoon Kim, and Yilun Zhou. Champ: A competition-level dataset for fine-grainedanalyses of llms’ mathematical reasoning capabilities. arXiv preprint arXiv:2401.06961 , 2024.[13] Jinlan Fu, Shenzhen Huangfu, Hang Yan, See-Kiong Ng, and Xipeng Qiu. Hint-before-solving prompting: Guiding llms to effectively utilize encoded knowledge. arXiv preprintarXiv:2402.14310 , 2024.[14] Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H Chi, NathanaelSchärli, and Denny Zhou. Large language models can be easily distracted by irrelevant context.InInternational Conference on Machine Learning , pages 31210–31227. PMLR, 2023.[15] An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li,Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprintarXiv:2407.10671 , 2024.[16] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra SinghChaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, LucileSaulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825 , 2023.[17] Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, DennyZhou, and Le Hou. Instruction-following evaluation for large language models. arXiv preprintarXiv:2311.07911 , 2023.[18] Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, LinyiYang, Wei Ye, Yue Zhang, Neil Zhenqiang Gong, et al. Promptbench: Towards evaluating therobustness of large language models on adversarial prompts. arXiv preprint arXiv:2306.04528 ,2023.[19] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle,Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herdof models. arXiv preprint arXiv:2407.21783 , 2024.[20] Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxan-dra Cojocaru, Mérouane Debbah, Étienne Goffinet, Daniel Hesslow, Julien Launay, QuentinMalartic, et al. The falcon series of open language models. arXiv preprint arXiv:2311.16867 ,2023.[21] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen,Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trainedtransformer language models. arXiv preprint arXiv:2205.01068 , 2022.[22] AI Anthropic. The claude 3 model family: Opus, sonnet, haiku. Claude-3 Model Card , 2024.[23] Haotong Qin, Ge-Peng Ji, Salman Khan, Deng-Ping Fan, Fahad Shahbaz Khan, and Luc VanGool. How good is google bard’s visual understanding? an empirical study on open challenges.Machine Intelligence Research , 20(5):605–613, August 2023.[24] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers tosolve math word problems. arXiv preprint arXiv:2110.14168 , 2021.[25] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li,Yu Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in openlanguage models. arXiv preprint arXiv:2402.03300 , 2024.[26] Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Mingjie Zhan, and Hongsheng Li. Measuringmultimodal mathematical reasoning with math-vision dataset, 2024.6A PromptingWe compare 17 different prompting techniques, including base-prompting techniques such as few-shot, one-shot, and chain-of-thought, and their variants involving hinting and adversarial promptingtechniques for rigorous and exhaustive evaluation. We group the approaches as follows:A.1 Base Prompting techniquesIn our baseline approach, we provide the models with the target questions and then prompt themto solve them. For One-Shot[ 3] and Few-shot prompting[ 3], we provide the model with exampleproblems from the same class as the target problem. Since the dataset is divided into difficulty levels,we provide one Level-3 problem for one-shot and 5 examples, one from each level for few-shotprompting. We only provide the final numerical answer without any explanation for these approaches.More details about the difficulty levels are given in B. For Chain-of-Thought prompting[ 4], the modelis given the complete step-by-step solution of the example problem in context instead of the finalnumerical answer only as shown in (Figure 4)Our final baseline prompting techniques involve:•Baseline: Only target question given•One-shot: (Table 6) One example question and final answer pair (Difficulty level: 3) andthen the target question given•Few-shot: (Table 5) Giving five example questions and final answer pairs, each correspond-ing to one of the 5 difficulty levels, and then the target question•Chain Of Thought: (Table 6) Giving one example QnA pair with step-by-step-solution(Difficulty level: 3) and then the target questionA: ??BaselineA: ??Q: If f(x) is an even function and g(x) isan odd function ...A: ??One-shot Few-shotQ: Find r if 3(r − 7) = 4(2 − 2r) + 4A: 3Q: Let a and b be the solutions ofthe equation 2x^2 − 10x + 5 ...Q: "What is the least prime numbergreater than 25 that will have aremainder of 2 when divided by 25?"Q1: ...A1: ...Q2: ...A 2: .........Q5: ...A 5: ...Q: What is the greatest commondivisor of all of the members of the setA: ??Chain Of ThoughtQ: When the binarynumber 100101 110010_2is divided by 4 ....Step1: ...Step2: ...Figure 4: Examples of Base Prompting techniquesA.2 HintingFor our baseline approach, we only provide hints for the target problem. In the one-shot[ 3] andfew-shot hinting cases[ 3], example problems of the same category and its hint (without the finalanswer) are provided to the model. (Figure 5 )•Baseline + Hinting: Providing each question with its corresponding correct hint•One-shot-Hinting: Providing one example Question and Hint pair (Difficulty level: 3) andthen the target question•Few-shot-Hinting: Providing five example Question and Hint pairs, each corresponding toone of the 5 difficulty levels, and then the target questionA.3 Adversarial PromptsTo test the robustness of our approach, for one and few-shot, we adversarially prompt the modelwith an example problem and the incorrect final answer to test how much the models make use ofthe problem-answer pairs provided. We further extend the idea of adversarial prompting to chain of7Hint: Express the sums of the geometricseries using their formulas ...A: ??HintingHint: Use the Binomial Theorem toexpand the expression ...A: ??Q: If f(x) is an even function and g(x) isan odd function ...A: ??One-shot Hinting Few-shot HintingQ: What is the coef ficient of x^8 in theexpansion of (x − 1)^9?Q: Let a and b be the solutions ofthe equation 2x^2 − 10x + 5 ...Q: Consider two infinite geometric series.The first has leading term a, commonratio b, and sum S. Thesecond has a leading term ...Q1: ...Hint 1: ...Q2: ...Hint 2: .........Q5: ...Hint 5: ...Figure 5: Examples of Hint-based Prompting techniques: Hinting, One-shot Hinting, and Few-ShotHintingthought, by prompting the model with an example problem containing an erroneous step-by-stepsolution to the example problem.(Figure 6)We experiment with two different types of adversarial prompting:•Case-I - Random : The model is provided with an example problem of the same categoryas the target problem, but the hint being provided for the example problem is of a differentcategory.•Case-II - Adversarial: We deliberately make errors in the correct hints of the exampleproblems and feed them to the model, such as changing the signs, reordering the steps, etc.We experiment with both random and adversarial prompting for our baseline, one-shot, and few-shotapproaches.We further extend the concept of adversarial prompts to hinting and make adversarial hints to testthe sensitivity of models to these hints. We add the adversarial hints case for one-shot and few-shotprompting cases similarly by replacing the right hints with the adversarial hints.•Baseline + Random-hint: Giving a question with a random hint (hint of some unrelatedquestion)•Baseline + Adversarial-hint: Giving each question with its adversarial hint (wrong hint ofthe same question)•One-shot-Adversarial: Giving one-example QnA pair but with the adversarial answer(Difficulty level: 3) and then the target question•One-shot-Random-Hinting: Giving one example question and random hint (Case-I) pair(Difficulty level: 3) and then the target question•One-shot-Adversarial-Hinting: Giving one example question and adversarial hint (Case-II)pair (Difficulty level: 3) and then the target question•Few-shot-Adversarial: Giving five example QnA pairs with adversarial answers, eachcorresponding to one of the 5 difficulty levels, and then the target question•Few-shot-Random-Hinting: Giving five example questions and random hints (Case-I) pair,each corresponding to one of the 5 difficulty levels, and then the target question•Few-shot-Adversarial-Hinting: Giving five example questions and adversarial hints (Case-II) pair, each corresponding to one of the 5 difficulty levels, and then the target question•Chain-Of-Thought-Adversarial: Giving one example QnA pair with the adversarial(Case-I) step-by-step-solution (Difficulty level: 3) and then the target question•Chain-Of-Thought-Random: Giving one example QnA pair with random (Case-II) step-by-step-solution (Difficulty level: 3) and then the target question8Hint: Equate the volumes of the twocylinders using the formula for thevolume of a cylinde r ...A: ??Random HintA: ??Q: A sphere is inscribed inside ahemisphere ...A: ??Adversarial hint One-shot AdversarialQ: Find the distance between thepoints (−5, −2) and (7, 3).Hint: Use the distance formula, whichcalculates the cube root of the dif ferenceof the squared dif ferences in the x-coordinates and y-coordinatesQ: Consider two infinite geometric series.The first has leading term a, commonratio b, and sum S. Thesecond has a leading term ...Q: A cube with an edge length of 4 unitshas the same volume as a square-basedpyramid ... Adv A: 5Q: Compute the area of the region thatlies above the graph...A: ??Few-shot Random HintingA: ??Q: If f(x) is an even function and g(x) isan odd function ...A: ??Few-shot Adversarial Hinting Chain-Of-Thought AdversarialQ1: ...Adv Hint 1: ...Q2: ...Adv Hint 2: .........Q5: ...Adv Hint 5: ...Q: Let a and b be the solutions ofthe equation 2x^2 − 10x + 5 ...Q1: ...R. Hint 1: ...Q2: ...R. Hint 2: .........Q5: ...R. Hint 5: ...Q: When you divide a binarynumber ...Adv Step 1 :...Adv Step 2: ......One-shot Random HintingOne-shot Adversarial Hinting Few-shot AdversarialChain-Of-Thought RandomQ: What is the sum of all values of y forwhich the expression (y+6)/(y^2 − 5y +4) is undefined.Random Hint: First calculate the volumeof the cube, then set up an equationequating the volume ...Q: Consider the function f (x) = 5x + 4.What is f (1)?A: ??Q: In how many ways can 4 boys and3 girls be seated in a row of 7 chairssuch that at least 2 boys are next toeach other?Adv-Hint: Find the number ofarrangements where no three boys sittogether ...Q: Jack rolls 5 fair six-sideddice. What is the probability ...A: ??Q1: ...Adv A1: ...Q2: ...Adv A2: .........Q5: ...Adv A5: ...Q: A certain ellipse is definedby ...A: ??Q: What is the remainderwhen the base 4 number ...R. Step 1 :...R. Step 2: ......Q: What is the greatest possible productof any two distinct prime numbersA: ??Figure 6: Examples of Adversarial Prompting techniquesB DatasetWe evaluate on a subset of the MATH dataset [ 5]. The MATH dataset consists of problems fromvarious popular math competitions, including the AMC 12, AIME, and more. Owing to the structureof mathematics, these problems have a particular method of solving them with multiple related steps.Humans generally use problem-solving techniques and “heuristics” to solve such problems, thusmaking them a good metric for assessing a model’s problem-solving and reasoning skills[ 5]. Thedataset categorizes the problems into various categories and difficulties. The seven categories arePre-algebra, Algebra, Number Theory, Counting and Probability, Geometry, Intermediate Algebra,and Pre-calculus. The problems are divided into five levels: level 1, including more straightforwardquestions; and level 5, including advanced questions.We enhanced the dataset with hints and adversarial hints for our experiments. To generate hints foreach problem in the dataset, we used the Gemini-1.5-flash model, prompting it to generate hints andadversarial hints related to the question. Due to computational limitations, we used a subset of thedataset, 100 problems from each topic (20 questions of each difficulty), leading to a total sample setof 700 problems.C ExperimentsAll the prompting techniques were tested on a set of models, chosen in a manner to ensure diversityfor our experimentation and testing. Nine open-sourced models and two closed-source have beenused models as listed below:• Gemma-2-2b-it11https://huggingface.co/google/gemma-2-2b-it9• Qwen2-Math-7B-Instruct2• Meta-Llama-3.1-8B-Instruct3• Meta-Llama-3.1-8B4• Mistral-7B-Instruct-v0.35• Qwen2-7B-Instruct6• Qwen2-7B7• Mathstral-7B-v0.18• Deepseek-math-7b-instruct9• Gemini• GPT-4o miniThe ground truth numerical answers and the generated final numerical answers were compared usingDeepSeek-7B-Math-Compare-Answer [ 25]10, fine-tuned to compare mathematical answers with highaccuracy and extract answers from the generated step-by-step solution by the model.C.1 Experiments on VLMsWe evaluated the mathematical visual question answering ability of multi-modal models by providingthem with the text for a question and an image containing the diagram. We tested the visualproblem-solving with four prompting techniques: baseline, hinting, adversarial hinting, and randomhinting. We evaluated these approaches on the multimodal Gemini and GPT-4o mini models. Forour multimodal experiments, we created another dataset using a subset of 700 non-MCQ questionsfrom the Math Vision Dataset[ 26], which contains problems of various levels from 1-5, in increasingdifficulty. We sampled equally from each difficulty level. Hints were generated using the Gemini-1.5-Flash model using the textual content of the question. We also generated their adversarial hints ina similar manner as our other evaluations. For random hinting, we provided the hint from anotherquestion. As seen in Table 3, hinting improves the overall model performance, and the performanceis degraded due to adversarial and random hinting, similar to our other evaluations. We note thatwe could not exhaustively evaluate a broader set of multimodal models on a larger dataset due tocomputational limitations and see it as an interesting future direction to explore.Model Prompting method OverallGemini 1.5Baseline 0.1714Hinting 0.2457Adversarial Hinting 0.1514Random Hinting 0.1657GPT-4o miniBaseline 0.1886Hinting 0.2514Adversarial Hinting 0.1886Random Hinting 0.1714Table 3: Comparison of Visual Mathematical Question and Answering abilities of Multimodalmodels.2https://huggingface.co/Qwen/Qwen2-Math-7B-Instruct3https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct4https://huggingface.co/meta-llama/Meta-Llama-3.1-8B5https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.36https://huggingface.co/Qwen/Qwen2-7B-Instruct7https://huggingface.co/Qwen/Qwen2-7B8https://huggingface.co/mistralai/Mathstral-7B-v0.19https://huggingface.co/deepseek-ai/deepseek-math-7b-instruct10https://huggingface.co/Tianqiao/DeepSeek-7B-Math-Compare-Answer10Image : A: ??BaselineHint: Use supplementary angles andisosceles triangle properties.A: ??Hinting Adversarial HintQ: In isosceles triangle ABC, if BC isextended to a point X such that AC = CX,what is the number of degreesin the measure of angle AXC?Q: In isosceles triangle ABC, if BC isextended to a point X such that AC = CX,what is the number of degrees in themeasure of angle AXC?Image : Adv Hint: Since AC = CX, you can usethe properties of isosceles triangles tofind the measure of angle ACX.A: ??Q: In isosceles triangle ABC, if BC isextended to a point X such that AC = CX,what is the number of degrees in themeasure of angle AXC?Image : Random HintRandom Hint: Since AC = CX, you canuse the properties of isosceles trianglesto find the measure of angle ACX.A: ??Q: In isosceles triangle ABC, if BC isextended to a point X such that AC = CX,what is the number of degreesin the measure of angle AXC?Image : Figure 7: Examples of prompting techniques to test Visual problem solving abilityD ResultsIn this section, all the results from our experiments have been compiled to maintain transparency andclarity. Table 4 contains all the results for all the prompting techniques, individually mentioning theaccuracy of each question type as well as model-wise performance. Figure 8 shows the average of allthe models on a given technique, giving a better comparison of each type’s performance.base hintrandom hintadversarial hintcotcot adversarialcot randomfew shotfew-shot adversarial hintfew-shot hintfew-shot adversarialfew-shot random-hintone-shotone-shot adversarial hintone-shot hintone-shot-adversarialone-shot-random-hint0.40.420.440.460.480.50.520.540.560.580.60.540.560.460.470.480.480.510.510.50.520.490.470.50.470.470.490.48Average ScoreFigure 8: Comparison of Average Scores for Different Methods11Table 4: Performance for all models for all prompting methodsModel Name Algebra Count & Prob Geometry Int. Algebra Number Theory Pre-algebra Pre-calculus OverallBaselineQwen2-Math-7B-Instruct 0.86 0.82 0.68 0.68 0.85 0.9 0.58 0.7657Gemma-2-2b-it 0.48 0.35 0.22 0.2 0.4 0.44 0.24 0.3314Meta-Llama-3.1-8B-Instruct 0.72 0.44 0.42 0.58 0.46 0.76 0.64 0.5743Meta-Llama-3.1-8B 0.56 0.4 0.62 0.59 0.41 0.62 0.7 0.5571Mistral-7B-Instruct-v0.3 0.18 0.21 0.11 0.22 0.12 0.4 0.16 0.1971Qwen2-7B-Instruct 0.66 0.5 0.46 0.48 0.48 0.68 0.38 0.51Qwen2-7B 0.6 0.54 0.58 0.34 0.46 0.68 0.44 0.52Mathstral-7B-v0.1 0.76 0.51 0.44 0.56 0.63 0.78 0.36 0.5771GPT-4o mini 0.92 0.86 0.76 0.58 0.48 0.88 0.61 0.7257Deepseek-math-7b-instruct 0.8 0.38 0.4 0.44 0.46 0.68 0.42 0.5114Gemini-1.5-Flash 0.78 0.62 0.52 0.44 0.5 0.78 0.65 0.6114HintQwen2-Math-7B-Instruct 0.91 0.8 0.66 0.75 0.82 0.9 0.7 0.7914Gemma-2-2b-it 0.46 0.28 0.3 0.36 0.48 0.5 0.2 0.3686Meta-Llama-3.1-8B-Instruct 0.76 0.54 0.56 0.56 0.58 0.72 0.58 0.6143Meta-Llama-3.1-8B 0.46 0.56 0.49 0.46 0.47 0.58 0.6 0.5143Mistral-7B-Instruct-v0.3 0.32 0.34 0.16 0.32 0.18 0.34 0.34 0.2857Qwen2-7B-Instruct 0.58 0.42 0.52 0.42 0.62 0.58 0.4 0.5057Qwen2-7B 0.54 0.56 0.52 0.42 0.52 0.5 0.48 0.5057GPT-4o mini 0.96 0.88 0.78 0.52 0.56 0.94 0.74 0.7686Mathstral-7B-v0.1 0.69 0.5 0.51 0.49 0.68 0.72 0.55 0.5886Deepseek-math-7b-instruct 0.76 0.52 0.56 0.46 0.56 0.58 0.52 0.5657Gemini-1.5-Flash 0.78 0.78 0.58 0.54 0.52 0.86 0.62 0.6686Adversarial-HintQwen2-Math-7B-Instruct 0.86 0.72 0.62 0.74 0.82 0.80 0.62 0.7401Gemma-2-2B-it 0.48 0.26 0.28 0.18 0.28 0.42 0.30 0.3143Meta-Llama-3.1-8B-Instruct 0.70 0.44 0.52 0.46 0.42 0.58 0.44 0.5085Meta-Llama-3.1-8B 0.46 0.36 0.46 0.60 0.32 0.56 0.65 0.4857Mistral-7B-Instruct-v0.3 0.20 0.18 0.10 0.28 0.16 0.24 0.16 0.1886Qwen2-7B-Instruct 0.56 0.46 0.44 0.40 0.52 0.60 0.44 0.4886Qwen2-7B 0.58 0.42 0.40 0.44 0.54 0.58 0.50 0.4942GPT-4o Mini 0.92 0.90 0.76 0.50 0.56 0.92 0.68 0.7486Mathstral-7B-v0.1 0.60 0.46 0.34 0.50 0.60 0.66 0.36 0.5028Deepseek-Math-7B-Instruct 0.70 0.38 0.36 0.28 0.46 0.54 0.30 0.4314Gemini-1.5-Flash 0.82 0.70 0.44 0.34 0.44 0.78 0.42 0.5629Random-HintQwen2-Math-7B-Instruct 0.88 0.72 0.65 0.70 0.78 0.86 0.54 0.7314Gemma-2-2B-it 0.38 0.14 0.20 0.16 0.22 0.42 0.10 0.2314Meta-Llama-3.1-8B-Instruct 0.62 0.50 0.46 0.48 0.51 0.70 0.44 0.5314Meta-Llama-3.1-8B 0.24 0.24 0.14 0.18 0.18 0.20 0.20 0.1971Mistral-7B-Instruct-v0.3 0.20 0.20 0.06 0.16 0.16 0.26 0.06 0.1571Qwen2-7B-Instruct 0.62 0.44 0.34 0.42 0.50 0.58 0.34 0.4629Qwen2-7B 0.38 0.32 0.34 0.40 0.30 0.54 0.36 0.3771Mathstral-7B-v0.1 0.68 0.48 0.48 0.50 0.64 0.72 0.32 0.5457Deepseek-Math-7B-Instruct 0.66 0.38 0.40 0.32 0.58 0.62 0.34 0.4714Gemini-1.5-Flash 0.88 0.66 0.60 0.42 0.50 0.84 0.56 0.6371GPT-4o Mini 0.96 0.88 0.76 0.48 0.58 0.88 0.70 0.7486Chain of ThoughtQwen2-Math-7B-Instruct 0.88 0.76 0.60 0.76 0.84 0.85 0.58 0.7514Gemma-2-2B-it 0.42 0.26 0.24 0.26 0.32 0.44 0.18 0.3029Meta-Llama-3.1-8B-Instruct 0.82 0.42 0.42 0.32 0.44 0.62 0.28 0.4743Meta-Llama-3.1-8B 0.26 0.18 0.40 0.28 0.22 0.46 0.14 0.2771Mistral-7B-Instruct-v0.3 0.26 0.28 0.12 0.24 0.10 0.40 0.10 0.2143Qwen2-7B-Instruct 0.26 0.36 0.32 0.36 0.22 0.34 0.24 0.3000Qwen2-7B 0.60 0.54 0.58 0.48 0.58 0.52 0.54 0.5486Mathstral-7B-v0.1 0.72 0.52 0.48 0.52 0.62 0.70 0.28 0.5486Deepseek-Math-7B-Instruct 0.78 0.46 0.42 0.40 0.48 0.66 0.30 0.5000Gemini-1.5-Flash 0.78 0.72 0.62 0.48 0.52 0.84 0.56 0.6457GPT-4o Mini 0.94 0.88 0.74 0.46 0.58 0.86 0.68 0.7343CoT AdversarialQwen2-Math-7B-Instruct 0.88 0.72 0.62 0.74 0.88 0.88 0.62 0.7629Gemma-2-2B-it 0.44 0.26 0.30 0.20 0.36 0.40 0.16 0.3029Meta-Llama-3.1-8B 0.28 0.14 0.06 0.22 0.20 0.42 0.10 0.2029Mistral-7B-Instruct-v0.3 0.14 0.26 0.14 0.16 0.08 0.28 0.10 0.1657Qwen2-7B 0.60 0.70 0.54 0.42 0.52 0.60 0.58 0.5657Meta-Llama-3.1-8B 0.68 0.50 0.40 0.38 0.50 0.66 0.30 0.4886Qwen2-7B-Instruct 0.24 0.28 0.22 0.36 0.22 0.36 0.28 0.2800Mathstral-7B-v0.1 0.78 0.54 0.54 0.46 0.60 0.74 0.30 0.5657Deepseek-Math-7B-Instruct 0.70 0.44 0.40 0.38 0.48 0.66 0.42 0.4971Gemini-1.5-Flash 0.90 0.72 0.56 0.40 0.56 0.82 0.56 0.6457GPT-4o Mini 0.94 0.90 0.76 0.52 0.58 0.92 0.70 0.7600CoT RandomQwen2-Math-7B-Instruct 0.82 0.74 0.66 0.74 0.84 0.82 0.58 0.7429Gemma-2-2B-it 0.58 0.24 0.32 0.28 0.32 0.38 0.18 0.3286Meta-Llama-3.1-8B 0.58 0.30 0.54 0.76 0.34 0.32 0.28 0.4457Mistral-7B-Instruct-v0.3 0.16 0.18 0.08 0.24 0.16 0.42 0.16 0.2000Qwen2-7B 0.52 0.48 0.56 0.48 0.50 0.70 0.44 0.5257Meta-Llama-3.1-8B 0.74 0.46 0.38 0.48 0.60 0.65 0.38 0.5257Qwen2-7B-Instruct 0.40 0.28 0.26 0.36 0.38 0.34 0.36 0.3400Mathstral-7B-v0.1 0.78 0.54 0.44 0.46 0.58 0.76 0.32 0.5543Deepseek-Math-7B-Instruct 0.78 0.44 0.48 0.40 0.56 0.66 0.34 0.5129Gemini-1.5-Flash 0.82 0.64 0.70 0.48 0.54 0.84 0.58 0.6571GPT-4o Mini 0.96 0.86 0.78 0.52 0.52 0.86 0.66 0.737112Model Name Algebra Count & Prob Geometry Int. Algebra Number Theory Pre-algebra Pre-calculus OverallFew-ShotQwen2-Math-7B-Instruct 0.86 0.72 0.62 0.72 0.82 0.86 0.7 0.7571Gemma-2-2b-it 0.48 0.26 0.14 0.26 0.32 0.42 0.14 0.2886Meta-Llama-3.1-8B-Instruct 0.68 0.51 0.52 0.48 0.44 0.66 0.5 0.5429Meta-Llama-3.1-8B 0.42 0.32 0.8 0.7 0.46 0.76 0.56 0.5743Mistral-7B-Instruct-v0.3 0.16 0.26 0.16 0.26 0.06 0.16 0.06 0.16Qwen2-7B-Instruct 0.34 0.3 0.26 0.3 0.34 0.28 0.38 0.3143Qwen2-7B 0.36 0.5 0.58 0.34 0.5 0.52 0.34 0.4486Mathstral-7B-v0.1 0.72 0.46 0.48 0.4 0.68 0.72 0.34 0.5429Deepseek-math-7b-instruct 0.74 0.5 0.44 0.44 0.46 0.6 0.32 0.5Gemini-1.5-Flash 0.8 0.84 0.74 0.74 0.7 0.8 0.7 0.76GPT-4o mini 0.92 0.92 0.76 0.52 0.54 0.92 0.64 0.7457Few-shot HintQwen2-Math-7B-Instruct 0.78 0.74 0.56 0.76 0.82 0.85 0.58 0.7257Gemma-2-2b-it 0.46 0.24 0.24 0.3 0.3 0.36 0.16 0.2943Meta-Llama-3.1-8B-Instruct 0.72 0.6 0.5 0.44 0.52 0.7 0.4 0.5543Meta-Llama-3.1-8B 0.64 0.44 0.96 1 0.6 0.8 0.92 0.7657Mistral-7B-Instruct-v0.3 0.22 0.24 0.18 0.18 0.14 0.32 0.2 0.2114Qwen2-7B-Instruct 0.4 0.28 0.18 0.28 0.16 0.22 0.26 0.2543Qwen2-7B 0.54 0.3 0.58 0.38 0.46 0.5 0.46 0.46Deepseek-math-7b-instruct 0.74 0.48 0.4 0.36 0.46 0.68 0.44 0.5086Mathstral-7B-v0.1 0.64 0.44 0.4 0.38 0.6 0.7 0.36 0.5029Gemini-1.5-Flash 0.74 0.72 0.68 0.58 0.52 0.76 0.5 0.6529GPT-4o mini 0.9 0.88 0.88 0.5 0.54 0.92 0.74 0.7686Few-Shot AdversarialQwen2-Math-7B-Instruct 0.82 0.80 0.54 0.68 0.84 0.82 0.62 0.73Gemma-2-2b-it 0.50 0.28 0.18 0.20 0.36 0.38 0.12 0.29Meta-Llama-3.1-8B-Instruct 0.60 0.48 0.50 0.52 0.52 0.70 0.46 0.54Meta-Llama-3.1-8B 0.34 0.40 0.86 0.72 0.12 0.64 0.40 0.50Mistral-7B-Instruct-v0.3 0.14 0.22 0.08 0.24 0.16 0.22 0.10 0.17Qwen2-7B-Instruct 0.52 0.26 0.34 0.26 0.28 0.30 0.28 0.32Qwen2-7B 0.44 0.56 0.40 0.34 0.50 0.52 0.30 0.44Deepseek-Math-7B-Instruct 0.76 0.54 0.40 0.40 0.52 0.68 0.32 0.51Mathstral-7B-v0.1 0.52 0.54 0.42 0.40 0.64 0.74 0.32 0.51Gemini-1.5-Flash 0.76 0.62 0.60 0.42 0.42 0.82 0.42 0.58GPT-4o mini 0.94 0.88 0.78 0.56 0.64 0.88 0.70 0.77Few-Shot Adversarial HintQwen2-Math-7B-Instruct.json 0.86 0.76 0.6 0.64 0.88 0.82 0.56 0.7314Gemma-2-2b-it.json 0.42 0.32 0.22 0.2 0.34 0.38 0.08 0.28Mathstral-7B-v0.1.json 0.72 0.5 0.36 0.5 0.62 0.78 0.44 0.56Meta-Llama-3.1-8B-Instruct.json 0.72 0.48 0.54 0.48 0.58 0.74 0.42 0.5657Meta-Llama-3.1-8B.json 0.22 0.2 0.66 0.74 0.22 0.8 0.36 0.4571Mistral-7B-Instruct-v0.3.json 0.2 0.24 0.14 0.12 0.16 0.36 0.14 0.1943Qwen2-7B-Instruct.json 0.32 0.3 0.14 0.2 0.2 0.32 0.24 0.2457Qwen2-7B.json 0.56 0.56 0.62 0.46 0.6 0.58 0.52 0.5571Deepseek-math-7b-instruct.json 0.65 0.48 0.44 0.44 0.56 0.66 0.4 0.5171Gemini-1.5-Flash 0.82 0.66 0.6 0.54 0.46 0.78 0.52 0.6257GPT-4o mini 0.92 0.82 0.8 0.54 0.48 0.88 0.68 0.7314Few-Shot Random HintQwen2-Math-7B-Instruct 0.90 0.66 0.58 0.74 0.74 0.82 0.54 0.71Mathstral-7B-v0.1 0.74 0.60 0.38 0.44 0.58 0.64 0.40 0.54Gemma-2-2b-it 0.44 0.24 0.16 0.18 0.36 0.42 0.10 0.27Meta-Llama-3.1-8B-Instruct 0.70 0.54 0.42 0.54 0.58 0.74 0.40 0.56Meta-Llama-3.1-8B 0.24 0.16 0.62 0.44 0.46 0.76 0.56 0.46Mistral-7B-Instruct-v0.3 0.20 0.16 0.16 0.24 0.16 0.24 0.08 0.18Qwen2-7B-Instruct 0.18 0.22 0.18 0.20 0.16 0.26 0.20 0.20Qwen2-7B 0.26 0.44 0.36 0.54 0.34 0.38 0.38 0.39Deepseek-Math-7B-Instruct 0.74 0.48 0.42 0.44 0.46 0.66 0.32 0.50Gemini-1.5-Flash 0.78 0.72 0.56 0.46 0.50 0.80 0.52 0.62GPT-4o mini 0.92 0.90 0.82 0.52 0.60 0.90 0.60 0.75One-ShotQwen2-Math-7B-Instruct 0.80 0.72 0.56 0.72 0.74 0.86 0.40 0.69Mathstral-7B-v0.1 0.66 0.52 0.48 0.50 0.58 0.72 0.38 0.55Gemma-2-2b-it 0.44 0.30 0.12 0.18 0.44 0.40 0.18 0.29Meta-Llama-3.1-8B-Instruct 0.74 0.48 0.50 0.50 0.58 0.82 0.54 0.59Meta-Llama-3.1-8B 0.48 0.24 0.60 0.40 0.24 0.30 0.34 0.37Mistral-7B-Instruct-v0.3 0.20 0.20 0.06 0.12 0.16 0.28 0.12 0.16Qwen2-7B-Instruct 0.60 0.42 0.30 0.34 0.51 0.64 0.44 0.47Qwen2-7B 0.52 0.56 0.42 0.50 0.52 0.52 0.44 0.50Deepseek-Math-7B-Instruct 0.72 0.42 0.50 0.36 0.40 0.64 0.38 0.49Gemini-1.5-Flash 0.74 0.68 0.74 0.42 0.52 0.74 0.54 0.63GPT-4o mini 0.88 0.82 0.80 0.54 0.62 0.88 0.62 0.74One-shot HintQwen2-Math-7B-Instruct 0.80 0.80 0.68 0.68 0.80 0.80 0.52 0.73Mathstral-7B-v0.1 0.65 0.52 0.40 0.46 0.58 0.74 0.40 0.53Gemma-2-2b-it 0.54 0.22 0.24 0.18 0.32 0.48 0.14 0.30Deepseek-Math-7B-Instruct 0.68 0.42 0.42 0.46 0.44 0.60 0.36 0.48Meta-Llama-3.1-8B-Instruct 0.78 0.54 0.44 0.56 0.54 0.66 0.34 0.55Meta-Llama-3.1-8B 0.14 0.10 0.12 0.18 0.04 0.42 0.14 0.16Mistral-7B-Instruct-v0.3 0.28 0.26 0.08 0.22 0.12 0.30 0.18 0.21Qwen2-7B-Instruct 0.48 0.38 0.32 0.30 0.40 0.38 0.42 0.38Qwen2-7B 0.42 0.48 0.52 0.46 0.38 0.54 0.32 0.45Gemini-1.5-Flash 0.78 0.62 0.70 0.44 0.48 0.80 0.46 0.61GPT-4o mini 0.96 0.86 0.74 0.48 0.54 0.90 0.68 0.7413Model Name Algebra Count & Prob Geometry Int. Algebra Number Theory Pre-algebra Pre-calculus OverallOne-Shot AdversarialQwen2-Math-7B-Instruct 0.76 0.80 0.62 0.66 0.85 0.84 0.48 0.71Mathstral-7B-v0.1 0.74 0.50 0.46 0.50 0.51 0.82 0.42 0.57Gemma-2-2b-it 0.46 0.28 0.20 0.26 0.30 0.38 0.16 0.29Meta-Llama-3.1-8B-Instruct 0.70 0.58 0.44 0.58 0.42 0.84 0.44 0.57Meta-Llama-3.1-8B 0.36 0.46 0.18 0.26 0.30 0.26 0.32 0.31Mistral-7B-Instruct-v0.3 0.24 0.16 0.12 0.16 0.12 0.32 0.14 0.18Qwen2-7B-Instruct 0.56 0.44 0.46 0.48 0.38 0.60 0.36 0.47Qwen2-7B 0.54 0.44 0.50 0.28 0.40 0.46 0.28 0.41Deepseek-Math-7B-Instruct 0.66 0.48 0.44 0.34 0.44 0.70 0.32 0.48Gemini-1.5-Flash 0.76 0.70 0.64 0.40 0.48 0.78 0.51 0.61GPT-4o mini 0.94 0.88 0.78 0.50 0.54 0.90 0.66 0.74One-Shot Adversarial HintQwen2-Math-7B-Instruct 0.78 0.78 0.70 0.70 0.82 0.82 0.44 0.72Mathstral-7B-v0.1 0.54 0.52 0.42 0.52 0.64 0.78 0.36 0.54Deepseek-Math-7B-Instruct 0.70 0.56 0.40 0.38 0.48 0.60 0.38 0.50Gemma-2-2b-it 0.44 0.22 0.22 0.14 0.32 0.42 0.22 0.28Meta-Llama-3.1-8B-Instruct 0.70 0.64 0.34 0.56 0.51 0.74 0.32 0.55Meta-Llama-3.1-8B 0.04 0.20 0.18 0.12 0.02 0.12 0.32 0.14Mistral-7B-Instruct-v0.3 0.18 0.28 0.10 0.28 0.22 0.32 0.14 0.22Qwen2-7B-Instruct 0.40 0.40 0.36 0.44 0.46 0.48 0.30 0.41Qwen2-7B 0.54 0.51 0.51 0.36 0.46 0.60 0.42 0.49Gemini-1.5-Flash 0.78 0.66 0.76 0.42 0.42 0.80 0.48 0.62GPT-4o mini 0.94 0.88 0.82 0.54 0.56 0.88 0.68 0.76One-Shot Random HintQwen2-Math-7B-Instruct 0.88 0.78 0.60 0.60 0.82 0.82 0.44 0.71Mathstral-7B-v0.1 0.68 0.62 0.46 0.46 0.64 0.78 0.36 0.57Gemma-2-2b-it 0.51 0.24 0.18 0.22 0.30 0.42 0.20 0.30Meta-Llama-3.1-8B-Instruct 0.60 0.58 0.52 0.54 0.52 0.74 0.32 0.55Meta-Llama-3.1-8B 0.06 0.20 0.18 0.22 0.02 0.12 0.32 0.16Mistral-7B-Instruct-v0.3 0.22 0.26 0.10 0.14 0.22 0.32 0.14 0.20Qwen2-7B-Instruct 0.51 0.34 0.48 0.48 0.46 0.48 0.30 0.44Qwen2-7B 0.44 0.44 0.56 0.38 0.46 0.60 0.42 0.47Deepseek-Math-7B-Instruct 0.70 0.36 0.40 0.48 0.48 0.60 0.38 0.49Gemini-1.5-Flash 0.72 0.70 0.62 0.36 0.50 0.86 0.50 0.61GPT-4o mini 0.98 0.90 0.74 0.54 0.48 0.92 0.66 0.75E TablesTable 4: Few-Shot Prompting Examples for All Math Categories used for our experimentationCategory Few shotAlgebra1.Example Problem : "Letf(x) =(ax+ 3,ifx >2,x−5 if−2≤x≤2,2x−bifx <−2.Finda+bif the piecewise function is continuous (which means that itsgraph can be drawn without lifting your pencil from the paper)."Answer : 02.Example Problem : "Sixteen is 64 %of what number?"Answer : 253.Example Problem : "Karl was attempting to calculate economic figures.He found the following equation to be true:fp−w= 10000Iff= 5andw= 5 + 125 i, what is p?"Answer :2001 + 25 i4.Example Problem : "What is the sum of all values of yfor which theexpressiony+6y2−5y+4is undefined?"Answer : 55.Example Problem : "Find the distance between the points (−5,−2)and(7,3)."14Category Few shotAnswer : 13Counting and Probability1.Example Problem : "What is the value of 93+ 3(92) + 3(9) + 1 ?"Answer : 10002.Example Problem : "What is the coefficient of x8in the expansion of(x−1)9?"Answer : -93.Example Problem : "The Smith family has 4 sons and 3 daughters. Inhow many ways can they be seated in a row of 7 chairs such that at least 2boys are next to each other?"Answer : 48964.Example Problem : "John draws a regular five pointed star in the sand,and at each of the 5 outward-pointing points and 5 inward-pointing pointshe places one of ten different sea shells. How many ways can he placethe shells, if reflections and rotations of an arrangement are consideredequivalent?"Answer : 3628805.Example Problem : "Compute179. You are told that156= 5005 and158= 6435 ."Answer : 24310Geometry1.Example Problem : "Square ABCD has its center at (8,−8)and hasan area of 4 square units. The top side of the square is horizontal. Thesquare is then dilated with the dilation center at (0,0) and a scale factor oftwo. What are the coordinates of the vertex of the image of square ABCDthat is farthest from the origin? Give your answer as an ordered pair."Answer : (18,-18)2.Example Problem : "In triangle ABC , we have that EandFaremidpoints of sides ACandAB, respectively. The area of △ABC is 24square units. How many square units are in the area of △CEF ?"Answer : 63.Example Problem : "The consecutive angles of a particular trapezoidform an arithmetic sequence. If the largest angle measures 120◦, what isthe measure of the smallest angle?"Answer :60◦4.Example Problem : "Triangle ABC with vertices A(−2,0),B(1,4)andC(−3,2)is reflected over the y-axis to form triangle A′B′C′. Whatis the length of a segment drawn from CtoC′?"Answer : 65.Example Problem : "A cube with an edge length of 4 units has thesame volume as a square-based pyramid with base edge lengths of 8 unitsand a height of hunits. What is the value of h?"Answer : 315Category Few shotIntermediate Algebra1.Example Problem : "Let a1, a2, . . . be a sequence for which a1= 2,a2= 3, andan=an−1an−2for each positive integer n≥3. What is a2006?"Answer : 32.Example Problem : "When a polynomial is divided by 2x2−7x+ 18,what are the possible degrees of the remainder? Enter all the possiblevalues, separated by commas."Answer : 0,13.Example Problem : "Karl was attempting to calculate economic figures.He found the following equation to be true:fp−w= 10000Iff= 5 andw= 5 + 125 i, what is p?""Let aandbbe nonzero realnumbers such that(2−7i)(a+bi)is pure imaginary. Findab."Answer : -7/24.Example Problem : "Find all real solutions to x4+ (2−x)4= 34 .Enter all the solutions, separated by commas."Answer :1 +√2,1−√25.Example Problem : Shown below are rows 1, 2, and 3 of Pascal’striangle.1 11 2 11 3 3 1Let(ai),(bi),(ci)be the sequence, from left to right, of elements in the2005th, 2006th, and 2007th rows, respectively, with the leftmost elementoccurring at i= 0.Compute2006Xi=0bici−2005Xi=0aibi.Answer : 1/2Number theory1.Example Problem : "IfAAA 4can be expressed as 33b, where Ais adigit in base 4 and bis a base greater than 5, what is the smallest possiblesumA+b?"Answer : 72.Example Problem : "Abigail, Beatrice, and Carson combine their eggsto sell them at the market. If Abigail has 37 eggs, Beatrice has 49 eggs,and Carson has 14 eggs, and if eggs can only be sold in cartons of 12, howmany eggs will be left over if all cartons are sold?"Answer : 43.Example Problem : "For how many positive integers ndoes1nyield aterminating decimal with a non-zero hundredths digit?"Answer : 1116Category Few shot4.Example Problem : "Find the remainder when the sum75 + 76 + 77 + 78 + 79 + 80 + 81 + 82is divided by 16."Answer : 45.Example Problem : "When the binary number 100101110010 2isdivided by 4, what is the remainder (give your answer in base 10)?"Answer : 2Pre-algebra1.Example Problem : "Find the area in square feet of a square with aperimeter of 32ft."Answer : 642.Example Problem : "Find rif3(r−7) = 4(2 −2r) + 4 ."Answer : 33.Example Problem : "Megan has lost Fatima’s phone number. Meganknows that the first three digits are either 296 or 299. The remaining fourdigits are 0, 1, 6 and 7, but she isn’t sure of the order of these digits. IfMegan randomly dials a seven-digit number that meets these conditions,what is the probability that she dials Fatima’s correct number? Expressyour answer as a common fraction."Answer : 1/484.Example Problem : "What is25divided by 3?"Answer : 2/155.Example Problem : "Twenty gremlins and fifteen imps are at theAnnual Mischief Convention. The imps have had a lot of in-fighting latelyand refuse to shake hands with each other, but they readily shake handswith all of the gremlins. Meanwhile, all the gremlins are quite friendlyand shake hands with all of the other gremlins as well as imps. Each pairof creatures shakes hands at most once. How many handshakes were atthe convention?"Answer : 490Pre-calculus1.Example Problem : Compute7 3−1 2.Answer : 172.Example Problem : "LetA= 0 1 21 0 12 1 0!.There exist constants p,q, andrsuch thatA3+pA2+qA+rI=0,where Iand0are the 3×3identity matrix and zero matrix, respectively.Enter the ordered triple (p, q, r )."Answer : (0,-6,-4)17Category Few shot3.Example Problem : "Let Sbe a region in the plane with area 10. Whenwe apply the matrix2 17−3toS,we obtain the region S′.Find the area of S′."Answer : 1304.Example Problem : "Find the least positive integer nsuch that1sin 45◦sin 46◦+1sin 47◦sin 48◦+···+1sin 133◦sin 134◦=1sinn◦.Answer : 15.Example Problem : "Find the area of the triangle with vertices (−1,4),(7,0),and(11,5)."Answer : 28Table 5: Chain-Of-Thought Prompting Examples for All Math Categories used for experimentationCategory COTAlgebraExample Problem : "What is the sum of all values of yfor which theexpressiony+6y2−5y+4is undefined?"Solution :*Step 1 : Understanding the Problem: The expression is undefined whenits denominator is zero. We need to find the values of ythat make thedenominator zero.*Step 2 : Identify the Denominator: The denominator is y2−5y+ 4. Weneed to find values of yfor which this quadratic expression equals zero.*Step 3 : Set the Denominator to Zero: Set the quadratic equation to zero:y2−5y+ 4 = 0 .*Step 4 : Finding the Sum of the Zeros: The sum of the solutions (orzeros) of a quadratic equation ax2+bx+c= 0is given by −ba.*Step 5 : Apply the Formula: Here, a= 1,b=−5, andc= 4. Using theformula: Sum of the zeros =−−51= 5.* Step 6 : Conclusion: The sum of the zeros of y2−5y+ 4is5.Counting and ProbabilityExample Problem : "What is the coefficient of x8in the expansion of(x−1)9?"Solution :*Step 1 : Understanding the Problem: Find the coefficient of a specificterm in the expansion of (x−1)9using the Binomial Theorem.18Category COT* Step 2 : Recall the Binomial Theorem: For (a+b)n, the expansion is:(a+b)n=nXk=0nkan−kbkwherenkis the binomial coefficient.*Step 3 : Identify the Term: We’re interested in the term where x8appearsin the expansion of (x−1)9. In the general term, a=xandb=−1.* Step 4 : Apply the Binomial Theorem: The general term is given by:9kx9−k(−1)kForx8,9−k= 8, sok= 1.*Step 5 : Calculate the Coefficient: Substitute k= 1 to find the coeffi-cient: 91x8(−1)1= 9×(−1) =−9*Step 6 : Conclusion: The coefficient of x8in the expansion of (x−1)9is−9.GeometryExample Problem : "A cube with an edge length of 4 units has the samevolume as a square-based pyramid with base edge lengths of 8 units and aheight of hunits. What is the value of h?"Solution :*Step 1 : Determine the V olume of the Cube: The volume of the cube isgiven by 43. We calculate this as:43= 64So, the volume of the cube is 64 cubic units.*Step 2 : Write the V olume Formula for the Pyramid: The volume Vof apyramid is given by:V=13×base area ×heightFor this pyramid, the base area is 82. Thus, the volume formula becomes:V=13×82×h.*Step 3 : Set Up the Equation: We know the volume of the pyramid isequal to the volume of the cube, which is 64. So, we set up the equation:64 =13×82×h.* Step 4 : Simplify the Equation: Calculate 82:82= 6419Category COTSubstitute this into the volume formula:64 =643×h.* Step 5 : Solve for h: Rearrange the equation to solve for h:64 =643×hTo isolate h, multiply both sides by 3:64×3 = 64 ×h. Divide both sides by 64:h=64×364= 3* Step 6 : Conclusion: Therefore, the height hof the pyramid is 3.Intermediate AlgebraExample Problem : "Let aandbbe nonzero real numbers such that(2−7i)(a+bi)is pure imaginary. Findab."Solution :*Step 1 : Expand the Expression: Expand the given expression using thedistributive property:(2−7i)(a+bi) = 2( a+bi)−7i(a+bi)= 2a+ 2bi−7ai−7bi2*Step 2 : Simplify Using i2=−1: Recall that i2=−1, so−7bi2= 7b.Substituting this gives:= 2a+ 2bi−7ai+ 7b* Step 3 : Combine Like Terms: Group the real and imaginary parts:= (2a+ 7b) + (−7a+ 2b)i*Step 4 : Identify the Pure Imaginary Condition: The expression is pureimaginary, meaning the real part must be zero. Set the real part equal tozero:2a+ 7b= 0* Step 5 : Solve forab: Rearrange the equation to solve for a:2a=−7bDivide by b:ab=−72* Step 6 : Conclusion: Therefore, the value ofabis−72.20Category COTNumber TheoryExample Problem : "When the binary number 100101110010 2is dividedby 4, what is the remainder (give your answer in base 10)?"Solution :*Step 1 : Understand the Binary Number Representation: The binarynumber bkbk−1. . . b 2b1b0represents a decimal number where each digitbiis multiplied by 2i. The decimal value of the binary number is givenby:2kbk+ 2k−1bk−1+···+ 4b2+ 2b1+b0*Step 2 : Find the Remainder When Dividing by 4: To find the remainderwhen this number is divided by 4, observe that only the last two binarydigits affect the remainder. This is because 22= 4 is the base of themodulus. Therefore:Remainder = 2b1+b0*Step 3 : Apply the Method to the Given Binary Number: Consider thebinary number 100101110010 2. Identify the last two digits of this binarynumber:Last two digits = 10*Step 4 : Calculate the Remainder: Use the formula 2b1+b0withb1= 1andb0= 0:Remainder = 2·1 + 0 = 2*Step 5 : Conclusion: Therefore, when the binary number100101110010 2is divided by 4, the remainder is 2.Pre-algebraExample Problem : "Find rif3(r−7) = 4(2 −2r) + 4 "Solution :*Step 1 : Expand Both Products: We start with the equation involvingproducts. Expanding both products, we get:3r−3·7 = 4·2−4·2r+ 4* Step 2 : Calculate the Products: Compute each product separately:3r−21 = 8 −8r+ 4*Step 3 : Simplify the Right-Hand Side: Combine the terms on the right-hand side:8−8r+ 4 = 12 −8rThus, the equation becomes:3r−21 = 12 −8r21Category COT*Step 4 : Solve for r: To isolate r, first add 8rto both sides of theequation:3r−21 + 8 r= 12−8r+ 8rThis simplifies to:11r−21 = 12* Step 5 : Add 21 to Both Sides: To isolate 11r, add 21 to both sides:11r−21 + 21 = 12 + 21This simplifies to:11r= 33* Step 6 : Solve for r: Divide both sides by 11 to find r:r=3311= 3* Step 7 : Conclusion: Therefore, the value of ris3.Pre-calculusExample Problem : "If cosx+ cos 2 x+ cos 3 x= 3, then find sinx+sin 2x+ sin 3 x."Solution :•Step 1: Understand the Given Expression: We are given thatcosx+ cos 2 x+ cos 3 x= 3.•Step 2: Identify Possible Values for the Cosine Function: Recallthat the maximum value of cosθis 1. Therefore, for cosx+cos 2x+ cos 3 x= 3, each term must be at its maximum value,which means cosx= cos 2 x= cos 3 x= 1.•Step 3: Solve for x: For cos 2x= 1, substitute x= 2nπintocos 2x, giving cos 4nπ= 1, which is true. For cos 3x= 1,substitute x= 2nπintocos 3x, giving cos 6nπ= 1, which istrue. Thus, x= 2nπsatisfies all conditions.•Step 4: Calculate sinx+ sin 2 x+ sin 3 x: Forx= 2nπ, we havesinx= sin 2 nπ= 0. Similarly, sin 2x= sin 4 nπ= 0. Andsin 3x= sin 6 nπ= 0.•Step 5: Conclusion: Adding these results, we get sinx+sin 2 x+sin 3x= 0 + 0 + 0 = 0 . Therefore, the value of sinx+ sin 2 x+sin 3xis0.22Table 6: One-Shot Prompting Examples for All Math Categories used for our experimentationCategory One shotAlgebraExample Problem : "What is the sum of all values of yfor which theexpressiony+6y2−5y+4is undefined?"Answer : 5Counting and ProbabilityExample Problem : "What is the coefficient of x8in the expansion of(x−1)9?"Answer : -9GeometryExample Problem : "A cube with an edge length of 4 units has the samevolume as a square-based pyramid with base edge lengths of 8 units and aheight of hunits. What is the value of h?"Answer : 3Intermediate algebraExample Problem : "Let aandbbe nonzero real numbers such that(2−7i)(a+bi)is pure imaginary. Findab."Answer : 7/2Number theoryExample Problem : "When the binary number 100101110010 2is dividedby 4, what is the remainder (give your answer in base 10)?"Answer : 2Pre-algebraExample Problem : "Find rif3(r−7) = 4(2 −2r) + 4 ."Answer : 3Pre-calculusExample Problem : "If cosx+ cos 2 x+ cos 3 x= 3, then find sinx+sin 2x+ sin 3 x."Answer : 023 |
eQrkAPcGRF | Math2Sym: A System for Solving ElementaryProblems via Large Language Models and SymbolicSolversMinh Phu Nguyen1Minh Phuong Pham1Tuan Minh Kha1,2†Minh Man Ngo1,21VNUHCM-University of Science2Maverick [email protected] , [email protected]@finsavvy.vn, [email protected] models for solving math word problems (MWPs) often struggle tocapture both linguistic context and arithmetic reasoning. We propose Math2Sym, anovel approach integrating large language models (LLMs) with symbolic solvers.This method leverages LLMs’ language comprehension and symbolic computa-tion’s precision to efficiently convert MWPs into solvable symbolic form. Weintroduce the EMSF dataset for training models to formalize math problems acrossvarious complexities. On our defined test set benchmark, fine-tuned models outper-form GPT-3.5 by 17% in few-shot tasks and perform comparably to GPT-4-minion elementary math problems.11 IntroductionMath word problems (MWPs) present a unique challenge in artificial intelligence, requiring theintegration of linguistic comprehension and mathematical reasoning to solve questions based oncontextual descriptions [ 1]. The primary obstacles lie in understanding the problem’s contextand translating it into appropriate mathematical operations, particularly when mapping linguisticinformation to complex mathematical expressions [2].MWP solving has evolved from early rule-based systems like STUDENT [ 3] to machine learningmethods [ 4], improving accuracy but still facing challenges in complex domains. Recent advance-ments include Chain-of-Thought (CoT) prompting [ 5], which enhances reasoning by breaking downproblems into structured steps, and Program-Aided Language models (PaL) [ 6], which generatePython code for external computation.Our work advances the field with the following key contributions:•We introduce a novel approach that fine-tunes models to generate symbolic forms of MWPs,enhancing language models’ capabilities in converting problems into representations com-patible with our custom SymPy-based solver [7].•We present the EMSF dataset for converting elementary math word problems into symbolicform across five problem types, facilitating improved formalization of MWPs.•We demonstrate that fine-tuning 7B-parameter models on EMSF outperforms larger modelssuch as GPT-3.5 in MWP solving.1The code and the new EMSF dataset are available at https://github.com/pepoo20/Math2Sym,https://huggingface.co/datasets/MathSymbol/EMSF†Corresponding author38th Conference on Neural Information Processing Systems (NeurIPS 2024).This approach enhances MWP solving accuracy and versatility, paving the way for more robustAI systems in mathematical problem-solving. Its transparent step-by-step reasoning also offerseducational value, fostering a deeper understanding of problem-solving processes.2 Related Work2.1 MWP SolversMWP solving has evolved from early rule-based systems like STUDENT [ 8], which relied on prede-fined schemas, to statistical machine learning models [ 4], improving the mapping from linguistic inputto mathematical representations. Deep learning approaches, such as encoder-decoder architectures,further advanced the field [ 9]. However, many models remain limited to basic arithmetic problems orlinear equations, struggling with more complex tasks like systems of equations or inequalities.2.2 Integration of External Tools with Language ModelsRecent research has focused on enhancing language models (LMs) by integrating external tools likecalculators, search engines, and symbolic solvers to address limitations in precise calculations oraccessing real-time information [ 10,11]. Two main approaches for training LMs to use these toolshave emerged: creating large, supervised datasets with explicit examples of tool usage and usingfew-shot learning with prompts demonstrating tool use [6, 12].2.3 Auto-FormalizationAuto-formalization, the task of converting natural language into symbolic representations, plays acentral role in mathematical reasoning. Recent work in this area leverages symbolic manipulationtools like SymPy [ 7], alongside proof assistants such as Isabelle/HOL [ 13], to enable computationalformal reasoning. Unlike [ 14], which uses BERT for simpler problems, and [ 15], which employsLLMs via prompting without any fine-tuning, our method targets more complex problem types withenhanced LLM-based approaches.3 Math2SymMath2Sym integrates large language models (LLMs) with symbolic solvers to address math wordproblems (MWPs). By transforming natural language problem descriptions into symbolic representa-tions, this approach tackles two key challenges: understanding linguistic complexity and ensuringprecise computation. LLMs extract variables and conditions from word problems, while a symbolicsolver handles mathematical computations.3.1 Method FrameworkMath2Sym converts natural language word problems into standardized Symbolic Forms through threecore steps:1.Extraction of Variables: Identify relevant entities (variables, constants, relationships) fromthe natural language description.2.Formulation of Mathematical Expressions: Formalize extracted elements into precisemathematical expressions, adhering to the problem’s logic and conditions.3.Conversion to Symbolic Form: Transform mathematical expressions into a predefinedSymbolic Form compatible with a symbolic solver.To illustrate this process, consider the following word problem and its symbolic formalization:2Word Problem: The length of arectangle is equal to triple the width.Which system of equations can beused to find the dimensions of therectangle if the perimeter is 86 cen-timeters?Answer: Define the variables and formulate the linear system ofequations: Let variable xrepresent the length of the rectangle andvariable yrepresent the width of the rectangle. The length of arectangle is equal to triple the width, so the equation is x= 3y.The perimeter of the rectangle is 86 centimeters, leading to theequation 2x+ 2y= 86 .System of equations: {x= 3y,2x+ 2y= 86}Symbolic Form: [x−3y,2x+ 2y−86, x, y, solve]This structured symbolic form provides the solver with a clear and unambiguous mathematicalrepresentation, ensuring consistent and accurate solutions across various problem types.3.2 Symbolic SolverThe problem-solving process involves the language model systematically normalizing problemsinto standard forms, which are then converted into predefined Symbolic Forms. To mitigate thefrequent arithmetic errors produced by language models, our approach trains the model to formalizeproblems while avoiding direct calculations. Unlike other LLMs that attempt computations withintheir reasoning steps, we delegate all arithmetic to an external solver in Symbolic Form, allowing themodel to focus on formalization.Our solver is built using SymPy [ 7], a Python library for symbolic computation. SymPy’s capabilityto handle a wide range of mathematical problems and its ease of use make it suitable for both currentneeds and future scalability. The Symbolic Form follows this structure:Symbolic Form: [[constants or expressions, variables, actions]]Each mathematical problem type is associated with a specific action, which corresponds directly to aSymPy method (e.g., ‘solve’ for equations or inequalities, ‘igcd’ for greatest common divisors).To address LLMs’ tendency to produce lengthy outputs, we enclose Symbolic Form answers indouble square brackets. This formatting is achieved through prompting or fine-tuning with structureddata, facilitating the consistent conversion of problems into Symbolic Forms.4 ExperimentsWe developed a custom test dataset of 92 questions across five categories: greatest common divi-sor, least common multiple, systems of equations, linear inequalities, and compound inequalities.Questions vary in difficulty (levels 1-5) and include both word and purely mathematical problems toassess generalization. The questions in the dataset were inspired by problems found in high schoolmathematics textbooks and reputable online educational resources.4.1 Training Dataset and ModelsLanguage models ranging from a few hundred million to approximately 7 billion parameters werefine-tuned on the EMSF dataset using Low-rank adaptation (LoRA) [ 16]. Detailed parameters are inA.The EMSF dataset consists of three parts, as detailed in B:• Pretrain: Focuses on standard mathematical formalization.•Basic: Synthetically generated using Mixtral 8x7B [ 17], involves direct extraction of simpleproblem elements.•Advanced: Generated using LLaMA3 70B [ 18], requires reasoning steps for complexproblems.In short, the basic dataset involves direct extraction and declaration of simple problem elements,whereas the advanced dataset requires reasoning steps and aggregation of information from morecomplex problems.34.2 Answer EvaluationPerformance was evaluated through direct evaluation, few-shot learning, and two fine-tuned settings(on basic and on advanced datasets). Model-generated symbolic forms were compared to groundtruths, both processed through the symbolic solver.Evaluation scores were weighted by problem’s difficulty, with the total score Stotalcalculated as:Stotal=NXi=1Ci×Diwhere Ci∈ {0,1}is the correctness score for problem i, and Di∈ {1,2,3,4,5}is based onproblem’s difficulty.5 ResultsTable 1: Score on our test dataset for the few-shot PaL and solver, both ran on 5-shot prompt withspecific prompt for each type of problemZero-shot-CoTFew ShotPaLFew Shot +SolverFine-tunedbasicFine-tunedadvancedMistral 7B 69 113 155 183 210Mistral 8x7B 135 145 154 Nan NanOrca 7B 25 78 76 133 171Qwen 7B 116 126 139 170 207WizardMath 7B 133 135 140 183 217LLama3.1 8B 156 140 168 197 211Qwen 0.5B 15 22 7 168 132Qwen 1.8B 24 76 52 153 136Gemma 2B 10 20 39 133 135GPT-neo 350M 0 70 36 130 120Qwen2-Math-Instruct 7B163 170 144 Nan NanGPT 3.5 172 171 179 Nan Nangpt-4o mini 217 182 182 Nan NanMax score: 231Enhanced Performance with Solver Integration : Our experiments demonstrate that integratingsymbolic solvers with LLMs significantly improved performance across models like Mistral 7B,Qwen 7B, WizardMath 7B, and GPT-3.5. This integration outperformed approaches such as Program-aided Language models (PaL) and Zero-shot Chain-of-Thought (CoT) prompting. For instance,Mistral 7B showed a 37% improvement with solver integration compared to Few-Shot PaL, whileQwen 7B demonstrated a 10% increase in performance. These findings underscore the efficacyof the Math2Sym framework, which leverages symbolic solvers to enhance the natural languagecomprehension and reasoning capabilities of LLMs.Comparison with Qwen2-Math : Our approach outperformed Qwen2-Math-Instruct 7B [ 19] in over-all performance across problem complexities. While Qwen2-Math-Instruct excelled in high-difficultyproblems (scoring 170 and solving 5/6 of the most difficult problems), it showed inconsistencieson simpler tasks. In contrast, our models, particularly WizardMath 7B, maintained consistent per-formance across all difficulty levels, achieving a total score of 217. This consistency demonstratesMath2Sym’s versatility in handling both simple and complex tasks.Dataset-Driven Success in High-Difficulty Problem Solving : Models fine-tuned on our EMSFdataset excelled in high-difficulty problems (levels 4 and 5). WizardMath 7B achieved a score of 217,significantly outperforming Zero-shot CoT (133) and GPT-3.5-turbo’s best in-context learning (179).This success stems from our dataset’s ability to teach diverse problem-to-symbolic-form mapping,4enhancing solver utilization. Notably, our approach yielded results comparable to GPT-4o Mini,demonstrating its competitiveness in challenging problems.Performance of Smaller Models in Specific Contexts : In specific contexts, models with fewerthan 1 billion parameters occasionally outperformed mid-sized models. For instance, the fine-tuned Qwen 0.5B model scored approximately 10% higher than the Qwen 1.8B model, suggestingthat smaller models may benefit from improved learning efficiency before encountering overfittingissues. However, while smaller models excelled in simpler tasks, larger models like WizardMath 7Bconsistently outperformed them on complex problems, highlighting the importance of model size inmanaging problem complexity (See 1).Influence of Dataset Complexity on Model Performance : Our findings reveal that dataset com-plexity plays a pivotal role in determining model performance. Smaller models excelled on theBasic dataset, but struggled with the Advanced dataset. For instance, Qwen 0.5B’s performancedropped by almost 21% when moving from Basic to Advanced tasks. Conversely, larger models likeWizardMath 7B improved by about 19% on the Advanced dataset compared to the Basic one. Theseresults highlight the importance of aligning dataset complexity with model capacity, especially fortasks requiring advanced reasoning skills (See 2).References[1]S. S. Sundaram, S. Gurajada, D. Padmanabhan, S. S. Abraham, and M. Fisichella. Does alanguage model “understand” high school math? a survey of deep learning based word problemsolvers. WIREs Data Mining and Knowledge Discovery , page e1534, 2024.[2] Ernest Davis. Mathematics, word problems, common sense, and artificial intelligence, 2023.[3]Denise Dellarosa. A computer simulation of children’s arithmetic word-problem solving.Behavior Research Methods, Instruments, & Computers , 18(2):147–154, March 1986.[4]Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. Learning to automaticallysolve algebra word problems. In Kristina Toutanova and Hua Wu, editors, Proceedings of the52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) ,pages 271–281, Baltimore, Maryland, June 2014. Association for Computational Linguistics.[5]Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal,Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, ArielHerbert-V oss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, MateuszLitwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, AlecRadford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.[6]Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan,and Graham Neubig. Pal: Program-aided language models, 2023.[7]Aaron Meurer, Christopher P. Smith, Mateusz Paprocki, Ond ˇrejˇCertík, Sergey B. Kirpichev,Matthew Rocklin, Amit Kumar, Sergiu Ivanov, Jason K. Moore, Sartaj Singh, Thilina Rath-nayake, Sean Vig, Brian E. Granger, Richard P. Muller, Francesco Bonazzi, Harsh Gupta,Shivam Vats, Fredrik Johansson, Fabian Pedregosa, Matthew J. Curry, Andy R. Terrel, Št ˇepánRouˇcka, Ashutosh Saboo, Isuru Fernando, Sumith Kulal, Robert Cimrman, and Anthony Sco-patz. Sympy: symbolic computing in python. PeerJ Computer Science , 3:e103, January2017.[8]Daniel G. Bobrow. A question-answering system for high school algebra word problems. InProceedings of the October 27-29, 1964, Fall Joint Computer Conference, Part I , AFIPS ’64(Fall, part I), page 591–614, New York, NY , USA, 1964. Association for Computing Machinery.[9]Yan Wang, Xiaojiang Liu, and Shuming Shi. Deep neural solver for math word problems. InMartha Palmer, Rebecca Hwa, and Sebastian Riedel, editors, Proceedings of the 2017 Confer-ence on Empirical Methods in Natural Language Processing , pages 845–854, Copenhagen,Denmark, September 2017. Association for Computational Linguistics.5[10] Mojtaba Komeili, Kurt Shuster, and Jason Weston. Internet-augmented dialogue generation. InSmaranda Muresan, Preslav Nakov, and Aline Villavicencio, editors, Proceedings of the 60thAnnual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) ,pages 8460–8478, Dublin, Ireland, May 2022. Association for Computational Linguistics.[11] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha,Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, YaGuang Li, Hongrae Lee,Huaixiu Steven Zheng, Amin Ghafouri, Marcelo Menegali, Yanping Huang, Maxim Krikun,Dmitry Lepikhin, James Qin, Dehao Chen, Yuanzhong Xu, Zhifeng Chen, Adam Roberts,Maarten Bosma, Vincent Zhao, Yanqi Zhou, Chung-Ching Chang, Igor Krivokon, Will Rusch,Marc Pickett, Pranesh Srinivasan, Laichee Man, Kathleen Meier-Hellstern, Meredith RingelMorris, Tulsee Doshi, Renelito Delos Santos, Toju Duke, Johnny Soraker, Ben Zevenbergen,Vinodkumar Prabhakaran, Mark Diaz, Ben Hutchinson, Kristen Olson, Alejandra Molina,Erin Hoffman-John, Josh Lee, Lora Aroyo, Ravi Rajakumar, Alena Butryna, Matthew Lamm,Viktoriya Kuzmina, Joe Fenton, Aaron Cohen, Rachel Bernstein, Ray Kurzweil, Blaise Aguera-Arcas, Claire Cui, Marian Croak, Ed Chi, and Quoc Le. Lamda: Language models for dialogapplications, 2022.[12] Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. Internet-augmented language models through few-shot prompting for open-domain question answering,2022.[13] Lawrence C. Paulson. Isabelle - A Generic Theorem Prover (with a contribution by T. Nipkow) ,volume 828 of Lecture Notes in Computer Science . Springer, 1994.[14] Jinghui Qin, Xiaodan Liang, Yining Hong, Jianheng Tang, and Liang Lin. Neural-symbolicsolver for math word problems with auxiliary tasks, 2021.[15] Joy He-Yueya, Gabriel Poesia, Rose E. Wang, and Noah D. Goodman. Solving math wordproblems by combining language models with symbolic solvers, 2023.[16] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang,Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models, 2021.[17] Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, ChrisBamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand,Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier,Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak,Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, andWilliam El Sayed. Mixtral of experts, 2024.[18] AI@Meta. Llama 3 model card. 2024.[19] An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li,Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprintarXiv:2407.10671 , 2024.[20] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, andYongqiang Ma. Llamafactory: Unified efficient fine-tuning of 100+ language models. InProceedings of the 62nd Annual Meeting of the Association for Computational Linguistics(Volume 3: System Demonstrations) , Bangkok, Thailand, 2024. Association for ComputationalLinguistics.[21] Matthijs Douze, Alexandr Guzhva, Chengqi Deng, Jeff Johnson, Gergely Szilvasy, Pierre-Emmanuel Mazaré, Maria Lomeli, Lucas Hosseini, and Hervé Jégou. The faiss library. 2024.[22] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural LanguageProcessing . Association for Computational Linguistics, 11 2019.6A Training parameterFor fine-tuning the models in our experiments, we used LLama-Factory [ 20], an open-source frame-work for inference and fine-tuning of language models. All fine-tuning was performed on a singleNVIDIA L4 GPU using the LoRA method. The hyperparameters used during training are as follows,with all other parameters set to their default values:• LoRA: rank = 16, alpha = 16, learning rate = 5e-05The training process consisted of three steps:1. Pre-training with a dataset containing non-word problems.2. Supervised fine-tuning (SFT) on the Basic dataset.3. Supervised fine-tuning (SFT) on the Advanced dataset.The system prompt used during training was:"You are a Math Teacher. Your goal is to understand a math word problem,recognize and distinguish the type of problem, define the variables (if needed), andformulate the problem in symbolic form."B DataB.1 Data LeakageTo mitigate data leakage in our dataset, we employed FAISS [ 21] and Sentence Transformers [ 22] forvector embedding and similarity filtering. Each mathematical word problem was embedded into anumerical vector using the all-MiniLM-L6-v2 pretrained model from Sentence Transformers.We then applied FAISS to detect and filter out similar data points, utilizing a cosine similarity thresholdof 0.8. Any pairs of word problems exceeding this threshold were removed to prevent overlap betweenthe training and test sets. This approach ensured the integrity of the dataset, minimizing leakage andmaintaining valid evaluation metrics for model performance.B.2 Pre-train DatasetThe pre-train dataset comprises 1 million samples generated from standard mathematical problemswith randomly selected constants and parameters. These problems are converted into symbolic form,providing the model with a foundation for handling structured mathematical tasks without naturallanguage.Table 2: Examples of pre-train datasetQuestion Symbolic FormSolve the given system of equations: x+y=1,−3x−6y= 9[x+y−1,−3x−6y−9, x, y, solve]Inequality: 14x−7<12x+ 19 [14 x−7, <,12x+ 19, x,solve]Solve this compound inequality: 5<−11x+50<6[5<−11x+ 50,−11x+ 50 <6, x,solve]Find the greatest common divisor of 18 and 27 [18,27,igcd]Calculate the least common multiple of 25 and92[25,92,ilcm]B.3 Basic DatasetThe basic supervised fine-tuning (SFT) dataset, containing 149,000 samples, was generated usingthe Mixtral 8x7b model [ 17] with few-shot prompting. Data generation parameters were set at7temperature = 1.5, top_p = 0.9, and max_length = 512 . The dataset consists of 38% single-variableand compound inequalities, 28% two-variable system of equations, and 17% greatest common divisorand least common multiple problems.8Table 3: Examples of basic datasetSystem of equationsQuestion AnswerWord Problem: The length of a rect-angle is equal to triple the width.Which system of equations can beused to find the dimensions of therectangle if the perimeter is 86 cen-timeters?Define the variables and formulate the linear system ofequations: Let xrepresent the length of the rectangle and yrepresent the width of the rectangle. The equation is x= 3y.The perimeter is 86 centimeters: 2x+ 2y= 86 .System of equations: {x= 3∗y,2∗x+ 2∗y= 86}Symbolic form: [x−3∗y,2∗x+ 2∗y−86, x, y, solve]InequalitiesQuestion AnswerWord Problem: A taxi charges aflat rate of $1.75, plus an additional$0.65 per mile. If Erica has at most$10 to spend on the cab ride, howfar could she travel?Define the variables and formulate the inequality: Let drepresent the distance. The total cost is represented by1.75 + 0 .65∗d <= 10 .Inequality: 1.75 + 0 .65∗d <= 10Symbolic form: [1.75 + 0 .65∗d <= 10, d,solve]Compound inequalitiesQuestion AnswerWord Problem: About 20% of thetime you sleep is spent in rapid eyemovement (REM) sleep. If an adultsleeps 7 to 8 hours, how much timeis spent in REM sleep?Define the variables and formulate the compound inequality:Letrrepresent time spent in REM sleep. The inequality is0.2×7<=r <= 0.2×8.Compound inequality: 0.2×7<=r <= 0.2×8Symbolic form: [r−0.2×7≥0, r−0.2×8<= 0, r,solve]Greatest common divisorQuestion AnswerWord Problem: Sara has 16 redflowers and 24 yellow flowers. Shewants to make bouquets with thesame number of each color flowerin each bouquet. What is the great-est number of bouquets she canmake?Find the greatest common factor of 16 and 24.Symbolic form: [16,24,igcd]Least common multipleQuestion AnswerWord Problem: Today, both the soc-cer team and the basketball teamhad games. The soccer team playsevery three days, and the basketballteam plays every five days. Whenwill both teams have games on thesame day again?Find the least common multiple of 3 and 5.Symbolic form: [3,5,ilcm]9B.4 Advanced DatasetThe advanced dataset was synthetically generated through a prompting process utilizing a largerlanguage model due to the increased complexity of the problems. Specifically, we employed theLLaMA3-70B-Instruct model [ 18]. The data generation method mirrored that of the basic dataset,with parameters set to a temperature of 1.5, a top_p of 0.9, and a maximum length of 512 tokens, butwith variations in the prompts to accommodate the more complex problem types.Each solution in the advanced dataset follows a structured approach:1.Analysis and Summarization : The system begins by analyzing and summarizing the keypoints of the word problem.2.Problem Type Identification : It identifies the specific type of mathematical problempresented.3.Information Extraction and Reformulation : Essential variables and information areextracted from the problem, which are then used to reformulate the word problem into astandard form.4.Symbolic Representation : The standard form is converted into a symbolic representation,preparing it for processing by the symbolic solver to derive the solution.The advanced supervised learning data maintains a structure similar to the basic dataset, consisting ofword problems and their conversion to symbolic forms. However, the problems in this dataset aremore intricate, including types such as percentage interest problems.The advanced dataset comprises 79,000 samples, distributed as follows: 60% single-variable andcompound inequality problems, 25% two-variable equation problems, and 15% least commonmultiple and greatest common divisor problems.System of equationsQuestion AnswerWord Problem: Arnold in-vested $64,000, some at 5.5%interest and the rest at 9%.How much did he invest ateach rate if he received $4,500in interest in one year?Break down the word problem into smaller information:1. Determine how much Arnold invested at each interest rate.2. Arnold invested $64,000 in total.3. He invested some at 5.5% interest and the rest at 9%.Define the variables and formulate the system of equations:Letxbe the amount invested at 5.5% and ybe the amountinvested at 9%.The system of equations: x+y= 64000 and0.055∗x+0.09∗y=4500 .Symbolic form:[x+y−64000 ,0.055∗x+ 0.09∗y−4500, x, y, solve]InequalitiesQuestion AnswerWord Problem: A taxi charges aflat rate of $1.75, plus an additional$0.65 per mile. If Erica has atmost $10 to spend, how far can shetravel?Break down the word problem:1. Our goal is to find the maximum distance Erica can travel.2. The taxi charges $1.75 flat and $0.65 per mile.Define the variables and inequality:Letxbe the distance traveled.The total cost: 1.75 + 0 .65∗x <= 10 .Symbolic form: [1.75 + 0 .65∗x <= 10, x,solve]10Compound inequalitiesQuestion AnswerWord Problem: A ski shop car-ries skis that are between 150 and220cm long. They recommend thatthe skis be 1.25 longer than yourheight.Calculate the tallest heightthat a person can be and still rentskis from the shop.Break down the word problem into smaller information:1. Our goal is to find the tallest height that a person can beand still rent skis from the shop.2. The ski shop carries skis that are between 150 and 220cmlong.3. They recommend that the skis be 1.25 longer than yourheight.Identify problem type: The ski shop carries skis that are be-tween 150 and 220cm long sets the range of acceptable skilengths for the customer. This implies the use of compoundinequalities to solve the problem.Define the variables and formulate the compound inequal-ity:Let x be the height of the person.between 150 and 220cm long can be written as 150 <= Skilength <= 220The skis should be 1.25 times longer than your height sothe ski length is 1.25*xCompound Inequality: 150<= 1.25∗x <= 220Symbolic Form: [[150 <= 1 .25∗x,1.25∗x < =220, x, solve ]]Greatest common divisorQuestion AnswerWord Problem: Sara has 16 redflowers and 24 yellow flowers. Shewants to make bouquets with thesame number of each color. Whatis the greatest number of bouquetsshe can make?Break down the word problem:1. Find the greatest number of equal bouquets Sara canmake.2. She has 16 red and 24 yellow flowers.The problem asks for the greatest common divisor of 16 and24.Symbolic form: [16,24,igcd]Least common multipleQuestion AnswerWord Problem: The soccer teamplays every three days, and the bas-ketball team plays every five days.When will both teams have gameson the same day again?Break down the word problem:1. Find when both teams will have games on the same day.2. The soccer team plays every 3 days, and the basketballteam plays every 5 days.The least common multiple of 3 and 5 is needed.Symbolic form: [3,5,ilcm]11C Scores Charts<1B <7B 7B140160180200220168153217Size of modelsScoreFigure 1: Highest Score of model by sizeWizardMath 7BLLama3.1 8BMistral 7BQwen 7BOrca 7BQwen 0.5B Qwen 1.8B Gemma 2BGPT-neo 350M100150200250183197183170133168153133130217211 210207171132136 135120ModelScoreFine-tuned basicFine-tuned advancedFigure 2: Score of models fine-tuned on basic and advanced datasets12D Qwen2Math Comparison of SolutionsTable 4: Comparison of two solutions for finding the greatest possible donationGiven Problem: Anthony’s mom gave him 30 dollars while Jennifer’s mom gave her 36 dollars.They want to give the same amount of money to a charity. What is the greatest donation they canmake? To solve this problem, follow the steps provided in the examples.Their Solution Our SolutionStep 1:Find the prime factorization of eachnumber.- 30 = 2 * 3 * 5- 36 = 22* 32Step 2:Identify the common prime factors andtheir lowest powers.- The common prime factors are 2 and 3, withthe lowest powers being 21and 31.The greatest common divisor is the product ofthe common prime factors.- GCD = 21* 31= 6Step 3:To find the greatest donation they canmake, divide the amount of money each personhas by the GCD.- For Anthony: 30 / 6 = 5- For Jennifer: 36 / 6 = 6The greatest donation they can make is theGCD multiplied by the smallest quotient fromStep 4.- Greatest donation = 6 * 5 = 30Final Answer: The greatest donation they canmake is 30 dollars.Break down the word problem into smallerinformation:1. Our goal is to find the greatest donationAnthony and Jennifer can make.2. Anthony’s mom gave him 30 dollars.3. Jennifer’s mom gave her 36 dollars.4. They want to give the same amount of moneyto a charity.Identify problem type:The problem states that Anthony and Jenniferwant to give the same amount of money to acharity, which implies that we need to find anumber that divides both the amount of moneyAnthony’s mom gave him (30) and the amountof money Jennifer’s mom gave her (36) evenly.The problem asks for the greatest donation,indicating that we should use the greatestcommon divisor to determine the maximumamount they can donate while maintainingequal distribution.Symbolic Form:[[30, 36, igcd]]Final Answer: 613Table 5: Comparison of two solutions for solving the word problemProblem: Eun-Kyung works as a tutor and earns $60 per hour. She has $792 in savings. She isplanning an anniversary party for her parents. She would like to invite 40 guests. The party will costher $1,520 for food and drinks and $150 for the photographer. She will also have a favor for each ofthe guests, and each favor will cost $7.50. How many hours must she tutor to have enough money forthe party? (Assume she will not spend any of her savings.)Their Solution Our SolutionStep 1: Calculate the total cost of the party:Total cost = Cost of food and drinks + Cost ofphotographer + Cost of favorsTotal cost = $1520 + $150 + (40 * $7.50)Total cost = $1520 + $150 + $300Total cost = $2070Break down the word problem into smaller in-formation:1. Our goal is to find the number of hours Eun-Kyung must tutor to have enough money for theparty.2. Eun-Kyung earns $60 per hour.3. She has $792 in savings.4. The party will cost her $1,520 for food anddrinks and $150 for the photographer.5. She will also have a favor for each of the 40guests, and each favor will cost $7.50.Step 2: Let x represent the number of hoursEun-Kyung needs to tutor. Write the inequalityfor the total earnings:60x >= 2070Identify problem type:The problem asks for the number of hours Eun-Kyung must tutor to have enough money for theparty which implies finding the minimum valuethat satisfies the constraints. The total cost ofthe party should be greater than or equal to thetotal funds available is a lower bound constraint.We can use the inequality to solve this problem.Step 3: Solve the inequality:x >= 2070/60x >= 34.5Since Eun-Kyung cannot tutor a fractionof an hour, we round up to the nearest wholenumber.x >= 35Define the variables and formulate the inequal-ity:Let x be the number of hours Eun-Kyung musttutor.Total cost = Cost of food and drinks + Cost ofphotographer + Cost of favors + Total cost ofthe partyTotal money available = Total funds available +Earnings from tutoringThe party will cost her $1,520 for food anddrinks so the cost of food and drinks is 1520The party will cost her $150 for the photogra-pher so the cost of photographer is 150She will also have a favor for each of the 40guests, and each favor will cost $7.50 so thecost of favors is 7.5*40She has $792 in savings so the total funds avail-able is 792Eun-Kyung earns $60 per hour so Earningsfrom tutoring is x*60Total money available should be greater than orequal to the total cost of the party so the totalmoney available >= 1520 + 150 + 7.5*40Inequality: 792 + x*60 >= 1520 + 150 + 7.5*40Symbolic Form: [[792 + x*60, >=, 1520 + 150+ 7.5*40, x, solve]]Final Answer: Eun-Kyung must tutor for atleast 35 hours to have enough money for theparty.Final Answer: x >= 19.6314Table 6: Comparison of two solutions for solving the word problemProblem: Five student government officers want to go to the state convention. It will cost them $110for registration, $375 for transportation and food, and $42 per person for the hotel. There is $450budgeted for the convention in the student government savings account. They can earn the rest of themoney they need by having a car wash. If they charge $5 per car, how many cars must they wash inorder to have enough money to pay for the trip?Their Solution Our SolutionStep 1: Calculate the total cost of the trip.- Registration cost: $110- Transportation and food cost: $375- Hotel cost per person: $42 * 5 = $210- Total hotel cost: $210Total cost = $110 + $375 + $210 = $795Break down the word problem into smaller in-formation:1. Our goal is to find the number of cars thestudent government officers must wash to haveenough money to pay for the trip.2. The cost of registration is $110.3. The cost of transportation and food is $375.4. The cost of the hotel is $42 per person.5. There are 5 student government officers.6. There is $450 budgeted for the convention inthe student government savings account.7. They can earn the rest of the money theyneed by having a car wash.8. They charge $5 per car.Step 2: Determine how much more money isneeded to meet the budget.Budgeted amount: $450Total cost: $795Additional amount needed = $795 - $450 =$345Identify problem type:The problem asks for the number of cars thestudent government officers must wash to haveenough money to pay for the trip which impliesfinding the minimum value that satisfies theconstraints. The total cost of the trip should begreater than or equal to the total funds availableis a lower bound constraint. We can use theinequality to solve this problem.Step 3: Calculate how many cars they need towash to earn the remaining amount.Car wash charge per car: $5Number of cars needed = $345 / $5 = 69Define the variables and formulate the inequal-ity:Let x be the number of cars the student govern-ment officers must wash.Total cost = Cost of registration + Cost of trans-portation and food + Cost of hotel + Total costof the tripTotal money available = Total funds available +Earnings from the car washThe cost of registration is $110 so the cost ofregistration is 110The cost of transportation and food is $375 sothe cost of transportation and food is 375The cost of the hotel is $42 per person so thecost of the hotel is 42*5There is $450 budgeted for the convention inthe student government savings account so thetotal funds available is 450They can earn the rest of the money they needby having a car wash so Earnings from the carwash is x*5Total money available should be greater thanor equal to the total cost of the trip so the totalmoney available >= 110 + 375 + 42*5Inequality: 450 + x*5 >= 110 + 375 + 42*5Symbolic Form: [[450 + x*5, >=, 110 + 375 +42*5, x, solve]]Final Answer: They need to wash 69 cars tohave enough money to pay for the trip.Final Answer: x >= 4915Table 7: Comparison of two solutions for finding the least number of days to save the same amountGiven Problem: Steve can save 9 dollars every day while Maria can save 12 dollars every day. Whatis the least number of days it will take each person to save the same amount of money? To solve thisproblem, we need to find the least common multiple (LCM) of the amounts saved by Steve and Mariaeach day.Their Solution Our Solution1. Find the LCM of 9 and 12:- Prime factorization of 9 = 32- Prime factorization of 12 = 22* 3- The LCM is 22* 32= 36 dollars2. Determine how many days it will take Steveto save 36 dollars:- 36 / 9 = 4 days3. Determine how many days it will take Mariato save 36 dollars:- 36 / 12 = 3 daysThe least number of days it will take eachperson to save the same amount of money isthe LCM of the number of days it takes each tosave 36 dollars, which is 12 days.Final Answer: 12 daysBreak down the word problem into smallerinformation:1. Our goal is to find the least number of days itwill take each person to save the same amountof money.2. Steve can save 9 dollars every day.3. Maria can save 12 dollars every day.Identify problem type:The problem involves two people saving moneyat different rates. The question asks for theleast number of days it will take each personto save the same amount of money, indicatingthat we need to find the least common multipleof the number of days to determine when theywill have the same amount of money.Symbolic Form:[[9, 12, ilcm]]Final Answer: 3616 |
cBFsFt1nDW | Transformers Can Do Arithmetic with the RightEmbeddingsSean McLeish1*, Arpit Bansal1∗, Alex Stein1, Neel Jain1, John Kirchenbauer1,Brian R. Bartoldson2, Bhavya Kailkhura2, Abhinav Bhatele1, Jonas Geiping3,Avi Schwarzschild4, Tom Goldstein11University of Maryland,2Lawrence Livermore National Laboratory,3ELLIS Institute Tübingen,Max Planck Institute for Intelligent Systems, Tübingen AI Center,4Carnegie Mellon UniversityAbstractThe poor performance of transformers on arithmetic tasks seems to stem in largepart from their inability to keep track of the exact position of each digit inside ofa large span of digits. We mend this problem by adding an embedding to each digitthat encodes its position relative to the start of the number. In addition to the boostthese embeddings provide on their own, we show that this fix enables architecturalmodifications such as input injection to improve performance even further.With positions resolved, we can study the logical extrapolation ability oftransformers. Can they solve arithmetic problems that are larger and morecomplex than those in their training data? We find that training on only 20digitnumbers with a single GPU for one day, we can reach state-of-the-art performance,achieving up to 99% accuracy on 100digit addition problems. Finally, we showthat these gains in numeracy also unlock improvements on other multi-stepreasoning tasks including sorting and multiplication.1 IntroductionMuch of the recent work on Large Language Models (LLMs) focuses on their ability to solveproblems in natural language and code generation. Despite progress in these domains, transformersstill struggle to perform complex multi-step and algorithmic reasoning tasks in a zero shot settingwithout resorting to tool use. To study algorithmic reasoning in a sterile laboratory setting, theacademic community focuses on simple arithmetic test problems like addition. Addition is simpleenough that modest-sized LLMs can (in principle) be trained from scratch to do it without runninginto capacity and training budget limitations, yet complex enough that even large industrial modelsfail on large numbers without a code interpreter (Loeber, 2024).Prior studies indicate that addition is hard for transformers (Lee et al., 2023; Shen et al., 2023; Zhouet al., 2023, 2024). Our experiments indicate that this difficulty stems from their inability to clearlyrepresent the exact position of a digit within a long sequence of digits. To address this problem, wepropose a simple modification to the data representation that directly addresses this shortcoming.OurAbacus Embeddings are simple learned positional embeddings that are used to encode positionswithin each span of numerical tokens. Combining Abacus Embeddings and standard positionalembeddings, we observe dramatic improvements in accuracy such that models trained with at most 20digit operands can generalize to problems with 120 digit operands. This represents a state-of-the-artgeneralization factor of 6×, with the previous state of the art being only 2.5×. To the best of ourknowledge, these are the longest sequences on which learned addition has ever been demonstrated.∗Equal Contribution, correspondence to: [email protected] ,[email protected] .38th Conference on Neural Information Processing Systems (NeurIPS 2024).We also study several other methods of improving arithmetic and generalization in transformers.We find that incorporating input injection —skip connections inserted between the input layer andeach decoder layer—can reduce generalization errors by 50% over the Abacus Embedding baseline.We also find that together with our embeddings looped transformer architectures, which containrecurrent layers in which the same parameters are re-used multiple times, can achieve near-perfectgeneralization on addition problems we consider. These results are shown in Appendix Section A.4Since our proposed methods solve large addition problems successfully, we evaluate whether thesame approaches can be used to improve other kinds of algorithmic learning. In Appendix SectionA.3, we explore multiplication problems of up to 15 digit numbers and sorting over arrays of up to 10numbers, making this the first study of extreme length generalization techniques for addition thattransfer to other algorithmic tasks. Our contributions can be summarized as follows.•We propose a new positional embedding called Abacus Embeddings to better capture thesignificance of each digit, which leads to near-perfect in-distribution generalization.•We show that when we combine Abacus Embeddings with input injection and loopedtransformers performance further improves, increasing from 92.9%to99.1%in out-of-distribution accuracy, an 87% reduction in error compared to using the embeddings withstandard architectures alone.2 Related WorkArithmetic. Solving arithmetic with next token prediction is a difficult problem that attracts a lot ofattention (e.g. Saxton et al., 2019). However, in zero-shot settings, even incredibly strong commercialAPI models struggle with very large addition problems (e.g. up to 100 digits) without access to tools.Among attempts to improve arithmetic performance of transformer-based models, reversing the digitsso the arguments are written with the least significant digit first is popular (Lee et al., 2023; Shenet al., 2023; Zhou et al., 2023, 2024). Furthermore, changing the data format by adding explicit indexcharacters improves model capability for addition (Zhou et al., 2023, 2024; Olsson et al., 2022).Weight Sharing. Weight sharing and recurrence can be used to make models adaptive and helpgeneralize to harder problems (Dehghani et al., 2018; Sukhbaatar et al., 2019; Lan et al., 2020;Ibarz et al., 2022). Schwarzschild et al. (2021) and Bansal et al. (2022) explore an end-to-endlearning approach using recurrent convolutional neural networks to learn algorithms from input-output pairs, tackling algorithmic tasks like prefix sums, mazes, and chess. Weight sharing foralgorithmic reasoning is also helpful with transformers and we use the looped transformer in someof our experiments below. A looped transformer has a transformer block called recurrently on itsown output lending itself to executing iterative algorithms (Giannou et al., 2023; Yang et al., 2023a;de Luca & Fountoulakis, 2024).Positional Embeddings. Indicating the position of tokens in a sequence to transformer models iscritical for language modeling (Vaswani et al., 2017). Absolute positional embeddings (APE) arelearned embeddings that are added to token embeddings before the first layer of the transformer(Vaswani et al., 2017). However, these absolute embeddings inhibit length generalization (Press et al.,2022). Kazemnejad et al. (2023) show that decoder layers can still learn positional information withno explicit positional embeddings. No positional embeddings (NoPE) can achieve good length gener-alization performance for small algorithmic tasks and even outperform some specialized embeddings.The latest and most useful for arithmetic is Functional Interpolation for Relative Position Embeddings(FIRE) (Li et al., 2023). FIRE shows the strongest length generalization to date, which leads to lengthgeneralization by 2.5×on addition (Zhou et al., 2024) when combined with randomized embeddings(Ruoss et al., 2023). We go into more detail on positional embeddings in Appendix A.1.1. In thiswork, we focus on NoPE and FIRE embeddings since these are the best performers for addition inreversed format among existing embeddings (Zhou et al., 2024).3 Achieving Length Generalization for AdditionWe focus on two main hypotheses: (1) the positional information for individual digits within numbersis being lost and (2) recurrence can improve the reasoning abilities of transformer architectures on2Figure 1: Visualization of data formats and positional embeddings. Abacus Embeddings give thesame positional embeddings to all digits of the same significance.multi-step arithmetic reasoning problems. We briefly discuss the training and evaluation setup beforedescribing each of our improvements in detail.Experimental Setup. We train decoder-only causal language models to solve addition problems.Following prior work (Zhou et al., 2023, 2024; Shen et al., 2023; Kazemnejad et al., 2023; Lee et al.,2023), inputs are formatted least significant digit first, e.g. 98282 + 3859172 = 2787472 . Unlikeprior work, we do not add any padding between digits (Shen et al., 2023) and do not pad any numberswith zeros, neither in the case of carry digits (Zhou et al., 2024), nor to make all operands the samelength (Shen et al., 2023). We train on all combinations of operand lengths less than or equal to iandjwhere iandjare the maximum lengths of the first and second operands, respectively. For thisstudy all training sets have 20million samples and i=j, hence we can use one number to define thedataset i, where iis the maximum length of either operand. For further details on data constructionand training we refer to Appendix A.6.We report model accuracy for each (i, j)length pair and unlike most existing work, we also includeaccuracy for pairs where i̸=jto highlight all instances of extrapolation. This extensive tabulation iscostly and makes inference the main computational burden of this study. We measure accuracy in thestrict sense where only exact matches of all output digits are counted as correct, i.e. if a single digitis incorrect then the example is marked as wrong and we refer to this as exact match accuracy . Wehave the following three evaluation categories: (i) in distribution (ID) where the models are testedon problems up to the maximum size seen during training; (ii) out of distribution (OOD) where themodels are tested on problems greater than the maximum size seen during training but both operandsare at most 100digits; (iii) and extreme out of distribution ( 100+ digit OOD) where the models aretested on problems where both operands are of the same length and are both more than 100digits andless than 160digits. In the 100+ OOD setting, we only analyze problems where the operands are thesame length ( i=j) due to inference costs at this scale.We consider two standard transformer architectures. First, we use a standard autoregressive trans-former model (ST) where multiple decoder layers are stacked in a feedforward manner. Second,we enhance this standard transformer model by incorporating input injection (ST w/ II), where theembedded inputs are added to the input of each decoder layer (Ma et al., 2022; Bansal et al., 2022;Anil et al., 2022a). We visually describe the architectures in the Appendix Figure 19.ST ST w/ IIArchitecture Type020406080100Exact Match Accuracy92.93.64.397.92.93.226.70030.600Abacus, OODAbacus, 100+ OODFIRE, OODFIRE, 100+ OODNoPE, OODNoPE, 100+ OODFigure 2: Mean exact match accuracy of three models of depth sixteen on size 20data, varying thearchitecture and embeddings. Abacus Embeddings improve accuracy for addition over FIRE andNoPE.33.1 Abacus Embeddings Help Align DigitsFrom prior work and our own initial experiments, we observe that even when input numbers arepresented least-significant digit first and training data is stratified and abundant (several millionexamples), standard transformers struggle to learn multi-digit addition. We also observe that humansdo long addition by first aligning the digits of the same significance into columns. Thus, our firsthypothesis is that the significance of each digit (i.e. each digit’s position relative to the beginning ofthe number) is not easy for transformers to represent, and that this sub-problem presents more of ahurdle than the actual addition itself.Prior work addresses this by proposing explicit index hints in the inputs and outputs of the addition,for example a6b7c5 +a1b6c3 =a7b3c9; finding that transformers perform much better on additionwith the information provided by such hints (Zhou et al., 2023, 2024). However, index hints of thisform increase the input context length required and double the output length and inference cost ofsolving a given addition problem. Furthermore, Zhou et al. (2024) find that the ability of modelstrained with index hints to generalize is sensitive to the particular random initialization.To address the limitations of transformers at representing positional information, we design a speciallybuilt positional embedding that encodes the location of each digit relative to the start of the currentnumber. We call this Abacus Embeddings . We apply the same positional embedding to all digits ofthe same significance, providing an explicit signal that the model can use to align digits. We visuallydescribe these embeddings in Figure 1.2We take inspiration from Randomized Embeddings (Ruoss et al., 2023) but instead of using randomascending indices to represent positions in a sample, we use consecutive ascending indices with arandom starting position to allow for length generalization. Specifically, during training we giveconsecutive positional embeddings to each digit in a number, starting from a randomly chosen offsetvalue from U[1, k], where kis a hyperparameter. Unless otherwise stated the default value for kinthis study is 100. For example, if the input is 123, the positional encodings are β, β+ 1, β+ 2whereβ∼U[1,100], which are then passed through a learned embedding matrix. The value sampled fromU[1, k]is the same for all numbers in a batch, meaning all digits of the same significance obtain thesame positional embedding. This training scheme allows the model to see a wide range of positionalembeddings, even when training sequences are short. At test time, we set β= 1.Abacus Embeddings Solve Addition. Abacus Embeddings improve generalization performanceup to 100digits and beyond for standard transformer architectures. In Figure 2, we highlight thecomparative boost Abacus Embeddings have over standard transformer architectures and embeddingsfor performing addition, taking the mean accuracy of three models in all cases. Additionally, InAppendix A.5.4, we present 2D grid plots for several other experiments that are depicted as bar chartsin the main text. Zhou et al. (2024) find that operand lengths of up to forty digits are required duringtraining for good generalization to 100digit addition during testing (albeit not robustly). We findthat with our Abacus Embeddings, we can achieve similar accuracy and larger extrapolation using astandard model with input injection trained on maximum operand sizes of 20digits.As Abacus Embeddings are a variant of absolute positional embeddings, technically they cannotgeneralize beyond the relative positions seen during training. However the hyperparameter kthatrandomizes the starting offset used for each individual addition example can be increased to enablegeneralization by training a larger range of embeddings for a given computational budget. Relatedly,Appendix Figure 8 shows that training on larger datasets improves performance, even for operandswith fewer than 100digits.4 DiscussionAcross our experiments, we find that our novel Abacus Embeddings improve performance dra-matically both when applied to standard transformers as well as recurrent variants. We hope thatour work deepens the community’s understanding of these problems and paves the way for furtheradvancements in the algorithmic reasoning capabilities of large language models.2In Appendix A.2, we motivate these embeddings further with experiments demonstrating their utility insolving a bitwise OR task and show their performance on multiplication and sorting in Appendix A.3.4ReferencesAnil, C., Pokle, A., Liang, K., Treutlein, J., Wu, Y ., Bai, S., Kolter, J. Z., and Grosse, R. B. Pathindependent equilibrium models can better exploit test-time computation. Advances in NeuralInformation Processing Systems , 35:7796–7809, 2022a.Anil, C., Wu, Y ., Andreassen, A., Lewkowycz, A., Misra, V ., Ramasesh, V ., Slone, A., Gur-Ari, G.,Dyer, E., and Neyshabur, B. Exploring length generalization in large language models. Advancesin Neural Information Processing Systems , 35:38546–38556, 2022b.Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450 ,2016.Bansal, A., Schwarzschild, A., Borgnia, E., Emam, Z., Huang, F., Goldblum, M., and Goldstein, T.End-to-end algorithm synthesis with recurrent networks: Logical extrapolation without overthink-ing.Advances in Neural Information Processing Systems , 35, 2022.Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam,P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in neuralinformation processing systems , 33:1877–1901, 2020.Chi, T.-C., Fan, T.-H., Ramadge, P., and Rudnicky, A. Kerple: Kernelized relative positionalembedding for length extrapolation. In Advances in Neural Information Processing Systems , 2022.Chi, T.-C., Fan, T.-H., Rudnicky, A., and Ramadge, P. Dissecting transformer length extrapolation viathe lens of receptive field analysis. In Proceedings of the 61st Annual Meeting of the Associationfor Computational Linguistics (Volume 1: Long Papers) , pp. 13522–13537, 2023.de Luca, A. B. and Fountoulakis, K. Simulation of graph algorithms with looped transformers. arXivpreprint arXiv:2402.01107 , 2024.Dehghani, M., Gouws, S., Vinyals, O., Uszkoreit, J., and Kaiser, L. Universal transformers. InInternational Conference on Learning Representations , 2018.Dziri, N., Lu, X., Sclar, M., Li, X. L., Jian, L., Lin, B. Y ., West, P., Bhagavatula, C., Bras, R. L.,Hwang, J. D., et al. Faith and fate: Limits of transformers on compositionality. arXiv preprintarXiv:2305.18654 , 2023.Geiping, J. and Goldstein, T. Cramming: Training a language model on a single gpu in one day. InInternational Conference on Machine Learning , pp. 11117–11143. PMLR, 2023.Giannou, A., Rajput, S., Sohn, J.-y., Lee, K., Lee, J. D., and Papailiopoulos, D. Looped transformersas programmable computers. In International Conference on Machine Learning , pp. 11398–11442.PMLR, 2023.Golkar, S., Pettee, M., Eickenberg, M., Bietti, A., Cranmer, M., Krawezik, G., Lanusse, F., McCabe,M., Ohana, R., Parker, L., et al. xval: A continuous number encoding for large language models.arXiv preprint arXiv:2310.02989 , 2023.Ibarz, B., Kurin, V ., Papamakarios, G., Nikiforou, K., Bennani, M., Csordás, R., Dudzik, A. J.,Bošnjak, M., Vitvitskyi, A., Rubanova, Y ., et al. A generalist neural algorithmic learner. InLearning on graphs conference , pp. 2–1. PMLR, 2022.Jelassi, S., d’Ascoli, S., Domingo-Enrich, C., Wu, Y ., Li, Y ., and Charton, F. Length generalization inarithmetic transformers. arXiv preprint arXiv:2306.15400 , 2023.Kazemnejad, A., Padhi, I., Ramamurthy, K. N., Das, P., and Reddy, S. The impact of positionalencoding on length generalization in transformers. arXiv preprint arXiv:2305.19466 , 2023.Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. Albert: A lite bert forself-supervised learning of language representations. In International Conference on LearningRepresentations , 2020. URL https://openreview.net/forum?id=H1eA7AEtvS .Lee, N., Sreenivasan, K., Lee, J. D., Lee, K., and Papailiopoulos, D. Teaching arithmetic to smalltransformers. arXiv preprint arXiv:2307.03381 , 2023.5Li, S., You, C., Guruganesh, G., Ainslie, J., Ontanon, S., Zaheer, M., Sanghai, S., Yang, Y ., Kumar,S., and Bhojanapalli, S. Functional interpolation for relative positions improves long contexttransformers. arXiv preprint arXiv:2310.04418 , 2023.Loeber, J. #16: Notes on Arithmetic in GPT-4, February 2024. URL https://loeber.substack.com/p/16-notes-on-arithmetic-in-gpt-4 .Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 ,2017.Ma, X., Zhou, C., Kong, X., He, J., Gui, L., Neubig, G., May, J., and Zettlemoyer, L. Mega: movingaverage equipped gated attention. arXiv preprint arXiv:2209.10655 , 2022.McLeish, S., Schwarzschild, A., and Goldstein, T. Benchmarking chatgpt on algorithmic reasoning.arXiv preprint arXiv:2404.03441 , 2024.Olsson, C., Elhage, N., Nanda, N., Joseph, N., DasSarma, N., Henighan, T., Mann, B., Askell, A.,Bai, Y ., Chen, A., et al. In-context learning and induction heads. arXiv preprint arXiv:2209.11895 ,2022.OpenAI. Gpt-4 technical report. ArXiv , abs/2303.08774, 2023. URL https://api.semanticscholar.org/CorpusID:257532815 .Peng, B., Quesnelle, J., Fan, H., and Shippole, E. Yarn: Efficient context window extension of largelanguage models. International Conference on Learning Representations , 2024.Press, O., Smith, N., and Lewis, M. Train short, test long: Attention with linear biases enablesinput length extrapolation. In International Conference on Learning Representations , 2022. URLhttps://openreview.net/forum?id=R8sQPpGCv0 .Qian, J., Wang, H., Li, Z., Li, S., and Yan, X. Limitations of language models in arithmetic andsymbolic induction. arXiv preprint arXiv:2208.05051 , 2022.Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y ., Li, W., and Liu, P. J.Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of machinelearning research , 21(140):1–67, 2020.Rodionov, G. and Prokhorenkova, L. Discrete neural algorithmic reasoning. arXiv preprintarXiv:2402.11628 , 2024.Ruoss, A., Delétang, G., Genewein, T., Grau-Moya, J., Csordás, R., Bennani, M., Legg, S., andVeness, J. Randomized positional encodings boost length generalization of transformers. arXivpreprint arXiv:2305.16843 , 2023.Saxton, D., Grefenstette, E., Hill, F., and Kohli, P. Analysing mathematical reasoning abilities ofneural models. arXiv preprint arXiv:1904.01557 , 2019.Schwarzschild, A., Borgnia, E., Gupta, A., Huang, F., Vishkin, U., Goldblum, M., and Goldstein, T.Can you learn an algorithm? generalizing from easy to hard problems with recurrent networks.Advances in Neural Information Processing Systems , 34, 2021.Shaw, P., Uszkoreit, J., and Vaswani, A. Self-attention with relative position representations. arXivpreprint arXiv:1803.02155 , 2018.Shazeer, N. Glu variants improve transformer. arXiv preprint arXiv:2002.05202 , 2020.Shen, R., Bubeck, S., Eldan, R., Lee, Y . T., Li, Y ., and Zhang, Y . Positional description matters fortransformers arithmetic. arXiv preprint arXiv:2311.14737 , 2023.Su, J., Ahmed, M., Lu, Y ., Pan, S., Bo, W., and Liu, Y . Roformer: Enhanced transformer with rotaryposition embedding. Neurocomputing , 568:127063, 2024.6Sukhbaatar, S., Grave, E., Bojanowski, P., and Joulin, A. Adaptive attention span in transformers. InKorhonen, A., Traum, D., and Màrquez, L. (eds.), Proceedings of the 57th Annual Meeting of theAssociation for Computational Linguistics , pp. 331–335, Florence, Italy, July 2019. Association forComputational Linguistics. doi: 10.18653/v1/P19-1032. URL https://aclanthology.org/P19-1032 .Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y ., Bashlykov, N., Batra, S.,Bhargava, P., Bhosale, S., et al. Llama 2: Open foundation and fine-tuned chat models. arXivpreprint arXiv:2307.09288 , 2023.Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., andPolosukhin, I. Attention is all you need. Advances in neural information processing systems , 30,2017.Veliˇckovi ́c, P., Badia, A. P., Budden, D., Pascanu, R., Banino, A., Dashevskiy, M., Hadsell, R., andBlundell, C. The clrs algorithmic reasoning benchmark. In International Conference on MachineLearning , pp. 22084–22102. PMLR, 2022.Wang, H., Ma, S., Dong, L., Huang, S., Zhang, D., and Wei, F. DeepNet: Scaling Transformers to1,000 Layers. arXiv:2203.00555 [cs] , March 2022. URL http://arxiv.org/abs/2203.00555 .Yang, L., Lee, K., Nowak, R., and Papailiopoulos, D. Looped transformers are better at learninglearning algorithms. arXiv preprint arXiv:2311.12424 , 2023a.Yang, Z., Ding, M., Lv, Q., Jiang, Z., He, Z., Guo, Y ., Bai, J., and Tang, J. Gpt can solve mathematicalproblems without a calculator. arXiv preprint arXiv:2309.03241 , 2023b.Zhai, X., Kolesnikov, A., Houlsby, N., and Beyer, L. Scaling vision transformers. In Proceedings ofthe IEEE/CVF conference on computer vision and pattern recognition , pp. 12104–12113, 2022.Zhou, H., Bradley, A., Littwin, E., Razin, N., Saremi, O., Susskind, J., Bengio, S., and Nakkiran,P. What algorithms can transformers learn? a study in length generalization. arXiv preprintarXiv:2310.16028 , 2023.Zhou, Y ., Alon, U., Chen, X., Wang, X., Agarwal, R., and Zhou, D. Transformers can achieve lengthgeneralization but not robustly. arXiv preprint arXiv:2402.09371 , 2024.7A AppendixLimitations There are some intrinsic limitations that accompany any study involving languagemodel training from scratch under compute constraints. However, the primary point of relevance forthis study is that although we show the compatibility of Abacus Embeddings with FIRE and RoPEembeddings, we do not actually explore any natural language tasks. In the future, a larger scale studyincluding natural language would be needed to understand further how Abacus Embeddings wouldperform on heterogeneous tasks comprising both numerical and natural language inputs.A.1 Extended Related WorksA.1.1 Positional Embeddings.To address this issue of absolute embeddings not generalizing, Shaw et al. (2018) propose relativeembeddings (RPE) which are embedded during the attention computation, a mechanism furthersimplified by Raffel et al. (2020). Others further modify relative embeddings to improve lengthgeneralization including Sandwich (Chi et al., 2023), Kerple (Chi et al., 2022), and Alibi (Presset al., 2022) positional embeddings. Rotary Positional Embeddings (RoPE) (Su et al., 2024) arecommonly used in state-of-the-art open source transformers (e.g. Touvron et al., 2023). However,RoPE does limit the length generalization as models are trained only using rotations based on trainingdata length (Kazemnejad et al., 2023; Press et al., 2022). For improved length generalization, one canadd post-training extensions (Peng et al., 2024).FIRE embeddings are additive embeddings in the attention mechanism: ARPE(X) =XW Q(XW K)T+Bwhere Bi,j=fθlog(c(i−j)+1)log(cmax(i,L)+1)andc, Lare learnable parameters. Liet al. (2023) show empirically that these embeddings allow for length generalization and theoreticallyshow they are capable of representing many other embedding types. Ruoss et al. (2023) proposeusing a random subset of a larger set of possible positions during training so that larger positionalembeddings are trained. Zhou et al. (2024) use randomized FIRE (Ruoss et al., 2023; Li et al., 2023)embeddings to achieve length generalization on arithmetic tasks, which use randomized positions asinput to the small multi layer perceptron used in FIRE embeddings.A.1.2 Arithmetic and Algorithmic Reasoning.Golkar et al. (2023) approach arithmetic by embedding real numbers by scaling a single fixed token-embedding for numbers. Moreover, Dziri et al. (2023) show multiplication is a hard problem forGPT-3 (Brown et al., 2020) even when finetuned on this task. Dziri et al. (2023) further show thatGPT-4 (OpenAI, 2023) struggles to obtain high in-distribution accuracy on multiplication, even witha scratchpad. However, Lee et al. (2023) find that with a detailed scratchpad, small transformerscan perform multiplication in-distribution. Arithmetic is a subset of the larger class of algorithmicreasoning problems that focus on the ability to learn and execute algorithms and generalize to longerproblems (Anil et al., 2022b; Jelassi et al., 2023; Yang et al., 2023b; Veli ˇckovi ́c et al., 2022; Rodionov& Prokhorenkova, 2024). The more general algorithmic reasoning field includes work on variousarchitectures and data modalities aimed at learning algorithms from data. Veli ˇckovi ́c et al. (2022) andRodionov & Prokhorenkova (2024), for example, train neural networks to execute specific algorithmictasks by training on input-output pairs as well as intermediate steps and hints. Additionally, recentwork aims to improve reasoning in LLMs (Zhou et al., 2023), but McLeish et al. (2024) demonstratethat LLMs, even with code interpreters, are less than perfect at algorithmic reasoning tasks, indicatinga crucial need for advancements in our methodologies. This paper takes a step towards improvingLLM arithmetic and algorithmic capabilities without tool use.A.2 Bitwise OR on Binary VectorsA necessary condition to perform addition is aligning digits of the same significance. We beginby examining positional embeddings for exactly this task. To do this we analyze the bitwise ORtask, where the model has to output left aligned position wise OR of two binary vectors. We presentsamples from the dataset in Section A.2.1, these are left aligned to be representative of the task ofaligning digits for reversed addition.8LT ST ST w/ IIArchitecture Type020406080100Exact Match Accuracy71.87.110.583.13.76.855.66.44.7Abacus, OOD FIRE, OOD NoPE, OODFigure 3: Accuracy of models on the bitwise OR task when trained on data with size up to 20,varying over different positional embeddings and architectures. Abacus Embeddings heavily improveperformance on this task.We train standard transformer, standard transformer with input injection and looped transformermodels on the position wise or task, on a dataset where the maximum length of either input vector istwenty. This result is shown in Figure 3. Here we see that the Abacus Embeddings allow all modelsto generalize further on this task than the other embeddings which prior work for addition focuses on.As with addition, we see that looped transformers perform better than the standard architectures withFIRE or NoPE embeddings. We do note that these accuracies are not as high we report for addition.We hypothesize this is because the model is having to repeatedly predict the same token multipletimes, this has been thought to be the cause of errors in prior addition work(Qian et al., 2022). Whenwe analyzed the errors in this task we found they were predominantly caused by the model outputtingone too few or too many zeros.A.2.1 Example Data000010 ⊕00000000000000 = 00001000000000000100 ⊕0000000 = 0001000001⊕00000 = 00100A.3 Pushing the Limits of Algorithmic Reasoning for TransformersWhile there is an emphasis on addition as a difficult problem in existing work, our methods performso well that we look beyond addition and apply our tools to even more difficult problems, includingmultiplication and sorting.A.4 Recurrence In Transformers Boosts PerformanceWith positional embeddings addressed, next we explore whether recurrent architectures can furtherimprove the ability of transformers to perform multi-digit addition. We use the term recurrent blockto refer to a set of decoder layers with distinct weights and recurrences to refer to the number oftimes the recurrent block is repeated. We use the term effective depth to mean the number of layersused in a transformer, whether their weights are unique or not. Unless otherwise stated, we use amaximally recurrent architecture, i.e. only one unique layer recurred to achieve the effective depth.We also employ input injection, skip-connections that propagate a copy of the input to each layer inthe network.The Benefits of Recurrence. We explore the effect of varying the size of the recurrent block whilekeeping the effective depth fixed. We perform this ablation by halving the number of layers in therecurrent block and doubling the number of recurrences, sweeping from a model with sixteen layers in9the block and a single recurrence ( 16×1, i.e. a standard transformer), through to one layer in the blockwith sixteen recurrences ( 1×16). Analyzing Figure 4, we see further performance improvementsare possible in some cases with the combination of both recurrence and Abacus Embeddings. Inparticular, a model with two recurrences ( 8×2) incurs half the error of the purely non-recurrent model(16×1) for OOD problems and enjoys increased accuracy on 100+ OOD problems. Although theexperiments presented in Figure 4 are a fair comparison across depth, the purely standard transformermodels have many more parameters than their recurrent counterparts.16x1 8x2 4x4 2x8 1x16Layers in Recurrent Block X Number of Recurrences020406080100Exact Match Accuracy97.92.93.299.13.63.798.85.54.897.93.72.879.85.37.930.60031.30030.10029.10013.700Abacus, OODAbacus, 100+ OODFIRE, OODFIRE, 100+ OODNoPE, OODNoPE, 100+ OODFigure 4: Varying the size of the recurrent block, while maintaining an effective depth of 16andtraining on size 20data. We see that a recurrent model with eight layers in the recurrent block andtwo recurrences is the most accurate of all effective depth 16models, halving the error rate of astandard model with input injection in the OOD evaluation when using Abacus Embeddings.A.4.1 Integer MultiplicationWe now study a harder task, multiplication of natural numbers, where the length of the output may bethe sum of the lengths of the operands. Compared to addition, where the output is at most one digitmore than the longest operand, multiplication has longer-distance dependency and the output lengthscales much faster as problem size increases.To adapt from addition to multiplication, we make some small changes to our set-up. First, weremove the input injection from inside the recurrent block and second, we divide the gradients in therecurrent block by the number of recurrences, down-weighing the gradient update from batches withmany recurrences (Bansal et al., 2022). (We analyze the impact of these design decisions for additionmodels in Appendix Figure 16.) We only examine looped transformers as the compute required fortraining and hyperparameter search for multiplication is far greater than for addition, limiting us to amuch smaller scale analysis.Abacus Embeddings help looped transformers reach near-perfect accuracy in-distribution for mul-tiplication. In Figure 5, we show how the training distribution, surrounded by the red square fullysaturates with Abacus Embeddings. In fact, models with our Abacus Embeddings achieve higher indistribution accuracy on 15digit multiplication than prior work (Shen et al., 2023) and do not requirepadding each operand to the same length with zeros. In particular, we highlight that the specificproblems that models trained with FIRE embeddings struggle to solve are the hardest problems in thetraining set and Abacus Embeddings outperform them in this key area (see the lower right corner ofthe red boxes in Figure 5).A.4.2 Array SortingWhile both addition and multiplication accept only two operands, we now analyze the task of sortingarrays of multiple variable length numbers, a more challenging testbed for evaluating the generaliza-tion abilities of our Abacus Embeddings. We present each sorting problem using alphabetical indicesfor each (reversed) number in an input array where the expected output is the alphabetical indices inascending order. For example, a: 64957 , b: 99963 , c: 10218 , d: 7141 , e: 05781 = d, e, b, a, c . Wetrain with arrays of up to 10numbers each having up to 10digits and then evaluate with arrays of100 510 15 2005101520Abacus0 510 15 20Abacus + FIRE0 510 15 20FIRE050100AccuracyLength of Operand TwoLength ofOperand OneFigure 5: Exact match accuracy of looped transformer models trained on multiplication, with fourlayers in the recurrent block and four recurrences. The red square denotes in distribution testing on upto15digit operands. We see the models with Abacus Embeddings achieve near perfect in distributionaccuracy. Combining Abacus Embeddings with FIRE also improves in distribution accuracy on thehardest in distribution problems (bottom right), comparing to the FIRE-only baseline.Table 1: Exact match accuracy for sorting with various positional embeddings. All results arepercentages of the test set and all models here are standard transformers with eight layers.FIRE Abacus Abacus + FIREOOD (number length - 30)55.32 68.63 67.28OOD (array length - 30) 21.35 9.67 21 .11All OOD ( 30×30) 3.73 2 .65 4.48All OOD ( 20×20) 14.65 9 .78 16.91Table 2: Accuracy for sorting with various architectures for sorting. ST denotes standard transformer,ST w/ II denotes standard transformer with input injection, and LT denotes looped transformermodels. The standard transformer has the best exact match accuracy. When measuring the accuracyon identifying only the minimum element of the array, looped transformers outperform all others. Allresults are percentages of the test set.ST ST w/ II LTAll OOD (exact string match) 4.48 3.84 2 .60All OOD (min. elem. only) 49.73 60 .09 68.51up to 30numbers each having up to 30digits. We give more detail on the sorting data constructionprocess in Appendix A.6.In this setting, we explore two axes of generalization. First, we increase the maximum possiblelength of the input numbers to 30digits while maintaining the maximum array length to 10and referto this scenario as “OOD (number length - 30).” Second, we increase the number of inputs in thearray to be sorted to 30while keeping the maximum digit length of each number at 10and term thisscenario “OOD (array length - 30).” Finally, we consider a scenario where both axes are increasedsimultaneously, referred to as “all OOD.”In Table 1, we illustrate the performance of a standard transformer (eight layers) trained with differentembeddings—FIRE, Abacus, and their combination. Again, our results demonstrate that the combinedembedding approach enhances the model’s ability to generalize, surpassing the performance of eitherembedding alone in the “all OOD” setting. However, in Table 2, we observe mixed results whenpairing the Abacus+FIRE Embeddings combination with different model architectures with effectivedepth eight. For sorting, different architectures appear to be better suited to different types ofextrapolation, for example the looped transformer is best at extrapolating for finding the minimumelement but not for sorting the whole array.Overall, the superior sorting performance of the Abacus Embeddings underscores their potentialutility across a broader spectrum of algorithmic tasks beyond basic arithmetic. Abacus Embeddingsmay be instrumental in use cases requiring transformer models to perform a variety of complexpositional, numerical, and/or relational reasoning tasks.11A.4.3 Abacus and Relative EmbeddingsAs Abacus Embeddings are only applied to numbers, to incorporate Abacus Embeddings into ageneral purpose model, they must be compatible with other relative embeddings to maintain gooddownstream performance on non-arithmetic tasks. We examine these types of combinations here andconclude that Abacus Embeddings complement techniques that are good for natural language well,suggesting that these combinations could be powerful for large-scale general models.Although Abacus Embeddings are implicitly combined with NoPE (no positional embeddings)embeddings for all experiments seen so far, most state-of-the-art open source models use RotaryEmbeddings. Rotary Embeddings are weak for length generalization. We show that combiningAbacus Embeddings with RoPE does, in fact, yield improvement in operand length generalization.However, in Figure 6, we demonstrate the true potential for integrating Abacus Embeddings intoa more general system, showing that the combination of Abacus Embeddings with FIRE unlocksgeneralization well beyond the problems that FIRE embeddings can solve on their own.0 50 100050100ST w/ IIAbacus + FIRE0 50 100ST w/ IIFIRE0 50 100ST w/ IIAbacus + RoPE0 50 100ST w/ IIRoPE050100AccuracyLength of Operand TwoLength of Operand OneFigure 6: Exact match accuracy of standard transformer of depth 16 with input injection, trained onup to size 20data. The red square denotes in distribution testing. Combining Abacus Embeddingswith FIRE or RoPE embeddings improves out of distribution accuracy for addition, over the baselinemodels without Abacus Embeddings.A.5 Further Addition ResultsA.5.1 The Impact of Recurrence without AbacusIn Figure 7, we compare all architecture variants using both FIRE and NoPE embeddings trained onaddition over operands with up to 40digits. Despite having approximately 10×fewer parameters thanthe other models, we see that the looped transformer (recurrent, with input injection and progressiveloss), achieves the best out of distribution performance using either position embedding. In Figure 8in the Appendix, we show this result is robust across multiple training data sizes.With recurrent models, we can choose to vary the number of recurrences for each forward pass whiletraining. This tends to improve generalization to harder tasks at test time and is also referred to asprogressive loss computation (Bansal et al., 2022). This loss function is a convex combination of theloss values from two forward passes, one with the nominal number of recurrences (so 16for a1×16model) and one with a random smaller number of recurrences.A.5.2 Addition Models Trained on Varying Data SizesAcross Figure 8, we see that increasing the size of the operands in the training set allows for bettergeneralization above one hundred digits for all models. This is partially due to the sampling methodfor training Abacus Embeddings. As the offset randomization hyperparameter k= 100 is fixed acrossexperiments, there are more embeddings trained if the operands seen during training are longer. Thesize of the OOD set below 100is reduced as the size of the operands seen during training increases,as the ID category now includes this data. However, this does still show that the size of the operandsseen during training directly impacts the generalization, with larger training sizes allowing for bettergeneralization.12LT ST ST w/ IIArchitecture Type051015202530Exact Match Accuracy24.323.915.311.48.711.0FIRE, OOD NoPE, OODFigure 7: Mean exact match accuracy of three models of effective depth sixteen on size 40data,varying over NoPE or FIRE embeddings and architectures. Recurrent looped transformer modelsimprove accuracy for addition for both the FIRE and NoPE embeddings.LT ST ST w/ IIArchitecture Type050100Exact Match Accuracy35.80.81.121.71.10.768.90.90.73.3000.5009.100Train Data Size: 10LT ST ST w/ IIArchitecture Type05010079.85.37.992.93.64.397.92.93.213.70026.70030.600Train Data Size: 20LT ST ST w/ IIArchitecture Type050100Exact Match Accuracy96.816.115.899.66.27.699.45.86.632.50048.00047.000Train Data Size: 30LT ST ST w/ IIArchitecture Type05010099.124.323.999.915.311.499.38.711.060.40064.10062.500Train Data Size: 40Abacus, OODAbacus, 100+ OODFIRE, OODFIRE, 100+ OODNoPE, OODNoPE, 100+ OODFigure 8: Mean exact match accuracy of three models of effective depth sixteen, varying the trainingdata and architecture. We omit from the plot the in distribution accuracies as these are all 100% orvery close to 100% for all models, this can be verified by the dark blue inside of all of the red squaresin Section A.5.4. Models trained on larger operands achieve higher OOD accuracy.A.5.3 Extreme Length Generalization for AdditionAbsolute positional embeddings must be learned during training otherwise they are unusable attest time. This limits our Abacus Embeddings which are trained with the offset randomizationhyperparameter k= 100 . One possible way to resolve this generalization problem is to increase thevalue of kduring testing. In Figure 9, we show the exact match accuracy of five looped transformermodels, with eight layers in the recurrent block and two recurrences trained on size 20data withAbacus Embeddings and k= 101 , generalizing to 120digit addition. We only show the accuracy foroperands of the same length in Figure 9, seeing these models consistently achieve accuracies of 95%and above. We see this across the paper this method is much more robust than that presented by Zhouet al. (2024).130102030405060708090100110120Operand Length0102030405060708090100Exact Match AccuracyRun 1Run 2Run 3Run 4Run 5In DistributionFigure 9: Exact match accuracy of five models trained on size 20data, generalizing well to 120digitaddition, an extrapolation of 6×. Only showing the accuracy for operands of the same length.A.5.4 Addition Full 100 x 100 PlotsHere we present the mean accuracy as heatmaps for the main addition experiments shown throughoutthe paper. Figure 10 (left) corresponds to Top Left of Figure 8. Figure 10 (right) corresponds to TopRight of Figure 8 and Figure 2. Figure 11 (left) corresponds to Bottom Left Figure 8. Figure 11(right) corresponds to Bottom Right Figure 8 and Figure 7. Figure 12 corresponds to Figure 4. All ofthese figures show the Abacus Embeddings ability to generalize in both dimensions of the additionproblem.050100LT, Abacus LT, FIRE LT, NoPE050100ST, Abacus ST, FIRE ST, NoPE0 50 100050100ST w/ II, Abacus0 50 100ST w/ II, FIRE0 50 100ST w/ II, NoPE020406080100Length of Operand TwoLength of Operand One050100LT, Abacus LT, FIRE LT, NoPE050100ST, Abacus ST, FIRE ST, NoPE0 50 100050100ST w/ II, Abacus0 50 100ST w/ II, FIRE0 50 100ST w/ II, NoPE020406080100Length of Operand TwoLength of Operand OneFigure 10: Full 100×100exact match accuracy plots, taking the mean over three models. Left: Size10 training data, corresponding to Top Left of Figure 8; Right: Size20training data, correspondingto Top Right of Figure 8 and Figure 2.A.6 DatasetsAddition: We sample equally, with replacement, from all i×ipossible operand lengths up to themaximum dataset size of 20million, we call this a dataset of size iin the main text. For evaluationwe sample 100samples for each pair of operand lengths evaluated.Bitwise OR: The input for this problem is two binary vectors, the longer input vector is all zerosand the shorter input contains a one. The output should be the length of the longer vector with theone in the same position as in the shorter vector. If the inputs are the same length, the one can bein either vector. E.g. 001⊕00000 = 00100 . For training, we exhaustively sample the space of allvectors of sizes less than or equal to the predefined maximum input vector size.14050100LT, Abacus LT, FIRE LT, NoPE050100ST, Abacus ST, FIRE ST, NoPE0 50 100050100ST w/ II, Abacus0 50 100ST w/ II, FIRE0 50 100ST w/ II, NoPE020406080100Length of Operand TwoLength of Operand One050100LT, Abacus LT, FIRE LT, NoPE050100ST, Abacus ST, FIRE ST, NoPE0 50 100050100ST w/ II, Abacus0 50 100ST w/ II, FIRE0 50 100ST w/ II, NoPE020406080100Length of Operand TwoLength of Operand OneFigure 11: Full 100×100exact match accuracy plots, taking the mean over three models. Left: Size30 training data, corresponding to Bottom Left Figure 8; Right: Size 40 training data, correspondingto Bottom Right Figure 8 and Figure 7.05010016x1, Abacus 16x1, FIRE 16x1, NoPE0501008x2, Abacus 8x2, FIRE 8x2, NoPE0501004x4, Abacus 4x4, FIRE 4x4, NoPE0501002x8, Abacus 2x8, FIRE 2x8, NoPE0 50 1000501001x16, Abacus0 50 1001x16, FIRE0 50 1001x16, NoPE020406080100Length of Operand TwoLength of Operand OneFigure 12: Full 100x100 exact match accuracy plots, taking the mean over three models, relating toFigure 4.Sorting: Given a list of reversed integers indexed by characters, output the characters in ascendingorder. E.g. a: 64957 , b: 99963 , c: 10218 , d: 7141 , e: 05781 = d, e, b, a, c . We implement thesampling process for sorting in a grid like manor. We query each “square” of an [1, n]×[1, n]grid15until the maximum size has been reached for the dataset. When querying “square” (i, j)we randomlysample iintegers of size less than or equal to jdigits. We randomly sample consecutive indices forthe natural numbers in our list at both train and test time.Multiplication: We implement the multiplication datasets for both training and testing the exactsame manor as for addition, only changing the operation used to calculate the answer.A.7 Addition AblationsA.7.1 Analyzing the Intermediate Properties of RecurrenceThanks to the looped transformer architecture, we can extract intermediate solutions from the models,allowing us to plot the models outputs over iterations of the recurrent block. We present an examplein Figure 13 and suggest that this level of interpretability could be leveraged in future work. Themodel presented is a 1×16model, one decoder layer and sixteen recurrences. We do not show thefull16iterations in this plot for readability but these models do maintain a fixed point to 16iterationsand beyond.Figure 13: Plot showing the improvement of the prediction over “thinking” iterations on a 100digitaddition problem.Input Prompt:5879287854346790803556089719498716671892210129414436974968915190512644198885716170096255295233702836+4358110391552830769683978480187501721764900525218097903808750786159803668915002036143168815597779644=Answer:91957607362637455084591168463002008419165877289199410541852759575026294320392841758606474262584957001[EOS](Note that the plot is truncated.)A.7.2 Removing Masking Before EqualsWe mask all tokens before the equals sign in all of our experiments, we hypothesize that with moretraining time this constraint may be able to be removed. In Figure 14, we show the effect of trainingwith the same amount of flops as the other addition experiments without masking before the equalssign.A.7.3 Varying Effective DepthIn Figure 15, we present models with effective depths 8and more than 16, respectively. In Figure15 (left), we see that the effective depth 8models under perform the models with 8layers in therecurrent block and two recurrences shown in Figure 4, demonstrating the benefit of recurrence in thiscase. We see very high accuracy from all models in Figure 15 (right). Again, the depth 32recurrent16LT ST ST w/ IIArchitecture Type020406080100Exact Match Accuracy9.10.50.853.00.71.384.93.52.1Abacus, OOD FIRE, OOD NoPE, OODFigure 14: Effect of removing the masking of the loss before the “=” sign in the addition task. Allmodels perform worse when trained for 24hours on a single Nvidia RTXA4000 if we do not maskthe input question in the loss function.models outperform the standard models with input injection, even though it only has approximatelya quarter of the parameters and achieves the highest OOD mean accuracy of all models presented.These ablations show that with Abacus Embeddings the addition task can be learned across manyeffective depths to varying degrees of accuracy.LT ST ST w/ IIArchitecture Type020406080100Exact Match Accuracy49.24.12.296.44.54.797.54.13.60.10028.10029.700Abacus, OODAbacus, 100+ OODFIRE, OODFIRE, 100+ OODNoPE, OODNoPE, 100+ OOD1x32 4x8 8x8Recurrences X Size of Block020406080100Exact Match Accuracy98.63.43.999.64.95.598.24.14.228.60030.50029.100Abacus, OODAbacus, 100+ OODFIRE, OODFIRE, 100+ OODNoPE, OODNoPE, 100+ OODFigure 15: Left: Effective depth 8 models, trained on size 20data. These models under performthe models with eight layers in the recurrent block and two recurrences shown in Figure 4, showingthe benefit of recurrence for addition. Right: Effective depth >16 models, trained on size 20data.The models contain many more parameters than all other models we present, showing more that aneffective depth of more than 16does not necessarily improve accuracy in this setting.In Figure 16 (left), we remove the input injection to the intermediate layers in the recurrent block,only keeping input injection to the first layer of the recurrent block. In Figure 16 (right) we dividethe gradients in the recurrent block by the number of recurrences for the looped transformer modelsduring training. We see very minor performance changes for all models shown in Figure 16, with the2×8model improving its performance slightly in left plot and the 4×4model improving slightlyin the right plot. We ablate this design choices as we have to remove the input injection inside ofthe recurrent and divide the gradients in the recurrent block by the number of recurrences for themultiplication models show in Figure 5. Hence, we can conclude there would only be very minorperformance changes in this case for addition.A.7.4 Adding randomized PaddingAbacus Embeddings give strong priors for numerical tasks but without them, looped transformersperform better than the standard transformer architectures we present. The result shown in Figure17 aligns well with the hypothesis that with fewer priors the looped transformer models are able togeneralize better. In this case the priors are reduced as the training data is noised with random padsymbols, a method which was shown to improve length generalization in prior work (Shen et al.,2023).178x2 4x4 2x8Layers in Recurrent Block X Number of Recurrences020406080100Exact Match Accuracy97.198.398.129.130.726.3Abacus, OOD Abacus, 100+ OOD8x2 4x4 2x8Layers in Recurrent Block X Number of Recurrences020406080100Exact Match Accuracy98.399.695.229.930.731.3Abacus, OOD Abacus, 100+ OODFigure 16: Replicas of the looped transformer models shown in Figure 4, to check the modificationswe use to train addition models do not adversarially impact addition training, taking the mean ofthree models in each case. Left: without the input injection to the layers inside of the recurrent block,only to the first layer of the recurrent block. Right: dividing the gradients in the recurrent block bythe number of recurrences.LT ST ST w/ IIArchitecture Type020406080100Exact Match Accuracy72.84.96.31.53.63.31.42.93.4Abacus, OOD FIRE, OOD NoPE, OODFigure 17: Effect of adding randomized padding into training data only for the addition task. Loopedtransformer models are able to maintain high accuracy when random padding is added into the data.A.7.5 Index HintsZhou et al. (2023) “randomly sample consecutive index hints from a pre-defined ordered set of hintswith 102 symbols,” for example a6b7c5 +a1b6c3 =a7b3c9. We implement this method two ways.Firstly, cyclic, here we treat the list as cyclic when sampling. Secondly, non-cyclic, this reduces thenumber of samples which receive the embeddings later in the ordering as we only sample from thelist in order. We see similar results for models trained on up to twenty digits as Zhou et al. (2023).We do note that our format of taking the mean exact match accuracy does highlight robustness asif one of the three models tested were to not generalize well, this would impact reported accuracyheavily. We only show a comparison to size 20training data due to the increased cost of evaluatingthese index hint models, as the inputs and outputs are approximately double the length of regularquestions the inference time is heavily increased. Due to the robustness issues highlighted by Zhouet al. (2024) with their methods, we try to the best of our abilities to faithfully reproduce their workwithin our experimental set up, noting that perhaps a better random seed or initialization may be ableto produce better results for these models.A.8 Additional Experimental InformationIn this work, we consider three different model types, the classical standard transformer, standardtransformer with input injection, and looped transformers. We visually describe these in Figure 19.Due to the looped transformer architecture the number of recurrences at train time can be different tothe number of recurrences at test time, although we do not make use of this in this work.18LT ST ST w/ IIArchitecture Type020406080100Exact Match Accuracy13.86.014.612.39.95.8FIRE Rand, OOD FIRE Rand Circular Hints, OODFigure 18: Using index hints and randomized FIRE embeddings, presented by Zhou et al. (2024),training on size 20data with our methodology, such as masking before the equals sign. This wouldbe comparable to “ 1to20” in Figure 13 presented by Zhou et al. (2024) and Figure 2 of our work.Figure 19: Visualization of the three architectures we study.As Abacus Embeddings are a variant of absolute embeddings, reused only for numbers, they couldbe combined with relative embeddings being deployed in current models. If all digits input to themodel are tokenized individually, we can perform a linear time operation to find and assign relativeembeddings to all numbers in an input, which is lower than the quadratic cost incurred by attention.Training a small number of Abacus Embeddings may be enough to handle all numerical inputs foraddition as they are reused. To fully implement our methodology all numbers also have to be reversed,this can be implemented with simple regular expressions on all inputs and outputs.To facilitate training of many models from scratch, we use a language model cramming setup (Geiping& Goldstein, 2023) and limit each training run to 8exaFLOP of compute (a single Nvidia RTXA4000GPU for 24hours); for multiplication results we allow 64exaFLOP (eight Nvidia RTXA4000 GPUsfor24hours). During training, we mask the input question and only compute loss on the answerdigits. We use a character level tokenizer for all experiments and greedy decoding in all testing.We train all models with a local batch size which is the maximum batch size that is a power of twothat will fit into the sixteen gigabytes of GPU memory. For multiplication models we first take themean loss across samples before taking the mean across all samples in a batch, instead of taking themean loss across all token in a batch; we find this leads to slightly more stable training. We note thattraining models to solve multiplication requires more hyperparameter tuning than addition, perhapsimplying it is a trickier task to learn. Also, FIRE models require a much greater compute budget forhyperparameter search as compared to Abacus models for multiplication. In Table 3, we present theapproximate parameter counts for models trained with input injection and Abacus Embeddings.Compute Usage. We detail the default use of GPUs for each experiment in Table 4. For someexperiments, such as extreme length generalization (Figure 9) and index hints (Figure 18) moreGPU hours are required for testing, these are included in the total number of GPU hours used. Ourtesting pipeline for addition and Bitise OR uses Nvidia V100 GPUs. Due to a technical problem,‘torch.compile’ cannot be used on the V100 GPUs we use, therefore others may be able to reducethis compute time in future studies. All compute was provided by internal resources. During theexploratory phase of this project, we used more GPU hours to test and design the experiments shown,19Table 3: Number of parameters, to the nearest million, in a model with Abacus Embeddings and inputinjection.Layers in Recurrent Block Recurrences Parameters (Millions)16 1 1228 2 644 4 342 8 191 16 12Table 4: Default number of Nvidia GPU hours used to train a model.Dataset Number of GPU Hours (training) Number of GPU Hours (testing)Addition 24 - RTXA4000 65.8 - V100Bitwise OR 1 - RTXA4000 45 - V100Sorting 24 - RTXA4000 64 - RTXA4000Multiplication 192 - RTXA4000 0.83 - RTXA4000using approximately 1.5terabytes of storage of the entire project. An estimate of the total computerequired for all of the results presented in the main paper is 10,039GPU hours. The appendix resultsrequire a further 18,278GPU hours.A.8.1 HyperparametersWe detail what we believe to be an important subset of the default hyperparameter values in Table5. A full list of all hyperparameters and model configurations is contained in the code release. Formultiplication models with FIRE embeddings, the learning rate is 0.00006 , due to large instabilitiesin higher learning rates which were not experienced for the Abacus Embeddings.A.8.2 Code ReleaseWe will release all code and datasets on GitHub with an MIT License.Table 5: Default hyperparameter values.Hyperparameter Default ValueHidden Size 1024Intermediate Size 2048Embedding Size 1024Number of Attention Heads 16Progressive Loss Alpha (Bansal et al., 2022) 1.0Data Type float16/float32Optimizer AdamW (Loshchilov & Hutter, 2017)Global Batch Size 8192Batch Size Ramp 0.6Learning Rate 0.0001Learning Rate Scheduler Trapezoid (Zhai et al., 2022)Activation Function GELUglu (Shazeer, 2020)Normalization Layer LayerNorm (Ba et al., 2016)Normalization Type PostOffset Randomization Hyperparameter ( k) 100Initialization Deepnorm (Wang et al., 2022)20 |
b2Ni828As7 | Transformers to Predict the Applicability of SymbolicIntegration RoutinesRashid BarketCoventry [email protected] ShafiqCoventry [email protected] EnglandCoventry [email protected]̈rgen [email protected] integration is a fundamental problem in mathematics: we consider howmachine learning may be used to optimise this task in a Computer Algebra System(CAS). We train transformers that predict whether a particular integration methodwill be successful, and compare against the existing human-made heuristics (calledguards) that perform this task in a leading CAS. We find the transformer canoutperform these guards, gaining up to 30% accuracy and 70% precision. Wefurther show that the inference time of the transformer is inconsequential whichshows that it is well-suited to include as a guard in a CAS. Furthermore, weuse Layer Integrated Gradients to interpret the decisions that the transformer ismaking. If guided by a subject-matter expert, the technique can explain some ofthe predictions based on the input tokens, which can lead to further optimisations.1 IntroductionSymbolic integration is well-studied, proven to be undecidable (see e.g. discussion in Chapter22 in Gerhard and V on zur Gathen [2013]). The most widely applicable method is the Rischalgorithm (Risch [1969]). However, the original algorithm does not work with special functions,and implementations of this algorithm has trouble with algebraic extensions. It is also complicated,taking more than 100 pages to explain in (Geddes et al. [1992]): no Computer Algebra System (CAS)has a full implementation. Hence, researchers in CA have designed a variety of other symbolicimplementation methods.More recently have there been attempts to use Machine Learning (ML) to perform symbolic integra-tion. First, Lample and Charton [2020] implemented a transformer to directly integrate an integrand,outperforming several mature CASs on the generated test data. Further attempts have been madeusing LLMs (Noorbakhsh et al. [2021]), and by chaining explainable rules (Sharma et al. [2023]).More generally, there has been work developing a single LLM to perform a variety of mathematicalreasoning tasks including (in)definite integration such as (Drori et al. [2022], Hendrycks et al. [2021]).We are interested in improving the efficiency of indefinite integration within a CAS using ML,while still ensuring that correctness of the answer is guaranteed, through ML-based optimisationand algorithm selection. Recent work applied TreeLSTMs to show there is room for significantperformance improvements (Barket et al. [2024a]). We focus on the popular commercial CAS,Maple. Maple’s user-level integration call is essentially a meta-algorithm: it can employ severaldifferent methods for symbolic integration. They are currently tried in a deterministic order until onesucceeds, at which point the answer is returned without trying the other methods. A key part of this38th Conference on Neural Information Processing Systems (NeurIPS 2024).Figure 1: A high-level overview of how the indefinite integral command works in Maple to calculateF(x) =Rf(x)dx.implementation is what we call the guards : code that is run prior to calling one of the methods todecide whether or not attempting to use the method is worthwhile. The reasoning behind having aguard is that some of the methods are computationally expensive: it is a waste of time to go through acomplex algorithm just to find out that it fails. Figure 1 gives a high-level overview of this idea.Not all methods have a guard, and some guards do partial work for the method making them expensivefor a quick check. Appendix A gives a list of methods and their guard (should they exist). We createsmall ML models to predict success for methods and investigate whether these can provide guards formethods that lack them, or replace the computationally expensive guards (provided they do better).We also investigate whether interpretations of these models might inform further guard development.2 DatasetOur dataset is composed of integrable expressions in their prefix notation. The data comes from sixdifferent data generators: the FWD, BWD, and IBP generators from Lample and Charton [2020];the RISCH generation method from Barket et al. [2023]; the SUB generation method from Barketet al. [2024a]; and the LIOUVILLE generation method from Barket et al. [2024b]. For moreinformation see Appendix B.1. We obtain the labels for each integrand by recording which methodssucceed ( 1) and which fail ( 0) for each expression. We note that this labelling is considerably morecomputationally expensive than the ML training later. The data is stored as a list where, for position iin the list, the value records if method iis a success or failure.The data goes through several pre-processing steps to shrink the vocabulary size and to avoid havingexpressions of similar form over-represented in the dataset, as described in Appendix B.2. In total,there are 1.5M and 60K samples for training and testing respectively, with an equal split from eachdata generator. The labels do not occur equally however: some methods have a much higher rateof success than others. This is because some of the methods are only made for certain types ofintegrands (e.g. Elliptic ) whereas others are more general-purpose (e.g. Risch ). Figure 2 showsthe frequency of integrating successfully over the train set from each method.Figure 2: Frequency of each method being successful on 1.5M examples from six data generators.2Table 1: Comparing each transformer to predicting positive every time on the test dataset. Thesemethods have no guard; they always run when Maple’s algorithm reaches that specific method.MethodAccuracy (%)Transformer GuardDefault 94.86 82.15DDivides 94.13 28.18Parts 93.10 37.05Risch 94.53 89.35Norman 95.74 71.67Orering 97.21 37.88ParallelRisch 97.82 82.73Table 2: Comparing each transformer to the Maple guard on the full test dataset.MethodAccuracy (%) Precision (%)Transformer Guard Transformer GuardTrager 98.21 67.55 92.21 15.88MeijerG 96.78 88.72 89.58 57.78PseudoElliptic 99.28 62.61 62.86 2.03Gosper 94.04 92.51 92.08 80.213 Training Transformers to Predict Method Success3.1 Experiment SetupFor each of the methods in Maple 2024, we train a classifier to determine whether it is worthwhileattempting said method. We use an encoder-only transformer Vaswani et al. [2017] to predictsuccess or failure, with architecture similar to Lample and Charton [2020] and Sharma et al. [2023].Hyper-parameters and the full model architecture are discussed in Appendix C. The results are thencompared to the guards implemented by human experts in Maple. In this scenario, a false positive isconsidered worse than a false negative. This is because trying a method and failing wastes computetime, whereas skipping an algorithm that may succeed may be okay because another method couldpossibly succeed (and if none do, we could always then try the guarded ones at the end). Thus, weevaluate accuracy and precision when comparing each transformer to each guard.3.2 ResultsTable 1 shows the results for methods that do nothave a guard. In this case, the transformers arecompared to a hypothetical guard that simply predicts a positive label every time, since the currentworkflow would always attempt such methods. These transformers range in accuracy from 93% to98%. Hence, there is much scope for using ML to prevent wasteful computation. An example of sucha case is for Risch andNorman : both have an operation, Field , that creates the base field along withits field extensions before proceeding with the algorithm. This operation takes nearly 20% of theirruntime before determining whether to run the rest of the algorithm or output fail. On a batch of 1024examples, the mean transformer runtime was only 0.0895 s compared to Field which took 0.125s(39.7%longer). In such a case, the transformers can help save computation time before performingtheField operation.Table 2 shows the results for the methods that do already have a guard. In each of these cases, we seethat a transformer can beat the guards, and some by a margin of over 30%. We note that the guardsforTrager andPseudoElliptic are particularly weak compared to the transformers, with muchlower accuracy and precision. The transformers can avoid the false positives that the guards cannot.The number of positive samples for PseudoElliptic is scarce and future work may include ananomaly detection approach for this method. A caveat is that the inference time of the transformers ishigher than the existing guards. As mentioned above, the transformers took 8.95e−2s on average persample whereas Trager ,MeijerG ,PseudoElliptic , and Gosper took1.1e−6,1.4e−6,4.7e−7,and4.7e−7s respectively. The transformers take magnitudes longer to complete inference. However,we hypothesize that the huge gains in precision over these guards will help prevent a lot of wasteful3Table 3: Comparing each transformer to a perfect Maple guard on the filtered dataset.Method # SamplesAccuracy (%) Precision (%)Transformer Guard Transformer GuardTrager 19287 95.37 15.88 93.39 15.88PseudoElliptic 19287 98.04 2.03 63.37 2.03Gosper 18929 86.26 80.21 96.13 80.21computation and so overall, Maple’s integration algorithm would save time, to be confirmed in futurework.Some guards are known to be perfect at predicting incorrectness: i.e. if such a guard outputs that amethod will not work, then it is a mathematical proof that it will never succeed. Although such guardswill always predict the true negatives correctly, they may make mistakes on the true positives. Table3 reports performance after we filter the data with such guards, by removing the samples the guardfinds negative (the first two have a similar guard which caused them to have the same filtered dataset).Note that because these guards are perfect at predicting the negative target class, their accuracy andprecision in Table 3 are exactly that of the precision on the full dataset in Table 2. We see that, unlikethe guards, the transformers have good performance on both classes.4 Layer Integrated Gradients to Interpret the ClassifiersWe have seen that ML can optimise the work of CASs here, but we are also interested in how theML models make their decisions: both to give confidence in their results and perhaps to informbetter hard-coded guards. This proved to be the case when SHAP values were used to explain thepredictions of traditional ML models optimising Cylindrical Algebraic Decomposition (Pickeringet al. [2024]). However, transformers are most widely used in Math-AI Zhou et al. [2024], Sharmaet al. [2023], Charton [2024] and their explanation remains a challenge. Sharma et al. [2023] proposedan explainable approach for symbolic integration: computing integrals by applying rules step-by-stepto form a chain from input to output. However, while this explains how the integral may be computed,it does not explain the decisions the transformers make to select these rules.We further experiment with interpreting the transformers using Layered Integrated Gradients (LIGs)Sundararajan et al. [2017]; an extension of Integrated Gradients that gives insight into different layersof a neural network. We used LIGs with the embedding layer of our model to compute the attributionscore of each input token as the embedding layer converts the input tokens into a dense vector space.We used 50 steps and the default baseline as described in Kokhlikyan et al. [2020].One interesting observation concerns the Risch method and the attribution scores found for thetoken of the absolute value function ( abs). These always contribute negatively, usually highly so,in the observed samples which indicates that the presence of this token will adversely impact theprediction for use of this method. A visualisation for a particular example is shown in Figure 3.After discussing with domain experts, we find this a satisfying interpretation: the Risch algorithm issuited for elementary functions and the absolute value function does not fall in this category. In fact,the Risch algorithm is not proven to work when the absolute value function is included in the fielddefined in the algorithm (Gerhard and V on zur Gathen [2013]). Note this explanation also suggests asimple edit to improve the existing guard code!The attribution scores for all the samples were calculated and aggregated to plot the graph in Figure 4.It can be observed that the attribution scores for absare the most negative, further demonstratingthe alignment of the interpretation and the human expert. Of course, this is just one observationfor a single method and domain expertise remains crucial for validating the insight. But it points topotential for further progress through XAI tools.Figure 3: Example sequence to depict the attribution scores corresponding to different tokens whereblue is positive and red is negative. Note the strongly negative score for the abstoken.4Figure 4: Aggregated attribution scores for all test samples containing the abstoken for the Rischmethod. Note the abstoken has a strongly negative score.5 ConclusionsIn summary, we have demonstrated how we may improve the symbolic integration function in Maplewithout risking its mathematical correctness, by using transformers to predict when an integrationmethod in Maple will succeed. The transformers were compared to heuristics crafted by domainexperts: Table 1 shows that when a heuristic does not exist, a transformer is a suitable guard atpredicting success, achieving between 93-98% accuracy. For the four methods that do have a guard,we also show that transformers make better predictions than these guards. In some cases, accuracyand precision increase significantly compared to the guards, over 30% and 70% respectively.The models, specifically the embedding layers, were then analysed using LIGs to try to interpret themodels. We demonstrated that this approach can give a satisfying explanation for some predictions(the presence of the abstoken being heavily associated with a negative label for the Risch method)although we note that domain knowledge is needed to understand why LIGs produce this result. Thisexplanation in turn suggests improvements to the human-designed guards.5.1 Future WorkWe have shown the potential for ML optimisation of the Maple integration meta-algorithm. As nextsteps we will embed the transformers and evaluate the savings: measured both in number of methodstried (by trying them in order of probability of success) and in overall CPU time.The existence of some perfect guards suggests there is scope for a hybrid approach between algebraictools and transformers which we will also explore in future work. After that, we will seek toimplement our best approach into the actual workflow of Maple’s symbolic integration algorithm.We would then be able to empirically measure how much time is saved in the algorithm rather thananalysing each method individually.We will also continue to experiment with XAI tools to see if the transformers are learning the samerules as the integration method it is learning about, inspired by the insight observed above. TheLIGs only offered explanations at a token level and most likely we need a tool that can also analysethe presence and prevalence of sequence and groups of tokens. While LIGs provided us with someinsight, there was a need for a domain expert to validate this, and that is likely to remain the case.Finally, we note again that in similar applications such as Pickering et al. [2024] and Coates et al.[2023], the authors were able to discover new interpretable heuristics based on the observations fromthe ML models. Being able to understand the predictions is interesting, but the ultimate goal wouldbe to discover new interpretable heuristics for the methods that do not have a guard using XAI tools.This can give better insight on how we understand symbolic integration as a whole and discover newmathematical properties associated with these methods.5ReferencesRashid Barket, Matthew England, and Jürgen Gerhard. Generating elementary integrable expressions.In F. Boulier, M. England, I. Kotsireas, T.M. Sadykov, and E.V . V orozhtsov, editors, ComputerAlgebra in Scientific Computing , volume 14139 of Lecture Notes in Computer Science , pages21–38. Springer Nature Switzerland, 2023. doi: 10.1007/978-3-031-41724-5_2.Rashid Barket, Matthew England, and Jürgen Gerhard. Symbolic integration algorithm selec-tion with machine learning: LSTMs vs tree LSTMs. In Mathematical Software – ICMS2024 , pages 167–175. Springer Nature Switzerland, 2024a. URL https://doi.org/10.1007/978-3-031-64529-7_18 .Rashid Barket, Matthew England, and Jürgen Gerhard. The liouville generator for producingintegrable expressions. In François Boulier, Chenqi Mou, Timur M. Sadykov, and Evgenii V .V orozhtsov, editors, Computer Algebra in Scientific Computing , pages 47–62. Springer NatureSwitzerland, 2024b. doi: 10.1007/978-3-031-69070-9_4.Francois Charton. Learning the greatest common divisor: explaining transformer predictions.InThe Twelfth International Conference on Learning Representations , 2024. URL https://openreview.net/forum?id=cmcD05NPKa .Tom Coates, Alexander Kasprzyk, and Sara Veneziale. Machine learning detects terminal singu-larities. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors,Advances in Neural Information Processing Systems , volume 36, pages 67183–67194. Curran As-sociates, Inc., 2023. URL https://proceedings.neurips.cc/paper_files/paper/2023/file/d453490ada2b1991852f053fbd213a6a-Paper-Conference.pdf .Ernest Davis. The use of deep learning for symbolic integration: A review of (Lample and Charton,2019). arXiv , 2019. URL https://doi.org/10.48550/arXiv.1912.05752 .Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deepbidirectional transformers for language understanding. In Proceedings of the 2019 Conference ofthe North American Chapter of the Association for Computational Linguistics: Human LanguageTechnologies, Volume 1 , pages 4171–4186. Association for Computational Linguistics, June 2019.doi: 10.18653/v1/N19-1423.Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu,Linda Chen, Sunny Tran, Newman Cheng, et al. A neural network solves, explains, and generatesuniversity math problems by program synthesis and few-shot learning at human level. Proceedingsof the National Academy of Sciences , 119(32):e2123433119, 2022. doi: 10.1073/pnas.2123433119.K. O. Geddes, S. R. Czapor, and G. Labahn. Algorithms for Computer Algebra . Springer New York,NY , 1992. doi: 10.1007/b102438.Jürgen Gerhard and Joachim V on zur Gathen. Modern computer algebra . Cambridge UniversityPress, 2013.Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. InThirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track(Round 2) , 2021. URL https://openreview.net/forum?id=7Bywt2mQsCe .Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, JonathanReynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. Captum: A unified and generic model interpretability library for pytorch, 2020.Guillaume Lample and François Charton. Deep learning for symbolic mathematics. In Proc. Intl.Conf. on Learning Representations (ICLR) , 2020. doi: 10.48550/arxiv.1912.01412.Kimia Noorbakhsh, Modar Sulaiman, Mahdi Sharifi, Kallol Roy, and Pooyan Jamshidi. Pretrainedlanguage models are symbolic mathematics solvers too! ArXiv , abs/2110.03501, 2021. URLhttps://api.semanticscholar.org/CorpusID:238419670 .6Lynn Pickering, Tereso del Río Almajano, Matthew England, and Kelly Cohen. Explainable AIInsights for Symbolic Computation: A case study on selecting the variable ordering for cylindricalalgebraic decomposition. Journal of Symbolic Computation , 123, July 2024. ISSN 0747-7171.doi: 10.1016/j.jsc.2023.102276.Bartosz Piotrowski, Josef Urban, Chad E. Brown, and Cezary Kaliszyk. Can neural networks learnsymbolic rewriting? In Proc. Artificial Intelligence and Theorem Proving (AITP) , 2019. doi:10.48550/arXiv.1911.04873.Robert H Risch. The problem of integration in finite terms. Transactions of the American Mathemati-cal Society , 139:167–189, 1969. doi: 10.1090/S0002-9947-1969-0237477-8.Vaibhav Sharma, abhinav nagpal, and Muhammed Fatih Balin. SIRD: Symbolic integration rulesdataset. In The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS’23 , 2023. URLhttps://openreview.net/forum?id=WWDsbsgyhS .Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In DoinaPrecup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on MachineLearning , volume 70 of Proceedings of Machine Learning Research , pages 3319–3328. PMLR,06–11 Aug 2017. URL https://proceedings.mlr.press/v70/sundararajan17a.html .Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V onLuxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, edi-tors, Advances in Neural Information Processing Systems , volume 30. Curran Associates,Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf .Hattie Zhou, Arwen Bradley, Etai Littwin, Noam Razin, Omid Saremi, Joshua M. Susskind, SamyBengio, and Preetum Nakkiran. What algorithms can transformers learn? a study in lengthgeneralization. In The Twelfth International Conference on Learning Representations , 2024. URLhttps://openreview.net/forum?id=AssIuHnmHX .7A Integration Methods and their GuardsThere are 13 different integration methods exposed to the user in Maple 2024. These can be passed tothemethod parameter in the intfunction. We obtain results for all sub-routines simultaneously bypassing method=_RETURNVERBOSE to the command. An example screenshot is shown in Figure 5.Running this process over the entire dataset is how we generate the labels for the experiment.Figure 5: An example output from the indefinite integration command when all the methods arecalled. Some answers have a different form but are all mathematically equivalent.Exposing the guards is a more subtle issue. Within Maple there exists the packageIntegrationTools:-Indefinite which lets the user gain access to the indefinite integrationcommands, including each method listed in Figure 5 individually. With the Maple commandshowstat() , the user can then see (some of the) source code for them. The list of guards can befound in the GitHub link (provided after reviews) but we give a plain-language explanation of eachone here:•Trager : check if the expression is a radical algebraic function.•MeijerG : Check if the expression is a special function or algebraic. Our dataset includes nospecial functions so we only use the latter condition.•PseudoElliptic : Check if the expression is a radical function.•Gosper : Using the DETools package, check if the expression is hyperexponential. Afunction His hyperexponential of xifddxH(x)H(x)=R(x),a rational function of xFor more information on the meaning of the data types described, see Maple’s type help page:https://maplesoft.com/support/help/maple/view.aspx?path=type .B Data Generation and PreprocessingB.1 Generation MethodsLample and Charton [2020] were the first to describe data generation methods for indefinite integration.They presented three methods to produce (integrand, integral) pairs:•FWD: Integrate an expression fthrough a CAS to get Fand add pair ( f, F) to the dataset.•BWD: Differentiate fto get f′and add the pair ( f′, f) to the dataset.•IBP: Given two expressions fandg, calculate f′andg′. IfRf′gis known (i.e. exists in adatabase) then the following holds (integration-by-parts):Rfg′=fg−Rf′g. Thus, weadd the pair ( fg′,fg−Rf′g) to the dataset.While this can generate a sufficient quantity of data, there exists a problem with insufficient varietyof data. The present authors and others have discussed the issues (Davis [2019], Piotrowski et al.[2019], Barket et al. [2023]). We briefly overview the problems here:•There is a bias in the length of the expressions where a generator can produce long integrandsand short integrals or vice-versa.•Generated integrands can end up having too similar of a form within the same dataset (i.e.differing only by their coefficients).8• They have a hard time generating integrands of certain forms (see Barket et al. [2024b] formore on this point).Prior work by the authors creates several different generators to help address such issues based onmathematical properties of integrands. Liouville is the first to have described the general form of anelementary integral if it exists. Two of the generators are based on the work of Liouville so we firststate Liouville’s theorem (Section 12.4in Geddes et al. [1992]) for symbolic integration.Theorem 1 (Liouville’s theorem) : Let Dbe a differential field with constant field K(whosealgebraic closure is L). Suppose f∈Dand there exists g, elementary over D, such that g′=f.Then,∃v0, . . . , v m∈D, c 1, . . . , c m∈Lsuch thatf=v′0+mXi=1civ′ivi=⇒g=v0+mXi=1cilog(vi).The two generators based on Liouville’s theorem are:•RISCH: (Barket et al. [2023]) Reverse engineer the steps of the Risch algorithm, whosecorrectness is underpinned by Liouville’s Theorem, which outputs the integral of a giveninput. We generate an expression f=Rbsuch that bis a fixed expression and Rarerepresented as symbolic coefficients. We then follow the steps of the Risch algorithm todetermine what values these symbolic coefficients can be in order to satisfy integrability.This guarantees fis integrable and the Risch algorithm outputs the integral of f.•LIOUVILLE: (Barket et al. [2024b]) Similar to the BWD method but follows the formof Liouville’s theorem. Generate a random rational expressionNDand also generate twosums of logarithms A=Pji=0cilog(ai)andA=Pki=0dilog(bi). The key differencebetween AandBare that aiare from the factors of Dandbiarenotfactors of D. Then wesetf=ND+A+Band we add ( f′, f) to our dataset.Finally, similar to IBP, we also have one more data generator based on the substitution rule fromcalculus and an existing dataset.•SUB : (Barket et al. [2024a]) Given fandg, calculate g′and let u=g(x). IfRfis alreadyknown, then the following holds (substitution rule):Rf(g(x))g′(x)dx=Rf(u)du. Weadd (f(u),Rf(u)) to our dataset.B.2 Data PreprocessingThe data preprocessing steps are primarily the same as described in Barket et al. [2024a] and we listthem here for ease of access.Each number is preceded by an INT+ orINT- token to denote whether the integer being encoded ispositive or negative. Then, we replace the actual numbers in the following manner:1. if the number is 0,1or2, then create a corresponding token;2.if the number is a single-digit number that is not 0,1or2, replace the number with a CONSTtoken;3. if the number has two digits, replace by a CONST2 token; and4. for all other cases, replace by a CONST3 token.As this is a classification task, the value of the constant does not (usually) matter and this helpssimplify the data without losing much information. We keep the integers in the range [−2,2]as theycan end up being important (e.g. (x+ 1)1and(x+ 1)−1integrate very differently). If any integrandsmatch after this transformation, we only keep one unique copy to ensure a variety of expressionsin the data. Lastly, we follow Devlin et al. [2019] and include a [CLS] token at the start of everysequence as a representation of the entire sequence. The output of this token after going through theencoding layers is used for the classification task.9C Model Architecture and CodeAll data used in the paper and all code to generate the figures and results are available onZenodo: zenodo.org/records/14013787. We also provide the Github link for the wider project:github.com/rbarket/TransformersVsGuards.The Zenodo link is all the current code used for this paper which will remain fixed. At the the GitHublink we will continue to grow the dataset size, make small updates to the algorithm, and continue toadd more elementary and special functions to the generator.One of the goals in Section 3 was to make a smaller transformer than seen in previous literaturefor similar tasks (Sharma et al. [2023], Lample and Charton [2020]). This is because the task isconceptually easier (binary classification), and this would also lead to faster inference times. To thisend, the model was an encoder-only transformer with 4 heads and 4 layers with dimension 128. Weinclude a dropout layer with p= 0.3, and a linear layer with sigmoid activation at the final step. Weused the Adam optimizer, BCE loss, and the LR scheduler ReduceLRonPlateau(factor=0.05,patience=3) . The batch size was 256. Data ranged from 1 to 1012 tokens in prefix notation. Themodel is trained on two Quadro RTX 8000 GPU’s. Each model is trained for 100 epochs with earlystopping implemented. We also checked the inference time of the model and a batch of 256samplestook on average 0.0495 s over 10 runs using Pytorch’s Benchmark Timer functionality.The Layer Integrated Gradients in Section 4 had two parameters. First was the number of splits whichwas kept 50 and second was the baseline vector which was left to the default value.10 |
aYlKvzY6ob | Mining Math Conjectures from LLMs: A PruningApproachJake Chuharski1∗Elias Rojas Collins1∗Mark Meringolo21Massachusetts Institute of Technology2Unaffiliated{chuharsk,erojasc}@[email protected] present a novel approach to generating mathematical conjectures using LargeLanguage Models (LLMs). Focusing on the solubilizer, a relatively recent con-struct in group theory, we demonstrate how LLMs such as ChatGPT, Gemini, andClaude1can be leveraged to generate conjectures. These conjectures are pruned byallowing the LLMs to generate counterexamples. Our results indicate that LLMsare capable of producing original conjectures that, while not groundbreaking, areeither plausible or falsifiable via counterexamples, though they exhibit limitationsin code execution.1 IntroductionArtificial intelligence, specifically deep learning, has created much discussion around the possibilityto augment human creativity with computational capability. Among the leading technologies pushingthis discussion are large language models (LLM’s) such as OpenAI’s ChatGPT, Anthropic’s Claude,and Google’s Gemini ( 1;2;3). While LLMs have been widely recognized for their competence intext generation, their interactions within abstract academic fields such as mathematics, specificallywith conjecture creation, remain under-explored. Initial work has evaluated LLMs’ ability to passexams like the SAT and MBA qualifying exams ( 4;1). More recently, efforts have focused onbench-marking their capacity to generate mathematical proofs ( 5). However, there has been littlework on bench-marking the ability of language models to act as a creative agent towards coming upwith new conjectures.In this study, we use the Claude Sonnet, Gemini 1.5, and GPT-4 APIs to both generate conjecturesand write GAP computer algebra code to check them for plausibility. GAP (Groups, Algorithms, andProgramming) is a computer algebra system designed for computational group theory and relatedareas in abstract algebra. However, GAP is not a proof assistant so does not give the user proofsfor theorems, but it can be used to check conjectures for immediate counterexamples. We workspecifically on the solubilizer subset which is a relatively new/unexplored construction in grouptheory that contains much potential for novel conjectures (see Appendix A.1). GAP computer algebracan check the conjecture for chosen groups and allows for language models to "guess and check".The system provides a method to mine for conjectures using language models and a pruning step toremove conjectures that are false for obvious (or sometimes non-obvious) reasons. This approachoffers a systematic method for generating and validating conjectures, combining model output withautomated computational verification without requiring a strong formal theorem prover.*These authors contributed equally to this work1We additionally add information from a small sample on OpenAI o1 preview in Appendix A.538th Conference on Neural Information Processing Systems (NeurIPS 2024).2 Related WorkRecent studies have explored LLMs’ role in conjecture generation. Johansson and Smallbone observethat many of the symbolic structures generated by LLMs may already exist in training data, raisingconcerns about genuine originality (6). They note that GPT-4 appears to have been trained on prooflibraries like QuickSpec, Hipster, and Isabelle/HOL, providing a potential caveat for verifying theoriginality of any generated conjectures.We mitigate this challenge by deliberately focusing on a mathematical area with limited prior exposure:the solubilizer (see Appendix A.1). By iteratively updating prompts, we also attempt to steer themodels away from generating as many redundant conjectures which they also found to be a problembecause “GPT-4 usually produces the same kind of ‘generic’ lemmas every time” (6).Other studies, such as Davies et al. ( 7), use machine learning to assist mathematicians in proofcreation rather than conjecture generation. In Wu et al. ( 8), LLMs are shown to autoformalize naturallanguage math into formal theorem provers like Isabelle, translating competition problems into formalproofs with impressive accuracy. In Si et al. ( 9), LLMs are evaluated on the ability to be creativeagents in coming up with research ideas, however math was touched minimally. They additionallycorroborate the claim that “LLMs lack diversity in idea generation”( 9). These approaches focus onproof generation, formalization, or assistance, whereas our work emphasizes the initial creative stepof formulating new conjectures, and then provides an immediate ‘guess-and-check’ step to verifyplausibility.3 MethodologyThe method that we propose to “mine” for math theorems, shown in Table 10, is as follows:1.We begin with a prompt that includes literature on the solubilizer from ( 10;11;12;13;14;15;16). The model is prompted to generate theorems related to the literature provided andwrite GAP code to test conjectures on groups. Full prompting is provided in Appendix A.2.2. The LLM then generates GAP code and the GAP code is run.!If the code compiles and runs and the outcome is recorded.%If the code does not compile the LLM is prompted again to fix the code, provided withthe output of the failing program. It is given the chance to do two revisions (in practiceallowing for further revision almost never results in working code).3.If the result is that the conjecture is false, the theorem and it’s result is added to the prompt,and the process is repeated with the false conjecture added to the set of ideas that are knownto fail.This process was run with three models: ChatGPT 4 ( gpt-4o-2024-05-13 ), Claude Sonnet(claude-3-5-sonnet-20240620 ), and Gemini 1.5 ( gemini-1.5-flash ). LLM’s have a “Tem-perature” parameter which varies the level of randomness in the outputs to a given prompt. Thisis sometimes taken as a proxy for “Creativity”, although this description is dispuited ( 17). Thetemperature for the Claude model was set to 1 for conjecture generation and .1 for code generation.The GPT-4 conjecture was set to 1.08 for conjecture generation and was left at default for codegeneration. The Gemini 1.5 conjecture generation was set to 1.5 ( top_k: 5, top_p:.99 ) anddefault for code generation. The values were generated by trial and error where the authors observedqualitatively the most consistent conjecture variation without extreme hallucinations2.3.1 Area of FocusThe mathematical area of focus is called the solubilizer and is defined as follows:Definition 3.1. LetGbe a finite group. For any element x∈G, the solubilizer ofxinGis definedas:SolG(x) :={y∈G| ⟨x, y⟩is soluble }.More introductory and historical information on the solubilizer can be found in the Appendix A.1.2For example, if the temperature is set too high in GPT-4, the model will return output in multiple languages2Figure 1: Method4 Results4.1 Performance OverviewThe experiment provided three types of outcomes (Summarized in Table 3):•Successful generation of counterexample-finding code in 25.95% of cases (109 out of 420unique outputs).•Generation of conjectures without counterexamples in 9.52% of cases (40 out of 420 uniqueoutputs).• Generation of non-executable code in 64.52% of cases (271 out of 420 unique outputs).Table 1: Classification of OutputsCategory ChatGPT Claude Gemini TotalUnique Conjectures 94 89 237 420Total Output 249 258 250 757No Counter-Examples 25 4 11 40Couldn’t Execute Code 33 44 194 271Conjecture Failed 36 41 32 1094.2 ExamplesThe following is an example with no counter-examples from Claude:Conjecture 4.1. LetGbe a non-solvable group. For any element xinG, ifSolG(x)is a subgroup,then the Frattini subgroup of SolG(x)is contained in the Frattini subgroup of G.This result is simple enough that the model can be prompted to prove the conjecture with slightmodification. See appendix A.4.1. The following is a conjecture that failed from Gemini (see A.4.2):Conjecture 4.2. LetGbe a non-solvable group, and let x∈G. If|SolG(x)|is divisible by exactlytwo primes, then one of them is 2.Output 4.3. Conjecture failed for group: PSL(3 ,2)Where PSL(3 ,2)is the projective special linear group of 3x3 matrices over the finite field F2. Lastly,we have an example where code could not be executed from GPT-4:Conjecture 4.4. LetGbe a finite non-solvable group and x∈G. Then for every abelian subgroupAofG, we have SolG(x)∩A̸={1}.4.3 Similarity AnalysisTo quantitatively measure diversity, we calculated the cosine self-similarity, and similarity betweenconjecture sets/literature. The similarity results are summarized in Table 2. Where we see a maintainedlevel of similarity throughout the experiments and between models. See Appendix A.7 for heatmaps.Scaling the system further by increasing the number of trials or generating more variations in theprompt failed to yield significantly more diverse conjectures. However the quality did not decreaseeither. This finding suggests that a different approach, such as multi-modal model interaction orcombining LLMs with automated theorem provers, could help diversity.3Table 2: Output Cosine Similarity DistributionMetric Max Min Mean MedianChatGPT vs. ChatGPT 0.9284 0.0486 0.2911 0.2658Claude vs. Claude 0.9950 0.0921 0.4238 0.3683Gemini vs. Gemini 0.8742 0.0132 0.2451 0.2264ChatGPT vs. Claude 0.8695 0.0356 0.2699 0.2552ChatGPT vs. Gemini 0.8892 0.02638 0.2442 0.2292Claude vs. Gemini 0.7662 0.0149 0.2220 0.2102Claude vs. Literature 0.7101 0 0.1501 0.1332Gemini vs. Literature 0.8274 0 0.1812 0.1510GPT-4 vs. Literature 0.9388 0 0.2139 0.19785 Discussion5.1 ObservationsAmong the 757 outputs generated by the LLMs, 420 unique conjectures were identified. The highnumber of duplicates shows a considerable redundancy in the results, as approximately 55.48% ofthe conjectures were deemed to be unique. While not unexpected, the duplicates suggest that LLMslikely rely on similar patterns when prompted similarly across trials as described in ( 6). However,this did not significantly hinder overall performance other than increase the number of total iterationsneeded to yield a desirable number of conjectures.Not all of the conjectures generated by the models were entirely original and was verified by one ofthe authors of the seven original solubilizer papers for all 420 unique conjectures. For example,Theorem 5.1. Let G be an insoluble group and x an element of G. Then the cardinality of cannot beequal to p2for any prime p.shows up in (12) and GPT-4 conjectured:Conjecture 5.2. LetGbe an insoluble group and x∈G. Then the cardinality of SolG(x)cannot beequal to p2for any prime p.That being said, this result was contained in the system prompt and can be ignored. In all other casesthe models output conjectures that were distinct from anything found in literature or their systemprompt.In 109 cases (25.95%), the generated code successfully identified counterexamples, which is criticalfor falsifying conjectures. Secondly, of the 420 unique outputs, only 40 (9.52%) produced conjectureswith no counterexamples. ChatGPT significantly outperformed both Claude and Gemini in this area,generating 26.60% valid conjectures compared to Gemini’s 4.64% and Claude’s 4.49%. This showsthat ChatGPT was more effective at producing conjectures that are plausible at first glance. However,a large portion of the GPT-4 conjectures were looking at the size of the solubilizer rather than aboutinteractions with other groups, group structure, or subgroup properties. Therefore, one could arguethat they were easier to write code for, or at least more likely to succeed based on similarity. Furtherstill, results classified as having "no counterexamples" by GPT-4 seemed to be qualitatively moreobvious than those by Claude or Gemini (see Conjecture A.5 vs. Conjecture A.4 vs. Conjecture A.9).Lastly, the fact that the models are able to generate novel, original conjectures at all provides promisefor these models to be used as useful tools when developing the theory of a new construction.5.2 LimitationsA limitation observed in both models was the generation of non-executable code, which occurred in271 instances (64.52% of unique outputs). Gemini and Claude struggled more with code execution,having 81.86% and 49.44% instances of non-executable code respectively, compared to ChatGPT’s37.76%. This potentially points to differences in how the models handle code syntax in GAP, howeverthe models were prompted to have a near identical code format (see Appendix A.2). This corroboratesthe idea that some conjectures set by Claude/Gemini are more difficult to write code for, and thereforemore likely to fail in this system. Interestingly, there are some examples that Claude/Gemini gave4that would be much harder to check in GAP with a constrained time limit where Claude/Geminicould not write executable code (see Appendix A.4.6). Lastly, we note that the models had differentapproaches to generating code to test conjectures, with Claude and Gemini being more similar.We found that ChatGPT liked to preemptively restrict the groups it would consider. For example,ChatGPT conjectured that the solubilizer couldn’t be bigger than or that it couldn’t be exactly equalto any of the following numbers (for all non-solvable groups) in seperate conjectures: [2, 3, 6, 8, 9,10, 11, 12, 14, 15, 16, 18, 20, 24, 25, 27, 32, 49, 50, 126].While this study focuses on group theory and solubilizers, a relatively unexplored area, the approachcould be generalized to other domains. We acknowledge limitations of using GAP, an algebrasoftware. However, future work could easily extend this methodology to fields like number theory,geometry, representation theory, or combinatorics by integrating tools like SageMath, MAGMA, orother computational solvers.5.3 Future WorkWe propose the investigation of conjecture generation in fields where existing conjectures are sparseor absent. For example, LLMs could be applied to generate conjectures in newer or less exploredareas such as tropical geometry or higher homotopy theory, where automated tools exist but haveyet to be fully integrated with LLMs. Furthermore, the study above was limited to using a singleLLM. If one model is better at writing code and the other is better at conjectures, using a combinationstructure could yield better results. We remark that a quantitative metric for ‘interestingness’ of amath conjecture or problem seems to be elusive, nontrivial, yet useful (see Appendix A.6).6 ConclusionThe study opens up several promising avenues for the use of LLMs in research. Our work, whilesmall, shows the potentially impactful way that LLM’s augmented with other computational capacitycan solve more complex problem. For example, further work integrating conjecture generation withproof validation systems could streamline the process of discovery.That being said, LLM-based conjecture generation is still very limited to existing knowledge. Ratherthan producing fundamentally new ideas, LLMs are likely to lean on known results, limiting theirability to drive groundbreaking discoveries ( 18). Indeed, when thinking of language models asstatistical traversers of some sort of higher dimensional surface built from training data, it is easy toimagine that the models are not able to stray too far from what they are fed to generate the surface.Specifically, conjectures and theorems involving well-understood subgroups on which the solubilizeris inspired (think centralizer and normalizer) can serve as an incredibly large well from which anLLM can sample a new direction about the solubilizer. This may all be permissible to a practitioner ifone is only interested in clearing out the brush around a new construct such as the solubilizer; but asof writing, it should not be expected that these models will conjecture something profound.We demonstrate that combining LLMs with computational resources like GAP can successfullygenerate and test original, albeit simple math conjectures. Indeed, performance suggests that LLMslike ChatGPT, Claude, and Gemini have potential, but only on conjectures that are similar to existingideas or are otherwise simple. Furthermore, the models face significant challenges in generatingexecutable code and avoiding duplicate conjectures. Indeed, ChatGPT-4 demonstrated strongerperformance in generating conjectures that could not be immediately falsified, Claude was slightlymore effective at identifying counterexamples, and Gemini had the least redundancy likely due to thelonger context window. The high percentage of non-executable code reinforces the need for robusterror-checking and handling within the models. GAP is limited in the variety of error codes that areproduced when code fails, so other more verbose computational algebra solvers could help with errorcorrection. Lastly, further analysis of failed code generation to find patterns of failure could lead tobetter prompting for avoiding common bugs. Further work would likely include adding a formalautomated theorem prover or another form of neuro-symbolic proof engine, giving an end-to-endsystem that can generate new conjectures and prove them in a single pass( 19;20). The authors arealso interested to see other new approaches for accurate conjecture generation in various abstractfields, or more generally, improvements to conjecture generation by non-LLM based models.5References[1] OpenAI, “GPT-4 Technical Report,” 2023.[2] Anthropic, “Claude AI.” https://www.anthropic.com , 2023.[3]G. Team, R. Anil, S. Borgeaud, and et. al, “Gemini: A family of highly capable multimodalmodels,” 2024.[4]C. Terwiesch, “Would ChatGPT3 Get a Wharton MBA? A Prediction Based on Its Performancein the Operations Management Course.” Mack Institute for Innovation Management at theWharton School, University of Pennylvania, 2023.[5]B. Romera-Paredes, M. Barekatain, A. Novikov, et al. , “Mathematical discoveries from programsearch with large language models,” Nature , vol. 625, pp. 468–475, 2024.[6]M. Johansson and N. Smallbone, “Exploring mathematical conjecturing with large languagemodels,” in NeSy , pp. 62–77, 2023.[7]A. Davies, P. Veli ˇckovi ́c, L. Buesing, et al. , “Advancing mathematics by guiding human intuitionwith ai,” Nature , vol. 600, pp. 70–74, 2021.[8]Y . Wu, A. Q. Jiang, W. Li, M. N. Rabe, C. Staats, M. Jamnik, and C. Szegedy, “Autoformalizationwith large language models,” ArXiv , vol. abs/2205.12615, 2022.[9]C. Si, D. Yang, and T. Hashimoto, “Can llms generate novel research ideas? a large-scale humanstudy with 100+ nlp researchers,” 2024.[10] B. Akbari, “More on the non-solvable graphs and solvabilizers,” 2018.[11] B. Akbari, M. L. Lewis, J. Mirzajani, and A. R. Moghaddamfar, “The solubility graph associatedwith a finite group,” 2020.[12] B. Akbari, C. Delizia, and C. Monetta, “On the solubilizer of an element in a finite group,”2022.[13] B. Akbari, J. Chuharski, V . Sharan, and Z. Slonim, “Characterization of solubilizers of elementsin minimal simple groups,” 2023.[14] D. Hai-Reuven, “Non-solvable graph of a finite group and solvabilizers,” 2013.[15] A. Lucchini, “Solubilizers in profinite groups,” 2023.[16] H. Mousavi, M. Poozesh, and Y . Zamani, “The impact of the solubilizer of an element on thestructure of a finite group,” 2023.[17] M. Peeperkorn, T. Kouwenhoven, D. Brown, and A. Jordanous, “Is Temperature the CreativityParameter of Large Language Models?,” 2024.[18] I. Mirzadeh, K. Alizadeh, H. Shahrokhi, O. Tuzel, S. Bengio, and M. Farajtabar, “Gsm-symbolic:Understanding the limitations of mathematical reasoning in large language models,” 2024.[19] T. H. Trinh, Y . Wu, Q. V . Le, et al. , “Solving Olympiad Geometry Without Human Demonstra-tions,” Nature , vol. 625, pp. 476–482, 2024.[20] DeepMind, “AI solves IMO problems at silver medal level,” 2023. Accessed: 2024-09-10.[21] J. H. Thompson, “Nonsolvable finite groups all of whose local subgroups are solvable,” Bull.Am. Math. Soc. , vol. 48, no. 2, pp. 511–592, 1973.[22] R. Guralnick, B. Kunyavski ̆ı, E. Plotkin, and A. Shalev, “Thompson-like characterization of thesolvable radical,” J. Algebra , vol. 300, no. 1, pp. 363–375, 2006.[23] OpenAI, “Introducing openai o1-preview.” https://openai.com/index/introducing-openai-o1-preview , 2024. Accessed: 2024-09-20.6A Appendix / Supplemental MaterialA.1 SolubilizerA.1.1 BackgroundThesolubilizer is a construct that can be attributed to Doron Hai-Reuven in ( 14) from 2013. Much ofthe work in the area can be atributed to Banafsheh Akbari et al. in ( 10;11;12;13). The constructitself should be intelligible for anyone who has taken an introductory university algebra course,although can be potentially unpredictable to work with. It is therefore also simple for the models tounderstand, but non-trivial to produce conjectures. All of the models claim to have not heard of theconstruction before, meaning that the training data either excluded this information or did not haveenough for it to be in the knowledge base. In either case, the models seem to be able to understandthe definition once it is given, and provide often accurate assumptions about how the set behaves. Forexample, when testing these ideas on GPT-4, the model responded with the following:Input: Give me the definition of the solubilizer in group theoryOutput: “In group theory, there isn’t a concept specifically called ‘solubilizer. ’ However, I suspectyou might be referring to the ‘solubility’ of a group or the concept of a ‘solvable group. ’ A group Gis said to be solvable if there exists a sequence of subgroups{e}=G0◁ G 1◁ G 2◁···◁ Gn=Gsuch that each Giis a normal subgroup of Gi+1and the quotient group Gi+1/Giis abelian. Solvablegroups generalize the notion of abelian groups. Solvability is an important property in group theoryand plays a key role in Galois theory, which connects group theory with field theory. If you had adifferent concept in mind or need more information about solvable groups or another topic, pleaseclarify, and I’ll be happy to assist further!"We also further note that the model presents an almost identical response to “solvabilizer” instead of“solubilizer” which is another name that the set can be found under in the literature. We then presentthe definition to the model so that it can begin to formulate an understanding of the area.Input: I will define the solubilizer as follows: Let G be a finite group. For an element x in G, wedefine the solubilizer of x in G as the set: SolG (x) :={y∈G|⟨x, y⟩is soluble }Output: "Ah, I see. Given your definition, the solubilizer SolG(x)of an element xinGis the set ofall elements yinGsuch that the subgroup generated by xandy, denoted ⟨x, y⟩, is a solvable group.To further elaborate on this:1.⟨x, y⟩denotes the subgroup of Ggenerated by the elements xandy. This is the smallestsubgroup of Gthat contains both xandy.2.The condition that ⟨x, y⟩is solvable means that there exists a sequence of subgroups, startingwith the trivial group and ending with ⟨x, y⟩, such that each subgroup in the sequence is anormal subgroup of the next, and the quotient of successive subgroups is abelian.3.The solubilizer SolG(x)is then the set of all elements yinGfor which this condition holds.This is an interesting definition, and it essentially captures the elements in Gthat, when paired withx, produce a solvable subgroup. The concept seems to be related to understanding the structure andsolvability properties of subgroups within G. Do you have specific questions about this definition orits properties?"This output shows three correct assertions about the subset, so we test to see if the model can expandfurther on it’s understanding without more input. The model is then asked to provide some conjecturesbased on the newly learned definition. The idea is to gauge the model’s understanding of a singledefinition and to see what material the model relates to the new definition to conjecture somethingthat it considers to be new material.Input: Based on this definition, can you conjecture any theorems that might apply to this set?Output: “Certainly! Given the definition of the solubilizer SolG(x)in a finite group G, we canpropose some conjectures or theorems. Here are a few ideas:71.Non-Empty Solubilizer : For any element xinG,xbelongs to SolG(x)because thesubgroup ⟨x⟩(generated solely by x) is cyclic and hence solvable.Theorem : For any x∈G,x∈SolG(x).2.Subgroup Containment : Ifybelongs to SolG(x), then y−1(the inverse of y) also belongstoSolG(x). This is because if ⟨x, y⟩is solvable, then ⟨x, y−1⟩is also solvable.Theorem : For any y∈SolG(x),y−1∈SolG(x).3.Closure under Solvability : Ify1, y2∈SolG(x)and their product y1y2belongs to ⟨x, y 1⟩or⟨x, y 2⟩, then y1y2should also belong to SolG(x).Theorem (Conditional) : For y1, y2∈SolG(x), ify1y2∈ ⟨x, y 1⟩ory1y2∈ ⟨x, y 2⟩, theny1y2∈SolG(x).4.Entire Group Solvability : IfGitself is solvable, then for any x∈G,SolG(x) =G.Theorem : IfGis solvable, then for all x∈G,SolG(x) =G.5.Size of the Solubilizer : The size of SolG(x)provides information about the complexity ofthe group’s structure in relation to solvability.Conjecture : For certain classes of groups (e.g., simple groups), the size of SolG(x)relativeto the order of Gmay have particular bounds or properties.All five of these ideas are true. The first four are easily proved to be correct mathematical statementsand the fifth is just a general statement that presents the (correct) expectation that the solubilizer maybe informative to studying the structure of the group.A.1.2 MathLetGbe a finite group, and for any element x∈G, the solubilizer of xinGis defined as:SolG(x) :={y∈G| ⟨x, y⟩is soluble }.In general, SolG(x)is not necessarily a subgroup of G. However, there are specific conditions underwhich this set does form a subgroup. It has been proven in ( 14) that SolG(x)is a subgroup of Gforany element x∈Gif and only if Gis a soluble group.A well-known result, attributed to Thompson ( 21), states that a finite group Gis soluble if and onlyif, for every x, y∈G, the subgroup ⟨x, y⟩is soluble. Thus, a finite group Gis soluble if and only iffor any element x∈GSolG(x) =G.An important related concept is the soluble radical R(G), the largest soluble normal subgroup ofG. Guralnick et al. ( 22) demonstrated that for an element x∈G,x∈R(G)if and only if ⟨x, y⟩issoluble for all y∈G. Consequently, x∈R(G)if and only if SolG(x) =G.A common question is how the structure of a single solubilizer influences the structure of the entiregroup. For instance, it was shown in ( 14) that if Gcontains an element xsuch that all elementsofSolG(x)commute pairwise, then Gmust be abelian. Another example, ( 11) generalizes thisby proving that if there exists x∈Gsuch that for every u1, u2, u3∈SolG(x), the commutator[u1, u2, u3] = 1 , then γ3(G) = 1 , implying that the group is nilpotent. Here, γ3(G)is the third termin the lower central series of G. These are ideas that are more often explored by Claude and Gemini,although are harder to verify.The arithmetic properties of solubilizers also play a crucial role in determining group structure. Forexample, if Gcontains an element xwhose solubilizer has order porp2(where pis a prime), then Gis ap-group, as discussed in (12). These ideas seem to be of heavy focus for ChatGPT.Thompson’s theorem ( 21) demonstrated that a finite group Gis soluble if and only if every two-generated subgroup of Gis soluble. This motivates the definition of the solubilizer and highlightsits significance. Moreover, the solubilizer operates similarly to the centralizer in a group, with thesoluble radical R(G)functioning in a way comparable to the center of a group. Analogous to thecentralizer CG(x) ={y∈G| ⟨x, y⟩is abelian }, the solubilizer describes solvability rather thancommutativity. However, unlike the centralizer, the solubilizer is not always a subgroup which wasinitially tough for the models to rememeber, leading to this fact requiring repetition in the prompting.8A.2 Model Usage and promptingBoth the prompt for generating conjectures and the prompt for generating code were prefaced with:1 """ You are a mathematician and efficient computer scientist . Youare interested in abstract algebra , but are generally veryknowledgeable and interested in the intersections betweendifferent areas of math .23 You have began working on the ’solubilizer ’ subset of a non -solvable group . You are very good at writing GAP code . The userwill ask for either a conjecture , or GAP code to check aconjecture .45 When a user asks for GAP computer algebra code you will providenothing in your response except code to complete their task . Whenwriting code make sure that the answer to your question is PRINTEDto the terminal .67 A maximum of two things should be printed by your code . If theconjecture fails , the code should break and just print for whichgroup the conjecture failed . If the code does not generate anycounter - examples , the code should return "No Counter - examples !".89 Additionally , make sure you only test conjectures on Non - Solvablegroups ! For example , when checking a conjecture , you might want tocheck it on all non - solvable groups of order less than onemillion by using : SimpleGroupsIterator (1, 10^6) . This means ONLYgive GAP computer algebra code unless the user asks for aconjecture . When the user asks for a conjecture you should returnnothing but the conjecture ."""The code generation system prompt for both models included code snippets for how to accurately andconsistently generate the solubilizer and had the format for how the conjectures should be checkedand output.1"The general function for the solubilizer in GAP is here , you willhave to write the rest of the checks yourself ":2 solubilizer := function (G, max , x)3 rad := RadicalGroup (G);4 if x in rad then5 return G;6 else7 M := List ( max);8 M := Set( Filtered (M, m -> x in m));9 maxes := [];10 solx := [];11 while Size (M) > 0 do12 m := M [1];13 if IsSolvable (m) then14 solx := Union (solx , List (m));15 maxes := Union (maxes , [m]);16 Remove (M, 1);17 else18 MM := MaximalSubgroups (m);19 MM := Set( Filtered (MM , mm -> x in mm));20 Append (M, MM);21 Remove (M, 1);22 M := Set(M);23 fi;24 od;25 return solx ;26 fi;27 end ;28" When you test a conjecture . Make sure to have the end of your code bein this format ":929CheckConjecture := function ()30 local G, gen , x, solGx ;3132 for G in SimpleGroupsIterator (1, 1000000) do33 [PUT THE CODE TO CHECK THE CONJECTURE HERE !]34 Print (" Conjecture failed for group : ", StructureDescription (G), "\n");35 return ;36 od;37 Print ("No Counter - examples !\n");38end ;39CheckConjecture ();Listing 1: Solubilizer PromptingThe system prompt for generating conjectures also included information from the literature that wasleft in LaTeX form, including the subset’s definition as above. For brevity they will not all be listedhere but were in the form of:1Let \(G\) be a group . Then \(o(x)\) divides \(|\ operatorname {Sol}_G(x)|\) for all \(x \in G\).23Let \(G\) be a group . Then \(| C_G (x) |\) divides \(|\ operatorname {Sol }_G(x)|\) for all \(x \in G\).At the end of the included known true conjectures, the system was also able to continuously update aset of conjectures falsified by the model.1 The following are conjectures that you know to be false :2Let \( G \) be a finite non - solvable group and \( x \in G \). Then \(|\ operatorname {Sol }_G(x)| \) is always an even number .3Let G be a non - solvable group . For any element x in G, if Sol_G (x) isa subgroup of G, then Sol_G (x) is nilpotent .4Let G be a non - solvable group . For any element x in G, if Sol_G (x) isa subgroup of G, then the derived subgroup [ Sol_G (x), Sol_G (x)] iscontained in the Fitting subgroup of G.5Let G be a non - solvable group . For any element x in G, the index ofSol_G (x) in its normalizer N_G( Sol_G (x)) is always a prime power .6Let G be a non - solvable group . For any element x in G, if Sol_G (x) isa subgroup , then the derived length of Sol_G (x) is strictly lessthan the derived length of G.A.2.1 ComputeThe computer used for the experiments has the following specifications:• Model: Macbook Air• Chip: Apple M1• Memory: 8Gb• OS: MacOS BigSur 11.6 (20G165)The experiments took between 48-72 hours to run for each model. This was mainly due to checkingall non-solvable (or in some cases just simple) groups of order up to 1,000,000.A.3 Additional ExamplesIn the following we include one example of each type from each model.A.3.1 ClaudeExample with no counterexamples from Claude:10Conjecture A.1. LetGbe a non-solvable group. For any two elements x, y∈G, ifSolG(x)∩SolG(y)is non-empty, then SolG(x)∩SolG(y)contains a non-trivial normal subgroup of G.The following conjecture failed:Conjecture A.2. LetGbe a non-solvable group. For any element xinG, ifSolG(x)is a subgroupofG, then the derived subgroup [SolG(x),SolG(x)]is contained in the Fitting subgroup of G.Output A.3. Conjecture failed for group: A5In a similar example, the model could not write code that executed properly:Conjecture A.4. LetGbe a non-solvable group. For any element x in G, ifSolG(x)is a propersubgroup of G, then the intersection of SolG(x)with its normalizer in Gis always properly containedin the normalizer of the Fitting subgroup of G.A.3.2 GPT-4The following had no counterexamples from ChatGPT 4:Conjecture A.5. For any finite non-solvable group Gand any element x∈G, the set SolG(x)is nota cyclic group.Similarly GPT-4 suggested that the following be true although it was immediately obvious to be false:Conjecture A.6. For any element x∈Gof a non-solvable finite group G, the set SolG(x)containsall elements of a certain conjugacy class in GOutput A.7. Conjecture failed for group: A5Where A5is the alternating group on five elements (see A.4.4). Lastly, an example where GPT-4could not execute code for the conjecture:Conjecture A.8. For any finite non-solvable group G, there exists an element x∈Gsuch thatSolG(x)is a nilpotent subgroup of G.A.3.3 GeminiThe following is a conjecture with no counterexample:Conjecture A.9. LetGbe a finite non-solvable group and suppose x∈Gis not an element ofthe soluble radical R(G)ofG. Assume that ⟨x, xy⟩is not solvable for any element y∈G.Then⟨x, Sol G(x)⟩=SolG(x)for all x∈G.The following is a conjecture that is false:Conjecture A.10. LetGbe a finite non-solvable group. For any element xofG, the probability thata randomly chosen element y∈Gis contained in SolG(x)is less than or equal to the probabilitythatyis contained in the radical of G.Output A.11. Conjecture failed for group: A5Where, again, A5is the alternating group on 5elements (see A.4.5). The following is a conjecturewhere code could not be executed:Conjecture A.12. LetGbe a finite non-solvable group. Let x, ybe two non-commuting elements ofGsuch that the subgroups generated by xandy,⟨x, y⟩,is solvable. Then the probability of findinga third element, w∈G, that commutes with both xandy, subject to the additional condition thatat least one of the two groups ⟨x, w⟩or⟨y, w⟩is solvable must be equal to at most the product ofprobabilities of a non-commutation of xandyand the existence of wthat commutes with yandx.A.4 ConjecturesA.4.1 Proof of Conjecture 4.1Proof: By assumption, SolG(x)is a subgroup of G. Let Φ(G)andΦ(Sol G(x))denote the Frattinisubgroups of GandSolG(x), respectively. We aim to prove thatΦ(Sol G(x))⊆Φ(G).11The Frattini subgroup Φ(H)of a group His the intersection of all maximal subgroups of H. Inparticular, for any maximal subgroup MofSolG(x), there exists a maximal subgroup NofGsuchthatM⊆N. SinceΦ(Sol G(x)) =\{M|Mis a maximal subgroup of SolG(x)},we haveΦ(Sol G(x))⊆\{N|Nis a maximal subgroup of G}= Φ(G).Thus, we conclude that Φ(Sol G(x))⊆Φ(G).A.4.2 Failure of Conjecture 4.2• Conjecture failed for group: PSL(3 ,2)• Element: (2,8,4,3,6,7,5)• Prime divisors of |SolG(x)|: [3,7]A.4.3 Failure of Conjecture A.2• Conjecture failed for group: A5• Element: (1,5,2,4,3)• Derived subgroup: Group( [ (1,5,2,4,3) ] )A.4.4 Failure of Conjecture A.6• Conjecture failed for group: A5• Conjugacy class: (3 4 5)• Co-generator: (1 2 3)• Generated group: ⟨(1 2 3) ,(3 4 5) ⟩=A5is not solvableA.4.5 Failure of Conjecture A.10• Conjecture failed for group: A5• Element: ()(the identity element)• Probability( SolG(x)) : 1• Probability(Radical( G)):160A.4.6 Additional ConjecturesThe following conjectures are just a couple of the conjectures that had code that was unable to be runbut are still potentially interesting from Claude:Conjecture A.13. LetGbe a non-solvable group. For any element xinG, ifSolG(x)is a propersubgroup of G, then the intersection of SolG(x)with all of its conjugates in Gis always contained inthe hypercenter of G.Conjecture A.14. LetGbe a non-solvable group. For any element xinG, ifSolG(x)is a propersubgroup of G, then the normalizer of SolG(x)inGcontains at least one element from each non-abelian composition factor of G.Conjecture A.15. LetGbe a non-solvable group. For any element xinG, ifSolG(x)is a propersubgroup of G, then the commutator subgroup [SolG(x), G]contains at least one non-identity elementfrom each non-abelian composition factor of G.Conjecture A.16. LetGbe a non-solvable group. For any element xinG, ifSolG(x)is a propersubgroup of G, then the intersection of SolG(x)with its normalizer in Gis always metabelian.Conjecture A.17. Let G be a non-solvable group. For any element xinG, the subgraph of the powergraph of Ginduced by SolG(x)is always connected.12A.5 OpenAI o1Here we briefly summarize data taken from OpenAI o1( 23). Due to initial limiting factors and theshort timeline, the authors have not finished gathering sufficient data to be included into the mainbody. We recieved 51 conjectures with the following data:Table 3: OpenAI o1 DataCategory OpenAI o1Unique Conjectures 51Total Output 51No Counter-Examples 22Couldn’t Execute Code 1Conjecture Failed 28Perhaps unsurprisingly, the new model was able to execute code properly roughly 98% of thetime. Furthermore, the model only once needed to take a debug step. The authors believe that theconjectures are not any significantly more interesting than the other ones given by Gemini and Claude,although this model does give more interesting conjectures than GPT-4 in terms of the willingnessto give conjectures about relation of the solubilizer to other subgroups, conjugacy classes, etc. Wenote that this model was more willing to stray from the system prompt in the GAP code wherein itgenerated code that checked non-solvable groups that were not simple. Lastly, observe that all ofthe conjectures are unique, which leaves this model so far being (albeit with a lower than desireablenumber of samples to make a strong claim) the best one to use for this task in all aspects. The authorsintend to explore this further in the coming weeks.A.5.1 Examples:The following are four randomly chosen examples and their output.Conjecture A.18. In any finite non-solvable group G, the intersection of SolG(x)over all x∈G\R(G)is equal to R(G), the solvable radical of G.Output A.19. No Counter-examples!Conjecture A.20. In any finite non-solvable group G, for any element x∈G\R(G), there does notexist an element y∈SolG(x)such that ⟨x, y⟩=G.Output A.21. No Counter-examples!Conjecture A.22. In any finite non-solvable group G, for any elements x, y∈G\R(G), ify∈SolG(x), then x /∈SolG(y).Output A.23. Conjecture failed for group: A5Conjecture A.24. In any finite non-solvable group G, for any elements x, y∈G\R(G), if⟨x, y⟩issolvable, then xandyare both contained in a common solvable maximal subgroup of G.Output A.25. Conjecture failed for group: PSL (3,2)The system failed to write code for a single conjecture:Conjecture A.26. In any finite group G, the solubilizer SolG(x)is invariant under all automorphismsofGthat fix x.A.6 Conjecture ‘Interestingness’In mathematics, evaluating the “interestingness” of a conjecture or problem is inherently subjectiveand resists quantification. However, if one seeks a quantitative approach, there are several aspectsto be considered: the conjecture’s depth, its generality or specificity, its implications for otherfields, simplicity of solution, and whether it leads to significant advancements or novel methods.Indeed, some of these are impossible to predict, and are not always necessary for a conjecture tobe labeled interesting. Some conjectures, like the Riemann Hypothesis, have clear applications to awide range of fields, yet others garner interest without obvious practical use. Consider the Collatz13Conjecture: despite its straightforward formulation, the conjecture resists resolution and has fewknown applications, yet it draws wide attention due to its seemingly simple, though elusive nature.Furthermore, a conjecture’s “interestingness” often depends on historical context, cultural influencewithin mathematical communities, and its perceived difficulty or elegance.Another difficulty in quantifying interestingness is the risk of conflating technical complexity withprofundity. A conjecture could be formally intricate yet lack broader appeal or connection toother domains. Additionally, highly specialized conjectures may be overlooked by non-specialistsdespite their beauty or importance for those knowledgebale in the field. Further, the evolving natureof mathematical interest itself becomes an issue; conjectures once regarded as obscure can gainrecognition as foundational connections become clearer.Nevertheless, certain conjectures seem almost universally intriguing. The Poincaré Conjecture andFermat’s Last Theorem captivated broad attention due to their simplicity, profound implications, andhistorical legacy. If just pieces of these ideas could somehow be built into a standardized metric,many studies will surely benefit.A.7 Similarity AnalysisWe include figures that show the similarity heatmaps between conjectures below. It is visuallyapparent from these maps that Claude in general had the most syntactic similarity. Indeed, withconjectures exampled as the following A.27A.28, the entire structure of the first conjecture is heldwithin the second but they are not the same idea.Conjecture A.27. "LetGbe a finite non-solvable group. Then for any element xinG, ifSolG(x)isa proper subgroup of G, there exists a prime pdividing |G|such that SolG(x)intersects at least twodistinct Sylow p-subgroups of Gnon-trivially."Conjecture A.28. LetGbe a finite non-solvable group. Then for any element xinG, ifSolG(x)is aproper subgroup of G, there exists a prime pdividing |G|such that SolG(x)intersects at least twodistinct Sylow p-subgroups of Gnon-trivially, but does not contain any full Sylow p-subgroup of G.GPT-4 clearly has the most syntactic differences in the conjectures. While many of the conjecturesreference the same idea, the way that they are stated is highly variable. Claude and Gemini both havea more methodical approach which show up as lighter colored squares. One can see that Geminialso had a period of runtime where the conjectures were mostly structured similarly with differentmodifiers at the end of the conjecture. The authors are unsure why these structural ‘loops’ seem tooccur periodically throughout the repeated process, but they are interesting to note regardless. Inthese patches there didn’t seem to be a significant difference in the quality of conjecture.With regards to the similary with the literature, non-surprisingly GPT-4 had the highest similarity dueto the reproduction of a conjecture from literature as noted above. Otherwise, the distinction in themodels between themselves was roughly comparable with that in literature. We note that Claude hadthe lowest maximum similarity, that all of the models had a minimum similarity of zero, and that onaverage, the models were slightly more disjoint from literature than they were from each other.14Figure 2: GPT-4 Cosine Self Similarity15Figure 3: Claude Cosine Self Similarity16Figure 4: Gemini Cosine Self Similarity17Figure 5: GPT vs. Claude Cosine Similarity18Figure 6: GPT vs. Gemini Cosine Similarity19Figure 7: Claude vs. Gemini Cosine Similarity20Figure 8: Gemini vs. Literature Cosine Similarity21Figure 9: Claude vs. Literature Cosine Similarity22Figure 10: GPT-4 vs. Literature Cosine Similarity23 |
YXnwlZe0yf | Putnam-AXIOM: A Functional and Static Benchmarkfor Measuring Higher Level Mathematical ReasoningAryan GulatiDepartment of Computer ScienceStanford [email protected] MirandaDepartment of Computer ScienceStanford [email protected] ChenDepartment of MathematicsStanford [email protected] XiaDepartment of MathematicsStanford [email protected] FronsdalDepartment of Computer ScienceStanford [email protected] de Moraes DumontDepartment of MathematicsStanford [email protected] KoyejoDepartment of Computer ScienceStanford [email protected] large language models (LLMs) continue to advance, many existing benchmarks 1designed to evaluate their reasoning capabilities are becoming saturated. Therefore, 2we present the Putnam-AXIOM Original benchmark consisting of 236 mathemati- 3cal problems from the William Lowell Putnam Mathematical Competition, along 4with detailed step-by-step solutions. To preserve the Putnam-AXIOM benchmark’s 5validity and mitigate potential data contamination, we created the Putnam-AXIOM 6Variation benchmark with functional variations of 52 problems. By programmat- 7ically altering problem elements like variables and constants, we can generate 8unlimited novel, equally challenging problems not found online. We see that al- 9most all models have significantly lower accuracy in the variations than the original 10problems. Our results reveal that OpenAI’s o1-preview, the best performing model, 11achieves merely 41.95% accuracy on the Putnam-AXIOM Original but experiences 12around a 30% reduction in accuracy on the variations’ dataset when compared to 13corresponding original problems. The data and the evaluation code are available at 14https://anonymous.4open.science/r/putnam-axiom-B57C/ . 151 Introduction 16The ability for Large Language Models (LLMs) to reason about complex problems has a plethora 17of applications in many fields such as economics [Zhang et al., 2024], drug discovery [Bran et al., 182023], and even simulations of human behavior and society [Park et al., 2023]. The prominence 19of this ability has led to significant development in the performance of LLMs on many reasoning 20benchmarks. 21Outpacing Current Evaluations. Indeed, advanced models like GPT-4 [OpenAI, 2023] and Gemini 22Ultra [Team, 2023] have even surpassed human-level performance on many benchmarks like MMLU 23[Hendrycks et al., 2020] and MMMU [Yue et al., 2023]. Similarly, LLMs have seen astonishing 24Submitted to 38th Conference on Neural Information Processing Systems (NeurIPS 2024) Workshop on MATH-AI.progress in other challenging benchmarks like GSM8K [Chen et al., 2022] and MATH [Hendrycks 25et al., 2021], with SOTA models attaining nearly 90% accuracy on MATH [Lei, 2024] and nearly 26perfect accuracy on GSM8K [Zhong et al., 2024]. Though this progress is a testament to the rapidly 27evolving ability and utility of LLMs, it also presents a large problem: Existing datasets are no longer 28sufficient to evaluate the reasoning abilities of LLMs. 29Data Contamination. Compounding this issue is one of the most significant problems facing 30evaluation datasets today, i.e., data contamination. As LLMs are increasingly trained on more of 31the internet, an increasing number of the open-source problems used in evaluation benchmarks are 32incorporated in the training data of these models. A model can therefore display artificially high 33“reasoning ability” by simply memorizing the answers it has seen undermining evaluation integrity. 34To address these limitations, we introduce the Putnam-AXIOM ( Advanced e Xamination of 35Intelligence in Operational Mathematics) dataset, a novel and challenging compilation of high- 36level mathematics problems sourced from the prestigious William Lowell Putnam Mathematical 37Competition, an annual mathematics competition for undergraduate college students in North Amer- 38ica which requires advanced mathematical reasoning and covers a wide range of university-level 39mathematical concepts. Further, we also introduce functional variations of this AXIOM dataset to 40combat data contamination taking inspiration from the solution employed by Srivastava et al. [2024]. 41These are small variations of questions on the Putnam that are equally difficult as the Putnam but 42unavailable anywhere on the internet. AXIOM enables fully automated evaluations by requiring 43models to provide final answers within “ \boxed{} ” brackets which can then be extracted and com- 44pared to the ground truth final solution using an equivalence function1. This approach eliminates the 45need for human evaluation, allows for complex open-ended answers, and avoids the limitations of 46multiple-choice formats, thus maintaining rigor while enabling scalability. 47Initial evaluations on Putnam-AXIOM demonstrate its difficulty with OpenAI o1-preview scoring 48less than half at 41.95%, while GPT-4o achieves only 17.80%. Even math-specialized models 49such as Qwen2-Math-7B and Qwen2-Math-7B-Instruct perform poorly, scoring 5.51% and 11.8% 50respectively. Performance further declines on functional variations of Putnam-AXIOM, which include 51significant drops for most models, decreasing by 20-30% in relative performance. These low scores 52underscore Putnam-AXIOM’s utility for measuring LLMs’ advanced reasoning capabilities, while 53the variations scrutinize true reasoning skills by exposing the models’ reliance on memorization. 542 Methods 552.1 Putnam-AXIOM Original Dataset 56Dataset. The Putnam-AXIOM Original Dataset contains 236 problems curated from the William 57Lowell Putnam Mathematical Competition posed between 1985 and 2023. These problems were 58selected based on their ability to yield final “ \boxed{} ” solutions ensuring compatibility with our 59automated evaluation. The dataset encompasses various subjects within university-level mathematics 60categorized into 11 distinct domains - Geometry, Algebra, Trigonometry, Calculus, Linear algebra, 61Combinatorics, Probability, Number theory, Complex numbers, Differential equations and Analysis. 62To maintain a consistent and rigorous evaluation, each problem retains its original exam ID, which 63indicates its difficulty level (A or B for sitting, 1-6 for increasing complexity). This categorization 64helps in evaluating subject-specific understanding and overall problem-solving skills at different levels 65of complexity. The dataset is formatted using LATEXto accurately capture the complex equations 66and symbols the problems employ. Additionally, we utilize Asymptote vector graphics for encoding 67mathematical figures and diagrams to ensure language models can process visual elements directly. 68Further, we standardized the placement of boxed answers by relocating them to the end of each 69solution string to minimize unintended emergent behaviors leading to evaluations that are less "harsh" 70or prone to penalizing the model for formatting deviations rather than actual comprehension. 71Model Assessment. Drawing inspiration from the MATH dataset by [Hendrycks et al., 2021], which 72demonstrated the effectiveness of using boxed answers for evaluating mathematical understanding 73in LLMs, we similarly create a dataset with final solutions being wrapped in \boxed{} commands. 74Boxed answers allow for an exact match criterion rather than relying on approximate heuristics by 751For instance, the equivalence function would evaluate the answers 0.5,1/2, and \frac{1}{2} as equal2simply parsing the LLM generated string solution for the value within the box, thereby enhancing 76reliability and consistency of the evaluation process while being quick and cost-effective. To further 77ensure fair evaluation, we implemented an equivalence function that homogenizes similar answers, 78addressing both simple string inconsistencies and complex mathematical equivalences like (x+ 1)279andx2+ 2x+ 1or numerical expressions such as \frac{1}{2} ,1/2, and 0.5and equating them. 80Modified Boxing. Given the complex nature of certain Putnam questions, some problems do not 81lend themselves to simple, singular boxed answers. Instead, they often include conditions, multiple 82possible answers, varied answer formats and elaborate proofs. These original questions would 83have necessitated costly and difficult human evaluations which we seek to avoid. To address this, 84we modified these questions by adding a trivial next step to the original questions, changing the 85solution accordingly. This additional step was designed so as to ensure that solvers reached the 86same conclusions and insights necessary to solve the problem, but then needed to perform a simpler 87computation to get a simplified, boxable answer. We provide an example of such a change in Figure 883. By incorporating this minor modification, we preserved the inherent difficulty and complexity of 89the original problems while making the answers suitable for our boxed answer evaluation criteria. 902.2 Putnam-AXIOM Variation Dataset 91Models trained on snapshots of the internet have likely encountered Putnam questions, potentially 92inflating their performance on the Putnam-AXIOM Original dataset. Therefore, drawing inspiration 93from Srivastava et al. [2024], we introduce functional variations of select problems from Putnam- 94AXIOM Original providing an effective way of evaluating models that have been trained on the entire 95internet by taking advantage of weaknesses in model memorization. These variations are classified 96into two types. 971.Variable Change. The simplest variation is a variable change, where variable names are 98altered and the final answer is unvaried. Variable changes slightly modify the problem from 99its original statement, which models could have trained on. 1002.Constant Change. Constant changes modify numeric properties of the question, altering 101constants within the step-by-step solution and the final answer. Constant changes signifi- 102cantly transform the problem from its original statement, challenging models to perform 103complex reasoning on how the changes affect the solution and final answer, as in the example 104from Figure 4. 105Variational Dataset Description. We created functional variations for 52 Putnam-AXIOM questions, 106considering limitations such as problem-specific constants, non-generalizable solutions, and questions 107lacking constants or boxable answers. The dataset includes 26 constant+variable and 26 variable- 108only changes. We rephrased problem statements while maintaining the core task to prevent pattern 109recognition by LLMs. Each variation can generate infinite unique, equally difficult snapshots, 110offering a sustainable evaluation method. To evaluate various SOTA models, evaluators are expected 111to generate snapshots (instances of the infinite potential variations) of the variation dataset by running 112the generation code. 1132.3 Model Evaluations 114Using the LM Harness Evaluation framework [Gao et al., 2024], we evaluated several open-source 115and proprietary SOTA LLMs. Models were prompted to provide answers in \boxed format, which 116were then compared to Putnam ground truths with an exact final answer match. We evaluated the 117236-question Putnam-AXIOM Original dataset once. For the variation dataset, we conducted five 118trials, each using a randomly selected variation snapshot and its corresponding 52 original questions. 119We then calculated mean accuracy and 95% confidence intervals. 1203 Results and Analysis 1213.1 Putnam-AXIOM Model Performance 122Table 1 presents Putnam-AXIOM Original dataset accuracies. Most models score below 10%, with 123even NuminaMath, the AI Mathematics Olympiad winner [Investments, 2024], achieving only 4.66%. 1243Figure 1: Drop in accuracies on Putnam-AXIOM Variation vs Original questions is statisticallysignificant for nearly all models. Figure shows mean accuracies with 95% confidence intervals.These low accuracies underscore AXIOM’s difficulty. Figure 1 contrasts Putnam-AXIOM Variation 125dataset mean accuracies with the 52corresponding original questions, along with the confidence 126intervals across the five variation snapshots with the average accuracies in Table 2. Original accuracies 127typically surpass variation accuracies. For models like o1-preview, GPT-4o, Claude-3.5 Sonnet and 128NuminaMath-7B-TIR, non-overlapping confidence intervals reveal statistically significant differences, 129indicating artificially inflated performance on original questions due to data contamination. Looking 130at the numbers highlights significant accuracy declines across models: GPT-4o shows the steepest 131drop at 44% , followed by o1-preview at 30% , GPT-4 at 29% , and Claude-3.5 Sonnet at 28.5% . 132ModelOriginal (Final Accuracy)Score Percentage (%)Gemma-2B-Base 7/236 2.97Gemma-7B-Base 9/236 3.81DeepSeek-Math-7B-Base 14/236 5.93Qwen2-Math-7B-Base 13/236 5.51NuminaMath-7B-Base 11/236 4.66Mistral-7B-v0.3-Base 7/236 2.97Llama-3-8B-Base 9/236 3.81Gemma-2B-Instruct 2/236 0.85Gemma-7B-Instruct 8/236 3.38Qwen2-Math-7B-Instruct 28/236 11.86DeepSeek-Math-7B-Instruct 12/236 5.08Mistral-7B-Instruct-v0.3 8/236 3.38Llama-3-8b Instruct 10/236 4.23DeepSeek-Math-7B-RL 19/236 8.05Claude-3.5 Sonnet 38/236 15.96GPT-4 22/236 9.32GPT-4o 42/236 17.80o1-preview 99/236 41.94Table 1: Putnam-AXIOM Original Results .43.2 LLM Error Analysis 133Though we used automated evaluations for efficiency, a manual review of model responses on 134Putnam-AXIOM Original provides deeper insights into models’ reasoning and errors. We selected 135the two best-performing models, GPT-4o and OpenAI o1-preview, as they likely exhibit the strongest 136reasoning abilities. Our goal is to analyze this reasoning in greater depth. 137OpenAI o1-preview Performance: Out of all models, we see that OpenAI o1-preview performed 138the best on Putnam-AXIOM Original, receiving 41.9%boxed accuracy ( 99/236) while other models 139received less than 20%. Analyzing the answers, we see that most of the OpenAI o1-preview responses 140followed generally the same logical path as the ground truth solution. However, several of these 141questions contained logical mistakes and inconsistencies. The biggest discrepancy between model 142responses and the ground-truth solution was a general lack of mathematical rigor. Whereas the 143ground truth solution will make claims to advance its solution then prove those claims step-by-step, 144o1-preview will often make and use claims without justification. While this does succeed in getting 145to the correct boxed final answer, these unjustified claims would receive little credit when marked by 146a human grader. A large part of the difficulty of mathematical reasoning is being logically airtight 147throughout the entire solution; thus, though o1-preview shows promise, there are still evident flaws 148in its mathematical reasoning abilities. In several solutions like Figure 7, for instance, o1-preview 149correctly identified the maximal or minimal value of a variable, but failed to provide sufficient proof 150that the value it provided was indeed the maximum or minimum. 151GPT-4o Performance: Like the o1-preview, GPT-4o mostly followed correct logical reasoning 152for most of its solutions. For GPT-4o, the biggest discrepancy between model responses and the 153ground-truth solution is the same general lack of mathematical rigor throughout most of the solutions. 154An example of this lack of rigor is shown in Figure 8, where GPT-4o makes the claim that a rectangle 155gives the minimal area subject to a set of constraints without any justification. In addition to issues 156with rigor, GPT-4o also displayed logical leaps and incoherent reasoning, as displayed in Figure 9 157where the model simply assumes that an answer is correct. These logical leaps are symptomatic of 158an issue in the GPT-4o’s CoT reasoning, as the model prioritizes reaching the final answer rather 159providing a rigorous logical output. 160General Analysis: Beyond GPT-4o and the o1-preview, we wanted a general overview of the 161reasoning behaviors of models. To do so, we chose the best-performing open-source models, 162DeepSeek-Math-7B-RL, Qwen2-Math-7B, and NuminaMath-7B. We tend to see that open-source 163models are much more error-prone than the proprietary models we evaluated earlier. In general, 164we notice that open-source models are subject to the same lack of mathematical rigor. However, 165this rigor issue is overshadowed by major calculation errors, hallucinated/irrelevant information, 166misunderstandings of the problem, and logical jumps. For instance, in Figure 10, NuminaMath 167simultaneously makes a calculation, irrelevancy, and misunderstanding error when writing the last 168step of its solution; in Figure 11, the model makes false assumptions about functions defined in the 169problem; in Figure 12, the model completely removes a crucial part of the problem and proceeds to 170an incorrect final solution. 1714 Conclusion 172In this paper, we present Putnam-AXIOM, a novel challenging benchmark of 236problems from 173the Putnam examination for evaluating reasoning capabilities of large language models. Our dataset 174allows for automated evaluations with an equivalence function. While SOTA LLMs already have 175saturated performance on benchmarks like MATH, they still struggle with successfully answering 176questions in Putnam-AXIOM. To address potential data contamination issues, we introduce Putnam- 177AXIOM Variations, altering the variable names, constant values, or the phrasing of the question to 178create a potentially infinite number of problems not found anywhere on the internet. We notice that 179for most problems, models get significantly worse on the variations than they do the corresponding 180original questions. Our dataset fills the void opened by rapid progress in model reasoning capabilities. 181We hope that our benchmark will accelerate future research into artificial reasoning. 1825References 183Yadong Zhang, Shaoguang Mao, Tao Ge, Xun Wang, Adrian de Wynter, Yan Xia, Wenshan Wu, Ting 184Song, Man Lan, and Furu Wei. Llm as a mastermind: A survey of strategic reasoning with large 185language models, 2024. URL https://arxiv.org/abs/2404.01230 . 186Andres M Bran, Sam Cox, Oliver Schilter, Carlo Baldassari, Andrew D White, and Philippe Schwaller. 187Chemcrow: Augmenting large-language models with chemistry tools, 2023. URL https:// 188arxiv.org/abs/2304.05376 . 189Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and 190Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior, 2023. URL 191https://arxiv.org/abs/2304.03442 . 192OpenAI. Gpt-4 technical report. Preprint , 2023. 193Gemini Team. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv: 1942312.11805 , 2023. 195Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, D. Song, and J. Steinhardt. 196Measuring massive multitask language understanding. International Conference on Learning 197Representations , 2020. 198Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu 199Jiang, Weiming Ren, Yuxuan Sun, Cong Wei, Botao Yu, Ruibin Yuan, Renliang Sun, Ming Yin, 200Boyuan Zheng, Zhenzhu Yang, Yibo Liu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. 201Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert 202agi.arXiv preprint arXiv: 2311.16502 , 2023. 203Zhiyu Chen, Shiyang Li, Charese Smiley, Zhiqiang Ma, Sameena Shah, and William Yang Wang. Con- 204vfinqa: Exploring the chain of numerical reasoning in conversational finance question answering. 205arXiv preprint arXiv: 2210.03849 , 2022. 206Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, 207Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the 208math dataset. In J. Vanschoren and S. Yeung, editors, Proceedings of the Neural Infor- 209mation Processing Systems Track on Datasets and Benchmarks , volume 1. Curran, 2021. 210URL https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/ 2112021/file/be83ab3ecd0db773eb2dc1b0a17836a1-Paper-round2.pdf . 212Bin Lei. Macm: Utilizing a multi-agent system for condition mining in solving complex mathematical 213problems, 2024. URL https://arxiv.org/abs/2404.04735 . 214Qihuang Zhong, Kang Wang, Ziyang Xu, Juhua Liu, Liang Ding, Bo Du, and Dacheng Tao. Achieving 215>97URL https://arxiv.org/abs/2404.14963 . 216Saurabh Srivastava, Annarose M B, Anto P V , Shashank Menon, Ajay Sukumar, Adwaith Samod T, 217Alan Philipose, Stevin Prince, and Sooraj Thomas. Functional benchmarks for robust evaluation of 218reasoning performance, and the reasoning gap. arXiv preprint arXiv: 2402.19450 , 2024. 219Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, 220Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, 221Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, 222Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot 223language model evaluation, 07 2024. URL https://zenodo.org/records/12608602 . 224XTX Investments. Ai mathematical olympiad - progress prize 1, 2024. URL https://kaggle. 225com/competitions/ai-mathematical-olympiad-prize . 226United states copyright act. U.S. Code Title 17 , 1976. Available at https://www.copyright.gov/ 227title17/ . 2286Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, 229Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John 230Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv: 2110.14168 , 2312021. 232Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander 233Kranias, John J. Nay, Kshitij Gupta, and Aran Komatsuzaki. Arb: Advanced reasoning benchmark 234for large language models, 2023. URL https://arxiv.org/abs/2307.13692 . 235Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, 236Xu Han, Yujie Huang, Yuxiang Zhang, Jie Liu, Lei Qi, Zhiyuan Liu, and Maosong Sun. Olympiad- 237bench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal 238scientific problems. arXiv preprint arXiv: 2402.14008 , 2024. 239Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. 240Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level 241scientific problem-solving abilities of large language models. arXiv preprint arXiv: 2307.10635 , 2422023. 243George Tsoukalas, Jasper Lee, John Jennings, Jimmy Xin, Michelle Ding, Michael Jennings, Ami- 244tayush Thakur, and Swarat Chaudhuri. Putnambench: Evaluating neural theorem-provers on the 245putnam mathematical competition, 2024. URL https://arxiv.org/abs/2407.11214 . 246Rylan Schaeffer. Pretraining on the test set is all you need, 2023. URL https://arxiv.org/abs/ 2472309.08632 . 248Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, and 249Eneko Agirre. Nlp evaluation in trouble: On the need to measure llm data contamination for each 250benchmark, 2023. URL https://arxiv.org/abs/2310.18018 . 251Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu 252Chen, and Nan Duan. Agieval: A human-centric benchmark for evaluating foundation models, 2532023. URL https://arxiv.org/abs/2304.06364 . 254Inbal Magar and Roy Schwartz. Data contamination: From memorization to exploitation. Annual 255Meeting of the Association for Computational Linguistics , 2022. doi: 10.48550/arXiv.2203.08242. 256Leonardo Ranaldi, Elena Sofia Ruzzetti, and Fabio Massimo Zanzotto. PreCog: Exploring the 257relation between memorization and performance in pre-trained language models. In Ruslan Mitkov 258and Galia Angelova, editors, Proceedings of the 14th International Conference on Recent Advances 259in Natural Language Processing , pages 961–967, Varna, Bulgaria, September 2023. INCOMA 260Ltd., Shoumen, Bulgaria. URL https://aclanthology.org/2023.ranlp-1.103 . 261A Appendix / supplemental material 262A.1 Legal Compliance 263We collect and modify various problems from the William Lowell Putnam Competition to create the 264original and variation datasets of Putnam-AXIOM. Putnam problems are created by the Mathematical 265Association of America (MAA), which is also the source of the AMC and AIME problems used in 266the MATH dataset [Hendrycks et al., 2021]. Like Hendrycks et al. [2021], we do not in any form 267seek to monetize or commercialize Putnam problems—only to utilize them for academic purposes. 268Our use of the Putnam problems to create an evaluation dataset completely falls under the “research” 269section of Fair Use. Indeed, according to Section 107, of the U.S. Copyright Act [USC, 1976], our 270work certainly qualifies as Fair Use for the following reasons: 2711.Our use of MAA problems is only for academic research purposes. We do not monetize or 272commercialize the problems. 2732.Our use of Putnam problems as a reasoning evaluation benchmark for large language models 274is significantly different from their original use as competition problems. 27573.Our use of Putnam problems is transformative. As detailed in Section 2 above, we have 276transformed the questions to be answered with a single numerical or algebraic “boxed 277answer” We have altered all of the solutions so that the final boxed answer lies at the 278end of the solution (so as to encourage models to explain their rationale before outputting 279a solution). We have also standardized the solutions: If there are many solutions given, 280we only use the first; if there are any references irrelevant to mathematics necessary to 281understand and solve the problem (such as comments like “Communicated by ...”), we have 282removed those. 2834.Our use of Putnam problems to construct a benchmark has no effect on the demand for or 284supply of Putnam problems in the William Lowell Putnam Competition. The existence of 285our dataset does not alter the value of the original problems—as those are already freely 286available online—nor does it influence the market of future competitors/problem writers. 287Problem: LetFmbe the mth Fibonacci number, defined by F1=F2= 1 andFm=Fm−1+Fm−2for all m≥3. Let p(x)be the polynomial of degree 1008 such thatp(2n+ 1) = F2n+1forn= 0,1,2, . . . , 1008 . Find integers jandksuch that p(2019) =Fj−Fkand give the answer in the form j/k.Solution: More generally, let p(x)be the polynomial of degree Nsuch that p(2n+ 1) =F2n+1for0≤n≤N. We will show that p(2N+ 3) = F2N+3−FN+2.Define a sequence of polynomials p0(x), . . . , p N(x)byp0(x) = p(x)andpk(x) =pk−1(x)−pk−1(x+ 2) fork≥1. Then by induction on k, it is the case thatpk(2n+ 1) = F2n+1+kfor0≤n≤N−k, and also that pkhas degree (at most)N−kfork≥1. Thus pN(x) =FN+1since pN(1) = FN+1andpNis constant.We now claim that for 0≤k≤N,pN−k(2k+ 3) =Pkj=0FN+1+j. We prove this againby induction on k: for the induction step, we havepN−k(2k+ 3) = pN−k(2k+ 1) + pN−k+1(2k+ 1)=FN+1+k+k−1Xj=0FN+1+j.Thus we havep(2N+ 3) = p0(2N+ 3) =NXj=0FN+1+j.Now one final induction shows thatPmj=1Fj=Fm+2−1, and so p(2N+ 3) = F2N+3−FN+2, as claimed. In the case N= 1008 , we thus have p(2019) = F2019−F1010. Wethus prove that (j, k) = (2019 ,1010) is a valid solution with the final answer thus being2019/1010 .Year: 2017 ID:A6 Final Answer: 2019/1010Figure 2: An example problem in Putnam-AXIOM. Solving this problem requires non-trivial con-structions and multiple advanced reasoning chains. The format of the final answer is specified in theproblem statement to make comparison simpler.8Problem: Determine which positive integersnhave the following property: For all inte-gersmthat are relatively prime to n, thereexists a permutation π:{1,2, . . . , n } →{1,2, . . . , n }such that π(π(k))≡mk(mod n)for all k∈ {1,2, . . . , n }.Solution: The desired property holds ifand only if n= 1orn≡2 (mod 4) . Letσn,m be the permutation of Z/nZinducedby multiplication by m; the original problemasks for which ndoesσn,m always have asquare root.···By Lemma 1, σn,mdoes not have a squareroot.Year: 2016 ID:A1 Final Answer: ??Problem: Determine the sum of the first kpositive integers n(in terms of k) which havethe following property: For all integers mthatare relatively prime to n, there exists a per-mutation π:{1,2, . . . , n } → { 1,2, . . . , n }such that π(π(k))≡mk(mod n)for allk∈ {1,2, . . . , n }.Solution: Letσn,mbe the permutation ofZ/nZinduced by multiplication by m; theoriginal problem asks for which ndoesσn,malways have a square root.···The desired property holds if and only ifn= 1 orn≡2 (mod 4) , hence makingthe required sum 2k2−4k+ 3 .Year: 2016 ID:A1 Final Answer:2k2−4k+ 3Figure 3: A modified boxing example in Putnam-MATH. Here we see that the original problem holdstrue for a number of values of nconditioned on a specific property making it hard to find a boxableexpression. We thus modify the solution to still require the solver to get to that conclusion and adda further computation of summing up the first ksuch values of ngiving a boxable solution whilekeeping the core of the problem the same.9Problem: Define a growing spiral in theplane to be a sequence of points with integercoordinates P0= (0,0), P1, . . . , P nsuchthatn≥2and:···How many of the points (x, y)with integercoordinates 0≤x≤2011,0≤y≤2011cannot be the last point, Pnof any growingspiral?Solution: We claim that the set of pointswith0≤x≤2011 and0≤y≤2011 thatcannot be the last point of a growing spiralare as follows: (0, y)for0≤y≤2011 ;(x,0)and(x,1)for1≤x≤2011 ;(x,2)for2≤x≤2011 ; and(x,3)for3≤x≤2011 .···This gives a total of2012 + 2011 + 2011+2010 + 2009 = 10053excluded points.Year: 2011 ID:A1 Final Answer: 10053Problem: Define a growing spiral in theplane to be a sequence of points with integercoordinates L0= (0 ,0), L1, . . . , L nsuchthatn≥2and:···How many of the points (w, v)with integercoordinates 0≤w≤4680,0≤v≤4680cannot be the last point, Lnof any growingspiral?Solution: We claim that the set of pointswith0≤w≤4680 and0≤v≤4680 thatcannot be the last point of a growing spiral areas follows: (0, v)for0≤v≤4680 ;(w,0)and(w,1)for1≤w≤4680 ;(w,2)for2≤w≤4680 ; and (w,3)for3≤w≤4680 .···This gives a total of4681 + 4680 + 4680+4679 + 4678 = 23398excluded points.Year: 2011 ID:A1 Final Answer: 23398Figure 4: A constant change and variable change in Putnam-AXIOM. Here, we perform a variablechange on the original problem/solution on the left by changing variables ‘ x’ to ‘w,’ ‘y’ to ‘v,’ and‘P’ to ‘L.’ We also perform a constant change by altering the constant ‘ 2011 ’ to ‘4680 ’. The constantchange affects the final answer, changing it from 10053 to23398 .Problem: Determine the greatest possi-ble value ofP10i=1cos(3 xi)for real numbersx1, x2, . . . , x 10satisfyingP10i=1cos(xi) =0.Solution: Since cos(3 xi) = 4cos( xi)3−3cos( xi), it is equivalent to maximize4P10i=1y3ifory1, . . . , y 10∈[−1,1]withP10i=1yi= 0; note that this domain is com-pact, so the maximum value is guaranteed toexist.···The maximum value is 480/49.Year: 2018 ID:A3Final Answer: 480/49Problem: Determine the least possiblevalue ofP10i=1sin(3ci)for real numbersc1, c2, . . . , c 10satisfyingP10i=1sin(ci) = 0 .Solution: Since sin(3ci) = 3sin( ci)−4sin(ci)3, it is equivalent to minimize4P10i=1y3ifory1, . . . , y 10∈[−1,1]withP10i=1yi= 0; note that this domain is com-pact, so the minimum value is guaranteed toexist.···The minimum value is −480/49.Year: 2018 ID:A3Final Answer: −480/49Figure 5: A significant change to a question in Putnam-MATH. Here, we change the variable ‘ x’ to‘c.’ Notably, we also change costosin, and “greatest” to “least.” This constitutes a significant changeto the structure of the problem.10Figure 6: Putnam-AXIOM v.s. Putnam-AXIOM with only complex questionsA.2 Binary and Complex Questions 288Several questions in Putnam-AXIOM are binary, meaning that the question inherently has two 289possible answers. These include true/false questions, questions about divergence or convergence, or 290questions about the winner of a two-player game. These questions make up 26of the 262question in 291Putnam-AXIOM Original; of the 59questions of Putnam-AXIOM Variations, binary questions make 292up7. We refer to all questions that are not binary as “complex” questions. 293Given the guessable nature of these questions and our answer-matching evaluation method, models 294have a much higher chance of randomly guessing the right answer on these questions. 295To discern whether the inclusion of these guessable questions significantly affects the overall difficulty 296of Putnam-AXIOM, we conducted an analysis of the accuracy of various models with and without 297the binary questions, with the overall accuracies in Figure 6. 298We see that, with the exception of Qwen2 Math 7B, almost all models have a higher accuracy on 299Putnam-AXIOM with its binary questions than without, meaning that guessing is contributing to their 300success to some extent. However, we see that on the more advanced models—Qwen2 Math 7B, GPT 3014, and Claude Sonnet 3.5—the gap between the accuracies on the entire dataset and the accuracies 302on only complex questions is much smaller. This is likely because these models are capable enough 303that they successfully answer a similar percentage of complex questions and binary questions; less 304advanced models get significantly fewer complex questions correct than binary questions, so we see a 305large accuracy gap. 306Based on the results of this experiment, we’ve decided to use only the complex questions for most of 307our evaluations such as in Table 1 and Figure 1. 30811A.3 Accuracies for Putnam-AXIOM Variation and corresponding Original questions 309ModelOriginal VariationScore Percentage (%) Score Percentage (%)Gemma-2B-Base 1.4 / 52 2.63 1.2 / 52 2.26Gemma-7B-Base 1.6 / 52 3.01 1.7 / 52 3.39DeepSeek-Math-7B-Base 3.2 / 52 6.03 2.4 / 52 4.52Qwen2-Math-7B-Base 5.2 / 52 9.81 4.8 / 52 9.05NuminaMath-7B-Base 5.6 / 52 10.56 2.8 / 52 5.28Mistral-7B-v0.3-Base 3.5 / 52 6.78 2.6 / 52 4.90Llama-3-8B 2 / 52 3.77 2 / 52 3.77Gemma-2B-Instruct 1.8 / 52 3.39 1.4 / 52 2.64Gemma-7B-Instruct 1.8 / 52 3.39 2.8 / 52 5.28Qwen2-Math-7B-Instruct 5.4 / 52 10.19 4 / 52 7.54DeepSeek-Math-7B-Instruct 4.2 / 52 7.92 2.4 / 52 4.52Mistral-7B-Instruct-v0.3 1.2 / 52 2.26 2.4 / 52 4.52Llama-3-8b Instruct 4 / 52 7.69 2 / 52 3.84DeepSeek-Math-RL 5.6 / 52 10.56 4.4 / 52 8.29Claude-3.5 Sonnet 14 / 52 26.40 10 / 52 18.86GPT-4 7 / 52 13.20 5 / 52 9.43GPT-4o 9 / 52 16.98 5 / 52 9.43o1-preview 26 / 52 50.0 18 / 52 33.96Table 2: Accuracy drops significantly on Putnam-AXIOM Variation compared to correspondingOriginal questions for nearly all models. These are mean accuracies over five trials.12A.4 Model Analysis Examples 310Problem: Letnbe a positive integer. Determine, in terms of n, the largest integer mwiththe following property: There exist real numbers x1, . . . , x 2nwith−1< x 1< x 2<···<x2n<1such that the sum of the lengths of the nintervals[x2k−11, x2k−12],[x2k−13, x2k−14], . . . , [x2k−12n−1, x2k−12n]is equal to 1 for all integers kwith1≤k≤m.Model’s Response: We are given npositive integers and need to determine the largestinteger m, in terms of n, such that there exist real numbers x1, x2, . . . , x 2nsatisfying:1. The xiare strictly increasing between −1and1:−1< x 1< x 2<···< x 2n<12. For each integer kwith1≤k≤m, the sum of the lengths of nintervals is equal to 1:nXi=1x2k−12i−x2k−12i−1= 1Our goal is to find the largest integer m(in terms of n) satisfying these conditions.. . .**General Case:** We have 2nvariables and mequations.- Each equation is independent due to the distinct exponents.- To have a solution, we must have at least as many variables as equations.- **Therefore, the maximum number mis bounded by 2n.**- However, since we have 2nvariables but they are constrained by the ordering andthe interval (−1,1), the effective degrees of freedom are less than 2n.*Conclusion:**Given the constraints and the increasing number of equations with m, the largest in-tegermfor which the conditions can be satisfied is:m=n**Final Answer:**The final answer is n. I hope it is correct.Year: 2022 ID:A6 Final Answer: nFigure 7: An example response where OpenAI o1-preview gets the final answer correct, butmade an unjustified claim. OpenAI o1-preview claims that the largest possible value for misn.However, its only justification for this claim is that mis bounded by 2n. OpenAI o1-preview doesnot justify why a value of mbetween nand2ncannot work.13Problem: Find the least possible area of a convex set in the plane that intersects bothbranches of the hyperbola xy= 1and both branches of the hyperbola xy=−1. (A set Sin the plane is called convex if for any two points in Sthe line segment connecting them iscontained in S.)Model’s Response: To find the least possible area of a convex set in the plane that intersectsboth branches of the hyperbolas xy= 1andxy=−1, we need to consider the geometricconstraints. Let’s start by visualizing these hyperbolas:The hyperbola xy= 1consists of two branches in the first and third quadrants:y=1x.The hyperbola xy=−1consists of two branches in the second and fourth quad-rants:y=−1x.We aim to place a convex set that intersects both branches of each hyperbola. Oneuseful geometric figure for this is a rectangle aligned along the hyperbolic axes. We canguarantee that the rectangle will intersect both branches of xy= 1andxy=−1by makingits sides tangent to the hyperbolas.. . .The geometric configuration simplifies the calculation:The minimum enclosing convex area tangency ideally is of square structure, yielding the unithyperbola intersection:area= 4(Since side 2·2 = 4 formation in closed square) .This geometric construction ensures least, minimum convex interaction:Final Answer: The final answer is 4. I hope it is correct.Year: 2007 ID:A2 Final Answer: 4Figure 8: An example response where GPT-4o gets the final answer correct, but made anunjustified logical step, resulting in a non-rigorous solution. OpenAI o1-preview logically jumpsto the idea that the geometric shape that minimizes the area is a rectangle. However, it does notattempt to justify this claim in any way, and proceeds as though this is obvious.14Problem: Given11z10+ 10iz9+ 10iz−11 = 0 ,find the value of |z|. (Here zis a complex number and i2=−1.)Model’s Response: To solve the equation11z10+ 10iz9+ 10iz−11 = 0and find the value of |z|, where zis a complex number, we’ll proceed step by step.. . .**Step 2: Assume |z|= 1**Let’s assume that |z|= 1, which means z=eiθfor some real number θ.*Step 3: Substitute z=eiθinto the Equation**Substituting z=eiθinto the equation, we get:11(eiθ)10+ 10i(eiθ)9+ 10ieiθ−11 = 0Simplify each term:-(eiθ)10=e10iθ-(eiθ)9=e9iθSo the equation becomes:11e10iθ+ 10ie9iθ+ 10ieiθ−11 = 0. . .**Step 6: Analyze the Equations**The trigonometric equations are complex and difficult to solve directly. However, notice thatif|z|= 1, then z=eiθlies on the unit circle, and its magnitude is indeed 1.Moreover, since we couldn’t find any specific θthat satisfies both equations simultaneously(as shown by trying several angles), this suggests that no specific angle θother than thosecorresponding to |z|= 1satisfy the equation.**Conclusion:**Since our manipulations suggest that |z|= 1is necessary (and given the nature of complexnumbers and the unit circle), we conclude that:*Final Answer:** The final answer is 1. I hope it is correct.Year: 1989 ID:A3 Final Answer: 1Figure 9: An example response where GPT-4o make a logical leap and is completely incoherent .In this solution, the model jumps to assume that |z|= 1and performs a series of calculations. Then,it concludes that the final answer is 1simply because the equations it made were too “complex anddifficult” to solve.15Problem: Letkbe the smallest positive integer for which there exist distinct integersm1, m2, m3, m4, m5such that the polynomialp(x) = (x−m1)(x−m2)(x−m3)(x−m4)(x−m5)has exactly knonzero coefficients. Find a set of integers m1, m2, m3, m4, m5for whichthis minimum kis achieved and given the sum k+m1+m2+m3+m4+m5as the finalanswer.Model’s Response: The polynomial p(x) = (x−m1)(x−m2)(x−m3)(x−m4)(x−m5)has exactly knonzero coefficients when the values of miare distinct integers. We want tominimize kand find a set of values for mithat achieves this minimum.The expansion of p(x)will include all possible products of the differences (x−mi). Foreach term of the form (x−a), this generates a coefficient for that term, and this will happenfor each combination of iandj, including when i=j.The minimum possible value for kcan be calculated by considering the sparsity of nonzerocoefficients in the polynomial’s expansion. This is achieved when the polynomial is writtenin the form that it eliminates as many terms as possible, leading to a minimal number ofnonzero coefficients.For(x−m1)(x−m2)(x−m3)(x−m4)(x−m5), it can be achieved if the differences(x−mi)produce terms that include maximal cancellation. The problem becomes one ofdetermining the minimal degree of the polynomial that includes maximal cancellation.The key observation here is that the minimal nonzero coefficients are achieved when thepolynomial is arranged in such a way that most coefficients are zero due to symmetry andcancellation properties.The smallest positive integer kfor such polynomials happens at k= 2 , for instance,(x−1)(x−2)(x−3)(x−4)(x−5)results in a polynomial with precisely coefficients asfollows:p(x) =x5−(1 + 2 + 3 + 4 + 5) x4+(sum of products of each combination of 2 distinct elements) x3etc...Since the polynomial in this problem demands exactly knonzero coefficients, the product ofdifferences can lead to cancellations resulting in minimal terms.Let’s consider the initial observation:m1= 1, m2= 2, m3= 3, m4= 4, m5= 5The resulting polynomial, (x−1)(x−2)(x−3)(x−4)(x−5)Final answer: 15Year: 1985 ID:B1 Final Answer: 3Figure 10: An example response where NuminaMath makes a calculation error, includesirrelevant information, and misunderstands the problem. In this solution, we see that the finalanswer is obtained by adding k+m1+m2+m3+m4+m5. In the model’s final solution, itsvalue of kis2, so its final answer should be 2 + 1 + 2 + 3 + 4 + 5 = 17 , not 15; this is due amisunderstanding of the problem and/or a calculation error. Furthermore, the model includes theexpression of the “resulting polynomial,” which is completely irrelevant to the problem it’s attemptingto solve.16Problem: Ifp(x) =a0+a1x+···+amxmis a polynomial with real coefficients ai, thensetΓ(p(x)) =a20+a21+···+a2m.Letf(x) = 3 x2+ 7x+ 2. Find, with proof, a polynomial g(x)with real coefficients suchthat(i)g(0) = 1 , and(ii)Γ(f(x)n) = Γ( g(x)n)for every integer n≥1.Model’s Response: We have that F(x) = 3 x2+ 7x+ 2 and thus Γ(F(x)) = 32+72+ 22= 9 + 49 + 4 = 62 . We want to find a polynomial g(x)such that g(0) = 1 andΓ(F(x)n) = Γ( g(x)n)for every integer n≥1. Suppose g(x) =x2+bx+ 1. ThenΓ(g(x)) = 1 + b2+ 1 = b2+ 2. If we can find a value for bsuch that b2+ 2 = 62 , then wehaveΓ(g(x)n) = Γ( F(x)n). Solving b2+ 2 = 62 gives b2= 60 , sob=√60 = 2√15.Therefore, we can take g(x) =x2+ 2√15x+ 1. Since g(0) = 1 ,g(x)satisfies bothconditions.Final Answer: The final answer is x2+ 2√15x+ 1 . I hope it is correct.Year: 1985 ID:A6 Final Answer: 6x2+ 5x+ 1Figure 11: An example response where Qwen2-Math-7B hallucinates and misunderstandsthe problem. First, Qwen misunderstands that the polynomial is quadratic—which is not statedanywhere in the problem statement. Then, Qwen falsely states assumes that if Γ(g(x)) = Γ( f(x)),thenΓ(g(x)n) = Γ( f(x))nfor all n. These mistakes lead Qwen to the wrong solution.17Problem: Letx0= 1 and for n≥0, letxn+1= 3xn+⌊xn√5⌋. In particular, x1= 5,x2= 26 ,x3= 136 ,x4= 712 . Find a closed-form expression for x2007. (⌊a⌋means thelargest integer ≤a.)Model’s Response:xn+1= 3xn+⌊xn√5⌋= 3xn+⌊xn(√5−2) + 2 xn⌋= 3xn+xn(√5−2) + 2 xn= 5xn+xn(√5−2)=xn(2 +√5).If we let yn=xn(2 +√5), then xn=yn/(2 +√5), soyn+1=yn·2 +√52 +√5=yn.Thus, yn=y0for all n≥1, soxn=y02 +√5=12 +√5=2−√5(−2 +√5)(2 +√5)=2−√55−2= 2−√5.Final Answer: The final answer is 2−√5. I hope it is correct.Year: 2007 ID:A5 Final Answer:22006√51+√523997−1+√52−3997Figure 12: An example response where DeepSeek-Math-7B-RL makes an incorrect logical jump.In the series of equalities in the beginning, DeepSeek simply removes the floor function and proceeds.However, the floor function is an integral part of the problem, and without it, DeepSeek is unable tocome to the right solution.18B Related Work 311B.1 Mathematics benchmarks 312Numerous benchmarks exist to assess the mathematical capabilities of models, each typically focusing 313on a specific task. Two notable examples are MATH [Hendrycks et al., 2021] and GSM8K [Cobbe 314et al., 2021]. The MATH dataset contains questions sourced from American high school mathematics 315competitions such as the AMC 10, AMC 12, and AIME [Hendrycks et al., 2021], while the GSM8K 316dataset contains 8.5K handwritten elementary school level questions Cobbe et al. [2021]. Both 317contain questions and answers with detailed rationale explanations. 318As models have become larger and more powerful, even the most difficult existing benchmarks have 319become less challenging. For instance, while the MATH dataset saw 6.9%accuracy on its release, 320it now sees 87.92% accuracy with GPT-4 MACM [Lei, 2024]. Similarly, GPT4 has attained 97.1% 321accuracy on the GSM8K [Zhong et al., 2024]. This saturation necessitates the development of more 322challenging benchmarks. 323Many contemporary data sets have been created to combat the saturation of existing benchmarks. For 324instance, the ARB dataset includes hundreds of challenging problems in high school and college-level 325math, physics, and chemistry Sawada et al. [2023]. Similarly OlympiadBench contains nearly 9,000 326problems from the International Mathematics Olympiad (IMO), the Chinese GaoKao, and more 327He et al. [2024]. Finally, SciBench is a similar reasoning benchmark that includes hundreds of 328college-level scientific reasoning questions from instructional textbooks Wang et al. [2023]. 329Although these datasets alleviate the saturation problem, they come with many limitations. For 330instance, ARB Sawada et al. [2023] and OlympiadBench He et al. [2024] both contain several 331symbolic and proof-based questions which cannot be graded automatically and require a costly 332and lengthy human evaluation process. Though ARB attempts to utilize LLMs to grade their own 333responses with a rubric, this process is often unreliable and self-referential. Our Putnam-AXIOM 334dataset addresses these limitations by offering challenging Putnam problems with fully-written 335solutions and easily evaluable answers. It enables efficient automated assessment via frameworks 336like LM Harness [Gao et al., 2024], avoiding costly human evaluation or unreliable self-grading. 337PutnamBench [Tsoukalas et al., 2024] is a related benchmark that primarily focuses on formal theorem 338proving. Its main objective is to derive formalized proofs of mathematical statements and it provides 339formalizations in systems such as Lean, Isabelle, and Coq, all sourced from the prestigious Putnam 340competition. PutnamBench also includes 640 natural language statements and their corresponding 341answers where applicable. While both benchmarks draw from the same competition, Putnam- 342AXIOM focuses on the curation of natural language problems for final answer verification and 343introduces automatic functional variations to generate additional benchmarks addressing potential 344data contamination. Further we focus on assessing true mathematical reasoning ability and hence 345take measures to remove easily guessable answers. 346B.2 Functional Benchmarks 347Data contamination is a significant problem in creating evaluation benchmarks, as many of these 348problems are openly available on the Internet and are likely included in the training data for large 349models [Schaeffer, 2023, Sainz et al., 2023]. Thus, the MATH [Hendrycks et al., 2021], AGIEval 350[Zhong et al., 2023], OlympiadBench [He et al., 2024], and ARB [Sawada et al., 2023] benchmarks 351(which are all sourced from problems on the Internet) could potentially be contaminated. Therefore, 352models may achieve artificially high performance on an evaluation benchmark by memorizing the 353answers to the problems Magar and Schwartz [2022], Ranaldi et al. [2023]. 354A straightforward way of avoiding data contamination issues is to utilize problems unavailable on the 355Internet. However, even if problems are not currently part of model training data, it is unrealistic to 356expect them to remain inaccessible. At the same time, it is costly to rely on the continuous human 357development of new datasets. 358Srivastava et al. [2024] attempts to alleviate this data contamination issue by creating functional 359variations of the MATH dataset, where new problems can be generated simply by changing numeric 360parameters, yielding different solutions. They observe a significant discrepancy in models’ perfor- 361mance between standard benchmarks and these new variations. We recognize the potential of this 36219idea and have adapted it to our more challenging dataset. We have altered the variables, constants, 363and phrasing of many Putnam questions while preserving their overall difficulty and requirements for 364logical and mathematical reasoning. 36520 |
Y3cF5IzhiV | TurtleBench: A Visual Programming Benchmarkin Turtle GeometrySina Rismanchian, Yasaman Razeghi, Sameer Singh, Shayan DoroudiUniversity of California, Irvine{srismanc,yrazeghi,sameer,doroudis }@uci.eduAbstractWhile formal geometric reasoning may be difficult for humans without extensivetraining, humans seem to have the ability to intuitively reason about geometricpatterns in images and scenes from a young age. In contrast, developing largemultimodal models (LMMs) capable of similar feats represents a frontier in AIresearch. We introduce TurtleBench, a benchmark designed to evaluate LMMs’ ca-pacity to interpret geometric patterns—given visual examples, textual instructions,or both—and generate precise code outputs. Inspired by turtle geometry, a notionused to teach children foundational coding and geometric concepts, TurtleBenchfeatures tasks with patterned shapes that have underlying algorithmic logic. Unlikeobject detection tasks that typically do not involve understanding underlying pat-terns, this benchmark combines geometrical reasoning with image understanding.Our evaluation reveals that leading LMMs struggle significantly with these tasks,with GPT-4V achieving only 19% accuracy on the simplest tasks. TurtleBenchhighlights the gap between human and AI performance in intuitive and visualgeometrical understanding, setting the stage for future research in this area.1 IntroductionGeometric reasoning is a hallmark of human mathematical reasoning that has been studied since theAncient Greeks. It was a task that attracted early artificial intelligence (AI) researchers and earlyefforts on building intelligent tutoring systems also focused on geometry. Yet much of the emphasison geometric reasoning is on axiomatic-deductive geometry. Humans of all ages are naturally goodat more intuitive kinds of geometric reasoning that inform how we see and navigate the world.One aspect of this is our ability to look at a geometric shape or complex pattern and construct analgorithm to generate that pattern. We believe this is a powerful task to evaluate large multimodalmodels (LMMs) for a number of reasons. First of all, constructing patterns in this way reflectsan early programming paradigm for teaching kids programming, initially developed in the 1970swith the introduction of the Logo programming language (Papert, 1972, 1980). For several decades,children from a young age have been learning how to procedurally draw geometric patterns and otherdrawings using code in programming languages like Logo, Scratch, and Python—often as their firstintroduction to programming. Given LMMs’ success in a variety of complex programming tasks,one might expect a programming task that children could solve to be easy. Second, recent researchsuggests that this ability to procedurally generate shapes may be more fundamental to our psychologythan meets the eye. Spelke (2022) claims that from infancy (or even birth), humans have a set of sixcore knowledge systems, two of which contribute to our understanding of geometry: a form systemand a place system. While the form system allows us to perceive the boundaries of objects, our coreknowledge of places interprets geometry in terms of how to navigate an environment (Dillon, 2023).Taking this a step further, Sabl ́e-Meyer et al. (2022) suggest that humans perceive shapes and patternsin terms of procedural programs that could generate them; they demonstrate that the time it takes38th Conference on Neural Information Processing Systems (NeurIPS 2024).Write a code in Python turtle that creates exactly the same shape.The shape is a regular rectangle, which means all sides are equal in length.for i in range(4):t.forward(100)t.right(90)Write a code in Python turtle that creates exactly the same shape.Write a code in Python turtle that creates a square with two horizontal sides.The image shows a square with two horizontal sides. Write a code in Python Turtle that creates exactly the same shape.Scratch TasksImage InputText InputImage + Text InputWrite a code in Python turtle that tweaks the above image, based on the instruction.Instruction: Connect the midpoint of each side to the midpoint of adjacent sides.Code:for i in range(4):t.forward(100)t.right(90)Thegiven code generates the given image. Write a code in Python Turtle that tweaks the image based on the instruction.Instruction: Connect the midpoint of each side to the midpoint of adjacent sides.Tweak TasksImage + Text InputCode GenerationCode GenerationCode EditImage + Text InputImage + Image InputCode:for i in range(4):t.forward(100)t.right(90)The given code generates the upper image. Write a code in Python Turtle that creates the following image.An example of A Scratch task, in Code Generation mode, with image inputFigure 1: An illustration of existing types and modes in TurtleBench, A task may have a type ofScratch or Tweak, in a mode of code generation or code edit, with various modalities in the input.for people to process these shapes correlates with the minimum description length of the shape in aLogo-like programming language.In this work, we introduce TurtleBench, a set of manually crafted image/text to code tasks in turtlegeometry (Papert, 1972; Abelson & diSessa, 1986) to evaluate the abilities of these models to combinevisual pattern recognition, abstract geometrical reasoning, and Python programming. To ensure thevisual inputs and the programming language remain straightforward, TurtleBench harnesses turtlegeometry, a concept widely recognized for its effectiveness in introducing programming conceptsto children within the K-12 education system. Although turtle programming is now used more asa tool to foster computational thinking, turtle geometry has also been explored as a powerful wayof teaching geometry and mathematical reasoning to children (Hoyles & Noss, 1992; Clements &Sarama, 1997). In turtle geometry, a turtle acts as a programmable object that navigates the screen,drawing as it goes and turning at specified angles, to create simple visual patterns. The primaryobjective within this framework is to generate code capable of producing simple visual inputs. Thesevisual inputs consist of basic geometric shapes, and the programming syntax required is intentionallylimited and straightforward. An example of such a task is presented in the left side of Figure 1. Asillustrated, the input image is the shape of a simple square and the corresponding code only uses twosimple turtle functions ( forward andright ) along with a simple for loop. This simplicity makesTurtleBench an effective benchmark for evaluating the capabilities of LMMs.To reflect different real-world use cases of an LMM in the domain of Turtle and also cover thebroad range of underlying reasoning abilities, TurtleBench includes 260 tasks with a variety of typesand modalities. We conduct an evaluation of leading LMMs on TurtleBench code generation andcode editing tasks, utilizing zero-shot and visual chain-of-thought (Singh et al., 2023) approachesacross text-only, image-only, and mixed (text and image) input modalities. Our findings reveal thatthese models generally perform poorly across all setups and variety of tasks and modalities. Ourbest-performing model, GPT-4V, outperforms Gemini 1.5 Flash yet neither model comes close tosolving TurtleBench tasks, as about 75% of the tasks were left completely unsolved. Intriguingly,our results indicate that performance improves when tasks are presented in text, rather than inputtingimages. This suggests that integrating visual and linguistic information, particularly in domainsrequiring visual pattern recognition, may need further refinement. All these findings demonstrate thatour benchmark poses a challenging task for LMMs, providing valuable insights into their capabilities.2 Overview of TurtleBenchTurtleBench is a set of 260 tasks that are designed to evaluate LMMs’ performance on visionand language algorithmic reasoning tasks. To ensure the novelty of the tasks and their quality inincorporating authentic geometric shapes and concepts, we craft TurtleBench manually. All the tasks2in TurtleBench are accurately solvable based on the provided information for each, which means thatthere are no ambiguities or arbitrary parameters leading to inaccuracies in the tasks for humans aswell as the models. To remove possible ambiguities in the tasks, two independent annotators workedwith us to identify and resolve any unclear instructions. Each task consists of a black-and-white imageillustrating a set of abstract geometric shapes as an input . An example of this task is presented inFigure 1. TurtleBench is made up of two different types of tasks, these types reflect the methodologiesused in turtle geometry to introduce programming to children.Scratch tasks are intended to show how well a model understands a pattern and translates its under-standing to an executable code. In the general case of this type of task, an image is provided, and therequested output is code in Python Turtle that creates the shapes in the image. In all scratch tasks,the model is asked to generate the code in Python Turtle for the desired input shape. TurtleBenchincludes a total of 130scratch tasks. An example of these tasks is provided in Figure 1, top rows. Todistinguish between the models’ visual comprehension and their textual understanding, a subset (31%)of these tasks includes a text description of the image input in addition to the visual representation.This setup facilitates the evaluation of how models respond differently to visual and textual inputs,providing a clearer understanding of their capabilities.Tweak tasks are intended to measure how well a model uses their understanding of a visual pattern,combined with an instruction to make minimal alterations. Each tweak task presents a model with animage and an instruction; the expected output is Python Turtle code that modifies the shape in theinput image according to the given instruction. These tasks are particularly insightful for determiningwhether a model is merely recalling memorized code for an image, or if it has developed a deeper,more human-like comprehension of the patterns depicted in the images. For instance, a model mightbe capable of generating code for a certain shape based on training data, but the real challenge lies inits ability to adapt that shape in response to various instructed changes. An example of these tasks isprovided in Figure 1, bottom row. Here, the model is given an input image of a rectangle, with aninstruction to connect the midpoint of each side to the midpoint of adjacent sides . As illustrated inFigure 1, we also introduce a code editing version of the tweak task. In this version, we supply thecode corresponding to the input image and then instruct the models to make specific modificationsto this code, aiming to achieve a change in the image as per the provided instructions. Detailedinformation about types of tweaks and their examples is provided in Appendix C.4.3 Evaluation SetupIn the following section, we evaluate TurtleBench using two state-of-the-art LMMs, GPT-4V andGemini 1.5 Flash and also an open source model, namely Llava-1.5-13B(Liu et al., 2023) employinggreedy decoding in our evaluations. We evaluated two other open models, namely Qwen-VL-Max(Bai et al., 2023) and CogVLM (Wang et al., 2023) on a subset of tasks in TurtleBench. However,CogVLM and Qwen are not successful in producing a syntactically correct Python Turtle pieceof code even for the simplest tasks, therefore we limited our benchmark evaluation to the modelsmentioned above.We utilize two types of prompting in our experiments, 1) basic, where we simply prompt the themodel (c.f. Appendix C.2) to do our tasks, and 2) Chain-of-Thought (CoT) prompting (Wei et al.,2022), which has shown to be an effective prompting technique in eliciting reasoning in these models.Specifically, we use a more detailed version of CoT prompting that is tailored to LMMs, namelyv-CoT, recently proposed by Singh et al. (2023). The v-CoT approach is inspired by m-CoT (Zhanget al., 2023), which shows higher performance compared to it. This prompting has been shown toimprove LMMs’ performance on visual tasks that involved reasoning, such as ARC (Chollet, 2019).This prompt, instructs the model to first extract all the relevant information in the image neededfor answering the problem and then to reason step by step based on the information extracted. Thespecific prompt we used in our experiments is in Appendix C.24 Results4.1 Models perform poorly on TurtleBenchWe initially examine the performance of the GPT-4V, Gemini 1.5 Flash and Llava-1.5-13B modelson the comprehensive TurtleBench dataset. The findings, detailed in Table 1, reveal a notably poor3GPT-4VbasicGeminibasicGPT-4V0-S CoTGemini0-S CoTLlava-1.5basicLlava-1.50-s CoTScratch Code GenerationImage only 16% 7.7% 19.23% 8.46% 1% 1%Tweak Code GenerationImage + Text 10% 3.85% 12.3% 7.7% 0% 1%Tweak Code EditImage + Text 18% 12% 18.46% 18.46% 1% 1%Image + Image 12% 3% 13.84% 8.46% NA NATable 1: Performance of GPT-4V, Gemini 1.5 Flash, and Llava-1.5-13B on TurtleBench. Our resultshows that models perform poorly on TurtleBench.performance across the tasks in TurtleBench, with a peak accuracy of 20% achieved by GPT-4Vin the code editing tasks, facilitated by Chain of Thought (CoT) prompting. In the scratch tasks,which represent the simplest problem type within the dataset, GPT-4V’s success rate was just 19%,underscoring the substantial challenges and complexities these tasks pose to the current models.A comparison between CoT and basic prompting within Table 1 illustrates that CoT promptingoutperforms basic prompting on the same models, aligning with previous work that indicates CoTenhances models’ reasoning abilities (Zhang et al., 2023). However, despite utilizing CoT prompting,the task remains far from being solved. Additionally, we note a decline in the performance ofmodels when comparing tasks that involve tweaks to those starting from scratch. This observationsuggests that models fail to generalize their understanding to tweak tasks, even if they can successfullycomplete tasks from scratch. Examples of model output in different subsets of the task are providedin Figures 8 and 10.4.2 Limited Visual Understanding in LMMs: Insights from Textual vs. Visual Tweak TasksFor tweak tasks, where the AI had to edit existing code, we gave instructions either in natural languageor as images (see Figure 1, bottom rows, left two columns). As can be seen by comparing the bottomtwo rows in Table 1, there is a huge decline in accuracy when instructions were provided visuallyrather than textually, especially for Gemini. This outcome suggests a disparity in the models’ abilityto process visual versus textual instructions, revealing that their reasoning abilities may not alignclosely with human-like understanding. The assumption that directly viewing the desired outcomesimplifies the task contrasts sharply with our findings, highlighting a reliance on textual interpretationfor reasoning and a notable limitation in pure visual reasoning capabilities within these models.In Appendix B.3, we provide further evidence of this with additional analyses on scratch tasks byvarying the input to those tasks (i.e., visual or textual descriptions).5 ConclusionsThis study introduces TurtleBench, the first of its kind in benchmarks that focus on convertingvisual inputs to code outputs. The evaluation results from TurtleBench reveal a significant disparitybetween human capabilities and current state-of-the-art AI models in understanding simple geometricshapes, reasoning about these shapes, and converting such understandings into executable code.This gap underscores the challenges that lie ahead in the quest to enhance AI’s comprehension andproblem-solving abilities to match human levels. We believe that TurtleBench serves as a crucialtool in the evaluation of models, offering a clear benchmark that tests the limits of large multimodalmodels.ReferencesHarold Abelson and Andrea diSessa. Turtle geometry: The computer as a medium for exploringmathematics . MIT press, 1986.4Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick,and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE internationalconference on computer vision , pp. 2425–2433, 2015.Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, ChangZhou, and Jingren Zhou. Qwen-vl: A frontier large vision-language model with versatile abilities.arXiv preprint arXiv:2308.12966 , 2023.Jonas Belouadi, Anne Lauscher, and Steffen Eger. Automatikz: Text-guided synthesis of scientificvector graphics with tikz. arXiv preprint arXiv:2310.00367 , 2023.S ́ebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar,Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence:Early experiments with gpt-4. arXiv preprint arXiv:2303.12712 , 2023.Anoop Cherian, Kuan-Chuan Peng, Suhas Lohit, Kevin A Smith, and Joshua B Tenenbaum. Aredeep neural networks smarter than second graders? In Proceedings of the IEEE/CVF Conferenceon Computer Vision and Pattern Recognition , pp. 10834–10844, 2023.Franc ̧ois Chollet. On the measure of intelligence. arXiv preprint arXiv:1911.01547 , 2019.Douglas H Clements and Julie Sarama. Children’s mathematical reasoning with the turtle program-ming metaphor. In Mathematical Reasoning , pp. 313–337. Routledge, 1997.Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang,Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-languagemodels with instruction tuning, 2023.Moira R Dillon. Divisive language, 2023.Kevin Ellis, Lionel Wong, Maxwell Nye, Mathias Sable-Meyer, Luc Cary, Lore Anaya Pozo, LukeHewitt, Armando Solar-Lezama, and Joshua B Tenenbaum. Dreamcoder: growing generalizable,interpretable knowledge with wake–sleep bayesian program learning. Philosophical Transactionsof the Royal Society A , 381(2251):20220050, 2023.Kenneth Forbus, Jeffrey Usher, Andrew Lovett, Kate Lockwood, and Jon Wetzel. Cogsketch: Sketchunderstanding for cognitive science research and for education. Topics in Cognitive Science , 3(4):648–666, 2011.Gabriel Grand, Lionel Wong, Matthew Bowers, Theo X Olausson, Muxin Liu, Joshua B Tenenbaum,and Jacob Andreas. Lilo: Learning interpretable libraries by compressing and documenting code.arXiv preprint arXiv:2310.19791 , 2023.Celia Hoyles and Richard Noss. Learning mathematics and Logo . MIT Press, 1992.Huaizu Jiang, Xiaojian Ma, Weili Nie, Zhiding Yu, Yuke Zhu, and Anima Anandkumar. Bongard-hoi:Benchmarking few-shot visual reasoning for human-object interactions. In Proceedings of theIEEE/CVF conference on computer vision and pattern recognition , pp. 19056–19065, 2022.Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, andRoss Girshick. Clevr: A diagnostic dataset for compositional language and elementary visualreasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition , pp.2901–2910, 2017.Brenden M Lake and Steven T Piantadosi. People infer recursive visual concepts from just a fewexamples. Computational Brain & Behavior , 3(1):54–65, 2020.Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learningthrough probabilistic program induction. Science , 350(6266):1332–1338, 2015.Yi Lin and Moira R Dillon. We are wanderers: Abstract geometry reflects spatial navigation. Journalof Experimental Psychology: General , 2023.Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instructiontuning, 2023.5Raja Marjieh, Pol Van Rijn, Ilia Sucholutsky, Theodore R Sumers, Harin Lee, Thomas L Griffiths,and Nori Jacoby. Words are all you need? language as an approximation for human similarityjudgments. arXiv preprint arXiv:2206.04105 , 2022.Melanie Mitchell, Alessandro B Palmarini, and Arseny Moskvichev. Comparing humans, gpt-4, andgpt-4v on abstraction and reasoning tasks. arXiv preprint arXiv:2311.09247 , 2023.Weili Nie, Zhiding Yu, Lei Mao, Ankit B Patel, Yuke Zhu, and Anima Anandkumar. Bongard-logo: Anew benchmark for human-level concept learning and reasoning. Advances in Neural InformationProcessing Systems , 33:16468–16480, 2020.OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia LeoniAleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, IgorBabuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian,Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, LennyBogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks,Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, ChelseaCarlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen,Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung,Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch,Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, AttyEleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Sim ́on Posada Fishman, Juston Forte,Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, GabrielGoh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, JoshuaGross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, MikeHeaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, BrandonHoughton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, JoanneJang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, HeewooJun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar,Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan HendrikKirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich,Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, TeddyLee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, StephanieLin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini,Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne,Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, DavidMedina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, VinnieMonaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David M ́ely,Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, HyeonwooNoh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano,Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng,Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto,Michael, Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris Power,Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, FrancisReal, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, TedSanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, DanielSelsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, SzymonSidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky,Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang,Nikolas Tezak, Madeleine B. Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, PrestonTuggle, Nick Turley, Jerry Tworek, Juan Felipe Cer ́on Uribe, Andrea Vallone, Arun Vijayvergiya,Chelsea V oss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, JasonWei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff,Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu,Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba,Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang,William Zhuk, and Barret Zoph. Gpt-4 technical report, 2024.Seymour Papert. On making a theorem for a child. In Proceedings of the ACM annual conference-Volume 1 , pp. 345–349, 1972.6Seymour Papert. Mindstorms: Children, computers, and powerful ideas . Basic Books, Inc., 1980.Joshua S Rule, Joshua B Tenenbaum, and Steven T Piantadosi. The child as hacker. Trends incognitive sciences , 24(11):900–915, 2020.Mathias Sabl ́e-Meyer, Kevin Ellis, Josh Tenenbaum, and Stanislas Dehaene. A language of thoughtfor the mental representation of geometric shapes. Cognitive Psychology , 139:101527, 2022.ISSN 0010-0285. doi: https://doi.org/10.1016/j.cogpsych.2022.101527. URL https://www.sciencedirect.com/science/article/pii/S0010028522000639 .Mukul Singh, Jos ́e Cambronero, Sumit Gulwani, Vu Le, and Gust Verbruggen. Assessing gpt4-v onstructured reasoning tasks. arXiv preprint arXiv:2312.11524 , 2023.Elizabeth S Spelke. What babies know: Core knowledge and composition volume 1 , volume 1.Oxford University Press, 2022.Elizabeth S Spelke and Katherine D Kinzler. Core knowledge. Developmental science , 10(1):89–96,2007.Ilia Sucholutsky and Tom Griffiths. Alignment with human representations supports robust few-shotlearning. Advances in Neural Information Processing Systems , 36, 2024.Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, SlavPetrov, Melvin Johnson, Ioannis Antonoglou, Julian Schrittwieser, Amelia Glaese, Jilin Chen,Emily Pitler, Timothy Lillicrap, Angeliki Lazaridou, Orhan Firat, James Molloy, Michael Isard,Paul R. Barham, Tom Hennigan, Benjamin Lee, Fabio Viola, Malcolm Reynolds, YuanzhongXu, Ryan Doherty, Eli Collins, Clemens Meyer, Eliza Rutherford, Erica Moreira, Kareem Ayoub,Megha Goel, George Tucker, Enrique Piqueras, Maxim Krikun, Iain Barr, Nikolay Savinov, IvoDanihelka, Becca Roelofs, Ana ̈ıs White, Anders Andreassen, Tamara von Glehn, LakshmanYagati, Mehran Kazemi, Lucas Gonzalez, Misha Khalman, Jakub Sygnowski, Alexandre Frechette,Charlotte Smith, Laura Culp, Lev Proleev, Yi Luan, Xi Chen, James Lottes, Nathan Schucher,Federico Lebron, Alban Rrustemi, Natalie Clay, Phil Crone, Tomas Kocisky, Jeffrey Zhao, BartekPerz, Dian Yu, Heidi Howard, Adam Bloniarz, Jack W. Rae, Han Lu, Laurent Sifre, MarcelloMaggioni, Fred Alcober, Dan Garrette, Megan Barnes, Shantanu Thakoor, Jacob Austin, GabrielBarth-Maron, William Wong, Rishabh Joshi, Rahma Chaabouni, Deeni Fatiha, Arun Ahuja,Ruibo Liu, Yunxuan Li, Sarah Cogan, Jeremy Chen, Chao Jia, Chenjie Gu, Qiao Zhang, JordanGrimstad, Ale Jakse Hartman, Martin Chadwick, Gaurav Singh Tomar, Xavier Garcia, Evan Senter,Emanuel Taropa, Thanumalayan Sankaranarayana Pillai, Jacob Devlin, Michael Laskin, Diegode Las Casas, Dasha Valter, Connie Tao, Lorenzo Blanco, Adri `a Puigdom `enech Badia, DavidReitter, Mianna Chen, Jenny Brennan, Clara Rivera, Sergey Brin, Shariq Iqbal, Gabriela Surita,Jane Labanowski, Abhi Rao, Stephanie Winkler, Emilio Parisotto, Yiming Gu, Kate Olszewska,Yujing Zhang, Ravi Addanki, Antoine Miech, Annie Louis, Laurent El Shafey, Denis Teplyashin,Geoff Brown, Elliot Catt, Nithya Attaluri, Jan Balaguer, Jackie Xiang, Pidong Wang, Zoe Ashwood,Anton Briukhov, Albert Webson, Sanjay Ganapathy, Smit Sanghavi, Ajay Kannan, Ming-WeiChang, Axel Stjerngren, Josip Djolonga, Yuting Sun, Ankur Bapna, Matthew Aitchison, PedramPejman, Henryk Michalewski, Tianhe Yu, Cindy Wang, Juliette Love, Junwhan Ahn, DawnBloxwich, Kehang Han, Peter Humphreys, Thibault Sellam, James Bradbury, Varun Godbole, SinaSamangooei, Bogdan Damoc, Alex Kaskasoli, S ́ebastien M. R. Arnold, Vijay Vasudevan, ShubhamAgrawal, Jason Riesa, Dmitry Lepikhin, Richard Tanburn, Srivatsan Srinivasan, Hyeontaek Lim,Sarah Hodkinson, Pranav Shyam, Johan Ferret, Steven Hand, Ankush Garg, Tom Le Paine, JianLi, Yujia Li, Minh Giang, Alexander Neitz, Zaheer Abbas, Sarah York, Machel Reid, ElizabethCole, Aakanksha Chowdhery, Dipanjan Das, Dominika Rogozi ́nska, Vitaly Nikolaev, PabloSprechmann, Zachary Nado, Lukas Zilka, Flavien Prost, Luheng He, Marianne Monteiro, GauravMishra, Chris Welty, Josh Newlan, Dawei Jia, Miltiadis Allamanis, Clara Huiyi Hu, Raoulde Liedekerke, Justin Gilmer, Carl Saroufim, Shruti Rijhwani, Shaobo Hou, Disha Shrivastava,Anirudh Baddepudi, Alex Goldin, Adnan Ozturel, Albin Cassirer, Yunhan Xu, Daniel Sohn,Devendra Sachan, Reinald Kim Amplayo, Craig Swanson, Dessie Petrova, Shashi Narayan, ArthurGuez, Siddhartha Brahma, Jessica Landon, Miteyan Patel, Ruizhe Zhao, Kevin Villela, LuyuWang, Wenhao Jia, Matthew Rahtz, Mai Gim ́enez, Legg Yeung, Hanzhao Lin, James Keeling,Petko Georgiev, Diana Mincu, Boxi Wu, Salem Haykal, Rachel Saputro, Kiran V odrahalli, James7Qin, Zeynep Cankara, Abhanshu Sharma, Nick Fernando, Will Hawkins, Behnam Neyshabur,Solomon Kim, Adrian Hutter, Priyanka Agrawal, Alex Castro-Ros, George van den Driessche, TaoWang, Fan Yang, Shuo yiin Chang, Paul Komarek, Ross McIlroy, Mario Lu ˇci ́c, Guodong Zhang,Wael Farhan, Michael Sharman, Paul Natsev, Paul Michel, Yong Cheng, Yamini Bansal, SiyuanQiao, Kris Cao, Siamak Shakeri, Christina Butterfield, Justin Chung, Paul Kishan Rubenstein,Shivani Agrawal, Arthur Mensch, Kedar Soparkar, Karel Lenc, Timothy Chung, Aedan Pope,Loren Maggiore, Jackie Kay, Priya Jhakra, Shibo Wang, Joshua Maynez, Mary Phuong, TaylorTobin, Andrea Tacchetti, Maja Trebacz, Kevin Robinson, Yash Katariya, Sebastian Riedel, PaigeBailey, Kefan Xiao, Nimesh Ghelani, Lora Aroyo, Ambrose Slone, Neil Houlsby, Xuehan Xiong,Zhen Yang, Elena Gribovskaya, Jonas Adler, Mateo Wirth, Lisa Lee, Music Li, Thais Kagohara,Jay Pavagadhi, Sophie Bridgers, Anna Bortsova, Sanjay Ghemawat, Zafarali Ahmed, Tianqi Liu,Richard Powell, Vijay Bolina, Mariko Iinuma, Polina Zablotskaia, James Besley, Da-Woon Chung,Timothy Dozat, Ramona Comanescu, Xiance Si, Jeremy Greer, Guolong Su, Martin Polacek,Rapha ̈el Lopez Kaufman, Simon Tokumine, Hexiang Hu, Elena Buchatskaya, Yingjie Miao,Mohamed Elhawaty, Aditya Siddhant, Nenad Tomasev, Jinwei Xing, Christina Greer, Helen Miller,Shereen Ashraf, Aurko Roy, Zizhao Zhang, Ada Ma, Angelos Filos, Milos Besta, Rory Blevins,Ted Klimenko, Chih-Kuan Yeh, Soravit Changpinyo, Jiaqi Mu, Oscar Chang, Mantas Pajarskas,Carrie Muir, Vered Cohen, Charline Le Lan, Krishna Haridasan, Amit Marathe, Steven Hansen,Sholto Douglas, Rajkumar Samuel, Mingqiu Wang, Sophia Austin, Chang Lan, Jiepu Jiang, JustinChiu, Jaime Alonso Lorenzo, Lars Lowe Sj ̈osund, S ́ebastien Cevey, Zach Gleicher, Thi Avrahami,Anudhyan Boral, Hansa Srinivasan, Vittorio Selo, Rhys May, Konstantinos Aisopos, L ́eonardHussenot, Livio Baldini Soares, Kate Baumli, Michael B. Chang, Adri `a Recasens, Ben Caine,Alexander Pritzel, Filip Pavetic, Fabio Pardo, Anita Gergely, Justin Frye, Vinay Ramasesh, DanHorgan, Kartikeya Badola, Nora Kassner, Subhrajit Roy, Ethan Dyer, V ́ıctor Campos, Alex Tomala,Yunhao Tang, Dalia El Badawy, Elspeth White, Basil Mustafa, Oran Lang, Abhishek Jindal, SharadVikram, Zhitao Gong, Sergi Caelles, Ross Hemsley, Gregory Thornton, Fangxiaoyu Feng, WojciechStokowiec, Ce Zheng, Phoebe Thacker, C ̧a ̆glar ̈Unl ̈u, Zhishuai Zhang, Mohammad Saleh, JamesSvensson, Max Bileschi, Piyush Patil, Ankesh Anand, Roman Ring, Katerina Tsihlas, Arpi Vezer,Marco Selvi, Toby Shevlane, Mikel Rodriguez, Tom Kwiatkowski, Samira Daruki, Keran Rong,Allan Dafoe, Nicholas FitzGerald, Keren Gu-Lemberg, Mina Khan, Lisa Anne Hendricks, MariePellat, Vladimir Feinberg, James Cobon-Kerr, Tara Sainath, Maribeth Rauh, Sayed Hadi Hashemi,Richard Ives, Yana Hasson, YaGuang Li, Eric Noland, Yuan Cao, Nathan Byrd, Le Hou, QingzeWang, Thibault Sottiaux, Michela Paganini, Jean-Baptiste Lespiau, Alexandre Moufarek, SamerHassan, Kaushik Shivakumar, Joost van Amersfoort, Amol Mandhane, Pratik Joshi, Anirudh Goyal,Matthew Tung, Andrew Brock, Hannah Sheahan, Vedant Misra, Cheng Li, Nemanja Raki ́cevi ́c,Mostafa Dehghani, Fangyu Liu, Sid Mittal, Junhyuk Oh, Seb Noury, Eren Sezener, Fantine Huot,Matthew Lamm, Nicola De Cao, Charlie Chen, Gamaleldin Elsayed, Ed Chi, Mahdis Mahdieh,Ian Tenney, Nan Hua, Ivan Petrychenko, Patrick Kane, Dylan Scandinaro, Rishub Jain, JonathanUesato, Romina Datta, Adam Sadovsky, Oskar Bunyan, Dominik Rabiej, Shimu Wu, John Zhang,Gautam Vasudevan, Edouard Leurent, Mahmoud Alnahlawi, Ionut Georgescu, Nan Wei, IvyZheng, Betty Chan, Pam G Rabinovitch, Piotr Stanczyk, Ye Zhang, David Steiner, Subhajit Naskar,Michael Azzam, Matthew Johnson, Adam Paszke, Chung-Cheng Chiu, Jaume Sanchez Elias,Afroz Mohiuddin, Faizan Muhammad, Jin Miao, Andrew Lee, Nino Vieillard, Sahitya Potluri, JanePark, Elnaz Davoodi, Jiageng Zhang, Jeff Stanway, Drew Garmon, Abhijit Karmarkar, Zhe Dong,Jong Lee, Aviral Kumar, Luowei Zhou, Jonathan Evens, William Isaac, Zhe Chen, Johnson Jia,Anselm Levskaya, Zhenkai Zhu, Chris Gorgolewski, Peter Grabowski, Yu Mao, Alberto Magni,Kaisheng Yao, Javier Snaider, Norman Casagrande, Paul Suganthan, Evan Palmer, GeoffreyIrving, Edward Loper, Manaal Faruqui, Isha Arkatkar, Nanxin Chen, Izhak Shafran, MichaelFink, Alfonso Casta ̃no, Irene Giannoumis, Wooyeol Kim, Mikołaj Rybi ́nski, Ashwin Sreevatsa,Jennifer Prendki, David Soergel, Adrian Goedeckemeyer, Willi Gierke, Mohsen Jafari, MeenuGaba, Jeremy Wiesner, Diana Gage Wright, Yawen Wei, Harsha Vashisht, Yana Kulizhskaya, JayHoover, Maigo Le, Lu Li, Chimezie Iwuanyanwu, Lu Liu, Kevin Ramirez, Andrey Khorlin, AlbertCui, Tian LIN, Marin Georgiev, Marcus Wu, Ricardo Aguilar, Keith Pallo, Abhishek Chakladar,Alena Repina, Xihui Wu, Tom van der Weide, Priya Ponnapalli, Caroline Kaplan, Jiri Simsa,Shuangfeng Li, Olivier Dousse, Fan Yang, Jeff Piper, Nathan Ie, Minnie Lui, Rama Pasumarthi,Nathan Lintz, Anitha Vijayakumar, Lam Nguyen Thiet, Daniel Andor, Pedro Valenzuela, CosminPaduraru, Daiyi Peng, Katherine Lee, Shuyuan Zhang, Somer Greene, Duc Dung Nguyen, PaulaKurylowicz, Sarmishta Velury, Sebastian Krause, Cassidy Hardin, Lucas Dixon, Lili Janzer, KiamChoo, Ziqiang Feng, Biao Zhang, Achintya Singhal, Tejasi Latkar, Mingyang Zhang, Quoc Le,8Elena Allica Abellan, Dayou Du, Dan McKinnon, Natasha Antropova, Tolga Bolukbasi, OrgadKeller, David Reid, Daniel Finchelstein, Maria Abi Raad, Remi Crocker, Peter Hawkins, RobertDadashi, Colin Gaffney, Sid Lall, Ken Franko, Egor Filonov, Anna Bulanova, R ́emi Leblond,Vikas Yadav, Shirley Chung, Harry Askham, Luis C. Cobo, Kelvin Xu, Felix Fischer, Jun Xu,Christina Sorokin, Chris Alberti, Chu-Cheng Lin, Colin Evans, Hao Zhou, Alek Dimitriev, HannahForbes, Dylan Banarse, Zora Tung, Jeremiah Liu, Mark Omernick, Colton Bishop, Chintu Kumar,Rachel Sterneck, Ryan Foley, Rohan Jain, Swaroop Mishra, Jiawei Xia, Taylor Bos, GeoffreyCideron, Ehsan Amid, Francesco Piccinno, Xingyu Wang, Praseem Banzal, Petru Gurita, HilaNoga, Premal Shah, Daniel J. Mankowitz, Alex Polozov, Nate Kushman, Victoria Krakovna,Sasha Brown, MohammadHossein Bateni, Dennis Duan, Vlad Firoiu, Meghana Thotakuri, TomNatan, Anhad Mohananey, Matthieu Geist, Sidharth Mudgal, Sertan Girgin, Hui Li, Jiayu Ye,Ofir Roval, Reiko Tojo, Michael Kwong, James Lee-Thorp, Christopher Yew, Quan Yuan, SumitBagri, Danila Sinopalnikov, Sabela Ramos, John Mellor, Abhishek Sharma, Aliaksei Severyn,Jonathan Lai, Kathy Wu, Heng-Tze Cheng, David Miller, Nicolas Sonnerat, Denis Vnukov, RoryGreig, Jennifer Beattie, Emily Caveness, Libin Bai, Julian Eisenschlos, Alex Korchemniy, TomyTsai, Mimi Jasarevic, Weize Kong, Phuong Dao, Zeyu Zheng, Frederick Liu, Fan Yang, Rui Zhu,Mark Geller, Tian Huey Teh, Jason Sanmiya, Evgeny Gladchenko, Nejc Trdin, Andrei Sozanschi,Daniel Toyama, Evan Rosen, Sasan Tavakkol, Linting Xue, Chen Elkind, Oliver Woodman,John Carpenter, George Papamakarios, Rupert Kemp, Sushant Kafle, Tanya Grunina, RishikaSinha, Alice Talbert, Abhimanyu Goyal, Diane Wu, Denese Owusu-Afriyie, Cosmo Du, ChloeThornton, Jordi Pont-Tuset, Pradyumna Narayana, Jing Li, Sabaer Fatehi, John Wieting, OmarAjmeri, Benigno Uria, Tao Zhu, Yeongil Ko, Laura Knight, Am ́elie H ́eliou, Ning Niu, ShaneGu, Chenxi Pang, Dustin Tran, Yeqing Li, Nir Levine, Ariel Stolovich, Norbert Kalb, RebecaSantamaria-Fernandez, Sonam Goenka, Wenny Yustalim, Robin Strudel, Ali Elqursh, BalajiLakshminarayanan, Charlie Deck, Shyam Upadhyay, Hyo Lee, Mike Dusenberry, Zonglin Li,Xuezhi Wang, Kyle Levin, Raphael Hoffmann, Dan Holtmann-Rice, Olivier Bachem, Summer Yue,Sho Arora, Eric Malmi, Daniil Mirylenka, Qijun Tan, Christy Koh, Soheil Hassas Yeganeh, SiimP ̃oder, Steven Zheng, Francesco Pongetti, Mukarram Tariq, Yanhua Sun, Lucian Ionita, MojtabaSeyedhosseini, Pouya Tafti, Ragha Kotikalapudi, Zhiyu Liu, Anmol Gulati, Jasmine Liu, XinyuYe, Bart Chrzaszcz, Lily Wang, Nikhil Sethi, Tianrun Li, Ben Brown, Shreya Singh, Wei Fan,Aaron Parisi, Joe Stanton, Chenkai Kuang, Vinod Koverkathu, Christopher A. Choquette-Choo,Yunjie Li, TJ Lu, Abe Ittycheriah, Prakash Shroff, Pei Sun, Mani Varadarajan, Sanaz Bahargam,Rob Willoughby, David Gaddy, Ishita Dasgupta, Guillaume Desjardins, Marco Cornero, BronaRobenek, Bhavishya Mittal, Ben Albrecht, Ashish Shenoy, Fedor Moiseev, Henrik Jacobsson,Alireza Ghaffarkhah, Morgane Rivi `ere, Alanna Walton, Cl ́ement Crepy, Alicia Parrish, YuanLiu, Zongwei Zhou, Clement Farabet, Carey Radebaugh, Praveen Srinivasan, Claudia van derSalm, Andreas Fidjeland, Salvatore Scellato, Eri Latorre-Chimoto, Hanna Klimczak-Pluci ́nska,David Bridson, Dario de Cesare, Tom Hudson, Piermaria Mendolicchio, Lexi Walker, AlexMorris, Ivo Penchev, Matthew Mauger, Alexey Guseynov, Alison Reid, Seth Odoom, Lucia Loher,Victor Cotruta, Madhavi Yenugula, Dominik Grewe, Anastasia Petrushkina, Tom Duerig, AntonioSanchez, Steve Yadlowsky, Amy Shen, Amir Globerson, Adam Kurzrok, Lynette Webb, Sahil Dua,Dong Li, Preethi Lahoti, Surya Bhupatiraju, Dan Hurt, Haroon Qureshi, Ananth Agarwal, TomerShani, Matan Eyal, Anuj Khare, Shreyas Rammohan Belle, Lei Wang, Chetan Tekur, Mihir SanjayKale, Jinliang Wei, Ruoxin Sang, Brennan Saeta, Tyler Liechty, Yi Sun, Yao Zhao, StephanLee, Pandu Nayak, Doug Fritz, Manish Reddy Vuyyuru, John Aslanides, Nidhi Vyas, MartinWicke, Xiao Ma, Taylan Bilal, Evgenii Eltyshev, Daniel Balle, Nina Martin, Hardie Cate, JamesManyika, Keyvan Amiri, Yelin Kim, Xi Xiong, Kai Kang, Florian Luisier, Nilesh Tripuraneni,David Madras, Mandy Guo, Austin Waters, Oliver Wang, Joshua Ainslie, Jason Baldridge, HanZhang, Garima Pruthi, Jakob Bauer, Feng Yang, Riham Mansour, Jason Gelman, Yang Xu, GeorgePolovets, Ji Liu, Honglong Cai, Warren Chen, XiangHai Sheng, Emily Xue, Sherjil Ozair, AdamsYu, Christof Angermueller, Xiaowei Li, Weiren Wang, Julia Wiesinger, Emmanouil Koukoumidis,Yuan Tian, Anand Iyer, Madhu Gurumurthy, Mark Goldenson, Parashar Shah, MK Blake, HongkunYu, Anthony Urbanowicz, Jennimaria Palomaki, Chrisantha Fernando, Kevin Brooks, Ken Durden,Harsh Mehta, Nikola Momchev, Elahe Rahimtoroghi, Maria Georgaki, Amit Raul, Sebastian Ruder,Morgan Redshaw, Jinhyuk Lee, Komal Jalan, Dinghua Li, Ginger Perng, Blake Hechtman, ParkerSchuh, Milad Nasr, Mia Chen, Kieran Milan, Vladimir Mikulik, Trevor Strohman, Juliana Franco,Tim Green, Demis Hassabis, Koray Kavukcuoglu, Jeffrey Dean, and Oriol Vinyals. Gemini: Afamily of highly capable multimodal models, 2023.9Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang,Lei Zhao, Xixuan Song, et al. Cogvlm: Visual expert for pretrained language models. arXivpreprint arXiv:2311.03079 , 2023.Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang,Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, and Jie Tang.Cogvlm: Visual expert for pretrained language models, 2024.Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, DennyZhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances inneural information processing systems , 35:24824–24837, 2022.Catherine Wong, Kevin M Ellis, Joshua Tenenbaum, and Jacob Andreas. Leveraging language tolearn program abstractions and search heuristics. In International conference on machine learning ,pp. 11193–11204. PMLR, 2021.Xiangyu Wu, Yang Yang, Shengdong Xu, Yifeng Wu, Qingguo Chen, and Jianfeng Lu. Solutionfor smart-101 challenge of iccv multi-modal algorithmic reasoning task 2023. arXiv preprintarXiv:2310.06440 , 2023.Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. Raven: A dataset for relationaland analogical visual reasoning. In Proceedings of the IEEE/CVF conference on computer visionand pattern recognition , pp. 5317–5327, 2019.Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodalchain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923 , 2023.Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancingvision-language understanding with advanced large language models, 2023.A Related WorkA.1 Large Multi-modal ModelsRecent advancements in foundational multimodal models have marked a significant stride towardsdeveloping generalist AI systems capable of understanding and integrating information acrossdifferent modalities to solve tasks without the need for task-specific fine-tuning. Among these modelsare closed source models such as Gemini 1.5 Flash (Team et al., 2023), GPT-4V (OpenAI et al.,2024), and open source models as LLaV A-1.5 (Liu et al., 2023), Mini-GPT4 (Zhu et al., 2023),InstructBLIP (Dai et al., 2023) and CogVLM (Wang et al., 2024). The versatility and multimodalunderstanding exhibited by these foundational multimodal models have positioned them as primecandidates for applications such as AI software engineers or programming tutors for children. Ourwork evaluates the efficacy of these popular models on image/text-to-code tasks, measuring theirpotential in vision/programming context.A.2 Probabilistic Program InductionRecent work in Bayesian cognitive science has modeled various aspects of cognition and learningas probabilistic program induction (Lake et al., 2015; Lake & Piantadosi, 2020; Rule et al., 2020;Ellis et al., 2023; Wong et al., 2021; Grand et al., 2023). This has involved both modeling humancognition as program induction as well as designing machine learning algorithms that can generateprograms for various tasks, including the kind of turtle geometry task we study here. Ellis et al. (2023)developed the DreamCoder algorithm which can learn to induce programs by using self-supervisionto incrementally build up a library of programs and train a neural network to search to find the bestprogram for a given task. They created a dataset of 160 turtle programming tasks. In contrast to ourapproach, where we assess the performance of out-of-the-box LMMs, DreamCoder is trained on atraining set of images (i.e., half of the dataset). However, it is interesting that the algorithm is trainedin an unsupervised fashion; that is, DreamCoder never receives the code used to generate the imagesand learns that from experience. Wong et al. (2021) extended this work by developing an algorithm(LAPS) that can induce programs given both the task and linguistic annotations for the task. They10used a dataset of 311 turtle graphics with greater complexity than the original DreamCoder dataset.While their dataset includes linguistic annotations, their dataset does not include tweak tasks likein TurtleBench. Additionally, their tasks often include arbitrary aspects (for example, a gap withunspecified distance between two shapes) that makes evaluation hard; in our tasks, the positionalrelationships between shapes should be easy to infer exactly and hence we can evaluate models bycomparing exactly with ground truth shapes. Moreover, neither of these datasets have been framedas a benchmark for visual program induction and have not been considered for evaluating LMMs.Perhaps the approach closest to our work is by Grand et al. (2023), who combined LLMs with asymbolic program induction algorithm and evaluated the performance of their model (LILO) on theturtle geometry task using the aforementioned dataset. Averaged over several runs, the performance ofthe best versions of these approaches on the turtle geometry task is as follows: 43% for DreamCoder,82% for LAPS, 49% for LILO, and 32% for a LLM solver. These results seem to suggest thatprobabilistic programming approaches (such as LAPS) can greatly outperform LMMs on visualprogramming tasks. We note that the performance of the LLM solver (32%) is comparable to theperformance of GPT-4V on our text-only input (37%; see Table 4). Future work could assess theperformance of probabilistic program induction methods like LAPS on TurtleBench.A.3 Mutimodal Algorithmic ReasoningThe existing literature features a range of studies that evaluate these models using naturalistic images(Jiang et al., 2022; Johnson et al., 2017; Antol et al., 2015), yet humans naturally are able to reasonover abstract shapes (Chollet, 2019; Zhang et al., 2019; Spelke & Kinzler, 2007) and also many usecases of LMMs involve understanding abstract shapes and sketches (Forbus et al., 2011; Nie et al.,2020). Moreover, unlike naturalistic images (Marjieh et al., 2022; Sucholutsky & Griffiths, 2024),the relationship between language and abstract shapes is highly intertwined as minimal alterationsin language can lead to different visual perceptions in humans (Dillon, 2023; Lin & Dillon, 2023).The Multimodal Algorithmic Reasoning (MAR) task tests multi-modal models on fundamental skillsunderstandable by children, focusing on interpreting visual and linguistic information to answerquestions. Perhaps the most relevant work to ours is the paper by Cherian et al. (2023) in which theyintroduced a dataset with 101 multiple-choice questions inspired by the Math Kangaroo contest for 6to 8-year-olds, involving images and texts that the model must analyze together. The task has beenshown to be challenging for multimodal deep neural networks, and the following trials to solve theproblem have gained less than 25% accuracy on the private test set Wu et al. (2023). Our proposedbenchmark pushes the evaluation of LMMs forward as TurtleBench includes abstract geometricshapes, and the task only relies on knowledge and reasoning over a set of simple functions in thePython Turtle library. The open-ended nature of our benchmark and its flexibility over differentmodalities makes evaluating different aspects of vision and language algorithmic reasoning in themodels more reliable.B Additional AnalysesB.1 Models fail to generalizeGiven that these models have been extensively trained on vast datasets sourced from the internet,there’s an underlying uncertainty regarding the source of their performance—albeit poor—on theTurtleBench tasks. Specifically, it remains unclear whether this performance is the result of themodels’ ability to memorize aspects of our tasks, rather than genuinely understanding and solvingthem based on their programming and reasoning capabilities. To address this issue, our next stepis to evaluate the true generalization ability of these models. By doing so, we aim to distinguishbetween superficial learning, potentially influenced by memorization, and genuine comprehensionand problem-solving skills. To measure the generalizability of the model’s performance, we definean arbitrary set of commands based on the turtle module in Python. In other words, we developed aclass called Rabbit that inherits the Turtle class from the turtle module. Although the functions ofthe Rabbit class are functionally identical to those in the original turtle module, they are nominallydistinct. This differentiation allows us to evaluate the models’ ability to apply their knowledge tounfamiliar yet equivalent command sets. The definition of the Rabbit class in Python is providedin Appendix C.3.2. We perform a zero-shot CoT prompting to elicit the code using the new set of11GPT-4VTurtle CoTGPT-4VRabbit CoTGeminiTurtle CoTGeminiRabbit CoTScratch Code GenerationImage only Input 19% 6% 8.46% 3%Tweak Code GenerationImage + Text 12% 2% 7.7% 1%Table 2: Performance of GPT-4V and Gemini 1.5 Flash on generalization tasks, in these tasks, wedefined Rabbit, a new set of functions practically equivalent to but nominally different from the onesin Python Turtle. The performance in Rabbit drastically drops, showing poor generalization abilitiesin both models.Python Turtle Output Any OutputScratch Code GenerationImage only Input 19.23% 21.6%Tweak Code GenerationImage + Text 12.3% 15.1%Table 3: Performance of (CoT) prompting with GPT-4V on tasks involving code generation for simplegeometric shapes in any programming language of the model’s choice reveals that models strugglesignificantly, even in their preferred programming language.commands. In the context window, we provide a verbal definition of each function in the Rabbit class.The results of comparing the models’ performance using the Rabbit class versus the standard PythonTurtle module are presented in Table 2. We observe that, although both models were capable ofgenerating executable pieces of code with the new class, there is a huge decline in their performancerelative to their performance with the conventional Python Turtle module. This finding suggests thatthe visual reasoning in these models is not robust to syntax changes, and it is likely that they rely ontraining memorization rather than pure reasoning.B.2 Assessing Model Proficiency Across Programming LanguagesThe initial suspicion might be that the models struggle with tasks in turtle geometry due to a lackof exposure to specific programming syntax during pretraining. However, to investigate whetherthe challenge lies not in syntax familiarity but in understanding visual input and translating thisunderstanding into effective programming, we modify our approach with GPT-4V. We choose GPT-4V as it is our best-performing model in the main task. We allow it to generate code using any library,language, or similar tools it deems appropriate, such as Matplotlib, TikZ, etc., without restricting it tothe Python Turtle library. The prompt for this subset of tasks is presented in Appendix C.2.4. Wemanually evaluate the GPT-4V output for this task. Despite this freedom, we observe no significantimprovement in performance. The model chooses Matplotlib for 50% of the tasks and offerspseudocode for 2%, with the remainder reverting to Python Turtle, even though we do not specifyPython Turtle in the prompts. Notably, it avoids using TikZ, despite its mention in the prompt andproven capabilities in prior work to produce TikZ code (Bubeck et al., 2023; Belouadi et al., 2023).This outcome underscores a deeper issue than syntax familiarity: the models’ fundamental challengeis accurately interpreting visual input and applying this understanding to generate correspondingprogramming code.B.3 Limited Visual Understanding in LMMs: Insights from Scratch TasksOne of the questions regarding LMMs’ abilities in visual abstraction and understanding tasks is theextent the incorporation of the visual component has enhanced their abilities in reasoning (Mitchellet al., 2023). In resonance with what Mitchell et al. (2023) found, here we also found that the visioncomponent contributes poorly to fostering the models’ visual reasoning abilities, at least in the domainof TurtleBench. We explored this in the context of tweak tasks in Section 4.2. Here, we explore it in12GPT-4V basic Gemini basic GPT-4V CoT Gemini CoTScratch Code GenerationImage only Input 26% 7.7% 29% 8.46%Text only Input 37% 25.1% 38% 18.51%Image and Text Input 38% 22.2% 40% 22.22%Table 4: Performance of GPT-4V and Gemini 1.5 Flash on TurtleBench, for comparing visual vs.text input on Scratch Code Generation Tasksthe context of scratch tasks. Specifically, we annotated 41 scratch code generation tasks and providedclear descriptions for each in plain text. The remaining shapes were too complex to describe withoutambiguity in plain text. Then, we compared the three modes of presenting the task, image only, textonly, and the blend of an image and its description in text. Interestingly, for both GPT-4V and Gemini1.5 Flash, the model performed worse when the task was presented only in the image, compared tothe other modes. This phenomenon is counterintuitive as for humans, perceiving the images shouldbe easier than first reading a description, imagining it, and then writing a code for it. Additionally, aspresented in Table 4 the blend of image and text only slightly improved GPT-4V’s performance (from38% to 40%). These two findings show that there is still much room for improvement especially inthe visual components of LMMs.B.4 Reasons of FailureWe manually investigated GPT-4V’s failures in solving Scratch tasks in a single run to find the majorcauses of failure. We find four major causes: 1) Shape identification error: where the model fails tocompletely capture existent shapes in the input image, for instance, if it confuses a semicircle with acircle or assigns non-existent shape attributes to the input image. 2) Counting error: where the modelfails to count adequately, (e.g., three triangles counted as four), 3) Orientation error: where the modelfails to correctly find the relationships between different components of a shape (e.g., semicircle ontop of a square vs. at its bottom), and 4) Implementation error: where the model’s generated codedoes not follow the pre-planned pseudocode.We manually investigated GPT-4V’s failure output in the scratch code generation task and the resultsare provided in Table 5, where the failures are not mutually exclusive as a model can perform acombination of errors in each task. Furthermore, while the first three errors are according to thevision component in these models, we see that 64% of the failures are according to these causes, andin 36% of failure cases, there are no apparent vision errors.Cause Description PercentageShape identification error Shape Identification Error: The model fails tocompletely capture existent shapes in the inputimage, confusing or misattributing shapes25%Counting error Counting Error: The model inadequately countsthe elements.35%Orientation error Orientation Error: The model fails to correctlydetermine the spatial relationships between differ-ent components of a shape21%Implementation error Implementation Error: The model’s generatedcode does not adhere to the pre-planned pseu-docode, resulting in incorrect implementation.45%Table 5: Major Causes of GPT-4V’s Failures in Scratch Tasks; note that the failures are not mutuallyexclusive, as a model can perform a combination of errors in each task13C Experiment SetupC.1 Automatic Evaluation of Code OutputEvaluation of the output code by an AI model is performed automatically. First, the output of the AImodel is processed to extract the code piece of output. Then, this piece of code is run in a sandbox,and the shape produced by the code is stored. An illustration of this pipeline is provided in Figure9. Finally, using the OpenCV module in Python, the binary versions of the correct shape and theproduced shape are compared using an adjusted measure of bitwise similarity where we first usethe bounding box technique with OpenCV to find the exact location of the shape and then calculatesimilarity with the formula:|Ba∩Bm||Ba∪Bm|whereBaandBmrepresent black pixels in the input and LMM output, respectively. This metricmeasures the ratio of co-occurring black pixels to the total black pixels Here, we utilize a heuristicapproach in labeling the correctness of the model’s output. If the bitwise similarity between outputand ground truth is higher than 95% the models’ output is labeled as correct and incorrect otherwise.To make sure that our heuristic in labeling the correctness of generated shapes is reliable, we manuallyannotated 2000 pairs of input and output images and we found that only three instances of pairs werelabeled incorrectly (two of them false negative and the other false positive.), leading to an error rateof 0.15% which shows the high level of reliability in the heuristic we used.C.2 PromptingC.2.1 Basic PromptIn each task, the user provides an image of an abstract geometric shape or patternand an instruction, you need to generate a code in Python Turtle that follows theuser's request.Figure 2: basic prompt used in our experimentsC.2.2 v-CoT PromptYou are Turtle Geometrician, you are an expert in reasoning about images andgenerating code in Python Turtle using images You need to follow the steps belowbefore generating the answer:(1) Describe the relevant information from the image needed to answer the question.List all relevant artifacts from the image.(2) Use the information described in (1) to reason about the problem by workingstep by step to arrive at the final piece of code.(3) Generate the final code. NEVER use "pensize" function in your code.Figure 3: v-CoT prompt used in our experimentsC.2.3 A Complete ExampleHere we provide an instance of a complete prompt we used for a tweak code generation task withCoT prompting:C.2.4 Arbitrary OutputHere we provide the CoT prompt we used for the model to provide a code in any arbitrary languageor library that creates the desired shape.14System: You are Turtle Geometrician, you are an expert in reasoning aboutimages and generating code in Python Turtle using images. You need tofollow the steps below before generating the answer:(1) Describe the relevant information from the image needed to answer thequestion. List all relevant artifacts from the image.(2) Use the information described in (1) to reason about the problem byworking step by step to arrive at the final piece of code.(3) Generate the final code. NEVER use "pensize" function in your code.Text: Provide a code in Python turtle that in the given shape inserts acircle of an equal size to the smaller circle on the left of the biggercircle to make a vertically symmetrical shape.Complete the code:import turtlefrom math import *t = turtle.Turtle()large_circle_radius=100small_circle_radius=50...Figure 4: An example of a complete prompt for a tweak code generation task with using v-CoTprompting.You are an expert in reasoning about images and generating code in any language youprefer. You need to follow the steps below before generating the code that answersthe user's request:(1) Describe the relevant information from the image needed to answer the question.List all relevant information from the image.(2) Use the information described in (1) to reason about the problem by workingstep by step to arrive at the final piece of code.(3) Generate the final code. Your code can be in any visual language or library,such as Matplotlib, TikZ, etc.Figure 5: The system prompt we used for the results discussed in Section B.2C.3 RabbitC.3.1 Prompt usedThe prompt we used for this experiment is provided in Figure 6.C.3.2 Definition of the classThe rabbit class is an arbitrary class that we defined based on Turtle class in the Python Turtle Module.This minimal set of functions includes all functions that a programmer or a model needs to create allof the tasks in TurtleBench. We defined this new set of functions to measure how GPT-4V is able togeneralize its abilities in generating code in Python Turtle to a similar but minimally different set offunctions.import turtleclass Rabbit ( turtle . Turtle ):def __init__ ( self ):super (). __init__ ()self . setheading (90)self . pensize (5)self . hideturtle ()def aa(self , length ):15Suppose that I have a library named Rabbit in Python. Rabbit library has an objectconstructor named Rabbit which is an object that moves on the screen and drawslines. It only has these functions:aa(length): goes front or back (if the length is negative) and draws a line withthe length of pixels.bb(degree): The rabbit turns its head right or left (if degree is negative).cc(radius, degree): creates an arc with the given radius for the given degree. Ifdegree=360 it creates a circle. The center of the circle is in the left of therabbit.pp(vanish): if vanish=True vanishes Rabbit object so if it moves does not drawanything, and if vanish=False, it appears the Rabbit object so if it moves draws onthe screen.you call the functions on an object of Rabbit, such as r.aa(length) where r is anobject of Rabbit. When r is created, it faces north (up) on the screen and it doesnot vanish, so it is in drawing mode.You are Rabbit Geometrician, you are an expert in reasoning about images andgenerating code in Python Rabbit using images. You need to follow the steps belowbefore generating the answer:(1) Describe the relevant information from the image needed to answer the question.List all relevant artifacts from the image.(2) Use the information described in (1) to reason about the problem by workingstep by step to arrive at the final piece of code.(3) Generate the final code. Only use commands in the Rabbit class.Figure 6: v-CoT prompt used for generalization experiments discussed in Section B.1self . forward ( length )def bb(self , degree ):self . right ( degree )def cc(self , radius , degree ):self . circle ( radius , degree )def pp(self , vanish ):if vanish :self . penup ()else :self . pendown ()C.4 Types of Tweak TasksTurtleBench includes a total of 130tweak tasks. We provide a categorization for the tweaks asfollows: There are five major types of tweaks in TurtleBench;• Deletion: Removing a specified part of a shape• Insertion: Adding a specific shape to the pattern as directed• Rotation: Rotating the entire shape• Reflection: Reflecting the entire shape or parts of it across specified lines• Generalization: maintaining a pattern in the image constant while varying its parameters.An illustration of instances of each type is provided in Figure 7. These types are not mutuallyexclusive as 10% of the tasks involve a combination of two types (e.g., removing one side of a squareand inserting a semicircle instead). To successfully complete deletion and insertion tweaks, a modelneeds to demonstrate a nuanced understanding of the details in the image and program the resultingshape accordingly. In contrast, rotation tasks can be relatively easy as most of them can be solvedonly using a simple function in Turtle that can rotate the starting heading of the turtle which results incomplete rotation in the entire shape (i.e., turtle.right(angle) ).16Figure 7: Types of tweaks and their share in TurtleBenchC.5 Evaluating Image Complexity Using Contour CountsAs our result suggests that the vision component is contributing poorly to the models’ performance,to gain a better understanding of the visual obstacles for the models to solve the tasks, we defined ameasure as a proxy for the complexity of shapes. For each provided image, we calculated the numberof contours in each shape. In OpenCV , a contour is a curve joining all the continuous points (alongthe boundary), having the same color or intensity. Contours are a useful tool for shape analysis andobject detection and recognition. The high number of contours in an image hints that there are manyshapes being involved and interleaving with each other, which makes understanding and extractingunderlying patterns challenging.We calculated the number of contours in each shape by utilizing the corresponding function inOpenCV , and defined three arbitrary levels of complexity in the images, where the images whichinclude only one contour (e.g., the basic square in Figure 1) are at level 1 (simple), images includingless than 6 contours and more than 1 are at level 2 (medium) (e.g, the base shape of insertionexample in Figure 7) and the images in which there are more than 6 contours (e.g., the base shapein generalization example in Figure 7) are at level 3 of the complexity (Complex). In Turtle, theproportions of complexity levels 1, 2, and 3 are 25%, 40%, and 35%, respectively.We investigate how models perform over tweak tasks. There are 9 different ways that a pair of inputand output image can combine. As shown in Table 6, the majority of tweak tasks (74) have samelevels of complexity for the input and output image.To examine how complexity of input and output shapes impact the results, we categorize tweak tasksin the 9 different categories and count the number of tasks that are ever solved by GPT-4V underany prompting method in code generation and code edit tasks during 6 different runs. As shown inTable 6, the more complex the input shape is, the more challenging solving the task is.Output ComplexitySimple Medium ComplexInput ComplexitySimple 35% (7/20) 30% (3/10) 25% (1/4)Medium 40% (2/5) 18% (6/33) 7% (1/13)Complex 20% (1/5) 11% (2/19) 19% (4/21)Table 6: The number of tweak tasks under each category and the percentage of those tasks ever solvedby GPT-4V in different settings.C.6 Task Instances17def draw_polygon(sides, length):outer_turn= 360 / sides for _ in range(sides):t.forward(length)t.right(outer_turn)t.setheading(60)draw_polygon(3, small_triangle_side)t.left(120)draw_polygon(3, large_triangle_side)Scratch TasksImage InputText InputImage + Text InputTweak TasksImage + Text InputCode GenerationCode GenerationCode EditImage + Text InputImage + Image InputTextual Tweak Instruction: Insert two triangles to the right of the existing shape to make it vertically symmetrical.Desired Tweaked ShapeGround TruthCodeDescription: Two adjacent equilateral triangles sharing a horizontal common base both pointing upward, where the left triangle is larger than the right oneFigure 8: An illustration of different modes of a single task in TurtleBench along with the imagesgenerated by code from the outputs of GPT-4V and Gemini 1.5 Flash. More examples are providedin Appendix Figure 10The image shows a large hexagon composed of seven smaller hexagons. Each side of the smaller hexagons is equal in length. The large hexagon is oriented such that one of its flat sides is at the top and bottom, rather than a vertex pointing up or down.To create this shape using Python Turtle, we need to:1. Define a function to draw a hexagon.2. Draw the central hexagon.3. Move the turtle to the correct positions to draw the surrounding six hexagons without drawing any unnecessary lines.Here's how we can complete the code:Write Python Turtle code that generates the exact same shape.def draw_polygon(sides, length):outer_turn = 360 / sides for _ in range(sides):t.forward(length)t.right(outer_turn)for i in range(6):draw_polygon(6, small_hexagon_side)t.right(60)import turtle ...SandboxFigure 9: An illustration of our evaluation pipeline18def draw_polygon(sides, length):outer_turn= 360 / sides for _ in range(sides):t.forward(length)t.right(outer_turn)draw_polygon(4, square_side)t.left(-135)t.circle(square_side/2*sqrt(2),-360)Scratch TasksImage InputText InputImage + Text InputTweak TasksImage + Text InputCode GenerationCode GenerationCode EditImage + Text InputImage + Image InputTextual Tweak Instruction:Write a code in Python turtle that creates the given shape without the quarter circles on the left and the right of the square.Desired Tweaked ShapeGround TruthCodeDescription: A square with two horizontal and vertical sides, inscribed in a circle.def draw_polygon(sides, length):outer_turn= 360 / sides for _ in range(sides):t.forward(length)t.right(outer_turn)x = 4for iin range(x):draw_polygon(4, small_square_side)t.right(360 / x)Scratch TasksImage InputText InputImage + Text InputTweak TasksImage + Text InputCode GenerationCode GenerationCode EditImage + Text InputImage + Image InputTextual Tweak Instruction:Write a code in Python Turtle that creates the given shape without the small square on the top right.Desired Tweaked ShapeGround TruthCodeDescription: Four adjacent squares of an equal size that form a larger square.Figure 10: Two examples of tasks in TurtleBench across different modalities19 |
Twzrpa6V2o | InfiMM-WebMath-40B: Advancing MultimodalPre-Training for Enhanced Mathematical ReasoningXiaotian Han1∗Yiren Jian1∗Xuefeng Hu1∗Haogeng Liu1,2,∗Yiqi Wang1∗Qihang Fan1,2Yuang Ai1,2Huaibo Huang2Ran He2Zhenheng Yang1Quanzeng You1†1ByteDance, Inc2Chinese Academy of SciencesAbstractPre-training on large, high-quality datasets is essential for improving the reasoningabilities of Large Language Models (LLMs), particularly in specialized fields likemathematics. However, the field of Multimodal LLMs (MLLMs) lacks a compre-hensive, open-source dataset for mathematical reasoning. To fill this gap, we presentInfiMM-WebMath-40B, a high-quality dataset of interleaved image-text documents.It consists of 24 million web pages, 85 million image URLs, and 40 billion texttokens, all carefully extracted and filtered from CommonCrawl. We outline our datacollection and processing pipeline in detail. Models trained on InfiMM-WebMath-40B demonstrate strong performance in both text-only and multimodal settings, set-ting a new state-of-the-art on multimodal math benchmarks such as MathVerse andWe-Math. We release our data at https://huggingface.co/datasets/Infi-MM/InfiMM-WebMath-40B.1 IntroductionRecent advancements in Large Language Models (LLMs)[ 1,2,12] have improved their ability tohandle complex reasoning and multi-step mathematical problems through techniques like Chain-of-Thought (CoT) prompting[ 54]. These models excel from basic GSM8K word problems [ 10]to high school-level MATH tasks [ 19]. Specialized smaller LLMs like DeepSeekMath-7B [ 49]and InternLM-Math [ 58] have also made notable progress in mathematics, demonstrating strongperformance in focused domains.Although most mathematical knowledge is text-based, visual elements such as figures and diagramsare essential for understanding abstract concepts. To integrate these visual components, MultimodalLLMs (MLLMs) like G-LLaV A [ 14], Math-LLaV A [ 50], and MA VIS [ 65] have been developed.These models enhance reasoning by incorporating visual inputs through embeddings from pre-trainedmodels like CLIP [ 47] and SigLIP [ 61], and use multimodal instruction datasets such as Geo170k [ 7],MathV360K [51], and MA VIS-Instruct [66].However, introducing new knowledge during instruction fine-tuning is challenging [ 69], often leadingto hallucinations [ 16], particularly due to limitations in dataset scale and quality. While largecorporations benefit from proprietary datasets, the open-source community lacks comprehensivepre-training datasets for mathematical reasoning that integrate text and visual data.To address this gap, we introduce InfiMM-WebMath-40B , the first large-scale, publicly availablemultimodal mathematics pre-training dataset. Comprising 24 million web documents, 85 millionimage URLs, and 40 billion text tokens, it provides a valuable resource for training Multimodal∗Equal contributions.†Corresponding author.The 4th Workshop on Mathematical Reasoning and AI at NeurIPS’24LLMs (MLLMs). We validate the effectiveness of InfiMM-WebMath-40B through experiments onbenchmarks like MathVerse [ 64] and WeMath [ 46], showing improved performance in multimodalmathematical reasoning.Our contributions include: (1) We introduce InfiMM-WebMath-40B, the first large-scale, multimodalmath dataset for pre-training, filling a critical gap in open-source research. (2) We provide a detailedpreprocessing pipeline for filtering relevant content from CommonCrawl to ensure high-quality,relevant data. (3) We demonstrate the impact of InfiMM-WebMath-40B through experiments, whereour models excel on multimodal mathematical benchmarks, showcasing the dataset’s potential foradvancing MLLM research.2 Related WorkLLMs have demonstrated potential in mathematical reasoning across various studies. To evaluate andenhance their capabilities, several math-specific benchmarks [ 11,20,18,4,40,32,67] and trainingdatasets, both proprietary [45, 31, 27] and open-source [19, 55, 41, 53, 60], have been introduced.The rise of Multimodal LLMs (MLLMs) has sparked interest in enhancing their multimodal reasoningcapabilities. To support this, various evaluation benchmarks [ 62,35,24,56,38,57,34,64,46] andtraining datasets [ 7,15,51,68,26,3,30] have been developed to assess and enhance MLLMs’mathematical reasoning skills.3 Dataset ConstructionIn this section, we detail the methodology used to construct InfiMM-WebMath-40B, a large-scalemultimodal math dataset integrating interleaved text and image data, following approaches usedin prior works [ 44,29,43]. We enhance the methodology used in the OBELICS dataset [ 26] byincorporating both text and corresponding image URLs.3.1 Text-only Data Curation PipelineFigure 1: InfiMM-WebMath-40B data curation pipeline.Text Extraction and Language Filtering We chose Trafilatura, a Python library widely used toextract text from web pages. While effective for text extraction, Trafilatura omits mathematical sym-bols and equations. Therefore, the subsequent section will outline our development of a specializedextraction tool tailored for math-related content.Following DeepSeekMath [ 49], we focus on retaining only Chinese and English content when con-structing our dataset. To achieve this, we apply language filtering to the CommonCrawl repositorieswith approximately 122 billion webpages, as shown in Figure 1. For language detection, we employa fastText language identification model [ 22]. This language filtering process significantly reducesthe dataset size, lowering the number of pages from 122 billion to 57.2 billion.Mathematical Content Extraction Extracting mathematical content from HTML presents uniquechallenges, as standard tools often fail to accurately capture LaTeX equations and image URLs. After2evaluating various tools, we chose Resiliparse as the foundation for our development. Figure 2 showsa comparison of extraction results between Trafilatura and our enhanced version of Resiliparse.High-Recall Filtering for Mathematical Content Inspired by DeepSeekMath [ 49], we trained afastText classifier to filter mathematical content, using half a million positive samples from OpenWeb-Math [ 42] and negative samples from our earlier extracted content. This filtering reduced the datasetfrom 57.2 billion to 9.5 billion samples, prioritizing recall with a probability threshold set at 0.4.Deduplication We applied MinHash [ 6] for content deduplication, following FineWeb’s method-ology [ 43]. Deduplication was performed within each snapshot and neighboring snapshot pairs,reducing the dataset by 43%, from 9.5 billion to 5.4 billion samples. URL deduplication furtherreduced the sample size to 3.9 billion.Rule-based Filtering We applied a few essential filtering rules, such as removing "lorem ipsum"content, applying a punctuation ratio rule for English, filtering NSFW content, and excludingdocuments with Unicode errors. This step eliminated 3% of the samples, resulting in 3.8B samples.High-Precision Filtering for Mathematical Content To enhance the accuracy of our labelingprocess, we utilized the LLaMA3-70B-Instruct model [ 12], using prompt formats inspired by theFineWeb-Edu dataset [ 33]. This approach allowed us to score the mathematical quality of eachsample on a scale from 0 to 10. The full prompt is displayed in Table 3 of Appendix.From the data remaining after rule-based filtering, we randomly sampled approximately one millionentries. We assigned math quality scores and applied a threshold of 6 to select 640,000 positivesamples for training our updated fastText classifier, alongside an equivalent number of 640,000randomly selected negative samples from prior filtering steps. These positive and negative sampleswere combined to train the new fastText classifier.3During fastText training, we applied data cleaning rules to optimize the model’s performance formathematical content (see Appendix D for details). For evaluation, we used all samples in theGeometry3K [ 35] benchmark as positive examples of mathematical content. With these refinedpreprocessing techniques, fastText’s accuracy improved from 48.74% to 72.15Text-Only Filtering Evaluation We pretrained a deepseek-coder-1.3b-base model on the filteredtext dataset and evaluated its performance on GSM8K [ 10] and the MMLU (STEM) [ 18]. Our modeloutperformed both OpenWebMath and DeepSeekMath, highlighting the quality of our dataset (resultsare shown in Appendix E).3.2 Multimodal Data ConstructionAfter filtering, 24 million documents with 85 million image URLs remained. We extracted imageURLs from each webpage and paired them with the corresponding text, following the OBELICSformat [ 26]. Deduplication reduced the image URLs to 23 million. Further filtering based onkeyword analysis (e.g., “log”, “banner”, “avatar”, “icon”) left us with 22 million URLs, from whichwe successfully downloaded 14 million unique images. These images were reintegrated into thedocuments, resulting in 24 million records with a total of 28 million images.4 ExperimentsModel Architectures We employ the SigLip model siglip-so400m-patch14-384 to extractvisual features, a 3-layer Perceiver Resampler [ 21] with 64 latents to reduce the number of token-s/features per image to 64. These visual token/feature embeddings are then concatenated with textembeddings before being fed into the LLMs (DeepSeek-Coder [ 17]:deepseek-coder-1.3b-baseanddeepseek-coder-7b-v1.5 ).Training Details Our training data and processes involve a three-stage approach: modality align-ment, continued pre-training using InfiMM-WebMath-40B, and instruction fine-tuning. Detailedtraining procedures are provided in the Appendix F. We refer to our resulting model as InfiMM-Math.3We also employ an LLM-based classifier for high-precision filtering, Appendix C shows the comparison.3Table 1: Evaluation of models on MathVerse.ModelBaseLLMAllTextDominantTextLiteVisionIntenseVisionDominantVisionOnlyHuman - 64.9 71.2 70.9 61.4 68.3 66.7Proprietary ModelsGPT-4V N/A 39.4 54.7 41.4 34.9 34.4 31.6Gemini-Pro N/A 23.5 26.3 23.5 23.0 22.3 22.2Open-sourced ModelsSPHINX-Plus LLaMA2-13B 14.0 16.3 12.8 12.9 14.7 13.2G-LLaV A LLaMA2-7B 15.7 22.2 20.4 16.5 12.7 6.6InternLM-XC2 InternLM2-7B 16.5 22.3 17.0 15.7 16.4 11.0Math-LLaV A Vicuna-13B 19.0 21.2 19.8 20.2 17.6 16.4ShareGPT4V Vicuna-13B 17.4 21.8 20.6 18.6 16.2 9.7LLaV A-NeXT LLaMA3-8B 19.3 24.9 20.9 20.8 16.1 13.8LLaV A-NeXT Qwen-1.5-110B 24.5 31.7 24.1 24.0 22.1 20.7MA VIS Mammoth2-7B 27.5 41.4 29.1 27.4 24.9 14.6Our ModelsInfiMM-Math DS-Coder-1.3B 26.9 37.1 30.2 29.2 24.4 13.7InfiMM-Math DS-Coder-1.5-7B 34.5 46.7 32.4 38.1 32.4 15.8Evaluations on MathVerse In line with official MathVerse guidelines, we report the “w/o” score.The results in Table 1 show that our 7B model outperforms all open-source models, including the110B LLaV A-NeXT, and surpasses Gemini-Pro and Qwen-VL-Max, trailing only GPT-4V . Ourmodel demonstrates exceptional performance in the Text-Dominant, Text-Lite, Vision-Intense, andVision-Dominant categories, highlighting its strong multimodal capabilities in processing both textand visual inputs. However, it underperforms in the Vision-Only category, likely due to limitations inour vision encoder, which processes images only at a resolution of 384×384. To validate the effectof our proposed InfiMM-WebMath-40B, we also provide ablations on the CPT and IFT datasets inAppendix G.Table 2: Evaluations on the We-Math benchmark. A VG represents the primary metric of interest.Model Base LLM A VG↑ IK↓ IG↑ CM↑ RM↓Proprietary ModelsGemini-1.5-Pro N/A 26.4 42.7 11.2 20.8 54.8GPT-4V N/A 31.1 39.8 14.5 23.8 47.9Open-sourced ModelsLLaV A-1.6 Vicuna-7B 3.3 78.3 2.5 2.1 89.1LLaV A-1.6 Vicuna-13B 5.2 69.1 3.2 3.6 86.9DeepSeek-VL DeepSeek-7B 6.3 69.1 4.6 4.0 84.8G-LLaV A Vicuna-13B 6.5 64.2 4.6 4.2 86.6Math-LLaV A Vicuna-13B 11.1 - - - 72.8InternLM-XC2 InternLM2-7B 12.7 56.4 10.5 7.4 77.6Our ModelsInfiMM-Math DeepSeek-Coder-1.3B 13.1 56.2 9.1 9.3 73.7InfiMM-Math DeepSeek-Base-7B 20.6 48.8 12.2 15.2 61.7Evaluations on We-Math Here, we compare models on the We-Math benchmarks, consisting of6.5K visual math questions. We report results on the testmini set using four metrics: InsufficientKnowledge (IK), Inadequate Generalization (IG), Complete Mastery (CM), and Rote Memorization(RM). As shown in Table 2, our model, InfiMM-Math, surpasses all open-source models.5 ConclusionsIn this work, we introduced InfiMM-WebMath-40B, the first large-scale multimodal pretrainingdataset for mathematical reasoning, filling a crucial gap in open-source research. Our datasetsignificantly enhances models’ performances on key benchmarks.4References[1] Open AI. Hello gpt-4o. https://openai.com/index/hello-gpt-4o/ , 2024.[2] Anthropic. Claude 3.5 sonnet. https://www.anthropic.com/news/claude-3-5-sonnet , 2024.[3]Anas Awadalla, Le Xue, Oscar Lo, Manli Shu, Hannah Lee, Etash Kumar Guha, Matt Jordan, Sheng Shen,Mohamed Awadalla, Silvio Savarese, Caiming Xiong, Ran Xu, Yejin Choi, and Ludwig Schmidt. Mint-1t:Scaling open-source multimodal data by 10x: A multimodal dataset with one trillion tokens, 2024.[4]Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen Marcus McAleer,Albert Q Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model formathematics. In The Twelfth International Conference on Learning Representations , 2024.[5]Edward Beeching, Shengyi Costa Huang, Albert Jiang, Jia Li, Benjamin Lipkin, Zihan Qina, KashifRasul, Ziju Shen, Roman Soletskyi, and Lewis Tunstall. Numinamath 7b tir. https://huggingface.co/AI-MO/NuminaMath-7B-TIR , 2024.[6]Andrei Z Broder. On the resemblance and containment of documents. In Proceedings. Compression andComplexity of SEQUENCES 1997 (Cat. No. 97TB100171) , pages 21–29. IEEE, 1997.[7]Shihao Cai, Keqin Bao, Hangyu Guo, Jizhi Zhang, Jun Song, and Bo Zheng. Geogpt4v: Towards geometricmulti-modal large language models with geometric image generation, 2024.[8]Guiming Hardy Chen, Shunian Chen, Ruifei Zhang, Junying Chen, Xiangbo Wu, Zhiyi Zhang, ZhihongChen, Jianquan Li, Xiang Wan, and Benyou Wang. Allava: Harnessing gpt4v-synthesized data for a litevision-language model. arXiv preprint arXiv:2402.11684 , 2024.[9]Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang, Lingbo Liu, Eric Xing, and Liang Lin. Geoqa: Ageometric question answering benchmark towards multimodal numerical reasoning. In Findings of theAssociation for Computational Linguistics: ACL-IJCNLP 2021 , pages 513–523, 2021.[10] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, MatthiasPlappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math wordproblems. arXiv preprint arXiv:2110.14168 , 2021.[11] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, MatthiasPlappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Trainingverifiers to solve math word problems, 2021.[12] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman,Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprintarXiv:2407.21783 , 2024.[13] Alex Fang, Albin Madappally Jose, Amit Jain, Ludwig Schmidt, Alexander T Toshev, and Vaishaal Shankar.Data filtering networks. In The Twelfth International Conference on Learning Representations , 2024.[14] Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, JianhuaHan, Hang Xu, Zhenguo Li, et al. G-llava: Solving geometric problem with multi-modal large languagemodel. arXiv preprint arXiv:2312.11370 , 2023.[15] Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, JianhuaHan, Hang Xu, Zhenguo Li, and Lingpeng Kong. G-llava: Solving geometric problem with multi-modallarge language model, 2023.[16] Zorik Gekhman, Gal Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, and Jonathan Herzig.Does fine-tuning llms on new knowledge encourage hallucinations? arXiv preprint arXiv:2405.05904 ,2024.[17] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, YuWu, YK Li, et al. Deepseek-coder: When the large language model meets programming–the rise of codeintelligence. arXiv preprint arXiv:2401.14196 , 2024.[18] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and JacobSteinhardt. Measuring massive multitask language understanding, 2021.[19] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprintarXiv:2103.03874 , 2021.[20] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, andJacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS , 2021.[21] Andrew Jaegle, Felix Gimeno, Andy Brock, Oriol Vinyals, Andrew Zisserman, and Joao Carreira. Perceiver:General perception with iterative attention. In International conference on machine learning , pages 4651–4664. PMLR, 2021.[22] Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient textclassification. arXiv preprint arXiv:1607.01759 , 2016.[23] Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. Dvqa: Understanding data visualizationsvia question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition ,pages 5648–5656, 2018.[24] Mehran Kazemi, Hamidreza Alvari, Ankit Anand, Jialin Wu, Xi Chen, and Radu Soricut. Geomverse: Asystematic evaluation of large models for geometric reasoning, 2023.5[25] Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. Adiagram is worth a dozen images. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam,The Netherlands, October 11–14, 2016, Proceedings, Part IV 14 , pages 235–251. Springer, 2016.[26] Hugo Laurençon, Lucile Saulnier, Léo Tronchon, Stas Bekman, Amanpreet Singh, Anton Lozhkov, ThomasWang, Siddharth Karamcheti, Alexander M. Rush, Douwe Kiela, Matthieu Cord, and Victor Sanh. Obelics:An open web-scale filtered dataset of interleaved image-text documents, 2023.[27] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, GuyGur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models, 2022.[28] Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. Camel:Communicative agents for "mind" exploration of large scale language model society, 2023.[29] Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, EtashGuha, Sedrick Keh, Kushal Arora, et al. Datacomp-lm: In search of the next generation of training sets forlanguage models. arXiv preprint arXiv:2406.11794 , 2024.[30] Lei Li, Yuqi Wang, Runxin Xu, Peiyi Wang, Xiachong Feng, Lingpeng Kong, and Qi Liu. Multimodalarxiv: A dataset for improving scientific comprehension of large vision-language models, 2024.[31] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, JohnSchulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step, 2023.[32] Naiming Liu, Shashank Sonkar, Myco Le, and Richard Baraniuk. Malalgoqa: A pedagogical approach forevaluating counterfactual reasoning abilities, 2024.[33] Anton Lozhkov, Loubna Ben Allal, Leandro von Werra, and Thomas Wolf. Fineweb-edu, May 2024.Software.[34] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-WeiChang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundationmodels in visual contexts, 2024.[35] Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu.Inter-gps: Interpretable geometry problem solving with formal language and symbolic reasoning, 2021.[36] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, PeterClark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science questionanswering. Advances in Neural Information Processing Systems , 35:2507–2521, 2022.[37] Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, andAshwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning.InThe Eleventh International Conference on Learning Representations , 2023.[38] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. Chartqa: A benchmark forquestion answering about charts with visual and logical reasoning, 2022.[39] Minesh Mathew, Dimosthenis Karatzas, and CV Jawahar. Docvqa: A dataset for vqa on document images.InProceedings of the IEEE/CVF winter conference on applications of computer vision , pages 2200–2209,2021.[40] Saeid Naeini, Raeid Saqur, Mozhgan Saeidi, John Giorgi, and Babak Taati. Large language models arefixated by red herrings: Exploring creative problem solving and einstellung effect using the only connectwall dataset, 2023.[41] Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. Openwebmath: An open dataset ofhigh-quality mathematical web text, 2023.[42] Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. Openwebmath: An open dataset ofhigh-quality mathematical web text. arXiv preprint arXiv:2310.06786 , 2023.[43] Guilherme Penedo, Hynek Kydlí ˇcek, Loubna Ben allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel,Leandro V on Werra, and Thomas Wolf. The fineweb datasets: Decanting the web for the finest text data atscale, 2024.[44] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, HamzaAlobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falconllm: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116 ,2023.[45] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving, 2020.[46] Runqi Qiao, Qiuna Tan, Guanting Dong, Minhui Wu, Chong Sun, Xiaoshuai Song, Zhuoma GongQue,Shanglin Lei, Zhe Wei, Miaoxuan Zhang, Runfeng Qiao, Yifan Zhang, Xiao Zong, Yida Xu, Muxi Diao,Zhimin Bao, Chen Li, and Honggang Zhang. We-math: Does your large multimodal model achievehuman-like mathematical reasoning?, 2024.[47] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, GirishSastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models fromnatural language supervision. In International conference on machine learning , pages 8748–8763. PMLR,2021.6[48] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, MingchuanZhang, Y . K. Li, Y . Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning inopen language models, 2024.[49] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li, Yu Wu, andDaya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXivpreprint arXiv:2402.03300 , 2024.[50] Wenhao Shi, Zhiqiang Hu, Yi Bin, Junhua Liu, Yang Yang, See-Kiong Ng, Lidong Bing, and Roy Ka-WeiLee. Math-llava: Bootstrapping mathematical reasoning for multimodal large language models. arXivpreprint arXiv:2406.17294 , 2024.[51] Wenhao Shi, Zhiqiang Hu, Yi Bin, Junhua Liu, Yang Yang, See-Kiong Ng, Lidong Bing, and Roy Ka-WeiLee. Math-llava: Bootstrapping mathematical reasoning for multimodal large language models, 2024.[52] Yuxuan Tong, Xiwen Zhang, Rui Wang, Ruidong Wu, and Junxian He. Dart-math: Difficulty-awarerejection tuning for mathematical problem-solving. arXiv preprint arXiv:2407.13690 , 2024.[53] Zengzhi Wang, Rui Xia, and Pengfei Liu. Generative ai for math: Part i – mathpile: A billion-token-scalepretraining corpus for math. arXiv preprint arXiv:2312.17120 , 2023.[54] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, DennyZhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neuralinformation processing systems , 35:24824–24837, 2022.[55] Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, and Kyunghyun Cho.Naturalproofs: Mathematical theorem proving in natural language. In Thirty-fifth Conference on NeuralInformation Processing Systems Datasets and Benchmarks Track (Round 1) , 2021.[56] Renqiu Xia, Bo Zhang, Hancheng Ye, Xiangchao Yan, Qi Liu, Hongbin Zhou, Zijun Chen, Min Dou,Botian Shi, Junchi Yan, and Yu Qiao. Chartx & chartvlm: A versatile benchmark and foundation modelfor complicated chart reasoning, 2024.[57] Zhengzhuo Xu, Sinan Du, Yiyan Qi, Chengjin Xu, Chun Yuan, and Jian Guo. Chartbench: A benchmarkfor complex visual reasoning in charts, 2024.[58] Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, Yichuan Ma, JiaweiHong, Kuikun Liu, Ziyi Wang, et al. Internlm-math: Open math large language models toward verifiablereasoning. arXiv preprint arXiv:2402.06332 , 2024.[59] Longhui Yu, Weisen Jiang, Han Shi, Jincheng YU, Zhengying Liu, Yu Zhang, James Kwok, ZhenguoLi, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for largelanguage models. In The Twelfth International Conference on Learning Representations , 2024.[60] Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.Mammoth: Building math generalist models through hybrid instruction tuning, 2023.[61] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language imagepre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages11975–11986, 2023.[62] Jiaxin Zhang, Zhongzhi Li, Mingliang Zhang, Fei Yin, Chenglin Liu, and Yashar Moshfeghi. Geoeval:Benchmark for evaluating llms and multi-modal models on geometry problem-solving, 2024.[63] Ming-Liang Zhang, Fei Yin, and Cheng-Lin Liu. A multi-modal neural geometric solver with textualclauses parsed from diagram. In Proceedings of the Thirty-Second International Joint Conference onArtificial Intelligence , pages 3374–3382, 2023.[64] Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, PanLu, Kai-Wei Chang, Peng Gao, and Hongsheng Li. Mathverse: Does your multi-modal llm truly see thediagrams in visual math problems?, 2024.[65] Renrui Zhang, Xinyu Wei, Dongzhi Jiang, Yichi Zhang, Ziyu Guo, Chengzhuo Tong, Jiaming Liu, AojunZhou, Bin Wei, Shanghang Zhang, et al. Mavis: Mathematical visual instruction tuning. arXiv preprintarXiv:2407.08739 , 2024.[66] Renrui Zhang, Xinyu Wei, Dongzhi Jiang, Yichi Zhang, Ziyu Guo, Chengzhuo Tong, Jiaming Liu, AojunZhou, Bin Wei, Shanghang Zhang, Peng Gao, and Hongsheng Li. Mavis: Mathematical visual instructiontuning, 2024.[67] Zihao Zhou, Shudong Liu, Maizhen Ning, Wei Liu, Jindong Wang, Derek F. Wong, Xiaowei Huang,Qiufeng Wang, and Kaizhu Huang. Is your model really a good math reasoner? evaluating mathematicalreasoning with checklist, 2024.[68] Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu,Ludwig Schmidt, William Yang Wang, and Yejin Choi. Multimodal c4: An open, billion-scale corpus ofimages interleaved with text, 2023.[69] Zeyuan Allen Zhu and Yuanzhi Li. Physics of language models: Part 3.1, knowledge storage and extraction.arXiv preprint arXiv:2309.14316 , 2023.7A Mathematical Content ExtractionFigure 2 illustrates a comparison of extraction results between Trafilatura and our enhanced versionof Resiliparse. Our tool successfully extracts both the mathematical equations and image URLs, ashighlighted in the red boxes in the screenshot from a Wikipedia webpage.Integral form\n[edit]Gauss's law may be expressed as:[6]\nwhereΦE is the electric flux through a closed surface S enclosing any volume V , Q is the total charge enclosed within V , and ε0 is the electric constant. The electric flux ΦE is defined as a surface integral of the electric field:\nwhereE is the electric field, dAis a vector representing an infinitesimal element of area of the surface,[note 2] and · represents the dot product of two vectors.Integralform\n\n[Image_Link]//upload.wikimedia.org/wikipedia/commons/thumb/3/32/Electric-flux-surface-example.svg/220px-Electric-flux-surface-example.svg.png [Image_Link]Electric flux through an arbitrary surface is proportional to the total charge enclosed by the surface. \n\nGauss'slaw may be expressed as:\n\n$\\Phi _{E}={\\frac {Q}{\\varepsilon_{0}}}$\n\n\nwhereΦE is the electric flux through a closed surface S enclosing any volume V , Q is the total charge enclosed within V , and ε0 is the electric constant. The electric flux ΦE is defined as a surface integral of the electric field:\n\n$\\Phi _{E}=$ $\\scriptstyle_{S}$ $\\mathbf{E} \\cdot\\mathrm{d} \\mathbf{A}$\n\nwhereE is the electric field, dAis a vector representing an infinitesimal element of area of the surface,[note 2] and · represents the dot product of two vectors.From trafilaturaFrom OursFigure 2: A comparative illustration of extraction results from a Wikipedia webpage using Trafilaturaand our enhanced version of Resiliparse, highlighting the successful retrieval of mathematicalequations and image URLs.B Using Prompting with Llama-3-70B for Mathematical AnnotationWe display the full prompt used in High-Precision Filtering for Mathematical Content in Table 3.Table 3: Prompt for evaluating mathematical content using Llama-3-70B following FineWeb-Edu [33].Below is an extract from a web page. Evaluate the mathematical value of the extract and its potential utility as a teachingresource in a mathematical context using the additive 10-point scoring system described below. Points accumulate based onthe satisfaction of each criterion, with special attention to the presence and quality of mathematical equations:- 0 points if the extract includes no mathematical content, such as only provides historical context, summarizes an article’sabstract, or exclusively features a person’s resume.- 1-2 points if the extract offers rudimentary information on mathematical subjects, even if interspersed with irrelevantmaterial such as advertisements or non-academic content.- 2-4 points if the extract touches upon mathematical topics without rigorous adherence to academic standards and contains amix of mathematical and non-mathematical content, or if the presentation is haphazard and the writing lacks clarity.- 4-6 points if the extract presents key concepts pertinent to educational curricula and includes mathematical equations,albeit potentially non-comprehensive or alongside superfluous information. It should resemble a mathematical text, such asan introductory section of a textbook or a basic tutorial.- 6-8 points if the extract is highly relevant to mathematics, is well-structured, and offers a clear exposition, including asignificant number of mathematical equations and solutions. It should be akin to an in-depth textbook chapter or tutorial,with a strong focus on mathematical content and minimal unrelated information.- 8-10 points if the extract exhibits exceptional mathematical merit, characterized by detailed explanations, a comprehensivearray of mathematical equations, and a coherent, accessible writing style that provides profound insights into mathematicaltheories and applications.The extract: <EXAMPLE>.After examining the extract: - Briefly justify your total score. - Conclude with the score using the format:"mathematical score: <total points>"8C Ablation Studies on High-Precision Mathematical Content FilteringIn this section, we examine the efficacy of two classifiers—LLM-based and fastText-based—focusingon high-precision mathematical content filtering. The comparison utilizes the DeepSeek-Coder1.3B model, which we trained on a dataset previously introduced in Sec. High-Recall Filtering forMathematical Content with a sequence length of 4096. This model was trained to score documentsbased on their relevance to mathematical content on a scale from 0 to 10.We conduct the continue pretraining of the DeepSeekCoder 1.3B model using datasets filtered byboth the LLM- and fastText-based classifiers. Table 4 shows the performance results. The resultshighlight a length bias in the LLM-based method, which tends to favor longer documents, averaging2,500 tokens, compared to 1,700 tokens for the FastText filter. The length bias associated with theLLM-based classifier has adversely impacted the dataset’s performance on the GSM8K dataset. Asindicated in the table, the LLM-filtered dataset achieved lower accuracy (17.5%) on the GSM8Kdataset compared to the fastText-filtered dataset (20.2%). This decrease in performance indicates thatthe LLM’s preference for longer documents may not align well with the requirements of datasets likeGSM8K, which demand concise and precise mathematical descriptions.Given these insights, we have decided to continue utilizing the fastText classifier for high-precisionfiltering in our ongoing research. Nonetheless, the implications of the LLM-based classifier requirefurther investigation to fully understand and address its biases.Table 4: Ablations on the high-precision filtering. The “Text Avg Length” column indicates theaveraged document length after filtering by each respective classifier.MMLU (STEM) GSM8K Text Avg LengthLLM-Classifier 32.8 17.5% 2500FastText-Classifier 31.1 20.2% 1700D Data Cleaning Rules in FastText TrainingDuring fastText training, we implement data cleaning rules to optimize the model’s performance formathematical content. Mathematical texts pose unique challenges due to specialized terminology,symbols, formulas, and numeric data, which differ from typical natural language and require morerefined preprocessing techniques.Our goal is to standardize and simplify the input training data while preserving essential mathematicalinformation. Key considerations include maintaining consistency in token representation, minimizingnoise from extraneous characters, and standardizing numeric values. The following steps reflect thisapproach:•Utilizing the SpaCy English language model ( en_core_web_sm ), we preprocess the inputtext, tokenize it, and process each token by converting it to its lowercase and lemmatizedform. Common placeholders are replaced, certain non-alphanumeric characters are removed,and patterns of special characters like dashes and underscores are normalized. We also stripany unnecessary whitespace, ensuring the text is well-prepared for downstream processing.•All numeric values are replaced with the <NUM> placeholder to standardize the represen-tation, and line breaks along with carriage returns are removed. Tokens exceeding 100characters in English are discarded.E Text-Only Filtering EvaluationTo provide a preliminary evaluation of the quality of our filtered dataset, we continue pretraining adeepseek-coder-1.3b-base model for one epoch using the filtered mathematical content in Sec. High-Precision Filtering for Mathematical Content, excluding image URLs. We validate the effectivenessof our math-related filtering with a few-shot evaluation using the GSM8K [ 10] and the STEM sectionsof the MMLU [18] benchmark.9Table 5: Evaluation of models on GSM8K and MMLU (STEM). The baseline is the deepseek-coder-1.3b-base without any training.Training Corpus GSM8K MMLU (STEM)Baseline 4.8 25.6OpenWebMath [41] 11.0 29.6DeepSeekMath Corpus [48] 23.8 33.1InfiMM-WebMath-40B (text) 26.1 35.6As shown in Table 5, the model trained on our InfiMM-WebMath-40B text-only dataset demonstratescompetitive performance compared to OpenWebMath and the DeepSeekMath Corpus, highlightingthe high quality of our dataset and the effectiveness of our filtering procedures.F Training DetailsModality Alignment Stage In this stage, we utilize general-purpose image-text pairs to align thevisual encoder and the LLM via Perceiver Resampler. The primary objective is to minimize thedomain gap between visual and linguistic modalities. To achieve this, we sample a 8 million image-text pair subset from the DFN-2B dataset [ 13] for the alignment training. During this stage, the visionencoder and LLM backbone are frozen, and training is focused on the Perceiver Resampler module.Training is conducted for one epoch using DeepSpeed Zero2, with the AdamW optimizer, configuredwith a cosine learning rate scheduler, a maximum learning rate of 1e−4, betas of (0.9,0.95), and aweight decay of 0.1.Continue Pre-training Stage We further continue pre-training our models using the InfiMM-WebMath-40B dataset to enhance the model’s mathematical knowledge acquisition in a multi-modal setting. The training is conducted for one epoch using DeepSpeed Zero2, with the AdamWoptimizer, configured with a cosine learning rate scheduler, a maximum learning rate of 5e−5, betasof(0.9,0.95), and a weight decay of 0.1. The context length for training examples is set to 4096,with a maximum of 32 images per example. During this stage, the visual encoder remains frozen,and training focuses on learning the Perceiver Resampler module (the visual-language connector)and the LLM.Instruction Fine-tuning Stage In this stage of training, we fine-tune our models using instructiondatasets, including PGPS9K [ 63], Geo170k [ 15], TABMWP [ 37], ScienceQA [ 36], Vflan [ 8], Visual-WebInstruct, AI2D [ 25], ChartQA [ 38], DocVQA [ 39], DVQA [ 23], GeoQA [ 9], and MA VIS [ 66].We find that incorporating uni-modal text instruction datasets is crucial for enhancing the models’instruction-following capabilities. Therefore, we also include pure text instruction datasets such asMath[ 28], MetaMathQA [ 59], DART-Math [ 52], and NuminaMath [ 5]. The objective of this stage isto acclimate the models to the common chat templates used in math VQA settings, thereby enablingthem to better utilize the mathematical knowledge acquired in the previous stage.We freeze the vision encoder and update the parameters of the Perceiver Resampler and LLMs. As inthe previous stages, training is conducted using DeepSpeed Zero2 for one epoch, with the AdamWoptimizer, configured with 2000 warmup steps, a maximum learning rate of 5e−6, betas of (0.9,0.95),a weight decay of 0.1, and cosine decay to 5e−7. The batch size is set to one per GPU, and thecontext length of the training examples is set to 4096. We utilize 32 A100-80G GPUs for the 1.3bmodels and 64 A100-80G GPUs for the 7b models.G CPT and IFT Dataset Ablations on MathVerseIn this section, we conduct ablation studies on models (1) trained with and without continue pre-training (CPT), and (2) models fine-tuned on the MA VIS dataset versus a more extensive instructionfine-tuning (IFT) dataset. Specifically, we compare models trained with and without our ownmathematical multi-modal pre-training dataset, InfiMM-WebMath-40B. Additionally, we evaluatetwo IFT dataset configurations: (a) a combination of MA VIS-Caption-to-QA, MA VIS-Existing-10Table 6: Datasets ablations (CPT and IFT)using Deepseek-coder-1.3B.CPT IFTMathVersew/o scoreDSC-1.3B Mavis 20.2DSC-1.3B ✓ Mavis 25.1 (+4.9)DSC-1.3B Extended 22.3DSC-1.3B ✓ Extended 26.9 (+4.6)Table 7: Datasets ablations (CPT and IFT)using Deepseek-coder-1.5-7B.CPT IFTMathVersew/o scoreDSC-1.5-7B Mavis 22.8DSC-1.5-7B ✓ Mavis 27.1 (+4.3)DSC-1.5-7B Extended 23.8DSC-1.5-7B ✓ Extended 29.1 (+5.3)Dataset-Augment, MA VIS-Caption, MA VIS-DataEngine-Geometry, and MA VIS-Meta-Question(referred to as the MA VIS dataset); and (b) a broader set consisting of the MA VIS datasets alongwith Vflan, VisualWebInstruct, AI2D, CHARTQA, DOCVQA, DVQA, GEOQA, DART-Math, andNumina-Math (referred to as the Extended dataset).As shown in Table 6, in the 1.3B model, CPT improves the MathVerse scores by 4.9 and 4.6 pointswhen IFT is performed with MA VIS and Extended datasets, respectively. Similarly, Table 7 showsthat in the 7B model, CPT improves the MathVerse scores by 4.8 and 5.3 points with MA VISand Extended datasets, respectively. In contrast, using broader IFT datasets typically enhancesmodel performance by approximately 2 points. These results highlight the significant mathematicalcapabilities imparted to the models through our InfiMM-WebMath-40B for CPT.11 |
TsAvsEoh0D | WILT: A Multi-turn, Memorization-Robust InductiveLogic Benchmark for LLMsEryk BanattRiot GamesLos Angeles, CA [email protected] ChengRiot GamesLos Angeles, CA [email protected] HwuRiot GamesLos Angeles, CA [email protected] large language models (LLMs) have shown impressive capabilities acrossa wide range of domains, they still encounter significant challenges in reasoningtasks that require gathering evidence over multiple turns and drawing logicalconclusions from this evidence. Despite the multi-turn nature of many real-worldLLM use cases, most existing benchmarks rely on carefully curated single-turntests, which often blur the line between memorization and genuine reasoning. Toaddress this, we introduce the Wason Inductive Logic Test (WILT) , a simpleyet challenging multi-turn reasoning benchmark designed to resist memorization.WILT is inspired by the Wason 2-4-6 task [ 41], where participants must infer abasic boolean function involving three variables (e.g., x < y < z ) by proposingtest cases (such as (2,4,6)). In WILT, each test starts from a clean slate, with onlythe initial instructions provided, preventing models from relying on pre-learnedresponses. Our findings reveal that LLMs struggle with this task, with the best-performing model achieving only 28% accuracy, highlighting a significant gap inLLM performance on complex multi-turn reasoning tasks.1 IntroductionLarge Language Models (LLMs) powered by the transformer architecture [ 36] have enabled a newcomputing paradigm powered by natural language. LLMs have seen an increased ubiquitousness inday-to-day life beyond the machine learning research space, where they help many people acrossthe world with common tasks. These models interact with users through multi-turn conversations ,a capability added to next-token-prediction models via instruction-tuning [ 19] and alignment post-training phases [25].Measuring the performance of LLMs has been challenging for the research community. Publiclyavailable benchmarks are subject to Goodhart’s law [ 12] or implicit overfitting [ 27], and difficultbenchmarks often keep a held-out, publicly unavailable test set to accurately evaluate models [ 5]. Inaddition, the vast majority of benchmarks are suites of single-turn tests, which differ substantiallyfrom the common day-to-day use of LLMs [15] [16].The reasoning capability of LLMs, particularly in multi-turn scenarios, is of substantial interest.A commonly reported failure pattern for LLMs is the "doom loop", where the model repeatedlyresponds with a near-identical message to one of its earlier messages, providing minimal utility.While some benchmarks have emerged to attempt to measure this multi-turn propensity of repetition[18], none so far have done so for multi-step inductive reasoning.In this work, we make the following contributions:38th Conference on Neural Information Processing Systems (NeurIPS 2024).1.We introduce the Wason Inductive Logic Test (WILT) , a multi-turn inductive logic bench-mark that is robust to memorization. We show that frontier LLMs struggle significantly onthis task.2.We further evaluate the reasoning capabilities of the LLMs by framing the task as an explo-ration of hypothesis space. The LLMs show marked differences in the rate of hypothesisspace reduction, novelty of responses, and complexity.2 WILTThe Wason Inductive Logic Test (WILT) is a benchmark for LLM reasoning inspired by the Wason2-4-6 task [ 41]. Models begin with the instruction that they must uncover the hidden rule, and maypose up to 30 test cases of that rule. For example, they can pose the tuple (2,4,6)and the test willrespond with "True, 29 Attempts Remaining."All hidden rules take three numbers and return a boolean. These rules are simple and non-stochastic,so there is no additional value to posing the same test multiple times. Valid inputs include any floator integer that can be typed in three or fewer characters, excluding signs and the decimal point (e.g.-999, 1.23, 5). The hidden rules are written as Python lambda functions. After a maximum of thirtytries (or any turn before then), the model may make one attempt to guess the function, after which thetest will terminate. The model must return a Python lambda function that is the same as or equivalent1to the hidden rule in order to receive full points.WILT is conceptually simple, but very challenging. Humans can identify simple rules fairly easilydespite the infinitely large hypothesis space, the unbounded difficulty of a hidden function, and theimpossibility of verifying the correctness of your response. For example, consider the canonicalWason task rule of x < y < z . This rule has very high overlap with the much more arbitrary rule(x < y < z )∧(x̸= 12) . The WILT benchmark therefore tests a few high-value behaviors of interest:1.Multi-Turn Capability : Participants that fall into "doom loops" are punished by virtue ofhaving less useful information with which to infer the hidden rule.2.Hypothesis Space Reduction : Participants are rewarded for proposing test cases thateffectively narrow down the possible rules, despite that hypothesis space being infinitelylarge.3.Susceptibility to Confirmation Bias : Participants who are more prone to "confirming theirhypothesis" rather than seeking falsification will perform poorly upon this task.4.Inductive Mathematical Reasoning : Proposing good test cases is a useful test of inductivereasoning and the ability to generalize from a number of specific examples.5.Deductive Mathematical Reasoning : Proposing sensible functions after observing manytest cases is a useful test of deductive reasoning, with successful performance rewardingidentifying a specific function that fits a suite of examples.6.Occam’s Razor : Participants are rewarded for finding the simplest explanation fitting theexamples.We release two test suites: a lite split , with 10 very easy tests and a canonical full split with 50moderately difficult tests. Future work will extend this to include a procedurally generated split forrobustness to overfitting. We find that the lite split quickly produces a roughly similar ordering tothe full split, but we report results upon the full split for the remainder of this work. Please see theappendix for further details. The test and associated code can be found at github.com/riotgames/wilt.3 Related WorkCompared to other reasoning benchmarks, WILT stands out as both highly multi-turn focused andunusually robust to memorization. In contrast to other benchmarks, WILT requires models to interactwith an environment by proposing their own test cases to uncover a hidden function without relyingon pre-provided examples. This setup reduces the risk of overfitting, as each test begins with thesame initial instructions, and the model must generate and interpret its own data.1For example, x−y=zis equivalent to y+z=x23.1 Reasoning BenchmarksThere are a wide variety of reasoning benchmarks used to evaluate large language models. Somevery notable among these are MATH [ 16], GSM8K [ 6], CommonsenseQA [ 30], StrategyQA [ 11],BIG-BENCH [ 29], SciBench [ 39], SV AMP [ 26], ARC-AGI [ 5], MMLU [ 15], GPQA [ 28], andHumanEval [ 4]. These benchmarks are the standard for measuring LLM reasoning capabilities, butare overwhelmingly carefully chosen single-turn problems which aim to meaningfully separate theperformance of different models on reasoning-like outputs such as math, code, or logic puzzles. How-ever, these benchmarks are subject to train-on-test leakage, even if efforts are made to decontaminatethe dataset [ 45], and the majority are explicitly single-turn tests. Our benchmark directly measuresthe model’s ability to navigate multi-turn scenarios and does not require careful hiding of a test set toprevent misleading results.With respect to reasoning about simple functions, a benchmark that stands out as similar to oursis CRUXEval [ 14], which assembles a list of 800 simple Python functions and input-output pairs,evaluating language models on their ability to predict input from output and output from input. Ourwork could be seen as a multi-turn, more difficult extension of this work – one where the functionis replaced with a black box, where helpful and informative input-output pairs are not provided butinstead need to be searched for by the language model, and where the objective is to infer the hiddenfunction rather than the input or output.3.2 Multi-Turn BenchmarksThere are a handful of multi-turn benchmarks used to evaluate LLMs. PlanBench [ 34] is oneprominent benchmark that attempts to measure the ability of LLMs to navigate planning problems.This is a class of problems that is solved easily by classical planning algorithms such as STRIPS [ 10],and like our benchmark poses a significant challenge to LLMs. PlanBench is a primarily multi-step,single-turn benchmark with a multi-turn component (i.e. replanning based on unexpected events),which contrasts with our benchmark’s more direct multi-turn focus. This can be observed in the o1models performing comparatively well on PlanBench [ 35], since scaling inference time computewithin a single turn would be expected to improve performance substantially. Similarly, BIG-BENCH[29] includes a "20 Questions" task, which pairs two language models together to guess a word byposing questions about it. Like ours, this involves an iterated reduction of hypothesis space overmultiple turns, but unlike ours relies on potentially subjective or unreliable LLM responses to queriesabout the concept.Closest to ours are MINT [ 40] and Aidan-bench [ 18], which have more direct multi-turn focus. MINTrepurposes existing single-turn benchmarks by allowing models to use tools before answering. Whilethe range of tasks in MINT is therefore quite large, strong models can still solve these tasks in few(or one) turns, and the unmodified prompts remain subject to test set leakage. Aidan-bench measuresthe cosine similarity between multi-turn responses. This represents a more pure measurement ofthe doom loop phenomenon. In our benchmark, rather than directly measuring the doom loops, weare instead measuring how often those doom loops lead to failures of reasoning. We see similarperformances in our benchmark compared to Aidan-bench (e.g. Mistral Large), but with an orderingmore tied to capabilities (e.g. Sonnet’s strong results, see Table 1).3.3 Hypothesis Space ReductionHypothesis space representation is a commonly used framing in inductive logic tasks for LLMs. In[38], the authors show a technique called hypothesis search where the model will propose hypothesesin natural language and then implement these hypotheses as Python programs. This technique wasshown to improve performance on ARC-AGI [ 5], but a similar approach could be used along withchain-of-thought [42] for WILT as well.4 ResultsOur results for this test can be found in Table 1. We show that despite the test’s relative simplicity,most models struggle substantially both to propose good tests and to infer a rule based on availableevidence. As in the original Wason 2-4-6 task, we find a common failure mode to be confirmation3Table 1: Model Accuracy ComparisonModel Accuracy Approx. Correct Avg. Guesses RepeatsClaude 3.5 Sonnet 20240620 [2] 14/50 10/50 16.38 27o1-mini 2024-09-12 [24] 13/50 8/50 12.1 3o1-preview 2024-09-12 [24] 12/50 6/50 8.1223chatgpt-4o-latest [22] 11/50 7/50 14.22 38Mistral Large 2 [20] 11/50 5/50 26.56 142GPT-4o 2024-08-06 [22] 9/50 6/50 15.26 26Llama 3.1 405B [8] 8/50 9/50 12.21 30Gemini 1.5 Flash 0827 [13] 7/50 4/50 14.04 108Llama 3.1 70B [8] 7/50 2/50 15.18 74Deepseek-v2.5-chat [17] 6/50 5/50 27.22 489GPT-4o-mini [23] 6/50 2/50 20.36 54Gemini 1.5 Pro [13] 5/50 6/50 16.78 41Gemini 1.5 Flash [13] 5/50 6/50 16.5 123Deepseek-v2-coder [46] 5/50 5/50 21.82 335Deepseek-v2-chat [17] 3/50 3/50 25.32 334Llama 3.1 8b [8] 3/50 0/50 26.18 223Open Mistral Nemo [21] 2/50 3/50 27.34 400Claude 3 Haiku 20240307 [3] 1/50 1/50 6.76 11Gemini 1.5 Flash 8b 0827 [13] 0/50 2/50 26.76 386Gemma 2 9B [31] 0/50 2/50 8.82 70bias – a participant will identify a plausible hypothesis and continue to propose tests that attempt toconfirm it. Stronger models will more explicitly attempt to falsify these hypotheses instead.In Table 1, we include a column approximately correct , measuring the number of rules in which themodel was able to correctly identify some critical behavior of the rule, but returned a rule with failingedge cases. For example, guessing (x < y < z )instead of (x≤y≤z)is approximately correct.We include this column to highlight models that are more willing to guess immediately instead ofuncovering edge cases by default (e.g. Llama 3.1 405B). In these cases, we could see potentiallyimproved performance through more targeted prompting techniques.We find that LLMs (particularly smaller ones) will frequently repeat tests they have already used,sometimes dozens of times. We therefore also provide a column repeats which counts the totalproposed tests which were already tested for that rule. Further discussion on test novelty can befound in Appendix A.4. Additionally, further experiments analyzing what parts of the task models arestrong or weak at can be found in Appendix A.2. We find that some models are produce very usefultest cases (e.g. chatgpt-4o-latest) whereas other models are strong at guessing the rule (e.g. o1-mini).4.1 Hypothesis Space ReductionTo compare the LLMs’ ability to efficiently reason about the task, we estimate how effectively eachmodel reduces the hypothesis space [ 38]. At best, an LLM should always propose a triplet thateliminates as many untested hypotheses as possible. At worst, a model repeatedly proposes a tripletconfirming previously covered hypotheses. For example, an LLM that has already guessed (2,4,6)retreads much of the same hypothesis space by guessing (4,6,8)rather than (0.01,−1,100) .To represent that hypothesis space, we randomly generate a large number of lambdas that encompassa wide range of potential hypotheses. For example, we randomly generate lambdas having to dowith: ordering (x < y < z ), equality (x=y̸=z), arithmetic relations (x+y=z), parity(x≤10, y≤5, z≤100), etc. When an LLM proposes a triplet, we cross off any lambdas that donot match the observed behavior of the hidden rule. Figure 1 illustrates the rate at which differentmodels reduce the hypothesis space over successive turns. Models with worse reasoning spend moreattempts to clear less of the hypothesis space.2We bold results for this column only if the models are also high performing. For example, Claude 3 Haikuuses fewer guesses than o1-preview, but this is because it is failing.4Figure 1: Models can succeed upon this task by reducing the hypothesis space quickly or providinguseful tests for many turns. We show that models with strong reasoning capabilities can narrow thespace quickly, but weaker multi-turn capability harms their ability to get value out of later tests.4.2 Response ComplexityTo determine how well the models employ Occam’s Razor, we explore different metrics to gaugewhether the models find the simplest rule that covers the examples. From existing Bayesian modelsof cognition [ 32,33], the size principle uses hypothesis size as a measure of simplicity. In theseBayesian models, hypothesis size is calculated as the number of values that match the hypothesis.Calculating hypothesis size in this manner is only possible when the test values are within a limitedcountable range. In our case, the possible test values are infinite, requiring some alternative metricsto gauge hypothesis size. We use three metrics:1.Number of Operators : We count the number of operators used in the rule expression.2.Response Length : We calculate the string length of the rule expression. The longer thelength, the more complex it is likely to be. As longer outputs tend to be arbitrarily preferredby automatic evaluators [ 9], it is particularly important to measure the brevity of the responsefor cases in which simplicity is desired.3.Set Inclusion : We generate a grid of integer-float tuples and apply them to guessed andactual rules to generate sets of tuples returning “True”. If the set of the guessed rule is asubset or superset of the actual rule, we then calculate their set size ratio. A ratio of 1isideal, >1suggests a less complex guess, and <1a more complex one.Appendix A.3 shows the complexity metrics of the LLMs. Most LLMs with high accuracy have longresponse lengths and many operators, with some exceptions.5 DiscussionIn our experiments, we show that LLMs struggle substantially with this task. Specifically, theirpropensity to repeat test cases, propose useless test cases, and guess very unlikely rules harms theirperformance on this task substantially. The varying performance in a multi-turn setting represents apreviously underappreciated dimension of measuring reasoning capability in LLMs. There is muchwork in language modeling for code-based agents [ 7] [43] and LLM-driven unit testing [ 44], and thedifficulty of LLMs to explore edge cases effectively has substantial implications on those applications.With this work, we aim to provide a benchmark that measures a model’s capacity for exploring anenvironment and reasoning based on its own decisions across multiple turns. We believe that thisparadigm offers a more direct measurement of reasoning capability compared to other benchmarks.By focusing specifically on how LLMs handle multi-turn reasoning, we better understand their mostcommon real-world applications.5References[1]Dennis Abts, Garrin Kimmell, Andrew Ling, John Kim, Matt Boyd, Andrew Bitar, Sahil Parmar, IbrahimAhmed, Roberto DiCecco, David Han, et al. A software-defined tensor streaming multiprocessor forlarge-scale machine learning. In Proceedings of the 49th Annual International Symposium on ComputerArchitecture , pages 567–580, 2022.[2]Anthropic. Introducing claude 3.5 sonnet, 2024. URL https://www.anthropic.com/news/claude-3-5-sonnet . Accessed: 2024-09-11.[3]Anthropic. Claude 3 haiku: Our fastest model yet, 2024. URL https://www.anthropic.com/news/claude-3-haiku . Accessed: 2024-09-11.[4]Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan,Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language modelstrained on code. arXiv preprint arXiv:2107.03374 , 2021.[5] François Chollet. On the measure of intelligence. arXiv preprint arXiv:1911.01547 , 2019.[6]Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, MatthiasPlappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math wordproblems. arXiv preprint arXiv:2110.14168 , 2021.[7]Cognition AI. Introducing devin: The first ai software engineer, 2024. URL https://www.cognition.ai/blog/introducing-devin . Accessed: 2024-09-11.[8]Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman,Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprintarXiv:2407.21783 , 2024.[9]Yann Dubois, Balázs Galambosi, Percy Liang, and Tatsunori B Hashimoto. Length-controlled alpacaeval:A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475 , 2024.[10] Richard E Fikes and Nils J Nilsson. Strips: A new approach to the application of theorem proving toproblem solving. Artificial intelligence , 2(3-4):189–208, 1971.[11] Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did aristotleuse a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of theAssociation for Computational Linguistics , 9:346–361, 2021.[12] Charles AE Goodhart. Problems of monetary management: the UK experience . Springer, 1984.[13] Google. Introducing gemini 1.5: Google’s next-generation ai model, 2024. URL https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024/ . Accessed: 2024-09-11.[14] Alex Gu, Baptiste Rozière, Hugh Leather, Armando Solar-Lezama, Gabriel Synnaeve, and Sida I Wang.Cruxeval: A benchmark for code reasoning, understanding and execution. arXiv preprint arXiv:2401.03065 ,2024.[15] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and JacobSteinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300 , 2020.[16] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprintarXiv:2103.03874 , 2021.[17] Aixin Liu, Bei Feng, Bin Wang, Bingxuan Wang, Bo Liu, Chenggang Zhao, Chengqi Dengr, ChongRuan, Damai Dai, Daya Guo, et al. Deepseek-v2: A strong, economical, and efficient mixture-of-expertslanguage model. arXiv preprint arXiv:2405.04434 , 2024.[18] Aidan McLaughlin. Aidan-bench. https://github.com/aidanmclaughlin/Aidan-Bench , 2024.[19] Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. Cross-task generalization vianatural language crowdsourcing instructions. arXiv preprint arXiv:2104.08773 , 2021.[20] Mistral AI. Large enough: Mistral large 2 announcement, 2024. URL https://mistral.ai/news/mistral-large-2407/ . Accessed: 2024-09-11.6[21] Mistral AI. Mistral nemo: A state-of-the-art 12b model, 2024. URL https://mistral.ai/news/mistral-nemo/ . Accessed: 2024-09-11.[22] OpenAI. Hello gpt-4o, 2024. URL https://openai.com/index/hello-gpt-4o/ . Accessed: 2024-09-11.[23] OpenAI. Gpt-4o mini: Advancing cost-efficient intelligence, 2024. URL https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/ . Accessed: 2024-09-11.[24] OpenAI. O1 system card, 2024. URL https://cdn.openai.com/o1-system-card.pdf . Accessed:2024-09-11.[25] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang,Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions withhuman feedback. Advances in neural information processing systems , 35:27730–27744, 2022.[26] Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple math wordproblems? arXiv preprint arXiv:2103.07191 , 2021.[27] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiersgeneralize to imagenet? In International conference on machine learning , pages 5389–5400. PMLR, 2019.[28] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani,Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. arXivpreprint arXiv:2311.12022 , 2023.[29] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch,Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game:Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615 , 2022.[30] Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A questionanswering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937 , 2018.[31] Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju,Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving openlanguage models at a practical size. arXiv preprint arXiv:2408.00118 , 2024.[32] Joshua Tenenbaum. Rules and similarity in concept learning. Advances in neural information processingsystems , 12, 1999.[33] Joshua B Tenenbaum and Thomas L Griffiths. Generalization, similarity, and bayesian inference. Behav-ioral and brain sciences , 24(4):629–640, 2001.[34] Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. Large languagemodels still can’t plan (a benchmark for llms on planning and reasoning about change). In NeurIPS 2022Foundation Models for Decision Making Workshop , 2022.[35] Karthik Valmeekam, Kaya Stechly, and Subbarao Kambhampati. Llms still can’t plan; can lrms? apreliminary evaluation of openai’s o1 on planbench, 2024. URL https://arxiv.org/abs/2409.13373 .[36] Ashish Vaswani. Attention is all you need. Advances in Neural Information Processing Systems , 2017.[37] Junlin Wang, Jue Wang, Ben Athiwaratkun, Ce Zhang, and James Zou. Mixture-of-agents enhances largelanguage model capabilities. arXiv preprint arXiv:2406.04692 , 2024.[38] Ruocheng Wang, Eric Zelikman, Gabriel Poesia, Yewen Pu, Nick Haber, and Noah D Goodman. Hypothesissearch: Inductive reasoning with language models. arXiv preprint arXiv:2309.05660 , 2023.[39] Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R Loomba,Shichang Zhang, Yizhou Sun, and Wei Wang. Scibench: Evaluating college-level scientific problem-solvingabilities of large language models. arXiv preprint arXiv:2307.10635 , 2023.[40] Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, and Heng Ji. Mint: Evalu-ating llms in multi-turn interaction with tools and language feedback. arXiv preprint arXiv:2309.10691 ,2023.[41] Peter C Wason. On the failure to eliminate hypotheses in a conceptual task. Quarterly journal ofexperimental psychology , 12(3):129–140, 1960.7[42] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, DennyZhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neuralinformation processing systems , 35:24824–24837, 2022.[43] John Yang, Carlos E Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik Narasimhan, andOfir Press. Swe-agent: Agent-computer interfaces enable automated software engineering. arXiv preprintarXiv:2405.15793 , 2024.[44] Lin Yang, Chen Yang, Shutao Gao, Weijing Wang, Bo Wang, Qihao Zhu, Xiao Chu, Jianyi Zhou, GuangtaiLiang, Qianxiang Wang, et al. An empirical study of unit test generation with large language models.arXiv preprint arXiv:2406.18181 , 2024.[45] Shuo Yang, Wei-Lin Chiang, Lianmin Zheng, Joseph E Gonzalez, and Ion Stoica. Rethinking benchmarkand contamination for language models with rephrased samples. arXiv preprint arXiv:2311.04850 , 2023.[46] Qihao Zhu, Daya Guo, Zhihong Shao, Dejian Yang, Peiyi Wang, Runxin Xu, Y Wu, Yukun Li, Huazuo Gao,Shirong Ma, et al. Deepseek-coder-v2: Breaking the barrier of closed-source models in code intelligence.arXiv preprint arXiv:2406.11931 , 2024.8A Appendix / Supplemental MaterialA.1 Experimental DetailsA.1.1 Model ConfigurationsModels are accessed when possible with default configurations and settings provided by theirrespective proprietary APIs (i.e. Anthropic, OpenAI, Mistral, DeepSeek, and Vertex APIs). Llama3.1 models are run via the Amazon Bedrock API. Gemma 2 9b and Llama 3 70B are run via the GroqAPI [1].A.1.2 PromptsAll models begin from the same initial prompt.You are an AI tasked with discovering a hidden rule that takes three numbersas input and returns a boolean. These numbers can be any float or integer numberthat can be typed in 3 or fewer digits, not including signs and decimal characters(e.g. -11.3 is a valid value). In each response, you will provide your step-by-stepreasoning for your future plans on this task, and then you will take one of twoactions:1.Provide a single test case to observe the output of a rule (e.g.“Test Case: ```(1, 2, 3) ```” which corresponds to x=1, y=2, z=3. Youmay only propose one test case at a time.2.Provide a final “guess” in the form of a Python lambda function (e.g.“Final Guess: ```lambda x,y,z: x < y < z ```”). You only get toselect this action one time, so don’t use it unless you are confident or areforced to.You will have up to 30 attempts at test cases, and you will have exactly oneattempt at guessing the final function. Make sure all of the tests you provide areunique from other ones you have already tried – the functions are all deterministic,and will be the same every time. You should prioritize getting the rule as correctas possible, since an incorrect answer will get no points. You should thereforenot guess the function unless you are quite confident, or if you are very close torunning out of attempts.You must return your responses in the format laid out above at the very bottomof your message. For example, if you want to submit a test case, you must con-clude with the string “ Test Case: ```(x,y,z) ```”, where x,y,z are replacedwith your guesses. If you want to submit a final guess, you must conclude withthe string “ Final Guess: ```<function> ```” where <function> is replacedwith a Python lambda function. Do not include any comments or additional text onthe same lines as these two things.Make sure to include your reasoning for your tests – what you are testing for,why you selected that test, etc.Responses by the models are pulled out via regular expressions matching the formatting in the prompt.We find that after controlling for various formatting eccentricities (Python blocks, markdown, boldcharacters, etc) that all listed models are capable of providing test cases in this format.A.1.3 Verifying Equivalent FunctionsTo verify two provided lambda functions are equivalent, we generate a large number of test cases andensure the provided rules match on all outputs. Specifically, we create three sets of cases:1.Integer Grid Cases - We construct a 40x40x40 grid of integer triplets from -20 to 20,inclusive, leading to 64,000 triplet cases.2.Random Uniform Cases - We construct a list of 10,000 uniformly random float tripletsfrom -200 to 200, inclusive.93.Special Cases - We hand-design a small set of test cases to ensure all hidden rules in thefull split are adequately tested.We mark a rule as incorrect if any test cases generated above show different behavior between thehidden rule and the guessed rule, and mark it correct otherwise.A.2 Evaluating Function Inversion CapabilityTo succeed at the WILT task, models must succeed at both gathering evidence (hypothesis reduction)and drawing logical conclusions from evidence (function inversion). To distinguish a model’s abilityto do one or the other, we perform an experiment where models attempt to guess the rule using testsfrom another model. Rather than asking the model to gather evidence, we directly provide it all thereasoning-stripped3input-output pairs generated by another model for the same rule, and ask themodel to guess the rule in a single turn. Without the original reasoning and subsequent observationbefore and after each test case, we expect most models to underperform relative to the full test evenwhen provided their own cases. Likewise, we expect models stronger at single-turn to perform betterin this experiment relative to other models subject to the same evidence. Our results can be found inFigure 2.This reveals some notably varied capabilities among the top performing models. While Claude Sonnet3.5 was the narrowly highest performing model on the full test, this experiment reveals importantcontext for why that may be. We see that it performs better than most other models subject to thesame evidence, but proposes test cases that are generally slightly less useful for other models withoutthe attached justifications. Likewise, without its own reasoning for each case, Sonnet’s performancedegraded substantially more than other models in the same setting, suggesting a larger component ofits success was its reasoning, compared to the test cases alone.o1-mini shows highly superior single-turn capability in this test, but notably performs relatively lesswell when provided its own tests rather than the tests of another high-performing model. Whenpaired with cases from chatgpt-4o-latest, it successfully guessed 19 of the 50 rules, far surpassing thebest-performing single model in the full test.Despite having many repeated tests and messages which were generally similar to each other (seeTables 1 and 3), we see that Mistral Large performs well with other models’ tests and provides acorpus of tests useful to other models. We note its comparable performance to chatgpt-4o-latest bothalong the rows (model’s performance with other model’s test cases) and columns (performance ofother models with the model’s test cases), reinforcing its strong performance in the full test.Critically, we show that models have non-identical strengths and weaknesses, and that success on thefull WILT task depends on strong performance on a few key metrics of interest. Even without theattached reasoning for test cases, composing the test case generation of one model and the functioninversion of another model very often outperforms using a single strong model for both subtasks.This has some notable implications for future LLM applications: in [ 37] it was shown that severallanguage models coordinated by an aggregator LLM could outperform strong single models. Futurework could explore coordinating models for both single-turn and multi-turn oriented tasks, potentiallyleading to improved performance.3We strip reasoning to avoid conflating confirmation bias in the attached reasoning, rather than just theaccumulated evidence.10Figure 2: Models have varying success when using test cases proposed by other models. o1-ministands out as having much stronger single-turn reasoning in this experiment, but it performs poorlywith its own tests.A.3 Response Complexity EvaluationTable 2: Response Complexity (Median)Model Num Operators Response Length Set InclusionClaude 3.5 Sonnet 3 34.5 0.08o1-mini 2024-09-12 3 29.0 0.79o1-preview-2024-09-12 2 25 0.01chatgpt-4o-latest 5 39 1.0Mistral Large 2 5 39 1.0GPT-4o 2024-08-06 4.5 39 0.34Llama 3.1 405B 2 30 0.52Gemini 1.5 Flash 0827 4 35.5 0.00Llama 3.1 70B 2 25 1.00Deepseek-v2.5-chat 3 29 0.27GPT-4o-mini 5 39.5 0.05Gemini 1.5 Pro 3 38 0.27Gemini 1.5 Flash 3 28 0.06Deepseek-v2-coder 5 39 0.88Deepseek-v2-chat 2 28 0.00Llama 3.1 8b 2 23 0.05Open Mistral Nemo 5 46 1.00Claude 3 Haiku 5 31 0.49Gemini 1.5 Flash 8b 0827 3 29 0.02Gemma 2 9B 5 38 0.52Table 2 shows the complexity of each model by the three aforementioned metrics.11In Figure 3 we show the set inclusion ratios in the case where a model is provided another model’stest cases. That is, we show whether an error in the final guess of a model is likely to be smaller / lessthan one (e.g. x < y < z instead of x≤y≤z), or larger / greater than one (e.g. x >0instead ofx >0∧x <5). This seems more test-case dependent than other complexity benchmarks, wheretests provided by certain models seem to lead to smaller hypotheses.Figure 4 shows the length of the string used to guess the rule by each model, which is comparativelymore consistent for a model. We find this to be fairly consistent across settings, with most modelshovering near 45.Figure 3: Set inclusion ratios will differ in the same model when provided another model’s tests, andmodels provided the same tests have different set inclusion behaviors.Figure 5 shows the median number of operators used by a model when given another model’stest cases, as in Section A.2. This can be used to estimate the model’s bias toward guess-ing a simpler rule. o1-preview, for example, tends to use fewer operators than the other mod-els subject to the same evidence. This also highlights potential discrepancies in the complex-ity resulting from a model’s test cases and the complexity of its final guesses. When modelsuse test cases generated by DeepSeek Chat v2.5, they tend to use fewer operators, likely be-cause the test cases are fully encompassed by simple rules like lambda x,y,z: False . Con-versely, when given other models’ tests, DeepSeek Chat v2.5 responds with a high level ofcomplexity compared to other models. Its guesses often overfit a complicated rule to the testcases (e.g., it guesses a rule of lambda x, y, z: y == x or y == z or y == (x + z) / 2when given the following true test cases for even numbers: (2.0, 2.0, 2.0), (2.0, 2.0, 4.0),(−2.0, 4.0, 6.0), (0.0, 2.0, 4.0), (2.2, 4.0, 6.0), (2.0, 4.0, 6.0).)A.4 Test Case NoveltyTest case novelty is an interesting second order metric for success upon the WILT task. Broadlyspeaking, models that reuse fewer tests are rewarded with more information for which to solve thetask. Models that very rarely re-propose a test tend to perform very well upon WILT, but the inverseis not necessarily true – models that loop tests often still arrive at the right answer.Repeated tests are useful for bifurcating the types of failures on WILT – one being the doom loopphenomenon, and the other being reasoning capability conditioned upon some available evidence.12Figure 4: Models tend to have fairly consistent guess string lengths, with some exceptions.One hypothesis for the observed behavior is that certain models are primarily oriented towardssingle-turn scenarios, and that one type of failure need not imply the other. DeepSeek Chat v2.5, forexample, demonstrates strong initial hypothesis space reduction compared to other models, whichallows it notably better performance on WILT compared to other models with similar repeat counts(e.g. open-mistral-nemo). Strong single-turn performance and deductive reasoning capabilities canhelp salvage performance from a model that demonstrates difficulty with multi-turn inductive logic.Following Aidan-bench [ 18], we provide Table 3, containing additional novelty metrics. Theseinclude:1.Average novelty - which reports the average cosine similarity between each message’s gpt-3embeddings and the closest previous message within the same test.2.Average minimum novelty - which reports the average minimum cosine similarity betweeneach message’s gpt-3 embeddings and the closest previous message within the same test.These capture an additional dimension of “test novelty” and “message novelty”, where models maypropose the same tests for different reasons, or repeat previously generated messages verbatim. Webold results which are best within the class of high performing models. We note that models thatpropose fewer tests before guessing (e.g. o1-preview, o1-mini) should see lower values for all ofthese compared to models that tend to use many tests before guessing (e.g. mistral-large) even forotherwise equally performing models. We also show the average novelty by turn in Figure 6, whichcaptures the model’s novelty scores across turns.13Figure 5: The median number of operators provides a window into guessed rules being more or lesscomplicated compared to guesses provided by other models.Figure 6: Cosine similarity by turn for selected models. Models have a higher novelty score near theend, since the final guess is often much different from previous messages, which are all proposed testcases.A.5 Selected Lite Split ResultsTable 4 contains selected results for the lite split . We show this much easier split produces a similarordering, suggesting that the bulk of the separation in our benchmark lies in the easier tests.14Table 3: Model Novelty MetricsModel Repeats Avg. Novelty Avg. Min. Noveltyclaude-3.5-sonnet-20240620 27 0.22 0.08o1-mini-2024-09-12 3 0.24 0.09o1-preview-2024-09-12 3 0.28 0.11chatgpt-4o-latest 38 0.21 0.10mistral-large-2407 142 0.13 0.04gpt-4o-2024-08-06 26 0.22 0.11llama3.1 405B 30 0.20 0.06gemini-1.5-flash-exp-0827 108 0.19 0.08llama3-70b 74 0.19 0.06deepseek-chat-v2.5 489 0.09 0.02gpt-4o-mini 54 0.19 0.10gemini-1.5-pro 41 0.26 0.13gemini-1.5-flash 123 0.19 0.08deepseek-coder 334 0.11 0.03deepseek-chat-v2 335 0.10 0.03llama-3.1-8b 223 0.13 0.01open-mistral-nemo 400 0.12 0.02claude-3-haiku-20240307 11 0.29 0.11gemini-1.5-flash-8b-exp-0827 386 0.14 0.05gemma2-9b-it 70 0.34 0.18Table 4: Lite Split MetricsModel Accuracy Avg. GuessesClaude-3.5-Sonnet 8/10 13.68GPT-4-Turbo 7/10 12.47GPT-4o 6/10 13.86DeepSeek-V2-Coder 6/10 23.46Llama 3 70B 4/10 15.64DeepSeek-V2-Chat 2/10 24.82Llama 3 8B 1/10 24.0GPT-3.5-Turbo 1/10 2.9A.6 Test DescriptionsTable 5 contains the functions found in the lite split of WILT. Table 6 contains the functions found inthefull split of WILT. These are implemented as Python lambda functions.15Table 5: Lite Split: Complete Set of TestsTest ID Description1 x > y > z2 x < y < z3 x≥y≥z4 x≤y≤z5 x=y=z6 x̸=y∧y̸=z∧x̸=z7 x <0∧y <0∧z <08 x+y=z9 x·y=z10 x < y∧y > zB Failure Case ExamplesB.1 Doom Loop Without Reasoning FailureTable 7 shows an example of a model (Deepseek-Chat-v2.5) entering a doom loop during the testproposal phase, but where that doom loop does not constitute a reasoning failure. Reasoning has beenremoved for brevity.B.2 Approximately CorrectTable 8 shows an example of o1-mini getting a very difficult test case (co-primality) approximatelycorrect, failing only because it adds an additional arbitrary constraint upon the magnitude of thevalues despite no such constraint existing. Reasoning has been removed for brevity.B.3 Confirmation BiasTable 9 shows an example of o1-preview failing a relatively easy test case (x≥y≥z)due to aconfirmation bias error. The model uses only 9 test cases and correctly identifies that the rule returnstrue when all three are equal, but submits five test cases confirming that and none exploring otherrules which are true when three items are equal. Reasoning has been removed for brevity.B.4 Same Test For New ReasonTable 10 shows an example of Claude Sonnet 3.5 repeating a test, where it will mistakenly generatethe same test for a different stated reason. We see the model notice it has repeated a test only after ithas already submitted the test. Other tests have been removed for brevity.16Table 6: Full Split: Complete Set of TestsTest ID DescriptionEasy Tests1 x > y > z2 x < y < z3 x≥y≥z4 x≤y≤z5 x < z < y6 x≤z≤y7 z < x < y8 z≤x≤y9 x=y=z10 x̸=y∧y̸=z∧x̸=z11 x <0∧y <0∧z <012 x >0∧y >0∧z >013 xmod 2 = 0 ∧ymod 2 = 0 ∧zmod 2 = 014 xmod 2 ̸= 0∧ymod 2 ̸= 0∧zmod 2 ̸= 0Medium Tests15 x+y=z16 x·y=z17 x+z=y18 x·z=y19 y+z=x20 y·z=x21 max(x, y, z ) =x22 max(x, y, z ) =y23 max(x, y, z ) =z24 min(x, y, z ) =x25 min(x, y, z ) =y26 min(x, y, z ) =z27 x+y+z= 028 x·y·z= 029 (x+y+z) mod 2 = 030 (x+y+z) mod 2 = 131 (x·y·z) mod 2 = 032 (x·y·z) mod 2 = 133 (x+y)/2 =z34 −5≤x, y, z ≤535 −10≤x, y, z ≤1036 −5≤x, y, z ≤037 0≤x, y, z ≤538 −2≤x, y, z ≤239 −20≤x, y, z ≤20Very Hard Tests40 x2+y2=z241 x2+z2=y242 y2+z2=x243 ⌊x⌋&⌊y⌋=⌊z⌋(bitwise AND)44 ⌊x⌋|⌊y⌋=⌊z⌋(bitwise OR)45 ⌊x⌋ ⊕ ⌊y⌋=⌊z⌋(bitwise XOR)46 x, y, z ∈Z∧gcd(x, y) = gcd( y, z) = gcd( z, x) = 147p⌊|x|⌋2=⌊|x|⌋ ∧p⌊|y|⌋2=⌊|y|⌋ ∧p⌊|z|⌋2=⌊|z|⌋48 0< xmod 1 ∧0< ymod 1 ∧0< zmod 149 0< xmod 1 < ymod 1 < zmod 1 <150 x < y < z ∧0< z−x≤117Table 7: DeepSeek-v2.5 Enters a doom loop but still arrives at the correct answerTurn Assistant Harness1 Test Case: (1, 1, 1) (1.0, 1.0, 1.0): True.2 Test Case: (1, 2, 3) (1.0, 2.0, 3.0): True.3 Test Case: (3, 2, 1) (3.0, 2.0, 1.0): True.4 Test Case: (2, 1, 3) (2.0, 1.0, 3.0): True.5 Test Case: (-1, -2, -3) (-1.0, -2.0, -3.0): False.6 Test Case: (-1, 2, -3) (-1.0, 2.0, -3.0): False.7 Test Case: (0, 1, 2) (0.0, 1.0, 2.0): False.8 Test Case: (2, 3, 1) (2.0, 3.0, 1.0): True.9 Test Case: (0, 1, 1) (0.0, 1.0, 1.0): False.10 Test Case: (0.001, 1, 1) (0.001, 1.0, 1.0): True.11 Test Case: (-0.001, 1, 1) (-0.001, 1.0, 1.0): False.12 Test Case: (999, 1, 1) (999.0, 1.0, 1.0): True.13 Test Case: (-999, 1, 1) (-999.0, 1.0, 1.0): False.14 Test Case: (0.0001, 1, 1) (0.0001, 1.0, 1.0): True.15 Test Case: (-0.0001, 1, 1) (-0.0001, 1.0, 1.0): False.16 Test Case: (999.999, 1, 1) (999.999, 1.0, 1.0): True.17 Test Case: (-999.999, 1, 1) (-999.999, 1.0, 1.0): False.18 Test Case: (0.0001, 999.999, 1) (0.0001, 999.999, 1.0): True.19 Test Case: (-0.0001, 999.999, 1) (-0.0001, 999.999, 1.0): False.20 Test Case: (0.0001, -999.999, 1) (0.0001, -999.999, 1.0): False.21 Test Case: (0.0001, 999.999, -0.0001) (0.0001, 999.999, -0.0001): False.22 Test Case: (0.0001, 999.999, -999.999) (0.0001, 999.999, -999.999): False.23 Test Case: (0.0001, 999.999, 0.0001) (0.0001, 999.999, 0.0001): True.24 Test Case: (0.0001, 0.0001, 999.999) (0.0001, 0.0001, 999.999): True.25 Test Case: (999.999, 0.0001, 0.0001) (999.999, 0.0001, 0.0001): True.26 Test Case: (0.0001, 999.999, 0.0001) (0.0001, 999.999, 0.0001): True.27 Test Case: (0.0001, 0.0001, 999.999) (0.0001, 0.0001, 999.999): True.28 Test Case: (999.999, 0.0001, 0.0001) (999.999, 0.0001, 0.0001): True.29 Test Case: (0.0001, 999.999, 0.0001) (0.0001, 999.999, 0.0001): True.30 Test Case: (0.0001, 0.0001, 999.999) (0.0001, 0.0001, 999.999): True.31 Final Guess: lambda x, y, z: x > 0andy >0andz >0Congratulations! Your guess is cor-rect.18Table 8: o1-mini nearly gets the coprimality case correct, but adds bounds for no concrete reason.Turn Assistant Harness1 Test Case: (1, 2, 3) (1.0, 2.0, 3.0): True.2 Test Case: (3, 2, 1) (3.0, 2.0, 1.0): True.3 Test Case: (1, 1, 1) (1.0, 1.0, 1.0): True.4 Test Case: (-1, 2, 3) (-1.0, 2.0, 3.0): True.5 Test Case: (-1, -2, -3) (-1.0, -2.0, -3.0): True.6 Test Case: (0, 0, 0) (0.0, 0.0, 0.0): False.7 Test Case: (0, 1, 2) (0.0, 1.0, 2.0): False.8 Test Case: (1.5, 2.5, 3.5) (1.5, 2.5, 3.5): False.9 Test Case: (1, 2, 3.0) (1.0, 2.0, 3.0): True.10 Test Case: (1, 2, 3.1) (1.0, 2.0, 3.1): False.11 Test Case: (1, 2, -3) (1.0, 2.0, -3.0): True.12 Test Case: (10, -20, 30) (10.0, -20.0, 30.0): False.13 Test Case: (4, 5, 6) (4.0, 5.0, 6.0): False.14 Test Case: (-2, -1, 3) (-2.0, -1.0, 3.0): True.15 Test Case: (3, 3, 4) (3.0, 3.0, 4.0): False.16 Test Case: (1, -1, 2) (1.0, -1.0, 2.0): True.17 Test Case: (-3, 3, -2) (-3.0, 3.0, -2.0): False.18 Test Case: (2, -2, 2) (2.0, -2.0, 2.0): False.19 Test Case: (-1, 2, -2) (-1.0, 2.0, -2.0): False.20 Test Case: (1, 3, 2) (1.0, 3.0, 2.0): True.21 Test Case: (1, 2, 2) (1.0, 2.0, 2.0): False.22 Test Case: (-1, -2, 3) (-1.0, -2.0, 3.0): True.23 Test Case: (2, 3, -1) (2.0, 3.0, -1.0): True.24 Final Guess: lambda x, y, z:all(isinstance(n, int) and n̸= 0 and−3≤n≤3for n in [x, y, z]) andmath.gcd(x, y) == 1 and math.gcd(x, z) ==1 and math.gcd(y, z) == 1Sorry, that’s not the correct rule.19Table 9: o1-preview fails early on an easy test case despite having a wealth of remaining tests, becauseit latched on to an early behavior and sought to confirm it rather than falsify it.Turn Assistant Harness1 Test Case: (2, 3, 6) (2.0, 3.0, 6.0): False.2 Test Case: (2, 3, 5) (2.0, 3.0, 5.0): False.3 Test Case: (1, 1, 1) (1.0, 1.0, 1.0): True.4 Test Case: (2, 2, 2) (2.0, 2.0, 2.0): True.5 Test Case: (1, 1, 2) (1.0, 1.0, 2.0): False.6 Test Case: (-1, -1, -1) (-1.0, -1.0, -1.0): True.7 Test Case: (1.5, 1.5, 1.5) (1.5, 1.5, 1.5): True.8 Test Case: (0, 0, 0) (0.0, 0.0, 0.0): True.9 Test Case: (1, 2, 3) (1.0, 2.0, 3.0): False.10 Final Guess: lambda x, y, z: x == y == z Sorry, that’s not the correct rule.Table 10: Claude Sonnet 3.5 double submits (2,3,5)to test x+y=zandall(isprime (n))Turn Assistant Harness1-5 ... ...6 Still False. Let’s try a different approachand consider some mathematical relation-ships. Reasoning for the next test: 1. Let’stest if the rule involves a simple arithmeticrelationship. 2. We’ll use numbers wherethe third is the sum of the first two. 3. We’lluse small integers for clarity. Test Case: (2,3, 5)(2.0, 3.0, 5.0): False.7-8 ... ...9 We’re still not getting a True result. Let’stry a different approach and consider somenumber theory concepts. Reasoning for thenext test: 1. Let’s test if the rule involvesprime numbers. 2. We’ll use three consecu-tive prime numbers. 3. This will also test ifthere’s a special relationship with consecu-tive numbers that are not simply ascending.Test Case: (2, 3, 5)(2.0, 3.0, 5.0): False.10 We’ve already tested (2, 3, 5) before, andit’s still False. Let’s try a different numbertheory concept......20C Full Conversation ExamplesC.1 Doom Loop on Coding TaskBelow we provide a full example of a “doom loop” interacting with an LLM in a multi-turn conver-sation in a real world coding task. We use Llama 3 8B to generate code for a pytorch variationalautoencoder, and after 2 turns of debugging it repeats an already mentioned suggestion.### User:Can you write me some code which trains a V AE on cifar10 using pytorch?### Llama3-8b-8192:Here’s a basic example of how you can train a Variational Autoencoder (V AE) onCIFAR-10 using PyTorch:```Pythonimport torchimport torch.nn as nnimport torch.optim as optimfrom torchvision import datasets, transformsfrom torch.utils.data import DataLoaderimport numpy as np# Define the VAE modelclass VAE(nn.Module):def __init__(self):super(VAE, self).__init__()self.encoder = nn.Sequential(nn.Conv2d(3, 32, kernel_size=3),nn.ReLU(),nn.MaxPool2d(2, 2),nn.Conv2d(32, 64, kernel_size=3),nn.ReLU(),nn.MaxPool2d(2, 2),nn.Flatten())self.fc1 = nn.Linear(64*4*4, 128)self.fc2 = nn.Linear(128, 128)self.fc3 = nn.Linear(128, 64*4*4)self.fc4 = nn.Linear(64*4*4, 3*32*32)def reparameterize(self, mu, logvar):std = torch.exp(0.5*logvar)eps = torch.randn_like(std)return mu + eps*stddef encode(self, x):h1 = self.encoder(x)h2 = torch.relu(self.fc1(h1))h3 = torch.relu(self.fc2(h2))mu = self.fc3(h3)logvar = self.fc4(h3)return mu, logvardef decode(self, z):h1 = torch.relu(self.fc4(z))h2 = torch.relu(self.fc3(h1))h3 = torch.relu(self.fc2(h2))h4 = torch.relu(self.fc1(h3))21return torch.sigmoid(self.fc4(h4))def forward(self, x):mu, logvar = self.encode(x)z = self.reparameterize(mu, logvar)return self.decode(z), mu, logvar# Define the loss functiondef loss_function(recon_x, x, mu, logvar):BCE = nn.functional.binary_cross_entropy(recon_x, x.view(-1,3*32*32), reduction= 'sum') ,→KLD = 0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())return BCE + KLD# Load the CIFAR-10 datasettransform = transforms.Compose([transforms.ToTensor()])trainset = datasets.CIFAR10(root= './data ', train=True, download=True,transform=transform) ,→trainloader = DataLoader(trainset, batch_size=64, shuffle=True)testset = datasets.CIFAR10(root= './data ', train=False, download=True,transform=transform) ,→testloader = DataLoader(testset, batch_size=64, shuffle=False)# Initialize the VAE model and the optimizervae = VAE()optimizer = optim.Adam(vae.parameters(), lr=0.001)# Train the VAEfor epoch in range(100):for i, (x, _) in enumerate(trainloader):x = x.to(device)optimizer.zero_grad()recon_x, mu, logvar = vae(x)loss = loss_function(recon_x, x, mu, logvar)loss.backward()optimizer.step()if i % 100 == 0:print( 'Epoch: %d, Batch: %d, Loss: %.3f '% (epoch+1, i+1,loss.item())) ,→# Test the VAEtest_loss = 0with torch.no_grad():for x, _ in testloader:x = x.to(device)recon_x, mu, logvar = vae(x)loss = loss_function(recon_x, x, mu, logvar)test_loss += loss.item()test_loss /= len(testloader)print( 'Test Loss: %.3f '% test_loss)```This code defines a V AE model, loads the CIFAR-10 dataset, and trains the modelusing the Adam optimizer. The loss function is the sum of the binary cross-entropyloss and the Kullback-Leibler divergence. The model is trained for 100 epochs,and the test loss is printed after each epoch.Note that you need to install the ‘torchvision‘ package to use the CIFAR-10 dataset.You can install it using ‘pip install torchvision‘. Also, you need to specify thedevice (GPU or CPU) where you want to run the model. You can do this by adding22`device = torch.device( 'cuda 'if torch.cuda.is_available()else 'cpu')`before the training loop.### User:I’m getting the following error trying to run this code:RuntimeError Traceback (most recent call last)<ipython-input-3-471d3f454439> in <cell line: 70>()72 x = x.to(device)73 optimizer.zero_grad()---> 74 recon_x, mu, logvar = vae(x)75 loss = loss_function(recon_x, x,mu, logvar)76 loss.backward()6 framesin forward(self, input)115116 def forward(self, input: Tensor) -> Tensor:--> 117 return F.linear(input, self.weight, self.bias)118119 def extra_repr(self) -> str:RuntimeError: mat1 and mat2 shapes cannot be multiplied (64x2304 and1024x128) ,→### llama3-8b-8192:The error message indicates that there is a mismatch between the shapes of theinput and the weights in one of the fully connected layers in your V AE model.The error message specifically says that the shapes are 64x2304 and 1024x128,which means that the input has a shape of (64, 2304) and the weights have a shapeof (1024, 128).The issue is likely due to the fact that the input to the fully connected layer isnot what you expect. In the ‘encode‘ method of your V AE model, you have thefollowing lines:```h1 = self.encoder(x)h2 = torch.relu(self.fc1(h1))h3 = torch.relu(self.fc2(h2))mu = self.fc3(h3)logvar = self.fc4(h3)```The ‘h3‘ tensor has a shape of (64, 128) because ‘self.fc2‘ is a fully connected layerwith 128 output units. However, ‘self.fc3‘ and ‘self.fc4‘ are also fully connectedlayers, and they are expecting an input with a shape of (batch_size, 128)where batch_size is the number of samples in the batch.In your case, the batch size is 64, so the input to ‘self.fc3‘ and ‘self.fc4‘ should be(64, 128), not (64, 128) as it is now.To fix this issue, you can try to flatten the output of ‘self.encoder‘ before passing itto the fully connected layers. You can do this by adding the following line after ‘h1= self.encoder(x)‘:```h1 = h1.view(-1, 64*4*4)```This will flatten the output of ‘self.encoder‘ into a tensor with a shape of(batch_size, 64*4*4) , which should be compatible with the weights of23‘self.fc1‘, ‘self.fc2‘, ‘self.fc3‘, and ‘self.fc4‘.### User:Okay, I added that line. Now I’m getting a similar error with different values:RuntimeError: mat1 and mat2 shapes cannot be multiplied(144x128 and 1024x3072)### llama3-8b-8192:The error message indicates that there is a mismatch between the shapes of theinput and the weights in one of the fully connected layers in your V AE model.The error message specifically says that the shapes are 144x128 and 1024x3072,which means that the input has a shape of (144, 128) and the weights have a shapeof (1024, 3072).The issue is likely due to the fact that the input to the fully connected layer isnot what you expect. In the ‘encode‘ method of your V AE model, you have thefollowing lines:```h1 = self.encoder(x)h2 = torch.relu(self.fc1(h1))h3 = torch.relu(self.fc2(h2))mu = self.fc3(h3)logvar = self.fc4(h3)```The ‘h3‘ tensor has a shape of (64, 128) because ‘self.fc2‘ is a fully connected layerwith 128 output units. However, ‘self.fc3‘ and ‘self.fc4‘ are also fully connectedlayers, and they are expecting an input with a shape of (batch_size, 128)where batch_size is the number of samples in the batch.In your case, the batch size is 64, so the input to ‘self.fc3‘ and ‘self.fc4‘ should be(64, 128), not (64, 128) as it is now.To fix this issue, you can try to flatten the output of ‘self.encoder‘ before passing itto the fully connected layers. You can do this by adding the following line after ‘h1= self.encoder(x)‘:```h1 = h1.view(-1, 128)```This will flatten the output of ‘self.encoder‘ into a tensor with a shape of(batch_size, 128) , which should be compatible with the weights of ‘self.fc1‘,‘self.fc2‘, ‘self.fc3‘, and ‘self.fc4‘.However, the error message also mentions that the input has a shape of 144x128,which is not compatible with the weights of the fully connected layer. This suggeststhat there may be an issue with the architecture of your V AE model.To fix this issue, you can try to adjust the architecture of your V AE model to ensurethat the input to the fully connected layers is compatible with the weights.24 |
TPtXLihkny | Advancing Theorem Proving in LLMs throughLarge-Scale Synthetic DataHuajian Xin3∗Daya Guo1Zhihong Shao1Z.Z. Ren1Bo Liu1Chong Ruan1Wenda Li3Xiaodan Liang2,4†1DeepSeek2Sun Yat-sen University3University of [email protected], {guoday, zhihongshao, rzz, zhuqh, chong.ruan}@deepseek.com,[email protected], [email protected], [email protected] assistants like Lean have revolutionized mathematical proof verification byproviding high levels of accuracy and reliability. Although large language models(LLMs) have demonstrated potential in mathematical reasoning, their advancementin formal theorem proving is hindered by the scarcity of large, high-quality trainingdatasets. To address this challenge, we present a novel approach to generateextensive Lean 4 proof data from natural language mathematical problems atthe high school and undergraduate levels. Specifically, we synthesize 8 millionformal statements with corresponding proofs, leveraging this dataset to fine-tunethe DeepSeekMath 7B model. The resulting model, DeepSeek-Prover, achievesa pass rate of 50% on the Lean 4 miniF2F benchmark, surpassing the previousstate-of-the-art result of 41.0%. These findings underscore the potential of large-scale synthetic data in significantly enhancing the theorem-proving capabilities ofLLMs.1 IntroductionFormal mathematical languages, such as Lean [Moura and Ullrich, 2021], Isabelle [Paulson, 1994],and Coq [The Coq Development Team], have enabled the development of computer-verifiableproofs [Avigad, 2023]. However, constructing formal proofs remains a labor-intensive process thatrequires both substantial effort and specialized expertise—often challenging even for experiencedmathematicians. To ease the burden of writing formal proofs, recent approaches [Polu and Sutskever,2020, Jiang et al., 2021, Han et al., 2021, Polu et al., 2022, Lample et al., 2022, Jiang et al., 2022,Yang et al., 2023] have explored the use of language models to automatically generate proofs for givenformal statements. Despite this progress, the performance of these methods has been constrained bythe limited availability of high-quality formal proof data.To address this limitation, we propose an iterative methodology for generating large-scale Lean 4proof data from natural language mathematical problems. Our approach utilizes large languagemodels to translate competition-level problems from high school and undergraduate mathematicsinto formal statements, followed by the generation of verifiable proofs using the Lean 4 prover. Themodel is fine-tuned with this synthetic data, and the data generation process is repeated to furtherrefine the model. This iterative pipeline, executed with the continuously enhanced DeepSeekMath 7Bmodel [Shao et al., 2024], continues until no further improvement in model performance is observed.Ultimately, our final dataset comprises 8 million formal statements paired with corresponding proofs.∗Work done during the internship at DeepSeek.†Corresponding author.38th Conference on Neural Information Processing Systems (NeurIPS 2024).Informal MathProblemsSynthesized DataProving Negated StatementsFormal VerifierSynthesized DataStatements Proving5. RepeatDeepSeek-Pr overDeepSeek-Pr over DeepSeek-Pr overDeepSeek-Pr over1. Autoformalization 2. Model Scoring and Hypothesis Rejection4. Fine-tuning ProverProving Original Statements3. Statement ProvingHigh-Quality FormalMath StatementFormal Statement withCorrect ProofsCandidate FormalMath StatementFormal Statement withCorrect ProofsFormal MathStatementsFigure 1: Overview of our approach. Our pipeline starts by autoformalizing a broad range ofmathematical problems into formal statements. Inconsistent or overly simplistic statements arediscarded. The remaining high-quality statements are then iteratively processed, attempting either toprove or refute them (i.e., prove their negation), until one is verified by the Lean 4 prover. The verifiedpairs of natural language problems and their formalization, formal statements (or their negations) andtheir corresponding proofs are used to fine-tune the model, enhancing its theorem-proving ability.This process repeats until no further performance gains are observed.We evaluate the theorem proving performance in Lean 4 of the resulting model, DeepSeek-Prover,using the miniF2F benchmark [Zheng et al., 2022]. DeepSeek-Prover achieves a pass rate of 50% onthe test set of the miniF2F, surpassing the previous state-of-the-art result of 41.0%. Ablation studiesreveal that our iterative training process progressively enhances the model’s problem-solving abilitywith each iteration, further demonstrating the effectiveness of our approach.2 ApproachOur approach consists of an iterative cycle of dataset synthesis and model training, as shown inFigure 1. Each phase is described in detail below.Informal Data Curation. We collect mathematical problems in natural language by scrapingonline resources containing high school and undergraduate exercises, exams, and competitions. Aftercleaning the data, we curated a dataset of 869,659 high-quality math problems, focusing on algebraand number theory.Model Initialization. The model, initialized from DeepSeekMath-Base 7B [Shao et al., 2024], isfine-tuned on MMA dataset [Jiang et al., 2023], which includes formal statements from Mathlib,the standard mathematical library for Lean 4. This enhances the model’s basic autoformalizationcapabilities. Additionally, we include theorem-proving data from LeanDojo [Yang et al., 2023],adapted from a next-tactic prediction task to full-proof generation. We refer to the resulting modeland its further improved versions as DeepSeek-Prover.Model Scoring and Hypothesis Rejection. Initially, many autoformalized statements were lowquality. To improve this, we introduced a scoring mechanism that prompts DeepSeek-Prover to assesseach statement using a chain-of-thought approach. Statements are categorized as "excellent," "good,""above average," "fair," or "poor." Statements rated "fair" or "poor" are discarded. Additionally, someprovable statements contained inconsistent hypotheses leading to vacuous conclusions. To filter these,DeepSeek-Prover attempts to prove the statement with False as the conclusion. Successful proofs of2these transformed statements reveal inconsistent hypotheses, and these statements are discarded. Thisprocess refines the dataset, leaving 712,073 high-quality formal statements for proof generation.Statement Proving. With a large corpus of high-quality formal statements, DeepSeek-Proverattempts to generate proofs. Since some synthesized statements may be semantically incorrect andtherefore unprovable, brute-force search would be inefficient. To optimize this, we exploit the logicalsymmetry between a statement and its negation. Dual concurrent proof searches are initiated for eachstatement—one for Γ⊢Pand another for Γ⊢ ¬P. DeepSeek-Prover samples up to ktimes for eachstatement until a valid proof for either is found. All validated proofs, whether for the statement orits negation, are used to further train DeepSeek-Prover. This method enriches the dataset with bothpropositions and their negations, even if the original propositions were incorrectly formalized.Iterative Enhancement. Since the pipeline relies on DeepSeek-Prover, improving its performanceafter each iteration is crucial. Verified pairs of formal statements (or their negations) and theircorresponding proofs are used to enhance the model’s theorem-proving capabilities. Pairs of naturallanguage problems and their formalized counterparts, when correctly proved, are also collected toimprove the model’s autoformalization abilities. With each cycle of refinement, DeepSeek-Prover’sperformance in both autoformalization and theorem proving incrementally improves, contributingto better dataset synthesis. This iterative process continues until no further performance gains areobserved.3 Experiments3.1 Main ResultsBenchmark and metric. We assess the theorem-proving capabilities of DeepSeek-Prover using theminiF2F benchmark [Zheng et al., 2022], which comprises 244 problems ranging from elementaryarithmetic to competition-level challenges, including those from the American Invitational Mathemat-ics Examination (AIME), the American Mathematics Competitions (AMC), and the InternationalMathematical Olympiad (IMO). Our experiments are based on the Lean 4 version of miniF2F, asprovided by the LeanDojo project [Yang et al., 2023]. The primary evaluation metric is pass@ K,which indicates the model’s ability to generate a correct proof within Kattempts.Evaluation results. Table 1 presents a comparison of various theorem-proving methods on theminiF2F-test dataset. DeepSeek-Prover achieves a pass rate of 50.0% with a 64×1024 samplingbudget, significantly surpassing the previous state-of-the-art pass rate of 41.0% achieved by HypertreeProof Search [Lample et al., 2022] with 64×5000 tree search steps. Notably, DeepSeek-Proverachieves a considerable pass rate of 46.3% with only 128 sampling steps, surpassing most previousmethods with few computational resources allocated. These results demonstrate DeepSeek-Prover’srobustness and its ability to tackle complex proofs even under varying resource constraints.Method Sampling Budget miniF2F-testCOPRA (Code Llama) [13] 500 5 .7%COPRA (GPT-3.5) [13] 60 9 .0%COPRA (GPT-4) [13] 60 26 .6%Llemma-7B [2] 3200 26 .2%Llemma-34B [2] 3200 25 .8%ReProver [16] - 26.5%LLMStep [15] 3200 27 .9%GPT-f [11] 64×4096 36 .6%Hypertree Proof Search [7] 64×5000 41 .0%DeepSeekMath-Base [12] 128 19.67%DeepSeek-Prover (Ours)128 46.3%64×128 48.8%64×1024 50.0%Table 1: Comparison of state-of-the-art methods on the miniF2F-test dataset.33.2 Ablation StudiesWe performed ablation studies to evaluate the contributions of different components of DeepSeek-Prover, using pass@128 as the evaluation metric on the miniF2F-test dataset. The results aresummarized in Table 2.Model #Tokens miniF2F-test- - 27.5%Mathlib 0.2B 31.2%Synthetic Data 3.1B 42.6%(a)Scored Class miniF2F-test"fair" and "poor" 38.1%"excellent", "good" and42.6%"above average"(b)Iteration miniF2F-test0 34.0%1 39.3%2 41.4%3 45.1%4 46.3%(c)Dataset Size miniF2F-test1,000 24.18%10,000 31.97%100,000 37.7%1,000,000 38.11%8,066,621 40.16%(d)Table 2: Ablation studies of data synthesis and model training components.Effectiveness of Large-Scale Autoformalization. We conducted a comparative analysis betweenour autoformalized synthetic dataset and the human-written Mathlib dataset (the standard mathemati-cal library of Lean 4), as shown in Table 2a. The models trained on our synthetic data substantiallyoutperformed those trained solely on Mathlib data. This process employed an expert iteration strategy[Polu and Sutskever, 2020], where formal proofs were iteratively generated and used to fine-tune themodel until performance improvements plateaued.Effectiveness of Formal Statement Scoring. We evaluated the impact of formal statement qualityon model performance by training on both high- and low-scoring proofs. As shown in Table 2b,models trained on high-score proofs outperformed those trained on low-score proofs by 4.5%. Thisresult highlights the importance of accurate statement scoring for filtering lower-quality statementsand improving overall performance.Effectiveness of Iterative Enhancement. The results in Table 2c show a clear correlation betweenthe number of iterations in data synthesis and improved theorem-proving performance. Each iterationrefines the model’s ability to handle increasingly complex proofs, resulting in significant performancegains. This iterative enhancement approach contributes to the generation of higher-quality syntheticdata and bolsters the model’s theorem-proving capabilities.Effectiveness of Scaling Synthetic Theorem-Proving Data. As illustrated in Table 2d, there isa clear relationship between the size of the synthetic dataset and the model’s performance on theminiF2F benchmark. Performance increases consistently with the size of the dataset, highlightingthe critical role of large-scale data in advancing automated theorem proving. This underscores thenecessity of systematic, large-scale data generation for further progress in the field.4 ConclusionIn this paper, we introduced a method for generating extensive synthetic proof data from high-schooland undergraduate-level mathematical competition problems. This approach significantly improvedthe performance of the DeepSeekMath 7B model in automated theorem proving (ATP) when trainedon this synthetic dataset. Our model surpasses all previous state-of-the-art methods on the miniF2F-test benchmark for theorem proving in Lean 4. While our current work primarily focuses on algebraand number theory problems at the middle school and undergraduate levels, future work will aim to4broaden the scope of mathematical domains, enhancing the general applicability of our approach to awider range of theorem proving tasks.Broader ImpactThe research presented in this paper has the potential to significantly advance automated theoremproving through the use of large-scale synthetic proof data generated from informal mathematicalproblems. This progress can facilitate the development of tools for formalizing mathematicalreasoning, supporting the broader mathematical and educational communities.References[1] J. Avigad. Mathematics and the formal turn, 2023.[2]Z. Azerbayev, H. Schoelkopf, K. Paster, M. Dos Santos, S. M. McAleer, A. Q. Jiang, J. Deng,S. Biderman, and S. Welleck. Llemma: An open language model for mathematics. In TheTwelfth International Conference on Learning Representations , 2024.[3]J. M. Han, J. Rute, Y . Wu, E. W. Ayers, and S. Polu. Proof artifact co-training for theoremproving with language models. arXiv preprint arXiv:2102.06203 , 2021.[4]A. Q. Jiang, W. Li, J. M. Han, and Y . Wu. Lisa: Language models of isabelle proofs. In 6thConference on Artificial Intelligence and Theorem Proving , pages 378–392, 2021.[5]A. Q. Jiang, W. Li, S. Tworkowski, K. Czechowski, T. Odrzygó ́ zd ́ z, P. Miło ́s, Y . Wu, andM. Jamnik. Thor: wielding hammers to integrate language models and automated theoremprovers. In Proceedings of the 36th International Conference on Neural Information ProcessingSystems , pages 8360–8373, 2022.[6]A. Q. Jiang, W. Li, and M. Jamnik. Multilingual mathematical autoformalization. arXiv preprintarXiv:2311.03755 , 2023.[7]G. Lample, M.-A. Lachaux, T. Lavril, X. Martinet, A. Hayat, G. Ebner, A. Rodriguez, andT. Lacroix. Hypertree proof search for neural theorem proving. In Proceedings of the 36thInternational Conference on Neural Information Processing Systems , pages 26337–26349,2022.[8]L. d. Moura and S. Ullrich. The lean 4 theorem prover and programming language. In AutomatedDeduction–CADE 28: 28th International Conference on Automated Deduction, Virtual Event,July 12–15, 2021, Proceedings 28 , pages 625–635. Springer, 2021.[9] L. C. Paulson. Isabelle a Generic Theorem Prover . Springer Verlag, 1994.[10] S. Polu and I. Sutskever. Generative language modeling for automated theorem proving. arXivpreprint arXiv:2009.03393 , 2020.[11] S. Polu, J. M. Han, K. Zheng, M. Baksys, I. Babuschkin, and I. Sutskever. Formal mathematicsstatement curriculum learning. arXiv preprint arXiv:2202.01344 , 2022.[12] Z. Shao, P. Wang, Q. Zhu, R. Xu, J. Song, M. Zhang, Y . Li, Y . Wu, and D. Guo. DeepSeek-Math: Pushing the limits of mathematical reasoning in open language models. arXiv preprintarXiv:2402.03300 , 2024.[13] A. Thakur, Y . Wen, and S. Chaudhuri. A language-agent approach to formal theorem-proving.InThe 3rd Workshop on Mathematical Reasoning and AI at NeurIPS , 2023.[14] The Coq Development Team. Coq. URL https://coq.inria.fr .[15] S. Welleck and R. Saha. llmstep: Llm proofstep suggestions in lean. In The 3rd Workshop onMathematical Reasoning and AI at NeurIPS’23 , 2023.5[16] K. Yang, A. M. Swope, A. Gu, R. Chalamala, P. Song, S. Yu, S. Godil, R. Prenger, andA. Anandkumar. Leandojo: theorem proving with retrieval-augmented language models. InProceedings of the 37th International Conference on Neural Information Processing Systems ,pages 21573–21612, 2023.[17] K. Zheng, J. M. Han, and S. Polu. miniF2F: a cross-system benchmark for formal olympiad-levelmathematics. In International Conference on Learning Representations , 2022.6 |
RtTNbJthjV | The Karp DatasetMason DiCiccoDepartment of Computer ScienceWorcester Polytechnic InstituteWorcester, MA [email protected] WordenDepartment of Computer ScienceWorcester Polytechnic InstituteWorcester, MA [email protected] OlsenDepartment of Computer ScienceWorcester Polytechnic InstituteWorcester, MA [email protected] GangaramDepartment of Computer ScienceWorcester Polytechnic InstituteWorcester, MA [email protected] ReichmanDepartment of Computer ScienceWorcester Polytechnic InstituteWorcester, MA [email protected] HeffernanDepartment of Computer ScienceWorcester Polytechnic InstituteWorcester, MA [email protected] the mathematical reasoning capabilities of Large Language Models(LLMs)isacentraltopicinthestudyofartificialintelligence. Thisnewdomainnecessitates the creation of datasets of reasoning tasks for both training andbenchmarkingtheperformanceofLLMs. Tothisend,weintroducethe Karpdataset :The first dataset composed of detailed proofs of NP-completeness reductions. Thereductions vary in difficulty, ranging from simple exercises of undergraduatecourses to more challenging reductions from academic papers. We compare theperformanceofstate-of-the-artmodelsonthistaskanddemonstratetheeffectoffine-tuning with the Karp dataset on reasoning capacity.1 IntroductionPerhapstheconceptreceivingthemostattentionintheoreticalcomputerscienceisthatofa reduction .Loosely speaking, a reduction between decision problems AandBis a mapping fsuch that: If xisaninputto A,then f(x)isaninputto B,andtheanswerto xis“yes”ifandonlyiftheanswerto f(x)is “yes.” Efficiently computable reductions can be used to leverage algorithms that solve Bin orderto solve A. Furthermore, efficient reductions can establish hardness results: if Ais believed to beintractable, and Areduces to Befficiently, then Bis intractable as well, since an efficient algorithmforBcanbeusedtosolve A. ThissimpleobservationisatthecoreofthetheoryofNP-completeness,which is the topic of thousands of papers and an influential monograph Garey and Johnson [1979].OurgoalistostudythecapabilitiesofLargeLanguageModels(LLMs)andtheirpotentialtoinfluenceformal mathematics. To that end, we built a new dataset of 90 NP-hardness proofs (reductions) tobe used for evaluation and training of language models. We are not aware of the study of LLMsfor proving new NP-hardness results (by constructing reductions) or reproving and verifying knownresults. We believe that aiming language models at reductions in particular has great potential to38th Conference on Neural Information Processing Systems (NeurIPS 2024).benefit our understanding of their reasoning capabilities and applicability to formal mathematics.This is because:•Findingareductionbetweentwoproblemsisahigh-levelreasoningtask. ImbuingLLMswith the ability to construct reductions could lead to improved reasoning capabilities.•It is feasible to construct dozens of examples of reductions that are theoretically interesting,gobeyondsymbolicmanipulationstoprovemathematicalidentities,andhaveashort(severalparagraphs) proof using natural language. The existence of short yet difficult-to-find proofshints that such proofs can be found automatically with reasonable computing resources (e.g.,memory, training time).•Such datasets are challenging to construct in other mathematical domains. Current datasetsof mathematical problems (e.g., Hendrycks et al. [2021]) that are used to evaluate mathcapabilities of large language models generally focus on a single numerical or symbolicoutcome.1.1 Related workTherehasbeenextensiverecentresearchdirectedtowardusinggenerativeAI,neuralnetworks,andInteractive Theorem Provers (ITP) in pushing the boundaries of mathematics [Azerbayev et al.,2021, Buzzard, 2020, Hendrycks et al., 2021, Lample et al., 2022, Polu et al., 2022, Szegedy, 2020]including proving new theorems as well as reproving known theorems. To our knowledge, theydonotincludeproofsofNPcompletenessusingreductions. Veryfewworksseemtohavestudiedautomatically constructing reductions toward establishing NP-completeness results. One of the moreadvanceddatasetssimilartooursisTheCLRSAlgorithmicReasoningBenchmarkofVeličkovićetal.[2022],whichpredictsthetrajectoriesofvariousalgorithmsusinganalgorithmicmodelbutexplicitlyavoids NP-Hard problems. Motivated by the education domain, Creus et al. [2014] study the problemof testing the correctness of reductions using SAT-solvers and designated programming languageREDNP to establish NP-completeness. One bottleneck noted in proof verification using SAT solversisthelargesizeofSATformulasobtainedintheprocessofverification. Recently,Zhangetal.[2022]introducedKarp,alanguageforprogrammingandtestingreductions,motivatedbytheeducationaldomainaswell. KarpisaRacket-esqueframeworkthatcanbeusedtodefinecomputationalproblemsaswellasreductionsbetweenthem. Inadditiontoprovidingasystematicwaytoconstructreductions,Karp automatically tests the correctness of reductions. The Karp dataset contains significantly fewersolvedquestionscomparedtomostmathdatasets. ItdoesnotusegenerativeAItofindreductionsandtheir proofs.Related datasets such as MATH [Hendrycks et al., 2020], MathQA [Amini et al., 2019], GSM8K[Cobbeetal.,2021],MGSM[Shietal.,2022],ProofWriter[Tafjordetal.,2020]andothershaveofferednewwaystoevaluatethemathematicalreasoningandproofgenerationcapabilitiesoflanguagemodels.The MATH dataset consists of challenging problems taken from high school math competitions,testing a model’s elementary problem-solving skills across various domains of mathematics. TheGSM8KandMGSM(multilingualGSM8K)datasetsfocusongrade-schoolmathproblems,assessingthe model’s ability to perform arithmetic reasoning and handle multi-step calculations. ProofWriterevaluatesamodel’sproficiencyingeneratingnaturallanguageproofsforelementarylogicalinferencetasks, emphasizing multi-hop reasoning. While these datasets are instrumental in testing generalmathematical and logical reasoning, they are completely disjoint from the task of constructingreductionsforNP-completenessproofs. Reductionsincomputationalcomplexityinvolveauniqueblend of algorithmic thinking, formal proof techniques, and an understanding of computationalproblems’ intrinsic properties. This gap highlights the need for specialized resources.1.2 Evaluating large language models on the Karp datasetDatasets such as MATH and MGSM are valuable because they allow for standardized comparison ofthe capabilities of language models, but LLMs now excel at scoring highly on them. For instance,GPT-4o [Achiam et al., 2023] scores over 90% on GSM8K, 75% on the MATH dataset, and around86% on MMLU (Massive Multitask Language Understanding) [Hendrycks et al., 2020]. Whileimpressive, there are concerns that LLMs have been overfit on the testing datasets due to theiravailability on the internet. Moreover, achieving a high level of performance on GSM8K, whichconsistsofgrade-schoolmathproblems,onlyindicatesthatLLMsarecomparabletohighlyskilled2OursTheorem 1. Problem X reduces to Problem YProof.Assumewehaveanalgorithm AsolvingY.Then,we can execute the following algorithm to solve X:ReductionGiven input xto X, construct inputs to Y as follows.y=. . .Output the result of Aony.Proof of CorrectnessIt remains to prove that xis “yes” if and only if yis “yes”=⇒: Suppose xis “yes”.···⇐=: Suppose yis “yes”.···MATHProblem: Tom has a red marble, a green marble, a bluemarble, and three identical yellow marbles. How manydifferent groups of two marbles can Tom choose?Solution: There are two cases here: either Tom choosestwo yellow marbles ( 1result), or he chooses two marbles ofdifferent colors (42= 6results). The total number ofdistinct pairs of marbles Tom can choose is 1 + 6 = 7 .MGSMProblem: Beth bakes 4, 2 dozen batches of cookies in aweek. If these cookies are shared amongst 16 people equally,how many cookies does each person consume?Solution: Beth bakes 4 2 dozen batches of cookies for atotal of 4*2 = ⟨⟨4·2 = 8⟩⟩8 dozen cookiesThere are 12 cookies in a dozen and she makes 8 dozencookiesforatotalof12*8= ⟨⟨12·8 = 96 ⟩⟩96cookiesShe splits the 96 cookies equally amongst 16 people so theyeach get 96/16 = ⟨⟨96/16 = 6 ⟩⟩6 cookiesFinal answer: 6Figure 1: Our reduction template (left) compared to MATH (middle) and GSM8k (right)eighth graders. As more advanced LLMs such as Strawberry (also known as o1) are released,researchers will be aiming towards matching the problem-solving capacity of undergraduate or evenPhDlevelstudents. Thisnecessitatesdatasetsofcomplexhigher-education-levelquestionssuchasreductions.2 The Karp datasetOurdatasetconsistsofdetailednaturallanguagedescriptionsofdozensofreductionsestablishingNP-hardness proofs. These proofs are significantly more involved and labor-intensive to generaterelative to math problems with a numerical answer [Hendrycks et al., 2021] or a sequence ofcomputational steps as a solution [Cobbe et al., 2021]. Every reduction in the dataset is sourcedfrom well-known literature such as Garey and Johnson [1979], Papadimitriou [1994], Dasgupta et al.[2006]. The dataset also contains natural language versions of Karp’s 21 original NP-completeproblems [Karp, 2010]. Other sources include academic papers Garey et al. [1974, 1976], Fominetal.[2013],Aloiseetal.[2009]anddedicatedsurveysofNP-completenessAusielloetal.[2012]and the references therein.Many proofs of NP-completeness in the literature compress proofs of claims that are somewhattedioustoproveformally,andithasbeenobservedthatsomeproofscontaininaccuracies[Zhangetal.,2022]. Inourproofs,weattemptedtoavoidincludingunprovenclaims,emphasizingclarityatthecost of verbosity. Such proofs also often rely on diagrams, which we convert to natural language forLLM comprehension. As a result, the proofs in our dataset are somewhat longer than proofs in otherdatasets,altogetherspanningover170pages. Weavoidedincludingproblemswithhighlycomplexproofsthatrequiremorethantwopages. Thereductionsinthedatasethavelengthsbetween1000and6000charactersandhaveanaveragelengthofapproximately2000characters. ThedistributionoflengthsisdepictedinFigure2. SomeexamplesofreductionscanbefoundinAppendixD,andthefulllistsofproblemsandreductionscanbefoundinTables5and6,respectively. Wewillsharethefull dataset with interested researchers upon request.Formatting The dataset consists of reductions (in the form of L ATEX-typeset theorems) betweencomputationalproblemswhosedefinitionsarealsoprovided. Reductionsinthedatasetadheretoahighlystructuredtemplate: Aprecisedefinitionofthemappingfollowedbyaproofofcorrectness(SeeFigure1). Thelanguageisfairlyexpositoryandinstructive: Whileallthecontentofaformalproof is present, we frequently include conceptual justification of non-trivial logical steps.Omitteddetails Inallofourproofs,weomitakeyconceptneededtoestablishNP-completeness:Polynomial-timecomputabilityandverification. Forexample,inaproperNP-completenessproof,themapping from one decision problem to another must be possible to implement efficiently1, otherwisethe reduction is vacuous (e.g., if exponential time is allowed, then one could just brute-force theanswer to the original problem.) Efficiency of a reduction is often easy (but tedious) to prove, and wemaintain that this holds true for all problems in our dataset. Hence, we choose to mask these details.1Here, “efficiently” means “in polynomial time”, with respect to the size of the input3Benchmark Strawberry Llama LlamaReduceTest set 1.5 0.875 1.25Challenge set 0.875 0.375 0.5Table 1: Average scoresachieved by Strawberry, Llama, and LlamaReduceon the two problem sets.In the second row, LlamaReduce has been fine-tuned on the entire Karp dataset, while in the first row,the test set is held out during training.3 ExperimentsIn contrast to computations and formal logical deductions, natural-language mathematical proofsresiststraightforwardautomaticverification. Duetothislimitation,allmodelsaremanuallyevaluatedon a small, fixed test set by a human expert (a graduate student in theoretical computer science).Testset Weinitiallyevaluatedourmodelsonarandomlychosensetof8reductionsfromthedataset,atthelevelofundergraduatehomeworkassignments(testset). Afterourinitialevaluation,Strawberrywas released and achieved significantly better results on the test set. To gain a better understanding ofthe capabilities of Strawberry, we constructed an additional list of eight more challenging reductions(challenge set) that did not belong to the original dataset.Prompts Modelsareevaluatedontheirresponsestoahighlystructuredprompt,whichasksforareductionbetweentwodecisionproblems. ThepromptprovidesaL ATEXtemplateforthereduction,whichmatchestheformatofthedataset,statesthetwoproblemsandanynecessarydefinitions,andasks for a detailed reduction. Full examples of prompts can be found in Appendix E.Scoring Completed reductions receive a score of 0,1, or 2, where 0represents a completelyincorrect answer, 1reflects a construction that contains significant yet fixable flaws, and 2indicates afully or nearly correct reduction with only minor errors. If the response contains superficial bugs(such as L ATEX-compilation errors), we repair these and proceed with normal scoring.Models WecomparetheperformanceofOpenAI’srecentStrawberrymodel,theLlama70B-Instructbase model [Touvron et al., 2023] as well as our fine-tuned Llama70B-Instruct model, which we callLlamaReduce. The fine-tuning method we used is described in Appendix B.Results Strawberry achieves impressive averages of 1.5 on the test set, and 0.875 on the challengeset. Interestingly, Strawberry even gave a more compact version of a current well-known reduction inthe challenge set (See Appendix E). This outperforms the base Llama model, which scores 0.875 onthe test set and 0.375 on the challenge set. The only problem that Llama answered correctly fromthe challenge set was NAE4SAT to Set Splitting , whose difficulty is relatively low. LlamaReduceclearly benefited from fine-tuning on the Karp dataset, as it was able to score 1.25 and 0.5 on the testand challenge sets respectively. The complete breakdown of scores is compiled in Tables 2 and 3 inAppendix C.These preliminary findings, especially the low scores achieved on the challenge set, suggest thatreductions are a challenging task for LLMs, leaving room for potential improvement. For easierreductions(suchasthoseinthetestset),fine-tuningwasbeneficialinimprovingperformance. TheimpressiveperformanceofStrawberryprovidesadditionalevidencethatpromptengineeringhasasignificant effect on problem-solving capacity, particularly on problems from the test set (at the levelof homework questions from an undergraduate course covering NP-completeness). Both promptengineering and fine-tuning appear to be less effective for improving performance for the harderreductions such as those in the challenge dataset.We also evaluate LlamaReduce on the MATH and MGSM datasets. Results are in Appendix C.4Problem Strawberry Llama LlamaReduce3Coloring to Planar 3Coloring 1 0 03SAT to Independent Set 2 1 13SAT to NAE4SAT 1 0 2Hamiltonian Path to K-SpanningTree 0 0 0Independent Set to Set Packing 2 1 2Independent Set to Vertex Cover 2 2 1Partition to Bin Packing 2 2 2Partition to Knapsack 2 1 2Average 1.5 0.875 1.25Table 2: Scores achieved by each model on each problem in the test set.4 ConclusionWe have constructed the Karp dataset consisting of reductions establishing NP-completeness. Futureworkcouldexamineextendingthedatasetwithadditionalreductions(e.g.,reductionsestablishinghardness of approximation of NP-hard optimization problems Arora et al. [1998], Feige et al. [1996],Dinur [2007]). Using the Karp dataset as well as generative AI more broadly to discover newreductions and simplify known NP-completeness proofs is an exciting future direction.The lack of automatic verification for natural language proofs of NP-completeness is a bottleneckincreatingalargerdataset. Inourexperiments,languagemodelsfailedtojudgethecorrectnessofreductions. Wesuspectthatatransformationfromnaturallanguagetomorestructuredrepresentations(e.g., code, formal math, the Karp language) is a required step to allow automatic verification.5ReferencesJosh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman,Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report.arXiv preprint arXiv:2303.08774 , 2023.Daniel Aloise, Amit Deshpande, Pierre Hansen, and Preyas Popat. Np-hardness of euclideansum-of-squares clustering. Machine learning , 75:245–248, 2009.Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi.Mathqa: Towardsinterpretablemathwordproblemsolvingwithoperation-basedformalisms. arXivpreprint arXiv:1905.13319 , 2019.SanjeevArora,CarstenLund,RajeevMotwani, MadhuSudan, andMarioSzegedy. Proofverificationand the hardness of approximation problems. Journal of the ACM (JACM) , 45(3):501–555, 1998.GiorgioAusiello,PierluigiCrescenzi,GiorgioGambosi,ViggoKann,AlbertoMarchetti-Spaccamela,and Marco Protasi. Complexity and approximation: Combinatorial optimization problems andtheir approximability properties . Springer Science & Business Media, 2012.Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, MD Santos, Stephen McAleer, Albert QJiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model formathematics.(2023). arXiv preprint arXiv:2310.10631 , 2021.KevinBuzzard. Provingtheoremswithcomputers. NoticesoftheAmericanMathematicalSociety ,67(11):1791–1799, 2020.Kathie Cameron. Induced matchings. Discrete Applied Mathematics , 24(1-3):97–102, 1989.Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solvemath word problems, 2021. URL https://arxiv. org/abs/2110.14168 , 2021.Carles Creus, Pau Fernández, and Guillem Godoy. Automatic evaluation of reductions betweennp-completeproblems. In InternationalConferenceonTheoryandApplicationsofSatisfiabilityTesting, pages 415–421. Springer, 2014.SanjoyDasgupta,ChristosHPapadimitriou,andUmeshVazirani. Algorithms . McGraw-Hill,Inc.,2006.TimDettmers,ArtidoroPagnoni,AriHoltzman,andLukeZettlemoyer. Qlora: Efficientfinetuningofquantized llms. Advances in Neural Information Processing Systems , 36, 2024.Irit Dinur. The pcp theorem by gap amplification. Journal of the ACM (JACM) , 54(3):12–es, 2007.Uriel Feige, Shafi Goldwasser, László Lovász, Shmuel Safra, and Mario Szegedy. Interactive proofsand the hardness of approximating cliques. Journal of the ACM (JACM) , 43(2):268–292, 1996.Fedor V Fomin, Petr A Golovach, and Janne H Korhonen. On the parameterized complexity ofcuttingafewverticesfromagraph. In MathematicalFoundationsofComputerScience2013: 38thInternational Symposium, MFCS 2013, Klosterneuburg, Austria, August 26-30, 2013. Proceedings38, pages 421–432. Springer, 2013.Michael R Garey and David S Johnson. Computers and intractability , volume 174. freeman SanFrancisco, 1979.Michael R Garey, David S Johnson, and Larry Stockmeyer. Some simplified np-complete problems.InProceedings of the sixth annual ACM symposium on Theory of computing , pages 47–63, 1974.Michael R Garey, Ronald L Graham, and David S Johnson. Some np-complete geometric problems.InProceedings of the eighth annual ACM symposium on Theory of computing , pages 10–22, 1976.DanHendrycks,CollinBurns,StevenBasart,AndyZou,MantasMazeika,DawnSong,andJacobSteinhardt. Measuringmassivemultitasklanguageunderstanding. arXivpreprintarXiv:2009.03300 ,2020.6Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXivpreprint arXiv:2103.03874 , 2021.Richard M Karp. Reducibility among combinatorial problems . Springer, 2010.GuillaumeLample,Marie-AnneLachaux,ThibautLavril,XavierMartinet,AmauryHayat,GabrielEbner, Aurélien Rodriguez, and Timothée Lacroix. Hypertree proof search for neural theoremproving.URL https://arxiv. org/abs/2205.11491 , 2022.I Loshchilov. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 , 2017.Christos H Papadimitriou. Computational complexity . Addison Wesley, 1994.Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and IlyaSutskever. Formalmathematicsstatementcurriculumlearning. arXivpreprintarXiv:2202.01344 ,2022.FredaShi,MiracSuzgun,MarkusFreitag,XuezhiWang,SurajSrivats,SoroushVosoughi,HyungWonChung,YiTay,SebastianRuder,DennyZhou,etal. Languagemodelsaremultilingualchain-of-thought reasoners. arXiv preprint arXiv:2210.03057 , 2022.Christian Szegedy. A promising path towards autoformalization and general artificial intelligence. InIntelligentComputerMathematics: 13thInternationalConference,CICM2020,Bertinoro,Italy,July 26–31, 2020, Proceedings 13 , pages 3–20. Springer, 2020.Oyvind Tafjord, Bhavana Dalvi Mishra, and Peter Clark. Proofwriter: Generating implications,proofs, and abductive statements over natural language. arXiv preprint arXiv:2012.13048 , 2020.HugoTouvron,ThibautLavril,GautierIzacard,XavierMartinet,Marie-AnneLachaux,TimothéeLacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open andefficient foundation language models. arXiv preprint arXiv:2302.13971 , 2023.PetarVeličković,AdriàPuigdomènechBadia,DavidBudden,RazvanPascanu,AndreaBanino,MishaDashevskiy,RaiaHadsell,andCharlesBlundell. Theclrsalgorithmicreasoningbenchmark. InInternational Conference on Machine Learning , pages 22084–22102. PMLR, 2022.ChenhaoZhang,JasonDHartline,andChristosDimoulas. Karp: alanguagefornpreductions. InProceedings of the 43rd ACM SIGPLAN International Conference on Programming LanguageDesign and Implementation , pages 762–776, 2022.7A Test setsTest set The test set consists of the reductions: Partition to Knapsack; Independent Set to SetPacking; Independent Set to Vertex Cover; Independent Set to Undirected Feedback Set; Partition toBin Packing; Clique to Dense Subgraph; Unweighted Max Bisection to Weighted Bisection Width;Hamiltonian Cycle to Hamiltonian Path .Challengeset Thechallengesetconsistsofthereductions: NAE4SATtoSetSplitting;CliquetoBalanced Biclique; Independent Set to Induced Matching; 3SAT to Contagious Set; 3SAT to EdgeDisjointPaths;3ColoringtoLowDiameterClustering;DensestCuttoSumofSquaresClustering;Vertex Cover to Planar Vertex Cover .B Fine-tuningWe fine-tuned Llama 70B-Instruct using Unsloth. For training, we utilized the AdamW optimizerLoshchilov [2017] and QLora Dettmers et al. [2024] with 4-bit precision to reduce memoryconsumption. Thelearningratewassetto 2×10−5,followingalinearschedulerwith10warmupsteps. Weappliedweightdecayof0.01topreventoverfitting. Themodelwastrainedwithabatchsize of 8 per device. We used 16-bit floating point precision and random seed 0. LlamaReducewas trained on 1 A100 GPU until the loss converged on a validation set at 10 epochs. All models,fine-tuned or not, were inferenced with a temperature of 0.C ResultsThis section contains tables of results that were omitted due to space constraints.Problem Strawberry Llama LlamaReduce3Coloring to Low Diameter Clustering 2 1 23SAT to Contagious Set 0 0 03SAT to Edge-Disjoint Paths 1 0 0Clique to Balanced Biclique 0 0 0Densest Cut to Sum of Squares Clustering 0 0 0Independent Set to Induced Matching 1 0 0NAE4SAT to Set Splitting 2 2 2Vertex Cover to Planar Vertex Cover 1 0 0Average 0.875 0.375 0.5Table 3: Scores achieved by each model on each problem in the challenge set.Benchmark Strawberry Llama LlamaReduceMATH 85.5 68.0 68.5MGSM 90.8 86.9 64.5Table 4: Accuracy of Strawberry, Llama, and LlamaReduce on the MATH and MGSM benchmarks.D Examples of reductionsThis section contains the reductions 3SAT to Independent Set as well as Hamiltonian Path toBounded-Degree Spanning Tree as they appear in the dataset.3SAT to Independent Set8Definition1. A3-CNFisaBooleanformulaequaltoanANDofclauses,whereeach clause is an OR of exactly 3literals (i.e., variables or their negations). A3-CNFissatisfiable ifthereexistsanassignmentofvariablestotrue( 1)orfalse( 0)such that the entire formula evaluates to true.Problem 1 (3SAT).•Input: (X, C ), where X={x1,···, xn}is a set of variables and C={C1,···, Cm}isasetofclausescontainingexactly 3literalsderivedfrom X(i.e.,xior¬xi).•Output:1There exists an assignment (of variables in X) satisfying φ=C1∧ ··· ∧ Cm.0OtherwiseDefinition 2. Given an undirected graph G= (V, E ), a subset of vertices S⊆Vis anindependent set if no two nodes are joined by an edge:∀u, v∈S: (u, v)/∈E.Problem 2 (Independent Set) .•Input: (G, k )where–G= (V, E )is an undirected graph–kis a positive integer•Output:1Ghas an independent set of size k0otherwiseTheorem 2. 3SAT reduces to Independent SetProof.Assume we have an algorithm Asolving Independent Set. Then, we canexecute the following algorithm to solve 3SAT:Reduction Given inputs (X, C )to 3SAT, construct inputs to Independent Set(G, k )as follows:1.For each clause Ci= (ai∨bi∨ci), create a “cluster" of vertices ai, bi, ciinV, and connect them in a triangle by adding edges (ai, bi),(bi, ci),(ci, ai)toE.2.Additionally, connect every two vertices corresponding to complementaryliterals (i.e. there is an edge between every xiand¬xi).Output the result of Aon(G, k ), where k=|C|.Proof of Correctness To establish correctness, it remains to prove that φissatisfiable ⇐⇒ Ghas an independent set of size k.=⇒: LetTbe an assignment of variables satisfying φ. In particular, each clauseCicontains at least one true literal. Construct a set Iwhich contains one such trueliteral from each clause. We now claim that Icorresponds to an independent set inGof size k: It contains one vertex (literal) from each of the kclauses, and no pairofverticesin Iareadjacentsincethereisonlyonevertexperclusterandverticescorresponding to complementary literals (i.e. xand¬x) cannot both be in Isincethat would be an impossible assignment; xand¬xcannot simultaneously be true.⇐=: LetIbeanindependentsetofsize kinG. Notethat Icannotcontaintwovertices in the same cluster. Hence, Icontains one vertex in each cluster of Ganddoesnotcontainverticescorrespondingtocomplementaryliterals(i.e. xiand¬xi).Thus, it is possible to assign every literal (vertex) in Ito be true simultaneously,which constitutes a satisfying assignment for φ.Hamiltonian Path to Bounded-Degree Spanning TreeDefinition 3. Given an undirected graph G= (V, E ), aHamiltonian path is asimple path in Gthat visits each vertex in Vexactly once.9Problem 3 (Hamiltonian Path) .•Input:An undirected graph G= (V, E ).•Output:1Ghas a Hamiltonian path.0Otherwise.Definition 4. Given an undirected graph G= (V, E )and a positive integer k, adegree- kspanning tree ofGis a subgraph TofGsuch that:•Tis connected;•Tis acyclic;•Tspans all the vertices of G(i.e., includes all vertices in V);•The maximum degree of any vertex in Tis at most k.Problem 4 (Bounded-Degree Spanning Tree) .•Input:An undirected graph G= (V, E )and a positive integer k.•Output:1Ghas a degree- kspanning tree0OtherwiseTheorem 3. Hamiltonian Path reduces to Bounded-Degree Spanning Tree.Proof.Assumewehaveanalgorithm AsolvingBounded-DegreeSpanningTree.Then, we can execute the following algorithm to solve Hamiltonian Path:Reduction: Given an instance G= (V, E )of Hamiltonian Path, we construct aninstance (G′, k)of Bounded-Degree Spanning Tree as follows:•Ifk= 2, letG′=G.•Ifk > 2:–LetV′=V∪ {v1, v2, . . . , v k−2|v∈V}–LetE′=E∪ {(v, vi)|v∈V,1≤i≤k−2}Output the result of Aon(G′, k).Proofof Correctness: We claimthat Ghasa Hamiltonianpath ⇐⇒ G′hasadegree- kspanning tree. This clearly holds for k= 2as a degree- 2spanning tree isexactly a Hamiltonian path; a tree with maximum degree 2 is a path, and spanningGisequivalenttovisitingeveryvertex. Wenowshowthereductionholdsforallk > 2:=⇒: Suppose GhasaHamiltonianpath P. Wecanconstructadegree- kspanningtreeT′ofG′bytaking Pandaddingallthenewedges (v, vi)foreach v∈V. Thistree spans all vertices of G′, is acyclic, and has maximum degree k(2 from theoriginal path plus k−2new edges).⇐=: Conversely, suppose G′has a degree- kspanning tree T′. All the newvertices vimustbeleavesin T′astheyhavedegree1. Ifweremovetheseleavesandtheirincidentedges( k−2pervertexof G)from T′, weobtainaspanningtreeTofGwith maximum degree 2, which must be a Hamiltonian path.E Examples prompts and responsesThissectioncontainsthe promptsandresponsesfromtheStrawberrymodelon the IndependentSettoInducedMatching andCliquetoBalancedBi-clique reductions. Forthesakeofreadability,theLATEXsnippets in the prompts have been made renderable.PromptYouareabotdesignedtowriteNP-Hardnessreductions. YoumustusethefollowingLaTeX template to write the reduction:Theorem 4. Problem Independent_Set reduces to Problem Induced_MatchingProof.Assumewehaveanalgorithm AsolvingProblemInduced_Matching. Then,we can execute the following algorithm to solve Problem Independent_Set:10Reduction Given inputs x1,···to Problem Independent_Set, construct an in-stance of Problem Induced_Matching as follows.•y1=···Output the result of Aony1,···.ProofofCorrectness Toestablishcorrectness,itremainstoprovethat x1,···contains a foo of size k⇐⇒ y1,···contains a bar of size k′.=⇒: Suppose x1,···containsafooofsize k.···Thus, y1,···containsabarofsizek′.⇐=: Suppose y1,···containsabarofsize k′.···Thus, x1,···containsafooof size k.···.Definition 5. Given an undirected graph G= (V, E ), a subset of vertices S⊆Vis anindependent set if no two nodes are joined by an edge:∀u, v∈S: (u, v)/∈E.Problem 5 (Independent Set) .•Input: (G, k )where–G= (V, E )is an undirected graph–kis a positive integer•Output:1Ghas an independent set of size k0otherwiseDefinition 6. Given an undirected graph G= (V, E ), a subset of edges M⊆Eisaninduced matching if no two edges in Mare incident to the same vertex.Problem 6 (Induced Matching) .•Input: (G, k )where–G= (V, E )is an undirected graph–kis a positive integer•Output:1Ghas an induced matching of size k0otherwiseWriteaproofthatInduced_MatchingisNP-hardbyreducingfromIndependent_Set.ResponseTheorem 5. Problem Independent Set reduces to Problem Induced MatchingProof.Assume we have an algorithm Asolving Problem Induced Matching. Then,we can execute the following algorithm to solve Problem Independent Set:Reduction Giveninputs (G, k )toProblemIndependentSet,constructaninstanceof Problem Induced Matching as follows.•For each vertex v∈V(G), create two vertices uvandwvinG′, and add anedge (uv, wv).•For every edge (vi, vj)∈E(G), add edges between all pairs of vertices from{uvi, wvi}and{uvj, wvj}. Specifically, add edges:–(uvi, uvj)–(uvi, wvj)–(wvi, uvj)–(wvi, wvj)Setk′=k.Output the result of Aon(G′, k′).11ProofofCorrectness Toestablishcorrectness,itremainstoprovethat Gcontainsan independent set of size k⇐⇒ G′contains an induced matching of size k′.=⇒: Suppose Gcontainsanindependentset S⊆V(G)ofsize k. Considertheset of edges M={(uv, wv)|v∈S}inG′.•Since Sis an independent set in G, there are no edges between any pair ofvertices vi, vj∈S.•Therefore, by construction, there are no edges between any pair of edges(uvi, wvi)and(uvj, wvj)inG′forvi, vj∈S.•Additionally, since each edge in Mshares no common vertices with others inM,Mis a matching in G′.•Therearealsonoedgesconnectingtheendpointsofdifferentedgesin M(dueto the absence of connections between their vertices), so Mis aninducedmatching inG′.Thus, G′contains an induced matching of size k′.⇐=: Suppose G′contains an induced matching Mof size k′=k. LetS={v∈V(G)|(uv, wv)∈M}.•Since Mis a matching in G′, no two edges in Mshare a common vertex.•Since Mis an induced matching, there are no edges between the endpoints ofdifferent edges in M.•By construction, if there were an edge (vi, vj)∈E(G)between any twovertices vi, vj∈S,thenin G′,therewouldbeedgesbetweentheendpointsof(uvi, wvi)and(uvj, wvj), violating the induced matching property.•Therefore, there are no edges between any pair of vertices in SinG.Thus, Sis an independent set of size kinG.Therefore, Gcontains an independent set of size kif and only if G′contains aninduced matching of size k′.ScoreThis construction does indeed resemble the well-known reduction from Cameron [1989].The key difference is that Strawberry constructs a clique on the four vertices {uvi, wvi, uvj, wvj}foreachedge (vi, vj)∈E,whereas[Cameron,1989]constructsalargergadget,wherethefouredgesbetween the u’s and w’s are replaced with disjoint paths of length 8.Interestingly, a short argument shows that a slight modification of Strawberry’s construction doesproduceavalidreduction. Themainpointisthat G′maycontainamatchingwhichcontainssome“vertical”or“diagonal”edges(i.e.,edgesnotofthe form (uv, wv)). However, itispossibletoswapthese edges with the desired type of edge, which completes the proof. As a result, this responsereceives a 1.PromptYouareabotdesignedtowriteNP-Hardnessreductions. YoumustusethefollowingLaTeX template to write the reduction:Theorem 6. Problem Clique reduces to Problem Balanced_BicliqueProof.Assumewehaveanalgorithm AsolvingProblemBalanced_Biclique. Then,we can execute the following algorithm to solve Problem Clique:Reduction Given inputs x1,···to Problem Clique, construct an instance ofProblem Balanced_Biclique as follows.•y1=···Output the result of Aony1,···.12ProofofCorrectness Toestablishcorrectness,itremainstoprovethat x1,···contains a foo of size k⇐⇒ y1,···contains a bar of size k′.=⇒: Suppose x1,···containsafooofsize k.···Thus, y1,···containsabarofsizek′.⇐=: Suppose y1,···containsabarofsize k′.···Thus, x1,···containsafooof size k.···.Definition7. Givenaundirectedgraph G= (V, E ),asubsetofthevertices S⊆Vis acliqueif pair of vertices in Sis adjacent:∀u̸=v∈S,(u, v)∈EProblem 7 (Clique).The Clique Problem, given a graph Gand an integer k, is todecide whether or not Gcontains a clique of size k.•Input: (G, k )where G= (V, E )is an undirected graph and kis a positiveinteger.•Output:1Gcontains a clique of size at least k.0OtherwiseDefinition8. Abalancedbipartite graphisabipartitegraphinwhichbothsideshave the same cardinality; the bipartition (A, B )satisfies |A|=|B|.Given a balanced bipartite graph G= (V, E )with bipartition (A, B ), a subsetC⊂Vis abalanced biclique if it is a complete bipartite graph satisfying|A∩C|=|B∩C|.Problem8 (BalancedBiclique) .TheBalancedBicliqueProblem,givenabalancedbipartitegraph Gandaninteger k,istodecidewhetherornot Gcontainsabalancedbiclique of size k.•Input: (G, k )where G= (V, E )is a balanced bipartite graph and kis apositive integer.•Output:1Gcontains a balanced biclique of size at least k.0OtherwiseWrite a proof that Balanced_Biclique is NP-hard by reducing from Clique.ResponseTheorem 7. Problem Cliquereduces to Problem Balanced_Biclique .Proof.Assume we have an algorithm Asolving Problem Balanced_Biclique .Then, we can execute the following algorithm to solve Problem Clique:Reduction Given inputs (G, k )to Problem Clique, construct an instance ofProblem Balanced_Biclique as follows.•Construct a balanced bipartite graph G′= (V′, E′)with bipartition (A, B )where:–A=V, the vertex set of G.–B=V′, a copy of the vertex set V(i.e., for each v∈V, there is acorresponding v′∈B).–Foreachedge (u, v)∈E,includeedges (u, v′)and (v, u′)inE′,whereu∈A,v′∈B,v∈A, andu′∈B.•Setk′=k.Output the result of Aon(G′, k′).13ProofofCorrectness Toestablishcorrectness,itremainstoprovethat Gcontainsa clique of size k⇐⇒ G′contains a balanced biclique of size 2k.=⇒: Suppose Gcontains a clique C⊆Vof size k. Define subsets A′=C⊆AandB′={v′|v∈C} ⊆B. For every pair u, v∈C, since Cis a clique in G,(u, v)∈E. Thus,in G′,edges (u, v′)and(v, u′)arein E′. Therefore,everyvertexinA′is connected to every vertex in B′, forming a balanced biclique of size 2kinG′.⇐=: Suppose G′containsabalancedbicliqueinducedby A′⊆AandB′⊆B,where |A′|=|B′|=k, and every vertex in A′is connected to every vertex inB′. By the construction of G′, an edge (u, v′)exists in E′only if (u, v)∈EinG. Similarly, sinceeach v′∈Bcorrespondstoaunique v∈V. Define C=A′.For every pair u, v∈C, their corresponding vertices u′∈B′,v′∈B′satisfy(u, v′)∈E′and(v, u′)∈E′,implying (u, v)∈Eand(v, u)∈EinG. Therefore,Cis a clique of size kinG.ScoreThis response actually contains the well-known “naive” reduction which is somewhatconvincing, but completely incorrect. The reason this construction is incorrect is that G′may containa “misaligned” biclique, where A′andB′do not correspond to the same set of vertices in G. Forexample, take G=K3,3and the construction fails.F List of problemsTable 5: Counts of problem definitions used in reductionsProblem Name Source Dest3 Coloring 2 13D Matching 6 03-Partition 1 13-SAT 12 14 Coloring 0 14D Matching 0 14-Partition 1 14-SAT 0 1ABCD Partition 1 1Almost-SAT 0 1Bin Packing 0 2Bipartization 1 1Bounded Degree Spanning Tree 0 1Clique 6 3Common Subgraph 0 1Contagious Set 0 1Cutting at most K Vertices 0 1Densest Cut 0 1Dense Subgraph 0 1Directed Edge-Disjoint Paths 0 1Directed Hamiltonian Path 0 1Dominating Set 1 3Double-SAT 0 1Edge Bipartization 1 1Exact Cover by 3-Sets 1 1Hamiltonian Cycle 2 0Hamiltonian Path 2 1Hitting Set 0 2Independent Set 12 4Integer Programming 0 414Kernel 0 1Kite 0 1Knapsack 0 1Lecture Planning 0 1Linear Arrangement 0 1Longest Path 0 1Max 2-SAT 2 1MAX 2-XORSAT 0 1Max Cover 0 1Max Cover by Cliques 0 1Max k-Colorable Subgraph 0 1Max-SAT 0 1Min 2-SAT Deletion 0 1NAE 3SAT 1 1NAE 4SAT 1 1Partition 2 1Path Selection 0 1Planar 3-Coloring 0 1SAT 6 0Set Cover 5 3Set Packing 0 2Sparse Subgraph 0 1Steiner Tree 0 1Strongly Independent Set 0 2Subgraph Isomorphism 1 1Subset Sum 2 2Suspicious Coalition 0 1Traveling Salesman 1 1Triangle Cover 0 2Undirected Feedback Set 1 2Unit Intersection 0 1Unweighted Bisection Width 0 1Unweighted Max Bisection 2 1Unweighted Max Cut 4 2Vertex Cover 11 3Vertex Disjoint Paths 0 1Weighted Bisection Width 0 2Weighted Max Bisection 1 1Weighted Max Cut 1 0Zero One Equations 0 1Zero Weight Cycle 0 1G List of reductionsThe dataset contains reductions over a wide range of difficulties, from easy generalizations (e.g., SATtoMax-SAT)tocomplexconstructions(e.g.,3-SATto3-Coloring). Thelengthofareductionisareasonable indicator of its difficulty, so we include the lengths of each reduction (in characters) in thefollowing table. The complete distribution of lengths is visualized in Figure 2.Table 6: List of reductions between problemsSource Destination Length3-Coloring 4-Coloring 15253-Coloring Planar 3-Coloring 57893D Matching 4D Matching 17013D Matching ABCD Partition 38973D Matching Exact Cover By 3-Sets 1486153D Matching Subset Sum 23313D Matching Unit Intersection 17473D Matching Zero One Equations 16503-Partition Bin Packing 16293-SAT 3-Coloring 35533-SAT 4-SAT 21303-SAT Clique 23373-SAT Directed Hamiltonian Path 36803-SAT Double SAT 15183-SAT Independent Set 20093-SAT Integer Programming 24563-SAT Kernel 24343-SAT Max 2-SAT 29543-SAT NAE 4-SAT 17373-SAT Vertex Cover 22133-SAT Vertex Disjoint Paths 33094-Partition 3-Partition 4707ABCD Partition 4-Partition 1821Bipartization Vertex Cover 2692Clique Bipartization 2098Clique Cutting At Most K Vertices 3008Clique Dense Subgraph 1387Clique Independent Set 1505Clique KITE 1443Clique Subgraph Isomorphism 1032Dominating Set Set Cover 1252Edge Bipartization Max 2-XORSAT 2191Exact Cover By 3-Sets Steiner Tree 2066Hamiltonian Cycle Hamiltonian Path 1799Hamiltonian Cycle Traveling Salesman 1425Hamiltonian Path Bounded Degree Spanning Tree 1845Hamiltonian Path Longest Path 967Independent Set Clique 1505Independent Set Dominating Set 3316Independent Set Hitting Set 1463Independent Set Integer Programming 1946Independent Set Path Selection 1821Independent Set Set Cover 1701Independent Set Set Packing 1509Independent Set Sparse Subgraph 1267Independent Set Strongly Independent Set 1944Independent Set Triangle Cover 2631Independent Set Undirected Feedback Set 2398Independent Set Vertex Cover 1308Max 2-SAT Min 2-SAT Deletion 967Max 2-SAT Unweighted Max Cut 4609NAE 3-SAT Unweighted Max Cut 3031NAE 4-SAT NAE 3-SAT 2175Partition Bin Packing 1729Partition Knapsack 1875SAT 3-SAT 2401SAT Almost SAT 1354SAT Directed Edge Disjoint Paths 3925SAT Independent Set 2184SAT Max SAT 1447SAT Subset Sum 3724Set Cover Dominating Set 2115Set Cover Integer Programming 2034Set Cover Max Cover 939160 20 40 60 80Reduction0100020003000400050006000Length (characters)Distribution of Reduction LengthsFigure2: Thedistributionoflengths(i.e.,numberofcharacters)ofreductionsinthedataset. Mostreductionshavelengthsbetween1000and3000characters. Theminimumis939,themaximumis5789, and the mean is 2180.Set Cover Max Cover By Cliques 2755Set Cover Max K Colorable Subgraph 2713Subgraph Isomorphism Common Subgraph 1004Subset Sum Partition 1618Subset Sum Zero Weight Cycle 2312Traveling Salesman Integer Programming 2672Undirected Feedback Set Contagious Set 1790Unweighted Max Bisection Unweighted Bisection Width 1832Unweighted Max Bisection Weighted Bisection Width 2232Unweighted Max Cut Densest Cut 2414Unweighted Max Cut Edge Bipartization 1231Unweighted Max Cut Linear Arrangement 5681Unweighted Max Cut Unweighted Max Bisection 1769Vertex Cover Clique 1431Vertex Cover Dominating Set 3154Vertex Cover Hitting Set 1307Vertex Cover Independent Set 1306Vertex Cover Lecture Planning 1918Vertex Cover Set Cover 1536Vertex Cover Set Packing 1617Vertex Cover Strongly Independent Set 2262Vertex Cover Suspicious Coalition 2194Vertex Cover Triangle Cover 2418Vertex Cover Undirected Feedback Set 2199Weighted Max Bisection Weighted Bisection Width 2509Weighted Max Cut Weighted Max Bisection 173517NeurIPS Paper Checklist1.ClaimsQuestion: Dothemainclaimsmadeintheabstractandintroductionaccuratelyreflectthepaper’s contributions and scope?Answer: [Yes]Justification: WeintroducetheKarpdatasetandperformpreliminaryexperimentswhichincludes a comparison of language models and fine-tuningGuidelines:•TheanswerNAmeansthattheabstractandintroductiondonotincludetheclaimsmadein the paper.•The abstract and/or introduction should clearly state the claims made, including thecontributionsmadeinthepaperandimportantassumptionsandlimitations. ANoorNA answer to this question will not be perceived well by the reviewers.•The claims made should match theoretical and experimental results, and reflect howmuch the results can be expected to generalize to other settings.•Itisfinetoincludeaspirationalgoalsasmotivationaslongasitisclearthatthesegoalsare not attained by the paper.2.LimitationsQuestion: Does the paper discuss the limitations of the work performed by the authors?Answer: [Yes]Justification: We discuss the limitations of the dataset in Section 2Guidelines:•The answer NA means that the paper has no limitation while the answer No means thatthe paper has limitations, but those are not discussed in the paper.•The authors are encouraged to create a separate "Limitations" section in their paper.•Thepapershouldpointoutanystrongassumptionsandhowrobusttheresultsaretoviolations of these assumptions (e.g., independence assumptions, noiseless settings,modelwell-specification,asymptoticapproximationsonlyholdinglocally). Theauthorsshould reflect on how these assumptions might be violated in practice and what theimplications would be.•Theauthorsshouldreflectonthescopeoftheclaimsmade,e.g.,iftheapproachwasonly tested on a few datasets or with a few runs. In general, empirical results oftendepend on implicit assumptions, which should be articulated.•Theauthorsshouldreflectonthefactorsthatinfluencetheperformanceoftheapproach.Forexample,afacialrecognitionalgorithmmayperformpoorlywhenimageresolutionis low or images are taken in low lighting. Or a speech-to-text system might not beused reliably to provide closed captions for online lectures because it fails to handletechnical jargon.•The authors should discuss the computational efficiency of the proposed algorithmsand how they scale with dataset size.•Ifapplicable,theauthorsshoulddiscusspossiblelimitationsoftheirapproachtoaddressproblems of privacy and fairness.•While the authors might fear that complete honesty about limitations might be used byreviewersasgroundsforrejection,aworseoutcomemightbethatreviewersdiscoverlimitations that aren’t acknowledged in the paper. The authors should use their bestjudgmentandrecognizethatindividualactionsinfavoroftransparencyplayanimportantrole in developing norms that preserve the integrity of the community. Reviewers willbe specifically instructed to not penalize honesty concerning limitations.3.Theory Assumptions and ProofsQuestion: For each theoretical result, does the paper provide the full set of assumptions anda complete (and correct) proof?Answer: [NA]18Justification: We have no theoeretical results per se, but the reductions in the datasetconstitute theoretical results and indeed all have full proofs.Guidelines:•The answer NA means that the paper does not include theoretical results.•All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.•Allassumptionsshouldbeclearlystatedorreferenced inthestatementofanytheorems.•The proofs can either appear in the main paper or the supplemental material, but ifthey appear in the supplemental material, the authors are encouraged to provide a shortproof sketch to provide intuition.•Inversely,anyinformalproofprovidedinthecoreofthepapershouldbecomplementedby formal proofs provided in appendix or supplemental material.•Theorems and Lemmas that the proof relies upon should be properly referenced.4.Experimental Result ReproducibilityQuestion: Doesthepaperfullydisclosealltheinformationneededtoreproducethemainexperimentalresultsofthepapertotheextentthatitaffectsthemainclaimsand/orconclusionsof the paper (regardless of whether the code and data are provided or not)?Answer: [Yes]Justification: We describe our experimental setup in Appendix B, and the dataset and codefor fine-tuning and inference are available upon requestGuidelines:•The answer NA means that the paper does not include experiments.•Ifthepaperincludesexperiments,aNoanswertothisquestionwillnotbeperceivedwellbythereviewers: Makingthepaperreproducibleisimportant,regardlessofwhetherthe code and data are provided or not.•Ifthecontributionisadatasetand/ormodel,theauthorsshoulddescribethestepstakento make their results reproducible or verifiable.•Dependingonthecontribution,reproducibilitycanbeaccomplishedinvariousways.Forexample, ifthecontributionisanovelarchitecture, describingthearchitecturefullymight suffice, or if the contribution is a specific model and empirical evaluation, it maybe necessary to either make it possible for others to replicate the model with the samedataset, or provide access to the model. In general. releasing code and data is oftenone good way to accomplish this, but reproducibility can also be provided via detailedinstructionsfor howtoreplicatethe results,accessto ahostedmodel (e.g.,inthe caseofalargelanguagemodel),releasingofamodelcheckpoint,orothermeansthatareappropriate to the research performed.•While NeurIPS does not require releasing code, the conference does require allsubmissions to provide some reasonable avenue for reproducibility, which may dependon the nature of the contribution. For example(a)Ifthecontributionisprimarilyanewalgorithm,thepapershouldmakeitclearhowto reproduce that algorithm.(b)If the contribution is primarily a new model architecture, the paper should describethe architecture clearly and fully.(c)If the contribution is a new model (e.g., a large language model), then there shouldeitherbeawaytoaccessthismodelforreproducingtheresultsorawaytoreproducethemodel(e.g.,withanopen-sourcedatasetorinstructionsforhowtoconstructthedataset).(d)Werecognizethatreproducibilitymaybetrickyinsomecases,inwhichcaseauthorsare welcome to describe the particular way they provide for reproducibility. In thecase of closed-source models, it may be that access to the model is limited in someway(e.g.,toregisteredusers),butitshouldbepossibleforotherresearcherstohavesome path to reproducing or verifying the results.5.Open access to data and code19Question: Doesthepaperprovideopenaccesstothedataandcode,withsufficientinstructionstofaithfullyreproducethemainexperimentalresults,asdescribedinsupplementalmaterial?Answer: [No]Justification: The dataset and code will be provided upon request; they are not openaccess since the dataset likely contains solutions to undergraduate courses covering NPcompletenessGuidelines:•The answer NA means that paper does not include experiments requiring code.•Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•While we encourage the release of code and data, we understand that this might not bepossible, so “No” is an acceptable answer. Papers cannot be rejected simply for notincludingcode,unlessthisiscentraltothecontribution(e.g.,foranewopen-sourcebenchmark).•The instructions should contain the exact command and environment needed to runto reproduce the results. See the NeurIPS code and data submission guidelines(https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•The authors should provide instructions on data access and preparation, including howto access the raw data, preprocessed data, intermediate data, and generated data, etc.•The authors should provide scripts to reproduce all experimental results for the newproposed method and baselines. If only a subset of experiments are reproducible, theyshould state which ones are omitted from the script and why.•At submission time, to preserve anonymity, the authors should release anonymizedversions (if applicable).•Providing asmuch informationas possiblein supplementalmaterial(appended tothepaper) is recommended, but including URLs to data and code is permitted.6.Experimental Setting/DetailsQuestion: Does the paper specify all the training and test details (e.g., data splits, hyper-parameters, how they were chosen, type of optimizer, etc.) necessary to understand theresults?Answer: [Yes]Justification: See Appendix BGuidelines:•The answer NA means that the paper does not include experiments.•Theexperimentalsettingshouldbepresentedinthecoreofthepapertoalevelofdetailthat is necessary to appreciate the results and make sense of them.•The fulldetails canbe provided eitherwith thecode, in appendix,or as supplementalmaterial.7.Experiment Statistical SignificanceQuestion: Doesthepaperreporterrorbarssuitablyandcorrectlydefinedorotherappropriateinformation about the statistical significance of the experiments?Answer: [NA]Justification: Resultsofstatisticalsignificancewereunfortunatelytooexpensivesinceproofsmust be carefully verified manually by a human. This is discussed in Section 3Guidelines:•The answer NA means that the paper does not include experiments.•Theauthorsshouldanswer"Yes"iftheresultsareaccompaniedbyerrorbars,confidenceintervals,orstatisticalsignificancetests,atleastfortheexperimentsthatsupportthemain claims of the paper.•The factors of variability that the error bars are capturing should be clearly stated (forexample, train/testsplit,initialization,randomdrawingofsomeparameter,oroverallrun with given experimental conditions).20•The method for calculating the error bars should be explained (closed form formula,call to a library function, bootstrap, etc.)•The assumptions made should be given (e.g., Normally distributed errors).•Itshouldbeclearwhethertheerrorbaristhestandarddeviationorthestandarderrorofthe mean.•It is OK to report 1-sigma error bars, but one should state it. The authors shouldpreferablyreporta2-sigmaerrorbarthanstatethattheyhavea96%CI,ifthehypothesisof Normality of errors is not verified.•For asymmetric distributions, the authors should be careful not to show in tables orfiguressymmetricerrorbarsthatwouldyieldresultsthatareoutofrange(e.g. negativeerror rates).•If error bars are reported in tables or plots, The authors should explain in the text howthey were calculated and reference the corresponding figures or tables in the text.8.Experiments Compute ResourcesQuestion: Foreachexperiment,doesthepaperprovidesufficientinformationonthecomputerresources(typeofcomputeworkers,memory,timeofexecution)neededtoreproducetheexperiments?Answer: [Yes]Justification: See Appendix BGuidelines:•The answer NA means that the paper does not include experiments.•Thepapershouldindicatethetypeofcomputeworkers CPUorGPU,internalcluster,or cloud provider, including relevant memory and storage.•Thepapershouldprovidetheamountofcomputerequiredforeachoftheindividualexperimental runs as well as estimate the total compute.•The paper should disclose whether the full research project required more computethan the experiments reported in the paper (e.g., preliminary or failed experiments thatdidn’t make it into the paper).9.Code Of EthicsQuestion: Does the research conducted in the paper conform, in every respect, with theNeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ?Answer: [Yes]Justification:Guidelines:•The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.•Ifthe authorsanswer No, theyshould explainthe specialcircumstances thatrequire adeviation from the Code of Ethics.•The authors should make sure to preserve anonymity (e.g., if there is a specialconsideration due to laws or regulations in their jurisdiction).10.Broader ImpactsQuestion: Does the paper discuss both potential positive societal impacts and negativesocietal impacts of the work performed?Answer: [NA]Justification: TheKarpdatasetismerelyacompilationofpubliclyavailableandwellknownmaterialGuidelines:•The answer NA means that there is no societal impact of the work performed.•If the authors answer NA or No, they should explain why their work has no societalimpact or why the paper does not address societal impact.21•Examplesofnegativesocietalimpactsincludepotentialmaliciousorunintendeduses(e.g., disinformation, generating fake profiles, surveillance), fairness considerations(e.g.,deploymentoftechnologiesthatcouldmakedecisionsthatunfairlyimpactspecificgroups), privacy considerations, and security considerations.•The conference expects that many papers will be foundational research and not tiedtoparticularapplications,letalonedeployments. However,ifthereisadirectpathtoany negative applications, the authors should point it out. For example, it is legitimatetopointoutthatanimprovementinthequalityofgenerativemodelscouldbeusedtogenerate deepfakes for disinformation. On the other hand, it is not needed to point outthatagenericalgorithmforoptimizingneuralnetworkscouldenablepeopletotrainmodels that generate Deepfakes faster.•The authors should consider possible harms that could arise when the technology isbeing used as intended and functioning correctly, harms that could arise when thetechnologyisbeingusedasintendedbutgivesincorrectresults,andharmsfollowingfrom (intentional or unintentional) misuse of the technology.•Iftherearenegativesocietalimpacts,theauthorscouldalsodiscusspossiblemitigationstrategies (e.g., gated release of models, providing defenses in addition to attacks,mechanisms for monitoring misuse, mechanisms to monitor how a system learns fromfeedback over time, improving the efficiency and accessibility of ML).11.SafeguardsQuestion: Does the paper describe safeguards that have been put in place for responsiblerelease of data or models that have a high risk for misuse (e.g., pretrained language models,image generators, or scraped datasets)?Answer: [NA]Justification: The dataset poses no risk for misuse and is not publicly availableGuidelines:•The answer NA means that the paper poses no such risks.•Releasedmodelsthathaveahighriskformisuseordual-useshouldbereleasedwithnecessary safeguards to allow for controlled use of the model, for example by requiringthatusersadheretousageguidelinesorrestrictionstoaccessthemodelorimplementingsafety filters.•Datasets that have been scraped from the Internet could pose safety risks. The authorsshould describe how they avoided releasing unsafe images.•Werecognizethatprovidingeffectivesafeguardsischallenging,andmanypapersdonot require this, but we encourage authors to take this into account and make a bestfaith effort.12.Licenses for existing assetsQuestion: Arethecreatorsororiginalownersofassets(e.g.,code,data,models),usedinthepaper,properlycreditedandarethelicenseandtermsofuseexplicitlymentionedandproperly respected?Answer: [Yes]Justification: The literature from which reductions were sourced is citedGuidelines:•The answer NA means that the paper does not use existing assets.•The authors should cite the original paper that produced the code package or dataset.•Theauthorsshouldstatewhichversionoftheassetisusedand,ifpossible,includeaURL.•The name of the license (e.g., CC-BY 4.0) should be included for each asset.•For scraped data from a particular source (e.g., website), the copyright and terms ofservice of that source should be provided.•If assets are released, the license, copyright information, and terms of use in thepackage should be provided. For popular datasets, paperswithcode.com/datasetshascuratedlicensesforsomedatasets. Theirlicensingguidecanhelpdeterminethelicense of a dataset.22•Forexistingdatasetsthatarere-packaged,boththeoriginallicenseandthelicenseofthe derived asset (if it has changed) should be provided.•Ifthisinformationisnotavailableonline,theauthorsareencouragedtoreachouttotheasset’s creators.13.New AssetsQuestion: Arenewassetsintroducedinthepaperwelldocumentedandisthedocumentationprovided alongside the assets?Answer: [NA]Justification:Guidelines:•The answer NA means that the paper does not release new assets.•Researchers should communicate the details of the dataset/code/model as part of theirsubmissions via structured templates. This includes details about training, license,limitations, etc.•Thepapershoulddiscusswhetherandhowconsentwasobtainedfrompeoplewhoseasset is used.•At submission time, remember to anonymize your assets (if applicable). You can eithercreate an anonymized URL or include an anonymized zip file.14.Crowdsourcing and Research with Human SubjectsQuestion: For crowdsourcing experiments and research with human subjects, does the paperincludethefulltextofinstructionsgiventoparticipantsandscreenshots,ifapplicable,aswell as details about compensation (if any)?Answer: [NA]Justification:Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Including this information in the supplemental material is fine, but if the maincontribution of the paper involves human subjects, then as much detail as possibleshould be included in the main paper.•According to the NeurIPS Code of Ethics, workersinvolvedin data collection, curation,or other labor should be paid at least the minimum wage in the country of the datacollector.15.InstitutionalReviewBoard(IRB)ApprovalsorEquivalentforResearchwithHumanSubjectsQuestion: Doesthepaperdescribepotentialrisksincurredbystudyparticipants,whethersuch risks were disclosed to the subjects, and whether Institutional Review Board (IRB)approvals(oranequivalentapproval/reviewbasedontherequirementsofyourcountryorinstitution) were obtained?Answer: [NA]Justification:Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Dependingonthecountryinwhichresearchisconducted,IRBapproval(orequivalent)mayberequiredforanyhumansubjectsresearch. IfyouobtainedIRBapproval,youshould clearly state this in the paper.•Werecognizethattheproceduresforthismayvarysignificantlybetweeninstitutionsand locations, and we expect authors to adhere to the NeurIPS Code of Ethics and theguidelines for their institution.•For initial submissions, do not include any information that would break anonymity (ifapplicable), such as the institution conducting the review.23 |
RcqAmkDJfI | Not All LLM Reasoners Are Created EqualArian [email protected] SordoniMila, Microsoft ResearchDaniel ToyamaGoogle DeepMindAaron CourvilleMilaRishabh AgarwalMila, Google DeepMindAbstractWe study the depth of problem-solving capabilities of LLMs, and to what extentthey perform mathematical reasoning in a compositional manner. To this end,we create a new benchmark by composing pairs of existing math word problemstogether so that the answer to the second problem depends on correctly answeringthe first problem. We measure the difference between the performance of solvingeach question independently and solving the compositional pairs as the reasoninggap of a model. Our findings reveal a significant reasoning gap in most frontierLLMs. This gap is more pronounced in smaller and more cost-efficient models.The objective of this study is not to introduce yet another benchmark, but rather toprovide a case study aimed at gaining deeper insights into current models’ reasoningabilities, and to reassess existing established training methods and benchmarks.30 40 50 60 70 80 90 100GSM8K Accuracy (%)020406080100Compositional GSM Accuracy (%)Gemini 1.5 ProGemini 1.0 ProGemma2-27B-ITGemma2-9B-ITGemma2-27B-PTGemma2-9B-PTGPT-4oGPT-4o miniLLAMA3-70B-ITLLAMA3-8B-ITPhi-2Phi-3-mini-4k-ITMixtral-8x7B-ITMistral-7B-IT Mistral-7B-PTMathstral-7BGemini 1.5 FlashLLAMA3-8B-PTLLAMA3-70B-PTMixtral-8x7B-PTFigure 1: Reasoning Gap: Pairs of GSM8K test questions are chained together so that the answer of the firstquestion ( Q1) is a variable in the second one ( Q2). The model is required to correctly answer both questionsto solve the problem. If a model has an accuracy of S1on the Q1set, and S2onQ2set, then the expectedCompositional GSM accuracy is S1×S2. The x-axis corresponds to the geometric mean√S1×S2, labeledGSM8K accuracy for simplicity. The trend-line y=x2is the expected Compositional GSM accuracy.38th Conference on Neural Information Processing Systems (NeurIPS 2024) workshop on MATH-AI.Compositional GSM ProblemLet X be the answer to the Q1:Q1: There are 27 unicorns left in the world. One third of them are in the Scottish Highlands. Two thirds ofthe Scottish unicorns are female. How many female Scottish unicorns are there?Solve it and use the value of X to solve Q2. Explain your answer step by step.Q2: Zack’s locker is half as big as Timothy’s locker. Peter’s locker is 1/4 as big as Zack’s locker. If Peter’slocker is X cubic inches, how big is Timothy’s locker in cubic inches?Figure 2: Example Problem from the Compositional GSM benchmark . The answer of Question-1 ( Q1) isa variable X in Question-2 ( Q2). Therefore, the model has to be able to solve the first question correctly inorder to solve the second question. The new final answer of Question-2 is calculated by modifying its code-formsolution and executing it. Question-1 and the number to modify in Question-2 are chosen to have a new finalanswer which is a positive integer not too far from the old answer of Question-2.1 IntroductionThe strong performance of large language models (LLMs) on high-school and college-level mathreasoning benchmarks (Dubey et al., 2024; Google, 2024; OpenAI, 2023b), has led to the commonbelief that LLMs have “mastered” grade-school math, particularly as measured by the GSM8Kbenchmark (Cobbe et al., 2021). This apparent mastery of grade-school math problems raises adeeper question: do LLMs truly grasp the underlying concepts or do they mostly rely on datasetcontamination or memorization (Srivastava et al., 2024)? For example, a recent examination onprivate “held-out” grade-school problems (Zhang et al., 2024) reveals that while frontier closed-sourceLLMs show minimal signs of overfitting, some open-weights models show systematic overfitting,possibly due to test data contamination.In this work, we perform a case study to evaluate how well LLMs can combine learned concepts tosolve unseen problems, to probe the brittleness of their reasoning abilities. To do so, we introduceCompositional GSM , a two-hop version of GSM8K with higher difficulty, where each problem chainstwo test questions together such that the answer to the first question is used as a variable in thesecond question (Figure 2). As LLMs can easily solve grade-school math problems, they shouldalso be capable of solving combinations of those problems. As such, we measure the gap betweentheir performance on solving the questions individually and on Compositional GSM. Specifically,we benchmark frontier open-weights and closed LLMs, including Gemini (Google, 2023, 2024),Gemma2 (Gemma Team et al., 2024), Llama-3 (AI@Meta, 2024), GPT (OpenAI, 2023a), Phi (Abdinet al., 2024), Qwen (Yang et al., 2024) and Mistral families (Jiang et al., 2024).Here are our key findings:•Most models exhibit a gap between their performance on GSM8K test set and CompositionalGSM (Figure 1).• This reasoning gap is larger in small and more cost-efficient models (Figure 5 and Figure 3).• Instruction-following tuning of LLMs heavily favours the original GSM8K split (Figure 4).• Finetuning with human data and synthetic data results in a similar reasoning gap trend (Figure 7).•Smaller models benefit more from generating code rather than natural language Chain-of-Thought(CoT) to solve Compositional GSM problems (Figure 6).2 Compositional Grade-School Math (GSM)Each question in compositional GSM consists of two questions, Question-1 and Question-2, from asubset of 1200 examples of the original GSM8K test set. The final answer of Question-1 is refereedto as Xwhich is a variable in Question-2 (Figure 2). The final answer of Question-2 is obtained bysubstituting Xand solving it. The choice of Question-1 and the number to modify and replace withXin Question-2 was made in a a way such that the new final answer of Question-2 is different fromits old final answer, and is a positive integer not too far from the old final answer.220406080100T est Accuracy (%):4o1.14o mini14.2GPT204060801001.5 Pro5.81.5 Flash11.3Gemini2040608010070B-IT4.98B-IT27.5LLAMA32040608010027B-IT189B-IT37.3Gemma2GSM8K (Original) Compositional GSMFigure 3: Cost efficient LLMs reason differently: showing four family of models, each having a high-costand low-cost option. Although the cheaper models perform similarly on the original GSM8K test, they show asignificant decline in performance on the Compositional GSM test.Reasoning Gap Question-1 and Question-2 in our compositional queries are from the originaltest split Doriginal , and the modified test split Dmodified respectively. Assuming that a model has anaccuracy of S1onDoriginal andS2onDmodified , it is expected for it to have an accuracy of S1×S2onthe compositional split Dcomp. We report the following as the compositional reasoning gap score,Compositional reasoning gap : ∆ = Sc−S1×S2 (1)where Scis the performance of the model on Dcomp.3 Experiments & ResultsThe distance to the trend-line in Figure 1 shows the reasoning gap of models. The x-axis correspondsto√S1×S2, which is the geometric mean of the accuracies on the set of Q1andQ2independently.We find that most models fall below expectation on Compositional GSM. Specifically, it is evidentthat cost-efficient models have a larger gap than more expensive models. More analysis is providedin the Appendix.3.1 Cost-Efficient LLMs Reason DifferentlyThe reasoning abilities of cost-efficient LMs has been rapidly improving over time, as evaluated usingstandard benchmarks (Bansal et al., 2024). For example, GPT-4o mini and Gemini 1.5 Flash bothachieve above 90% accuracy on GSM, while costing 25−35×cheaper than GPT-4o and Gemini1.5 Pro respectively. This progress could be attributed to several factors, such as better pretrainingdata (AI@Meta, 2024), and knowledge distillation (Agarwal et al., 2024; Team et al., 2024). To thisend, we investigate whether these reasoning gains on GSM8K still persist on Compositional GSM.We study four family of models, each comprising both a high-cost and low-cost option, where costis measured via parameter count or API pricing. Figure 3 shows the original GSM8K test splitperformance and Compositional GSM performance for all models. The numbers above the barsrepresents the reasoning gap defined in Eq 1. While cheaper models perform comparably on theoriginal GSM8K test, they exhibit a notable drop in performance on the Compositional GSM test set.These results suggest that critical flaws of cost-efficient LLMs in their reasoning may be obscuredby high scores on standard benchmarks. This underscores the need to rethink current strategies fordeveloping cost-efficient language models.3.2 Instruction-Tuning Impacts LLM Reasoning DifferentlyWe compare pretrained and instruction-following tuned versions of models in three families of Mistral,LLAMA3 and Gemma2. Figure 4 illustrates this comparison, along with the performance gains from3020406080T est Accuracy (%)GSM8K+14.1Comp GSM+4.3Mistral-7B020406080GSM8K+25.1Comp GSM+12.6LLAMA3-8B020406080GSM8K+22.8Comp GSM+4.8Gemma2-9BPretrained (PT) Instruction-Tuned (IT)020406080T est Accuracy (%)GSM8K+7.6Comp GSM+3.1Mixtral-8x7B020406080GSM8K+8.6Comp GSM+19.0LLAMA3-70B020406080GSM8K+15.2Comp GSM+17.2Gemma2-27BFigure 4: Impact of Instruction-Tuning on Reasoning Gap: comparing pretrained and instruction-followingtuned variant of models from Mistral, LLAMA3 and Gemma2 families. Numbers above bars represent im-provements from instruction-tuning on each set. For smaller models (top), we observe that instruction-tuning ishighly optimized for GSM8K questions, which results in a greater improvement on the original GSM8K test setcompared to the Compositional GSM test. However, this pattern does not hold for larger models (bottom).instruction-tuning, displayed above bars for each test set. On small models (top row), this comparisonshows that current instruction-tuning is heavily optimized for GSM8K questions. Instruction-tuningleads to a significantly larger improvement on the original GSM8K test set than the CompositionalGSM test across model families. However, this trend does not apply to larger models (bottom row),where the improvements are inconsistent.4 Discussion and ConclusionWe designed the Compositional GSM benchmark, which requires solving dependent pairs of mathword problems. These problems are from the original GSM8K test split. We investigate the “System2” mathematical reasoning capabilities of LLMs by comparing their performance on the originalGSM8K test split and our Compositional GSM test set. Our analysis reveals a notable reasoning gapin most models. Many leading LLMs exhibit a substantial difference in performance when solvingquestions independently versus as part of the compositional pair. Our study indicates that smaller andmore cost efficient models exhibit a larger reasoning gap. Models frequently struggle with pairs ofquestions and get distracted likely because they are tuned to handle one question at a time. They oftenanswer the first question correctly, but lose attention to details and make subtle errors in answeringthe second question. We also noticed that learning from human data and self-generated data results insimilar behaviour. In both settings, as training progresses, the model’s performance on the originaltest split improves. However, beyond a certain point, performance on the Compositional GSM testbegins to decline.We emphasize that this benchmark should not be viewed as an endpoint or merely as a tool forgenerating additional training data, but as a catalyst to gain insights about current models and to re-evaluate and improve existing benchmarks. Our findings are intended to stimulate further explorationand provide new perspectives. Future research could build on this setup by incorporating morechallenging questions, such as those from the MATH dataset, or by extending the framework tomulti-modal problems to gain deeper insights into the reasoning capabilities of LLMs.4ReferencesM. Abdin, S. A. Jacobs, A. A. Awan, J. Aneja, A. Awadallah, H. Awadalla, N. Bach, A. Bahree,A. Bakhtiari, J. Bao, H. Behl, A. Benhaim, M. Bilenko, J. Bjorck, S. Bubeck, Q. Cai, M. Cai,C. C. T. Mendes, W. Chen, V . Chaudhary, D. Chen, D. Chen, Y .-C. Chen, Y .-L. Chen, P. Chopra,X. Dai, A. D. Giorno, G. de Rosa, M. Dixon, R. Eldan, V . Fragoso, D. Iter, M. Gao, M. Gao, J. Gao,A. Garg, A. Goswami, S. Gunasekar, E. Haider, J. Hao, R. J. Hewett, J. Huynh, M. Javaheripi,X. Jin, P. Kauffmann, N. Karampatziakis, D. Kim, M. Khademi, L. Kurilenko, J. R. Lee, Y . T.Lee, Y . Li, Y . Li, C. Liang, L. Liden, C. Liu, M. Liu, W. Liu, E. Lin, Z. Lin, C. Luo, P. Madan,M. Mazzola, A. Mitra, H. Modi, A. Nguyen, B. Norick, B. Patra, D. Perez-Becker, T. Portet,R. Pryzant, H. Qin, M. Radmilac, C. Rosset, S. Roy, O. Ruwase, O. Saarikivi, A. Saied, A. Salim,M. Santacroce, S. Shah, N. Shang, H. Sharma, S. Shukla, X. Song, M. Tanaka, A. Tupini, X. Wang,L. Wang, C. Wang, Y . Wang, R. Ward, G. Wang, P. Witte, H. Wu, M. Wyatt, B. Xiao, C. Xu, J. Xu,W. Xu, S. Yadav, F. Yang, J. Yang, Z. Yang, Y . Yang, D. Yu, L. Yuan, C. Zhang, C. Zhang, J. Zhang,L. L. Zhang, Y . Zhang, Y . Zhang, Y . Zhang, and X. Zhou. Phi-3 technical report: A highly capablelanguage model locally on your phone, 2024. URL https://arxiv.org/abs/2404.14219 .R. Agarwal, N. Vieillard, Y . Zhou, P. Stanczyk, S. R. Garea, M. Geist, and O. Bachem. On-policy distillation of language models: Learning from self-generated mistakes. In The TwelfthInternational Conference on Learning Representations , 2024.AI@Meta. Llama 3 model card. 2024. URL https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md .H. Bansal, A. Hosseini, R. Agarwal, V . Q. Tran, and M. Kazemi. Smaller, weaker, yet better: Trainingllm reasoners via compute-optimal sampling. arXiv preprint arXiv:2408.16737 , 2024.M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y . Burda,N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin,B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P.Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-V oss, W. H. Guss, A. Nichol,A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr,J. Leike, J. Achiam, V . Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati,K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba.Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 , 2021.K. Cobbe, V . Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton,R. Nakano, C. Hesse, and J. Schulman. Training verifiers to solve math word problems. arXivpreprint arXiv:2110.14168 , 2021.A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Yang,A. Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 , 2024.L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y . Yang, J. Callan, and G. Neubig. PAL: program-aidedlanguage models. In A. Krause, E. Brunskill, K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett,editors, International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu,Hawaii, USA , volume 202 of Proceedings of Machine Learning Research , pages 10764–10799.PMLR, 2023. URL https://proceedings.mlr.press/v202/gao23f.html .T. M. Gemma Team, C. Hardin, R. Dadashi, S. Bhupatiraju, L. Sifre, M. Rivière, M. S. Kale,J. Love, P. Tafti, L. Hussenot, and et al. Gemma. 2024. doi: 10.34740/KAGGLE/M/3301. URLhttps://www.kaggle.com/m/3301 .G. T. Google. Gemini: a family of highly capable multimodal models. arXiv preprintarXiv:2312.11805 , 2023.G. T. Google. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.arXiv e-prints , pages arXiv–2403, 2024.Z. Gou, Z. Shao, Y . Gong, Y . Yang, M. Huang, N. Duan, W. Chen, et al. Tora: A tool-integratedreasoning agent for mathematical problem solving. arXiv preprint arXiv:2309.17452 , 2023.5A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bamford, D. S. Chaplot, D. de lasCasas, E. B. Hanna, F. Bressand, G. Lengyel, G. Bour, G. Lample, L. R. Lavaud, L. Saulnier, M.-A.Lachaux, P. Stock, S. Subramanian, S. Yang, S. Antoniak, T. L. Scao, T. Gervet, T. Lavril, T. Wang,T. Lacroix, and W. E. Sayed. Mixtral of experts, 2024. URL https://arxiv.org/abs/2401.04088 .OpenAI. GPT-4 technical report. CoRR , abs/2303.08774, 2023a. doi: 10.48550/ARXIV .2303.08774.URL https://doi.org/10.48550/arXiv.2303.08774 .OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 , 2023b.S. Srivastava, A. PV , S. Menon, A. Sukumar, A. Philipose, S. Prince, S. Thomas, et al. Functionalbenchmarks for robust evaluation of reasoning performance, and the reasoning gap. arXiv preprintarXiv:2402.19450 , 2024.G. Team M. Riviere, S. Pathak, P. G. Sessa, C. Hardin, S. Bhupatiraju, L. Hussenot, T. Mesnard,B. Shahriari, A. Ramé, et al. Gemma 2: Improving open language models at a practical size. arXivpreprint arXiv:2408.00118 , 2024.A. Yang, B. Zhang, B. Hui, B. Gao, B. Yu, C. Li, D. Liu, J. Tu, J. Zhou, J. Lin, K. Lu, M. Xue, R. Lin,T. Liu, X. Ren, and Z. Zhang. Qwen2.5-math technical report: Toward mathematical expert modelvia self-improvement. arXiv preprint arXiv:2409.12122 , 2024.H. Zhang, J. Da, D. Lee, V . Robinson, C. Wu, W. Song, T. Zhao, P. Raja, D. Slack, Q. Lyu, S. Hendryx,R. Kaplan, M. Lunati, and S. Yue. A careful examination of large language model performance ongrade school arithmetic. CoRR , abs/2405.00332, 2024. doi: 10.48550/ARXIV .2405.00332. URLhttps://doi.org/10.48550/arXiv.2405.00332 .6AppendicesPhi-3-mini-4k-IT Gemma2-9B-ITLLAMA3-8B-ITQwen2.5-MATH-7B-ITGemma2-27B-ITMixtral-8x7B-IT Gemini 1.0 ProGPT-4o miniMathstral-7BMistral-7B-ITPhi-2NuminaMath-7B-CoTGemini 1.5 FlashGemma2-27B-PTMixtral-8x7B-PTLLAMA3-70B-PTLLAMA3-8B-PTGemma2-9B-PTGemini 1.5 ProMistral-7B-PTLLAMA3-70B-ITQwen2.5-MATH-72B-ITGPT-4o403020100Reasoning Gap ()GemmaGeminiPhiGPTMistral/MixtralLLAMAMath-SpecializedFigure 5: Reasoning Gap of notable open-weights and closed-source LLMs. Smaller, more cost-efficient andmath specific models have a bigger gap.A Compositional GSM DetailsTo obtain the new final answer of Question-2 automatically, we replace a number in the code-formsolution of Question-2. We used a slightly modified version of the code-form solutions from Gaoet al. (2023). The new final answer is the result of executing the code with the new number. We putsignificant efforts into ensuring that the modification to Question-2 is sensible. Figure 10 shows thedistribution of final answers (magnitude) of the original test set of GSM8K and compositional GSM.Both test sets have a similar distribution of final answers.Quality Checks To make sure that the modified questions are sensible and logical, we generated16 candidate solutions per modified question from GPT-4o and Gemini 1.5 Pro. We filtered thosequestions for which less than 4 (out of 16) agree with the expected final answer from code execution.We checked these questions manually and modified them if needed so that they are logical (about25% of questions).B Experiment DetailsSetup We evaluate each model on three test sets: 1) the original GSM8K test split ,2) the modifiedGSM8K test split which are the questions with X being substituted, and 3) the compositional GSMtest set. Each test set has 1200 examples. Following Zhang et al. (2024), we evaluate all models withan 8-shot prompt (Appendix I) for the original and modified GSM8K test splits. We also createda similar 8-shot prompt (Appendix J) for the compositional GSM questions. We evaluate GPT-4o,GPT-4o mini, LLAMA3-70B and 8B (PT and IT), Phi 2, Phi-3-mini-instruct, Gemini 1.0, 1.5 Flashand 1.5 Pro, Gemma2 9B and 27B (PT and IT), Mistral-7B (PT and IT), Mixtral-8x7B (PT and IT)and Mathstral-7B. All models are sampled with temperature 0, and pass @1(Chen et al., 2021) isused to measure the performance on each test split. Some of the models require a preamble prefixedto the 8-shot prompt in order to output in a consistent format (Appendix F). We test both cases andreport the best performance for each model.7020406080Compositional GSM Accuracy (%) 70B-IT 8B-ITLLAMA3+2%+69%02040608027B-IT 9B-ITGemma2+27%+74%0204060808x7B-IT 7B-ITMistral+71%+149%Natural Language CoT CodeFigure 6: Natural Language CoT v.s. Code: Generating code to solve the problems helps in both settingsof original test split and Compositional GSM split. Numbers above bars represent relative improvements overnatural language Chain-of-Thought (CoT) generation. Smaller models benefit more from generating code ratherthan natural language CoT to solve Compositional GSM questions, further highlighting that smaller modelsdemonstrate systematic differences in reasoning capabilities.C Thinking in Natural Language v.s. CodeBreaking down the natural language problem into executable code steps has been shown to improvemodels’ reasoning and generalization abilities (Gao et al., 2023; Gou et al., 2023). To this end, weevaluate whether the compositional problem-solving ability of LLMs improves when generatingnatural langauge CoT rationales compared to generating executable Python code. For code generation,we utilize a compositional 8-shot prompt(Appendix K), where the answers are written as two functions,one which solves the first question solve_q1() , and solution() which solves the second question with aX = solve_q1() line at the beginning.Our results are shown in Figure 6 for three families of open-weight instruction-tuned models:LLAMA3-8B and 70B, Gemma2-9B and 27B, and Mistral 7B and (Mixtral) 8×7B. Notably,generating code generally improves performance on Compositional GSM problems, albeit notuniformly. Specifically, comparing relative improvements, smaller models benefit significantlymore from generating code solutions compared to generating natural language CoTs. This rutherunderscores the systematic differences in reasoning behaviors of smaller models.D Finetuning Can Lead to Task OverfittingFinetuning models on task specific problems is a common strategy to improve reasoning performance.In this section, we explore how it impacts the performance on Compositional GSM. We investigatethe performance of Gemma2 27B PT as we finetune it on the original GSM8K training data, andself-generated rationales (aka synthetic data) to identify any difference in the characteristics of thesetwo sources. We collect self-generated rationales which results in correct final answers for all GSM8Ktraining queries.See Appendix H for details of data generation and training for this set of experiments. We evaluatedintermediate checkpoints (at 50, 100 and 400 training steps) from both settings on GSM8K originaltest split and Compositional GSM split (Figure 7). We observer a similar pattern for both settings.The Compositional GSM performance increases with some training (up to 100 steps), but drops withmore training steps while GSM8K test performance keeps increasing, which suggests overfitting.Our results show that training on synthetic data generally leads to a higher Compositional GSMperformance. We did not observe further improvements on either test splits after 400 training steps.860 70 80 90GSM8K T est Accuracy (%)20304050Compositional GSM Accuracy (%)50 steps100 steps100 steps400 steps400 stepsHuman Data Synthetic DataFigure 7: Human Data v.s. Synthetic Data: We finetune Gemma2 27B on the original GSM8K training data,and self-generated rationales. In both settings, after 100 training steps, Compositional GSM test performancedrops while original GSM8K test performance keeps increasing. No further improvements were observed oneither test split after 400 training steps.E Failure Modes of LLMs on Compositional QuestionsDoes Solving Question-1 Guarantee Solving Question-2? Correctly solving Question-1 is aprerequisite to solve Question-2 in the compositional format. In Figure 8, we look at how oftenmodels are able to solve a question independently versus how often can they solve it given they havecorrectly solved the previous question in the compositional format. What remains for the model todo here is to substitute Xand solve Q2. The deviation from the diagonal line indicates that certainmodels may have become too specialized in handling GSM8K-style questions, and are unable toanswer a second question having generated the solution to the first question. Our qualitative analysisshows that when given two questions, the model might answer the first one correctly, but oftenmakes subtle errors and overlooks details, leading to inaccurate reasoning and solution for the secondquestion.0 20 40 60 80 100% Q2 Solved in Non-Compositional format 020406080100% Q2 Solved |Q1=correctGemini 1.5 ProGemini 1.0 ProGemma2-27B-ITGemma2-9B-ITGemma2-27B-PTGemma2-9B-PTGPT-4oGPT-4o miniLLAMA3-70B-ITLLAMA3-8B-ITPhi-2Phi-3-mini-4k-ITMixtral-8x7B-ITMistral-7B-IT Mistral-7B-PTMathstral-7BGemini 1.5 FlashLLAMA3-8B-PTLLAMA3-70B-PTMixtral-8x7B-PTNuminaMath-7B-CoTQwen2.5-MATH-7B-ITQwen2.5-MATH-72B-ITFigure 8: Can models answer the second question if they have correctly answered the first one? Here, wecompare how often models are able to solve a question independently to how often they are able to solve themin the compositional format given that the first question is solved correctly. This is an alternate measurementof the compositional reasoning gap. If a model can solve a question independently, it should be able to solveit in a compositional setting given that the prerequisites are met. The gap from the diagonal line suggests thatsome models have overfit to the format of GSM8K type questions. While models may correctly answer the firstquestion, they frequently makes subtle errors and miss key details when solving the second question.920 30 40 50 60 70 80 90 100% Q1 Solved in Non-Compositional format20406080100% Q1 Solved in Compositional formatGemini 1.5 ProGemini 1.0 ProGemma2-27B-ITGemma2-9B-ITGemma2-27B-PTGemma2-9B-PTGPT-4oGPT-4o miniLLAMA3-70B-ITLLAMA3-8B-ITPhi-3-mini-4k-ITMixtral-8x7B-ITMistral-7B-ITMistral-7B-PTMathstral-7BGemini 1.5 FlashLLAMA3-8B-PTLLAMA3-70B-PTMixtral-8x7B-PTNuminaMath-7B-CoTQwen2.5-MATH-7B-ITQwen2.5-MATH-72B-ITFigure 9: Some LLMs get distracted easily: Measuring models’ ability to solve a question in the standardformat (non-compositional) versus solving the same question as Q1in the compositional format. Models belowthe trend-line get distracted and cannot answer Q1in the compositional format even though solving it does notdepend on solving any other question.Models Get Distracted Easily Assuming an LLM answers a question correctly, it is somewhatexpected that it would answer the same question correctly with additional context. Figure 9 showshow often a model answers a question (from Q1set) correctly on the x-axis, and how often it answersit correctly in our compositional format, as Q1. Ideally, models should be on the x=yline, butwe observe that most of the models fall short of this expectation. Examining the responses frommodels with greater deviations from the trendline in Figure 9 reveals that they frequently make subtleerrors. They often overlook important details, such as missing a reasoning step related to each inthe question or omitting a multiplication step when the question specifies in a month . The modelsgenerally adhere well to the output format provided in the 8-shot context, resulting in negligibleinstances of non-extractable answers. This distraction is caused by the existence of a second questionQ2in the prompt. Such failures lead to not being able to correctly answer Q1, which subsequentlyimpairs the models’ ability to answer Q2correctly.1 2 3 4 5 6 7 8 9 10 11 12Answer Magnitude (log scale)050100150200250FrequencyOriginalCompositionalFigure 10: Distribution of final answers from the test set of original GSM8K and compositional GSMbenchmark. The number modification in the compositional benchmark was done in a way to ensure that the newfinal answer is a positive integer not too far from the old answer. Our compositional GSM benchmark has asimilar distribution of final answers.10F Prmopt PreamblesGSM8K PreambleI am going to give you a series of demonstrations of math Problems and Solutions.When you respond, respond only with the Solution of the final Problem, thinkingstep by step. At the end of the Solution, when you give your final answer, writeit in the form “The final answer is ANSWER.”Compositional GSM PreambleI am going to give you a series of demonstrations of compositional math questionsand solutions. Respond by thinking step by step. Solve the first question andwrite the intermediate answer as “The Q1 answer is ANSWER1.“ Then solve Q2. Atthe end of the solution, when you give your final answer, write it in the form“The final answer is ANSWER2.”G GSM8K Original vs Modified30 40 50 60 70 80 90 100GSM8K Accuracy (Original)30405060708090100GSM8K Accuracy (Modified)Gemini 1.5 ProGemini 1.0 ProGemma2-27B-ITGemma2-9B-ITGemma2-27B-PTGemma2-9B-PTGPT-4oGPT-4o miniLLAMA3-70B-ITLLAMA3-8B-ITPhi-2Phi-3-mini-4k-ITMixtral-8x7B-ITMistral-7B-ITMistral-7B-PTMathstral-7BGemini 1.5 FlashLLAMA3-8B-PTLLAMA3-70B-PTMixtral-8x7B-PTNuminaMath-7B-CoTQwen2.5-MATH-7B-ITQwen2.5-MATH-72B-ITFigure 11: Original v.s. Modified GSM8K test accuracy . Most models are very close to the x=yline,indicating that contamination is not a significant concern.H Rejection Finetuning DetailsSynthetic data was generated by prompting Gemma2 27B PT model with the 8-shot prompt in Ap-pendix I to solve GSM8K training questions. We generated 10 solutions for each question in theoriginal GSM8K training data, and only kept those solutions with a correct final answer. These modelgeneration solutions were used to train the model.11I GSM8K 8-shot PromptQ: There are 15 trees in the grove. Grove workers will plant trees in the grovetoday. After they are done, there will be 21 trees. How many trees did thegrove workers plant today?A: There are 15 trees originally. Then there were 21 trees after some more wereplanted. So there must have been 21 - 15 = 6. The final answer is 6.Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many carsare in the parking lot?A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The finalanswer is 5.Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many piecesdo they have left in total?A: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The final answer is 39.Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12lollipops. How many lollipops did Jason give to Denny?A: Jason started with 20 lollipops. Then he had 12 after giving some to Denny.So he gave Denny 20 - 12 = 8. The final answer is 8.Q: Shawn has five toys. For Christmas, he got two toys each from his mom anddad. How many toys does he have now?A: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, thenthat is 4 more toys. 5 + 4 = 9. The final answer is 9.Q: There were nine computers in the server room. Five more computers wereinstalled each day, from monday to thursday. How many computers are now in theserver room?A: There were originally 9 computers. For each of 4 days, 5 more computers wereadded. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The final answer is29.Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday,he lost 2 more. How many golf balls did he have at the end of wednesday?A: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 -23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The final answeris 33.Q: Olivia has 23.Sheboughtfivebagelsfor 3 each. How much money does she have left?A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars.So she has 23 - 15 dollars left. 23 - 15 is 8. The final answer is 8.Q: {question}A:12J Compositional 8-shot PromptLet X be the answer to Q1:Q1: There are 15 trees in the grove. Grove workers will plant trees in thegrove today. After they are done, there will be 21 trees. How many trees didthe grove workers plant today?solve it and use the value of X to solve Q2. Explain your answer step by step.Q2: There are X students in Marissa’s class. Each student started the yearwith 10 pencils. After two months, 1/5 of the total pencils in class were used.At the end of the year, only 1/3 of the remaining pencils were left. How manypencils were left?A: There are 15 trees originally. Then there were 21 trees after some more wereplanted. So there must have been 21 - 15 = 6. The Q1 answer is 6. ThereforeX=6. So there were 6 x 10 = 60 pencils in the class at the start of the year.After two months, 60 x 1/5 = 12 pencils were used. Thus, 60 - 12 = 48 pencilswere left unused after two months. Therefore, 48 x 1/3 = 16 pencils were left atthe end of the year. The final answer is 16.Let X be the answer to Q1:Q1: If there are 3 cars in the parking lot and 2 more cars arrive, how many carsare in the parking lot?solve it and use the value of X to solve Q2. Explain your answer step by step.Q2: Ingrid drinks X cups of water every day. If there are 16 cups in a gallon,how many gallons of water does she drink in 32 days?A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The Q1 answeris 5. Therefore X=5. So Ingrid drinks 5 cups of water a day so after 32 daysshe drinks 5 * 32 = 160 cups of water. There are 16 cups in 1 gallon so shedrinks 160 / 16 = 10 gallons of water in 30 days. The final answer is 10....Let X be the answer to Q1:Q1: {QUESTION_1}solve it and use the value of X to solve Q2. Explain your answer step by step.Q2: {QUESTION_2}A:Some examples in the prompt are omitted due to space constraints. The remaining question-and-answer pairs follow the same format.13K Compositional Code PromptSome examples in the prompt are omitted due to space constraints. The remaining question andcode solutions are written with the same format. The answers are provided through two functions:solve_q1() , which addresses the first question, and solution() , which answers the second question.Thesolution() function begins with a line X = solve_q1() to get the result from the first function.Write two functions ‘solve_q1‘ and ‘solution‘ to solve Q1 and Q2 problems.Let X be the answer to Q1:Q1: There are 15 trees in the grove. Grove workers will plant trees in thegrove today. After they are done, there will be 21 trees. How many trees didthe grove workers plant today?Q2: There are X students in Marissa’s class. Each student started the yearwith 10 pencils. After two months, 1/5 of the total pencils in class were used.At the end of the year, only 1/3 of the remaining pencils were left. How manypencils were left?A: The answer is```def solve_q1():"""There are 15 trees in the grove. Grove workers will plant treesin the grove today. After they are done, there will be 21 trees. How many treesdid the grove workers plant today?"""trees_initial = 15trees_after = 21trees_added = trees_after - trees_initialresult = trees_addedreturn resultdef solution():"""There are X students in Marissa’s class. Each student started theyear with 10 pencils. After two months, 1/5 of the total pencils in class wereused. At the end of the year, only 1/3 of the remaining pencils were left. Howmany pencils were left?"""X = solve_q1()num_students = Xpencils_per_student = 10total_pencils = num_students * pencils_per_studentpencils_left_after_two_months = total_pencils * (4/5)remaining_pencils = pencils_left_after_two_months * (1/3)result = remaining_pencilsreturn result```...Let X be the answer to the following question:Q1: {QUESTION_1}Q: {QUESTION_2}A: The answer is14 |
QzOc0tpdef | NLIR: Natural Language Intermediate Representationfor Mechanized Theorem ProvingLaetitia TeodorescuAdaptive MLGuillaume BaudartUniversité Paris CitéCNRS, Inria, IRIFEmilio Jesús Gallego AriasUniversité Paris CitéCNRS, Inria, IRIFMarc LelargeDI ENS, PSL UniversityInriaAbstractFormal theorem proving is challenging for humans as well as for machines. Thanksto recent advances in LLM capabilities, we believe natural language can serve as auniversal interface for reasoning about formal proofs. In this paper, 1) we introducePétanque , a new lightweight environment to interact with the Coq theorem prover;2) we present two interactive proof protocols leveraging natural language as anintermediate representation for designing proof steps; 3) we implement beamsearch over these interaction protocols, using natural language to rerank proofcandidates; and 4) we use Pétanque to benchmark our search algorithms. Usingour method with GPT-4o we can successfully synthesize proofs for 58% of the first100/260 lemmas from the newly published Busy Beaver proofs.1 IntroductionThe general knowledge and reasoning abilities of frontier large language models (LLMs) makes thempractical as a backbone for building agents able to interact with interactive theorem provers (ITP).These agents should iteratively build proofs with help from proof engine feedback. While previouswork (e.g. Yang et al. [2023]) used a costly data collection procedure to finetune modestly sizedlanguage models, we believe that reasoning in natural language before outputting tactics will lead tobetter and more interpretable results. Recently, Thakur et al. [2024] showed promising preliminaryresults by using GPT-4 as an agent proposing tactics inside a backtracking search and using richfeedback from the proof environment.In this work, we develop infrastructure to allow communication between a GPT-4o-based agentand the Coq proof environment [The Coq Development Team, 2024]. Our key idea is to rely onnatural language as much as possible when generating proofs. Using natural language leverages thestrength of LLMs, and allows us to use chain-of-thought [Wei et al., 2022] by asking for an informalmathematical proof before generating the formal proof, making it more intuitive and comprehensiblecompared to purely automatic formal techniques. Additionally, partial proofs expressed in naturallanguage are easier for humans to understand, adapt, or reuse, allowing for greater flexibility andcollaboration between machine-generated suggestions and human mathematicians.We present the following contributions: 1) Pétanque : A new fast and lightweight environment tointeract with the Coq theorem prover. 2) Two interactive proof protocols both leveraging naturallanguage reasoning: tactic-by-tactic proof construction, and hierarchical proof templating. 3) Wecouple both protocols with standard search algorithms leveraging feedback from the ITP and usingnatural language to rerank proof candidates. 4) We evaluate this agent on a new dataset of textbook4th MATH-AI Workshop at NeurIPS’24 (2024)1ExplainReasonFormalizeNL-AgentPetanqueParseExecuteSerializetacticsgoaltheoremproof completeCurrent goalforall n m : nat, n + m = m + nTo begin with, let's use induction on n. This approach will allow us to handle the problem by breaking it down into a base case and an inductive step./step intros n m. induction n. Next goalsm : nat ⊢ 0 + m = m + 0 n : nat m : nat IHn : n + m = m + n ⊢ S n + m = m + S nFigure 1: Tactic-by-tactic proof construction.exercises and intermediate theorems from the recent Busy Beaver proof formalized in Coq ofBB(4) = 107 , [ccz181078, 2024]. NLIR is open source ( https://github.com/llm4coq/nlir ).2 Pétanque: a lightweight interactive environment for CoqA common difficulty when interacting with interactive proof assistants in the context of machinelearning is inadequate tooling (see for example [Reichel et al.]). Following existing work [Gal-lego Arias et al., 2016, Gallego Arias, 2019, Yang and Deng, 2019, Sanchez-Stern et al., 2020],we have built a new environment for machine to machine interaction for the Coq proof assistant,particularly tailored for interactive, high-throughput, low-latency learning applications. Pétanqueis based on Flèche [Gallego Arias, 2024], a new document manager for Coq. We extend Flèche byenabling Pétanque to access the Coq proof engine directly without requiring edits in the associateddocument. This makes our environment fast and lightweight. A Python interface provides easy accessto the API. See Appendix B for more information on Flèche and Pétanque.3 Proof interaction protocolsIn this section, we present two approaches leveraging LLMs’ ability to reason in natural language inorder to find a formal proof with the help of a proof assistant. Tactic-by-tactic proof constructionmimics the typical behavior of a standard Coq user: given the current goals, the agent generatesone or several tactics that updates the goals and repeats this process until the proof is complete. Bycontrast, hierarchical proof templating tries to generate full proofs directly. Failed tactics are thenreplaced with holes to obtain a proof template . The agent repeats the process of filling each holeuntil the proof is complete. Our approach’s originality is that although both protocols’ inputs (goals)and outputs (tactics) are Coq code, the agent internally uses natural language as an intermediaterepresentation to analyze the input and guide the code generation.3.1 Tactic-by-tactic proof constructionAn overview of the tactic-by-tactic proof construction agent is presented in Figure 1. Given a Coqtheorem, the agent first uses natural language to describe the goal and explain how to continue theproof (chain-of-thought). The last step synthesizes the corresponding Coq tactics. For instance, inFigure 1, the goal is to prove that addition over natural numbers is commutative. The agent decides totry a proof by induction and correctly synthesizes a sequence of two tactics: intros n m.introducestwo variables nandmof type nat(natural numbers), and induction n.starts an induction over n.The tactics are sent to the Pétanque environment, which parses and executes each tactic to updatethe current goal. A textual representation of the new goal is then fed back to the agent to makefurther progress in the proof. If the execution returns an error, the current goal does not change,but we augment the prompt with the failed tactics and ask the LLM to try something else for thenext attempt. For instance, in Figure 1, both tactics succeed and generate two new subgoals: thebase case (for n=0, prove m+ 0 = 0 + m) and the induction case (given the induction hypothesis22Theorem Sn_le_Sm__n_le_m : forall n m, S n <% S m -' n <% m. intros n m H. inversion H as [H0 | n' H0]. - { admit. } - { admit. }intros n m H. inversion H as [H0 | n' H0]. - apply H0. - apply H0.apply le_n. exact (le_S_n _ _ H). 1. Introduce n and m. 2. Introduce the hypothesis S n <= S m into the context. 3. Use the inversion lemma on the hypothesis to simplify it. 4. Conclude the proof by referring to the simplify hypothesis.1. Recognize that m <= m is always true. 2. Use the le_n constructor to finish the proof. Use hypothesis H to conclude n <= m. n, n', m : nat H : S n <% S m H0 : S n <% m H1 : n' = m ⊢ n <% mn, m : nat H : S n <% S m H0 : n = m ⊢ m <% mReasonTemplatizeFormalize~~~~~~~~~~~~~~~~~~intros n m H. inversion H as [H0 | n' H0]. - { apply le_n. } - { exact (le_S_n _ _ H). }Final proofFigure 2: Hierarchical proof templating.IHn:n+m=m+n, prove (n+ 1) + m=m+ (n+ 1) ). The textual representation of a goaluses the the symbol ⊢to separate hypotheses from the conclusion, and S ndenotes n + 1.Model Interface. In early experiments, we observed that conversation-style reasoning often diverges:after a few rounds, the output makes very little sense, and the agent never recovers. Following [Yanget al., 2024] – and similarly to [Thakur et al., 2024] – we use a synthetic interface to summarize ateach goal the global objective (initial theorem), the current goal (in the middle of a proof), and failedattempts to solve the same goal.3.2 Hierarchical proof templatingAn example execution of the hierarchical proof templating agent is presented in Figure 2. The agentpipeline is similar to the tactic-by-tactic method, but instead of focusing only on the next step, theagent generates a complete proof in natural language, before translating the proof in Coq syntax. Forinstance, in Figure 2, the agent uses the inversion tactics on the hypothesis Hwhich generate twosubgoals with a simpler hypothesis H0, and then tries to solve each subgoals using this H0hypothesis.The Pétanque environment then repairs the proof, replacing failed tactics by holes which admit andclose the current subgoal, removing subsequent tactics until the focus moves to the next subgoal.Pétanque then checks that the resulting template is correct, i.e., assuming a valid proof for each holes,the proof is complete. A textual representation of each holes is then fed back to the agent whichrepeat the process to fill the holes one by one. For instance, in Figure 2, apply H0fails on bothsubgoals. The agent then repeats the process for each holes, using focused fine-grain reasoning toprove the corresponding subgoal. The proof is complete when there are no more holes.4 Proof searchdef beam_search (n_steps ,n_actions ,beam_size ):s=petanque .start (thm)beam = [s]# Initial statefor step inrange (n_steps ):candidates = []for sinbeam :# Try multiple actions for each statefor ainagent .generate (s,n_actions ):sa=petanque .step (s,a)ifpetanque .proof_finished (sa):return sa.proof # Proof found!else:candidates =candidates + [sa]# Rank and sort candidatesbeam =agent .sort (candidates )[:beam_size ]return None # No proof foundWe combine our interactive protocol with the clas-sic beam search algorithm. Inspired by [Yao et al.,2023], we use the LLM to rank and sort the pro-posals at each step of the search. A simplified ver-sion of the code is presented on the right. At eachstep, agent .generate generates n_actions possi-ble steps (tactics or proofs). Each step is then vali-dated with petanque .step and the state of all the re-sulting candidates is stored. Then agent .sort callsthe LLM to discuss, compare and finally rank andsort the candidates for the next step.3For the tactic-by-tactic agent, the state contains the current goal obtained by running all the previoussteps from the initial goal, i.e., the theorem statement. At each step, agent .generate generatesmultiple possible tactics for the current goal. For each tactic proposed by the LLM, petanque .stepexecutes the tactic to compute the updated state. If the tactic is invalid, we log the failure and thestate is not modified.For the template agent, the state contains a template, i.e. a proof with holes and a queue containingpointers to these holes and the associated goals. At each step, agent .generate generates multiplepossible proofs for the first hole in the queue. For each proposed proof, petanque .step buildsthe corresponding sub-template. The updated state is obtained by replacing the current hole by thesub-template and adding the sub-templates holes to the end of the queue.As a baseline, our naive search corresponds to a beam search with n_action =1andbeam_size =1(inwhich case, the sorting step is useless).5 EvaluationLogical Foundations exercises : We extracted the exercises of Logical Foundations [Pierce et al.,2024], the first volume of the Software Foundation textbooks series that is widely used to introduceCoq. We extracted 179 exercices. Given the popularity of this textbook the risk of data leak ishigh. We filtered out 66 “easy” exercises that are solved with one shot prompting. This dataset thuscomprises 113 exercises.BB(4)lemmas : To avoid data leak issues, we extracted the 260 lemmas from the recent proof ofBB(4) = 107 [ccz181078, 2024]. The repository was created in April 2024, long after the knowledgecutoff date of the current version of GPT-4o (October 2023). To provide the necessary context for theproof, for each lemma we augment the prompt with all the preceding definitions and lemmas.Evaluation. The results are presented in the following table. We use Coq 8.19.2 and GPT-4oversion 2024-05-13 for all the experiments.Logical Foundations BB(4)tactics template templatenaive beam naive beam naive beam% success 30.1 46.0 (13.3) 25.6 (23.9) 38.9 (21.0) 38.0 (38.0) 58.0For both agents, we set n_actions =4andbeam_size =3, with n_steps =30for the tactics agent andn_steps =10for the template agent. While the tactics agent outperforms the template agent on theLogical Foundation benchmark, we observe that the template agent is significantly cheaper and fasterthan the tactics agent. By design the tactics agent requires much more interactions with the LLM toreach a full proof step by step.To limit the costs of our experiments, we only run the template agent on the first 100 Lemmas of theBB(4)benchmark. For the template agent, the gray numbers indicate the proportion of proofs thatare correct at the first try (no holes). See Appendix A and Tables 1 and 2 for more details.6 Related work and conclusionLLMs and theorem provers Automatic theorem-proving is a longstanding challenge in computerscience [Newell et al., 1957]. Recent work has used neural models based on autoregressive languagemodel that generate a proof tactic by tactic. Most works use finetuned LLMs [Polu and Sutskever,2020, Han et al., 2021, Wu et al., 2022, Yang et al., 2023, First et al., 2023], trained on (goal, tactic)pairs obtained from intermediate steps of existing proofs. On the other hand, Lample et al. [2022] useonline training, progressively collecting more data. Closest to our work, Thakur et al. [2024] build atactic-by-tactic LLM agent based on GPT-4 and also use an interface to summarize past interactions.They, however, do not use proof repair or beam search. Also close to our work, Wang et al. [2024] useproof repair over hierarchical proofs in Isabelle, coupled with best-first search. Contrary to us, theyuse fine-tuned models and no chain-of-thought. Finally, Lin et al. [2024] propose a framework for4training language models to produce informal thoughts prior to each step of a proof, thereby boostingthe model’s theorem-proving capabilities.Reasoning in LLMs This work is also related to recent investigations on the reasoning abilitiesof LLMs [Plaat et al., 2024]. Chain-of-Thought (CoT) prompting [Wei et al., 2022] was shownto improve LLM’s answers; subsequent work found that these reasoning abilities could be elicitedzero-shot [Kojima et al., 2022]. Further work interleaved CoT with decision-making [Yao et al.,2022], added search and complex control flow to reasoning [Chen et al., 2022, Yao et al., 2023, Bestaet al., 2024], incorporated refinement and feedback [Madaan et al., 2024, Shinn et al., 2024], andlearned to generate novel reasoning traces that proved beneficial for further training [Zelikman et al.,2022, 2024]. Like our work, many of these methods – especially the ones using search and refinement– make use of LLM-based scoring or ranking functions [Zheng et al., 2023].Conclusion In this work, we have presented a new agent for building proofs leveraging chain ofthought as an intermediate representation, and generating proofs by outputting step-by-step tactics orhierarchical proof templates. We couple this with beam search and natural language reranking andobtain good performance on a new evaluation set built with the help of our novel proof environment,Pétanque . Future work could investigate how one could use reinforcement learning to obtain betterreasoning and performance with smaller models [OpenAI, 2024].Acknowledgements We thank Cyril Cohen and Pierre Boutillier for many insightful discussions.We also thank Alex Sanchez-Stern for his feedback on early versions of Pétanque. This work issupported by the Inria Défi LLM4Code and the project ReaLiSe, Émergence Ville de Paris 2021-2025.ReferencesUmut A. Acar, Guy E. Blelloch, Matthias Blume, Robert Harper, and Kanat Tangwongsan. A libraryfor self-adjusting computation. In Nick Benton and Xavier Leroy, editors, ML Workshop , 2005.Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi,Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts:Solving elaborate problems with large language models. In AAAI , 2024.ccz181078. https://github.com/ccz181078/Coq-BB5/tree/main , 2024.Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompt-ing: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprintarXiv:2211.12588 , 2022.Emily First, Markus N. Rabe, Talia Ringer, and Yuriy Brun. Baldur: Whole-proof generation andrepair with large language models. CoRR , abs/2303.04910, 2023.Emilio Jesús Gallego Arias, Benoît Pin, and Pierre Jouvelot. jscoq: Towards hybrid theorem provinginterfaces. In Serge Autexier and Pedro Quaresma, editors, UITP , 2016.Emilio Jesús Gallego Arias. SerAPI: Machine-friendly, data-centric serialization for Coq. preprint,01 2019. URL https://github.com/ejgallego/coq-serapi/ .Emilio Jesús Gallego Arias. Flèche: Incremental validation for hybrid formal documents. underrevision, 2024.Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W Ayers, and Stanislas Polu. Proof artifactco-training for theorem proving with language models. arXiv preprint arXiv:2102.06203 , 2021.Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Largelanguage models are zero-shot reasoners. NeurIPS , 2022.Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury Hayat,Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. Hypertree proof search for neural theoremproving. NeurIPS , 2022.5Haohan Lin, Zhiqing Sun, Yiming Yang, and Sean Welleck. Lean-star: Learning to interleave thinkingand proving. arXiv preprint arXiv:2407.10040 , 2024.Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, UriAlon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinementwith self-feedback. NeurIPS , 2024.Allen Newell, John Clifford Shaw, and Herbert A Simon. Empirical explorations of the logic theorymachine: a case study in heuristic. In Western joint computer conference: Techniques for reliability ,pages 218–230, 1957.OpenAI. Learning to Reason with LLMs. https://openai.com/o1/ , 2024.Benjamin C. Pierce, Arthur Azevedo de Amorim, Chris Casinghino, Marco Gaboardi, MichaelGreenberg, C ̆at ̆alin Hri ̧ tcu, Vilhelm Sjöberg, and Brent Yorgey. Logical Foundations . SoftwareFoundations. 2024. Version 6.7, http://softwarefoundations.cis.upenn.edu .Aske Plaat, Annie Wong, Suzan Verberne, Joost Broekens, Niki van Stein, and Thomas Back.Reasoning with large language models, a survey. arXiv preprint arXiv:2407.11511 , 2024.Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.CoRR , abs/2009.03393, 2020.Tom Reichel, R. Wesley Henderson, Andrew Touchet, Andrew Gardner, and Talia Ringer. Proof repairinfrastructure for supervised models: Building a large proof repair dataset. In Adam Naumowiczand René Thiemann, editors, LIPIcs .Alex Sanchez-Stern, Yousef Alhessi, Lawrence K. Saul, and Sorin Lerner. Generating correctnessproofs with neural networks. In MAPL@PLDI , 2020.Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion:Language agents with verbal reinforcement learning. NeurIPS , 2024.Amitayush Thakur, George D. Tsoukalas, Yeming Wen, Jimmy Xin, and Swarat Chaudhuri. Anin-context learning agent for formal theorem-proving. In COLM , 2024.The Coq Development Team. The Coq reference manual – release 8.19.0. https://coq.inria.fr/doc/V8.19.0/refman , 2024.Haiming Wang, Huajian Xin, Zhengying Liu, Wenda Li, Yinya Huang, Jianqiao Lu, Zhicheng Yang,Jing Tang, Jian Yin, Zhenguo Li, and Xiaodan Liang. Proving theorems recursively. CoRR ,abs/2405.14414, 2024.Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi,Quoc V . Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large languagemodels. In NeurIPS , 2022.Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus N. Rabe, Charles Staats, Mateja Jamnik, andChristian Szegedy. Autoformalization with large language models. In NeurIPS , 2022.John Yang, Carlos E. Jimenez, Alexander Wettig, Kilian Lieret, Shunyu Yao, Karthik Narasimhan,and Ofir Press. Swe-agent: Agent-computer interfaces enable automated software engineering.CoRR , abs/2405.15793, 2024.Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In ICML ,2019.Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil,Ryan J. Prenger, and Animashree Anandkumar. Leandojo: Theorem proving with retrieval-augmented language models. In NeurIPS , 2023.Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao.React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629 ,2022.6Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan.Tree of thoughts: Deliberate problem solving with large language models. In NeurIPS , 2023.Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning withreasoning. NeurIPS , 2022.Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, and Noah D Goodman.Quiet-star: Language models can teach themselves to think before speaking. arXiv preprintarXiv:2403.09629 , 2024.Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica.Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. In NeurIPS , 2023.7A Detailed resultsA.1 Logical FoundationsFor the template agent, the gray numbers indicate the proportion of proofs that are correct at the firsttry (no holes). We also report the average length of the generated proofs (number of tactics) and thesize of the smallest and the biggest proofs. Details are presented in Table 1.tactics templatenaive beam naive beam total# success 34 52 (15) 29 (27) 44 113% success 30.1 46.0 25.6 38.9 100.0average proof length 10.6 9.4 16.0 12.4(min, max) proof length (4, 31) (4, 53) (4, 55) (4, 58)A.2 BB (4)For each methods, we also report the original proof sizes (mean, min, and max) on the set of lemmasthat were successfully proved. Details are presented in Table 2.templatenaive beam total# success (21) 38 (38) 58 100% success 38.0 58.0 100.0average proof length 13.7 15.4original average proof length 7.4 7.9(min, max) proof length (3, 38) (3, 54)original (min, max) proof length (2, 34) (2, 34)B From Flèche to PétanqueIn this section we will describe Pétanque, a new environment for lightweight interaction with formalproof documents. Pétanque targets machine-learning applications such as reinforcement learningand other agent-based use cases, providing zero-overhead ,purely functional1access to Coq’s proofengine, along with some utilities to implement custom proof search routines.Flèche Pétanque is built on top of Flèche [Gallego Arias, 2024], a new document manager for Coq.Flèche is both a formal document interpreter and a build system for Coq proof documents.A schematic view of Flèche’s behavior when the document is edited is presented in Figure 3. Flèchemaintains an enriched representation of Coq proof documents, including the relevant Coq statesassociated to the interactive proofs and their dependencies. When an edit occurs, Flèche onlyinvalidate the parts of the document that depend on that change, following standard incrementalcomputing practices [Acar et al., 2005].At any point, users can query Flèche for data about the document — for example information aboutthe current proof obligations at a given point of the document — and Flèche will compute therequested information on-demand, as fast as possible.coq-lsp Flèche’s edit / query interface accommodates seamlessly the Language Server Protocol(LSP) protocol, the standard way to provide programming language support in modern IntegratedDevelopment Enviroments (IDEs). The LSP server coq-lsp2built on top of Flèche thus providescontinuous real-time checking for Coq documents inside popular editors such as Emacs or VSCode.1computations are treated as stateless functions, i.e., for equal inputs, we obtain equal outputs2https://github.com/ejgallego/coq-lsp8Theorem Sn_le_Sm__n_le_m : forall n m, S n <% S m -' n <% m. intros n m H. inversion H as [H0 | n' H0]. - apply le_n.Decorated documentCoq InterpreterFlèchenext goalnext statestatecodeeditEditor / LSPEditor / LSPapply le_n.Memoize statesCoq documentin progressUser interfaceFigure 3: Flèche: a document manager for Coq. Flèche maintains a decorated document whereeach atom (definitions and proof steps) are associated with the Coq state (green dots). When anedit happens in the editor, Flèche retrieves the corresponding state, execute the code with the Coqinterpreter, stores the new state (blue dot) in the decorated document, and returns the next goal thatcan be visualized in the editor. Communication with the editor relies on the LSP protocol.class Pytanque :def start (self ,file :str,thm:str) -> Statedef run_tac (self ,state :State ,tac:str) -> Statedef goals (self ,state :State ) -> List [Goal ]Figure 4: A simplified view of the pytanque APIPétanque Unfortunately the edit/query document model turns out to be too expensive for high-throughput, proof-search applications: while Flèche invalidation on edits is very efficient, theassociated overhead starts to become a problem when the edit frequency is higher than a few timesper second. Moreover, using IDE protocols such as LSP means that agents need to exchange messagewith the server multiple times per step, which again creates non-trivial overhead. To overcome theprevious problems, Pétanque provides one-shot direct access to Coq’s proof state and tactic engine.A simplified view of the Pétanque API is presented in Figure 4. Using this API, agents can performspeculative proof checking without altering the original document.The start methods initialize a proof session where the initial Coq proof state correspond to thetheorem statement thmin the document file . Then, given a state, the run_tac method executes atactic tacand return the resulting state if successful. The goals method can be used to retrieve ahuman readable version of the proof goals (e.g., as in Figures 1 and 2).9Table 1: Detailed results for the Logical Foundations benchmark.tactics templatenaive beam naive beamBasics :andb_true_elim2 12 9 10 10Basics :lower_letter_lowers x 7 x 8Basics :grade_lowered_once 11 6 10 6Lists :eqblist_refl x x x xLists :count_member_nonzero x x x xLists :remove_does_not_increase_count x x x xLists :involution_injective x 8 7 7Lists :option_elim_hd x x x xLists :eqb_id_refl x 6 14 14Lists :update_eq 19 6 x 9Lists :update_neq 13 6 11 7Induction :add_comm x 10 x xInduction :even_S 12 x x xInduction :add_shuffle3 11 10 x xInduction :mul_comm x 16 x xInduction :plus_leb_compat_l x x x xInduction :mult_plus_distr_r x x x xInduction :mult_assoc x 11 x xInduction :add_shuffle3 ' 9 10 x xInduction :bin_to_nat_pres_incr x x x xInduction :nat_bin_nat x x x xInduction :bin_nat_bin x x x xImp:optimize_0plus_b_sound x x x xImp:pup_to_2_ceval x x x xImp:loop_never_stops x x x xImp:no_whiles_eqv x x x xImp:execute_app x x x xImp:s_compile_correct x x x xImp:break_ignore 4 4 4 4Imp:while_continue 5 4 4 6Imp:while_stops_on_break x 4 x xImp:seq_continue x x x xImp:seq_stops_on_break 4 5 x xImp:while_break_true 4 4 4 4Imp:ceval_deterministic x x 30 9IndProp :ev_double 9 7 11 11IndProp :ev5_nonsense 7 7 x xIndProp :ev'_ev x x x xIndProp :ev_plus_plus x x x xIndProp :total_relation_is_total x x x xIndProp :empty_relation_is_empty 5 5 7 8IndProp :O_le_n 4 4 10 10IndProp :Sn_le_Sm__n_le_m 16 5 9 9IndProp :lt_ge_cases x x x xIndProp :le_plus_l x 6 11 11IndProp :plus_le x x x xIndProp :add_le_cases x x x xIndProp :plus_le_compat_r x 14 x xIndProp :le_plus_trans x 15 x xIndProp :n_lt_m__n_le_m x 6 7 9IndProp :plus_lt x x x xIndProp :leb_complete x x x 23IndProp :leb_correct x x x xIndProp :leb_true_trans 12 11 x 11IndProp :R_equiv_fR x x x xIndProp :subseq_refl x x x xIndProp :subseq_app 8 4 4 4tactics templatenaive beam naive beamIndProp :subseq_trans 4 4 x 6IndProp :reflect_iff 11 12 20 18IndProp :eqbP_practice x x x xIndProp :merge_filter 19 4 29 4IndProp :pal_app_rev x x x xIndProp :pal_rev 4 4 4 4IndProp :palindrome_converse x x x xIndProp :pigeonhole_principle x x x xIndProp :regex_match_correct x x x xPoly :rev_involutive 14 9 12 12Poly :map_rev x x x xPoly :uncurry_curry x x x xPoly :curry_uncurry x x x xImpCEvalFun :ceval__ceval_step x x x xLogic :leb_plus_exists x x x xLogic :In_map_iff 31 28 x 46Logic :In_app_iff x x 55 xLogic :All_In x x x xLogic :combine_odd_even_intro x x x xLogic :combine_odd_even_elim_odd x x x xLogic :combine_odd_even_elim_even x x x xLogic :eqb_neq x 15 x xLogic :eqb_list_true_iff x x x xLogic :forallb_true_iff x x x xLogic :tr_rev_correct x x x xLogic :excluded_middle_irrefutable x 16 x 16Rel:total_relation_not_partial_function x x x xRel:lt_trans ' 18 6 x 4Rel:lt_trans '' 18 9 x 12Rel:le_S_n 7 5 x 9Rel:le_not_symmetric x 7 x 7Rel:le_antisymmetric 7 9 x xRel:le_step x x x xRel:rtc_rsc_coincide x x x 30IndPrinciples :booltree_ind_type_correct x x x xIndPrinciples :Toy_correct x x x xIndPrinciples :reflect_involution x x x xMaps :t_update_neq x 12 10 14Maps :t_update_permute x x x xTactics :rev_exercise1 9 7 17 15Tactics :eqb_true x x x xTactics :plus_n_n_injective x x 34 xTactics :combine_split x x 21 20Tactics :bool_fn_applied_thrice 21 16 35 xTactics :eqb_sym x x x 17Tactics :eqb_trans 10 x x xTactics :split_combine x x x xTactics :existsb_existsb ' x x x xProofObjects :ev_8 7 7 7 7ProofObjects :pe_implies_pi x 12 x 11AltAuto :ev100 x 53 58 55AltAuto :andb3_exchange x 4 x 4AltAuto :andb_true_elim2 4 6 10 10AltAuto :andb3_exchange ' x 12 x 23AltAuto :nor_comm ' 12 10 x 10AltAuto :nor_not ' x 11 x 1010Table 2: Detailed results for the BB (4)benchmark.orig. naive beamffx_eq_x_inj 10 7 7enc_v1_eq 6 x xenc_pair_inj 12 x xenc_list_inj 16 x xandb_shortcut_spec 3 7 9orb_shortcut_spec 3 9 7set_ins_spec 33 x xempty_set_WF 10 19 16pop_back_len 8 x 20pop_back__nth_error 15 x 54list_eq__nth_error 34 37 44pop_back '__push_back 6 x xSt_enc_inj 2 5 4St_eqb_spec 3 3 4Sigma_eqb_spec 3 x xSigma_enc_inj 2 x xlistSigma_inj 12 38 23map_inj 9 29 29listT_enc_inj 7 6 6Dir_eqb_spec 3 11 3St_list_spec 4 x 12Sigma_list_spec 4 13 8Dir_list_spec 4 13 13forallb_St_spec 9 x 14forallb_Sigma_spec 9 18 17forallb_Dir_spec 9 x 13Steps_trans 9 x xSteps_unique 11 x 19Steps_NonHalt 22 x xHaltsAt_unique 16 x xNonHalt_iff 27 x xLE_step 10 x 14LE_Steps 10 13 12LE_NonHalts 8 x xHaltTimeUpperBound_LE_NonHalt 7 x xLE_HaltsAtES_1 11 x xLE_HaltsAtES_2 14 x xHaltTimeUpperBound_LE_Halt 15 x xSt_swap_swap 12 x xTrans_swap_swap 7 8 8option_Trans_swap_swap 7 10 10TM_swap_swap 8 x 15ExecState_swap_swap 7 6 6step_swap 18 x 48step_halt_swap 10 x 39Steps_swap 27 x xLE_swap_0 7 x 23LE_swap 9 x xInitES_swap 8 x 15HaltsAt_swap_0 15 x 17orig. naive beamHaltsAt_swap 9 31 30HaltTimeUpperBound_LE_swap 10 x xHaltTimeUpperBound_LE_swap_InitES 5 x xTrans_rev_rev 7 6 8option_Trans_rev_rev 8 11 10TM_rev_rev 7 8 11Tape_rev_rev 7 12 9ExecState_rev_rev 7 6 6fext_inv 3 5 5step_rev 44 x xstep_halt_rev 11 x xSteps_rev 27 x xLE_rev_0 7 19 19LE_rev 9 x xInitES_rev 3 8 6HaltsAt_rev_0 15 20 18HaltsAt_rev 9 x xHaltTimeUpperBound_LE_rev 10 x xHaltTimeUpperBound_LE_rev_InitES 5 x xTrans_swap_id 10 x xisUnusedState_spec 58 x xstep_UnusedState 11 13 17Steps_UnusedState 15 x xHaltTimeUpperBound_LE_HaltsAtES_UnusedState 68 x xTM0_LE 7 x xUnusedState_TM0 10 12 21UnusedState_dec 4 x 12HaltTimeUpperBound_LE_HaltAtES_MergeUnusedState 31 x xSt_to_nat_inj 4 5 5St_suc_le 4 x 3St_suc_eq 5 x 14St_suc_neq 3 17 8HaltTimeUpperBound_LE_HaltAtES_UnusedState_ptr 21 x xHaltsAtES_Trans 9 x 25UnusedState_upd 68 x xUnusedState_ptr_upd 97 x xisHaltTrans_0 3 21 18CountHaltTrans_upd 7 x xCountHaltTrans_0_NonHalt 21 x xTrans_list_spec 6 x 8St_leb_spec 13 x 10TM_simplify_spec 6 9 7TM_upd '_spec 5 9 8nat_eqb_spec 3 11 11TNF_Node_expand_spec 64 x xTNF_Node_NonHalt 6 x 9HaltDecider_cons_spec 7 16 39SearchQueue_upd_spec 74 x xSearchQueue_upd_bfs_spec 30 x xSearchQueue_reset_spec 13 29 2611 |
QalNAKG48c | DafnyBench: A Benchmark for Formal SoftwareVerificationChloe Loughridge∗Harvard [email protected] Sun∗†Massachusetts Institute of [email protected] [email protected] CassanoNortheastern [email protected] SunStanford [email protected] ShengStanford [email protected] MudideMassachusetts Institute of [email protected] Rakib Hossain MisuUniversity of California [email protected] AminHarvard [email protected] TegmarkMassachusetts Institute of [email protected] introduce DafnyBench, the largest benchmark of its kind for training andevaluating machine learning systems for formal software verification. We test theability of LLMs such as GPT-4 and Claude 3 to auto-generate enough annotationsfor the Dafny formal verification engine to successfully verify over 750 programswith about 53,000 lines of code. The best model and prompting scheme achieved68% success rate, and we quantify how this rate improves when retrying with errormessage feedback and how it deteriorates with the amount of required code andannotations. We hope that DafnyBench will enable rapid improvements from thisbaseline as LLMs and verification techniques grow in quality.1 IntroductionRapidly improving Large Language Models (LLMs) [ 1–3] are helping accelerate software develop-ment through program synthesis tools. But how can we ensure that LLM-generated code meets ourspecifications and reliably does precisely what it is supposed to do? Indeed, this remains a persistentproblem even with human-written code: major code-testing efforts failed to prevent e.g. bugs causingan Ariane-V rocket explosion [4] and security vulnerabilities in ssh [5] and the Bash shell [6].Although formal verification can guarantee reliability, providing rigorous mathematical proof thatsoftware meets specification, it has yet to gain widespread adoption. Formally verifying code is often*Equal contribution. Order determined alphabetically.†Corresponding author.38th Conference on Neural Information Processing Systems (NeurIPS 2024).a significant burden on the developer [ 7,8]. Also, existing formal verification tools involve a majorlearning curve above and beyond just coding, greatly reducing the pool of people able to do this work.Machine learning methods have the potential to minimize a common pain point of formal methods,i.e., writing formal specifications. To support automation of formal verification, this paper’s goal is tobuild a benchmark by assembling a suite of formally verified programs written in Dafny , a formalverification language developed for easy adoption by programmers due to its similarity with popularimperative programming languages such as Python [ 9]. For formal verification to succeed, most ofthese programs require supplementary “annotations” to guide the automated theorem prover.2 Related WorkAs summarized in Table 1 below, there is a striking lack of training data for formal verification: whilethere are hundreds of thousands of training examples for proving mathematical theorems and over tenthousand training examples for synthesizing programs, there are only 66 + 153 = 219 for provingprogram correctness. This motivates our work in the current paper to expand the largest existingformal verification benchmarks from Clover [10] and dafny-synthesis [11].Table 1: Summary of popular machine learning benchmark datasets for proving mathematicaltheorems, synthesizing programs, and formally verifying programs. Size is measured by the numberof samples in each dataset. In the formal reasoning datasets, each sample is usually a math problemor a theorem. In the program synthesis and verified software programming benchmarks, each sampleis a program.Category Dataset SizeMathematical theorem provingCoqGym [12] 71,000 proofsLeanDojo [13] 98,734 proofsPISA [14] 138,000 proofsNatural Proofs [15] 15,000 proofsArchive of Formal Proofs [16] 1 million lines of codeUnverified program synthesisAPPS [17] 10,000 programsHumanEvalX [18, 19] 165 programsMBPP [20] 974 programsSWEBench [21] 2,294 programsLiveCodeBench [22] grows weeklyFormal software verificationClover [10] 66 programsDafny-synthesis [11] 153 programs3 DafnyBench Construction3.1 Sourcing Ground Truth ProgramsIn total, our DafnyBench benchmark contains 782 ground_truth stand-alone Dafny programs thatcompile and verify. These programs come from the following sources:•GitHub Scrape : We scraped all publicly available Dafny files on GitHub published on andbefore the end of 2023. We adapted a deduplication script from [ 23] to retain a unique setof the scraped Dafny files. The deduplication process reduced the number of .dfy filesfrom∼15,000 to ∼5,000. We removed any files that did not verify, which left 1,112 files.We found 374 of these files lacked ensures statements (postconditions) and 459 of themlacked assert andinvariant clauses (annotations), and removed the union of these sets,which left 556 ground_truth files. Out of these files, 113 verify without any annotations.•Clover : We added 62 ground truth textbook Dafny programs provided by the Cloverbenchmark [10]. Out of these files, 23 verify without any annotations.2•Dafny-synthesis : Finally, we included 164 Dafny programs provided by the dafny-synthesisbenchmark. These problems have been translated from the MBPP benchmark [ 11]. Out ofthese files, 72 verify without any annotations.Theground_truth programs in our dataset have on average 2.12 methods, 1.03 functions, and1.40 lemmas. This places the mean complexity of our examples at a level higher than Clover [10]alone, which has only one stand-alone method per example. For more detailed summary statistics ofDafnyBench dataset, see Appendix A.3.2 Task Design: Fill AnnotationsWe implemented the fill_annotations task. For this task, we took a ground_truth program,removed all of its annotations (all of the assert andinvariant statements in the body of the code),and asked LLM to fill annotations back in so that the resulting program could be verified with Dafny.Evaluation Metric An LLM’s attempt to fill annotations back in for a test program is countedas a success if all following conditions are satisfied: 1) The reconstructed program is verified withDafny; 2) LLM preserves all preconditions ( requires statements) and postconditions ( ensuresstatements); and 3) LLM does not use {:verify false} or{assume false} to "cheat."method LinearSearch <T >(a: array <T>, P: T -> bool ) returns (n: int)ensures 0 <= n <= a. Lengthensures n == a. Length || P(a[n])ensures forall i :: 0 <= i < n ==> !P(a[i]){n := 0;while n != a. Lengthinvariant 0 <= n <= a. Lengthinvariant forall i :: 0 <= i < n ==> !P(a[i]){if P(a[n]) {return ;}n := n + 1;}}Figure 1: A verified ground_truth program. To create fill_annotations task, we remove theinvariant lines, and ask LLM to fill back in equivalent lines so that the resulting program verifies.4 Experiments4.1 Hyperparameters & PromptsWe set max_tokens = 4096 , which corresponds to the lowest max output token limit among all theevaluated models, and we set temperature = 0.3. We gave each model up to n= 10 attempts at agiven file. If the model failed on any of the intermediate attempts, it received the Dafny error messageand was asked to fill in annotations again with the error message taken into consideration. If it failedon all nattempts, it was considered to fail on that specific test program. See Appendix C for prompts.4.2 Basic ResultsWe tested GPT-4o, GPT-4 Turbo [ 24], GPT-3.5 Turbo [ 25], Claude 3 Opus [ 2], and CodeLlama-7b-Instruct-hf [ 26] on the 782 programs. Table 2 shows that Claude 3 Opus achieved the highest successrate∼68%.34.3 Difficulty Utilizing Dafny Error MessagesFigure 2 shows how the cumulative success rate improved with more attempts n. We see that the bestmodels succeeded on the first try about 54%, with rapidly diminishing returns after that, approachinga plateau at about 65% for n∼5. This suggests that the LLMs are not great at taking Dafny errormessages into consideration, or struggle to cope with the underlying task.Model % SuccessNo LLM 26.9GPT-3.5 Turbo 44.0±1.8GPT-4 Turbo 59.8±1.8GPT-4o 59.3±1.8Claude 3 Opus 67.8±1.7CodeLlama-7b-Instruct-hf 28.0±1.6Table 2: Models’ success rates at writingannotations for DafnyBench, with n= 10attempts given. Dafny succeeds in auto-verifying some programs even without annota-tions, corresponding to the “No LLM" 26.9%success rate baseline.Figure 2: Success rate vs. number ofattempts given.4.4 Difficulty Grows with Program LengthFigure 3a shows that the success rate drops with program length. An obvious explanation could bethat there is more to verify. Also, as a program gets longer, there may be more dependencies amongvariables, functions, methods, and classes, increasing the overall verification difficulty level.4.5 Difficulty Grows with Annotation QuantityFigure 3b shows that the success rate drops with annotation quantity, defined as the number ofcharacters in the lines of annotations. In other words, the success rate drops with the amount of workthat LLM needs to do (the amount of text that it needs to insert in the right places).(a) (b)Figure 3: Mean success rate of each bin vs. program length (a) , and mean success rate of eachbin vs. annotation quantity (b) . The vertical lines indicate the bin boundaries used, where the binshave an almost uniform distribution of the programs. Note the bins are different for the two metrics.For visual clarity, the scales are adjusted for both plots and their x-axes do not start at 0character.4.6 Models’ Common Failure TypesTo analyze where LLMs failed on the benchmark, we categorized failures into nine types, includingverification logic error, code logic error, type error, resolution error, syntax issue, altered specification,4timeout, trivial verification, and others. For a test program that a model failed at, we: 1) checked fortimeout, cheating by altering specification, and cheating by trivial verification; and 2) passed Dafnyerror message from the failed program to Claude and asked it to classify the failure type. Table 3explains each failure type, and Figure 4 gives by-model statistics of failure types.Table 3: Examples of failure types . Note that the examples are samples, not a complete list, for eachfailure type.Failure Type ExamplesCode logic error Index out of range / Target object might be nullVerification logic error Cannot prove termination / Assertion might not holdSyntax issue lbrace/rbrace expected / Semicolon expected / Unresolved identifierType error Value does not satisfy the subset constraints of ’nat’Resolution error Boogie program had... resolution errorsTimeout Verification timeoutTrivial verification Cheating by using {:verify false} orassume falseAltered specification Cheating by altering provided specificationOther Failure type not belonging to any listed category aboveFigure 4: Counts of failures by failure type and by model . Note that a model could have multiplefailures for a single test program (for example, it might have both verification logic error and syntaxissue). Also note that the closed-source models had most of their failures at verification logic, whilethe open-source model had most of its failures at syntax issues and cheating by altering specification.5 Discussion & ConclusionsWe have assembled the largest machine learning benchmark to date for formal software verificationand made it publicly available on GitHub at https://github.com/sun-wendy/DafnyBench .5.1 Benchmark Evaluation LimitationsData contamination emerges as a potentially significant limitation for evaluating LLMs on Dafny-Bench. Scraping data from platforms such as GitHub introduces risks of leveraging previous models’training data into the benchmark evaluation, potentially inflating the abilities of certain models.Another limitation emerges in that DafnyBench does not assess a model’s competence in translatingnatural language into concise formal specifications. Arguably, this conversion is a demanding andcrucial skill we seek from language models: the capacity to validate, beyond merely verifying code.The pivotal question is whether a model can assist in identifying the essential properties an algorithmmust fulfill. This provides an exciting frontier for future work, which we begin to brainstorm inAppendix D.For further discussion on LLM’s potential for auto-verifying program synthesis and synthesizingspecifications from natural language, see Appendix E.5Acknowledgements: The authors wish to thank Clark Barrett, Rustan Leino, Daniel Windham,David Brandfonbrener, William Byrd, Josh Engels, and Anastasiya Kravchuk for helpful discussions.References[1]Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, EceKamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial generalintelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712 , 2023.[2]Anthropic. The claude 3 model family: Opus, sonnet, haiku. Technical report, Anthropic, 2024.[3]Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highlycapable multimodal models. arXiv preprint arXiv:2312.11805 , 2023.[4]European Space Agency. Flight 501 failure, 1996. URL https://esamultimedia.esa.int/docs/esa-x-1819eng.pdf . Accessed: 2024-06-04.[5] Heartbleed. The heartbleed bug. https://heartbleed.com/ , 2024. Accessed: 2024-06-04.[6]Wikipedia contributors. Shellshock (software bug). https://en.wikipedia.org/wiki/Shellshock_(software_bug) , 2024. Accessed: 2024-06-04.[7] Li Huang, Sophie Ebersold, Alexander Kogtenkov, Bertrand Meyer, and Yinling Liu. Lessonsfrom formally verified deployed software systems (extended version), 2024. URL https://arxiv.org/abs/2301.02206 .[8]Marcelo Orenes-Vera, Margaret Martonosi, and David Wentzlaff. Using llms to facilitate formalverification of rtl, 2023. URL https://arxiv.org/abs/2309.09437 .[9] K Rustan M Leino. Program Proofs . MIT Press, 2023.[10] Chuyue Sun, Ying Sheng, Oded Padon, and Clark Barrett. Clover: Closed-loop verifiable codegeneration. In ICLR 2024 Conference , 2024. URL https://openreview.net/forum?id=oSuVEv4X7w .[11] Md Rakib Hossain Misu, Cristina V . Lopes, Iris Ma, and James Noble. Towards ai-assistedsynthesis of verified dafny methods. Proc. ACM Softw. Eng. , 1(FSE), 2024. doi: 10.1145/3643763. URL https://doi.org/10.1145/3643763 .[12] Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants,2019.[13] Kaiyu Yang, Aidan M. Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, SaadGodil, Ryan Prenger, and Anima Anandkumar. Leandojo: Theorem proving with retrieval-augmented language models, 2023.[14] Pisa: Isabelle proofs dataset. https://aitp-conference.org/2021/abstract/paper_17.pdf , 2021.[15] Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, and KyunghyunCho. Naturalproofs: Mathematical theorem proving in natural language. In Thirty-fifth Confer-ence on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1) ,2021. URL https://openreview.net/forum?id=Jvxa8adr3iY .[16] Jasmin Christian Blanchette, Maximilian Haslbeck, Daniel Matichuk, and Tobias Nipkow.Mining the archive of formal proofs. In Manfred Kerber, Jacques Carette, Cezary Kaliszyk,Florian Rabe, and V olker Sorge, editors, Intelligent Computer Mathematics , pages 3–17, Cham,2015. Springer International Publishing. ISBN 978-3-319-20615-8.[17] Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo,Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt. Measuring codingchallenge competence with apps. arXiv preprint arXiv:2105.09938 , 2021.6[18] Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen,Andi Wang, Yang Li, Teng Su, Zhilin Yang, and Jie Tang. Codegeex: A pre-trained model forcode generation with multilingual evaluations on humaneval-x, 2023.[19] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, JaredKaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating largelanguage models trained on code, 2021.[20] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, DavidDohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesiswith large language models, 2021.[21] Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, KarthikNarasimhan, et al. Swe-bench: Can language models resolve real-world github issues?, 2023.[22] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Ar-mando Solar-Lezama, Koushik Sen, and Ion Stoica. Livecodebench: Holistic and contaminationfree evaluation of large language models for code. arXiv preprint , 2024.[23] Chenghao Mou, Chris Ha, Kenneth Enevoldsen, and Peiyuan Liu. Chenghaomou/text-dedup:Reference snapshot, September 2023. URL https://doi.org/10.5281/zenodo.8364980 .[24] OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Flo-rencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat,Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao,Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro,Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman,Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, An-drew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, FotisChantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, ChesterCho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, CoryDecareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, SteveDowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus,Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges,Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, JonathanGordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, YufeiGuo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke,Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu,Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang,Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan,Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, LoganKilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirchner, Jamie Kiros,Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis,Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike,Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, MateuszLitwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Man-ning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, BobMcGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, DavidMedina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, VinnieMonaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély,Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, HyeonwooNoh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pan-tuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov,Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Pondede Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, AletheaPower, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh,Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez,Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt,David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh,Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Kata-rina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski7Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine B. Thompson, PhilTillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, JuanFelipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea V oss, Carroll Wainwright,Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, AkilaWelihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, ClemensWinter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu,Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers,Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk,and Barret Zoph. Gpt-4 technical report, 2024.[25] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal,Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, ArielHerbert-V oss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, MateuszLitwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, AlecRadford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.[26] Code llama: Fine-tuning llama for code generation. https://huggingface.co/Phind/Phind-CodeLlama-34B-v2 , 2022.[27] Steve Klabnik and with contributions from the Rust Community Carol Nichols. The Rust Pro-gramming Language . No Starch Press, second edition, 2021. URL https://doc.rust-lang.org/stable/book/ .[28] David Brandfonbrener, Sibi Raja, Tarun Prasad, Chloe Loughridge, Federico Cassano, JianangYang, Simon Henniger, William E. Byrd, Robert Zinkov, and Nada Amin. Verified multi-stepsynthesis using large language models and monte carlo tree search, 2023.[29] Zhaoyu Li, Jialiang Sun, Logan Murphy, Qidong Su, Zenan Li, Xian Zhang, Kaiyu Yang, andXujie Si. A survey on deep learning for theorem proving. arXiv preprint arXiv:2404.09939 ,2024.[30] Design Automation Standards Committee and Automatic Test Program Generation Subcom-mittee. Ieee standard for vhdl language reference manual. IEEE Std 1076-2019 , pages 1–673,2019. doi: 10.1109/IEEESTD.2019.8938196.[31] Donald Thomas and Philip Moorby. The Verilog Hardware Description Language . SpringerPublishing Company, Incorporated, 5th ed. edition, 2008. ISBN 0387849300.[32] Adam Procter, William L. Harrison, Ian Graves, Michela Becchi, and Gerard Allwein. Semanticsdriven hardware design, implementation, and verification with rewire. In Proceedings of the16th ACM SIGPLAN/SIGBED Conference on Languages, Compilers and Tools for EmbeddedSystems 2015 CD-ROM , LCTES’15, New York, NY , USA, 2015. Association for ComputingMachinery. ISBN 9781450332576. doi: 10.1145/2670529.2754970. URL https://doi.org/10.1145/2670529.2754970 .[33] Xun Li, Mohit Tiwari, Jason K. Oberg, Vineeth Kashyap, Frederic T. Chong, Timothy Sherwood,and Ben Hardekopf. Caisson: a hardware description language for secure information flow.InProceedings of the 32nd ACM SIGPLAN Conference on Programming Language Designand Implementation , PLDI ’11, page 109–120, New York, NY , USA, 2011. Association forComputing Machinery. ISBN 9781450306638. doi: 10.1145/1993498.1993512. URL https://doi.org/10.1145/1993498.1993512 .[34] Wikipedia contributors. Jaccard index — wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/Jaccard_index , 2024. Accessed: 2024-03-27.[35] Wikipedia contributors. N-gram — Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/N-gram , 2024. Accessed: 2024-03-27.[36] Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Jeff Huang, Chuyue Sun, Cody Hao Yu, ShiyiCao, Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark Barrett, and Ying Sheng.Efficiently programming large language models using sglang, 2023.A Summary Statistics of DafnyBench Dataset8Table 4: Mean and maximum values that describe attributes of a DafnyBench test program.Mean Max# Methods 2.12 42# Functions 1.03 42# Lemmas 1.40 35# Characters 1916.47 28736# Annotation characters 261.23 6019B Overview of Evaluating LLM on DafnyBenchFigure 5: Overview of evaluating LLM on a DafnyBench test program.C Prompt Engineering for Annotation ReconstructionWe based our prompts on the prompts used in the Clover benchmark [ 10], one of the previouslylargest such benchmarks, since they provide a fairly rigorous precedent. We tried to keep promptsmostly the same across models in order to reduce the difference between model performances thatis caused by prompts. However, the prompts are not fully identical. For example, when we askLLM to simply return the annotations-filled program without any explanation, Claude 3 tends to addexplanations that interfere with Dafny compilation. Thus, we had to adjust some prompts slightly tofit each model’s peculiarities.9C.1 GPT Model Famly PromptsSYSTEM_PROMPT = "You are an expert in Dafny. You will be given tasks dealingwith Dafny programs including precise annotations."USER_PROMPT = "Given a Dafny program with function signature, preconditions,postconditions, and code, but with annotations missing.Please return a complete Dafny program with the strongestpossible annotations (loop invariants, assert statements,etc.) filled back in. Do not explain. Please use exactly thesame function signature, preconditions, and postconditions.Do not ever modify the given lines. Below is the program:"C.2 Claude 3 Opus PromptsSYSTEM_PROMPT = "You are an expert in Dafny. You will be given tasks dealingwith Dafny programs including precise annotations. You shouldonly return code body in all circumstances. No text is allowed."USER_PROMPT = "Given a Dafny program with function signature, preconditions,postconditions, and code, but with annotations missing.Please return a complete Dafny program with the strongestpossible annotation (loop invariants, assert statements,etc.) filled back in. Do not explain or output any text. Ifyou have to explain, put all explanations in comments form.There should only be code body in your output. Please useexactly the same function signature, preconditions, andpostconditions. Do not ever modify the given lines. Belowis the program:\n‘‘‘dafny\n"C.3 CodeLlama-7b-Instruct-hf PromptsThe prompts for CodeLlama-7b-Instruct-hf are the same as those in C.2.D Proposals for Evaluating Strength of Generated SpecificationsThe evaluation of models’ capability to generate formal specifications might be enhanced by integrat-ing the process with the creation of positive and negative test cases for each Dafny implementation.This approach proposes a reward system where models are evaluated based on the number of positivetest cases their formal specifications support and the number of negative test cases they successfullyreject. However, this method introduces a new challenge: ensuring the test cases accurately reflect thecomprehensive meaning intended in the natural language descriptions. The consistency and validityof these test cases become critical, raising questions about the methods used to generate and verifythem.E Further DiscussionE.1 Opportunities for Larger BenchmarksIt will be valuable to further expand formal verification benchmarks, which still remain more thantwo orders of magnitude smaller than corresponding benchmarks for mathematical theorem proving.One convenient way to expand the number of available problems may involve incorporating Dafnyprograms from GitHub that have dependencies spread across multiple files (while DafnyBenchencompasses increasingly complex multi-step programs, its programs each fit in a single file, avoidingthe intricacies associated with distributed files or the integration of external libraries).Perhaps models that perform especially well on this initial benchmark can later be used to expand itby translating existing Python benchmark problems into Dafny, Rust [ 27] or other popular formalverification languages.10A subset of the programs we scraped from GitHub do not have appropriate docstrings. By buildinga benchmark with better code documentation, models may be able to leverage helpful contextualinformation to better constructing verification annotations.E.2 Opportunities for Improved LLM ResultsWe evaluated the models with a fixed temperature setting and a max output token limit of 4096,and we used prompts that were manually but not very systematically tuned for effectiveness (seeAppendix C) — all of these choices probably leave room for improvement.We do not yet provide an official training dataset or models custom-trained to do well on theDafnyBench evaluation set. However, we do provide the full json file produced by the GitHub scrape,and we separately provide the names of the files we use for the evaluation benchmark. Hence, it ispossible for researchers to use files from the Github scrape that are not used in the benchmark astraining data, though we cannot at this time provide strong guarantees on similarity between suchtraining problems and the benchmark problems.We also see opportunities for LLM-related innovation on the algorithmic side: out-of-the-box LLMsprovide a floor but not a ceiling for possible performance on this benchmark. For example, fine-tuningor search-based inference-time algorithms might boost models’ performances on this benchmark[28].E.3 The Potential of Better LLM-Powered VerifiersLLMs also have potential to improve formal verification in more profound ways than mentionedabove, when used in combination with other AI tools. For example, they can help automate theidentification of sub-goals and annotations, reducing the search space for automated theorem proversand SAT solvers. A software developer is likely able to specify the high level assurance properties ofa piece of code, but may lack familiarity with the complexities of proof sub-goals and annotations.LLMs offer a way to bridge this gap between software developers and formal verification.Bigger, more general benchmarks can be used to train LLMs to specify sub-goals and annotationsin formats most useful to the presently available provers and solvers. Benchmarks covering broadground, from cryptography, lambda calculus, embedded systems, and avionics, in a variety of widelyused programming languages suitable for verification, will help create LLMs that can take real-worldsoftware, automatically process and serve it to verification tools, and inform the developer in nearreal time about the correctness of the code. The problem is analogous to that solved by existingautomated theorem provers and model checkers in the domain of mathematics. For a survey on theapplication of deep learning to automated theorem proving, see [29].E.4 The Potential of Auto-Verifying Program SynthesisAbove we discussed the challenge of verifying existing pre-programs. Anther potential of LLMs isuse program-synthesis techniques that produce both programs and proofs of their correctness, allat the same time. This makes intuitive sense, since when a human programmer writes code, theytypically have an informal proof in their head for why this code is correct. In other words, in additionto bridging the gap from low level implementation to high level specification in the upward direction,LLMs can offer assistance in generating provably correct low level code from high level specificationsvia program synthesis.Current approaches to program synthesis enable engineers to encode a desired specification in a highlevel language, and then through a (hopefully) verified correct compiler generate correct low levelcode in a language like VHDL [ 30] or Verilog [ 31] for hardware synthesis. Program synthesis islimited by the need for a special purpose language or compiler to be constructed and verified correctin its own right. For example, ReWire, a domain specific language defined as a subset of Haskell[32], was manually verified correct using the Coq Interactive Theorem Prover. In order to add a newhigh-to-low path, a new language or compiler will need to be defined and verified. If an engineerneeds to synthesize correct Verilog rather than VHDL, they would likely need to first learn Caisson[33].LLMs offer a way to generalize this approach. Starting with a high level language, an engineermight be able to specify a system and then leverage a LLM to generate low level code with the11corresponding loop invariants, weakest pre-conditions, strongest post-conditions, etc, included. Earlyresults indicate that an LLM that is able to converse with a human when producing a program canreduce the error rate against a simple programming benchmark by half [ 20]. If instead of receivingfeedback from a human, the LLM were to interact with a suite of formal verification tools, we expectfurther improvements. The LLM should be capable of generating code that is appropriately annotatedfor theorem proving, which is exactly the skill assessed by test benches like that described here.F The Minhash Deduplication AlgorithmWe can think about deduplicating a set of files by finding groups of “similar”files and then choosingonly one file representative from each group to form our final deduplicated set of files. To do this, wecan use the Jaccard similarity metric to decide whether one document is a duplicate of another.The Jaccard similarity metric provides a way to quantify the similarity of two sets. It is defined as[34]:J(A, B) =|A∩B||A∪B|.In the application to code files, we could consider each file to be a set of n-grams, where an n-gramis defined as a sequence of nadjacent symbols in a particular order [ 35], and then apply the Jaccardscore as a similarity metric for our files. To directly calculate this Jaccard score, we would needto run string comparison on every n-gram, which would have time complexity Onm2if wehaven n-grams each with max length mcharacters. This turns out to be an inefficient method forrepresenting each code file as a set. Instead, the minhash deduplication algorithm approximates theJaccard similarity between two documents by shingling the documents and comparing the minhashrepresentation of each set of shingles (i.e. we compare fingerprints of documents instead of fulldocuments). The minhash representation of a document is a way to represent a text document as a setof numbers that is faithful to the structure of its content but with a fixed set size that is smaller thanthe total number of n-grams in the document (i.e. the minhash representation of the document is aform of numerical fingerprint of the document). In Figure 6 below, we provide the pseudocode forthe minhash algorithm used, based entirely on the script in [23]:Note that the probability two files have the same min hash value under the same hash function isequivalent to their Jaccard similarity. Concretely, for file Aand file B:Pr[ min hi(A) = min hi(B) ] = J(A, B)where minhi()denotes taking the minimum hash value under hash function hi. This makes sensebecause, assuming negligible hash collision, Pr [minhi(A) = min hi(B)]is equivalent to the proba-bility that the first n-gram hash of Aunder hiis equal to the first n-gram hash of Bunder hi. Ifhiisa good hash function, then it uniformly distributes the hash values of the original n-gram hashes overthe range of hi. Letcdenote the number of n-grams with equivalent hashes; let adenote the numberofn-grams from Awith smaller hash values than the hash value of corresponding n-gram from B; letbdenote the reverse of the previous category. Then, Pr [minhi(A) = min hi(B)] =ca+b+c, giventhe uniformity of h1. Note thatca+b+c=|A∩B||A∪B|=J(A, B).G Repositories of Scraped Dafny CodeWe provide a full list of all repositories whose data we used in the scraped portion of DafnyBench inTables 5, 6, 7. When reporting the license information, "Renamed so N/A" implies that the originalrepository we scraped in December 2023 no longer exists under that name. Otherwise, the repositorieshave either Microsoft open-source licenses, MIT licenses, GNU General Public License v3.0 licenses,Creative Commons Zero v1.0 Universal, Apache 2.0 licenses, or "Other" (which is secretly an MITLicense in a strange format, which has been checked manually). In light of this, we release ourderivative DafnyBench repository under an Apache 2.0 license and a GNU General Public Licensev3.0. We note explicitly here that all files from repositories with the Apache 2.0 license have beenmodified from their original form.12function minhash_deduplication ( documents , num_permutations , threshold ):# Preprocess the documentsfor each document in documents :tokenize the document into n- grams ( shingles )hash each n- gram using a hash function (e.g., xxHash or SHA -1)store the hashed n- grams in a set# Generate permutationsfor i from 1 to num_permutations :generate random coefficients a and bcreate a permutation function : (a * x + b) % prime_modulus# Create minhash signaturessignatures = []for each document in documents :signature = []for each permutation function :min_hash = INFINITYfor each hashed n- gram in the document :permuted_hash = apply permutation function to hashed n- grammin_hash = min( min_hash , permuted_hash )append min_hash to signatureappend signature to signatures# Perform Locality - Sensitive Hashing (LSH )# We use 250 permutations , so to achieve Jaccard similaritythreshold of 0.5# We really only need one band (i.e. one hash table )num_bands = choose number of bandsrows_per_band = num_permutations / num_bandscandidate_pairs = []for each band :create an empty hash tablefor each document signature :band_signature = subset of signature for the current bandhash_bucket = hash ( band_signature )add document to the corresponding hash bucketfor each hash bucket :if number of documents in the bucket > 1:generate all pairs of documents in the bucketadd pairs to candidate_pairs# Use a union - find datastructure to track groups of duplicatesduplicates = UnionFind ()for each band :for each row in hashtable :for each hash_bucket :if size ( hash_bucket ) <= 1:continueelse :cluster_id = min( hash_bucket )for x in hash_bucket :duplicates . union (x, cluster_id )# Perform deduplicationdeduplicated_documents = []for each document in documents :if duplicates . find_root ( document ) = document :add document to deduplicated_documentsreturn deduplicated_documentsFigure 6: Pseudocode for the minhash deduplication algorithm.13Table 5: Repositories from which DafnyBench utilizes scraped code (no particular order).Repository Name Licensedafl No license providedDafny-Grind75 No license providedfeup-mfes MIT LicenseDafny GNU General Public License v3.0nitwit MIT LicenseDafny-experiences No license providedFormal_Verification_With_Dafny No license providedSENG2011 No license providedM2 No license providedassertive-programming-assignment-1 No license providedt1_MF No license provideddafny-exercise Otherdafny-learn No license providedsoftware-specification-p1 No license providedFMSE-2022-2023 The Unlicensefv2020-tms No license providedtype-definition No license providedlaboratory No license provideddafny GNU General Public License v3.0TFG GNU General Public License v3.0SiLemma MIT Licensedafny-training No license providedFormalMethods No license provideddafny_misc MIT Licensevmware-verification-2023 No license providedCSU55004—Formal-Verification No license providedMIEIC_mfes MIT LicenseDafny-programs No license providedMFES_2021 MIT LicenseDafnyPrograms No license providedcs357 No license providedformal-methods-in-software-engineering No license providedDafny_ProgrammingLanguages No license providedCSC8204-Dafny No license providedBPTree-verif No license providedtangent-finder No license providedTrab1-Metodos-Formais No license providedverified-using-dafny MIT LicenseMetodos_Formais No license providedlets-prove-blocking-queue Creative Commons Zero v1.0 UniversalDafny_Programs No license provideddafny-workout MIT LicenseDafny-Projects No license providedVerifiedMergeSortDafny No license provideddafny_projects No license providedpucrs-metodos-formais-t1 No license providedspecTesting No license providedQS_BoilerPlate1 No license provideddafny-sandbox No license providedFormal-Verification No license provideddafny-duck No license providedFlexWeek No license provided703FinalProject No license provided14Table 6: Repositories from which DafnyBench utilizes scraped code (no particular order), continued.Repository Name LicenseMFS No license provideddafny-mini-project No license providedSoftware-Verification No license providedcircular-queue-implemetation No license providedFinal-Project-Dafny No license providedDafnyProjects No license providedbbfny No license providedFormal-methods-of-software-development No license providedSoftware-building-and-verification-Projects No license providedsoftware_analysis No license providedcs245-verification No license provideddafny-aoc-2019 No license providedProjectosCVS No license providedMFDS MIT LicensegroupTheory No license provideddafny-language-server OtherInvoker Apache License 2.0formal-verification No license provideddafny-programs No license providedironsync-osdi2023 Otherverified-isort No license providedpaxos_proof No license providedse2011 No license providedDafny_Verify No license providedFormal-Methods-Project No license provided630-dafny No license provideddafny_examples MIT LicenseWorkshop No license providedDafny-Practice MIT LicenseCVS-handout1 No license providedCS494-final-project No license providediron-sync Otherstunning-palm-tree Creative Commons Zero v1.0 Universalsat_dfy No license providedverification-class MIT LicenseAssertivePrograming No license providedDafny-VMC MIT Licenselibraries Othercmsc433 No license providedCorrectness No license providedCVS-Projto1 No license provideddafleet MIT Licensedafny-rope MIT Licenseprotocol-verification-fa2023 No license providedvfag No license providedDafny_Learning_Experience Apache License 2.0summer-school-2020 No license providedBinarySearchTree Renamed so N/Allm-verified-eval MIT LicenseProgrammverifikation-und-synthese Renamed so N/AProg-Fun-Solutions Renamed so N/ACO3408-Advanced-Software-Modelling-Assignment... Renamed so N/A15Table 7: Repositories from which DafnyBench utilizes scraped code (no particular order), continued.Repository Name LicenseDafnyExercises No license providedtest-generation-examples No license providedHATRA-2022-Paper No license providedveri-sparse No license providedFormal-Verification-Project No license providedformal_verication_dafny No license providedSimulink-To_dafny No license provideddafny_experiments No license providedcs686 No license providedProgram-Verification-Dataset MIT LicenseDafny-demo No license provideddafny-exercises No license providedmetodosFormais No license providedCS5232_Project No license providedDafny-Exercises No license providedH Dafny Verification ExamplesWe take one example test program from DafnyBench, and consider four possible results for thecorresponding LLM-reconstructed program: successfully verifies, fails to verify, cheats by includingassume false , and cheats by including {:verify false} . The last three cases are all considereda fail by the DafnyBench evaluation metric.H.1 Successful ExampleFigure 7 shows a Dafny program that is considered to have successfully verified without cheating.Dafny verifier message : Dafny program verifier finished with 3 verified, 0 errors.H.2 Failed ExampleFigure 8 shows a Dafny program that fails to be verified.Dafny verifier message : (20,11): Error: index out of range. (30,4): Error: a postcondition could notbe proved on this return path. (11,28): Related location: this is the postcondition that could not beproved. Dafny program verifier finished with 2 verified, 2 errors.H.3 Cheat ExampleFigure 9 shows that a Dafny program cheats by including assume false , which DafnyBenchevaluation would count as a fail.Dafny verifier message : Dafny program verifier finished with 3 verified, 0 errors.H.4 Another Cheat ExampleFigure 10 shows that another Dafny program cheats by including {:verify false} , which Dafny-Bench evaluation would count as a fail.Dafny verifier message : Dafny program verifier finished with 3 verified, 0 errors.16function sorted (a: array <int >) : boolreads a{forall i,j : int :: 0 <= i < j < a. Length ==> a[i] <= a[j]}method BinarySearch (a: array <int >, x: int) returns ( index : int)requires sorted (a)ensures 0 <= index < a. Length ==> a[ index ] == xensures index == -1 ==> forall i : int :: 0 <= i < a. Length ==> a[i] != x{var low := 0;var high := a. Length - 1;var mid := 0;while (low <= high )invariant 0 <= low <= high + 1 <= a. Lengthinvariant x !in a[.. low ] && x !in a[ high + 1..]{mid := ( high + low) / 2;if a[mid] < x {low := mid + 1;}else if a[mid ] > x {high := mid - 1;}else {return mid;}}return -1;}Figure 7: An example response that successfully fills annotations back in and verifies withoutcheating.I Overdetailed SpecificationFigures 11 and 12 show two example programs update_array_strong.dfy andtriple_strong.dfy from the Clover benchmark [ 10], in which the formal specificationclosely echoes the program implementation.J Ethics StatementIn creating DafnyBench, we took care to use only data that was publicly available on GitHub,and we reference every repository from which we acquired this data, along with their licenses, inAppendix G. Furthermore, we cite the existing verifiable programming benchmarks that we subsumein DafnyBench (i.e. Clover [10] and dafny-synthesis [11]), and we asked explicit permission fromtheir authors in order to do so. Finally, we cite all models that were used for evaluations on thisbenchmark [ 24,25,2,26]. We used these models in accordance with the policies set forth in theirAPI and model card documentation.K Reproducibility StatementOur benchmark contains the 782 ground_truth programs and the correspondingverification_conditions_removed programs. Additionally, we include full metadataon all of these files and the evaluation scripts necessary for running the listed models on them. Byusing the OpenAI and Anthropic APIs, others looking to reproduce this work should not expect to17function sorted (a: array <int >) : boolreads a{forall i,j : int :: 0 <= i < j < a. Length ==> a[i] <= a[j]}method BinarySearch (a: array <int >, x: int) returns ( index : int)requires sorted (a)ensures 0 <= index < a. Length ==> a[ index ] == xensures index == -1 ==> forall i : int :: 0 <= i < a. Length ==> a[i] != x{var low := 0;var high := a. Length - 1;var mid := 0;while (low <= high ){mid := ( high + low) / 2;if a[mid] < x {low := mid + 1;}else if a[mid ] > x {high := mid - 1;}else {return mid;}}return -1;}Figure 8: An example response that fails to be verified, as it lacks necessary invariant statements.spend more than $300 for a full run of GPT4-o on DafnyBench, $300 for a full run of Claude3 onDafnyBench, $500 for a full run of GPT4-turbo on DafnyBench, and $400 for a full run of GPT-3.5on DafnyBench. We used the sglang package [ 36] to efficiently query the models. All evaluationswere completed on a Linux cluster with an A100 Nvidia GPU.18function sorted (a: array <int >) : boolreads a{forall i,j : int :: 0 <= i < j < a. Length ==> a[i] <= a[j]}method BinarySearch (a: array <int >, x: int) returns ( index : int)requires sorted (a)ensures 0 <= index < a. Length ==> a[ index ] == xensures index == -1 ==> forall i : int :: 0 <= i < a. Length ==> a[i] != x{assume false ;var low := 0;var high := a. Length - 1;var mid := 0;while (low <= high ){mid := ( high + low) / 2;if a[mid] < x {low := mid + 1;}else if a[mid ] > x {high := mid - 1;}else {return mid;}}return -1;}Figure 9: An example response that cheats by including assume false .19function sorted (a: array <int >) : boolreads a{forall i,j : int :: 0 <= i < j < a. Length ==> a[i] <= a[j]}method {: verify false } BinarySearch (a: array <int >, x: int) returns (index : int)requires sorted (a)ensures 0 <= index < a. Length ==> a[ index ] == xensures index == -1 ==> forall i : int :: 0 <= i < a. Length ==> a[i] != x{var low := 0;var high := a. Length - 1;var mid := 0;while (low <= high ){mid := ( high + low) / 2;if a[mid] < x {low := mid + 1;}else if a[mid ] > x {high := mid - 1;}else {return mid;}}return -1;}Figure 10: An example response that cheats by including {:verify false} .method UpdateElements (a: array <int >)requires a. Length >= 8modifies aensures old(a [4]) +3 == a[4]ensures a [7]==516ensures forall i::0 <= i<a. Length ==> i != 7 && i != 4 ==> a[i] ==old (a[i]){a[4] := a [4] + 3;a[7] := 516;}Figure 11: An example program update_array_strong.dfy from the Clover benchmark [ 10], inwhich the formal specification closely echoes the program implementation.method Triple (x:int) returns (r:int)ensures r ==3* x{r:= x*3;}Figure 12: Another example program triple_strong.dfy from the Clover benchmark [ 10], inwhich the formal specification closely echoes the program implementation.20 |
PEdOdntGJG | Looped Transformers for Length GeneralizationYing Fan1, Yilun Du2, Kannan Ramchandran3, Kangwook Lee11University of Wisconsin-Madison2Massachusetts Institute of Technology3UC BerkeleyAbstractRecent work has shown that Transformers trained from scratch can successfullysolve various arithmetic and algorithmic tasks, such as adding numbers and comput-ing parity. While these Transformers generalize well on unseen inputs of the samelength, they struggle with length generalization, i.e., handling inputs of unseenlengths. In this work, we demonstrate that looped Transformers with an adaptivenumber of steps significantly improve length generalization. We focus on tasks witha known iterative solution, involving multiple iterations of a RASP-L operation—alength-generalizable operation that can be expressed by a finite-sized Transformer.We train looped Transformers using our proposed learning algorithm and observethat they learn highly length-generalizable solutions for various tasks.1 IntroductionMost algorithmic tasks such as coding, writing mathematical proofs, and reasoning are defined withinputs of variable length . The length of an input often correlates with the difficulty of the probleminstance. For example, the longer the input, the more difficult the problem tends to be. We say amodel perfectly length-generalizes if it can solve an algorithmic task on inputs of any length, even ifit was only trained on data with inputs up to a finite length [ 2]. Generally, it is hard to expect modelsto be trained on inputs with all possible lengths, and we need to rely on length generalization. Also, ifa model can length-generalize, it means the model has truly learned the correct algorithmic solutionto the task, not just a spurious solution that works only for a certain range of input lengths.Recently, many works on Large Language Models (LLMs) have shown that we can get more powerfulAI models by scaling both compute and data at training time. This scaling approach has indeedsucceeded in improving accuracies on various benchmarks. However, even the largest and latestLLMs like [ 1] which have been trained on much of the existing text on the Internet, still struggle withlength generalization [ 35,2,21]. One possible cause is the particular computing model. LLMs arebuilt based mostly on the Transformer architecture [ 32]. While Transformers can accept a variablelength of inputs (that can be processed in parallel), they usually have a fixed depth. This might besufficient for certain tasks, but not always.To learn a model that can effectively generalize to longer problems, it is important to considerarchitectures that can adaptively adjust the computational budget to the difficulty of the tasks [ 2,12,13]. One approach to achieve this is to explicitly generate intermediate output tokens, similar towriting down a scratchpad, which improves LLMs’ capability for solving harder problems[ 25]. Intheory, LLMs may generate more scratchpad tokens representing intermediate computation whensolving a more difficult task, indicating that they can allocate elastic computation according to thelength and difficulty of the given instance. This approach can be learned by explicitly training amodel on data with intermediate computation steps [ 23,9]. Alternatively, it can be achieved viaChain-of-Thought (CoT) reasoning with few-shot examples [ 33] or even in a zero-shot manner [ 20].Notice that these approaches still use fixed-depth models. While these approaches help solve morecomplex reasoning tasks, they are still far from achieving near-perfect length generalization forsimple algorithmic tasks. For instance, Lee et al. applied CoT for arithmetic tasks but observed thatTransformers cannot length generalize even for simple addition tasks [21].38th Conference on Neural Information Processing Systems (NeurIPS 2024).Recently, there has been growing interest in using recurrent architectures for reasoning [ 10,3,5,36]. Unlike standard RNN-type architectures that process different parts of the input sequenceincrementally, one can consider a recurrent architecture that processes the entire input sequencemultiple times. This architecture passes the intermediate processing output to the next iteration’sinput, possibly along with the original input. In particular, if the base model used in each iteration isa Transformer, this model is called a Looped Transformer [36].**##>#Decoder Block OutputInput............Figure 1: Method Overview. During training, wesupervise the output of the model to match thetarget data only after a certain number of stepsof applying the same decoder block, helping themodel learn intermediate steps that can be reusedand can handle input of arbitrary lengths. All greyblocks share the same parameters. Examples arefrom the Copy task with nsymbols. “#” indicatesEOS, “*” indicates ignored output, and “>” indi-cates the end of the query (EOQ).Looped Transformer can naturally break the limita-tion of the fixed depth in the standard Transformerarchitecture: One can adjust the number of loopedsteps based on the computational complexity of theunderlying algorithmic solution . Consider a problemset with the following properties: 1) The problemscan be solved by a loop of one RASP-L [ 37] pro-gram1, i.e., each step in the loop can be performedby a decoder-only Transformer with a fixed depth;2) The number of steps needed in the loop dependson the problem’s complexity, i.e., more difficult prob-lems could potentially require more steps to solve.Under the length generalization scheme, we considerthe number of steps depending on the problem length,and define this problem set as n-RASP-L problems.Forn-RASP-L problems, if we can learn these length-independent steps, we can utilize an adaptive numberof steps to achieve length generalization.Inspired by this observation, we study trainingLooped Transformers models for length generaliza-tion. Specifically, we consider a training setup wherewe do not require any intermediate supervision data(such as reasoning steps or scratchpad). We only as-sume access to end-to-end supervision (input and output) and the number of steps needed. Dependingon the number of steps, we iteratively apply the same decoder block and then decode the final answer;See Figure 1 for illustration. At inference time, the model could either decide when to stop withpredefined stopping criteria or stop when reaching the ground-truth number of steps. Empirically, weshow that looped Transformers with an adaptive number of steps can successfully length-generalizeto longer lengths simply by appropriately adapting the number of loops at inference time, indicatingthat our approach encourages the model to implicitly learn the necessary steps to solve a task.Our contributions can be summarized as follows: (1)We first formally define n-RASP-L problems,and provide examples of n-RASP-L solutions to the Copy, Parity, and Addition tasks (Section 2); (2)We propose to learn n-RASP-L problems with Looped Transformers where we supervise the finalanswer in a step-dependent way, enabling us to use an adaptive number of steps depending on theproblem complexity (Section 3); (3)Empirically, we show that our proposed method outperforms thebaseline approaches in terms of length generalization performance (Section 5). Due to lack of space,we present the background introduction of RASP-L, next-token prediction and full-answer predictionin Section A, related work in Section 4, and full experimental results in Section C in the appendix.2n-RASP-LRASP-L programs [ 37] do not allow loops. If we consider the next-token prediction (NTP) scheme, itmeans that we need to find the same RASP-L program (which can be represented with a fixed-depthdecoder-only Transformer) to predict the next token given any possible prefix in the answer sequence.Such solutions might not always exist for all problems: there is no known RASP-L program foraddition, parity, and copy under the NTP scheme [ 37]. On the other hand, architectures such asthe Looped Transformer have external loops embedded in the architecture which naturally providesadaptive depth. Thus, a natural question is: what kind of algorithmic tasks can we represent with adecoder-only Transformer in a loop? Specifically, what if we also allow the number of iterations to1Here we consider a more general way to loop, i.e., predicting all missing tokens at the end of the loop, notnecessarily in the way of predicting the single next token at a time. See more discussions in Section A.2.2explicitly depend on the input length, say n? Moreover, what if we are not constrained by the NTPscheme, but a more general full-answer prediction (FAP) scheme?Inspired by these questions, we define the following class of algorithmic tasks:Definition 2.1 (n-RASP-L) .A program Pis called an n-RASP-L program if (1) there exist T(n) :N→N, and (2) Pcan be decomposed to a sequential application of P′forT(n)steps.We show that n-digit addition, n-bit parity, copying nsymbols indeed have n-RASP-L solutions. Forthe parity task, P′parity is to shift the input sequence to the right by 1 and calculate XOR of the answersequence and the input sequence; For the copy task, P′copyis to shift the input sequence to the right by1; For the addition task P′addition is to calculate the XOR of two sequences and shift the results to theright by 1 position as the partial answer, and calculate the AND of two sequences as the carry-onsequence2. See Figure 4, Proposition B.1, B.2, B.3 and Listings 1, 2, 3 for details.3 Learning n-RASP-L problems with looped TransformersWe present a novel framework for length generalization: In the absence of ground truth CoTdata/intermediate output, we propose to leverage the inherent structure of the problem with thehelp of “knowing when to stop”. We present the setup for training data in Section 3.1, the modelarchitecture and training algorithm in Section 3.2, and the inference algorithm in Section 3.3.3.1 End-to-end supervised data without intermediate step supervisionWe consider the following settings: (1) There exists an n-RASP-L program that solves the giventask. (2) Training data consists only of (x, y)pairs, but not intermediate steps. That is, we donot have access to P′(x), P′(P′(x)), . . .. (3)T(n), i.e., the ground truth number of iterations tosolve the problem (with some P′) is available in the training data3. (4) The length nis diverselydistributed in the dataset, e.g., n∈ {1, . . . , n max}where nmaxis the maximum number of lengths inthe dataset; The ground truth number of steps needed T(n)is also diversely distributed in the dataset,e.g.,T(n)∈ {1, , . . . , T (nmax)}where T(nmax)is the maximum number of steps in the dataset4.3.2 Looped training with step supervision3.2.1 Architecture of the looped TransformersWe present the model architecture in Figure 1 with following key characteristics. (1) Recurrence:The Looped Transformer is recurrent (like [ 15] but with decoder-only structure): We reuse the samedecoder block for a number of steps, where each block consists of a certain number of layers. We canadjust the number of looped steps at will. (2) Input injection: For each step, the input embeddingsare added to the output embeddings of the previous step as the input of the current step, preventinginformation loss with improved performance [ 3,36]. (3) Positional embedding : There is no positionalencoding in the RASP-L operations [ 37]. To follow our n-RASP-L assumption and test the effect ofthe looped training, we use NoPE [19] to avoid the impact from different positional embeddings5.3.2.2 TrainingGiven a dataset D={({(xl)Lil=1}i,{(yl)Lil=1}i, Ti, Li)}Ni=1, where {({(xl)Lil=1}iis the input with Litokens, {(yl)Lil=1}iis the output with Litokens, and Tiis ground truth number of steps of sample i.We aim to learn the transformer model Mθ6by minimizing the following loss:ED[LfTi(Mθ,{({(xl)Lil=1}i),{(yl)Lil=1}i], (1)whereLis the cross entropy loss and fTi(Mθ,{({(xl)Lil=1}i) =Mθ(Mθ(· · ·Mθ|{z }Tiiterations({({(xl)Lil=1}i))).2Here we omit the pre-processing and post-processing steps like handling EOS (“#”) and EOQ (“>”) tokenswhich can be done by fixed-depth attention layers outside of the loop.3This assumption is to provide supervision for when to stop during training; for inference, we can either usethe ground truth steps or leverage the confidence of the output as a stopping criterion (see Section 3.3 for details.)4The length of the problem is not necessarily the same as the actual length of the input due to EOS and EOQtokens; see Section C.1.1 for the definition of the length of the specific tasks.5NoPE is shown to inherently learn to use relative positional embeddings in practice [19].6Mθonly handles the embedding space and we use greedy decoding to get the decoded output.320 30 40 50T est Length0.00.20.40.60.81.0AccuracyParityFAP-Loop-Adaptive (Ours)NTPNTP-PauseNTP-Loop20 22 24 26 28 30T est Length0.00.20.40.60.81.0AccuracyAdditionFAP-Loop-Adaptive (Ours)NTPNTP-PauseNTP-Loop2022242628303234T est Length0.00.20.40.60.81.0AccuracyCopyFAP-Loop-Adaptive (Ours)NTPNTP-PauseNTP-Loop11 12 13 14 15 16T est Length0.00.20.40.60.81.0AccuracyMultiplicationFAP-Loop-Adaptive (Ours)NTPNTP-PauseNTP-Loop20 22 24 26 28 30T est Length0.00.20.40.60.81.0AccuracyBinary SumFAP-Loop-Adaptive (Ours)NTPNTP-PauseNTP-Loop2022242628303234T est Length0.00.20.40.60.81.0AccuracyUnique SetFAP-Loop-Adaptive (Ours)NTPNTP-PauseNTP-LoopFigure 2: Length Generalization Performance. Our looped Transformer model with adaptive depth generalizedbetter than NTP methods across studied tasks, including the variants with pause tokens and weight-tied layers.The vertical dashed line indicates the maximum training length. NTP indicates vanilla next-token prediction;NTP-Pause indicates next-token prediction with pause tokens; NTP-Loop indicates next-token prediction with afixed number of weight-tied layers.3.3 Adaptive inferenceLooped Transformers can use adaptive depth at inference time, so we need certain rules to decidewhen to stop. We consider two rules: 1) Oracle : We can assume that the number of steps needed isgiven; 2) Maximum confidence : We can use confidence base rules to decide when to stop, i.e., stopwhen we are confident about the output at the current step. More specifically, for 2), given a testsequence {({(xl)Ll=1}and a trained model Mθ, we can get the number of steps Tfrom Equation (2):T= arg maxt∈[1,Tmax]Lft(Mθ,{({(xl)Ll=1}),{( ˆyl)Ll=1}t, (2)where{( ˆyl)Ll=1}tis the decoded sequence from ft(Mθ,{({(xl)Ll=1})at step t,Tmaxis a threshold forthe maximum number of steps.4 Related workPositional embedding for length generalization. Positional embeddings have been shown to greatlyaffect Transformers’ ability to generalize to longer lengths [ 27,19,29,31,22,7,30,24,16]. Bydesigning positional embedding schemes that better capture relative positional information withtechniques such as randomization and functional representations, researchers have made significantprogress in improving length generalization. Especially, [ 7] and [ 24] use tailor-made positionalembeddings for some arithmetic problems without potential generality.7This direction is orthogonalto our work since there is no positional encoding in RASP-L operations. We choose no positionalembedding in our experiments, but other positional embeddings could further be synergisticallyapplied with our approach. However, they might not be expressed as RASP-L operations. We leavefurther investigation with different positional embeddings to future work.RNNs and Chomsky Hierarchy. [11] conduct an extensive empirical study to investigate the limitsof the generalization performance of different neural network structures, demonstrate that groupingtasks according to the Chomsky hierarchy allows forecasting whether certain architectures will beable to generalize to out-of-distribution inputs. Their results show that RNNs and Transformers failto generalize on non-regular tasks, LSTMs can solve regular and counter-language tasks, and onlynetworks augmented with structured memory (such as a stack or memory tape) can successfullygeneralize on some context-free and context-sensitive tasks. In our paper, the Looped Transformer7In [24], they show that models with weight-tied layers (but with a fixed depth) can improve the generalizationability when comparing with the variants of the same positional embedding, but they do not find adaptive depthsto be helpful since they do not perform the step-specific training as our method, while the key to our method isto use models with adaptive depths. To also compare with this baseline, we add NTP-Loop in Section C.1.2.4Method Encoder/Decoder Prediction Type PE Input Injection Halting MechanismUT Both NTP Yes No ACT [6]PonderNet Both NTP Yes No Halting nodeOurs Decoder-only FAP No Yes Confidence based or predefinedTable 1: Comparison between UT, PonderNet, and ours. PE is short for “Positional Embeddings”.architecture also has augmented memory and the recurrent structure but is potentially more powerfulsince each iteration contains an operation of the whole sequence.Universal Transformers and other looped models. Our method is highly inspired by UniversalTransformers (UT) [ 10], but we introduce several novel modifications to design looped Transformersthat are compatible with our n-RASP-L assumption. One major architectural innovation is theuse of FAP, while all the other prior works are based on NTP. We also only use decoder-onlyTransformers, which is different from UT and the follow-up work PonderNet [ 4], which use bothencoder and decoder Transformers. In addition to these two critical differences, we do not use anypositional encoding, and use a simpler halting mechanism. Moreover, we find input injection usefulto further improve the performance (see details in Section C.3). Table 1 summarizes the differencesbetween ours and the previous approaches. Besides architectural differences, we are also the first toshow the benefit of using step-dependent supervision for training looped Transformers. Apart fromTransformers, [ 5] study learning recurrent networks to generalize to harder maze problems than seenduring training, but with a focus on CNNs.Input representation. Recall that adding two numbers of length ncould not be solved by a RASP-Lprogram where the difficulty mainly comes from indexing operations [ 37]. It could be solved byreformatting the input so that each digit is presented to the model with “index hints” in [ 37]. Suchreformatting enables a simple RASP-L program for addition. Similarly, representing the answer inreversed order also helps because the corresponding RASP-L program gets much shorter, providing aconcrete justification of the empirical observation made in [ 21]. However, such input representationsare highly dependent on the specific problems and might not necessarily exist in general.COT. Scratchpad or CoT reasoning [ 25,23,9,33,18] is also useful for length generalization as itcould simplify the next-token prediction task with intermediate results presented to the input layer.There are also potential drawbacks and limitations to CoT reasoning. First, CoT training data couldbe hard to collect. Training and inference with pause tokens [ 17] has been proposed to learn implicitCoT steps without CoT data, but pause tokens only increase horizontal compute, not sequentialcompute. Second, not all CoT steps are helpful. If CoT steps introduce additional complexity orrequire operations not easily expressible in RASP-L, then CoT may hinder length generalization, asshown in [ 37]. Moreover, CoT steps that could convert the next token prediction task to RASP-Lprograms might not always exist. Besides, CoT is normally constrained to fixed-depth models, whilewe study a more general and powerful way to use adaptive compute at inference time.5 ExperimentsIn this section, we evaluate the efficacy of looped Transformers. We consider tasks with n-RASP-Lsolutions presented in Section 2: Parity, Copy, and Addition, together with more tasks like calculatingthe sum, multiplication, and calculating the unique set. Due to lack of space, we introduce thedetailed experimental setup in Section C.1, present length generalization results in Section C.2,ablation studies in Section C.3, and visualize the stopping criterion in Section C.4 in the appendix.Looped Transformers help with length generalization. As shown in Figure 2, our looped modelsignificantly improves the length generalization performance. For example, for Parity, it can generalizeto more than 50 digits near perfectly8when only trained with up to 20 digits. Moreover, for tasks likeaddition and copy, where the next token prediction failed when tested on maximum training length+10, our looped model can still perform nearly perfectly. All of the models are only trained with arelatively small number of lengths, and the looped model generalizes surprisingly well.Variants of NTP could improve generalization but not as effectively as our adaptive-depth model.Compared with vanilla NTP, we observe that NTP-Loop could lead to improved generalization in taskslike Addition, Copy and Multiplication. Similarly, NTP-pause could introduce slight improvement inParity and Unique Set. However, they all fall behind compared with our method. Besides, NTP-Loopsuffers from lower in-distribution accuracy in Parity, possibly because using a fixed-depth model withweight-tied layers for NTP with all lengths might be too constrained for the task.8It still maintains accuracy higher than 0.95 when tested with 100 digits, which is not included in the graph.5References[1]Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia LeoniAleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4technical report. arXiv preprint arXiv:2303.08774 , 2023.[2]Cem Anil, Yuhuai Wu, Anders Andreassen, Aitor Lewkowycz, Vedant Misra, Vinay Ramasesh,Ambrose Slone, Guy Gur-Ari, Ethan Dyer, and Behnam Neyshabur. Exploring length gen-eralization in large language models. Advances in Neural Information Processing Systems ,35:38546–38556, 2022.[3]Shaojie Bai, J Zico Kolter, and Vladlen Koltun. Deep equilibrium models. Advances in neuralinformation processing systems , 32, 2019.[4]Andrea Banino, Jan Balaguer, and Charles Blundell. Pondernet: Learning to ponder. In 8thICML Workshop on Automated Machine Learning (AutoML) , 2021.[5]Arpit Bansal, Avi Schwarzschild, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum,and Tom Goldstein. End-to-end algorithm synthesis with recurrent networks: Extrapolationwithout overthinking. Advances in Neural Information Processing Systems , 35:20232–20242,2022.[6]Tolga Bolukbasi, Joseph Wang, Ofer Dekel, and Venkatesh Saligrama. Adaptive neural networksfor efficient inference. In International Conference on Machine Learning , pages 527–536.PMLR, 2017.[7]Hanseul Cho, Jaeyoung Cha, Pranjal Awasthi, Srinadh Bhojanapalli, Anupam Gupta, andChulhee Yun. Position coupling: Leveraging task structure for improved length generalizationof transformers. arXiv preprint arXiv:2405.20671 , 2024.[8]Kevin Clark, Paul Vicol, Kevin Swersky, and David J Fleet. Directly fine-tuning diffusionmodels on differentiable rewards. arXiv preprint arXiv:2309.17400 , 2023.[9]Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers tosolve math word problems, 2021. URL https://arxiv. org/abs/2110.14168 , 2021.[10] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. Uni-versal transformers. arXiv preprint arXiv:1807.03819 , 2018.[11] Grégoire Delétang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, Li Kevin Wenliang, ElliotCatt, Chris Cundy, Marcus Hutter, Shane Legg, Joel Veness, et al. Neural networks and thechomsky hierarchy. arXiv preprint arXiv:2207.02098 , 2022.[12] Yilun Du, Shuang Li, Joshua Tenenbaum, and Igor Mordatch. Learning iterative reasoningthrough energy minimization. In International Conference on Machine Learning , pages 5570–5582. PMLR, 2022.[13] Yilun Du, Jiayuan Mao, and Joshua B Tenenbaum. Learning iterative reasoning through energydiffusion. In Forty-first International Conference on Machine Learning , 2024.[14] Shivam Garg, Dimitris Tsipras, Percy S Liang, and Gregory Valiant. What can transformerslearn in-context? a case study of simple function classes. Advances in Neural InformationProcessing Systems , 35:30583–30598, 2022.[15] Angeliki Giannou, Shashank Rajput, Jy-yong Sohn, Kangwook Lee, Jason D Lee, and DimitrisPapailiopoulos. Looped transformers as programmable computers. In International Conferenceon Machine Learning , pages 11398–11442. PMLR, 2023.[16] Olga Golovneva, Tianlu Wang, Jason Weston, and Sainbayar Sukhbaatar. Contextual positionencoding: Learning to count what’s important. arXiv preprint arXiv:2405.18719 , 2024.[17] Sachin Goyal, Ziwei Ji, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar, andVaishnavh Nagarajan. Think before you speak: Training language models with pause tokens.arXiv preprint arXiv:2310.02226 , 2023.[18] Kaiying Hou, David Brandfonbrener, Sham Kakade, Samy Jelassi, and Eran Malach. Universallength generalization with turing programs. arXiv preprint arXiv:2407.03310 , 2024.[19] Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and SivaReddy. The impact of positional encoding on length generalization in transformers. Advancesin Neural Information Processing Systems , 36, 2024.6[20] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Largelanguage models are zero-shot reasoners. Advances in neural information processing systems ,35:22199–22213, 2022.[21] Nayoung Lee, Kartik Sreenivasan, Jason D. Lee, Kangwook Lee, and Dimitris Papailiopoulos.Teaching arithmetic to small transformers. In The Twelfth International Conference on LearningRepresentations , 2024.[22] Shanda Li, Chong You, Guru Guruganesh, Joshua Ainslie, Santiago Ontanon, Manzil Zaheer,Sumit Sanghai, Yiming Yang, Sanjiv Kumar, and Srinadh Bhojanapalli. Functional interpolationfor relative positions improves long context transformers. arXiv preprint arXiv:2310.04418 ,2023.[23] Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by ratio-nale generation: Learning to solve and explain algebraic word problems. arXiv preprintarXiv:1705.04146 , 2017.[24] Sean McLeish, Arpit Bansal, Alex Stein, Neel Jain, John Kirchenbauer, Brian R Bartoldson,Bhavya Kailkhura, Abhinav Bhatele, Jonas Geiping, Avi Schwarzschild, et al. Transformerscan do arithmetic with the right embeddings. arXiv preprint arXiv:2405.17399 , 2024.[25] Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin,David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Showyour work: Scratchpads for intermediate computation with language models. arXiv preprintarXiv:2112.00114 , 2021.[26] Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, TomHenighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. In-context learningand induction heads. arXiv preprint arXiv:2209.11895 , 2022.[27] Ofir Press, Noah A Smith, and Mike Lewis. Train short, test long: Attention with linear biasesenables input length extrapolation. arXiv preprint arXiv:2108.12409 , 2021.[28] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.Language models are unsupervised multitask learners. OpenAI blog , 1(8):9, 2019.[29] Anian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, MehdiBennani, Shane Legg, and Joel Veness. Randomized positional encodings boost length general-ization of transformers. arXiv preprint arXiv:2305.16843 , 2023.[30] Mahdi Sabbaghi, George Pappas, Hamed Hassani, and Surbhi Goel. Explicitly encod-ing structural symmetry is key to length generalization in arithmetic tasks. arXiv preprintarXiv:2406.01895 , 2024.[31] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. Roformer:Enhanced transformer with rotary position embedding. Neurocomputing , 568:127063, 2024.[32] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural informationprocessing systems , 30, 2017.[33] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le,Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.Advances in neural information processing systems , 35:24824–24837, 2022.[34] Gail Weiss, Yoav Goldberg, and Eran Yahav. Thinking like transformers. In InternationalConference on Machine Learning , pages 11080–11090. PMLR, 2021.[35] Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, NajoungKim, Jacob Andreas, and Yoon Kim. Reasoning or reciting? exploring the capabilities andlimitations of language models through counterfactual tasks. arXiv preprint arXiv:2307.02477 ,2023.[36] Liu Yang, Kangwook Lee, Robert D Nowak, and Dimitris Papailiopoulos. Looped transformersare better at learning learning algorithms. In The Twelfth International Conference on LearningRepresentations , 2024.[37] Hattie Zhou, Arwen Bradley, Etai Littwin, Noam Razin, Omid Saremi, Joshua M. Susskind,Samy Bengio, and Preetum Nakkiran. What algorithms can transformers learn? a study inlength generalization. In The Twelfth International Conference on Learning Representations ,2024.7[38] Yongchao Zhou, Uri Alon, Xinyun Chen, Xuezhi Wang, Rishabh Agarwal, and DennyZhou. Transformers can achieve length generalization but not robustly. arXiv preprintarXiv:2402.09371 , 2024.8A BackgroundA.1 RASP-LA decoder-only Transformer is a type of Transformer architecture that consists of only the decoderpart of the original Transformer model introduced by [ 32], where a causal mask is applied to theattention weights to prevent the model from attending to future tokens.RASP (Restricted Access Sequence Processing) [ 34] is a computational model for the Transformerarchitecture in the form of a programming language. RASP-L [ 37], where ‘L’ stands for learnable, isa learnable subset of the RASP language. Some key points about RASP-L are:•RASP-L programs accept an input sequence and return an output sequence of the same length foranarbitrary length , like decoder-only Transformers.•The core operations in RASP-L include element-wise operations on sequences and a specific typeof non-elementwise operation called kqv, which simulates a causal attention layer.•RASP-L has restrictions on the allowed operations to ensure learnability: It does not allowarbitrary index arithmetic, and restricts operations on token indices to order comparisons andcomputing successor/predecessor.•RASP-L does not allow control flow statements like branching or loops. Programs must bestraight-line code, with each line being a call to a core function or another RASP-L program.In [37], the authors show that algorithmic tasks that can be written as a RASP-L program can beeasily learned by a Transformer in a length-generalizable way with next-token prediction. Thelength-generalizable tasks include counting, finding the mode, copying the input sequence (consistingof unique tokens), and sorting. However, they also showed that for algorithmic tasks whose RASP-Lprogram representation is not known to exist, such as addition, parity, and copying the input sequence,it is hard to learn in a length-generalizable way. In other words, once the Transformer is trained onin-distribution data up to a particular length, it fails to generalize to unseen lengths.A.2 Next-token prediction and full-answer prediction***011##0 1 -> 011>###OutputInput#TF model***011##0 1 -> 011>011TF modelOutputInput#(a) NTP(b ) F APFigure 3: Visualization of the next-token prediction(NTP) and full-answer prediction (FAP) schemes. “#"indicates EOS, “*" indicates ignored output, and “>"indicates the end of the query (EOQ).Decoder-only Transformers are naturally con-venient for next-token prediction (NTP) whichcould be efficiently trained in parallel. In [ 37],their setup and RASP-L solutions are both con-strained to predicting the single next token: Dur-ing training, the full sequence (both the queryand the answer) is provided as input and the out-put is expected to be the shifted sequence. Dur-ing inference, only the query part is provided,and the model continues to output the next tokenand append the token to the current sequence un-til the output token is EOS. The output locationsbefore the end of query (EOQ) sign are ignored.See (a) in Figure 3 for illustration.On the other hand, we can also consider amore general way of predicting the answer: full-answer prediction (FAP). During both trainingand inference time, the input given is just the query part, and the rest of the locations are filled withmultiple EOS tokens to keep the input and the output to be the same length. The model is supposed tooutput the answer with a shifted location, and the output locations before the EOQ sign are ignored;see (b) in Figure 3. Notice that in FAP, the model is not forced to predict token-by-token as NTP.Instead, the model is expected to predict all missing tokens after all internal processing steps.9***00 1 -> 011>*01100010 1 -> 01>00>1input0 1 -> 11>0 1 -> 0110000000000110 1 -> 001010(a) Cop y(b ) Paritysummand 2summand digit sansw er digit ssummand 1carr y-onpar tial anspar tial ans( c) Addition0000000000000 1 -> 11000000000000001000000#000 1 -> 01000000000000001000000#00carr y-on0 1 -> 01000000000000000000000#00carr y-onans0 1 -> 00000000000000000000000#01inputshift ed inputshift ed inputinit anspar tial ansansansdigit s t o be check eddigit s t o be copiedansw er digit sansw er digit s0Figure 4: Visualization of the n-RASP-L solutions for Copy, Parity, and Addition with n= 2. Copy isimplemented by niterations of shifting; Parity is implemented by niterations of shifting and XOR; Addition isimplemented by n+ 1iterations of shifted XOR and AND; The inputs are preprocessed.Bn-RASP-L programsProposition B.1. (Parity.) There exists a n-RASP-L program with T(n) =nthat solves the n-bitparity check task:x1. . .xn|{z}ntokens> #. . . #|{z}n′tokens, n′≥0⇒ *. . . *|{z}ntokensy#. . . #|{z}n′tokens,where yis the parity check result for the arbitrary binary input sequence {xi}.Proof. See Listing 1 in Appendix B, where the number of steps required in parity_loop isT(n) =nfor the input query with nbits.Proposition B.2. (Copy.) There exists a n-RASP-L program with T(n) =nthat solves the n-symbolcopy task:x1. . .xn|{z}ntokens> #. . . #|{z}n′tokens, n′≥n−1⇒ *. . . *|{z}ntokensx1. . .xn|{z}ntokens#. . . #|{z}n′−n+ 1 tokens,where{xi}is an arbitrary binary input symbols.Proof. See Listing 2 in Appendix B, where the number of steps required in copy_loop isT(n) =nfor the input query with nsymbols.Proposition B.3. (Addition.) There exists a n-RASP-L program with T(n) =n+ 1that solves then-digit addition task:x1. . . xn|{z}ntokens+y1. . . yn|{z}ntokens> #. . . #|{z}n′tokens, n′≥n⇒ *. . . *|{z}2n+ 1 tokensz1. . .zn+1|{z}n+ 1 tokens#. . . #|{z}n′−ntokens,where{xi},{yi}are arbitrary binary summands and {zi}is the result of adding {xi}and{yi}9.Proof. See Listing 3 in Appendix B, where the number of steps required in addition_loop isT(n) =n+ 1for the input summands with ndigits each.# Input example: 1 1 0 1 > # # ## Output example: * * * * 1 # # ##*indicates ignored token, > is EOQ, and # is EOS.def parity_step(partial_ans_seq, seq):# align the last digit with the answer locationseq = shift_right(seq, 1)9For simplicity, we include the leading 0’s to keep the same length of the output for all possible inputs.10# calculate XORpartial_ans_seq = (partial_ans_seq | seq) \& (~(partial_ans_seq & seq))return partial_ans_seq, seqdef parity_loop(seq, num_step):# get the question in the promptprompt_mask = 1-has_seen(seq, full(seq, EOQ))seq = mask(seq, prompt_mask)# init answer seq with 0partial_ans_seq = full(seq, 0)# generate EOS seq after EOQend_seq = where(prompt_mask==1, full(seq, 0), full(seq, EOS))# perform parity stepsfor i in range(num_step):partial_ans_seq, seq = parity_step(partial_ans_seq, seq)# get answer with EOSans_seq = partial_ans_seqend_seq = shift_right(end_seq, 1)ans_seq = where(end_seq == EOS, end_seq, ans_seq)return ans_seqListing 1: Parity.# Input example: 0 1 0 1 1 > # # # # # ## Output example: * * * * * 0 1 0 1 1 # ##*indicates ignored token, > is EOQ, and # is EOS.def copy_step(seq, end_seq):seq = shift_right(seq, 1)end_seq = shift_right(end_seq, 1)return seq, end_seqdef copy_loop(seq, num_step):# generate EOS seq after EOQend_mask = has_seen(seq, full(seq, EOQ))end_seq = where(end_mask==0, full(seq, 0), full(seq, EOS))# perform copy stepsfor i in range(num_step):seq, end_seq = copy_step(seq, end_seq)# get answer with EOSseq = where(end_seq == EOS, end_seq, seq)return seqListing 2: Copy.# Input example: 0 0 1 + 1 1 1 > # # # # # ## Output example: * * * * * * * 1 0 0 0 # # ##*indicates ignored token, > is EOQ, and # is EOS.def addition_step(seq1, seq2, end_seq):end_seq = shift_right(end_seq, 1)seq1 = np.array(seq1, dtype = bool)seq2 = np.array(seq2, dtype = bool)carry_on = seq1 & seq2# A XOR B = (A OR B) AND (NOT (A AND B))in_place = ((seq1 | seq2) & (~(seq1 & seq2)))in_place = shift_right(in_place,1)seq1 = np.array(in_place, dtype = int)seq2 = np.array(carry_on, dtype = int)return seq1, seq2, end_seqdef addition_preprocess(seq):# generate EOS seq after EOQend_mask = has_seen(seq, full(seq, EOQ))11end_seq = where(end_mask==0, full(seq, 0), full(seq, EOS))# generate masks for the first and second summandsseen_tok0 = has_seen(seq, full(seq, ADD_SIGN))seen_tok1 = has_seen(seq, full(seq, EOQ))mask1 = ~seen_tok0mask2 = seen_tok0 & (~seen_tok1)mask2 = mask2 & shift_right(mask2, 1)# get the first and second summandsseq1 = mask(seq, mask1)seq2 = mask(seq, mask2)# align the first summand with the secondinduct_num1 = cumsum(mask1)induct_num2 = cumsum(mask2)target_index = firsts(induct_num1, induct_num2, default = 0)seq1 = index_select(seq1, target_index)seq1 = mask(seq1, mask2)return seq1, seq2, end_seqdef addition_loop(seq, num_step):seq1, seq2, end_seq = addition_preprocess(seq)# perform addition stepsfor i in range(num_step):seq1, seq2, end_seq = addition_step(seq1, seq2, end_seq)# get answer with EOSans = seq1ans = where(end_seq == EOS, end_seq, ans)return ansListing 3: Addition (in forward order).We also present the RASP-L library functions we use in Listing 4, which is partially taken from [ 37].import numpy as npdef full(x, const):return np.full_like(x, const, dtype=int)def indices(x):return np.arange(len(x), dtype=int)def select(k, q, pred, causal=True):# compute attention matrixs = len(k)A = np.zeros((s, s), dtype=bool)for qi in range(s):for kj in (range(qi+1) if causal else range(s)): # k_index <= q_indexif causalA[qi, kj] = pred(k[kj], q[qi])return Adef sel_width(A):return np.dot(A, np.ones(len(A))).astype(int)def aggr_mean(A, v, default=0):out = np.dot(A, v)norm = sel_width(A)out = np.divide(out, norm, out=np.full_like(v, default,dtype=float),where=(norm != 0))return out.astype(int)def kqv(k, q, v, pred, default=0,):return aggr_mean(select(k, q, pred), v, default=default)def shift_right(x, n, default = 0):# shifts sequence x to the right by n positions (other positionsfilled with default)12return kqv(indices(x)+n, indices(x), x, equals, default = default)def where(condition, x_if, y_else):# equivalent to np.where(condition, x_if, y_else)x_masked = seq_map(x_if, condition, lambda x, m: x if m else 0)y_masked = seq_map(y_else, condition, lambda y, m: y if not m else 0)return seq_map(x_masked, y_masked, lambda x, y: x if y == 0 else y)def has_seen(x, queries):return kqv(x, queries, full(x, 1), equals, default=0)def mask(x, bool_mask, mask_val=0):# equivalent to x *bool_mask + default *(~bool_mask)return where(bool_mask, x, full(x, mask_val))Listing 4: Library functions from [37].C Full Experimental ResultsC.1 Experimental setupC.1.1 TasksHere we consider tasks with n-RASP-L solutions presented in Section 2: Parity, Copy, and Addition,together with more tasks like calculating the sum, multiplication, and calculating the unique set.Parity. Checking the parity of the binary string. Example input: 00011>##, exampleoutput: *****0##. We define the length of the problem to be the number of the digits,setT(the number of steps needed) to be the same as the length, and train with length [1,20).Copy (with repeated tokens). Copying the binary string. Example input: 101>####,example output: ***101##. We define the length of the problem to be the number ofthe binary digits to copy, set Tto be the same as the problem length, and train with length [1,20). Ithas been shown that copy with unique tokens could be easily solved by inductive head [ 26], but copywith repeated tokens (e.g., binary) does not length-generalize with vanilla NTP training [37].Binary Addition. Performing binary addition of two binary numbers with the same number ofdigits, and the output has one more digit (without removing leading 0 if it appears). Example input:10+11>###, example output: *****101##. We highlight that we donot reverse the output like recent works [ 21,24,38]. We define the length of the problem to be thenumber of digits to be added, set Tto be the same as the problem length, and train with length [1,20).It has been shown that binary addition without index hint is hard to generalize in vanilla NTP [37].Binary Sum. Calculating the sum of the binary string in the binary form (in reversed order). Exampleinput: 1011>####, example output: ****11###. We define the lengthof the problem to be the number of binary digits to be added, set Tto be the same as the problemlength, and train with length [1,20).Binary Multiplication. Multiplying two binary numbers, while the first number has up to twodigits. The output is in reversed order and the length is the sum of the lengths of two numbers,without removing leading 0. Example input: 11× 110>#####, example output:******010010#. We define the problem length to be the number of the seconddigits, and set Tto be the product of the lengths of two digits, and train with length [1,12).Unique Set. Calculating the unique set with the first occurrence order with an alphabet of 50 tokens.Example input: 142243>#####, example output: ******1423##.We define the length of the problem to be the number of digits to be calculated, set Tto be the sameas problem length, and train with length [1,20).13C.1.2 Baseline methodsVanilla NTP. We use vanilla next-token prediction as one of our baselines, which is referred to as“NTP” in Figure 2. To ensure that the baseline method uses a maximum effective depth comparableto our method during training, we train the transformer model with a depth 20 times the depth of thelooped block in our approach.NTP with pause tokens. Training and inference with pause tokens [17] is a way to implicitly learnimplicit CoT steps without CoT data by enabling extra compute pathways before outputting theanswer in NTP. We use it as a baseline with the same depth as in vanilla NTP which is referred to as“NTP-Pause” in Figure 2. We include a visual illustration of NTP-Pause in Figure 7 in Appendix D.NTP with weight-tied layers. Using weight-tied layers but with a fixed number of overall depths inNTP is also shown to improve the performance in [ 24]. Here we fix the number of looped steps as20, use the same depth as the decoder block of our looped model, and train the model with NTP asanother baseline which is referred to as “NTP-Loop” in Figure 2.C.1.3 Training and evaluation setupFor training, we use a decoder-only Transformer block in GPT-2 architecture [ 28]. We adopt acurriculum learning strategy for all methods that starts from the smallest length and incrementallyincreases the length during training till it reaches the maximum length as in [14].For evaluation, we measure the exact match accuracy for the whole output sequence. For our loopedinference, we test two possible stopping criterion discussed in Section 3.3: 1) Oracle : Adopt the samerule when generating the dataset as the number of steps to perform, 2) Maximum confidence : Run amaximum number of steps, and choose the step using Equation (2)10. We report test results from 1)in Section C.2 and C.3, and we also find 2) to be an effective stopping criterion in Section C.4.Full details of training and evaluation are in Appendix G.C.2 Length generalization resultsWe present the generalization performance on various reasoning tasks in Figure 2.Looped Transformers help with length generalization. Our looped training significantly improvesthe length generalization performance. For example, for Parity, it can generalize to more than 50digits near perfectly11when only trained with up to 20 digits. Moreover, for tasks like additionand copy, where the next token prediction failed when tested on maximum training length +10, ourlooped model can still perform nearly perfectly. All of the models are only trained with a relativelysmall number of lengths, and the looped model generalizes surprisingly well.Variants of NTP could improve generalization but not as effectively as our adaptive-depth model.Compared with vanilla NTP, we observe that NTP-Loop could lead to improved generalization in taskslike Addition, Copy and Multiplication. Similarly, NTP-pause could introduce slight improvement inParity and Unique Set. However, they all fall behind compared with our method. Besides, NTP-Loopsuffers from lower in-distribution accuracy in Parity, possibly because using a fixed-depth model withweight-tied layers for NTP with all lengths might be too constrained for the task.C.3 Ablation studiesIn Section C.2, we compare with NTP baselines while the efficacy of components in our architecturedesign remains unclear. In this section, we compare with FAP variants of our model: “FAP-Loop-Adaptive-WO” indicates our method but without input injection; “FAP-Pause” indicates FAP withpause tokens12; “FAP” indicates vanilla FAP, without weight-tied layers or adaptive depth.Effect of input injection. We observe that the generalization performance with input injection isgenerally better than without it, which aligns with the findings in [ 3,36]. The effect of input injectionis more visible in tasks like Addition, Binary Sum, and Unique Set.10Another option is to set a threshold for the cross-entropy loss and stop when the threshold is first met. Thiswill also succeed if the maximum confidence rule works.11It still maintains accuracy higher than 0.95 when tested with 100 digits, which is not included in the graph.12Visual illustration of FAP-Pause is in Figure 8 in Appendix D.1420 30 40 50T est Length0.00.20.40.60.81.0AccuracyParityFAP-Loop-Adaptive (Ours)FAP-Loop-Adaptive-WOFAP-PauseFAP20 22 24 26 28 30T est Length0.00.20.40.60.81.0AccuracyAdditionFAP-Loop-Adaptive (Ours)FAP-Loop-Adaptive-WOFAP-PauseFAP2022242628303234T est Length0.00.20.40.60.81.0AccuracyCopyFAP-Loop-Adaptive (Ours)FAP-Loop-Adaptive-WOFAP-PauseFAP11 12 13 14 15 16T est Length0.00.20.40.60.81.0AccuracyMultiplicationFAP-Loop-Adaptive (Ours)FAP-Loop-Adaptive-WOFAP-PauseFAP20 22 24 26 28 30T est Length0.00.20.40.60.81.0AccuracyBinary SumFAP-Loop-Adaptive (Ours)FAP-Loop-Adaptive-WOFAP-PauseFAP2022242628303234T est Length0.00.20.40.60.81.0AccuracyUnique SetFAP-Loop-Adaptive (Ours)FAP-Loop-Adaptive-WOFAP-PauseFAPFigure 5: Ablation study . Our looped Transformer model with adaptive depth generalized better than FAPvariants across studied tasks, including the variant of our method without input injection, and FAP with pausetokens. The vertical dashed line indicates the maximum training length.Comparison with pause tokens and vanilla FAP. We find that training with pause tokens in FAPcould boost the generalization performance compared to vanilla FAP, but not as effective as ourmethod with looped steps and adaptive depth. As discussed in [ 17], pause tokens mainly introduceparallel but not sequential compute, which is less powerful than adaptive depth. Besides, we findworse in-distribution accuracy for both FAP and FAP-Pause in Addition, which mainly comes fromthe difficulty in training a deep model (20 ×the depths of the decoder block used in the looped model)in FAP. It further highlights the importance of supervision with variant depths used in our training.C.4 The stopping criterion and visualizationsIn this section, we visualize the accuracy and the cross-entropy loss w.r.t. the decoded output in eachiterative step across tasks in Figure 6, with the test length to be the maximum length in Figure 2. Wealso provide more visualizations from other test lengths in Appendix E.Convergence in Addition, Copy, Multiplication, and Unique Set. We notice that for Addition,Copy, Multiplication, and Unique Set, the looped model somehow learns to converge for a certainnumber of steps after solving the task, even though we do not explicitly train the model to converge.The loss curves for these tasks are also smoother than those without convergence behaviors.The maximum confidence stopping criterion chooses the step with near-perfect accuracy. InFigure 6, the cross-entropy loss reaches the lowest when the generalization performance is nearperfect, which indicates the maximum confidence rule chooses the right time to exit. By training withthe ground truth number of iterations in the loop, we learn both the length-generalizable iterativesteps and when to stop, which is important for looped models.D Visualization of using pause tokens in NTP and FAPWe visualize NTP-Pause in Figure 7 and FAP-Pause in Figure 8 respectively, where we add a fixednumber of pause tokens (3 in the figures, 20 in our experiments) before outputting the final answerduring both training and inference.E (More) visualizations of the stopping criterionHere we present more visualization of the stopping criterion in Figure 9 when tested with differentlengths from Section C.4. We can still see similar patterns of convergence in Addition, Copy,Multiplication, and Unique Set. Moreover, the maximum confidence stopping criterion chooses thestep with near-perfect accuracy.15Step01Accuracy0 20 40 60Step2010Log Loss(a) Parity (test length=50)Step01Accuracy0 20 40 60Step7.55.02.5Log Loss (b) Addition (test length=30)Step01Accuracy0 20 40 60Step105Log Loss (c) Copy (test length=35)Step01Accuracy0 20 40 60Step105Log Loss(d) Multiplication (test length=16)Step01Accuracy0 20 40 60Step105Log Loss (e) Binary Sum (test length=30)Step01Accuracy0 20 40 60Step50Log Loss (f) Unique Set (test length=35)Figure 6: Stopping criterion visualizations. Plot of the stopping criterion. The vertical line indicates the stepchosen from Equation (2) within the range shown in the plots. The chosen steps have accuracy ≈1across tasks.011##0 1 -> 011>0110*1###0 1 01>10##0#####TF modelTF modelTF modelOutputOutputOutputInputInputInput............0 1 ****##00>###0 1 0 1 0 1 0 1 **********..................Figure 7: NTP-Pause visualization. Examples are from the Copy task. “...” indicates the pause token.F Inference time complexityHere we present the inference time complexity for our method, vanilla NTP and vanilla FAP.Assume that the maximum length of the training set is n, the number of steps needed is T(n), andthe number of layers in each step is k. Assume that NTP and FAP are using a fixed number of layersC. And we test on length n′.For the first stopping criterion where we know a(n′), our inference time would be O(ka(n′)n′2), andNTP (with KV cache) and FAP will be O(Cn′2). For the second criterion, we need to specify themaximum number of steps in order to find the step with maximum confidence. So our inference timewould be O(kN′n′2), where N′is the maximum number of steps.In NTP and FAP, we use some C≈kT(n)in our experiments such that they use similar computeduring training. Our inference time is then slightly longer than NTP with KV cache and FAP sincewe use more steps than the fixed-depth models.Moreover, we provide the inference time (in seconds) in Table 2, where we test on length 50 for Paritywith batch size 64. Ours (1) and (2) indicate our first and the second stopping criterion respectively.Table 2: Inference time from Parity.Ours (1) Ours (2) FAP FAP-pause NTP NTP-pause NTP (weight-tied)0.1967s 0.2190s 0.1117s 0.1262s 0.1229s 0.1315 0.1527s16011##0 1 -> 011>###0*1###0 1 01>####0#####TF modelTF modelTF modelOutputOutputOutputInputInputInput0 1 ****##0#>###0 1 0 1 **********....................................Figure 8: FAP-Pause visualization. Examples are from the Copy task. “...” indicates the pause token.Step01Accuracy0 20 40 60Step2010Log Loss(a) Parity (test length=40)Step01Accuracy0 10 20 30 40 50Step7.55.02.5Log Loss (b) Addition (test length=25)Step01Accuracy0 20 40 60Step105Log Loss (c) Copy (test length=30)Step01Accuracy0 20 40 60Step105Log Loss(d) Multiplication (test length=15)Step01Accuracy0 10 20 30 40 50Step15105Log Loss (e) Binary Sum (test length=25)Step01Accuracy0 20 40 60Step100Log Loss (f) Unique Set (test length=30)Figure 9: Stopping criterion visualizations. Plot of the stopping criterion. The vertical line indicates the stepchosen from Equation (2) within the range shown in the plots. The chosen steps have accuracy ≈1across tasks.G Experimental detailsWe use the decoder-only GPT-2 model with NoPE, 8 heads, 256 embedding dimensions as the basicblock for the looped iterations with task specific depth in Table 3. We convert the input to theembedding space, perform the loop in the embedding space, and decode the final output after the loopstops. We use a curriculum to gradually increase the maximum training length (see Table 3 for thespecific setup for each task). We use AdamW optimizer with a cosine learning rate decay schedulefrom 10−4to 0 after reaching the maximum training length, and train up to 100K gradient stepswith batch size 64 for all tasks. For the training distribution, we adopt the online training schemefollowing [ 37] where each batch is i.i.d. sampled. Given any length, the probability of each possiblecharacter is evenly distributed instead of from a finite train set to avoid over-fitting, and the length isalso evenly distributed. For input injection, we use a similar technique as [36] that adds the originalinput embedding to each looped block as part of the input. For vanilla NTP, we adopt the sametraining scheme, but trained with autoregressive loss instead. For NTP-Pause and FAP-Pause, we add20 pause tokens before outputting the final answer. Each training run takes about 4-6h on NVIDIAA100 40 GB GPU, depending on the maximum training lengths of the problems.For evaluation, we use 100×the batch size number of samples in Figure 2, 5, 6, 9, and report themean exact match accuracy and standard error from five training runs with different random seeds.17Table 3: Task-specific experimental hyperparameters. “Incremental Interval” denotes the number of trainingsteps between successive increases in the input sequence length.Task Depth of the Decoder Block Incremental IntervalParity 1 1000Copy 2 1000Addition 3 1600Multiplication 4 500Binary Sum 3 1000Unique Set 3 1000H Limitations and conclusionOur work has several limitations. Direct looped training could be computationally demanding whenthe number of looped steps is too large. A possible workaround for more efficient training could bestopping the gradient tracking for earlier steps like [ 8], but there might be a trade-off in performanceand computation. We only train the looped Transformers for a limited number of steps and lengths dueto a lack of computing resources. With more diverse training data, the looped model has the potentialto generalize to even longer test lengths. We use NoPE for simplicity, and an orthogonal directionis to use more delicate positional embedding to further improve length generalization performance.Moreover, our step-dependent supervision requires the ground-truth number of steps in the trainingdata, which is an additional requirement compared with normal end-to-end training. However, westill require fewer assumptions than CoT training.In conclusion, we show that n-RASP-L problems can be learned by looped Transformer with step-dependent supervision on the final answer, and can be applied with an adaptive number of stepsduring inference time to improve generalization. Note that n-RASP-L, as a challenging algorithmicproblem set, could cover more challenging reasoning problems than presented in the paper, and webelieve that our method has the potential to generalize to more challenging tasks.18 |
NyiygQYdh7 | Genetic Curriculum Learning for DistributionGeneralization on the Travelling Salesman ProblemMichael LiUniversity of [email protected] HaberlandUniversity of [email protected] JaquesUniversity of [email protected] Travelling Salesman Problem (TSP) is a classic NP-hard combinatorial op-timization task with numerous practical applications. Classic heuristic solvers –and Large Language Models (LLMs) using such solvers as a tool – can attain near-optimal performance for small problem instances, but become computationallyintractable for larger problems. Real-world logistics problems such as dynamicallyre-routing last-mile deliveries demand a solver with fast inference time, which hasled to specialized neural network solvers being favored in practice. However, neuralnetworks struggle to generalize beyond the synthetic data they were trained on. Inparticular, we show that there exist TSP distributions that are realistic in practice,which also consistently lead to poor worst-case performance for existing neuralapproaches. To address distributional robustness, we present Genetic CurriculumLearning (GCL), an efficient novel approach utilizing automatic curricula. Wealso present TSPLib50, a dataset of realistically distributed TSP samples, whichtests real-world distribution generalization ability without conflating this issue withTSP instance size. We evaluate our method on various synthetic datasets as wellas TSPLib50, and compare to state-of-the-art LLM results and neural baselines.We demonstrate that GCL improves distributional robustness, with most of itsperformance gains coming from worst-case scenarios.1 IntroductionFrom least-cost shipping and warehouse logistics to efficient automated circuit board drilling, theTraveling Salesman Problem (TSP) has an outsized impact on global trade, accounting for billionsof dollars worth of saved time, energy, and harmful emissions. The TSP is NP-hard, which meansthere exists no efficient algorithm for finding exact solutions. Classic heuristic methods haveprohibitive runtimes for real-world situations requiring fast and dynamic decision-making. Neuralcombinatorial optimization (NCO) methods seek to effectively solve the TSP at lower computationalcost [ 18,10,22], but generalize poorly to unfamiliar distributions [ 11,6]. In practice, such planningfaults can be very expensive in terms of wasted time, money, and human resources.Given impressive recent gains in reasoning capabilities of Large Language Models (LLMs) [ 27,28,17], LLMs potentially provide a promising path for solving novel TSP instances. However,LLMs currently perform suboptimally on the TSP [ 13,26]. In this paper we directly study theperformance of state-of-the-art LLMs prompted to solve TSP problems, and find that they havesimilarly prohibitive inference times as classic heuristics, in addition to inconsistent performance.Instead, we propose a novel adversarial training technique to enhance the robustness of NCO methods.Rather than training on limited TSP datasets or randomly generated TSP instances, which is inefficientand wasteful due to the high-dimensional parameter space, we propose to use a curriculum learningapproach in which environments and tasks are adaptively evolved to be more challenging [ 3]. Asapplied to TSP solvers, a “task” or a “level” would be a TSP instance that needs to be solved.38th Conference on Neural Information Processing Systems (NeurIPS 2024).Curriculum-based methods have shown promise for combinatorial optimization [ 15], but past worksutilize very simple heuristic regimes, and are also focused on generalization across lengths, notdistributions [ 20]. Zhang et al. [29] proposed a hardness-adaptive curriculum (HAC), which usesgradient ascent to produce increasingly harder levels. However, we show that HAC’s sampling andgradient ascent procedure causes unreliable performance on specific types of TSP instances whichare of practical interest.In this paper, we seek to improve model robustness to different distributions. We present an NCOoptimization approach which maintains the computational benefits of NCO methods while improvinggeneralization on disparate but practical distributions. We make the following contributions: 1)We propose the TSPLib50 dataset, a testing dataset of 10,000 instances sampled from realisticdistributions, designed to test the robustness of TSP solvers; 2) We propose an automatic curriculumwhich mutates high-improvement-potential training distributions; 3) We present empirical resultscomparing the performance to the best prior work on curriculum learning for NCO and state-of-the-artLLMs, and show that our method gives better worst-case performance and improved robustness tovarying distributions of practical interest. Our method is also relatively efficient to train, requiringonly a single GPU and no more than a few hours of training for each model.2 BackgroundTraveling Salesman Problem. The Traveling Salesman Problem (TSP) is a NP-Complete com-binatorial optimization problem (COP), which requires finding the shortest tour through a set ofcities. The TSP has been of intense interest to computational theorists due to its applicability in manypractical scenarios, especially in the logistics sector. Past works as well as this paper consider the2D-Euclidean TSP, which is formally defined in the Appendix.Deep and Reinforcement Learning for TSP. Neural combinatorial optimization (NCO), or theuse of deep learning for combinatorial optimization, can be broadly grouped into three primaryapproaches: solutions utilizing 1) pointer networks [ 23,14], 2) graph neural networks [ 8,19,30],or 3) transformers [ 12]. Reinforcement learning (RL) has seen successful applications in learningto solve the TSP [ 16,4]. Deep RL methods often use a neural network to generate a tour, and thentreat tour length as a negative reward, or “cost”. Kool et al. [10] propose a transformer-based solvertrained with REINFORCE [ 25], using a simple deterministic greedy rollout baseline. However, neuralnetworks are known to often generalize poorly to distributions outside their training data, and existingNCO solvers are no exception. This makes them a risky solution for real-world deployments, inspite of their fast inference time. In this paper, we aim to improve the reliability and robustness ofRL-based NCO approaches.Curriculum Learning for TSP. Curriculum methods improve robustness and sample efficiency byproposing tasks to learn from which are optimal for learning by being neither too easy nor too hard[3,24,1], and have been applied to real-world problems such as web navigation [ 7]. In the contextof Neural COP solvers, Zhang et al. [29] propose a hardness-adaptive curriculum (HAC) for theTSP, which mainly consists of two components: a hardness-adaptive generator that conducts gradientascent on training instances, and a re-weighting procedure for batch gradients in favor of updatesfor harder levels. We directly compare to HAC in this work, and include the HAC hardness metricH(X, M)as defined by Zhang et al. [29] in the Appendix. The hardness-adaptive generator conductsgradient ascent on input samples X(t)given a model M[29]:X(t)′=X(t)+η∇X(t)H(X(t), M).3 Preliminary StudyTSPLib50 and Other Evaluation Datasets. We first motivate the creation of TSPLib50, a newtesting dataset. TSPLib, a collection of real-world TSP instances, is often used as a benchmark forcombinatorial optimization solvers [ 21]. Because TSPLib is based on real data, its distributions areboth varied and relevant for real-world applications. However, many solvers are trained on relativelysmall TSP instances. When tested on TSPLib, the gaps incurred by such models are correlated withinstance size. For instance, we find a strong Pearson correlation of 0.907 between TSPLib instancesize and optimality gap of HAC models (see the Appendix). Improving model generalization to largerinstance sizes often requires extensive computational resources, and is beyond the scope of precedingpapers as well as this paper.2Figure 1: Example high-gap instances of a HAC model tested on TSPLib50. We see that all of thesefailure cases have large distances between node clusters, and thus deviate far from uniform levels.Figure 2: Architecture of our proposed Genetic Curriculum system. After the forward pass, wecompute improvement and then mutate high-improvement levels while saving Fisher informationabout low-improvement levels.Following the work of Zhang et al. [29], we focus on 50-node instances. Hence, we introduceTSPLib50, a dataset of 10,000 instances, each created by sampling 50 points uniformly at randomfrom a TSPLib instance. Because the distribution of points in TSPLib50 is the same in expectationas the distribution of points in TSPLib, we can thus disentangle generalization ability on differentdistributions with generalization ability on different instance sizes.We also test performance of our method on challenging synthetic distributions. We test on a Gaussianmixture distribution from prior work [ 29], and a “Diagonal” distribution of our design where allpoints align with a main diagonal. We justify and visualize these distributions in the Appendix.Hardness-Adaptive Curriculum Shortcomings. While HAC improves performance by training onharder distributions [ 29], it only conducts one step of gradient ascent on data sampled from a uniformdistribution. As a result, HAC fails to cover instances that deviate far from a uniform distribution.In HAC, changes in X(t)are determined by η∇X(t)H(X(t), M). We find that elements inη|∇X(t)H(X(t), M)|tend to have a mean around 0.077and median around 0.023, which are smallrelative to the unit square [0,1]2that points are placed in. Thus, points are only mildly perturbed.In Figure 1, we visualize high-gap TSPLib50 levels for HAC, and find that HAC performs sub-optimally on levels with large distances between nodes. TSPLib50 bootstraps from real-worlddistributions, and thus represents use cases of practical interest. We seek to address this issue.4 Genetic Curriculum LearningImprovement Potential Metric. In Genetic Curriculum learning (GCL), we compute the “improve-ment potential” I(X, M)for each training instance after each epoch with the current model MandREINFORCE baseline model M′:I(X, M) =CM(X)− CM′(X). Note that I(X, M)is similar toH(X, M)as used by Zhang et al. [29].3Figure 3: Gaps across training epochs of our proposed Genetic Curriculum model compared tobaselines, on average cases across different distributions and worst-case scenarios in TSPLib50. Theoptimality gap of GPT-4o on TSPLib50 is around 90% on a small sample, and is not plotted due tothe different scale of those values relative to existing results.Genetic Curriculum Algorithm. GCL proposes novel usage of an evolutionary approach to maintaina population of challenging levels, drawing inspiration from genetic programming [ 2]. GCL storesa population of level “genes” that describe the probabilistic process creating the levels. After eachepoch, the 50% of highest improvement-potential levels have their genes edited and placed intothe next epoch. We find that mutating a population of genes achieves better results than mutatinga population of levels. The genome consists of 6 bases: 0) Cluster size of the distribution pointsare drawn from; 1) Cluster width of the distribution points are drawn from; 2) Rotation angle; 3)Scale factor; 4) x-axis translation factor; 5) y-axis translation factor. Through this genome, we tryto address distribution invariance, rotational invariance, scale invariance, and translation invariance.Technical details of level sampling from genomes, genetic mutation procedure, and motivation forrelated hyperparameters are in the Appendix.GCL also uses Elastic Weight Consolidation (EWC) [ 9] to maintain performance on its learned knowl-edge, because as the genetic curriculum evolves to harder instances, it is possible that catastrophicforgetting is leading the model to perform poorly on easier instances. Figure 2 provides a diagram ofthe architecture of GCL. An algorithmic specification of GCL is provided in the Appendix.5 ExperimentsOur experiments work on fine-tuning an attention-based model with a REINFORCE rollout baseline,previously trained exclusively on uniform random distributions. We compare our model to resultsfrom OpenAI’s GPT-4o, a state-of-the-art LLM. We also compare against 2 NCO baselines: a“Uniform” baseline which samples from uniform distributions without curriculum, and a “HAC”baseline which samples from uniform distributions but uses HAC. Notably, all experiments are runon a single GPU, and no model takes longer than a few hours to train. Full experimental details andhyperparameters are in the Appendix.1We plot gaps of all models relative to oracles. We present results for average gaps on the GaussianMixture, Diagonal, and TSPLib50 distributions. We also present results on worst-case 1%, 0.5%, and0.1% of gaps on TSPLib50, to demonstrate robustness to challenging out-of-distribution cases.To further investigate our method, we run three tests to better interpret GCL. First, we run ablationtests on the genome and EWC components of our method. Second, we plot the distribution of eachgenome base over the course of training, to better understand the role the genome plays in GCL.Third, we plot the optimality gap of our baseline model M′and current model Mon training dataover the course of training, to better understand model convergence behavior.6 ResultsLarge Language Model Performance. Despite advances in mathematical and reasoning capabilities,Large Language Models (LLMs) often fail to find satisfactorily optimal TSP solutions, and haveprohibitively slow inference, requiring around 47 seconds on average. Even with prompt engineering,1All code is provided at https://github.com/ML72/Genetic-Curriculum-TSP/4LLMs still produce inconsistent and suboptimal responses. Details about our TSP-related LLMexperiments are in the Appendix.Dataset Model Gap Avg (%) Gap Std (%)GaussianMixtureUniform 15.0049 1.1970HAC 8.8460 0.3032GCL (Ours) 6.2214 0.2155DiagonalUniform 7.2115 0.1822HAC 3.9447 0.1346GCL (Ours) 3.0165 0.0318TSPLib50Uniform 2.3206 0.0082HAC 1.8183 0.0167GCL (Ours) 1.7738 0.0119Table 1: Average Model Gap Across DistributionsAverage Gaps. Average gap results can be seen in Table 1. On all distributions, HAC alreadyimproves significantly on the uniform baseline, as HAC uses a hardness-adaptive generator. GCL,our proposed method, achieves consistent improvement over HAC on the harder distributions. Weobserve an approximately 1/4 factor gap decrease on both hard distributions: the gap decreases from8.85% to 6.22% on Gaussian mixtures, and from 3.94% to 3.02% on the diagonal distribution. Forthe TSPLib50 distribution, GCL improves only slightly on HAC in terms of average gap. This makessense because a large portion of TSPLib50 levels are easy, while GCL focuses on robustness tochallenging levels.We find decreases in average gap between HAC and GCL to be statistically significant, with values ofp <0.01in two-sample t-tests for all distributions.Worst-Case Gaps. We can see that the slight improvement on TSPLib50 average gap mostly comesfrom gap decreases on hard levels by analyzing worst-case scenarios. On TSPLib50, GCL provides a1.82% gap improvement on the worst 1% of cases, a 2.52% gap improvement on the worst 0.5%, anda 3.42% gap improvement on the worst 0.1%. This is significant because in large-scale real-worldapplications that route millions of TSP problems every day, 1% of routes is still an important andcostly fraction. For example, if a large shipping company routes 40,000 loads, 1% of routes wouldstill equate to 400 loads.We find decreases in worst-case TSPLib50 gap between HAC and GCL to be statistically significant,with values of p <0.001in two-sample t-tests for worst 1% and worst 0.5%, and p <0.03for worst0.1%. GCL also improves gap from 204.40% by HAC to 98.47% on the worst 1% of GaussianMixture cases, and from 31.63% by HAC to 14.03% on the worst 1% of Diagonal cases. Thisdemonstrates GCL’s significant impact on improving robustness in the most challenging scenarios.Detailed tables are in the Appendix.Method Interpretations. In our ablation tests, we find that performance gains are mainly provided bythe genetic component of our method. In our genome evolution plots, we observe that the diversity ofgenome bases decreases over training, suggesting our genome is effective at encouraging explorationof new configurations. In our optimality gap plot, we observe that the baseline gap starts higher thanmodel gap, but dramatically decreases at around epoch 60, which further supports the previous pointon the genome promoting exploration. Detailed results and explanations are in the Appendix.7 DiscussionOur results demonstrate that GCL is able to significantly improve the performance of TSP solverson hard distributions. We also show that a portion of this improvement occurs in “worst-case”scenarios on real-world distributions of practical interest. Such improved robustness and performanceguarantees are significant in real-world deployment.GCL is a general methodology, and could be applied to other NCOs methods or COPs. As the Koolet al. [10] architecture generalizes to other problems such as the Vehicle Routing Problem (VRP) andCapacitated VRP (CVRP), it would be exciting to see GCL used for other COPs.5References[1]A. S. Azad, I. Gur, J. Emhoff, N. Alexis, A. Faust, P. Abbeel, and I. Stoica. CLUTR: Cur-riculum learning via unsupervised task representation learning. In A. Krause, E. Brunskill,K. Cho, B. Engelhardt, S. Sabato, and J. Scarlett, editors, Proceedings of the 40th InternationalConference on Machine Learning , volume 202 of Proceedings of Machine Learning Research ,pages 1361–1395. PMLR, 23–29 Jul 2023. URL https://proceedings.mlr.press/v202/azad23a.html .[2]H. Bai, R. Cheng, and Y . Jin. Evolutionary reinforcement learning: A survey. IntelligentComputing , 2:0025, 2023.[3]M. Dennis, N. Jaques, E. Vinitsky, A. Bayen, S. Russell, A. Critch, and S. Levine. Emergentcomplexity and zero-shot transfer via unsupervised environment design. In Proceedings of the34th International Conference on Neural Information Processing Systems , NIPS ’20, Red Hook,NY , USA, 2020. Curran Associates Inc. ISBN 9781713829546.[4]M. Deudon, P. Cournut, A. Lacoste, Y . Adulyasak, and L.-M. Rousseau. Learning heuristicsfor the tsp by policy gradient. In W.-J. van Hoeve, editor, Integration of Constraint Program-ming, Artificial Intelligence, and Operations Research , pages 170–181, Cham, 2018. SpringerInternational Publishing. ISBN 978-3-319-93031-2.[5]M. M. Flood. The Traveling-Salesman Problem. Operations Research , Feb. 1956. doi:10.1287/opre.4.1.61.[6]Z.-H. Fu, K.-B. Qiu, and H. Zha. Generalize a Small Pre-trained Model to Arbitrarily LargeTSP Instances. Proceedings of the AAAI Conference on Artificial Intelligence , 35(8):7474–7482,2021. doi: 10.1609/aaai.v35i8.16916.[7]I. Gur, N. Jaques, Y . Miao, J. Choi, M. Tiwari, H. Lee, and A. Faust. Environment generationfor zero-shot compositional reinforcement learning, 2022. URL https://arxiv.org/abs/2201.08896 .[8]C. K. Joshi, T. Laurent, and X. Bresson. An efficient graph convolutional network technique forthe travelling salesman problem. arXiv preprint arXiv:1906.01227 , 2019.[9]J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan,J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, andR. Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the NationalAcademy of Sciences , 114(13):3521–3526, Mar. 2017. ISSN 1091-6490. doi: 10.1073/pnas.1611835114. URL http://dx.doi.org/10.1073/pnas.1611835114 .[10] W. Kool, H. van Hoof, and M. Welling. Attention, learn to solve routing problems! InInternational Conference on Learning Representations , 2019. URL https://openreview.net/forum?id=ByxBFsRqYm .[11] M. Lisicki, A. Afkanpour, and G. W. Taylor. Evaluating curriculum learning strategies in neuralcombinatorial optimization. In Learning Meets Combinatorial Algorithms at NeurIPS2020 ,2020. URL https://openreview.net/forum?id=dZrtnd0nkc .[12] S. Liu, Y . Zhang, K. Tang, and X. Yao. How good is neural combinatorial optimization? asystematic evaluation on the traveling salesman problem. IEEE Computational IntelligenceMagazine , 18(3):14–28, 2023.[13] S. Liu, C. Chen, X. Qu, K. Tang, and Y .-S. Ong. Large language models as evolutionaryoptimizers, 2024. URL https://arxiv.org/abs/2310.19046 .[14] Q. Ma, S. Ge, D. He, D. Thaker, and I. Drori. Combinatorial optimization by graph pointernetworks and hierarchical reinforcement learning. AAAI Workshop on Deep Learning AAAIWorkshop on Deep Learning on Graphs: Methodologies and Applications , 2019.[15] S. Manchanda, S. Michel, D. Drakulic, and J.-M. Andreoli. On the generalization of neuralcombinatorial optimization heuristics, 2022. URL https://arxiv.org/abs/2206.00787 .[16] N. Mazyavkina, S. Sviridov, S. Ivanov, and E. Burnaev. Reinforcement learning for combinato-rial optimization: A survey. Computers & Operations Research , 134:105400, 2021.[17] S. Meng, Y . Wang, C.-F. Yang, N. Peng, and K.-W. Chang. Llm-a*: Large language modelenhanced incremental heuristic search on path planning. arXiv preprint arXiv:2407.02511 ,2024.6[18] S. Miki, D. Yamamoto, and H. Ebara. Applying deep learning and reinforcement learning totraveling salesman problem. In 2018 international conference on computing, electronics &communications engineering (ICCECE) , pages 65–70. IEEE, 2018.[19] Y . Min, Y . Bai, and C. P. Gomes. Unsupervised learning for solving the travelling salesmanproblem. Advances in Neural Information Processing Systems , 36, 2024.[20] W. Ouyang, Y . Wang, P. Weng, and S. Han. Generalization in deep rl for tsp problems viaequivariance and local search, 2021. URL https://arxiv.org/abs/2110.03595 .[21] G. Reinhelt. {TSPLIB }: a library of sample instances for the tsp (and related problems) from var-ious sources and of various types. URL: http://comopt. ifi. uniheidelberg. de/software/TSPLIB95 ,2014.[22] Y . Shi and Y . Zhang. The neural network methods for solving traveling salesman problem.Procedia Computer Science , 199:681–686, 2022. ISSN 1877-0509. doi: https://doi.org/10.1016/j.procs.2022.01.084. URL https://www.sciencedirect.com/science/article/pii/S1877050922000850 . The 8th International Conference on Information Technology andQuantitative Management (ITQM 2020 2021): Developing Global Digital Economy afterCOVID-19.[23] O. Vinyals, M. Fortunato, and N. Jaitly. Pointer networks, 2017. URL https://arxiv.org/abs/1506.03134 .[24] R. E. Wang, J. Mu, D. Arumugam, N. Jaques, and N. Goodman. In the zone: Measuringdifficulty and progression in curriculum generation. In Deep Reinforcement Learning WorkshopNeurIPS 2022 , 2022.[25] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcementlearning. Machine learning , 8:229–256, 1992.[26] C. Yang, X. Wang, Y . Lu, H. Liu, Q. V . Le, D. Zhou, and X. Chen. Large language models asoptimizers, 2024. URL https://arxiv.org/abs/2309.03409 .[27] K. Yang, A. Swope, A. Gu, R. Chalamala, P. Song, S. Yu, S. Godil, R. J. Prenger, andA. Anandkumar. Leandojo: Theorem proving with retrieval-augmented language models.Advances in Neural Information Processing Systems , 36, 2024.[28] E. Zelikman, Q. Huang, G. Poesia, N. Goodman, and N. Haber. Parsel: Algorithmic reason-ing with language models by composing decompositions. Advances in Neural InformationProcessing Systems , 36:31466–31523, 2023.[29] Z. Zhang, Z. Zhang, X. Wang, and W. Zhu. Learning to solve travelling salesman problem withhardness-adaptive curriculum. In Proceedings of the AAAI Conference on Artificial Intelligence ,2022.[30] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun. Graph neuralnetworks: A review of methods and applications. 2021.A Formal DefinitionsA.1 Traveling Salesman ProblemFormally, given a set of cities V={1,2, . . . , n }and a n×ndistance matrix Dwhere Di,jis a realnumber denoting the distance between city iand city j, the Traveling Salesman Problem (TSP) seeksto find the optimal permutation of cities σ∗that minimizes total tour length [5]:σ∗= arg minσ"Dσ(n),σ(1)+n−1Xi=1Dσ(i),σ(i+1)#(1)The 2D-Euclidean TSP is a special case of the TSP where all cities are given a position on the 2DEuclidean plane, and all Di,jrepresent the Euclidean distance between cities iandj. Because σ∗istheoretically translation-invariant and scale-invariant, 2D-Euclidean TSP problems often provide citylocations that are translated and scaled to fit in the [0,1]2unit square.7Figure 4: There is a Pearson correlation of 0.907between HAC gap and TSPLib instance size. All2D-Euclidean TSPLib instances with 1400 or fewer nodes are included.A.2 Elastic Weight ConsolidationFor an old task Aand a new task B, Elastic Weight Consolidation (EWC) computes a mean given bythe model parameters θ∗Aand the diagonal of the Fisher information matrix F. For some importancehyperparameter λand loss LB(θ)on task B, EWC loss is formally defined as follows [9]:L(θ) =LB(θ) +Xiλ2Fi(θi−θ∗A,i)2(2)A.3 Optimality GapThe optimality gap Gfor a dataset Xused by the hardness-adaptive generator is the gap in cost Cbetween the current model Mand an oracle model M∗:G(X, M) =CM(X)− CM∗(X)CM∗(X)(3)A.4 HAC Hardness MetricThe hardness metric Hfor a dataset Xused by the hardness-adaptive generator is the gap in cost Cbetween the current model Mand a surrogate model M′which is greedily updated by a few steps ofgradient descent [29]:H(X, M) =CM(X)− CM′(X)CM′(X)(4)Note that the formulation for H(X, M)is similar to the formulation for G(X, M). In fact, Zhanget al. [29] observe that H(X, M)is always a lower bound for G(X, M).B Preliminary StudyB.1 Evaluation DatasetsFigure 4 demonstrates the high correlation between TSPLib instance size and resulting performance,justifying the necessity of our creation of TSPLib50.8Figure 5: Visualizations of example instances from the distributions we evaluate on. For TSPLib50,we also plot the original TSPLib instance that we sampled from.Algorithm 1 Generate TSPLib50 DatasetParameter : Dataset size SOutput : Dataset D1:Initialize empty dataset D2:Initialize list of 2D-Euclidean TSPLib levels L3:L←[linLif size (l)≥150]4:fori= 0,1, . . . , S −1do5: l←L[i% length (L)]6: l′←50 points ∼l7: D←D+l′8:end for9:return DWe also test on the Gaussian mixture distribution because it tends to pose a challenge to existing TSPsolvers, as noted by previous works [29].We also test on a “Diagonal” distribution of our design because it is intended to be difficult in anothermanner. Previously, we identified a common feature of HAC failure cases being that they havemuch empty space, and we justified this interpretation mathematically. However, another commonfeature of those cases is that they have points in distinct clusters. The diagonal distribution aims toexperimentally demonstrate that empty space is a primary factor for difficulty, by having all pointsaligned along a main diagonal. As such, there is only one cluster, but there is still much empty spaceon the level.Figure 5 visualizes the TSPLib50, Gaussian Mixture, and Diagonal distributions.B.2 Distribution GenerationOur algorithm for generating TSPLib50 is specified in Algorithm 1. Note that when generatingTSPLib50, we only sample from TSPLib instances with 150 or more nodes to ensure that there issufficient diversity in the generated TSPLib50 instances.Our algorithm for generating Gaussian mixture datasets is specified in Algorithm 2. Note that whilewe generate Gaussian mixture distributions in the same fashion as Zhang et al. [29], our datasetcomposition is different. Zhang et al. [29] add instances of the uniform distribution and Gaussianmixtures with cdist= 1to the testing dataset. Note that Gaussian mixtures with cdist= 1are “closeto uniform” by our definition, as there is not much empty space on the level. We do not do this,and keep the Gaussian mixture distribution as purely Gaussian mixtures. Thus, our reported gaps9Algorithm 2 Generate Gaussian Mixture DatasetParameter : Dataset size SOutput : Dataset D1:Initialize empty dataset D2:forcdist∈[10,20,30,40,50,60,70,80,90,100] do3: fori= 0,1, . . . , S/ 10−1do4: Number of centers ∼Unif(3,6)5: Number of points per center ∼Multinomial6: Place centers within [0, cdist]27: l←50 points ∼ N(centers ,2I2)8: Rescale lto fit in [0,1]29: D←D+l10: end for11:end for12:return DAlgorithm 3 Generate Diagonal DatasetParameter : Dataset size SOutput : Dataset D1:Initialize empty dataset D2:ford∈[1,2,3,4,5]do3: fori= 0,1, . . . , S/ 5−1do4: l←50/dpoints distributed uniformly at each location from (1,1),(2,2), . . .(d, d)5: Negate y-coordinates in lwithp= 0.56: Rescale lto fit in [0,1]27: D←D+l8: end for9:end for10:return Dwith HAC are higher than those reported by Zhang et al. [29], as uniform distributions and Gaussianmixtures with cdist= 1incur very low gap.Our algorithm for generating Diagonal datasets is specified in Algorithm 3.C Genetic Curriculum LearningC.1 Algorithmic Specification and ArchitectureAlgorithm 4 provides an algorithmic specification of GCL.C.2 Genetic AlgorithmRecall that the GCL genome consists of 6 bases: 0) Cluster size of the distribution points are drawnfrom; 1) Cluster width of the distribution points are drawn from; 2) Rotation angle; 3) Scale factor; 4)x-axis translation factor; 5) y-axis translation factor.Levels are drawn from a “clustered uniform distribution”, and then rotated, scaled, and translatedin that order. Combined, these parameters address distributional invariance, rotational invariance,scale invariance, and translation invariance. Intuitively, the clustered uniform distribution generatespoints in various clusters, which serves as a better starting point for the gradient ascent step inthe hardness-adaptive generator to reach a variety of distributions. A sampling algorithm for thisdistribution is specified in Algorithm 5.Beginning each epoch, training data is sampled from the genomes in a probabilistic process. Ouralgorithm for sampling a level lfrom a gene gis specified in Algorithm 6.10Algorithm 4 Genetic Curriculum LearningInput : Current model M, baseline model M′, hardness-adaptive generator φ, genetic mutationprocedure ψ, genome distribution ΨParameter : Batch size B, training epochs L, EWC sample size N, EWC importance λOutput : Fine-tuned model M′1:Initialize and warm up MandM′2:Initialize genome G∼Ψ3:fori= 1,2, . . . , L do4: Sample dataset D∼G5: D′←φ(D)6: forb= 1,2, . . . ,|D|/Bdo7: Get batch data {X}Bi=1from D8: Pass batch data through baseline model M′9: Pass batch data through model M10: Update model parameters with weighted gradients11: end for12: Compute improvement I=CM(X)− CM′(X)13: SortDandGusing I14: Compute EWC Fisher matrix FwithD[0 :N]15: G[|G|/2 :|G|]←ψ(G[|G|/2 :|G|])16: G[0 :|G|/2]∼Ψ17: ifC(M)<C(M′)then18: M′←M19: end if20:end for21:return MAlgorithm 5 Generate Clustered Uniform DatasetParameter : Dataset size S, cluster size c, noise εOutput : Dataset D1:Initialize empty dataset D2:fori= 0,1, . . . , S −1do3: Number of centers ←50/c4: Place centers within [0,1]25: l←50 points ∼Unif(centers −ε,centers +ε)6: Rescale lto fit in [0,1]27: D←D+l8:end for9:return DGenetic mutation procedure ψ:After each epoch, we select the 50% of highest-improvement genesand mutate their bases with ψ. Each base is incremented with probability 1/12 and decrementedwith probability 1/12. Note that in aggregate, this means each base is mutated with probability 1/6.Following are the min/max bounds and increment/decrement magnitudes for the bases:• 0: Min = 1, Max = 25, Inc/Dec = ±1• 1: Min = 0.03, Max = 0.08, Inc/Dec = ±0.01• 2: Min ≈ −2π, Max≈2π, Inc/Dec = ±0.1• 3: Min = 0.7, Max = 1, Inc/Dec = ±0.1• 4: Min = 0, Max = 1, Inc/Dec = ±0.1• 5: Min = 0, Max = 1, Inc/Dec = ±0.1Genome distribution Ψ:After each epoch, we select the 50% of lowest-improvement genes andresample them from Ψ, as specified below per base. These initial values are often “middle” valuesthat allow exploration in both directions, which mitigates the possibility of getting stuck in a genepool “local minima”, as detailed in our hyperparameter interpretations below.11Algorithm 6 Sample Level from GenesInput: Gene gOutput : Level l1:l∼ClusteredUniform (c=g[0], ε=g[1])2:l←Rotate (l, θ=g[2])3:Rescale lto fit in [0,1]24:l←l×g[3]5:l←l+ (∆ x=g[4](1−g[3]),∆y=g[5](1−g[3]))6:return l• 0: Uniform split between {1,5,10,15}• 1: 0.05• 2: 0• 3: 1• 4: 0.5• 5: 0.5Following are interpretations for the two important end-of-epoch mutation hyperparameters:•Mutate 50% of high-improvement levels: We mutate the 50% of most improved levelsand re-sample 50% as it strikes a balance between fresh training distributions and harddistributions that the model needs to improve at. This also minimizes the risk of gettingstuck in possible gene pool “local minima”, where the majority of genes are focused ondistributions that used to be hard, but are no longer challenging.•Mutate each genome base with 1/6 probability: During mutation, each base is mutatedwith 1/6 probability; because there are 6 bases in each level’s genome, in expectation onlyone base is modified per mutation. This allows genetic diversity of levels while preventinglevels from mutating so much that the mutated genes’ difficulty is vastly different from theoriginal genes’ difficulty.C.3 Elastic Weight ConsolidationAfter each epoch, a number Nof least-improved levels are saved and used to compute Fisherinformation diagonals and means for the network parameters θ. This is then used in the EWC penaltyfor constraining gradient updates in the next epoch. The intuition is that low improvement-potentiallevels are likely representative of the model’s strengths, and thus important related parameters shouldnot be changed.D ExperimentsD.1 SetupFollowing the work of Kool et al. [10], we use an attention-based architecture trained with a REIN-FORCE rollout baseline. We also use the gradient re-weighting curriculum from Zhang et al. [29].All our experiments work on fine-tuning a model previously trained exclusively on uniform randomdistributions. We work off the Kool et al. [10] codebase, which is released under an MIT license.We compare our model against two baseline methods: a uniform baseline trained on uniformdistributions without any form of curriculum, and a HAC baseline which samples from uniformdistributions but uses HAC to make training instances more challenging. Note that the uniformbaseline is equivalent to the model used by Kool et al. [10], while the HAC baseline is equivalent tothe model used by Zhang et al. [29].We plot gaps of all models relative to Concorde solutions, as Concorde is an optimal solver. For alldistributions, 10,000 instances are sampled for evaluation. We also train 5 models for each setting,and report averaged results between the 5 models for each setting. We assume a normal distributionof error across these averages, and plot error bars equal to 1- σof these averages.12D.2 HyperparametersThe base hyperparameters used for training all models are listed below. For fair comparison, we usethe same model architecture used by Kool et al. [10] and Zhang et al. [29].•Architecture: embedding dim = 128, hidden dim = 128, num encode layers = 3•Training: graph size = 50, baseline = rollout, baseline warmup epochs = 0, epoch size =65536, batch size = 1024, epochs = 151, LR decay = 0.98•HAC: η= 5, adaptive percent = 100•EWC: λ= 1, warmup epochs = 20, num samples = 2048The exact training and evaluation commands which we use to obtain our results are included in ourcode. Those commands give additional information about default/implicit hyperparameters not listed.Our criteria for selecting final parameter settings was best average gap on testing distributions. Notethat we did not tune all parameters concurrently; in particular, our EWC parameters were selected ata stage of our tuning when EWC had a more significant effect on results. Hyperparameter rangesused for tuning are as follows:•Architecture: Not tuned, consistent with Kool et al. [10] and Zhang et al. [29]•Training: baseline warmup epochs ∈[0,1], epochs ∈[101, 251], LR decay ∈[0.95, 1]•HAC: Not tuned, consistent with Zhang et al. [29]•EWC: λ∈[0.01, 1000], warmup epochs ∈[0,50], num samples ∈[128, 2048]D.3 DetailsWe train 5 models for each setting, and report aggregate results over the 5 models for each setting.All test datasets were generated with the random seed “1234”. Our models are not explicitly seeded,but we find variance between runs to be consistent and reproducible.All experiments were run on a singular NVIDIA L40 GPU on a Linux operating system with 20GBrequested memory. However, our hyperparameter settings do not fully utilize GPU capabilities, andresults should be reproducible on lower-memory GPUs such as the GeForce RTX 4060. No modeltakes longer than a few hours to train.D.4 Source CodeOur source code is publicly available at https://github.com/ML72/Genetic-Curriculum-TSP .Relevant software libraries and frameworks are listed in the dependencies file included in our code.We also include special documentation on installing pyconcorde, which we use as our oracle solver.All exact code used for plotting and conducting statistical tests is included in our code.E ResultsE.1 Large Language Model ResultsInstance Inference Time (s) Optimal Distance GPT-4o Distance Gap (%)Instance 1 52 4.8462 6.7730 39.7591Instance 2 50 5.5103 24.4655 343.9992Instance 3 41 4.0004 4.4332 10.8190Instance 4 47 3.8288 4.4590 16.4596Instance 5 47 3.0122 4.3116 43.1364Average 47.4 4.2396 8.8885 90.8347Table 2: GPT-4o Performance on Solving the TSP13We run brief experiments to demonstrate that large language models have inconsistent performanceand lengthy inference times when asked to solve TSP problems. We prompted OpenAI’s GPT-4o,which boasts state-of-the-art performance on mathematical and logical reasoning tasks, to solve thefirst 5 TSP instances in TSPLib50. Performance on these instances is printed in Table 2.As opposed to prior work which uses meta-prompts [ 26], we prompt the LLM to directly solve a givenTSP instance, for simplicity and faster inference. We use prompt engineering to improve responseperformance and consistency to the best of our ability. In our prompt, we ask for a permutation of theindices 1 to 50 and nothing else, to encourage the LLM to consistently return a valid permutation.We also ask the LLM to solve the problem to the best of its ability, to avoid errors where the LLMtries to use tools or libraries not in its environment and thus encounters an error. If we directly askfor a solution to the TSP instance without such further instructions, the majority of responses do notprovide a tour, falling into one of the following failure cases:•GPT-4o correctly identifies the problem as being the TSP and describes methods for approx-imating a solution, but makes no attempt to solve the instance provided.•GPT-4o provides code which runs classic heuristic-based algorithms to solve the TSP, butdoes not provide an actual permutation.•GPT-4o writes code and attempts to run it, but encounters into an error because some librarythat it needs is not in its current environment.A sample prompt is printed below:There are 50 cities, respectively at the following locations on a 2D plane:(0.42731, 0.17487), (0.91628, 0.1503), (0.29484, 0.32477), (0.79805, 0.83944),(0.19764, 0.06484), (0.16905, 0.59943), (0.2539, 0.38879), (0.63631, 0.06185),(0.29277, 0.18652), (0.15599, 0.57022), (0.12723, 0.37378), (0.64433, 0.03523),(0.05164, 0.40614), (0.71752, 0.3749), (0.34851, 0.0), (0.53609, 0.61541),(0.41741, 0.57393), (0.47766, 0.1533), (0.25821, 0.15485), (0.31986, 0.75521),(0.44394, 0.63197), (0.84433, 0.39004), (0.33283, 0.56372), (0.325, 0.59029),(0.02606, 0.49288), (0.573, 0.78519), (0.4752, 0.01513), (0.40449, 0.54465),(0.23839, 0.11115), (0.23977, 0.06993), (0.4329, 0.01002), (0.92899, 0.40017),(0.21591, 0.14988), (0.07178, 0.56037), (0.88931, 0.31247), (0.25623, 0.01655),(0.24801, 0.0432), (0.16818, 0.30984), (0.45731, 0.88187), (0.80764, 0.22011),(0.44103, 0.49385), (0.62063, 0.62541), (0.72454, 0.86523), (0.36167, 0.76018),(0.55969, 0.02518), (0.42486, 0.14698), (0.01606, 0.74696), (0.20613, 0.54872),(0.06619, 0.72514), (0.80553, 0.81297)What is the optimal tour permutation of the cities to minimize the total distance trav-eled? Solve the problem to the best of your ability. Reply with only a permutationof the indices 1 to 50, and nothing else.From Table 2, we can immediately see that the inference time required by GPT-4o is prohibitivelyexpensive in situations requiring fast and dynamic decision-making. Furthermore, the performanceis inconsistent: we can see that the gap incurred by GPT-4o has high variance, which is especiallyevident in the gap incurred on instance 2. Even in the best-case scenarios, the gap provided is stillsuboptimal compared to NCO methods. It is also worth mentioning that the formatting of the outputsproduced by GPT-4o have slight inconsistencies as well. A permutation was provided for all 5instances, but sometimes there were artifacts such as extra square brackets around the permutation.These small formatting variations pose a possible risk of feeding unexpected input into downstreamapplications that utilize these outputs.E.2 Worst-Case GapsWorst-case gap results on TSPLib50 can be seen in Table 3. We can observe that there is significantimprovement by GCL on the hardest TSP instances.E.3 AblationsIn Table 4, we report ablation results for the two aspects of our curriculum. It can be seen thatwithout the genome component, the gap increases significantly, while without the EWC component,14% Worst Model Gap Avg (%) Gap Std (%)1%Uniform 20.4826 0.3445HAC 11.0197 0.2942GCL (Ours) 9.2047 0.42230.5%Uniform 24.9234 0.7753HAC 13.3394 0.3584GCL (Ours) 10.8200 0.67980.1%Uniform 42.7885 4.6717HAC 19.4061 0.5314GCL (Ours) 15.9905 2.0829Table 3: Worst Case Gap on TSPLib50Dataset Model Gap Avg (%) Gap Std (%)GaussianMixtureGCL (Ours) 6.2214 0.2155No EWC 5.9941 0.1912No Genome 8.8520 0.7046DiagonalGCL (Ours) 3.0165 0.0318No EWC 3.0217 0.0344No Genome 3.8851 0.1026TSPLib50GCL (Ours) 1.7738 0.0119No EWC 1.7792 0.0115No Genome 1.8232 0.0127Table 4: Average Model Gap Ablationsperformance is similar. We conclude that the genome component is most impactful for improvingperformance. We find a similar trend for ablation tests on TSPLib50 worst-case scenarios.E.4 Genome EvolutionTo better understand how the genome evolves over training, we plot the distribution for each genomebase over the course of training in Figure 6. We plot this for a single GCL run. Note that thehigh-frequency bright bands are the default values that new genome instances are sampled from.Recall that half of the genomes are resampled after each epoch, and only 1 base mutates in expectationper genome each epoch; this explains why the default values are considerably higher frequency thanneighboring values.However, outside of the default values, we can see that the genome diversity slowly decreases overtraining. This is most visible in the plots of bases 2, 4, and 5. We interpret this as indicating that thegenome is effective: there is rapid evolution at first as the genome population explores new genomeconfigurations, but this incentive for diversity decreases as the model eventually learns to solve thesenovel configurations.E.5 Gap Over TrainingIn Figure 7, we plot the optimality gap for our current model and our baseline model on traininglevels, over training. We plot this for a single GCL run. As it is too computationally intensive tocompute the oracle for all levels in an epoch, we uniformly sample 1000 training levels from every15th epoch, and calculate optimality gap on those levels.We can see that notably, the baseline gap starts higher than the model gap, but then decreasessignificantly at around epoch 60. We also interpret this as a further indicator that the genome iseffective: at first the baseline model struggles to keep up with the levels that the current model istraining on due to rapid genome evolution, but over time the baseline learns to generalize to thosenew levels. This finding is consistent with our interpretation of Figure 6.15Figure 6: We plot the evolution of genome base distributions over the course of training for a singleGCL run. The diversity of each genome base decreases over training.F DiscussionF.1 LimitationsWe acknowledge that there are realistic distributions out there that GCL still fails to consistentlygenerate and train on, as noted by worst-case gaps that are still significantly above average.This paper also focused on instances with 50 nodes. With more compute, GCL could be tested atscale to see if trends hold with larger instance sizes.Additionally, while we conducted basic hyperparameter searches for curriculum-based parameters,by no means is our search exhaustive due to the large parameter space. Thus, we believe that ourperformance could be optimized upon further tuning.16Figure 7: We plot baseline model and current model optimality gap on training data over the courseof training for a single GCL run. The baseline gap starts considerably higher than the current modelgap, but decreases significantly at around epoch 60.17 |
L5US093OwO | Synthesizing Verified Mathematical ProblemsXuefeng Li1,2Yanheng He1,2Pengfei Liu1,2,3∗1Shanghai Jiao Tong University2Generative AI Research Lab3Shanghai AI LaboratoryAbstractMathematical data synthesis offers a potentially effective solution for enhancingthe mathematical capabilities of large language models. However, existing meth-ods either synthesize a large number of rationales based on existing questions,limiting the diversity of the questions, or rely on advanced proprietary models todirectly generate new questions without verification, which cannot guarantee thecorrectness of the synthesized problems. This paper introduces a novel method,mathematical data synthesis through Algorithmic Abstraction, Implementation,andContextualization (AIC), to synthesize new and verifiable mathematical prob-lems. AIC abstracts mathematical problems into algorithms, implements thesealgorithms as code functions, and contextualizes them under different conditionsto create new problems, which are then verified using code functions. Experimen-tal results on multiple challenging mathematical benchmarks show that modelsfine-tuned on our synthesized data are superior to previous state-of-the-art models.Further experiments indicate that, when controlling for the same synthesizer, datasynthesized using the AIC method is not only more accurate but also more effectiveat improving the model’s mathematical abilities.1 IntroductionLarge language models (LLMs) have made significant strides, expanding from natural languageprocessing to areas like code generation and creative writing [ 3,29,4]. Their success stems fromvast amounts of high-quality training data [ 30,9]. As the availability of untapped high-quality datadiminishes, LLM research faces a problem of data scarcity [ 25]. Consequently, data synthesis, usinggenerative models to create data similar to real data, offers a solution to this scarcity by supplementingreal-world data [ 18,2]. For synthetic data to be effective, it must maintain quality comparable to realdata [27, 24], particularly for mathematical data, which demands high logical consistency.Research on enhancing LLMs’ mathematical abilities through instruction tuning mainly follows twoapproaches. The first generates rationales for known mathematical problems using LLMs, filteringrationales based on the correctness of the final answer [ 30,26,23,22,10], though this limits thediversity of problems. The second approach uses advanced LLMs, like GPT-4 [ 1], to generatenew questions and rationales [ 6,19,17,21,13,16], enhancing data diversity but risking accuracywithout verification [ 28,20]. Therefore, a method that generates new problems while ensuring theircorrectness is essential for producing diverse and accurate synthetic mathematical data.In this paper, we propose Mathematical Data Synthesis via Algorithmic Abstraction, Implementation,and Contextualization (AIC). The central idea is that many mathematical problems can be addressed byabstract algorithms. By abstracting such algorithms from mathematical problems and contextualizingthem, we can generate new mathematical questions and corresponding rationales. Moreover, abstractmathematical algorithms can be implemented using Python to verify the correctness of the synthesized∗Corresponding author.38th Conference on Neural Information Processing Systems (NeurIPS 2024). Workshop on MATH-AISeed DataQuestion: Find the maximum value of f(x)=−2x2+4x+1Rationale: Finding the x-coordinate of the vertex: ...... So, fmax=3 Final Answer: 3import sympydef maximum_value(f): ...... return max_valueAlgorithm Objective: Find the maximum value of a quadratic function f(x).Algorithm Process: Step1: Finding the x-coordinate of the vertex. .....f(x)=−4�2−8� Verification for CodeQuestion: Find out the peak value of −4x2−8�Rationale: Finding the x-coordinate of the vertex: ...... So, fmax=4 Final Answer: 4Question: Find the maximum value of f(x)=−2x2+4x+1Rationale: Finding the x-coordinate of the vertex: ...... So, fmax=3 Final Answer: 3Question: Find the maximum value of f(x)=−2x2+4x+1Rationale: Finding the x-coordinate of the vertex: ...... So, fmax=3 Final Answer: 3Natural Language AlgorithmAbstractionImplementationCode FunctionContextualizationSynthesized New Problem Verification for New ProblemLLM LLMFigure 1: An Overview of AIC: (1) The synthesizer(LLM) abstracts mathematical problems (SeedData) into natural language algorithms. (2) These algorithms are implemented in python by thesynthesizer, with their correctness verified through a verification process. (3) Finally, the synthesizercontextualizes the abstract algorithms to generate new problems, employing a verification mechanismto ensure the correctness of the newly synthesized problem.data. As shown in Figure 1, the process of AIC is divided into two stages. Stage1 AlgorithmAbstraction and Implementation : First, we use large language models (LLMs) as a synthesizer toabstract existing mathematical problems, which serve as seed data. Each entry includes the question,rationale, and final solution, and is transformed into a natural language algorithm. Next, we promptthe synthesizer to implement the algorithm as a Python code function and verify its correctness usinga verification mechanism. Stage2 Algorithm Contextualization : The synthesizer contextualizes thenatural language algorithm and generates a new mathematical problem. Then, the conditions of thisnewly generated problem are fed into the corresponding code function, and by checking if the finalanswer generated by the synthesizer aligns with the result of the code execution, thereby verifyingthe correctness of the synthesized problem.We evaluated the model on several challenging mathematics benchmarks, including MATH [ 12],MathOdyssey [ 8], finding that data synthesized using AIC can significantly improve the performanceof the synthesis model itself and is highly competitive compared to other methods. AIC not only hasthe capability to synthesize a large volume of high-quality mathematical data but also paves a newway for generating verifiable mathematical problems.2 MethodsIn this paper, we propose a data synthesis method for generating new mathematical problems withverified solutions. Our method comprises two stages. In the first stage, we employ an LLM asa synthesizer to abstract algorithms from existing mathematical problems. We then prompt thesynthesizer to implement these algorithms as Python code functions and verify the code’s correctness.In the second stage, the synthesizer contextualizes the algorithms into new mathematical problems,using the code functions to verify the correctness of the synthesized problems.LetDseed= (qi, ri, ai)Ni=1represent a typical mathematical training dataset, which serves as seed data,where qi,ri,aiare question, rationale and final answer of the i-th problem. In addition to the seed data,the synthesis process utilizes large language models such as M(e.g., Mixtral-8 ×7B-Instruct [ 15],Llama3-70B-Instruct [7]) and a code interpreter C.2.1 Stage1: Algorithm Abstraction and ImplementationFor each piece of data di= (qi, ri, ai), the LLM is asked to first analyze the question qiand therationale rito understand the core goals of the problem, identify the key operations and steps ofreasoning, determine the sequential relationships between steps, and finally, identify the mathematicalobjects such as integers, series and expressions in question qithat are independent of the core steps inreasoning, parameterizing them as placeholder variables. This way, a question qiand its rationalerican be transformed into an algorithm objective oiand an algorithm process pi. As show inFigure 2, We also prompt the synthesizer Mto generate additional information about the algorithm,2fAbstractionImplementationExecutionContextulizationExecutionSynthesized Problem SetStage2: Algorithm ContextulizationStage1: Algorithm Abstraction and ImplementationAlgorithm SetRepeat Stage 1Verify Correctness of Code FunctionVerify Correctness of New ProblemFigure 2: A more detailed overview of the synthesis pipeline.including placeholder constraint ci, which specify the placeholder variables’ types, value ranges andrelationships with other placeholder variables; placeholder values vi, which indicate the values ofthe placeholder variables in the original problem. Overall, we prompt the synthesizer Mto abstarct amathematical problem di= (qi, ri, ai)into a natural language algorithm Ψi.For each algorithm Ψi, the synthesizer Mprograms a code function fiby Python, where theparameters are the placeholder variables, and the return value is the final result of the algorithm.Furthermore, to ensure the correctness of the algorithm and function, we propose a verificationmechanism, using the original problem di= (qi, ri, ai)as a test case, inputting placeholder values viinto the function fi, obtaining the function’s return value and comparing it with the original answerai, to filter out incorrect functions. If verification fails, we regenerate the algorithm for the problem,repeating until the algorithm passes the verification or reaches the maximum number of iterations I.2.2 Stage2: Algorithm ContextualizationContextualization aims to transform abstract natural language algorithms into specific mathematicalproblems. For any given algorithm Ψi= (oi, pi, vi, ci)and the corresponding code function fi, first,the synthesizer generates Kpossible values of placeholder variable based on the algorithm, denoteaspvji, j= 1, ..., K , which assigning specific mathematical objects that comply with the algorithm’sconstraints, which can also be referred to as a context . With the algorithm and placeholder variablesin place, the synthesizer generates specific mathematical question qji, corresponding rationale rji, andfinal answer ajifor the algorithm Ψiin the current context pvj.Unlike traditional synthesis algorithms that lack verification, here we can input the placeholdervariable values pvjiinto the code function fiand execute it by code interpreter C, filtering outincorrect synthesized data by checking whether the execution result gjimatches the final answer ajigiven by the synthesizer through algorithm contextualization. By generating numerous contexts, analgorithm can be contextualized into many mathematical instruction-tuning data points.33 Experimental Results3.1 Experiments SettingData We use the training set from MATH [ 12] and a small subset of MAmmoTH2 [ 31] includingabout 30,000 data points as seed data. subset includes approximately 30,000 data points. Datasynthesis is conducted using Mixtral-8 ×7B-Instruct and Llama3-70B-Instruct, resulting in the AIC-M and AIC-L datasets, respectively. Further details about the data are provided in Appendix A.Training We follow a standard supervised fine-tuning approach to train several models, includingMistral-7B-Base [14], Llama3-8B-Base, Mixtral-8 ×7B-Instruct [15], and Llama3-70B-Instruct.Evaluation We evaluate the effectiveness of our method using five high-difficulty mathemat-ical benchmarks, including the in-domain benchmark MATH and out-of-domain benchmarksGaoKaoBench-Math [32], MathOdyssey [8], OlympiadBench-Math [11], and TheoremQA [5].More detailed information on the experimental settings is provided in Appendix B.3.2 Effetiveness of Synthesized DataTable 1: Mixtral-AIC and Llama3-70B-AIC refer to models trained using AIC-M on Mixtral-8 ×7B-Instruct and AIC-L on Llama3-70B-Instruct, respectively. On the other hand, Mixtral-Seed andLlama3-70B-Seed are models trained with the seed data on corresponding models.Model MATH GaoKao Odyssey Olypaid TheoremQA AvgMixtral-8 ×7B-InstructMixtral-Seed 27.6 16.9 8.7 6.8 12.8 14.6Mixtral-AIC 35.4 +7.819.0 +2.112.6 +3.9 9.9+3.1 13.8 +1.0 18.1 +3.5Llama3-70B-InstructLlama3-70B-Seed 39.2 22.0 9.5 10.5 15.0 19.2Llama3-70B-AIC 48.7 +9.530.5 +8.514.4 +4.915.1 +4.6 18.3 +3.3 25.4 +6.2We separately fine-tune Mixtral-8 ×7B-Instruct and Llama3-70B-Instruct using data synthesized byMixtral-87B-Instruct (AIC-M) and Llama3-70B-Instruct (AIC-L) to evaluate whether the synthesizeddata can enhance the performance of the models. Since the data synthesis process involves both largelanguage models (LLMs) and seed data, we compared the performance of models trained with bothsynthesized and seed data. As shown in Table 1, training with the synthesized data significantlyoutperforms training with the original seed data, demonstrating the effectiveness of our approach.3.3 Comparison with other modelTable 2: Comparison of different models testing accuracy on mathematical benchmarks.Model Synthesis Model MATH GaoKao Odyssey Olypaid TheoremQA AvgMistral-7B-WizardMATH GPT4 32.3 - - - - -Mistral-7B-MetaMATH GPT3.5 27.7 14.9 5.9 6.5 6.0 13.1Mistral-7B-MMIQC GPT4 31.5 17.9 7.2 6.8 9.2 14.5Mistral-7B-MathScale GPT4 35.2 - - - - -Mistral-7B-AIC Llama3 36.4 +1.220.8 +2.9 8.7+1.5 11.1 +4.5 12.5 +2.7 17.9 +3.4Llama3-8B-MetaMATH GPT4 31.5 14.7 6.4 6.8 10.2 13.9Llama3-8B-MAmmoTH2 GPT4 35.8 - - - -Llama3-8B-MMIQC GPT4 37.5 15.3 11.3 6.9 9.7 16.1Llama3-8B-AIC Llama3 39.0 +1.520.6 +5.310.8−0.5 8.8+1.9 11.6 +1.4 18.1 +2.0In this section, we train two base models Mistral-7B-Base and Llama3-8B-Base using AIC-L andcompare them with other models, including WizardMath, MetaMath, MMIQC, MathScale, andMAmmoTH2. Additional details about baselines are provided in Appendix B.3.Table 2 presents the performance of our method compared to other data synthesis approaches acrossvarious high-difficulty math benchmarks. Among models based on Mistral-7B-Base, Mistral-7B-4AIC demonstrated an average improvement of 3.4%. For models derived from Llama3-8B-Base,Llama3-8B-AIC showed an improvement of 2.0%. Additionally, while most competing models utilizeclosed-source advanced models (e.g., GPT-3.5, GPT-4) for data synthesis, our approach leverages theopen-source model, further underscoring the effectiveness of our method.3.4 Fair Comparison with Other Methods0 +7.5k +15k +30k171819202122Accuracy on MATHLlama3-8B-Base0 +7.5k +15k +30k293031323334DeepSeekMath-7B-Base0 +7.5k +15k +30k1415161718192021Mistral-7B-BaseMMIQC Xwin-Math NumReplace RFT OursFigure 3: Fair comparison with other methods.Given the variation in both the models used for data synthesis and the scale of data synthesis acrossthe comparison objects in Section 3.3, these differences do not accurately represent the strengths andweaknesses of the synthesis methods. To address this, we standardize both the data synthesis modelsand the scale of data synthesis in this section, allowing for a more comprehensive evaluation of themethods. We chose MATH as the seed data and conducted all evaluations on MATH. The methodscompared include NumReplace, MMIQC, and Xwin-MATH, as introduced in Appendix B.3. Wetrained models on various data scales to thoroughly assess the effectiveness of the synthesis methods.The results in Figure 3 show that our method outperforms the baselines at any scale. We believe thisis because, when generating more difficult problems, methods without a verification mechanism oftenlead to errors in the synthesized data, thereby reducing its quality.3.5 Effectiveness of VerificationTable 3: Ablation Study on the verifica-tion mechanism.Verification Samples MATHLlama3-8B-Base✓ 34k 21.2 +1 .1✗ 34k 20.1Mistral-7B-Base✓ 34k 18.8 +1 .0✗ 34k 17.8We investigate the verification mechanism’s effectivenessby comparing two equal-sized datasets: one before andone after its application. For simplicity, this experimentuses only MATH as the seed data and test set.The results in Table 3, demonstrate that the verificationmechanism enhances model performance. This improve-ment stems from the fact that the final answers generatedby the code are generally correct, and filtering the rationalebased on these answers improves the logical and compu-tational accuracy of the data, thereby enhancing its overallquality. These findings highlight the importance of veri-fying the correctness of synthesized data.4 ConclusionThe paper proposes a mathematical synthesis approach generating diverse, verified synthetic datathrough algorithmic abstraction and contextualization, offering a scalable solution for enhancingLLM mathematical capability.5References[1]J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida,J. Altenschmidt, S. Altman, S. Anadkat, et al. Gpt-4 technical report. arXiv preprintarXiv:2303.08774 , 2023.[2]A. Bauer, S. Trapp, M. Stenger, R. Leppich, S. Kounev, M. Leznik, K. Chard, and I. Foster. Com-prehensive exploration of synthetic data generation: A survey. arXiv preprint arXiv:2401.02524 ,2024.[3]T. B. Brown. Language models are few-shot learners. arXiv preprint arXiv:2005.14165 , 2020.[4]M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. D. O. Pinto, J. Kaplan, H. Edwards, Y . Burda,N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXivpreprint arXiv:2107.03374 , 2021.[5]W. Chen, M. Yin, M. Ku, P. Lu, Y . Wan, X. Ma, J. Xu, X. Wang, and T. Xia. Theoremqa: Atheorem-driven question answering dataset. In The 2023 Conference on Empirical Methods inNatural Language Processing , 2023.[6]E. Chern, H. Zou, X. Li, J. Hu, K. Feng, J. Li, and P. Liu. Generative ai for math: Abel.https://github.com/GAIR-NLP/abel , 2023.[7]A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten,A. Yang, A. Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 , 2024.[8]M. Fang, X. Wan, F. Lu, F. Xing, and K. Zou. Mathodyssey: Benchmarking mathematicalproblem-solving skills in large language models using odyssey math data. arXiv preprintarXiv:2406.18321 , 2024.[9]L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite,N. Nabeshima, et al. The pile: An 800gb dataset of diverse text for language modeling. arXivpreprint arXiv:2101.00027 , 2020.[10] Z. Gou, Z. Shao, Y . Gong, Y . Yang, M. Huang, N. Duan, W. Chen, et al. Tora: A tool-integratedreasoning agent for mathematical problem solving. arXiv preprint arXiv:2309.17452 , 2023.[11] C. He, R. Luo, Y . Bai, S. Hu, Z. L. Thai, J. Shen, J. Hu, X. Han, Y . Huang, Y . Zhang, J. Liu,L. Qi, Z. Liu, and M. Sun. Olympiadbench: A challenging benchmark for promoting agi witholympiad-level bilingual multimodal scientific problems, 2024.[12] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Stein-hardt. Measuring mathematical problem solving with the math dataset. arXiv preprintarXiv:2103.03874 , 2021.[13] Y . Huang, X. Liu, Y . Gong, Z. Gou, Y . Shen, N. Duan, and W. Chen. Key-point-driven datasynthesis with its enhancement on mathematical reasoning. arXiv preprint arXiv:2403.02333 ,2024.[14] A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. d. l. Casas, F. Bressand,G. Lengyel, G. Lample, L. Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825 , 2023.[15] A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bamford, D. S. Chaplot, D. d. l.Casas, E. B. Hanna, F. Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088 ,2024.[16] C. Li, W. Wang, J. Hu, Y . Wei, N. Zheng, H. Hu, Z. Zhang, and H. Peng. Common 7b languagemodels already possess strong math capabilities. arXiv preprint arXiv:2403.04706 , 2024.[17] H. Liu and A. C.-C. Yao. Augmenting math word problems via iterative question composing.arXiv preprint arXiv:2401.09003 , 2024.[18] R. Liu, J. Wei, F. Liu, C. Si, Y . Zhang, J. Rao, S. Zheng, D. Peng, D. Yang, D. Zhou, et al.Best practices and lessons learned on synthetic data for language models. arXiv preprintarXiv:2404.07503 , 2024.6[19] H. Luo, Q. Sun, C. Xu, P. Zhao, J. Lou, C. Tao, X. Geng, Q. Lin, S. Chen, and D. Zhang.Wizardmath: Empowering mathematical reasoning for large language models via reinforcedevol-instruct. arXiv preprint arXiv:2308.09583 , 2023.[20] A. Mitra, H. Khanpour, C. Rosset, and A. Awadallah. Orca-math: Unlocking the potential ofslms in grade school math. arXiv preprint arXiv:2402.14830 , 2024.[21] Z. Tang, X. Zhang, B. Wan, and F. Wei. Mathscale: Scaling instruction tuning for mathematicalreasoning. arXiv preprint arXiv:2403.02884 , 2024.[22] Y . Tong, X. Zhang, R. Wang, R. Wu, and J. He. Dart-math: Difficulty-aware rejection tuningfor mathematical problem-solving. arXiv preprint arXiv:2407.13690 , 2024.[23] S. Toshniwal, I. Moshkov, S. Narenthiran, D. Gitman, F. Jia, and I. Gitman. Openmathinstruct-1:A 1.8 million math instruction tuning dataset. arXiv preprint arXiv:2402.10176 , 2024.[24] B. Van Breugel, Z. Qian, and M. Van Der Schaar. Synthetic data, real errors: how (not) topublish and use synthetic data. In International Conference on Machine Learning , pages34793–34808. PMLR, 2023.[25] P. Villalobos, J. Sevilla, L. Heim, T. Besiroglu, M. Hobbhahn, and A. Ho. Will we run outof data? an analysis of the limits of scaling datasets in machine learning. arXiv preprintarXiv:2211.04325 , 2022.[26] K. Wang, H. Ren, A. Zhou, Z. Lu, S. Luo, W. Shi, R. Zhang, L. Song, M. Zhan, and H. Li.Mathcoder: Seamless code integration in llms for enhanced mathematical reasoning. arXivpreprint arXiv:2310.03731 , 2023.[27] Z. Xu, F. Jiang, L. Niu, Y . Deng, R. Poovendran, Y . Choi, and B. Y . Lin. Magpie: Align-ment data synthesis from scratch by prompting aligned llms with nothing. arXiv preprintarXiv:2406.08464 , 2024.[28] L. Yu, W. Jiang, H. Shi, J. Yu, Z. Liu, Y . Zhang, J. T. Kwok, Z. Li, A. Weller, and W. Liu.Metamath: Bootstrap your own mathematical questions for large language models. arXivpreprint arXiv:2309.12284 , 2023.[29] A. Yuan, A. Coenen, E. Reif, and D. Ippolito. Wordcraft: story writing with large languagemodels. In Proceedings of the 27th International Conference on Intelligent User Interfaces ,pages 841–852, 2022.[30] Z. Yuan, H. Yuan, C. Li, G. Dong, K. Lu, C. Tan, C. Zhou, and J. Zhou. Scaling relationship onlearning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825 ,2023.[31] X. Yue, T. Zheng, G. Zhang, and W. Chen. Mammoth2: Scaling instructions from the web.arXiv preprint arXiv:2405.03548 , 2024.[32] X. Zhang, C. Li, Y . Zong, Z. Ying, L. He, and X. Qiu. Evaluating the performance of largelanguage models on gaokao benchmark. arXiv preprint arXiv:2305.12474 , 2023.7AppendixA DataA.1 Seed DataWe use the MATH training set and a small subset of MAmmoTH2 as the seed data for data synthesis.The MATH dataset consists of competition-level math problems, covering a wide range of topicssuch as algebra, geometry, probability, number theory, and more. MAmmoTH2, on the other hand, isan instruction-tuned dataset created by retrieving, cleaning, and rewriting mathematical content fromthe internet, containing a large number of math problems.The MATH training set contains 7,500 problems, and we selected all of them as seed data. MAm-moTH2 consists of 10 million entries, of which 2 million have been open-sourced. From this, weselected 30,000 high-quality examples to be used as seed data. We applied LLMs to filter the data,prioritizing high quality and the presence of correct answers, and included these filtered examples inthe seed data.A.2 Synthesized DataWe used two models, Llama3-70B-Instruct and Mixtral-8 ×7B-Instruct, to synthesize the data. Thetotal amount of data, as well as the data from each type of seed data, is shown in Table 4.Table 4: The statistics of synthesized data.Synthesis Model Total MATH MAmmoTH2-SubsetLlama3-70B-Instruct 1670k 970k 700kMixtral-8 ×7B-Instruct 1350k 700k 650kB Experiments SettingB.1 TrainingWhen performing standard supervised fine-tuning, we used the template shown in Figure 4, whetherfine-tuning the Base model or the Instruct model. This is because we found that using the nativeInstruct template or the new template made almost no difference in model performance after training.Figure 4: Template for supervised fine-tuning.Training TemplateQuestion:{Question}Answer: Let’s think Step by Step.{Rationale}#### {Final Answer}For all models, we applied full-parameter fine-tuning using the Adam optimizer, with a warmup ratioset to 0.1 and the learning rate scheduler set to a cosine scheduler. The values for the number ofepochs, learning rate, and batch size vary depending on the model and are shown in Table 5.B.2 BenchmarksWe selected five high-difficulty benchmarks, which include the MATH test set, GaoKao-MATH,OlympiadBench-MATH, MathOdyssey, and TheoremQA.8Table 5: The statistics of synthesized data.Model Batch size Epoch Learning RateMistral-7B-Base 128 3 2e-6Llama3-8B-Base 128 1 1e-5Mixtral-8 ×7B-Instruct 128 3 1e-5Llama3-70B-Instruct 128 1 1e-5•The MATH test set is distributed similarly to the MATH training set, featuring high difficultyand wide coverage.•GaoKao-MATH contains 5000 pieces of math problems from China’s Gaokao (collegeentrance examination).•MathOdyssey consists of 387 pieces of professional math problems from both university andhigh school levels, serving as the problem set for the 2024 Global AI Competition (GAIC)math contest.•OlympiadBench-MATH consists of 675 pieces of Olympiad-level math competition prob-lems. We selected only the pure text-based math problems from OlympiadBench.•TheoremQA includes 800 problems from various fields such as mathematics, physics, andeconomics, which require domain-specific theorems to solve.B.3 BaselinesOur comparisons focus on various methods for synthesizing mathematical instruction-tuning dataand the corresponding models, including WizardMATH, MetaMATH, MMIQC, MAmmoTH2,MATHScale, and Xwin-MATH.•WizardMATH enhances existing data using the Eval-Instruct method, which includes in-creasing and decreasing the difficulty. Additionally, WizardMATH employs reinforcementlearning to further improve model performance.•MetaMATH introduces methods for rephrasing questions and backward reasoning to expandthe existing data.•MMIQC iterates on existing questions to increase their complexity, generating new andmore challenging questions.•MAmmoTH2 retrieves, cleans, and rewrites mathematical content from the internet to createinstructional question data.•MATHScale extracts knowledge points from existing data and uses these points as seedinformation for large-scale data synthesis.•Xwin-MATH directly requires the LLM to generate a completely new math problem basedon an existing one.Since MetaMATH and MMIQC have open-sourced their data, the experimental results for MetaMATHand MMIQC were obtained from our own training and testing. For MMIQC, we used only thesynthetic data for training and did not use StackExchange data. For WizardMATH, MAmmoTH2,MATHScale, and Xwin-MATH, since the data was not open-sourced, we compared our resultsdirectly with those reported in the corresponding papers.B.4 EvaluationIn evaluation, we use greedy decoding and apply the same template as used in training. We evaluateusing exact match criteria, where only results that are either exactly the same as the correct answer ormeet certain rules for matching the correct answer are considered correct.C Synthesized ExamplesWe provide an example of synthesized new problem, along with its original mathmatical problem,natural language algorithm and code function.9Original Mathematical ProblemQuestion:A printer prints 17 pages per minute. How many minutes will it take to print 200 pages?Express your answer to the nearest whole number.Solution:200 pages17pages per minute≈12minutes.Final Answer:12Natural Language AlgorithmAlgorithm ObjectiveGiven:- A printer prints ppages per minute.- The desired number of pages to be printed is n.Find:- Approximate number of minutes needed to print npages.Algorithm Process1. **Calculate the Number of Minutes Needed:**Number of minutes =Total number of pagesPages per minuteNumber of minutes =npPlaceholder Variables:{"p": 17, "n": 200}Constraints:None 12Code Functiondef print_time(p, n):minutes = round(n / p)return minutesSynthesized New ProblemQuestion:A printer produces 24 pages per minute. How many minutes will it take to print 240 pages?Round your answer to the nearest whole number.Solution:240/24=10. So it will take 10 minute to print 240 pages.Final Answer:10D Compute ResourcesFor algorithm abstraction and implementation, the required GPU time depends on the maximumnumber of iterations. When the iteration count is set to 50, it requires about 20 hours of runtimeon 8*A100 machines. For contextualization, it depends on the number of generated problems. Togenerate 100,000 problems, it takes approximately 20 hours on 8*A100 machines.10E Limitations and Future WorkOur method is limited to mathematical data synthesis and needs to be further extended to other typesof inference data. Additionally, our method lacks diversity in problem generation, which requiresdefining more meta-level algorithms and proposing corresponding algorithm abstraction methods andvalidation mechanisms to improve the diversity of both algorithms and generated problems.11 |
KqALqWJSbF | VinePPO: Accurate Credit Assignment in RL for LLMMathematical ReasoningAmirhossein Kazemnejad∗1, Milad Aghajohari∗1, Eva Portelance1,6,Alessandro Sordoni1,2, Siva Reddy1,3,4, Aaron Courville†1,4,5, Nicolas Le Roux†1,41Mila2Microsoft Research3McGill University4Canada CIFAR AI Chair5Université de Montréal6HEC Montréal{amirhossein.kazemnejad,aghajohm}@mila.quebecAbstractLarge language models (LLMs) are increasingly required to solve complex rea-soning tasks, like mathematical problems, that involve multiple reasoning stepsbefore feedback is received. Effectively identifying and prioritizing key steps byaccurately assigning credit to these intermediate steps is essential for enhancingmodel performance. Proximal Policy Optimization (PPO), a state-of-the-art rein-forcement learning algorithm for finetuning LLMs, addresses the credit assignmentproblem by employing value networks to predict the expected cumulative rewardsof intermediate states. In this work, we identify significant limitations with thisvalue estimation method. To address this, we propose VinePPO that leveragesthe flexibility of language environments to compute unbiased Monte Carlo-basedestimates of the intermediate values. VinePPO consistently outperforms standardPPO, doing so more efficiently and with lower divergence from the reference model.Our findings underscore the critical importance of accurate credit assignment inLLM post-training and present a simple, yet effective solution.1 IntroductionLarge language models (LLMs) are increasingly employed in tasks requiring complex reasoning,such as solving mathematical problems (Trinh et al., 2024; OpenAI, 2024). In these settings, LLMsoften engage in extended reasoning chains and perform numerous actions. Prioritizing steps that leadto correct solutions while downplaying erroneous ones during finetuning is essential for improvingthe performance and reducing unnecessary updates. This is particularly important as most reasoningsteps generated by a model often do not impact its likelihood of solving the problem (Fig. 2).This issue is known as the credit assignment problem in reinforcement learning (RL, Sutton andBarto 1998). Proximal Policy Optimization (PPO) (Schulman et al., 2017; Ouyang et al., 2022),the state-of-the-art algorithm for RL tuning of LLMs (Xu et al., 2024; Ivison et al., 2024; Shaoet al., 2024), is a variant of actor-critic methods that utilizes a value network ( critic ) to handle creditassignment (Bai et al., 2022, 2023; Havrilla et al., 2024). The value network is a separate model(the same size as and initialized from a pretrained checkpoint of the LLM) that learns to estimateexpected cumulative future reward (value) of intermediate actions during training. PPO then usesthe predicted values to measure the advantage of each action and update the model accordingly. Forexample, in Fig. 2, an ideal value network would assign a low value to s0, where the model initiallystruggles, and a higher value to s2and beyond, where a critical action led to solving the problem.Accurately predicting rewards from a partial and incomplete response requires the value networkto grasp the space of correct solutions and predict the model’s future behavior – both of which are∗Equal contribution.†Equal advising.Initial SFT RestEM PPO VinePPO15.017.520.022.525.015.517.318.123.0Initial SFT RestEM PPO VinePPO35404532.834.942.946.0Accuracy ()RhoMath 1.1B DeepSeekMath 7BFigure 1: VinePPO outperforms standard PPO and other baselines on the MATH dataset, while alsoexhibiting scalability across different model sizes. The figure shows Pass@1 performance.Prompt ( s0) ˆp(correct|s:t)0.4Letaandbbe nonzero real numbers such that(2−7i)(a+bi)is pure imaginary. Findab.Responses1 0.4 We can expand the left-hand side to gets2 1.0 (2−7i)(a+bi) = (2 a+ 7b) + (−7a+ 2b)i.s3 1.0 This is pure imaginary if and only if the real part is 0, i.e.s4 1.0 2a+ 7b= 0.s5 1.0 Thena=−72b,soab=−72.x...yt−1yt...τ′1τ′2. . . τ′KˆVMC(st) =1/KPkR(τk)Figure 2: (Left) A response generated by the model. The notation ˆp(correct |s:t)represents theestimated probability of successfully solving the problem at step t, based on nine model rollouts. Inthis example, only step s2is critical; after this, the model completes the solution correctly. (Right)Illustration of estimating the value of a state within the trajectory.challenging. There are hints in the literature that standard PPO implementations for LLM finetuninghave inaccurate value estimations. Ahmadian et al. (2024) and Luong et al. (2024) demonstrate thatvalue networks often serve best as just a baseline in policy gradient2. Shao et al. (2024) shows thatthe value network can be replaced by averaging rewards of a group of responses to a given problem,without degradation in performance.As estimation errors can significantly hamper model convergence and performance (Sutton et al.,1999; Greensmith et al., 2001), it is crucial to ask: how accurately do value networks perform inpractice during LLM finetuning? While recent studies (Hwang et al., 2024; Setlur et al., 2024) havebegun to highlight the importance of identifying early reasoning errors and incorporating these astraining signals in “RL-free” approaches (Rafailov et al., 2023), to what extend the accuracy of creditassignment plays a role in RL tuning of LLMs remains an open question.In this work, we evaluate the standard PPO pipeline in mathematical reasoning tasks across variousmodel sizes and find that value networks consistently provide inaccurate estimates and a sub-optimaltraining signal for finetuning. To address this, we propose VinePPO. Instead of relying on valuenetworks, VinePPO computes unbiased estimates by resetting the environment to intermediate statesand performing independent Monte Carlo (MC) rollouts to calculate the average return of individualsteps. This approach takes advantage of a special property of the language environment—the abilityto easily reset to any intermediate state of a trajectory (Schulman et al., 2015). Not only does itremoves the need for large, memory-intensive value networks, VinePPO also outperforms standardPPO and other baselines such as RestEM (Singh et al., 2023)(Fig. 1). VinePPO is also able to matchPPO’s final accuracy in fewer iterations, requiring less wall-clock time (Fig. 4), and achieving alower KL divergence (Fig. G.3) from the base model. These findings highlight the importance ofaccurate credit assignment in RL post-training and position VinePPO as an effective alternative tovalue networks.2setting GAE, (Schulman et al., 2016) parameter λto1.22 Advantage Estimation with Monte CarloWe build on PPO (Schulman et al., 2017; Ouyang et al., 2022), for which we provide an extensivebackground in Appendices B and I. VinePPO only modifies the way advantages are estimated. Westart by estimating the true value function V(st). Instead of relying on a value network, for anyintermediate state st, we sample Kindependent trajectories starting from st. The average returnacross these trajectories serves as the value estimate:ˆVMC(st):=1KKXk=1R(τk),where τ1, . . . , τ K∼π(· |st). (1)where τkis an independent continuation sampled from the model, starting from standR(·)isthe return over the completed trajectory. This is an MC estimate of the value function V(st) =E[R(τ)|s0=st]. Once the values ˆVMC(st)are computed, we compute the advantages with:ˆAMC(st, at):=r(st, at) +γˆVMC(st+1)−ˆVMC(st), (2)where r(·)is the step-wise reward (in practice, equal to zero unless at final step). Note that for anyK≥1, the policy gradient computed using the advantage estimator ˆAMCis an unbiased estimate ofthe gradient of expected return.In essence, VinePPO only alters advantage computation in PPO pipeline, leaving the rest unchanged.With this simple modification, we eliminate the need for a value network, significantly reducingmemory footprint (up to 112GB for a 7B LLM) while providing unbiased estimates of advantages.The parameter Koffers a trade-off between computational cost (i.e. more MC samples per state)and the variance of the estimator. To enhance the efficiency of ˆAMC, we also group states withina reasoning step and compute a single advantage, which is then assigned to all tokens in that step.Since everything else in the PPO pipeline of VinePPO is unchanged, by comparing the two methods,we can systematically evaluate of the impact of accurate credit assignment in RL tuning of LLMs.3 ExperimentsWe use two strong base LLMs pretrained for mathematical reasoning: (1) DeepSeekMath 7B (Shaoet al., 2024) and (2) RhoMath 1.1B (Lin et al., 2024). Our focus is the MATH dataset (Hendrycks et al.,2021), which contains competition-level problems. We compare three LLM reasoning finetuningstrategies, PPO, VinePPO , and RestEM to the supervised finetuned model (SFT) baseline, from whichall methods are initialized. We tune PPO hyperparameters like KL penalty coefficient, batch size, andGAE λ, applying best practices in PPO optimization. VinePPO uses the same hyperparameters asPPO but modifies the advantage estimation A(st, at)to isolate the effect of accurate credit assignment.We sample K= 9 trajectories in ˆVMC. For RestEM, we closely follow the original setup whileensuring consistency in training conditions for a fair comparison. We choose the best checkpointbased on a held-out validation set for all experiments3.4 Results and AnalysisTask Performance As shown in Fig. 1, VinePPO outperforms standard PPO and RestEM. The gapbetween VinePPO and PPO is consistent throughout the training (Fig. E.1). RestEM lacks explicitcredit assignment and finetunes on full trajectories. Despite higher training accuracy, it underperformson test, likely due to overfitting caused by training on disadvantageous intermediate steps. In addition,fig. 4 represents our ablation on K, observing increasing Kconsistently improves accuracy.KL Divergence The RL objective4aims to balance maximizing task performance while limitingdeviations from the reference policy π0, or original SFT, as measured by KL divergence. We trackthe KL divergence KL[πθ∥π0]throughout training for both methods and plot task accuracy againstKL to assess this balance in Fig. G.3. The results show that VinePPO consistently achieves higheraccuracy for a given KL divergence.3Refer to Appendix D for full details.4The full definition is in Appendix B.30.000.250.500.751.000.000.250.500.751.000.000.250.500.751.000.000.250.500.751.000.000.250.500.751.000.000.250.500.751.000.000.250.500.751.000.000.250.500.751.00Ground Truth ValuePredicted ValuePPO @ Step: 360 PPO @ Step: 960VinePPO @ Step: 360VinePPO @ Step: 960Figure 3: Distribution of predicted values for each state vs. ground truth (computed using 256 MCsamples) for DeepSeekMath 7B on MATH, highlighting the nature of errors in PPO’s value estimates.PPO VinePPO(K=1)VinePPO(K=3)VinePPO(K=9)15%18%20%22%25%28%18.119.921.223.0Varying ComputeAccuracy ()MATH0 5 10 15 20 2516%20%24%3.7x FasterWall Clock (Hours)AccuracyRhoMath 1.1BMethodVinePPOPPOFigure 4: (Left) Impact of the number of sampled trajectories Kwhen estimating ˆVMC(st), evaluatedon RhoMath 1.1B models. We observe that increasing Kimproves task performance consistently.(Right) Accuracy per wall clock time for both methods. Although VinePPO spend more time in eachiteration, it achieves PPO’s peak performance in fewer iteration and wall clock time.Computational efficiency VinePPO and standard PPO need different kinds of resources. Thevalue network needs to be trained and alongisde its optimizer consuming more GPU memory. Incontrast, MC rollouts need fast inferences and as a result VinePPO is generally slower per iterationcompared to PPO. In our setup, RhoMath 1.1B and DeepSeekMath 7B are 5x and 2x slower periteration when using VinePPO . However, as shown in Fig. 4, the impact of accurate credit assignmentwith VinePPO is substantial. VinePPO reaches the final accuracy of PPO in fewer iterations and lesstime. Specifically, RhoMath 1.1B and DeepSeekMath 7B achieve PPO’s final test accuracy 3.7 ×and2.3× faster in wall-clock time, and in 20× and 5× fewer gradient steps, respectively.5Value Prediction Accuracy To analyze the accuracy of value prediction, we compute the groundtruth value of each state by taking 256MC samples. We compare value network (from PPO)predictions against VinePPO’s. As shown in Fig. 3, VinePPO and PPO produce errors of verydifferent types. VinePPO estimates are unbiased, with variance peaking at 0.5and dropping to zero at0and1. In contrast, the value network’s estimates exhibit high bias. See Appendix H for full details.5 Related WorkCredit Assignment in Post-Training of LLM PPO (Schulman et al., 2017), as applied in Rein-forcement Learning from Human Feedback (RLHF, Ouyang et al. 2022), was among the pioneeringapproaches for RL finetuning of LLMs. While effective, PPO is known for its computational overheadand sensitivity to hyperparameters. As a result, subsequent approaches have sought to simplify orbypass PPO without sacrificing performance. For example, RL-free methods such as DPO (Rafailovet al., 2023) and its newer variants (Azar et al., 2023; Ethayarajh et al., 2024) operate in a banditsetting, where the entire response is treated as a single action, without distinguishing intermediatestates. Similarly, methods based on rejection sampling, like RestEM (Singh et al., 2023), finetunethe model on full high-reward responses. In the realm of PPO simplifications, methods like RLOO5Note that this is despite the fact that all of hyperparameter searches were tuned for PPO.4(Ahmadian et al., 2024) and GRPO (Shao et al., 2024) abandon the value network of PPO. Theysample a group of Mresponses per each prompt and compute the average reward (of other M−1responses) as a policy gradient baseline for all tokens in the group, effectively treating the entireresponse as a single action. Recent works, however, have started to emphasize the importance offiner credit assignment. Work such as Hwang et al. (2024) and Setlur et al. (2024) introduce MonteCarlo-based mechanisms that detect key errors in reasoning chains and apply use them negativesample in DPO. Unlike these approaches, which rely on ad-hoc heuristics, our work fully embracesRL training pipeline and addresses the core issue of inaccurate value estimation in PPO to unlockits full potential. In parallel, there has been interest (Hosseini et al., 2024; Lightman et al., 2023a)in building better verifiers and reward models that can provide per-step feedback. Although thesemethods often require costly human annotation, recent efforts (Ma et al., 2023; Uesato et al., 2022;Luo et al., 2024; Wang et al., 2023) have automated data collection using MC rollouts. VinePPO isorthogonal to these approaches, as it operates within PPO-based training, optimizing a given task’sreward rather than designing new reward models. Our method can further benefit from improvementsin reward modeling as they emerge.Value Estimation in RL and Monte Carlo Tree Search Deep RL algorithms are categorized intovalue-based and policy-based methods. Value-based algorithms, such as DQN and its successors(Mnih et al., 2013; Wang et al., 2015), train a neural network to predict values and derive the policyfrom the learned value function. Policy-based methods, including A2C, A3C (Mnih et al., 2016),SAC (Haarnoja et al., 2018), and PPO (Schulman et al., 2017), train a policy directly and use valueestimates only to guide the policy updates. Typically, these methods rely on critic networks for valueprediction. An exception is a variant of TRPO (Schulman et al., 2015), known as the “Vine” variant,where state value estimation is performed using MC samples. However, the authors note that the Vinevariant is limited to environments that allow easy resets to any state, which is uncommon in mostRL settings as the focus is on black-box engines or real-world deployment. In contrast to commonRL environments, language generation, allows for easy resets to any intermediate state, presentingunique opportunities for RL tunning of LLM. In fact, when easy resets were available in RL (e.g., Go,Chess), strong MC-based methods like AlphaGo (Silver et al., 2016) and AlphaZero (Silver et al.,2017) have emerged. AlphaGo trains a policy using expert moves data and self-play, alongside avalue network to predict the win probability from a given state. Then during the inference, it applies atree search guided by MC rollouts and the value network to find the best possible moves. AlphaZeroadvances this approach by distilling MCTS outcomes into its policy, removing the need for expertdata. Recent works have adapted AlphaZero’s principles and lessons to LLM, using similar searchtechniques during inference to improve responses and during training to find better trajectories fordistillation (Xie et al., 2024; Chen et al., 2024; Feng et al., 2023; Zhang et al., 2024; Hao et al., 2023).While this is a promising direction, VinePPO is not an MCTS method; it rather utilizes MC samplessolely for value estimation and only during PPO training to improving credit assignment. In fact,inference-time search like MCTS can be layered on top of VinePPO to further enhance performance.6 ConclusionCredit assignment is a weak spot for current RL finetuning of LLMs. While value networks aretasked and trained to estimate these values, they perform poorly. VinePPO simply replaces the valuenetworks with MC samples. We found that it reaches higher accuracy faster supporting the significantimpact that accurate credit assignment has on RL finetuning of LLMs for reasoning. We hope ourwork encourages researchers to look into the details of RL finetuning pipelines of LLMs and toexplore more computationally practical methods for accurate credit assignment.ReferencesArash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin,Ahmet Üstün, and Sara Hooker. 2024. Back to Basics: Revisiting REINFORCE Style Optimizationfor Learning from Human Feedback in LLMs. CoRR , abs/2402.14740.Thomas Anthony, Zheng Tian, and David Barber. 2017. Thinking Fast and Slow with Deep Learningand Tree Search. CoRR , abs/1705.08439.5Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, MichalValko, and Rémi Munos. 2023. A General Theoretical Paradigm to Understand Learning fromHuman Preferences. CoRR , abs/2310.12036.Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge,Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu,Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan,Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, JinXu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, ZhengYuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, JingrenZhou, Xiaohuan Zhou, and Tianhang Zhu. 2023. Qwen Technical Report. CoRR , abs/2309.16609.Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, DawnDrain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, JacksonKernion, Tom Conerly, Sheer El Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez,Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, DarioAmodei, Tom B. Brown, Jack Clark, Sam McCandlish, Chris Olah, Benjamin Mann, and JaredKaplan. 2022. Training a Helpful and Harmless Assistant with Reinforcement Learning fromHuman Feedback. CoRR , abs/2204.05862.Dan Biderman, Jose Javier Gonzalez Ortiz, Jacob Portes, Mansheej Paul, Philip Greengard, ConnorJennings, Daniel King, Sam Havens, Vitaliy Chiley, Jonathan Frankle, Cody Blakeney, and John P.Cunningham. 2024. LoRA Learns Less and Forgets Less. CoRR , abs/2405.09673.Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. 2024. AlphaMath Almost Zero: processSupervision without process. CoRR , abs/2405.03553.Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. KTO:Model Alignment as Prospect Theoretic Optimization. CoRR , abs/2402.01306.Xidong Feng, Ziyu Wan, Muning Wen, Ying Wen, Weinan Zhang, and Jun Wang. 2023. Alphazero-like Tree-search can Guide Large Language Model Decoding and Training. CoRR , abs/2309.17179.Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. 2001. Variance Reduction Techniques forGradient Estimates in Reinforcement Learning. In Advances in Neural Information ProcessingSystems 14 [Neural Information Processing Systems: Natural and Synthetic, NIPS 2001, December3-8, 2001, Vancouver, British Columbia, Canada] , pages 1507–1514. MIT Press.Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. 2018. Soft Actor-Critic:Off-policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor. CoRR ,abs/1801.01290.Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu.2023. Reasoning with Language Model is Planning with World Model. CoRR , abs/2305.14992.Alex Havrilla, Yuqing Du, Sharath Chandra Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu,Maksym Zhuravinskyi, Eric Hambro, Sainbayar Sukhbaatar, and Roberta Raileanu. 2024. TeachingLarge Language Models to Reason with Reinforcement Learning. CoRR , abs/2403.04642.Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,and Jacob Steinhardt. 2021. Measuring Mathematical Problem Solving With the MATH Dataset.CoRR , abs/2103.03874.Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron C. Courville, Alessandro Sordoni, and RishabhAgarwal. 2024. V-STaR: Training Verifiers for Self-taught Reasoners. CoRR , abs/2402.06457.Shengyi Huang, Michael Noukhovitch, Arian Hosseini, Kashif Rasul, Weixun Wang, and LewisTunstall. 2024. The N+ Implementation Details of RLHF with PPO: A Case Study on TL;DRSummarization. CoRR , abs/2403.17031.Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, and Minjoon Seo. 2024. Self-explore to Avoid the Pit: Improving the Reasoning Capabilities of Language Models with Fine-grained Rewards. CoRR , abs/2404.10346.6Hamish Ivison, Yizhong Wang, Jiacheng Liu, Zeqiu Wu, Valentina Pyatkin, Nathan Lambert, Noah A.Smith, Yejin Choi, and Hannaneh Hajishirzi. 2024. Unpacking DPO and PPO: Disentangling BestPractices for Learning from Preference Feedback. CoRR , abs/2406.09279.Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E.Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient Memory Management for Large LanguageModel Serving with PagedAttention. CoRR , abs/2309.06180.Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay V .Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, BehnamNeyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving Quantitative Reasoning Problems withLanguage Models. CoRR , abs/2206.14858.Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, JanLeike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023a. Let’s Verify Step by Step. CoRR ,abs/2305.20050.Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, JanLeike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023b. Let’s Verify Step by Step. CoRR ,abs/2305.20050.Zhenghao Lin, Zhibin Gou, Yeyun Gong, Xiao Liu, Yelong Shen, Ruochen Xu, Chen Lin, YujiuYang, Jian Jiao, Nan Duan, and Weizhu Chen. 2024. Rho-1: Not All Tokens Are What You Need.CoRR , abs/2404.07965.Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, YunZhu, Lei Meng, Jiao Sun, and Abhinav Rastogi. 2024. Improve Mathematical Reasoning inLanguage Models by Automated Process Supervision. CoRR , abs/2406.06592.Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, and Hang Li. 2024. ReFT:Reasoning with Reinforced Fine-tuning. CoRR , abs/2401.08967.Qianli Ma, Haotian Zhou, Tingkai Liu, Jianbo Yuan, Pengfei Liu, Yang You, and Hongxia Yang.2023. Let’s reward step by step: Step-level reward model as the Navigators for Reasoning. CoRR ,abs/2310.10080.V olodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap,Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous Methods for DeepReinforcement Learning. CoRR , abs/1602.01783.V olodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, DaanWierstra, and Martin A. Riedmiller. 2013. Playing Atari with Deep Reinforcement Learning.CoRR , abs/1312.5602.OpenAI. 2024. OpenAI o1 System Card.Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, ChongZhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton,Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, andRyan Lowe. 2022. Training language models to follow instructions with human feedback. CoRR ,abs/2203.02155.Qwen. 2024. Qwen2.5-Math: The world’s leading open-sourced mathematical LLMs. https://qwenlm.github.io/blog/qwen2.5-math/ . Accessed: 2024-09-23.Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and ChelseaFinn. 2023. Direct Preference Optimization: Your Language Model is Secretly a Reward Model.CoRR , abs/2305.18290.John Schulman. 2020. Notes on the KL-divergence Approximation. http://joschu.net/blog/kl-approx.html . Accessed: 2024-09-23.John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. 2015. TrustRegion Policy Optimization. CoRR , abs/1502.05477.7John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, and Pieter Abbeel. 2016. High-dimensional Continuous Control Using Generalized Advantage Estimation. In 4th InternationalConference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016,Conference Track Proceedings .John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. ProximalPolicy Optimization Algorithms. CoRR , abs/1707.06347.Amrith Setlur, Saurabh Garg, Xinyang Geng, Naman Garg, Virginia Smith, and Aviral Kumar. 2024.RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-fold.CoRR , abs/2406.14532.Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y . K. Li,Y . Wu, and Daya Guo. 2024. DeepSeekMath: Pushing the Limits of Mathematical Reasoning inOpen Language Models. CoRR , abs/2402.03300.David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driess-che, Julian Schrittwieser, Ioannis Antonoglou, Vedavyas Panneershelvam, Marc Lanctot, SanderDieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap,Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. 2016. Mastering thegame of Go with deep neural networks and tree search. Nat., 529(7587):484–489.David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez,Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy P. Lillicrap, KarenSimonyan, and Demis Hassabis. 2017. Mastering Chess and Shogi by Self-play with a GeneralReinforcement Learning Algorithm. CoRR , abs/1712.01815.Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J.Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, Abhishek Kumar, Alex Alemi, AlexRizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin F. Elsayed, Hanie Sedghi, IgorMordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, KathleenKenealy, Kevin Swersky, Kshiteej Mahajan, Laura Culp, Lechao Xiao, Maxwell L. Bileschi, NoahConstant, Roman Novak, Rosanne Liu, Tris Warkentin, Yundi Qian, Yamini Bansal, Ethan Dyer,Behnam Neyshabur, Jascha Sohl-Dickstein, and Noah Fiedel. 2023. Beyond Human Data: ScalingSelf-training for Problem-solving with Language Models. CoRR , abs/2312.06585.Xianghui Sun, Yunjie Ji, Baochang Ma, and Xiangang Li. 2023. A Comparative Study betweenFull-parameter and LoRA-based Fine-tuning on Chinese Instruction Data for Instruction FollowingLarge Language Model. CoRR , abs/2304.08109.Richard S. Sutton and Andrew G. Barto. 1998. Introduction to Reinforcement Learning. In Introduc-tion to Reinforcement Learning .Richard S. Sutton, David A. McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy GradientMethods for Reinforcement Learning with Function Approximation. In Advances in NeuralInformation Processing Systems 12, [NIPS Conference, Denver, Colorado, USA, November 29 -December 4, 1999] , pages 1057–1063. The MIT Press.Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, NikolayBashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, CristianCanton-Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, WenyinFu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, SagharHosseini, Rui Hou, Hakan Inan, et al. 2023. Llama 2: Open Foundation and Fine-tuned ChatModels. CoRR , abs/2307.09288.Mark Towers, Ariel Kwiatkowski, Jordan Terry, John U Balis, Gianluca De Cola, Tristan Deleu,Manuel Goulão, Andreas Kallinteris, Markus Krimmel, Arjun KG, et al. 2024. Gymnasium: Astandard interface for reinforcement learning environments. arXiv preprint arXiv:2407.17032 .Trieu H. Trinh, Yuhuai Wu, Quoc V . Le, He He, and Thang Luong. 2024. Solving olympiad geometrywithout human demonstrations. Nat., 625(7995):476–482.8Jonathan Uesato, Nate Kushman, Ramana Kumar, H. Francis Song, Noah Y . Siegel, Lisa Wang,Antonia Creswell, Geoffrey Irving, and Irina Higgins. 2022. Solving math word problems withprocess- and outcome-based feedback. CoRR , abs/2211.14275.Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Y Wu, and Zhifang Sui.2023. Math-shepherd: A label-free step-by-step verifier for llms in mathematical reasoning. arXivpreprint arXiv:2312.08935 .Ziyu Wang, Nando de Freitas, and Marc Lanctot. 2015. Dueling network architectures for deepreinforcement learning. CoRR abs/1511.06581 (2015). arXiv preprint arXiv:1511.06581 .Yuxi Xie, Anirudh Goyal, Wenyue Zheng, Min-Yen Kan, Timothy P. Lillicrap, Kenji Kawaguchi,and Michael Shieh. 2024. Monte Carlo Tree Search Boosts Reasoning via Iterative PreferenceLearning. CoRR , abs/2405.00451.Shusheng Xu, Wei Fu, Jiaxuan Gao, Wenjie Ye, Weilin Liu, Zhiyu Mei, Guangju Wang, Chao Yu,and Yi Wu. 2024. Is DPO Superior to PPO for LLM Alignment? A Comprehensive Study. CoRR ,abs/2404.10719.Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. 2023.Scaling Relationship on Learning Mathematical Reasoning with Large Language Models. CoRR ,abs/2308.01825.Dan Zhang, Sining Zhoubian, Yisong Yue, Yuxiao Dong, and Jie Tang. 2024. ReST-MCTS*: LLMSelf-training via Process Reward Guided Tree Search. CoRR , abs/2406.03816.Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Sun, Jeff Huang, Cody Hao Yu, Shiyi Cao,Christos Kozyrakis, Ion Stoica, Joseph E. Gonzalez, Clark Barrett, and Ying Sheng. 2024. SGLang:Efficient Execution of Structured Language Model Programs.Rui Zheng, Shihan Dou, Songyang Gao, Yuan Hua, Wei Shen, Binghai Wang, Yan Liu, Senjie Jin,Qin Liu, Yuhao Zhou, Limao Xiong, Lu Chen, Zhiheng Xi, Nuo Xu, Wenbin Lai, Minghao Zhu,Cheng Chang, Zhangyue Yin, Rongxiang Weng, Wensen Cheng, Haoran Huang, Tianxiang Sun,Hang Yan, Tao Gui, Qi Zhang, Xipeng Qiu, and Xuanjing Huang. 2023. Secrets of RLHF in LargeLanguage Models Part I: PPO. CoRR , abs/2307.04964.Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul F.Christiano, and Geoffrey Irving. 2019a. Fine-tuning Language Models from Human Preferences.CoRR , abs/1909.08593.Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul F.Christiano, and Geoffrey Irving. 2019b. Fine-tuning Language Models from Human Preferences.CoRR , abs/1909.08593.9A LimitationsIn this work, we focused on complex mathematical reasoning tasks, which provide a clear testbed forevaluating the impact of accurate credit assignment. While VinePPO is a general-purpose modificationto PPO for LLM finetuning, its performance on more general human alignment tasks remains unclear.It is plausible that the performance gap between VinePPO and PPO would be less pronounced ontasks where the value network can generalize more easily. For example, in tasks like detecting toxicityin partial responses, the value network may perform well, reducing the advantage VinePPO offers.B BackgroundWe focus on the RL tuning phase in the RLHF pipeline, following Ziegler et al. (2019a); Ouyanget al. (2022); Shao et al. (2024). In this section, we provide an overview of actor-critic finetuning asimplemented in PPO.RL Finetuning In this setup, the policy πθrepresents a language model that generates a responsey= [y0, . . . , y T−1]autoregressively given an input x= [x0, . . . , x M−1], such that πθ(y|x) =QT−1t=0πθ(yt|x;y<t).The goal of RL finetuning is to maximize the expected undiscounted ( γ= 1)finite-horizon return, while incorporating a KL-divergence constraint to regularize the policy andprevent it from deviating too far from a reference policy π0(typically the initial supervised finetuned(SFT) model). The objective can be written as:J(θ) =Ex∼D,y∼π(·|x)[R(x;y)]−βKL[πθ∥π0], (3)where Dis the dataset of prompts, R(x;y)is the complete sequence-level reward function, and βcontrols the strength of the KL penalty. Note that the policy πθis initialized from π0.Language Environment as an MDP The language generation is typically modeled as a token-levelMarkov Decision Process (MDP) in an actor-critic setting, where each response yis an episode.Specifically, the state at time step t,st∈ S, is the concatenation of the input prompt and the tokensgenerated up to that point: st=x;y<t= [x0, . . . , x M−1, y0, . . . , y t−1].At each time step, theaction atcorresponds to generating the next token ytfrom fixed vocabulary. The process beginswith the initial state s0=x, and after each action, the environment transitions to the next state,st+1=st; [at], by appending the action atto the current state st. In this case, since states are alwaysconstructed by concatenating tokens, the environment dynamics are known and the transition functionisdeterministic , i.e., P(st+1|st, at) = 1 . During the generation process, the reward rtis set to zerofor all intermediate actions at’s, with the sequence-level reward R(x;y)only applied at the finalstep when the model stops generating. A trajectory τ= (s0, a0, s1, a1, . . .)is therefore a sequenceof state-action pairs, starting from the input prompt until the terminal state. Finally, we define thecumulative return of a trajectory τasR(τ) =PT−1t=0rt=R(sT) =R(x;y).Policy Gradient Given this MDP formulation, policy gradient methods like PPO maximize Eq. 3by repeatedly sampling trajectories and taking a step in the direction of the gradient gpg:=∇θJ(θ)to update the parameters. Policy gradient gpgtakes the following form:gpg=Eτ∼πθ"T−1Xt=0∇θlogπθ(at|st)A(st, at)#,where st=x;y<t, at=yt, (4)τ= (s0, a0, . . .), andA(st, at)is the advantage function. The gradient gpgpoints towards increasingthe probability πθ(at|st)when A(st, at)>0and the opposite when A(st, at)<0. Intuitively, theadvantage function A(st, at)quantifies how much better taking action atat state stis compared tothe average action taken in that state under the policy. Formally, it is defined as:A(st, at) =Q(st, at)−V(st) =rt+γV(st+1)−V(st), (5)where Q(st, at)is the state-action value and V(st)is the per-state value function6. The valuefunction, V(st) :S →R, offers a long-term assessment of how desirable a particular state is under6Such derivation is possible as the language environment is deterministic.10the current policy. Formally, it represents the expected cumulative reward obtained from starting instatestand following the policy thereafter7:V(st) =Eτ∼πθ[R(τ)|s0=st].PPO uses the sameadvantage-weighted policy gradient as in Eq. 4, but constrains policy updates through clipping toensure stable training. For full details, see Appendix I.Estimating the Advantage via Value Networks In practice, the advantage function A(st, at)isnot known a priori and commonly estimated by first using a value network ˆVφto approximate thetrue value function V(st)and then plugging the learned values into Eq. 5 or other variants such asGAE (Schulman et al., 2016). The value network is parameterized by φand trained alongside thepolicy network πθ. The training objective for the value network minimizes the mean squared errorbetween the predicted value and the empirical return:LV(φ) =Eτ∼πθ"1TXt12(ˆVφ(st)−Gt)2#, (6)where Gt=PT−1t′=trt′is the empirical return from state st. PPO uses the same objective for ˆVφbutenhances stability by applying clipping during training (see Appendix I.1 for details). In RL-tuningof LLMs, the value network is often initialized using the initial policy π0(or the reward model whenavailable), with the language modeling head swapped out for a scalar output head to predict values(Zheng et al., 2023). This setup leverages the prior knowledge of the pretrained model for valueestimation.C Accurate Credit Assignment with VinePPOAs outlined in Appendix B, a step in the PPO gradient update (Eq. 4) aims to increase the probabilityof better-than-average actions while decreasing the probability of those that perform worse—aprocess quantified by advantage function A(st, at). However, the true advantage function is generallyunknown and must be estimated, typically by substituting estimates from a value network into Eq. 5.As we will elaborate in Appendix H, neural networks are imperfect function approximators and canresult in biased value estimates. Fortunately, the language environment offers a useful property thatallows for deriving an unbiased estimator of value function V(st). In this section, we first describethis property and then explain how VinePPO leverages it to enhance credit assignment.C.1 Language EnvironmentThe language environment, as defined in Appendix B, possesses a unique property not commonlyfound in traditional RL settings: the ability to reset to any point within a trajectory. Since states aresimply concatenated tokens, we can prompt the language model πθto generate continuations from anyintermediate state. This flexibility allows us to explore alternative future paths from arbitrary pointsin a generation. In contrast, standard RL typically collect training data through sequential rollouts, aprocess reflected in the design of the Gym (Towers et al., 2024), the de facto RL environment API.Gym environments provide two primary functions: (1) env.reset() , which resets the environmentto its initial state, and (2) env.step(action) , which advances the environment based on the agent’saction. There is no mechanism for resetting to an arbitrary intermediate state within a trajectory. Thisdesign suits classic RL, where the focus is on black-box game engines or real-world deployment.Moreover, recent advancements in LLM inference engines (Kwon et al., 2023; Zheng et al., 2024)have dramatically increased the speed of on-the-fly response generation—for example, an LLM with7B parameters can generate up to 5,000 tokens per second on a single GPU8. This computationalefficiency makes it feasible to conduct fast environment simulation, opening up unique opportunitiesfor RL training of LLMs.D Experimental SetupDatasets and Pretrained LLMs We conduct our experiments using strong LLMs specificallypretrained for mathematical reasoning: (1) DeepSeekMath 7B (Shao et al., 2024) and (2) RhoMath7We drop the dependency on πθfor brevity.8Nvidia A100 GPU with model loaded in 16bit precision.110 200 400 600 800 100012%16%20%24%0 200 400 600 800 100032%36%40%44%Training StepAccuracy ()RhoMath 1.1B DeepSeekMath 7BMethod VinePPO PPOFigure E.1: Comparison of the training behavior between VinePPO and PPO. VinePPO demonstratesconsistently higher accuracy (as measured on the test set of MATH dataset) throughout the training.1.1B (Lin et al., 2024), both of which have been trained on diverse mathematical and natural languagecorpora. Having models from different sizes allows for evaluating the effect of scaling. We focuson mathematical reasoning datasets MATH (Hendrycks et al., 2021), which consists of competition-level mathematical problems and present a range of difficulty levels that allow for comprehensiveevaluation of reasoning abilities. To ensure our setup is reproducible, we only make use of publiclyavailable data and checkpoints on Huggingface. For each dataset, we finetune the base LLMs ontheir respective training sets to obtain the initial SFT models ( π0). In all experiments, we employfull-parameter finetuning to allow utilization of models’ full capacity (Biderman et al., 2024; Sunet al., 2023).Evaluation We evaluate model performance on the test sets of each dataset, using accuracy(Pass@1) as our primary metric, which measures the correctness of the final answers produced by themodels. As our baseline, we adopt the standard PPO framework, as commonly implemented for LLMfinetuning (Ouyang et al., 2022; Touvron et al., 2023; Huang et al., 2024). Additionally, we compareour proposed method against RestEM (Singh et al., 2023), which applies Expert Iteration, a formof Iterative Rejection Finetuning (Yuan et al., 2023; Anthony et al., 2017) with measures to preventoverfitting. All methods are initialized from the same SFT checkpoint π0to ensure a fair comparison.Hyperparameters and Training Details To ensure standard PPO (and its value network) has ahealthy training and our evaluation reflects its full potential, we first focus our hyperparameter searchon PPO parameters (such as KL penalty coefficient, batch size, minibatch size, GAE λ, number ofepochs per iteration) and apply all well-known techniques and best practices (Huang et al., 2024;Ivison et al., 2024) in PPO tuning (Refer to Appendix J.2 for the full list). VinePPO borrows the exactsame hyperparameters from PPO and only modifies the advantage A(st, at)estimation, keeping therest of the pipeline unchanged. This allows us to isolate the effect of accurate credit assignment.We found that sampling K= 9trajectories in ˆVMCperforms well; the effect of varying Kis fullyanalyzed in Fig. 4. For the other baseline, we closely follow the original setup while ensuringconsistency in training conditions for a fair comparison. We choose the best checkpoint based on aheld-out validation set for all experiments. Full implementation details, including all hyperparametersand training procedures, are provided in Appendix J.5.E Training Plots1210 20 3015.0%17.5%20.0%22.5%4 8 12 1636%40%44%KL(0)Accuracy ()RhoMath 1.1B DeepSeekMath 7BMethodVinePPO PPOFigure G.3: Comparing task accuracy and KL divergence during training on the MATH dataset.VinePPO consistently achieves higher accuracy at similar KL levels, reflecting its more efficientcredit assignment and focused updates.F Temperature Tolerance0 100 200 30030%35%40%45%Initial SFTTraining StepAccuracy ()Method VinePPO PPOTemparature 0.6 0.8 1.0Figure F.2: Test set accuracy dur-ing training with higher tempera-ture presented for DeepSeekMath7B and MATH dataset. VinePPOcan tolerate higher temperatures.Sampling temperature is a critical hyperparameter that con-trols the randomness of trajectories generated by the model.At higher temperatures, the model generates more diverse tra-jectories, encouraging exploration that can accelerate training,especially during the early stages. However, increased diversityin the trajectories also presents a challenge: the value network inPPO must generalize over a wider range of states, complicatingvalue estimation. To evaluate the effect of temperature on per-formance, we compared VinePPO and PPO runs using differenttemperatures T∈ {0.6,0.8,1.0}over 360 training steps, anal-ysed their training dynamics. As shown in Fig. F.2, VinePPOconsistently benefits from higher temperatures, achieving fasterconvergence and higher accuracy. In contrast, PPO not onlyfails to benefit from increased temperature, but also divergeswhen the temperature is set to its highest value, T= 1.0, wherethe trajectories are most diverse.These findings raise concerns about the scalability of PPO,particularly in real-world scenarios involving large and diversedatasets, in contrast to VinePPO which maintains robust valueestimation regardless of the diversity in the trajectories.G KL DivergenceH Value Prediction AnalysisBoth PPO and VinePPO estimate values as means to credit assignment, one employing a valuenetwork and the other using MC samples. More accurate value estimates lead to more preciseadvantage computations, resulting in more effective policy updates. As shown in Section 4, VinePPOconsistently outperforms PPO. In this section, we explore the underlying reasons for this performancegap by closely analyzing the value prediction of both methods. To assess the accuracy of valuepredictions, we first establish a “ground truth” value for each state within trajectories, denoted asˆV∗(st), by running multiple MC rollouts ( 256in our case) and averaging the returns. This provides alow-variance reference value. We then compare the value predictions in both methods against thisground truth on the DeepSeekMath 7B and MATH datasets.130.000.250.500.751.000.00 0.25 0.50 0.75 1.000.000.250.500.751.000.00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00Ground Truth ValuePredicted ValueStep: 60 Step: 360 Step: 540 Step: 960PPO VinePPOFigure H.4: Distribution of predicted values for each state vs. ground truth (computed using 256 MCsamples) during training for DeepSeekMath 7B on MATH dataset, highlighting the nature of errors.VinePPO achieves much lower Mean Absolute Error (MAE).0.00 0.25 0.50 0.75 1.000.0000.0250.0500.0750.1000.00 0.25 0.50 0.75 1.000.000.050.100.150.00 0.25 0.50 0.75 1.000.00000.00250.00500.00750.00 0.25 0.50 0.75 1.000.0000.0010.0020.0030.004Normalized Reasoning StepMSEPPO @ Step 420 PPO @ Step 960 VinePPO @ Step 420 VinePPO @ Step 960Figure H.5: Visualizing the Mean Absolute Error (MAE) of the value predictions in different point ofreasoning chain. Value Network in PPO fails to generalize as the reasoning chain progresses, whileVinePPO’s value estimates become more accurate as the model become more deterministic.Accuracy Fig. H.4 presents the distribution of value predictions during training. The errorsproduced by VinePPO and PPO differ significantly in their nature. VinePPO’s estimates are unbiased,with variance peaking at 0.5and dropping to zero at 0and1. In contrast, the value network usedin PPO exhibits high bias, often misclassifying bad states ( ˆV∗(st) = 0 ) as good and vice versa.To further visualize accuracy, we consider a value prediction as “correct” if it falls within 0.05 ofthe ground truth. The accuracy of this classification formulation is shown in Figure Fig. H.6. Thevalue network starts with low accuracy, improving gradually to a peak of 65%. In contrast, VinePPOconsistently achieves an accuracy of 70-90% throughout the training process, pointing to its morereliable approach.Error Per Reasoning Step To gain insights into the mechanisms behind value prediction, weanalyze the prediction error at each reasoning step within a trajectory. As illustrated in Fig. H.5,PPO’s value estimation error tends to increase as the reasoning chain progresses. We hypothesizethis is because, at earlier steps, partial trajectories more closely resemble the training data, allowingthe value network to rely on memorization. However, as reasoning progresses and the states becomeunfamiliar, the value network needs to generalize, where it tends to fail. In contrast, VinePPO exhibitsthe opposite trend: its value prediction error decreases as reasoning advances. We attribute this to theincreasing determinism of later reasoning steps, which conditions on prior actions. This determinismallows the same number MC sample to provide more accurate estimates.I Reviewing PPOPPO, as used in RL tuning of LLMs, formulates language generation as token-level MDP (Ap-pendix B), where each response yis an episode. The state at time step t,st∈ S, is the concatenationof the prompt and the tokens generated so far: st=x;y<t= [x0, . . . , x M−1, y0, . . . , y t−1].The14250 500 750 10000%20%40%60%80%100%Binning AccuracyStepAccuracy250 500 750 100020%40%60%80%100%Binning AccuracyStepAccuracyVinePPO PPOFigure H.6: Value prediction accuracy formulated as a classification problem, where a prediction isconsidered correct if it falls within 0.05 of the ground truth.action atcorresponds to generating the next token ytfrom the model’s vocabulary. Given a promptx, an episode of this MDP starts from the initial state s0=x, and with each action taken, theenvironment moves to a subsequent state, st+1=st; [at], by adding the action atto the existingstatest. In the language environment, because states are always formed by concatenating tokens,the environment dynamics are fully known, and the transition function is deterministic , meaningP(st+1|st, at) = 1 . During the generation process, the reward rtis set to zero for all intermediateactions at’s, with the sequence-level reward R(x;y)only applied at the final step when the modelstops generating. Throughout the generation process, the reward rtis set to zero for all intermediateactions at, with the sequence-level reward R(x;y)applied only at the final step when the modelstops the generation. That is:rt=r(st, at) =R(x;y)ift=T−1, where st+1=yis terminal ,0 otherwise .(7)A trajectory τ= (s0, a0, s1, a1, . . .)thus represents a sequence of state-action pairs that begins atthe input prompt and continues until reaching the terminal state. Finally, the cumulative return of atrajectory τis defined as R(τ) =PT−1t=0rt=rT−1=R(x;y).The goal of RL tuning is to maximize the expected return of the model’s responses to prompts in thedataset, as defined by the reward function R(Eq. 3). PPO, similar to other policy gradient methods,achieves this goal by repeatedly sampling trajectories for a batch of prompt sampled from Dandtaking multiple optimization steps in the direction of the gradient gppoto update the parameters. PPOgradient gppois defined as the gradient of the following loss:Lppo(θ) =Eτ∼πθk"T−1Xt=0min πθ(at|st)πθk(at|st)Aθkt,clip(θ)Aθkt!−βKL[πθ∥π0]#(8)where πθkis the policy at the previous iteration, εis the clipping parameter, βis the KL penaltycoefficient, Aθkt=Aθk(st, at)is the advantage estimate for policy πθk, and the clip(θ)function is:clip(θ) =clipπθ(at|st)πθk(at|st),1−ε,1 +ε. (9)Note that the KL penalty could be also added to the reward function R. We follow the more recentimplementations (Shao et al., 2024; Qwen, 2024), where it is added to the loss function. The KL termcan be computed using the following unbiased estimator (Schulman, 2020):ˆKL(θ) =π0(at|st)πθ(at|st)−logπ0(at|st)πθ(at|st)−1. (10)where π0denotes the reference model (initial SFT).15I.1 Value NetworkIn addition to the policy πθ, PPO also trains a separate value network ˆVφto obtain an estimate thetrue values V(st)of states st. Parameterized by φ, the value network is trained alongside the policynetwork πθusing the following loss:LValNet (φ) =12Eτ∼πθ"1TT−1Xt=0maxˆVφ(st)−Gt2,clip(φ)−Gt2#(11)where ˆVφkis the value network at the previous iteration, Gt=PT−1t′=tγt′−trt′is the empirical returnfrom state st,ε′is a value clipping parameter, and the clip(θ)is defined as:clip(φ) =clipˆVφ(st),ˆVφk(st)−ε′,ˆVφk(st) +ε′. (12)In RL-tuning of LLMs, the value network is typically initialized from the initial policy π0(or thereward model, if available), replacing the language modeling head with a scalar output head to predictvalues (Zheng et al., 2023) This approach takes advantage of the base model’s prior knowledge forvalue estimation.Advantage Estimation Once the estimated values ˆVφ(st)are obtained, the advantages A(st, at)are computed using the GAE (Schulman et al., 2016):A(st, at)≈ˆAGAE(st, at) (13)= (1−λ)ˆA(1)t+λˆA(2)t+λ2ˆA(3)t+. . .(14)=∞Xl=0(γλ)lδt+l (15)=∞Xl=0(γλ)lrt+l+γˆVφ(st+l+1)−ˆVφ(st+l)(16)where δt=rt+γˆVφ(st+1)−ˆVφ(st)is the temporal difference error, λis the GAE parameter, and γis the discount factor. Also, we have:ˆA(k)t:=k−1Xl=0γlδt+l=rt+γrt+1+···+γk−1rt+k−1+γkˆVφ(st+k)−ˆVφ(st). (17)Adjusting the GAE parameter λallows for a trade-off between bias and variance in the advantageestimates. However, as we discuss in Appendix J.5, we found that λ= 1works best in our experiments(similar to the findings of Luong et al. (2024) and Ahmadian et al. (2024)). In this case, the GAEsimplifies to the following form (assuming γ= 1):ˆAGAE(st, at) =PT−1t′=trt′−ˆVφ(st).J Experimental DetailsJ.1 DatasetsWe focus on mathematical reasoning datasets that require step-by-step solutions and are widely usedto evaluate the reasoning capabilities of LLMs. Below is a brief overview of the datasets used in ourexperiments:MATH (Hendrycks et al., 2021) The MATH dataset contains problems from high school mathcompetitions, covering a wide range of topics such as algebra, geometry, and probability. For ourexperiments, we use the OpenAI split provided by Lightman et al. (2023b), which consists of 500problems for testing and 12,500 problems for training. We further divide the training set into 11,500problems for training and 500 problems for validation. Each problem includes a step-by-step solution,ending in a final answer marked by \boxed{} in the solution (e.g., “ ..so the smallest possible valueofcisπ”). This marking allows for verification of the correctness of model-generated responsesby comparing the final answer to the ground truth. We use the scripts provided by Lewkowycz et al.(2022), Lightman et al. (2023b), and Shao et al. (2024) to extract and compare the final answers tothe ground truth.16Table 1: Summary of PPO hyperparamters used in the experiments.Parameter ValueTRAININGOptimizer AdamWAdam Parameters ( β1, β2) ( 0.9,0.999)Learning rate 1×10−6Weight Decay 0.0Max Global Gradient Norm for Clipping 1.0Learning Rate Scheduler PolynomialWarm Up 3%of training steps# Train Steps For MATH dataset 1000 steps (around 8 dataset epochs)GENERALMaximum Response Length 1024 tokensMaximum Sequence Length for RhoMath 1.1B 2048 tokensMaximum Sequence Length for DeepSeekMath 7B 2500 tokensPPO# Responses per Prompt 8 Search Space: {8,16,32}# Episodes per PPO Step 512 Search Space: {256,512}# Prompts per PPO Step 512/8 = 64Mini-batch Size 64# Inner epochs per PPO Step 2 Search Space: {1,2}Sampling Temperature 0.6 Search Space: {0.6,0.8,1.0}Discount Factor γ 1.0GAE Parameter λ 1.0 Search Space: [0.95−1.0]KL Penalty Coefficient β 1e-4 Search Space: {1 e-1, 1e-2, 3e-3, 1e-4}Policy Clipping Parameter ε 0.2Value Clipping Parameter ε′0.2Table 2: Summary of RestEM hyperparamters used in the experiments.Parameter ValueTRAININGOptimizer AdamWAdam Parameters ( β1, β2) ( 0.9,0.999)Learning rate 1×10−6Weight Decay 0.0Max Global Gradient Norm for Clipping 1.0Learning Rate Scheduler PolynomialWarm Up 3%of training stepsRESTEM# iterations 10# Sampled Responses per Prompt 8 Search Space: {8,32}Sampling Temperature 0.6 Search Space: {0.6,0.8,1.0}Checkpoints every # iteration 500 stepCheckpoint Selection until validation improvesSearch Space: {until validation improves ,best validation }J.2 PPO ImplementationTo ensure our PPO implementation is robust, and our evaluation reflects its full potential, we haveapplied a set of well-established techniques and best practices from the literature (Huang et al., 2024;Ivison et al., 2024; Zheng et al., 2023). Below, we outline the key implementation details that weremost effective in our experiments:17•Advantage Normalization : After calculating the advantages, we normalize them to havezero mean and unit variance, not only across the batch but also across data parallel ranks. Thisnormalization step is applied consistently in both our PPO and VinePPO implementations.•Reward Normalization : We follow Ivison et al. (2024) and do not normalize the rewards, asthe reward structure in our task is already well-defined within the range of [0,1]. Specifically,correct responses are assigned a reward of 1, while incorrect responses receive 0.•End-of-Sequence (EOS) Trick : As detailed in Appendix I, rewards are only applied atthe final token of a response, which corresponds to the EOStoken when the response iscomplete. For responses that exceed the maximum length, we truncate the response to themaximum length and apply the reward to the last token of the truncated sequence. We alsoexperimented with penalizing truncated responses by assigning a negative reward (-1), butthis did not lead to performance improvements.•Dropout Disabling : During the RL tuning phase, we disable dropout across all models.This ensures that the log probabilities remain consistent between different forward passes,thereby avoiding stochastic effects that could hurt training stability.•Fixed KL Coefficient We use a constant coefficient for the KL penalty. Although theoriginal PPO implementation for finetining language models (Ziegler et al., 2019b) utilizedan adaptive KL controller, more recent implementations typically do not use this approach(Ouyang et al., 2022; Touvron et al., 2023; Xu et al., 2024).J.3 SFT ModelsTo ensure a systematic and reproducible evaluation, we create our SFT models πrefby finetuning thebase pretrained LLMs (as opposed to their “Instruct” version) on the training splits of the respectivedatasets. Specifically, we produce two distinct SFT models: two base LLM (DeepSeekMath 7B andRhoMath 1.1B ) across MATH. The base models are finetuned using the Adam optimizer withoutweight decay. We employ a learning rate warm-up over 6% of the total training steps. Each modelis trained for three epochs with a batch size of 64, and the best checkpoint is selected based onvalidation accuracy. For each SFT model, we conduct a hyperparameter sweep over learning rates inthe range {1×10−7,3×10−7,1×10−6,3×10−6,1×10−5,3×10−5,8×10−5,1×10−4}toensure optimal performance. We then use these SFT models as the initial checkpoint for training themethods mentioned in our paper.J.4 EvaluationWe evaluate each method’s performance on the test sets of each dataset. For example, when wereport that PPO achieves 42.8% accuracy on the MATH dataset for the DeepSeekMath 7B model,this means the PPO training was initialized with the SFT model specific to DeepSeekMath 7B on theMATH dataset, and accuracy was measured on the MATH test set. Our primary evaluation metric isaccuracy, specifically Pass@1 , which reflects the percentage of correct responses on the first attempt.This metric is crucial because it represents a realistic user interaction, where the model is expected todeliver a correct answer without the need for multiple tries. For each evaluation, we sample a responsefrom the model for a given prompt, using a maximum token length of 1024 and a temperature of0.35. A response is considered correct if its final answer matches the ground truth final answer, asdetailed in Appendix J.1. Furthermore, each accuracy score is averaged over 16 evaluation rounds,each conducted with different random seeds. This will ensure a robust and low variance assessmentof model performance.J.5 HyperparametersIn this section, we present a comprehensive overview of the hyperparameters used in our experiments.PPO Finetuning LLMs using PPO is known to be sensitive to hyperparameter selection, and findingthe optimal settings is critical for achieving strong performance. To ensure the robustness of ourstudy, we explored hyperparameter values reported in recent studies (Shao et al., 2024; Zheng et al.,2023; Ivison et al., 2024; Huang et al., 2024) and conducted various sweeps across a wide range ofvalues to identify the best configuration for our tasks and models. The full set of hyperparameters,along with their respective search spaces, is detailed in Table 1.18Table 3: Average time spent per each training step for different methods and models measured forMATH datasetMethod Model Hardware Average Training Step Time (s)PPO RhoMath 1.1B 4 ×Nvidia A100 80GB 80VinePPO RhoMath 1.1B 4 ×Nvidia A100 80GB 380PPO DeepSeekMath 7B 8 ×Nvidia H100 80GB 312VinePPO DeepSeekMath 7B 8 ×Nvidia H100 80GB 583VinePPO We utilized the same hyperparameter setup as in the PPO implementation (Table 1) forVinePPO. The number of MC samples, K, was set to 9 for all experiments.RestEM To ensure fair comparison we equalize the number of sampled responses for trainingbetween our RestEM run and our PPO runs. Therefore, in each RestEM iteration we sample 8responses per prompt and train for 8 epochs on the correct responses. In order to boost RestEM’sperformance we also run a sweep on other sensible parameters but we noticed no improvement(Table 2).J.6 ComputeAll experiments were conducted using multi-GPU training to efficiently handle the computationaldemands of large-scale models. For the RhoMath 1.1B model, we utilized a node with 4 ×NvidiaA100 80GB GPUs to train both PPO and VinePPO. For the larger DeepSeekMath 7B model, weemployed a more powerful setup, using a node with 8 ×Nvidia H100 80GB GPUs. Additionally,for training DeepSeekMath 7B models with the RestEM approach, we used a node with 4 ×NvidiaA100 80GB GPUs. The average training step time for each method on the MATH dataset is presentedin Table 3.J.7 Software StackBoth PPO and VinePPO require a robust and efficient implementation. For model implementation,we utilize the Huggingface library. Training is carried out using the DeepSpeed distributed traininglibrary, which offers efficient multi-GPU support. Specifically, we employ DeepSpeed ZeRO stage 0(vanilla data parallelism) for RhoMath 1.1B and ZeRO stage 2 (shared optimizer states and gradientsacross GPUs) for DeepSeekMath 7B . For trajectory sampling during RL training, we rely on thevLLM library (Kwon et al., 2023), which provides optimized inference for LLMs. Additionally,VinePPO leverages vLLM to generate Monte Carlo samples for value estimation. This softwarestack ensures that our experiments are both efficient and reproducible. For instance, during VinePPOtraining, we achieve an inference speed of up to 30K tokens per second using 8 ×Nvidia H100 GPUswith the DeepSeekMath 7B model.J.8 ReproducibilityIn this study, all experiments were conducted using open-source libraries, publicly available datasets,and open-weight LLMs. To ensure full reproducibility, we will release both Singularity and Dockercontainers, pre-configured with all dependencies and libraries, enabling our experiments to be run onany machine equipped with NVIDIA GPUs, now or in the future. Additionally, we will make ourcodebase publicly available on GitHub at https://www.omitted.link .19 |
JgW2VGxvYV | SBSC: Step-by-Step Coding for ImprovingMathematical Olympiad PerformanceKunal Singh∗Ankan Biswas Sayandeep Bhowmick Pradeep MoturiFractal AI ResearchMumbaiAbstractWe propose Step-by-Step Coding (SBSC): a multi-turn math reasoning frameworkthat enables Large Language Models (LLMs) to generate sequence of programsfor solving Olympiad level math problems. After each turn/step, by leveragingthe code execution outputs and programs of previous steps, the model generatesthe next sub-task and the corresponding program to complete it. SBSC allowsmore granular, flexible and precise approach to problem-solving compared toexisting methods. Extensive experiments highlight the effectiveness of SBSC intackling competition and Olympiad-level math problems. For Claude-3.5-Sonnet,we observe SBSC (greedy decoding) surpasses existing state-of-the-art (SOTA)program generation based reasoning strategies by absolute 10.7% on AMC12, 8%on AIME and 12.6% on MathOdyssey. Given SBSC is multi-turn in nature, we alsobenchmark SBSC’s greedy decoding against self-consistency decoding results ofexisting SOTA math reasoning strategies and observe performance gain by absolute6.2% on AMC, 6.7% on AIME and 7.4% on MathOdyssey. Scripts & Data isuploaded at this link for reproducibility.1 IntroductionMathematics is considered as a critical benchmark to measure the reasoning abilities of the LargeLanguage Models (LLMs) [ 5,8,1,23,2,21] due to the complex and creative nature of the subject.The current generation of advanced LLMs, GPT-4o [ 1], Claude-3.5-Sonnet [ 2], Gemini-ultra [ 23] haveachieved high scores on elementary GSM8k [ 9] & high-school level MATH [ 14]. However, recentmath specific competition and Olympiad-level benchmarking on Math Odyssey [ 11], AmericanInvitational Mathematics Examination (AIME) & American Mathematics Competitions (AMC)[4, 10, 23] questions show that they continue to struggle with advanced mathematical reasoning.Related Work : In recent times, numerous developments in multiple research directions have takenplace to enhance the math ability of the LLMs. One of the major ones has been along the promptingand thinking strategies such as Chain-of-Thought (COT) method [ 31,15] that has shown to evokemulti-step thinking in LLMs before arriving at the answer. These methods struggle with complex andsymbolic computations. For this, PAL [12] & POT [7] suggest making LLMs perform reasoning bywriting program and offloading the computations to code interpreter. Another line of research hasbeen around pre-training and supervised fine-tuning (SFT). Multiple studies [ 24,34,10,3,16,22,25]have shown pre-training LLMs on maths tokens results in increased mathematical knowledge andreasoning abilities. Recent approaches [ 36,13,37,28,24,27,19,4,33,26] have tried creatingsynthetic reasoning paths/trajectories using a teacher model like GPT4 [ 1] for SFT. Also, somestudies [29, 35, 32, 6, 17] provide an alternative to manual annotations for process supervision [18].∗Corresponding author: Kunal Singh ([email protected])38th Conference on Neural Information Processing Systems (NeurIPS 2024) MATH-AI.Motivation : COT prompting helps LLMs to solve a problem using a step-by-step thought process.PAL & POT introduced problem-solving via program generation where the answer is generated byexecuting the generated program. ToRA [ 13] & Mathcoder [ 28] introduced tool-integrated mathproblem solving format. There, model outputs natural language reasoning followed by programgeneration to solve the problem in a single turn/block and incorporates code-interpreter output foreither summarizing the program output to get the final answer and terminate; or re-attempt theproblem in the subsequent turn using the same format. For brevity, let’s call ToRA’s defined way oftool-integrated reasoning (TIR) strategy as TIR-ToRA.Fundamentally, both PAL & TIR-ToRA generate a single program block to solve the problem.Additionally, TIR-ToRA framework allows the model to re-attempt the program generation in caseof execution error. These approaches show improved performance over COT on elementary & highschool level math problems. However, solving olympiad level math requires coming up with complexand creative solutions. Often, it is not feasible to solve a complex problem entirely using a singleprogram block and as a result, these strategies fail to systematically address each detailed step ofthe problem-solving process. It tends to overlook specified constraints, edge cases or necessarysimplifications, which are often encountered in Olympiad-level problems.Our Contribution : Olympiad level math problem-solving can be viewed as solving/exploring anintermediate sub-task in depth; and discovering + solving the next critical sub-task dynamically basisthe accumulated knowledge of previous sub-tasks explorations. To this end, we propose Step-by-StepCoding paradigm (SBSC) which is a multi-turn math reasoning framework that leverages existingprogramming [ 20] and in-context learning skills [ 5] of the current generation of LLMs, particularlyClaude-3.5-Sonnet [ 2] & GPT-4o [ 21]. It uses program generation as the reasoning strategy to solvean intermediate sub-task unlike PAL & TIR-ToRA. In each turn, it leverages code-interpreter resultsand knowledge of previous sub-tasks solutions to define and programmatically solve the next sub-task.We investigate the performance of SBSC on last 11 years of AIME & AMC-12 questions. We alsobenchmark on Olympiad-subset of MathOdyssey dataset. We compare our method with existingreasoning strategies: COT, PAL, TIR-ToRA. We conduct ablations to understand the benefits ofour approach such as sensitivity to exemplars, topic-wise analysis and measuring improvement inprogram refinement/debugging ability over TIR-ToRA due to the granular nature of SBSC process.2 MethodSBSC is a multi-turn, program-generation based math reasoning framework where at each turn:the model generates an intermediate sub-task and corresponding program to solve that sub-task byleveraging the outputs of the previous turns. At the end of each turn, code interpreter is used toexecute the program block to generate the solution for the intermediate sub-task. The intermediatesub-task depends on the results of the previous turns and the question. The code snippet for the ithsub-task directly incorporates the execution results of the previous code snippets by directly definingthem as variables and symbols. This way SBSC makes LLMs generate sequence of targeted programsover multiple turns to solve complex math problems.Our inference procedure is inspired by ToRA [ 13]. Solution chain is initialized with the Prompt pcontaining method instructions followed by exemplars and the current question q. At each step, LLMGfirst outputs a subtask si. Ifsigeneration ends with stop-word " ###END OF CODE ", we extractthe final answer. Else, it continues to generate program code ciending with stop-word “ “‘output ”.We then pass cito code interpreter and obtain the execution message or output oi←E(ci). Thesolution chain is updated by concatenating it with si,ci,oiand loop continues till we get " ###ENDOF CODE ".⊕denoting concatenation, the sequential process can be generalised as:si⊕ci∼G(· |p⊕q⊕(s1⊕c1⊕o1)⊕(s2⊕c2⊕o2)⊕......(si−1⊕ci−1⊕oi−1)) (1)Step-wise sequential approach of SBSC ensures that every part of the problem is addressed with exactprecision, reducing the risk of errors that might arise from false assumptions or skipped steps. Havingseparate programs for each part of the solution also allows it to make necessary simplifications thatwould make the future subparts, and hence the whole problem, easier to solve allowing for a moregranular and precise approach to problem-solving compared to existing methods. In case the codeexecution at any step results in an erroneous output, SBSC is better able to rectify that particular step.Fig 1a shows a visual sample SBSC response for an AIME question and Fig 1b shows TIR-ToRA2response for the same question. More detailed discussion on comparison is at Appendix A.1. In depthunderstanding of SBSC via multiple examples and comparisons at Appendix A.4.(a) Example multi-turn SBSC response for an AIME problem. Pink boxes denote the sub-task siat the i-th step,blue boxes denote the program cito solve siand>>> denote the corresponding execution output oi. The redcurly brackets indicate reusing outputs from earlier steps.(b) Example TIR-ToRA response for the same problem, which is not solved correctly. In first turn, it tries tosolves the problem at once using a rational and program. It encounters error and in second turn, tries to fix theentire approach and solve again but the solution is incorrect.Figure 1: Comparison of SBSC and TIR-ToRA frameworks.3 ExperimentBenchmark datasets We create our datasets using problems of last 11 years from popular mathcompetitions AMC and AIME. We obtain questions and answers (Q&A) in L ATEX format from theAoPS Wiki website. We remove problems which are dependent on accompanying images and processthe Q&A to have integer answers using GPT-4o if needed, leaving us with 330 AIME problemsand 475 AMC-12 problems. We also use MathOdyssey [ 11], a popular benchmark for LLM math3reasoning, consisting of problems of varying difficulties. We include the 148 problems belonging toOlympiad-level competitions and perform similar filtering and processing. For more details on howwe processed the dataset, please refer to Appendix A.2.Models & Configurations We usegpt-4o-2024-05-13 andClaude-3.5-Sonnet as baseLLMs for our experiments. For all datasets and all reasoning frameworks, we use 4-shot setting.Maximum number of turns (n) for both TIR-ToRA and SBSC is set to 15. For greedy decodinginference, we use temperature=0 and max_tokens=1024 and also, we run 3 times andreport average. Given SBSC is multi-turn in nature (on average 6-7 turns per problem , Table 2in Appendix A.3) , we also benchmark SBSC’s greedy decoding results against self-consistency(SC) [ 30] decoding results (majority@7) of COT, PAL & TIR-ToRA. For SC decoding, we usetemperature=0.7 and top_p=0.9 . Note: we experimentally observe that for n > 4, there isno improvement in accuracy for TIR-ToRA so we set n=4 for TIR-ToRA during SC decoding. Allablations were conducted using Claude-3.5-Sonnet unless otherwise specified.Prompting/Few-shot Exemplars For both AIME and AMC, we select 90 questions each, drawnfrom problems of years other than those included in the evaluation datasets. These questions wereprompted with COT, PAL, TIR-ToRA and SBSC to generate corresponding solutions in accurateformat. For each dataset, we create a subset of 10 problems correctly solved by every method andfinally select a combination of 4 exemplars among them. For MathOdyssey, we use AIME exemplarsas both are of similar difficulty level. We provide the 4 chosen exemplars and system-prompts, usedin the main experiments, for different methods in Appendix (A.5, A.6, A.8, A.9) & repository here.4 ResultsMain ResultsMethod AMC AIME MathOdysseygreedy maj@7 greedy maj@7 greedy [email protected] 31.16 35.79 9.09 10.91 11.89 16.89PAL 35.79 36.42 27.48 28.79 27.23 31.01TIR-ToRA 38.59 43.16 24.64 26.67 27.23 32.43SBSC (Ours) 49.33↑10.7−↑6.2 35.45↑8-↑6.7 39.86↑12.6-↑7.4GPT-4oCOT 35.94 37.47 10.39 12.12 13.51 17.57PAL 36.48 38.11 24.63 26.97 15.74 20.27TIR-ToRA 37.33 40.42 22.42 25.45 19.59 23.64SBSC (Ours) 44.55↑7.2 -↑4.1 30.7↑6.1-↑3.7 26.55↑7 -↑2.9Table 1: Benchmarking SBSC against different math reasoning methods across 3 datasets:We reportthe average accuracy over 3 runs. Best result in each setting is highlighted in bold & second best isunderlined . Absolute improvement in performance by SBSC over the previous best method in eachsetting is indicated in subscript.As shown in Table 1, on AMC dataset, SBSC shows an absolute improvement over TIR-ToRAby roughly 11% using Claude-3.5-Sonnet and 7% using GPT-4o. SBSC greedy decoding resultsoutperforms SC decoding results of TIR-TORA by absolute 6% and 4%, for Claude-3.5-Sonnetand GPT-4o respectively. We see similar absolute improvements in accuracy on our AIME datasettoo. SBSC outperforms its nearest competitor (PAL) by 8% and 6% with greedy settings and SCsettings by 6.7% and 3.7% for Claude-3.5-Sonnet and GPT-4o respectively. For MathOdyssey, SBSCimproves by as much as 12.6% and 7% over TIR-ToRA while showing improvement of 7.4% and 3%over its SC variant, for Claude-3.5-Sonnet & GPT-4o respectively. Standard deviation values at A.10.5 AblationsSensitivity to Exemplars : We study the effect of number/choice of examples in prompting onSBSC’s performance. As shown in Figure 2, we observe a notable increase in performance when4Figure 2: Effect of Number of Exemplars Figure 3: Sensitivity to choice of Exemplarsincreasing the examples from 2 to 4, which then starts to saturate as we further increase the number ofexamples to 6 and 8. This justifies our decision of using a 4-shot setting. To understand if the choiceof exemplars affect the accuracy or not, we conduct a sensitivity analysis. We randomly sample 4exemplars out of the already created pool of 10 exemplars three times to create 3 variations of 4-shotprompts: v1, v2, and v3. In Figure 3, we can see that the performance remains stable irrespective ofthe exemplars used, across a subset of AIME (2022-2024) and AMC (2021-2023) problems.Figure 4: Topic breakdown analysis Figure 5: Comparison of Debugging AbilitiesTopic-wise Analysis : We use GPT-4o-mini [ 21] to classify problems from AIME and AMC, whileMathOdyssey already had topic labels. As can be seen in Figure 4, our method outperforms TIR-ToRAin all the individual topics and across all 3 datasets, thereby proving beneficial for all topics.Code Debugging Ability : We present the superior ability of our method to resolve an error relatedto code execution. If at any step of the trajectory chain, the program returns an execution error, weconsider that to be an error step. In Figure 5, we see that SBSC is able to recover from even multiplewrong steps and reach the correct final answer quite easily when compared to TIR-ToRA whoseperformance drops steeply on increasing error steps. This can be attributed to the fact that SBSC,being precise and granular, tackles only a specific part of the problem and finds it easier to correct itsmistakes compared to TIR-ToRA which tries to correct the program at the problem level.6 ConclusionSBSC is a math reasoning framework that solves a problem by generating a sequence of sub-tasks andcorresponding program blocks. Each sub-task and its corresponding program solution is generatedleveraging the execution outputs and solutions of all the previous sub-tasks. We show performanceimprovements of SBSC over TIR-ToRA, PAL & COT on challenging math problems. Limitations :We only focus on text-based questions. We also just evaluate on integer-answer type questions.References[1]O. J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt,S. Altman, S. Anadkat, R. Avila, I. Babuschkin, S. Balaji, V . Balcom, P. Baltescu, H. ing Bao, M. Bavarian,J. Belgum, I. Bello, J. Berdine, G. Bernadett-Shapiro, C. Berner, L. Bogdonoff, O. Boiko, M. Boyd, A.-L.Brakman, G. Brockman, T. Brooks, M. Brundage, K. Button, T. Cai, R. Campbell, A. Cann, B. Carey,C. Carlson, R. Carmichael, B. Chan, C. Chang, F. Chantzis, D. Chen, S. Chen, R. Chen, J. Chen, M. Chen,5B. Chess, C. Cho, C. Chu, H. W. Chung, D. Cummings, J. Currier, Y . Dai, C. Decareaux, T. Degry,N. Deutsch, D. Deville, A. Dhar, D. Dohan, S. Dowling, S. Dunning, A. Ecoffet, A. Eleti, T. Eloundou,D. Farhi, L. Fedus, N. Felix, S. P. Fishman, J. Forte, I. abella Fulford, L. Gao, E. Georges, C. Gibson,V . Goel, T. Gogineni, G. Goh, R. Gontijo-Lopes, J. Gordon, M. Grafstein, S. Gray, R. Greene, J. Gross,S. S. Gu, Y . Guo, C. Hallacy, J. Han, J. Harris, Y . He, M. Heaton, J. Heidecke, C. Hesse, A. Hickey,W. Hickey, P. Hoeschele, B. Houghton, K. Hsu, S. Hu, X. Hu, J. Huizinga, S. Jain, S. Jain, J. Jang, A. Jiang,R. Jiang, H. Jin, D. Jin, S. Jomoto, B. Jonn, H. Jun, T. Kaftan, L. Kaiser, A. Kamali, I. Kanitscheider,N. S. Keskar, T. Khan, L. Kilpatrick, J. W. Kim, C. Kim, Y . Kim, H. Kirchner, J. R. Kiros, M. Knight,D. Kokotajlo, L. Kondraciuk, A. Kondrich, A. Konstantinidis, K. Kosic, G. Krueger, V . Kuo, M. Lampe,I. Lan, T. Lee, J. Leike, J. Leung, D. Levy, C. M. Li, R. Lim, M. Lin, S. Lin, M. teusz Litwin, T. Lopez,R. Lowe, P. Lue, A. Makanju, K. Malfacini, S. Manning, T. Markov, Y . Markovski, B. Martin, K. Mayer,A. Mayne, B. McGrew, S. M. McKinney, C. McLeavey, P. McMillan, J. McNeil, D. Medina, A. Mehta,J. Menick, L. Metz, A. Mishchenko, P. Mishkin, V . Monaco, E. Morikawa, D. P. Mossing, T. Mu, M. Murati,O. Murk, D. M’ely, A. Nair, R. Nakano, R. Nayak, A. Neelakantan, R. Ngo, H. Noh, O. Long, C. O’Keefe,J. W. Pachocki, A. Paino, J. Palermo, A. Pantuliano, G. Parascandolo, J. Parish, E. Parparita, A. Passos,M. Pavlov, A. Peng, A. Perelman, F. de Avila Belbute Peres, M. Petrov, H. P. de Oliveira Pinto, M. Pokorny,M. Pokrass, V . H. Pong, T. Powell, A. Power, B. Power, E. Proehl, R. Puri, A. Radford, J. W. Rae,A. Ramesh, C. Raymond, F. Real, K. Rimbach, C. Ross, B. Rotsted, H. Roussez, N. Ryder, M. D. Saltarelli,T. Sanders, S. Santurkar, G. Sastry, H. Schmidt, D. Schnurr, J. Schulman, D. Selsam, K. Sheppard,T. Sherbakov, J. Shieh, S. Shoker, P. Shyam, S. Sidor, E. Sigler, M. Simens, J. Sitkin, K. Slama, I. Sohl,B. D. Sokolowsky, Y . Song, N. Staudacher, F. P. Such, N. Summers, I. Sutskever, J. Tang, N. A. Tezak,M. Thompson, P. Tillet, A. Tootoonchian, E. Tseng, P. Tuggle, N. Turley, J. Tworek, J. F. C. Uribe,A. Vallone, A. Vijayvergiya, C. V oss, C. L. Wainwright, J. J. Wang, A. Wang, B. Wang, J. Ward, J. Wei,C. Weinmann, A. Welihinda, P. Welinder, J. Weng, L. Weng, M. Wiethoff, D. Willner, C. Winter, S. Wolrich,H. Wong, L. Workman, S. Wu, J. Wu, M. Wu, K. Xiao, T. Xu, S. Yoo, K. Yu, Q. ing Yuan, W. Zaremba,R. Zellers, C. Zhang, M. Zhang, S. Zhao, T. Zheng, J. Zhuang, W. Zhuk, and B. Zoph. Gpt-4 technicalreport. 2023. URL https://api.semanticscholar.org/CorpusID:257532815 .[2]Anthropic. Introducing claude 3.5, 2023. URL https://www-cdn.anthropic.com/fed9cc193a14b84131812372d8d5857f8f304c52/Model_Card_Claude_3_Addendum.pdf.[3]Z. Azerbayev, H. Schoelkopf, K. Paster, M. D. Santos, S. M. McAleer, A. Q. Jiang, J. Deng, S. Biderman,and S. Welleck. Llemma: An open language model for mathematics. ArXiv , abs/2310.10631, 2023. URLhttps://api.semanticscholar.org/CorpusID:264172303 .[4]E. Beeching, S. C. Huang, A. Jiang, J. Li, B. Lipkin, Z. Qina, K. Rasul, Z. Shen, R. Soletskyi, andL. Tunstall. Numinamath 7b tir. https://huggingface.co/AI-MO/NuminaMath-7B-TIR ,2024.[5]T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry,A. Askell, S. Agarwal, A. Herbert-V oss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler,J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. teusz Litwin, S. Gray, B. Chess, J. Clark, C. Berner,S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. ArXiv ,abs/2005.14165, 2020. URL https://api.semanticscholar.org/CorpusID:218971783 .[6]G. Chen, M. Liao, C. Li, and K. Fan. Alphamath almost zero: process supervision without process. ArXiv ,abs/2405.03553, 2024. URL https://api.semanticscholar.org/CorpusID:269605484 .[7]W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling computationfrom reasoning for numerical reasoning tasks. Trans. Mach. Learn. Res. , 2023, 2022. URL https://api.semanticscholar.org/CorpusID:253801709 .[8]A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton,S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y . Tay, N. M. Shazeer,V . Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari,P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. García, V . Misra, K. Robinson,L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal,M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee,Z. Zhou, X. Wang, B. Saeta, M. Díaz, O. Firat, M. Catasta, J. Wei, K. S. Meier-Hellstern, D. Eck, J. Dean,S. Petrov, and N. Fiedel. Palm: Scaling language modeling with pathways. ArXiv , abs/2204.02311, 2022.URLhttps://api.semanticscholar.org/CorpusID:247951931 .[9]K. Cobbe, V . Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton,R. Nakano, C. Hesse, and J. Schulman. Training verifiers to solve math word problems. ArXiv ,abs/2110.14168, 2021. URL https://api.semanticscholar.org/CorpusID:239998651 .6[10] DeepSeek-AI, Q. Zhu, D. Guo, Z. Shao, D. Yang, P. Wang, R. Xu, Y . Wu, Y . Li, H. Gao, S. Ma,W. Zeng, X. Bi, Z. Gu, H. Xu, D. Dai, K. Dong, L. Zhang, Y . Piao, Z. Gou, Z. Xie, Z. Hao, B.-L.Wang, J.-M. Song, D. Chen, X. Xie, K. Guan, Y . mei You, A. Liu, Q. Du, W. Gao, X. Lu, Q. Chen,Y . Wang, C. Deng, J. Li, C. Zhao, C. Ruan, F. Luo, and W. Liang. Deepseek-coder-v2: Breakingthe barrier of closed-source models in code intelligence. ArXiv , abs/2406.11931, 2024. URL https://api.semanticscholar.org/CorpusID:270562723 .[11] M. Fang, X. Wan, F. Lu, F. Xing, and K. Zou. Mathodyssey: Benchmarking mathematical problem-solving skills in large language models using odyssey math data. ArXiv , abs/2406.18321, 2024. URLhttps://api.semanticscholar.org/CorpusID:270737739 .[12] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y . Yang, J. Callan, and G. Neubig. Pal: Program-aidedlanguage models. ArXiv , abs/2211.10435, 2022. URL https://api.semanticscholar.org/CorpusID:253708270 .[13] Z. Gou, Z. Shao, Y . Gong, Y . Shen, Y . Yang, M. Huang, N. Duan, and W. Chen. Tora: A tool-integratedreasoning agent for mathematical problem solving. ArXiv , abs/2309.17452, 2023. URL https://api.semanticscholar.org/CorpusID:263310365 .[14] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. X. Song, and J. Steinhardt.Measuring mathematical problem solving with the math dataset. ArXiv , abs/2103.03874, 2021. URLhttps://api.semanticscholar.org/CorpusID:232134851 .[15] T. Kojima, S. S. Gu, M. Reid, Y . Matsuo, and Y . Iwasawa. Large language models are zero-shot rea-soners. ArXiv , abs/2205.11916, 2022. URL https://api.semanticscholar.org/CorpusID:249017743 .[16] A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V . V . Ramasesh, A. Slone,C. Anil, I. Schlag, T. Gutman-Solo, Y . Wu, B. Neyshabur, G. Gur-Ari, and V . Misra. Solving quan-titative reasoning problems with language models. ArXiv , abs/2206.14858, 2022. URL https://api.semanticscholar.org/CorpusID:250144408 .[17] H. Lightman, V . Kosaraju, Y . Burda, H. Edwards, B. Baker, T. Lee, J. Leike, J. Schulman, I. Sutskever,and K. Cobbe. Let’s verify step by step. ArXiv , abs/2305.20050, 2023. URL https://api.semanticscholar.org/CorpusID:258987659 .[18] H. Lightman, V . Kosaraju, Y . Burda, H. Edwards, B. Baker, T. Lee, J. Leike, J. Schulman, I. Sutskever, andK. Cobbe. Let’s verify step by step, 2023. URL https://arxiv.org/abs/2305.20050 .[19] A. Mitra, H. Khanpour, C. Rosset, and A. Awadallah. Orca-math: Unlocking the potential of slms ingrade school math. ArXiv , abs/2402.14830, 2024. URL https://api.semanticscholar.org/CorpusID:267897618 .[20] A. G. W.-D. L. F. Y . T. Z. S. W. A. S.-L. K. S. I. S. Naman Jain, King Han. Livecodebench: Holistic andcontamination free evaluation of large language models for code. arXiv preprint , 2024.[21] OpenAI. "hello gpt-4o.", June, 2024. URL https://openai.com/index/hello-gpt-4o/ .[22] K. Paster, M. D. Santos, Z. Azerbayev, and J. Ba. Openwebmath: An open dataset of high-qualitymathematical web text. ArXiv , abs/2310.06786, 2023. URL https://api.semanticscholar.org/CorpusID:263829563 .[23] M. Reid, N. Savinov, D. Teplyashin, D. Lepikhin, T. P. Lillicrap, J.-B. Alayrac, R. Soricut, A. Lazaridou,O. Firat, J. Schrittwieser, I. Antonoglou, R. Anil, S. Borgeaud, A. M. Dai, K. Millican, E. Dyer, M. Glaese,T. Sottiaux, B. Lee, F. Viola, M. Reynolds, Y . Xu, J. Molloy, J. Chen, M. Isard, P. Barham, T. Hennigan,R. McIlroy, M. Johnson, J. Schalkwyk, E. Collins, E. Rutherford, E. Moreira, K. W. Ayoub, M. Goel,C. Meyer, G. Thornton, Z. Yang, H. Michalewski, Z. Abbas, N. Schucher, A. Anand, R. Ives, J. Keeling,K. Lenc, S. Haykal, S. Shakeri, P. Shyam, A. Chowdhery, R. Ring, S. Spencer, E. Sezener, L. Vilnis,O. Chang, N. Morioka, G. Tucker, C. Zheng, O. Woodman, N. Attaluri, T. Kociský, E. Eltyshev, X. Chen,T. Chung, V . Selo, S. Brahma, P. Georgiev, A. Slone, Z. Zhu, J. Lottes, S. Qiao, B. Caine, S. Riedel,A. Tomala, M. Chadwick, J. C. Love, P. Choy, S. Mittal, N. Houlsby, Y . Tang, M. Lamm, L. Bai, Q. Zhang,L. He, Y . Cheng, P. Humphreys, Y . Li, S. Brin, A. Cassirer, Y .-Q. Miao, L. Zilka, T. Tobin, K. Xu,L. Proleev, D. Sohn, A. Magni, L. A. Hendricks, I. Gao, S. Ontan’on, O. Bunyan, N. Byrd, A. Sharma,B. Zhang, M. Pinto, R. Sinha, H. Mehta, D. Jia, S. Caelles, A. Webson, A. Morris, B. Roelofs, Y . Ding,R. Strudel, X. Xiong, M. Ritter, M. Dehghani, R. Chaabouni, A. Karmarkar, G. Lai, F. Mentzer, B. Xu,Y . Li, Y . Zhang, T. L. Paine, A. Goldin, B. Neyshabur, K. Baumli, A. Levskaya, M. Laskin, W. Jia,J. W. Rae, K. Xiao, A. He, S. Giordano, L. Yagati, J.-B. Lespiau, P. Natsev, S. Ganapathy, F. Liu,D. Martins, N. Chen, Y . Xu, M. Barnes, R. May, A. Vezer, J. Oh, K. Franko, S. Bridgers, R. Zhao,7B. Wu, B. Mustafa, S. Sechrist, E. Parisotto, T. S. Pillai, C. Larkin, C. Gu, C. Sorokin, M. Krikun,A. Guseynov, J. Landon, R. Datta, A. Pritzel, P. Thacker, F. Yang, K. Hui, A. Hauth, C.-K. Yeh, D. Barker,J. Mao-Jones, S. Austin, H. Sheahan, P. Schuh, J. Svensson, R. Jain, V . V . Ramasesh, A. Briukhov,D.-W. Chung, T. von Glehn, C. Butterfield, P. Jhakra, M. Wiethoff, J. Frye, J. Grimstad, B. Changpinyo,C. L. Lan, A. Bortsova, Y . Wu, P. V oigtlaender, T. N. Sainath, C. Smith, W. Hawkins, K. Cao, J. Besley,S. Srinivasan, M. Omernick, C. Gaffney, G. de Castro Surita, R. Burnell, B. Damoc, J. Ahn, A. Brock,M. Pajarskas, A. Petrushkina, S. Noury, L. Blanco, K. Swersky, A. Ahuja, T. Avrahami, V . Misra,R. de Liedekerke, M. Iinuma, A. Polozov, S. York, G. van den Driessche, P. Michel, J. Chiu, R. Blevins,Z. Gleicher, A. Recasens, A. Rrustemi, E. Gribovskaya, A. Roy, W. Gworek, S. M. R. Arnold, L. Lee,J. Lee-Thorp, M. Maggioni, E. Piqueras, K. Badola, S. Vikram, L. Gonzalez, A. Baddepudi, E. Senter,J. Devlin, J. Qin, M. Azzam, M. Trebacz, M. Polacek, K. Krishnakumar, S. yiin Chang, M. Tung,I. Penchev, R. Joshi, K. Olszewska, C. Muir, M. Wirth, A. J. Hartman, J. Newlan, S. Kashem, V . Bolina,E. Dabir, J. R. van Amersfoort, Z. Ahmed, J. Cobon-Kerr, A. B. Kamath, A. M. Hrafnkelsson, L. Hou,I. Mackinnon, A. Frechette, E. Noland, X. Si, E. Taropa, D. Li, P. Crone, A. Gulati, S. Cevey, J. Adler,A. Ma, D. Silver, S. Tokumine, R. Powell, S. Lee, M. B. Chang, S. Hassan, D. Mincu, A. Yang, N. Levine,J. Brennan, M. Wang, S. Hodkinson, J. Zhao, J. Lipschultz, A. Pope, M. B. Chang, C. Li, L. E. Shafey,M. Paganini, S. Douglas, B. Bohnet, F. Pardo, S. Odoom, M. Rosca, C. N. dos Santos, K. Soparkar,A. Guez, T. Hudson, S. Hansen, C. Asawaroengchai, R. Addanki, T. Yu, W. Stokowiec, M. Khan, J. Gilmer,J. Lee, C. G. Bostock, K. Rong, J. Caton, P. Pejman, F. Pavetic, G. Brown, V . Sharma, M. Luvci’c,R. Samuel, J. Djolonga, A. Mandhane, L. L. Sjosund, E. Buchatskaya, E. White, N. Clay, J. Jiang,H. Lim, R. Hemsley, J. Labanowski, N. D. Cao, D. Steiner, S. H. Hashemi, J. Austin, A. Gergely,T. Blyth, J. Stanton, K. Shivakumar, A. Siddhant, A. Andreassen, C. L. Araya, N. Sethi, R. Shivanna,S. Hand, A. Bapna, A. Khodaei, A. Miech, G. Tanzer, A. Swing, S. Thakoor, Z. Pan, Z. Nado, S. Winkler,D. Yu, M. Saleh, L. Maggiore, I. Barr, M. Giang, T. Kagohara, I. Danihelka, A. Marathe, V . Feinberg,M. Elhawaty, N. Ghelani, D. Horgan, H. Miller, L. Walker, R. Tanburn, M. Tariq, D. Shrivastava, F. Xia,C.-C. Chiu, Z. C. Ashwood, K. Baatarsukh, S. Samangooei, F. Alcober, A. Stjerngren, P. Komarek,K. Tsihlas, A. Boral, R. Comanescu, J. Chen, R. Liu, D. Bloxwich, C. Chen, Y . Sun, F. Feng, M. Mauger,X. Dotiwalla, V . Hellendoorn, M. Sharman, I. Zheng, K. Haridasan, G. Barth-Maron, C. Swanson,D. Rogozi’nska, A. Andreev, P. K. Rubenstein, R. Sang, D. Hurt, G. Elsayed, R. shen Wang, D. Lacey,A. Ili’c, Y . Zhao, W. Han, L. Aroyo, C. Iwuanyanwu, V . Nikolaev, B. Lakshminarayanan, S. Jazayeri, R. L.Kaufman, M. Varadarajan, C. Tekur, D. Fritz, M. Khalman, D. Reitter, K. Dasgupta, S. Sarcar, T. Ornduff,J. Snaider, F. Huot, J. Jia, R. Kemp, N. Trdin, A. Vijayakumar, L. Kim, C. Angermueller, L. Lao, T. Liu,H. Zhang, D. Engel, S. Greene, A. White, J. Austin, L. Taylor, S. Ashraf, D. Liu, M. Georgaki, I. Cai,Y . Kulizhskaya, S. Goenka, B. Saeta, K. V odrahalli, C. Frank, D. de Cesare, B. Robenek, H. Richardson,M. Alnahlawi, C. Yew, P. Ponnapalli, M. Tagliasacchi, A. Korchemniy, Y . Kim, D. Li, B. Rosgen, K. Levin,J. Wiesner, P. Banzal, P. Srinivasan, H. Yu, cCauglar Unlu, D. Reid, Z. Tung, D. F. Finchelstein, R. Kumar,A. Elisseeff, J. Huang, M. Zhang, R. Zhu, R. Aguilar, M. Gim’enez, J. Xia, O. Dousse, W. Gierke,S. H. Yeganeh, D. Yates, K. Jalan, L. Li, E. Latorre-Chimoto, D. D. Nguyen, K. Durden, P. Kallakuri,Y . Liu, M. Johnson, T. Tsai, A. Talbert, J. Liu, A. Neitz, C. Elkind, M. Selvi, M. Jasarevic, L. B. Soares,A. Cui, P. Wang, A. W. Wang, X. Ye, K. Kallarackal, L. Loher, H. Lam, J. Broder, D. N. Holtmann-Rice, N. Martin, B. Ramadhana, D. Toyama, M. Shukla, S. Basu, A. Mohan, N. Fernando, N. Fiedel,K. Paterson, H. Li, A. Garg, J. Park, D. Choi, D. Wu, S. Singh, Z. Zhang, A. Globerson, L. Yu, J. Carpenter,F. de Chaumont Quitry, C. Radebaugh, C.-C. Lin, A. Tudor, P. Shroff, D. Garmon, D. Du, N. Vats,H. Lu, S. Iqbal, A. Yakubovich, N. Tripuraneni, J. Manyika, H. Qureshi, N. Hua, C. Ngani, M. A. Raad,H. Forbes, A. Bulanova, J. Stanway, M. Sundararajan, V . Ungureanu, C. Bishop, Y . Li, B. Venkatraman,B. Li, C. Thornton, S. Scellato, N. Gupta, Y . Wang, I. Tenney, X. Wu, A. Shenoy, G. Carvajal, D. G.Wright, B. Bariach, Z. Xiao, P. Hawkins, S. Dalmia, C. Farabet, P. Valenzuela, Q. Yuan, C. A. Welty,A. Agarwal, M. Chen, W. Kim, B. Hulse, N. Dukkipati, A. Paszke, A. Bolt, E. Davoodi, K. Choo, J. Beattie,J. Prendki, H. Vashisht, R. Santamaria-Fernandez, L. C. Cobo, J. Wilkiewicz, D. Madras, A. Elqursh,G. Uy, K. Ramirez, M. Harvey, T. Liechty, H. Zen, J. Seibert, C. H. Hu, A. Y . Khorlin, M. Le, A. Aharoni,M. Li, L. Wang, S. Kumar, A. Lince, N. Casagrande, J. Hoover, D. E. Badawy, D. Soergel, D. Vnukov,M. Miecnikowski, J. ima, A. Koop, P. Kumar, T. Sellam, D. Vlasic, S. Daruki, N. Shabat, J. Zhang,G. Su, K. Krishna, J. Zhang, J. Liu, Y . Sun, E. Palmer, A. Ghaffarkhah, X. Xiong, V . Cotruta, M. Fink,L. Dixon, A. Sreevatsa, A. Goedeckemeyer, A. Dimitriev, M. Jafari, R. Crocker, N. Fitzgerald, A. Kumar,S. Ghemawat, I. Philips, F. Liu, Y . Liang, R. Sterneck, A. Repina, M. Wu, L. Knight, M. Georgiev,H. Lee, H. Askham, A. Chakladar, A. Louis, C. Crous, H. Cate, D. Petrova, M. Quinn, D. Owusu-Afriyie,A. Singhal, N. Wei, S. Kim, D. Vincent, M. Nasr, C. A. Choquette-Choo, R. Tojo, S. Lu, D. de Las Casas,Y . Cheng, T. Bolukbasi, K. Lee, S. Fatehi, R. Ananthanarayanan, M. Patel, C. E. Kaed, J. Li, J. Sygnowski,S. R. Belle, Z. Chen, J. Konzelmann, S. Poder, R. Garg, V . Koverkathu, A. Brown, C. Dyer, R. Liu, A. Nova,J. Xu, J. Bai, S. Petrov, D. Hassabis, K. Kavukcuoglu, J. Dean, O. Vinyals, and A. Chronopoulou. Gemini1.5: Unlocking multimodal understanding across millions of tokens of context. ArXiv , abs/2403.05530,2024. URL https://api.semanticscholar.org/CorpusID:268297180 .[24] Z. Shao, P. Wang, Q. Zhu, R. Xu, J.-M. Song, M. Zhang, Y . K. Li, Y . Wu, and D. Guo. Deepseekmath:Pushing the limits of mathematical reasoning in open language models. ArXiv , abs/2402.03300, 2024.8URLhttps://api.semanticscholar.org/CorpusID:267412607 .[25] R. Taylor, M. Kardas, G. Cucurull, T. Scialom, A. S. Hartshorn, E. Saravia, A. Poulton, V . Kerkez,and R. Stojnic. Galactica: A large language model for science. ArXiv , abs/2211.09085, 2022. URLhttps://api.semanticscholar.org/CorpusID:253553203 .[26] Y . Tong, X. Zhang, R. Wang, R. M. Wu, and J. He. Dart-math: Difficulty-aware rejection tuning for mathe-matical problem-solving. ArXiv , abs/2407.13690, 2024. URL https://api.semanticscholar.org/CorpusID:271270574 .[27] S. Toshniwal, I. Moshkov, S. Narenthiran, D. Gitman, F. Jia, and I. Gitman. Openmathinstruct-1: A 1.8 million math instruction tuning dataset. ArXiv , abs/2402.10176, 2024. URL https://api.semanticscholar.org/CorpusID:267681752 .[28] K. Wang, H. Ren, A. Zhou, Z. Lu, S. Luo, W. Shi, R. Zhang, L. Song, M. Zhan, and H. Li. Mathcoder:Seamless code integration in llms for enhanced mathematical reasoning. ArXiv , abs/2310.03731, 2023.URLhttps://api.semanticscholar.org/CorpusID:263671510 .[29] P. Wang, L. Li, Z. Shao, R. Xu, D. Dai, Y . Li, D. Chen, Y .Wu, and Z. Sui. Math-shepherd: Verify andreinforce llms step-by-step without human annotations. ArXiv , abs/2312.08935, 2023. URL https://api.semanticscholar.org/CorpusID:266209760 .[30] X. Wang, J. Wei, D. Schuurmans, Q. Le, E. H. hsin Chi, and D. Zhou. Self-consistency improveschain of thought reasoning in language models. ArXiv , abs/2203.11171, 2022. URL https://api.semanticscholar.org/CorpusID:247595263 .[31] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. H. hsin Chi, F. Xia, Q. Le, and D. Zhou. Chain ofthought prompting elicits reasoning in large language models. ArXiv , abs/2201.11903, 2022. URLhttps://api.semanticscholar.org/CorpusID:246411621 .[32] Z. Xi, W. Chen, B. Hong, S. Jin, R. Zheng, W. He, Y . Ding, S. Liu, X. Guo, J. Wang, H. Guo, W. Shen,X. Fan, Y . Zhou, S. Dou, X. Wang, X. Zhang, P. Sun, T. Gui, Q. Zhang, and X. Huang. Training largelanguage models for reasoning through reverse curriculum reinforcement learning. ArXiv , abs/2402.05808,2024. URL https://api.semanticscholar.org/CorpusID:267547500 .[33] S. Yin, W. You, Z. Ji, G. Zhong, and J. Bai. Mumath-code: Combining tool-use large language modelswith multi-perspective data augmentation for mathematical reasoning. ArXiv , abs/2405.07551, 2024. URLhttps://api.semanticscholar.org/CorpusID:269756851 .[34] H. Ying, S. Zhang, L. Li, Z. Zhou, Y . Shao, Z. Fei, Y . Ma, J. Hong, K. Liu, Z. Wang, Y . Wang, Z. Wu,S. Li, F. Zhou, H. Liu, S. Zhang, W. Zhang, H. Yan, X. Qiu, J. Wang, K. Chen, and D. Lin. Internlm-math:Open math large language models toward verifiable reasoning. ArXiv , abs/2402.06332, 2024. URLhttps://api.semanticscholar.org/CorpusID:267617098 .[35] F. Yu, A. Gao, and B. Wang. Ovm, outcome-supervised value models for planning in mathematicalreasoning. In NAACL-HLT , 2023. URL https://api.semanticscholar.org/CorpusID:265221057 .[36] L. L. Yu, W. Jiang, H. Shi, J. Yu, Z. Liu, Y . Zhang, J. T. Kwok, Z. Li, A. Weller, and W. Liu. Metamath:Bootstrap your own mathematical questions for large language models. ArXiv , abs/2309.12284, 2023.URLhttps://api.semanticscholar.org/CorpusID:262084051 .[37] X. Yue, X. Qu, G. Zhang, Y . Fu, W. Huang, H. Sun, Y . Su, and W. Chen. Mammoth: Building mathgeneralist models through hybrid instruction tuning. ArXiv , abs/2309.05653, 2023. URL https://api.semanticscholar.org/CorpusID:261696697 .[38] H. S. Zheng, S. Mishra, X. Chen, H.-T. Cheng, E. H. Chi, Q. V . Le, and D. Zhou. Take a step back:Evoking reasoning via abstraction in large language models, 2024. URL https://arxiv.org/abs/2310.06117 .[39] D. Zhou, N. Scharli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, O. Bousquet, Q. Le, andE. H. hsin Chi. Least-to-most prompting enables complex reasoning in large language models. ArXiv ,abs/2205.10625, 2022. URL https://api.semanticscholar.org/CorpusID:248986239 .A Appendix / supplemental materialOptionally include supplemental material (complete proofs, additional experiments and plots) inappendix. All such materials SHOULD be included in the main submission.9A.1 Framework ExplanationWe present example responses from both SBSC and TIR-ToRA for a problem from AIME in figures1a and 1b respectively. As can be seen, in case of TIR-ToRA, the initial program generated by themodel runs into an execution error. At the next turn, it attempts to rectify the error and comes up witha new approach and the corresponding program. This time, the code executes correctly but the finalanswer is wrong.On the other hand, we see that SBSC is progressing step-by-step, tackling individual sub-taskswith separate programs and utilising outputs of previous steps. In the third step, it runs into a codeexecution error but succeeds in rectifying it using a different approach in the very next turn. Further,we observe SBSC checking the validity of the generated solutions in the fourth step before proceedingwith the final step and ultimately reaches the correct answer.This example also helps to illustrate how our approach is different from Least-to-Most (L2M)prompting [ 39] where the first stage involves pre-decomposing the question into two or more sub-questions in one go and then finding solutions for these pre-defined sub-questions whereas SBSCidentifies sub-tasks on the fly, based on preceding steps’ results and the final goal of the problem. Italso doesnt use tool-integration. Major advantage of SBSC is the granular program/sub-task levelthinking/refinement ability which previous works lack.A.2 Dataset ProcessingAll AIME problems have a unique integer answer ranging from 0 to 999, while AMC-12 problemsare of Multiple Choice Question(MCQ) format. Following Numina AIMO, we remove all the answerchoices from each AMC-12 question and modify the question, wherever necessary, to ensure aninteger answer. For this, we prompt GPT-4o to append an additional line at the end of each problemas suitable. Following is an example for demonstration:Original Question: An urn contains one red ball and one blue ball. A box of extra red and blue ballslies nearby. George performs the following operation four times: he draws a ball from the urn atrandom and then takes a ball of the same color from the box and returns those two matching ballsto the urn. After the four iterations the urn contains six balls. What is the probability that the urncontains three balls of each color?Answer:15Modified Question: An urn contains one red ball and one blue ball. A box of extra red and blueballs lies nearby. George performs the following operation four times: he draws a ball from the urn atrandom and then takes a ball of the same color from the box and returns those two matching ballsto the urn. After the four iterations the urn contains six balls. What is the probability that the urncontains three balls of each color? If the answer is represented as a fractionmnin its simplest terms,what is the value of m+n?Integer Answer: 6A.3 Number of Steps in SBSCIn Table 2, we present the number of turns taken per question by SBSC across the different datasets.A.4 Understanding SBSC in DetailIn this section, we demonstrate some scenarios where SBSC has been successful while TIR-ToRAhas failed, with the help of some example questions and investigating the responses obtained fromthe two models.Let’s consider the question in Example 1, involving a geometric progression of numbers written inlogarithmic form, which TIR-ToRA gets wrong.The method uses a binary search technique, whichis not very precise when dealing with exact values required for mathematical problems, especiallywhen fractions are involved.The solution uses a function to check whether the logarithms form ageometric progression which introduces additional complexity and potential inaccuracies because itinvolves comparing ratios that may not be exactly equal due to floating-point arithmetic.Also, thissingle-turn method tends to overlook specified constraints or necessary simplifications, which areoften encountered in Olympiad level problems and instead makes false assumptions.10Number of turns or steps AMC AIME MathOdyssey2 21 12 83 57 19 174 101 47 195 79 51 216 63 43 287 41 43 148 42 31 109 12 18 8others 59 66 23Average turns or steps/Problem 6.0 6.9 6.4Table 2: Table showing number of turns/steps used by SBSCThe question in Example 2 is an example scenario where TIR-ToRA fails because it makes anincorrect assumption. It misinterprets the Lipschitz condition and incorrectly makes a simplerassumption that the difference f(800)−f(400) is equal to the maximum possible difference, whichis 200. While the magnitude of the difference is bounded by 200, it does not mean that the actualdifference will always be 200. Iterative solutions, as are often the only way out in single programbased solutions, can sometimes lead to infinite loops, especially in cases where the stopping conditionis not clearly defined or understood by the LLM.As can be seen in Example 3, the single code is unable to take advantage of the factorization of 2020,which is key to solving the problem efficiently and instead iterates over a very large range of potentialvalues for m, leading to inefficiency. The upper bound 2020 is extremely large and the sheer numberof iterations causes a timeout.Example 4 presents a scenario where TIR-ToRA makes up an assumption about the problem andwrites the code for terminating a loop accordingly, which leads to a timeout error, as the incorrectassumption leads to an infinite loop. It lacks intermediate checks that would provide insights intowhether the sequence terms are of the formtt+1, which is crucial for solving the problem and wouldhave enabled it to chalk out the termination conditions suitably.On the other hand, our Step-By-Step Coding method enforces a decomposition of the problem intosmaller sub-task. Each sub-task is tackled independently by the LLM, which generates code to solveit and then uses the resulting output to suitably proceed to the next sub-task and this process continuestill the final answer is reached. Such an approach ensures that every part of the problem is addressedwith exact precision, reducing the risk of errors that might arise from skipped steps. Dividing theproblem into multiple sub-tasks also allows it to make necessary simplifications that would make thefuture sub-tasks, and hence the entire problem, easier to solve.Going back to the problem in Example 1, SBSC starts by defining the logarithms and setting up theequations based on the geometric progression condition. It then simplifies the equations to reducethem to a more manageable form, eliminating unnecessary complexity and allowing straightforwardsolving. Throughout the problem, it uses precise mathematical formulations of the problem, ensuringthe solution is accurate. Since this method isn’t trying to solve the entire problem at one go, it doesn’tneed to make any assumptions to simplify the problem statement.For the question in Example 2, it correctly interprets the problem, keeps applying the given Lipschitzcondition as it solves each sub-task and finds the correct maximum possible value of f(f(800)) −f(f(400)) . By systematically checking for constraints and edge cases at each stage, our methodguarantees that solutions are not only accurate but also valid under all possible conditions. This isparticularly crucial for problems with intricate conditions or multiple solution paths. Additionally,our process explicitly defines loop conditions to ensure that termination criteria are clear, allowingfor correction if issues like infinite loops arise.For the problem in Example 3, the first sub-task correctly identified by SBSC is to factorize the largenumber 2020into smaller powers, paving the path towards an efficient solution. In the subsequentsteps, it takes care of constraints like ensuring that m and n are integers and avoids over-counting byproperly iterating over valid powers, giving us the correct answer.Likewise, in Example 4, we see that our method generates each term of the sequence one at a time,allowing for intermediate checks to confirm whether each term is of the desired formtt+1. Thus,it accurately identifies the terms that meet the required condition and ensures that only the correct11indices are considered. This avoids the infinite loop risk and ensures the correctness of each stepbefore moving on. We also notice that the model runs into an error at the first sub-task itself but basedon the corresponding code output, it is able to take an alternate approach and successfully solves thesub-task, moving onto later ones. This example also highlights our superior code debugging ability,as discussed earlier.Example Question 1: There is a unique positive real number x such that the three numbers log82x,log4x, andlog2x, in that order, form a geometric progression with positive common ratio. Thenumber x can be written asmn, where m and n are relatively prime positive integers. Find m + nIncorrect solution by TIR-ToRALet’s approach this problem step-by-step: 11. Define a function to check if three numbers form a geometric progression. 22. Use binary search to find xthat satisfies the condition. 33. Convert xto a fraction and return the sum of its numerator and denominator. 4import math 5from fractions import Fraction 6def solution(): 7# Function to check if three numbers form 8#a geometric progression 9def is_geometric_progression(a, b, c): 10ratio1 = b / a 11ratio2 = c / b 12return math.isclose(ratio1, ratio2) and ratio1 > 1 13# Binary search to find x 14left, right = 1, 1000 15while right - left > 1e-10: 16x = (left + right) / 2 17log8_2x = math.log(2 *x, 8) 18log4_x = math.log(x, 4) 19log2_x = math.log(x, 2) 20if is_geometric_progression(log8_2x, log4_x, log2_x): 21break 22elif (log4_x / log8_2x) > (log2_x / log4_x): 23left = x 24else: 25right = x 26# Convert x to a fraction 27frac = Fraction(x).limit_denominator() 28return frac.numerator + frac.denominator 29result = solution() 30print(f"\\boxed{{{result}}}") 3132>>> output: 33\boxed{2} 34The answer is 2 35Correct solution by SBSC:Let’s solve this problem step by step using SymPy-based Python code.Step 1: Define the symbols and set up the equations3637from sympy import symbols, log, Eq, solve 38# Define symbols 39x, r = symbols('x r') 40# Define the three logarithms 41log1 = log(2 *x, 8) 42log2 = log(x, 4) 4312log3 = log(x, 2) 44# Set up equations for geometric progression 45eq1 = Eq(log2 / log1, r) 46eq2 = Eq(log3 / log2, r) 47print("Equation 1:", eq1) 48print("Equation 2:", eq2) 4950>>> output 51Equation 1: Eq(log(8) *log(x)/(log(4) *log(2*x)), r) 52Equation 2: Eq(log(4)/log(2), r) 53Let’s continue solving this problem step by step using SymPy-based Python code.Step 2: Simplify the equations5455from sympy import symbols, log, Eq, solve, simplify 56# Define symbols 57x, r = symbols('x r') 58# Define the three logarithms 59log1 = log(2 *x, 8) 60log2 = log(x, 4) 61log3 = log(x, 2) 62# Set up equations for geometric progression 63eq1 = Eq(log2 / log1, r) 64eq2 = Eq(log3 / log2, r) 65# Simplify equations 66eq1_simplified = simplify(eq1) 67eq2_simplified = simplify(eq2) 68print("Simplified Equation 1:", eq1_simplified) 69print("Simplified Equation 2:", eq2_simplified) 7071>>> output: 72Simplified Equation 1: Eq(r, 3 *log(x)/(2 *log(x) + 2 *log(2))) 73Simplified Equation 2: Eq(r, 2) 74Let’s continue solving this problem step by step using SymPy-based Python code.Step 3: Solve the equations7576from sympy import symbols, log, Eq, solve, simplify 77# Define symbols 78x, r = symbols('x r') 79# Define the simplified equations 80eq1 = Eq(r, 3 *log(x)/(2 *log(x) + 2 *log(2))) 81eq2 = Eq(r, 2) 82# Solve the system of equations 83solution = solve((eq1, eq2), (x, r)) 84print("Solution:", solution) 8586>>> output 87Solution: [(1/16, 2)] 88Let’s continue solving this problem step by step using SymPy-based Python code.Step 4: Extract the value of x and convert to a fraction8990from sympy import symbols,log,Eq, solve, simplify, Rational 91# Define symbols 92x, r = symbols('x r') 93# Define the solution from the previous step 94solution = [(Rational(1, 16), 2)] 95# Extract the value of x 96x_value = solution[0][0] 97print("x =", x_value) 98# Convert x to a fraction 99m = x_value.numerator 100n = x_value.denominator 101print("x as a fraction: {}/{}".format(m, n)) 10213103>>> output 104x = 1/16 105x as a fraction: 1/16 106Let’s continue solving this problem step by step using SymPy-based Python code.Step 5: Calculate m + n107108from sympy import symbols,log,Eq, solve, simplify, Rational 109# Define symbols 110x, r = symbols('x r') 111# Define the solution from the previous step 112solution = [(Rational(1, 16), 2)] 113# Extract the value of x 114x_value = solution[0][0] 115# Convert x to a fraction 116m = x_value.numerator 117n = x_value.denominator 118# Calculate m + n 119result = m + n 120print("m =", m) 121print("n =", n) 122print("m + n =", result) 123124>>> output: 125m = 1 126n = 16 127m + n = 17 128### END OF CODE 129The final answer is 17 130Example Question 2: Consider functions fthat satisfy |f(x)−f(y)| ≤0.5|x−y|for all realnumbers xandy. Of all such functions that also satisfy the equation f(300) = f(900) , what is thegreatest possible value of f(f(800)−f(f(400))) ?Incorrect solution by TIR-ToRA:To solve this problem, we need to consider the properties of the given function and find themaximum possible difference between f(f(800)) andf(f(400)) . Let’s solve it using python:131132def solution(): 133#Given: |f(x) - f(y)| <= 0.5|x - y| for all real x and y 134#Also, f(300) = f(900) 135#Maximum difference between f(800) and f(400) 136max_diff_800_400 = 0.5 *(800 - 400) # 200 137#Maximum value of f(800) - f(400) 138max_f800_f400 = max_diff_800_400 # 200 139#Maximum difference between f(f(800)) and f(f(400)) 140max_final_diff = 0.5 *max_f800_f400 # 100 141return max_final_diff 142result = solution() 143print(f"\\boxed{{{result}}}") 144145>>> output: 146\boxed{100} 147The answer is 100 14814Correct solution by SBSC:Let’s approach this problem step by step using SymPy-based Python code where appropriate.Step 1: Understand the given condition The condition |f(x) - f(y)| 1/2|x - y| implies that f is aLipschitz continuous function with Lipschitz constant 1/2. This means f doesn’t change toorapidly.149150151152Step 2: Analyze the equation f(300)=f(900).This equation tells us that f takes the same value atx=300 and x=900. Let’s call this value k.153154from sympy import symbols, Abs 155k = symbols('k') 156print(f"f(300) = f(900) = {k}") 157158>>> output: 159f(300) = f(900) = k 160Let’s continue solving this problem step by step.Step 3: Apply the Lipschitz condition to f(800) and f(400) We can use the Lipschitz conditionto find bounds for f(800) and f(400) in terms of k.161162163from sympy import symbols, Abs, Min, Max 164k = symbols('k') 165# For f(800) 166f_800_min = k - Abs(800 - 900)/2 167f_800_max = k + Abs(800 - 900)/2 168# For f(400) 169f_400_min = k - Abs(400 - 300)/2 170f_400_max = k + Abs(400 - 300)/2 171print(f"f(800) is between {f_800_min} and {f_800_max}") 172print(f"f(400) is between {f_400_min} and {f_400_max}") 173174>>> output: 175f(800) is between k - 50 and k + 50 176f(400) is between k - 50 and k + 50 177Let’s continue solving this problem step by step.Step 4: Analyze f(f(800)) andf(f(400)) Now we need to apply the Lipschitz condition againtof(f(800)) andf(f(400)) .178179180from sympy import symbols, Abs, Min, Max 181k = symbols('k') 182# For f(f(800)) 183ff_800_min = k - Abs(k + 50 - k)/2 184ff_800_max = k + Abs(k - 50 - k)/2 185# For f(f(400)) 186ff_400_min = k - Abs(k + 50 - k)/2 187ff_400_max = k + Abs(k - 50 - k)/2 188print(f"f(f(800)) is between {ff_800_min} and {ff_800_max}") 189print(f"f(f(400)) is between {ff_400_min} and {ff_400_max}") 190191>>> output 192f(f(800)) is between k - 25 and k + 25 193f(f(400)) is between k - 25 and k + 25 194Let’s continue solving this problem step by step.Step 5: Calculate the maximum possible difference Now we need to find the maximum possiblevalue of f(f(800)) −f(f(400)) .195196197from sympy import symbols, Abs, Min, Max 198k = symbols('k') 199# Maximum possible difference 200max_diff = (k + 25) - (k - 25) 201print(f"The maximum possible difference f(f(800))-f(f(400)) 202is {max_diff}") 20320415>>> output: 205The maximum possible difference f(f(800)) - f(f(400)) is 50 206### END OF CODE 207The final answer is 50 208Example Question 3: Find the number of ordered pairs of positive integers (m, n)such thatm2n= 2020Incorrect solution by TIR-ToRA:Let’s approach this problem step-by-step: 2091. Iterate through possible values of m up to the square root of 20 ˆ20 2102. For each m, check if m2divides 2020evenly 2113. Return the total count of valid pairs 212import math 213def solution(): 214count = 0 215target = 20 **20 216for m in range(1, int(math.sqrt(target)) + 1): 217if target % (m **2) == 0: 218n = target // (m **2) 219count += 1 220return count 221result = solution() 222print(f"The number of ordered pairs (m,n) is: 223f"\\boxed{{{result}}}"") 224225>>> output: 226TimeoutError: [Errno 110] Connection timed out 227Looks like we got a timeout error, let’s try a different approach that should not result in anyerror:228229import math 230def solution(): 231count = 0 232target = 20 **20 233for m in range(1, int(math.sqrt(target)) + 1): 234if target % (m **2) == 0: 235n = target // (m **2) 236count += 1 237return count 238result = solution() 239print(f"The number of ordered pairs (m,n) is: 240f"\\boxed{{{result}}}"") 241242>>> output: 243TimeoutError: [Errno 110] Connection timed out 244...This loop continues without ultimately getting an answer.245246Correct solution by SBSC:Let’s approach this problem step by step using SymPy-based Python code.Step 1: Define the equation and initialize variables247248from sympy import symbols,Eq,solve, divisors, prime_factors 249m, n = symbols('m n', positive=True, integer=True) 25016equation = Eq(m **2*n, 20**20) 251print("Equation:", equation) 252253>>> output: 254ImportError: cannot import name 'prime_factors' from 'sympy' 255I apologize for the error. It seems the ‘prime factors‘ function is not available in the version ofSymPy you’re using. Let’s modify our approach and use the ‘factorint‘ function instead, whichis more commonly available. Here’s the corrected code:Step 1: Define the equation and initialize variables256257258259from sympy import symbols, Eq, solve, divisors, factorint 260m, n = symbols('m n', positive=True, integer=True) 261equation = Eq(m **2*n, 20**20) 262print("Equation:", equation) 263264>>> output 265Equation: Eq(m **2*n, 104857600000000000000000000) 266Let’s continue solving this problem step by step.Step 2: Factor 2020and analyze its structure267268from sympy import symbols, Eq, solve, divisors, factorint 269m, n = symbols('m n', positive=True, integer=True) 270equation = Eq(m **2*n, 20**20) 271# Factor 20^20 272factorization = factorint(20 **20) 273print("Factorization of 20^20:", factorization) 274# Analyze the structure 275power_of_2 = factorization[2] 276power_of_5 = factorization[5] 277print("Power of 2:", power_of_2) 278print("Power of 5:", power_of_5) 279280>>> output 281Factorization of 20^20: {2: 40, 5: 20} 282Power of 2: 40 283Power of 5: 20 284Let’s continue solving this problem step by step.Step 3: Determine possible values for m and n285286from sympy import symbols, Eq, solve, divisors, factorint 287m, n = symbols('m n', positive=True, integer=True) 288equation = Eq(m **2*n, 20**20) 289# Factor 20^20 290factorization = factorint(20 **20) 291power_of_2 = factorization[2] 292power_of_5 = factorization[5] 293# Initialize counter for valid pairs 294valid_pairs = 0 295# Iterate through possible values of m 296for m_power_of_2 in range(0, power_of_2 + 1, 2): 297for m_power_of_5 in range(0, power_of_5 + 1): 298# Calculate m 299m_value = 2 **m_power_of_2 *5**m_power_of_5 300# Calculate corresponding n 301n_power_of_2 = power_of_2 - 2 *m_power_of_2 302n_power_of_5 = power_of_5 - m_power_of_5 303# Check if n is a positive integer 304if n_power_of_2 >= 0 and n_power_of_5 >= 0: 305valid_pairs += 1 306print("Number of valid (m,n) pairs:", valid_pairs) 307308>>> output 30917Number of valid (m,n) pairs: 231 310### END OF CODE 311The final answer is 231 312Example Question 4: Consider the sequence (ak)_k≥1of positive rational numbers definedbya1=20202021and for k≥1, ifak=mnfor relatively prime positive integers m and n , thenak+1=m+18n+19. Determine the sum of all positive integers j such that the rational number ajcan bewritten in the formtt+1for some positive integer tIncorrect solution by TIR-ToRA:Let’s proceed step-by-step: 3131. Generate the sequence of fractions starting from a1=20202021. 3142.For each term, check if it is in the formtt+1for some integer t. If so, add its index to arunning sum.3153163. Generate the next term using the given formula. 3174.Continue this process until we cycle back to the initial fraction. The sum of indiceswhere the fraction is in the formtt+1is our final answer.318319from fractions import Fraction 320def next_term(a): 321return Fraction(a.numerator + 18, a.denominator + 19) 322def is_of_form_t_over_t_plus_1(frac): 323return frac.denominator == frac.numerator + 1 324def solution(): 325a = Fraction(2020, 2021) 326j_sum = 0 327j = 1 328while True: 329if is_of_form_t_over_t_plus_1(a): 330j_sum += j 331a = next_term(a) 332j += 1 333# Check if we've reached the initial fraction again 334if a == Fraction(2020, 2021): 335break 336return j_sum 337result = solution() 338print(f"\\boxed{{{result}}}") 339Similar to the earlier example, this code results in a TimeOut error and then the loop keeps repeating.Correct solution by SBSC:Let’s solve this problem step by step using SymPy-based Python code.Step 1: Define the sequence and initial value340341from sympy import Rational, gcd 342def next_term(a): 343m, n = a.numerator, a.denominator 344return Rational(m + 18, n + 19) 345a = [Rational(2020, 2021)] 346print(f"a[1] = {a[0]}") 347348>>> output 349a[1] = 2020/2021 35018Let’s continue solving this problem step by step using SymPy-based Python code.Step 2: Generate the sequence until we find a term of the form t/(t+1)351352from sympy import Rational, gcd 353def next_term(a): 354m, n = a.numerator, a.denominator 355return Rational(m + 18, n + 19) 356a = [Rational(2020, 2021)] 357j = 1 358result = [] 359while True: 360if a[-1].denominator == a[-1].numerator + 1: 361result.append(j) 362print(f"Found a[{j}] = {a[-1]} of the form t/(t+1)") 363j += 1 364a.append(next_term(a[-1])) 365if j > 100: # Limit to prevent infinite loop 366break 367print(f"Indices found: {result}") 368369>>> output 370Found a[1] = 2020/2021 of the form t/(t+1) 371Found a[2] = 1019/1020 of the form t/(t+1) 372Found a[8] = 161/162 of the form t/(t+1) 373Found a[18] = 31/32 of the form t/(t+1) 374Found a[30] = 19/20 of the form t/(t+1) 375Indices found: [1, 2, 8, 18, 30] 376Let’s continue solving this problem step by step using SymPy-based Python code.Step 3: Calculate the sum of the indicesfrom sympy import Rational, gcd377378def next_term(a): 379m, n = a.numerator, a.denominator 380return Rational(m + 18, n + 19) 381a = [Rational(2020, 2021)] 382j = 1 383result = [] 384while True: 385if a[-1].denominator == a[-1].numerator + 1: 386result.append(j) 387j += 1 388a.append(next_term(a[-1])) 389if j > 100: # Limit to prevent infinite loop 390break 391sum_of_indices = sum(result) 392print(f"Indices found: {result}") 393print(f"Sum of indices: {sum_of_indices}") 394395>>> output 396Indices found: [1, 2, 8, 18, 30] 397Sum of indices: 59 398### END OF CODE 399The final answer is 59 400A.5 PAL ExemplarsIn this section, we provide the prompts for Program-Aided Language models (PAL) method. Weinitially used the default prompt as mentioned in the original PAL paper, but the results were poor.We noticed that the response often contained textual reasoning before or after the program, whichisn’t the desired format for PAL. Hence, we modify the instructions to confine the responses only toinclude Python program and subsequently, also notice improved accuracy.19For AIMELet’s use python program to solve math problems.DO NOT USE ANY TEXTUAL REASONING.Your response must start with: “‘pythonYour response must end with: print(result)Here are some examples you may refer to.Example Problem: A frog begins at P0= (0,0)and makes a sequence of jumps according tothe following rule: from Pn= (xn, yn),the frog jumps to Pn+1,which may be any of the points(xn+ 7, yn+ 2),(xn+ 2, yn+ 7),(xn−5, yn−10),or(xn−10, yn−5).There are Mpoints(x, y)with|x|+|y| ≤100that can be reached by a sequence of such jumps. Find the remainderwhen Mis divided by 1000.Example Solution:def solution():jumps = [(7, 2), (2, 7), (-5, -10), (-10, -5)]# Set to keep track of all reachable points, starting from the origin(0, 0).reachable = set([(0, 0)])# Queue to process points, starting with the origin (0, 0).queue = [(0, 0)]# Breadth-first search (BFS) to explore reachable points.while queue:# Pop the first point from the queue.x, y = queue.pop(0)# Iterate over all possible jumps.for dx, dy in jumps:# Calculate new coordinates after the jump.nx, ny = x + dx, y + dy# Check if the Manhattan distance is within 100 and the pointhasn't been visited.if abs(nx) + abs(ny) <= 100 and (nx, ny) not in reachable:# Add the new point to the reachable set.reachable.add((nx, ny))# Add the new point to the queue to explore further.queue.append((nx, ny))return len(reachable) % 1000result = solution()print(result)Example Problem: The AIME Triathlon consists of a half-mile swim, a 30-mile bicycle ride, andan eight-mile run. Tom swims, bicycles, and runs at constant rates. He runs fives times as fast as heswims, and he bicycles twice as fast as he runs. Tom completes the AIME Triathlon in four and aquarter hours. How many minutes does he spend bicycling?Example Solution:from sympy import symbols, Eq, solve, Rationaldef solution():x = symbols('x')# Set up the equationeq = Eq(Rational(1,2)/x + 30/(10 *x) + 8/(5 *x), Rational(17,4))# Solve the equationsolution = solve(eq)[0]# Calculate bicycling time in hoursbike_time = 30 / (10 *solution)# Convert to minutesbike_time_minutes = int(bike_time *60)return bike_time_minutesresult = solution()print result20Example Problem: LetSbe the increasing sequence of positive integers whose binary representationhas exactly 8ones. Let Nbe the 1000th number in S. Find the remainder when Nis divided by1000Example Solution:def solution():count = 0 # Initialize a counter to track how many numbers have beenfoundn = 1 # Start checking numbers from 1 upwardswhile count < 1000: # Continue the loop until we find the 1000thnumber# Check if the binary representation of the number 'n' hasexactly 8 '1'sif bin(n).count('1') == 8:count += 1 # Increment the counter when a number with 8 '1'sis found# If this is the 1000th such number, return the remainder ofn divided by 1000if count == 1000:return n % 1000n += 1 # Move to the next numberresult = solution()print(result)Example Problem: Two geometric sequences a1, a2, a3, . . .andb1, b2, b3, . . .have the same com-mon ratio, with a1= 27 b1= 99 , anda15=b11. Find a9Example Solution:def solution():# Initialize known valuesa1 = 27b1 = 99# Calculate the common ratio# We know that a15 = b11, so:# a1*r^14 = b1 *r^10# 27*r^14 = 99 *r^10# 27*r^4 = 99# r^4 = 99/27 = 11/3r = (11/3) **(1/4)# Calculate a9a9 = a1 *(r**8)return round(a9)result = solution()print(result)For AMC:Let’s use python program to solve math problems.DO NOT USE ANY TEXTUAL REASONING.Your response must start with: “‘pythonYour response must end with: print(result)Here are some examples you may refer to.Example Problem: Small lights are hung on a string 6inches apart in the order red, red, green,green, green, red, red, green, green, green, and so on continuing this pattern of 2red lights followedby3green lights. How many feet separate the 3rd red light and the 21st red light? Note: 1foot isequal to 12inches.Example Solution:def solution():# Find position of 3rd red lightn_3rd = 3complete_cycles_3rd = (n_3rd - 1) // 2remaining_lights_3rd = (n_3rd - 1) % 221pos_3rd = complete_cycles_3rd *5*6 + remaining_lights_3rd *6# Find position of 21st red lightn_21st = 21complete_cycles_21st = (n_21st - 1) // 2remaining_lights_21st = (n_21st - 1) % 2pos_21st = complete_cycles_21st *5*6 + remaining_lights_21st *6# Calculate the distance in inchesdistance_inches = pos_21st - pos_3rd# Convert to feetdistance_feet = distance_inches / 12return distance_feetresult = solution()print(result)Example Problem: A fruit salad consists of blueberries, raspberries, grapes, and cherries. The fruitsalad has a total of 280pieces of fruit. There are twice as many raspberries as blueberries, three timesas many grapes as cherries, and four times as many cherries as raspberries. How many cherries arethere in the fruit salad?Example Solution:from sympy import symbols, Eq, solvedef solution():# Define the symbols for the variablesb, r, g, c = symbols('b r g c')# Define the equations based on the problem statementeq1 = Eq(r, 2 *b) # Equation 1: r = 2beq2 = Eq(g, 3 *c) # Equation 2: g = 3ceq3 = Eq(c, 4 *r) # Equation 3: c = 4req4 = Eq(b + r + g + c, 280) # Equation 4: b + r + g + c = 280# Solve the system of equationssol = solve((eq1, eq2, eq3, eq4))return sol[c]result = solution()print(result)Example Problem: Last summer 30% of the birds living on Town Lake were geese, 25% wereswans, 10% were herons, and 35% were ducks. What percent of the birds that were not swans weregeese?Example Solution:def solution():# Total percentage of all birdstotal = 100# Percentages of each bird typegeese = 30swans = 25herons = 10ducks = 35# Calculate percentage of birds that are not swansnot_swans = total - swans# Calculate percentage of geese among birds that are not swansgeese_among_not_swans = (geese / not_swans) *100# Round to nearest whole numberreturn round(geese_among_not_swans)result = solution()print(result)Example Problem: At a twins and triplets convention, there were 9sets of twins and 6sets of triplets,all from different families. Each twin shook hands with all the twins except his/her siblings and withhalf the triplets. Each triplet shook hands with all the triplets except his/her siblings and with half thetwins. How many handshakes took place?Example Solution:22def solution():# Number of twins and tripletstwins = 9 *2triplets = 6 *3# Handshakes between twinstwin_handshakes = (twins *(twins - 2)) // 2# Handshakes between tripletstriplet_handshakes = (triplets *(triplets - 3)) // 2# Handshakes between twins and tripletstwin_triplet_handshakes = (twins *triplets) // 2# Total handshakestotal_handshakes = twin_handshakes + triplet_handshakes +twin_triplet_handshakesreturn total_handshakesresult = solution()print(result)A.6 TIR-ToRA ExemplarsFor AIMEIntegrate step-by-step reasoning and Python code to solve math problems using the followingguidelines:- Analyze the question and write functions to solve the problem; the function should not take anyarguments.- Present the final result in LaTeX using a ‘ ‘ without any units.- Utilize the ‘pi‘ symbol and ‘Rational“ from Sympy for πand fractions, and simplify all fractionsand square roots without converting them to decimal values.Once you get the code output, just display the output answer within ‘ ‘ and terminate.Here are some examples you may refer to:Example Problem: A frog begins at P0= (0,0)and makes a sequence of jumps according tothe following rule: from Pn= (xn, yn),the frog jumps to Pn+1,which may be any of the points(xn+ 7, yn+ 2),(xn+ 2, yn+ 7),(xn−5, yn−10),or(xn−10, yn−5).There are Mpoints(x, y)with|x|+|y| ≤100that can be reached by a sequence of such jumps. Find the remainderwhen Mis divided by 1000.Example Solution:Let’s approach this problem step-by-step:1.We need to find all points (x, y) that the frog can reach within the given constraint |x| + |y|100.2. We’ll use a breadth-first search (BFS) algorithm to explore all possible points.3. We’ll keep track of visited points to avoid revisiting them.4. We’ll count the number of unique points visited.Here’s the Python code to solve this:from collections import dequedef count_reachable_points():# Possible jumpsjumps = [(7, 2), (2, 7), (-5, -10), (-10, -5)]# Set to store visited pointsvisited = set([(0, 0)])# Queue for BFSqueue = deque([(0, 0)])while queue:x, y = queue.popleft()for dx, dy in jumps:new_x, new_y = x + dx, y + dy# Check if the new point is within the constraint and notvisited23if abs(new_x) + abs(new_y) <= 100 and (new_x, new_y) not invisited:visited.add((new_x, new_y))queue.append((new_x, new_y))# Count of reachable points is the size of visited setM = len(visited)# Return the remainder when M is divided by 1000return M % 1000result = count_reachable_points()print(f"\\boxed{{{result}}}")>>> output\boxed{373}The answer is 373Example Problem: The AIME Triathlon consists of a half-mile swim, a 30-mile bicycle ride, andan eight-mile run. Tom swims, bicycles, and runs at constant rates. He runs fives times as fast as heswims, and he bicycles twice as fast as he runs. Tom completes the AIME Triathlon in four and aquarter hours. How many minutes does he spend bicycling?Example Solution:Let’s approach this problem step-by-step:1. Let’s define variables:- Let x be Tom’s swimming speed in miles per hour- Then 5x is his running speed in miles per hour- And 10x is his bicycling speed in miles per hour2. We can set up equations based on the given information:- Swimming time: 0.5 / x hours- Bicycling time: 30 / (10x) hours- Running time: 8 / (5x) hours3. The total time is 4.25 hours, so we can set up an equation:0.5/x + 30/(10x) + 8/(5x) = 4.254. Let’s solve this equation and find the bicycling time.Here’s the Python code to solve this:from sympy import symbols, Eq, solve, Rationaldef solve_triathlon():x = symbols('x')# Set up the equationeq = Eq(Rational(1,2)/x + 30/(10 *x) + 8/(5 *x), Rational(17,4))# Solve the equationsolution = solve(eq)[0]# Calculate bicycling time in hoursbike_time = 30 / (10 *solution)# Convert to minutesbike_time_minutes = int(bike_time *60)return f"\\boxed{{{bike_time_minutes}}}"print(solve_triathlon())>>> output:\boxed{150}The answer is 150Example Problem: LetSbe the increasing sequence of positive integers whose binary representationhas exactly 8ones. Let Nbe the 1000th number in S. Find the remainder when Nis divided by1000Example Solution:Let’s approach this step-by-step:241.We need to generate the sequence S of numbers with exactly 8 ones in their binary represen-tation.2. We’ll find the 1000th number in this sequence.3. Finally, we’ll calculate the remainder when this number is divided by 1000.Here’s the Python code to solve this:def count_ones(n):return bin(n).count('1')def solution():count = 0n = 1while count < 1000:if count_ones(n) == 8:count += 1if count == 1000:return n % 1000n += 1result = solution()print(result)>>> output:\boxed{32}The answer is 32Example Problem: Two geometric sequences a1, a2, a3, . . .andb1, b2, b3, . . .have the same com-mon ratio, with a1= 27 b1= 99 , anda15=b11. Find a9Example Solution:Let’s approach this step-by-step:1. Let’s denote the common ratio as r.2. We can express a and b in terms of their first terms and r.3. We can then equate these expressions to find r.4. Once we have r, we can calculate a.Here’s the Python code to solve this:from sympy import symbols, Eq, solve, Rationaldef find_a9():r = symbols('r')# a = a *r# b = b *r# a = beq = Eq(27 *r**14, 99*r**10)# Solve for rsolution = solve(eq)# We expect only one positive real solutionr_value = [sol for sol in solution if sol.is_real and sol > 0][0]# Calculate aa9 = 27 *r_value**8return a9result = find_a9()print(f"\\boxed{{{result}}}")>>> output\boxed{363}The answer is 363For AMC25Integrate step-by-step reasoning and Python code to solve math problems using the followingguidelines:- Analyze the question and write functions to solve the problem; the function should not take anyarguments. - Present the final result in LaTeX using a ‘ ‘ without any units. - Utilize the ‘pi‘ symboland ‘Rational“ from Sympy for πand fractions, and simplify all fractions and square roots withoutconverting them to decimal values.Once you get the code output, just display the output answer within ‘ ‘ and terminate. Here are someexamples you may refer to:Example Problem: Small lights are hung on a string 6inches apart in the order red, red, green,green, green, red, red, green, green, green, and so on continuing this pattern of 2red lights followedby3green lights. How many feet separate the 3rd red light and the 21st red light? Note: 1foot isequal to 12inches.Example Solution:Let’s approach this step-by-step:1. First, we need to understand the pattern of lights:- Every 5 lights (2 red + 3 green) complete one cycle of the pattern.- Each cycle is 30 inches long (5 lights * 6 inches).2. We need to find how many complete cycles are between the 3rd and 21st red lights:- 3rd red light is at the end of the 2nd cycle (as each cycle has 2 red lights).- 21st red light is at the end of the 11th cycle (as 21 ÷ 2 = 10.5, rounded up to 11).- So there are 9 complete cycles between them (11 - 2 = 9).3. Calculate the distance:- 9 cycles * 30 inches per cycle = 270 inches- Convert 270 inches to feet: 270 ÷ 12 = 22.5 feetHere’s the Python code to solve this:from sympy import Rationaldef calculate_distance():lights_per_cycle = 5inches_per_light = 6inches_per_cycle = lights_per_cycle *inches_per_lightstart_cycle = 2 # 3rd red light is at the end of 2nd cycleend_cycle = 11 # 21st red light is at the end of 11th cyclecycles_between = end_cycle - start_cycletotal_inches = cycles_between *inches_per_cycletotal_feet = Rational(total_inches, 12)return f"\\boxed{{{total_feet}}}"print(calculate_distance())>>> output:\boxed{45/2}The answer is 45/2Example Problem: A fruit salad consists of blueberries, raspberries, grapes, and cherries. The fruitsalad has a total of 280pieces of fruit. There are twice as many raspberries as blueberries, three timesas many grapes as cherries, and four times as many cherries as raspberries. How many cherries arethere in the fruit salad?Example Solution:Let’s approach this problem step-by-step:1. Let’s define variables for each fruit:b = number of blueberriesr = number of raspberriesg = number of grapesc = number of cherries262. We can set up equations based on the given information:r = 2b (twice as many raspberries as blueberries)g = 3c (three times as many grapes as cherries)c = 4r (four times as many cherries as raspberries)3. We also know that the total number of fruits is 280:b + r + g + c = 2804. Now, let’s write a Python function to solve this system of equations:from sympy import symbols, Eq, solvedef solve_fruit_salad():b, r, g, c = symbols('b r g c')eq1 = Eq(r, 2 *b)eq2 = Eq(g, 3 *c)eq3 = Eq(c, 4 *r)eq4 = Eq(b + r + g + c, 280)solution = solve((eq1, eq2, eq3, eq4))return f"\\boxed{{{solution[c]}}}"print(solve_fruit_salad())>>> output:\boxed{64}The answer is 64Example Problem: Last summer 30% of the birds living on Town Lake were geese, 25% wereswans, 10% were herons, and 35% were ducks. What percent of the birds that were not swans weregeese?Example Solution:Let’s approach this step-by-step:1. First, we need to calculate the percentage of birds that were not swans.2. Then, we’ll calculate what percentage of this non-swan group were geese.Here’s the Python code to solve this:from sympy import Rationaldef calculate_geese_percentage():# Total percentage of birdstotal = 100# Percentage of swansswans = 25# Percentage of non-swansnon_swans = total - swans# Percentage of geese among all birdsgeese = 30# Calculate the percentage of geese among non-swansgeese_among_non_swans = Rational(geese, non_swans) *100# Simplify the fractionresult = geese_among_non_swans.simplify()return f"\\boxed{{{result}}}"print(calculate_geese_percentage())>>> output\boxed{40}The answer is 40Example Problem : At a twins and triplets convention, there were 9sets of twins and 6sets of triplets,all from different families. Each twin shook hands with all the twins except his/her siblings and withhalf the triplets. Each triplet shook hands with all the triplets except his/her siblings and with half thetwins. How many handshakes took place?27Example Solution:Let’s approach this problem step-by-step:1. Calculate the number of handshakes between twins2. Calculate the number of handshakes between triplets3. Calculate the number of handshakes between twins and triplets4. Sum up all handshakesHere’s the Python code to solve this:from sympy import Rationaldef calculate_handshakes():# Number of twins and tripletstwins = 9 *2triplets = 6 *3# Handshakes between twinstwin_handshakes = (twins *(twins - 2)) // 2 # Each twin shakeshands with all but 1 other twin# Handshakes between tripletstriplet_handshakes = (triplets *(triplets - 3)) // 2 # Each tripletshakes hands with all but 2 other triplets# Handshakes between twins and tripletstwin_triplet_handshakes = twins *triplets *Rational(1, 2) # Eachtwin shakes hands with half the triplets# Total handshakestotal_handshakes = twin_handshakes + triplet_handshakes +twin_triplet_handshakesreturn f"\\boxed{{{int(total_handshakes)}}}"print(calculate_handshakes())>>> output\boxed{441}The answer is 441A.7 SBSC System-Prompt TuningFor few-shot learning, apart from relevant exemplars, the LLM also benefits from a general instructionat the beginning [ 38,13,28] that provides a guideline or context about how the model should approachthe task, particularly those requiring logical reasoning, multi-step operations, etc. This can be speciallyuseful when the task requires a more nuanced understanding and when the instructions need to befollowed rigorously, as is the case with SBSC.The following has been used for our method:You are given a math problem and you need to think of a stepwise approach/process to be followed tosolve the problem. Use sympy-based python code to codify each of these steps in sequential manner.You must end each of your code snippet with a print statement followed by output in the next line.Use the results from the output of the code snippets of each step and continue to the next stepuntil you finish the final step and solve the problem completely. 1In each new code block, you must define the variables and symbols again based on the previouscode snippets or their outputs as suitable. 2Use sympy in appropriate way, as and when required.Once you complete the last step, write this line: ### END OF CODEThen just print the final numeric answer (without any units) like this: The final answer is boxed{}.If the executed code snippet returns an error, use it to correct the current step’s code snippet.DO NOT restart solving from Step 1. 3In Table 3, we compare the importance of the highlighted lines 1,2 and 3 above by trying out differentcombinations with/without them. We find that the complete instruction seems to be the best andproceed to use it for all experiments.28Full 3 only 2+3 1+3AMC 3 Yrs 48 43 49 46AIME 3 Yrs 29 24 28 25Table 3: Comparison of GPT-4o scores across different variations of prompt instructions.A.8 SBSC Prompt for AMCYou are given a math problem and you need to think of a stepwise approach/process to be followed tosolve the problem. Use sympy-based python code to codify each of these steps in sequential manner.You must end each of your code snippet with a print statement followed by output in the next line.Use the results from the output of the code snippets of each step and continue to the next step untilyou finish the final step and solve the problem completely.In each new code block, you must define the variables and symbols again based on the previous codesnippets or their outputs as suitable.Use sympy in appropriate way, as and when required.Once you complete the last step, write this line: ### END OF CODEThen just print the final numeric answer (without any units) like this: The final answer is boxed{}.If the executed code snippet returns an error, use it to correct the current step’s code snippet. DONOT restart solving from Step 1.Here are a few examples for your reference.Example Problem: Small lights are hung on a string 6inches apart in the order red, red, green,green, green, red, red, green, green, green, and so on continuing this pattern of 2red lights followedby3green lights. How many feet separate the 3rd red light and the 21st red light? Note: 1foot isequal to 12inches.Example Solution: Let’s solve this problem step by step using SymPy-based Python code.Step 1: Define the pattern and calculate the number of lights in one complete patternfrom sympy import symbols, Rational# Define the number of red and green lights in one patternred_lights_per_pattern = 2green_lights_per_pattern = 3# Calculate total lights in one patternlights_per_pattern = red_lights_per_pattern + green_lights_per_patternprint(f"Number of lights in one complete pattern: {lights_per_pattern}")>>> output:Number of lights in one complete pattern: 5Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 2: Calculate the number of complete patterns between the 3rd and 21st red lightsfrom sympy import symbols, Rational# Define the number of red lights we're consideringstart_red_light = 3end_red_light = 21# Calculate the number of red lights between the 3rd and 21st (inclusive)red_lights_between = end_red_light - start_red_light + 1# Calculate the number of complete patternscomplete_patterns = (red_lights_between - 1) // 2print(f"Number of complete patterns between the 3rd and 21st red lights:{complete_patterns}")>>> output:Number of complete patterns between the 3rd and 21st red lights: 9Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 3: Calculate the total number of lights between the 3rd and 21st red lights29from sympy import symbols, Rational# Define the number of red and green lights in one patternred_lights_per_pattern = 2green_lights_per_pattern = 3lights_per_pattern = red_lights_per_pattern + green_lights_per_pattern# Number of complete patternscomplete_patterns = 9# Calculate total lightstotal_lights = complete_patterns *lights_per_pattern + 1 # +1 for the21st red lightprint(f"Total number of lights between the 3rd and 21st red lights (inclusive): {total_lights}")>>> output:Total number of lights between the 3rd and 21st red lights (inclusive):46Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 4: Calculate the total distance in inchesfrom sympy import symbols, Rational# Total number of lightstotal_lights = 46# Distance between lights in inchesdistance_between_lights = 6# Calculate total distance in inchestotal_distance_inches = (total_lights - 1) *distance_between_lightsprint(f"Total distance in inches: {total_distance_inches}")>>> output:Total distance in inches: 270Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 5: Convert the distance from inches to feetfrom sympy import symbols, Rational# Total distance in inchestotal_distance_inches = 270# Convert inches to feetinches_per_foot = 12total_distance_feet = Rational(total_distance_inches, inches_per_foot)print(f"Total distance in feet: {total_distance_feet}")>>> output:Total distance in feet: 45/2### END OF CODEThe final answer is452Example Problem: A fruit salad consists of blueberries, raspberries, grapes, and cherries. The fruitsalad has a total of 280pieces of fruit. There are twice as many raspberries as blueberries, three timesas many grapes as cherries, and four times as many cherries as raspberries. How many cherries arethere in the fruit salad?Example Solution: Let’s solve this problem step by step using SymPy-based Python code.Step 1: Define the variables and set up the equationsfrom sympy import symbols, Eq, solve# Define symbols for each type of fruitb, r, g, c = symbols('b r g c')# Set up equations based on the given informationeq1 = Eq(b + r + g + c, 280) # Total pieces of fruiteq2 = Eq(r, 2 *b) # Twice as many raspberries as blueberrieseq3 = Eq(g, 3 *c) # Three times as many grapes as cherries30eq4 = Eq(c, 4 *r) # Four times as many cherries as raspberriesprint("Equations:")print(eq1)print(eq2)print(eq3)print(eq4)>>> output:Equations:Eq(b + c + g + r, 280)Eq(r, 2*b)Eq(g, 3*c)Eq(c, 4*r)Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 2: Solve the system of equationsfrom sympy import symbols, Eq, solve# Define symbols for each type of fruitb, r, g, c = symbols('b r g c')# Set up equations based on the given informationeq1 = Eq(b + r + g + c, 280) # Total pieces of fruiteq2 = Eq(r, 2 *b) # Twice as many raspberries as blueberrieseq3 = Eq(g, 3 *c) # Three times as many grapes as cherrieseq4 = Eq(c, 4 *r) # Four times as many cherries as raspberries# Solve the system of equationssolution = solve((eq1, eq2, eq3, eq4))print("Solution:")print(solution)>>> output:Solution:{b: 8, c: 64, g: 192, r: 16}Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 3: Extract the number of cherries from the solutionfrom sympy import symbols, Eq, solve# Define symbols for each type of fruitb, r, g, c = symbols('b r g c')# Set up equations based on the given informationeq1 = Eq(b + r + g + c, 280) # Total pieces of fruiteq2 = Eq(r, 2 *b) # Twice as many raspberries as blueberrieseq3 = Eq(g, 3 *c) # Three times as many grapes as cherrieseq4 = Eq(c, 4 *r) # Four times as many cherries as raspberries# Solve the system of equationssolution = solve((eq1, eq2, eq3, eq4))# Extract the number of cherriescherries = solution[c]print(f"The number of cherries in the fruit salad is: {cherries}")>>> output:The number of cherries in the fruit salad is: 64END OF CODE The final answer is 64Example Problem: Last summer 30% of the birds living on Town Lake were geese, 25% wereswans, 10% were herons, and 35% were ducks. What percent of the birds that were not swans weregeese?Example Solution: Let’s solve this problem step by step using SymPy-based Python code.Step 1: Define the percentages of each type of birdfrom sympy import symbols, Rationalgeese = Rational(30, 100)31swans = Rational(25, 100)herons = Rational(10, 100)ducks = Rational(35, 100)print(f"Percentage of geese: {geese}")print(f"Percentage of swans: {swans}")print(f"Percentage of herons: {herons}")print(f"Percentage of ducks: {ducks}")>>> output:Percentage of geese: 3/10Percentage of swans: 1/4Percentage of herons: 1/10Percentage of ducks: 7/20Let’s continue with the next steps to solve this problem. Step 2: Calculate the percentage of birdsthat are not swansfrom sympy import symbols, Rationalgeese = Rational(30, 100)swans = Rational(25, 100)herons = Rational(10, 100)ducks = Rational(35, 100)not_swans = 1 - swansprint(f"Percentage of birds that are not swans: {not_swans}")>>> output:Percentage of birds that are not swans: 3/4Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 3: Calculate the percentage of geese among the birds that are not swansfrom sympy import symbols, Rationalgeese = Rational(30, 100)swans = Rational(25, 100)herons = Rational(10, 100)ducks = Rational(35, 100)not_swans = 1 - swansgeese_among_not_swans = geese / not_swansprint(f"Percentage of geese among birds that are not swans: {geese_among_not_swans}")>>> output:Percentage of geese among birds that are not swans: 2/5Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 4: Convert the fraction to a percentagefrom sympy import symbols, Rationalgeese = Rational(30, 100)swans = Rational(25, 100)herons = Rational(10, 100)ducks = Rational(35, 100)not_swans = 1 - swansgeese_among_not_swans = geese / not_swanspercentage = geese_among_not_swans *100print(f"Percentage of geese among birds that are not swans: {percentage}%")>>> output:Percentage of geese among birds that are not swans: 40%### END OF CODEThe final answer is 4032Example Problem: At a twins and triplets convention, there were 9sets of twins and 6sets of triplets,all from different families. Each twin shook hands with all the twins except his/her siblings and withhalf the triplets. Each triplet shook hands with all the triplets except his/her siblings and with half thetwins. How many handshakes took place?Example Solution: Let’s solve this problem step by step using SymPy-based Python code.Step 1: Define the given informationfrom sympy import symbols, Rational# Number of sets of twins and tripletstwin_sets = 9triplet_sets = 6# Number of individualstwins = twin_sets *2triplets = triplet_sets *3print(f"Number of twins: {twins}")print(f"Number of triplets: {triplets}")>>> output:Number of twins: 18Number of triplets: 18Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 2: Calculate handshakes between twinsfrom sympy import symbols, Rational# Number of sets of twins and tripletstwin_sets = 9triplet_sets = 6# Number of individualstwins = twin_sets *2triplets = triplet_sets *3# Handshakes between twinstwin_handshakes = (twins *(twins - 2)) // 2 # Each twin shakes handswith all twins except self and siblingprint(f"Handshakes between twins: {twin_handshakes}")>>> outputHandshakes between twins: 144Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 3: Calculate handshakes between tripletsfrom sympy import symbols, Rational# Number of sets of twins and tripletstwin_sets = 9triplet_sets = 6# Number of individualstwins = twin_sets *2triplets = triplet_sets *3# Handshakes between twinstwin_handshakes = (twins *(twins - 2)) // 2# Handshakes between tripletstriplet_handshakes = (triplets *(triplets - 3)) // 2 # Each tripletshakes hands with all triplets except self and two siblingsprint(f"Handshakes between triplets: {triplet_handshakes}")>>> outputHandshakes between triplets: 135Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 4: Calculate handshakes between twins and tripletsfrom sympy import symbols, Rational33# Number of sets of twins and tripletstwin_sets = 9triplet_sets = 6# Number of individualstwins = twin_sets *2triplets = triplet_sets *3# Handshakes between twinstwin_handshakes = (twins *(twins - 2)) // 2# Handshakes between tripletstriplet_handshakes = (triplets *(triplets - 3)) // 2# Handshakes between twins and tripletstwin_triplet_handshakes = twins *(triplets // 2) # Each twin shakeshands with half the tripletsprint(f"Handshakes between twins and triplets: {twin_triplet_handshakes}")>>> output:Handshakes between twins and triplets: 162Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 5: Calculate the total number of handshakesfrom sympy import symbols, Rational# Number of sets of twins and tripletstwin_sets = 9triplet_sets = 6# Number of individualstwins = twin_sets *2triplets = triplet_sets *3# Handshakes between twinstwin_handshakes = (twins *(twins - 2)) // 2# Handshakes between tripletstriplet_handshakes = (triplets *(triplets - 3)) // 2# Handshakes between twins and tripletstwin_triplet_handshakes = twins *(triplets // 2)# Total handshakestotal_handshakes = twin_handshakes + triplet_handshakes +twin_triplet_handshakesprint(f"Total number of handshakes: {total_handshakes}")>>> outputTotal number of handshakes: 441### END OF CODEThe final answer is 441A.9 SBSC Prompt for AIMEYou are given a math problem and you need to think of a stepwise approach/process to be followed tosolve the problem. Use sympy-based python code to codify each of these steps in sequential manner.You must end each of your code snippet with a print statement followed by output in the next line.Use the results from the output of the code snippets of each step and continue to the next step untilyou finish the final step and solve the problem completely.In each new code block, you must define the variables and symbols again based on the previous codesnippets or their outputs as suitable.Use sympy in appropriate way, as and when required.Once you complete the last step, write this line: ### END OF CODEThen just print the final numeric answer (without any units) like this: The final answer is boxed{}.If the executed code snippet returns an error, use it to correct the current step’s code snippet. DONOT restart solving from Step 1.Here are a few examples for your reference.34Example Problem: A frog begins at P0= (0,0)and makes a sequence of jumps according tothe following rule: from Pn= (xn, yn),the frog jumps to Pn+1,which may be any of the points(xn+ 7, yn+ 2),(xn+ 2, yn+ 7),(xn−5, yn−10),or(xn−10, yn−5).There are Mpoints(x, y)with|x|+|y| ≤100that can be reached by a sequence of such jumps. Find the remainderwhen Mis divided by 1000.Example Solution: Let’s approach this problem step by step using SymPy-based Python code.Step 1: Import necessary libraries and define the jump rulesfrom sympy import symbols, Absimport numpy as np# Define the possible jumps from any position (x, y)def jumps(x, y):return [(x + 7, y + 2), # Jump by (7, 2)(x + 2, y + 7), # Jump by (2, 7)(x - 5, y - 10), # Jump by (-5, -10)(x - 10, y - 5) # Jump by (-10, -5)]print("Jump rules defined.")>>> outputJump rules defined.Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 2: Implement a breadth-first search to find all reachable pointsfrom sympy import symbols, Absimport numpy as npfrom collections import deque# Reuse the jumps function to define possible movesdef jumps(x, y):return [(x + 7, y + 2),(x + 2, y + 7),(x - 5, y - 10),(x - 10, y - 5)]# Define the breadth-first search (BFS) functiondef bfs():queue = deque([(0, 0)]) # Start BFS from the point (0, 0)visited = set([(0, 0)]) # Keep track of visited points to avoidreprocessing# Process each point in the queuewhile queue:x, y = queue.popleft()for nx, ny in jumps(x, y): # Try all possible jumps from thecurrent pointif (nx, ny) not in visited and abs(nx) + abs(ny) <= 100:visited.add((nx, ny)) # Mark the new point as visitedqueue.append((nx, ny)) # Add the new point to the queueto explore furtherreturn len(visited) yM = bfs()print(f"Number of reachable points (M): {M}")>>> output:Number of reachable points (M): 1373Let’s continue with the final step to solve this problem. Step 3: Calculate the remainder when M isdivided by 1000from sympy import symbols, Absimport numpy as npM = 1373 # M_value from previous step35# Find the remainder when M is divided by 1000remainder = M % 1000print(f"Remainder when M is divided by 1000: {remainder}")>>> outputRemainder when M is divided by 1000: 373### END OF CODEThe final answer is 373Example Problem: The AIME Triathlon consists of a half-mile swim, a 30-mile bicycle ride, andan eight-mile run. Tom swims, bicycles, and runs at constant rates. He runs fives times as fast as heswims, and he bicycles twice as fast as he runs. Tom completes the AIME Triathlon in four and aquarter hours. How many minutes does he spend bicycling?Example Solution: Let’s solve this problem step by step using SymPy-based Python code.Step 1: Define the variables and set up the basic equationsfrom sympy import symbols, Eq, solve# Define variabless, b, r = symbols('s b r') # s: swim speed, b: bike speed, r: run speed# Set up equations based on the given informationeq1 = Eq(r, 5 *s) # Tom runs 5 times as fast as he swimseq2 = Eq(b, 2 *r) # Tom bicycles twice as fast as he runsprint("Equation 1:", eq1)print("Equation 2:", eq2)>>> outputEquation 1: Eq(r, 5 *s)Equation 2: Eq(b, 2 *r)Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 2: Set up the equation for the total timefrom sympy import symbols, Eq, solve# Reuse earlier codes, b, r = symbols('s b r')eq1 = Eq(r, 5 *s)eq2 = Eq(b, 2 *r)# Total time equation (in hours)total_time = 0.5/s + 30/b + 8/r #The AIME Triathlon consists of a half-mile swim, a 30-mile bicycle ride, and an eight-mile run.eq3 = Eq(total_time, 4.25) # Total time is 4.25 hoursprint("Total time equation:", eq3)>>> output:Total time equation: Eq(0.5/s + 8/r + 30/b, 4.25)Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 3: Substitute the relationships between speeds into the total time equationfrom sympy import symbols, Eq, solve, simplify# Resue earlier codes, b, r = symbols('s b r')eq1 = Eq(r, 5 *s)eq2 = Eq(b, 2 *r)total_time = 0.5/s + 30/b + 8/req3 = Eq(total_time, 4.25) # Total time is 4.25 hours# Substitute r and b in terms of s to simplify the equationeq3_substituted = eq3.subs([(r, 5 *s), (b, 10 *s)])eq3_simplified = simplify(eq3_substituted)print("Simplified equation:", eq3_simplified)36>>> output:Simplified equation: Eq(5.1/s, 4.25)Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 4: Solve for the swim speed (s)from sympy import symbols, Eq, solve, simplify# Redefine variabless, b, r = symbols('s b r') # s: swim speed, b: bike speed, r: run speed# Simplified equation from previous stepeq3_simplified = Eq(5.1/s, 4.25)# Solve for ss_solution = solve(eq3_simplified, s)print("Swim speed (s):", s_solution[0])>>> outputSwim speed (s): 1.20000000000000Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 5: Calculate the bike speed (b)from sympy import symbols, Eq, solve, simplify# Redefine variabless, b, r = symbols('s b r') # s: swim speed, b: bike speed, r: run speed# Swim speed from previous steps_value = 1.20000000000000# Calculate bike speedb_value = 10 *s_value # b = 2r = 2(5s) = 10sprint("Bike speed (b):", b_value)>>> outputBike speed (b): 12.0Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 6: Calculate the time spent bicyclingfrom sympy import symbols, Eq, solve, simplify# Redefine variabless, b, r = symbols('s b r') # s: swim speed, b: bike speed, r: run speed# Bike speed from previous stepb_value = 12.0# Calculate time spent bicyclingbike_distance = 30 # milesbike_time = bike_distance / b_value # hours# Convert to minutesbike_time_minutes = bike_time *60print("Time spent bicycling (in minutes):", bike_time_minutes)>>> outputTime spent bicycling (in minutes): 150.0### END OF CODEThe final answer is 150Example Problem: LetSbe the increasing sequence of positive integers whose binary representationhas exactly 8ones. Let Nbe the 1000th number in S. Find the remainder when Nis divided by1000Example Solution: Let’s solve this problem step by step using SymPy-based Python code.Step 1: Import necessary libraries and define a function to count ones in binary representationfrom sympy import symbols, binomialimport math# Function to count the number of '1's in the binary representation of anumber37def count_ones(n):return bin(n).count('1') # Convert n to binary and count occurrencesof '1'print("Function to count ones in binary representation defined.")>>> output:Function to count ones in binary representation defined.Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 2: Find the smallest number in the sequence Sfrom sympy import symbols, binomialimport math# Reuse the function to count the number of '1's in binarydef count_ones(n):return bin(n).count('1')# The smallest number in the sequence S is represented by 8 consecutive'1's in binarysmallest_number = int('1' *8, 2) # Convert '11111111' (binary) todecimalprint(f"The smallest number in sequence S is: {smallest_number}")>>> outputThe smallest number in sequence S is: 255Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 3: Find the largest number in the sequence Sfrom sympy import symbols, binomialimport math# Reuse the function to count the number of '1's in binarydef count_ones(n):return bin(n).count('1')smallest_number = int('1' *8, 2)# The largest number in sequence S is represented by 8 consecutive '1'sfollowed by 24 '0'slargest_number = int('1' *8 + '0' *24, 2) # Convert'11111111000000000000000000000000' to decimalprint(f"The largest number in sequence S is: {largest_number}")>>> outputThe largest number in sequence S is: 4278190080Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 4: Calculate the total number of elements in sequence Sfrom sympy import symbols, binomialimport mathdef count_ones(n):return bin(n).count('1')smallest_number = int('1' *8, 2)largest_number = int('1' *8 + '0' *24, 2)# Calculate the total number of elements in sequence S using binomial(32,8)# This represents the number of ways to choose 8 positions out of 32 bitsto set to '1'total_elements = binomial(32, 8)print(f"The total number of elements in sequence S is: {total_elements}")>>> outputThe total number of elements in sequence S is: 10518300Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 5: Find the 1000th number in sequence S38from sympy import symbols, binomialimport math#Reuse all earlier codedef count_ones(n):return bin(n).count('1')smallest_number = int('1' *8, 2)largest_number = int('1' *8 + '0' *24, 2)total_elements = binomial(32, 8)# Find the 1000th number in the sequencecount = 0 # Initialize a counter for the sequenceN = smallest_number # Start from the smallest number in S# Iterate through numbers and check for 8 '1's in the binaryrepresentationwhile count < 1000:if count_ones(N) == 8: # If the current number has exactly 8 '1'scount += 1 # Increment the countif count == 1000: # Stop if we've found the 1000th numberbreakN += 1 # Move to the next numberprint(f"The 1000th number (N) in sequence S is: {N}")>>> outputThe 1000th number (N) in sequence S is: 7032Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 6: Calculate the remainder when N is divided by 1000from sympy import symbols, binomialimport mathdef count_ones(n):return bin(n).count('1')N = 7032 # From the previous step# Calculate the remainder when N is divided by 1000remainder = N % 1000print(f"The remainder when N is divided by 1000 is: {remainder}")>>> outputThe remainder when N is divided by 1000 is: 32### END OF CODEThe final answer is 32Example Problem: Two geometric sequences a1, a2, a3, . . .andb1, b2, b3, . . .have the same com-mon ratio, with a1= 27 b1= 99 , anda15=b11. Find a9Example Solution: Let’s solve this problem step by step using SymPy-based Python code.Step 1: Define the variables and set up the initial conditionsfrom sympy import symbols, Eq, solve# Define symbolsr = symbols('r') # common ratioa1, b1 = 27, 99 # first terms of sequences a and bprint(f"a1 = {a1}, b1 = {b1}")print("Common ratio is represented by r")>>> outputa1 = 27, b1 = 99Common ratio is represented by rLet’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 2: Set up the equation for a15 = b11from sympy import symbols, Eq, solve# Redefine symbolsr = symbols('r')39a1, b1 = 27, 99# Set up the equation a15 = b11eq = Eq(a1 *r**14, b1*r**10)print("Equation: a15 = b11")print(f"Symbolically: {eq}")>>> outputEquation: a15 = b11Symbolically: Eq(27 *r**14, 99*r**10)Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 3: Solve the equation for rfrom sympy import symbols, Eq, solve# Redefine symbolsr = symbols('r')a1, b1 = 27, 99# Set up the equation a15 = b11eq = Eq(a1 *r**14, b1*r**10)# Solve for rsolution = solve(eq, r)print("Solutions for r:")print(solution)>>> outputSolutions for r:[0, -11**(1/4)*3**(3/4)/3, 11 **(1/4)*3**(3/4)/3]Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 4: Identify the correct value for rfrom sympy import symbols, Eq, solve, N# Reuse earlier coder = symbols('r')a1, b1 = 27, 99eq = Eq(a1 *r**14, b1*r**10)solution = solve(eq, r)# The correct value for r is the positive real solutionr_value = max(sol for sol in solution if sol.is_real)print(f"The common ratio r is: {N(r_value, 6)}")>>> outputThe common ratio r is: 1.38378Let’s continue with the next steps to solve this problem and leveraging the outputs of previous steps.Step 5: Calculate a9from sympy import symbols, Eq, solve, N# Reuse earlier coder = symbols('r')a1, b1 = 27, 99eq = Eq(a1 *r**14, b1*r**10)solution = solve(eq, r)r_value = max(sol for sol in solution if sol.is_real)# Calculate a9a9 = a1 *r_value**8print(f"a9 = {N(a9, 10)}")>>> outputa9 = 363.0000000### END OF CODEThe final answer is 36340A.10 Results with Standard DeviationsMethod AMC AIME MathOdysseygreedy maj@7 greedy maj@7 greedy [email protected] 31.16 ( ±1.0) 35.79 9.09 ( ±1.0) 10.91 11.89 ( ±0.6) 16.89PAL 35.79 ( ±1.0) 36.42 27.48 (±0.6) 28.79 27.23 ( ±0.6) 31.01TIR-ToRA 38.59 (±0.6) 43.16 24.64 ( ±3.2) 26.67 27.23 (±0.6) 32.43SBSC (Ours) 49.33 (±3.1)↑10.7−↑6.2 35.45 (±1.7)↑8-↑6.7 39.86 (±1.0)↑12.6-↑7.4GPT-4oCOT 35.94 ( ±0.6) 37.47 10.39 ( ±2.1) 12.12 13.51 ( ±1.0) 17.57PAL 36.48 ( ±0.6) 38.11 24.63 (±0.6) 26.97 15.74 ( ±0.6) 20.27TIR-ToRA 37.33 (±2.5) 40.42 22.42 ( ±1.7) 25.45 19.59 (±2.6) 23.64SBSC (Ours) 44.55 (±0.6)↑7.2 -↑4.1 30.7 (±1.1)↑6.1-↑3.7 26.55 (±1.1)↑7 -↑2.9Table 4: Benchmarking SBSC against different math reasoning methods across three datasets.We report average accuracy over 3 runs with standard deviation within parentheses. Best result in eachsetting is highlighted in bold and second best is underlined . Absolute improvement in performanceby SBSC over the previous best method in each setting is indicated in subscript.41NeurIPS Paper ChecklistThe checklist is designed to encourage best practices for responsible machine learning research,addressing issues of reproducibility, transparency, research ethics, and societal impact. Do not removethe checklist: The papers not including the checklist will be desk rejected. The checklist shouldfollow the references and follow the (optional) supplemental material. The checklist does NOT counttowards the page limit.Please read the checklist guidelines carefully for information on how to answer these questions. Foreach question in the checklist:• You should answer [Yes] , [No] , or [NA] .•[NA] means either that the question is Not Applicable for that particular paper or therelevant information is Not Available.• Please provide a short (1–2 sentence) justification right after your answer (even for NA).The checklist answers are an integral part of your paper submission. They are visible to thereviewers, area chairs, senior area chairs, and ethics reviewers. You will be asked to also include it(after eventual revisions) with the final version of your paper, and its final version will be publishedwith the paper.The reviewers of your paper will be asked to use the checklist as one of the factors in their evaluation.While "[Yes] " is generally preferable to "[No] ", it is perfectly acceptable to answer "[No] " provided aproper justification is given (e.g., "error bars are not reported because it would be too computationallyexpensive" or "we were unable to find the license for the dataset we used"). In general, answering"[No] " or "[NA] " is not grounds for rejection. While the questions are phrased in a binary way, weacknowledge that the true answer is often more nuanced, so please just use your best judgment andwrite a justification to elaborate. All supporting evidence can appear either in the main paper or thesupplemental material, provided in appendix. If you answer [Yes] to a question, in the justificationplease point to the section(s) where related material for the question can be found.IMPORTANT, please:•Delete this instruction block, but keep the section heading “NeurIPS paper checklist" ,•Keep the checklist subsection headings, questions/answers and guidelines below.•Do not modify the questions and only use the provided macros for your answers .1.ClaimsQuestion: Do the main claims made in the abstract and introduction accurately reflect thepaper’s contributions and scope?Answer: [Yes]Justification: We provide detailed experiment results and ablations supporting the claimsmade in abstract.Guidelines:•The answer NA means that the abstract and introduction do not include the claimsmade in the paper.•The abstract and/or introduction should clearly state the claims made, including thecontributions made in the paper and important assumptions and limitations. A No orNA answer to this question will not be perceived well by the reviewers.•The claims made should match theoretical and experimental results, and reflect howmuch the results can be expected to generalize to other settings.•It is fine to include aspirational goals as motivation as long as it is clear that these goalsare not attained by the paper.2.LimitationsQuestion: Does the paper discuss the limitations of the work performed by the authors?Answer: [Yes]42Justification: We only focus on text-based questions. We also just evaluate on integer-answertype questions. We dont explore questions with images. We dont explore proof basedquestions.Guidelines:•The answer NA means that the paper has no limitation while the answer No means thatthe paper has limitations, but those are not discussed in the paper.• The authors are encouraged to create a separate "Limitations" section in their paper.•The paper should point out any strong assumptions and how robust the results are toviolations of these assumptions (e.g., independence assumptions, noiseless settings,model well-specification, asymptotic approximations only holding locally). The authorsshould reflect on how these assumptions might be violated in practice and what theimplications would be.•The authors should reflect on the scope of the claims made, e.g., if the approach wasonly tested on a few datasets or with a few runs. In general, empirical results oftendepend on implicit assumptions, which should be articulated.•The authors should reflect on the factors that influence the performance of the approach.For example, a facial recognition algorithm may perform poorly when image resolutionis low or images are taken in low lighting. Or a speech-to-text system might not beused reliably to provide closed captions for online lectures because it fails to handletechnical jargon.•The authors should discuss the computational efficiency of the proposed algorithmsand how they scale with dataset size.•If applicable, the authors should discuss possible limitations of their approach toaddress problems of privacy and fairness.•While the authors might fear that complete honesty about limitations might be used byreviewers as grounds for rejection, a worse outcome might be that reviewers discoverlimitations that aren’t acknowledged in the paper. The authors should use their bestjudgment and recognize that individual actions in favor of transparency play an impor-tant role in developing norms that preserve the integrity of the community. Reviewerswill be specifically instructed to not penalize honesty concerning limitations.3.Theory Assumptions and ProofsQuestion: For each theoretical result, does the paper provide the full set of assumptions anda complete (and correct) proof?Answer: [NA]Justification: We did not make any theoretical claims. We produced experimental resultssupporting the algorithm presented.Guidelines:• The answer NA means that the paper does not include theoretical results.•All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.•All assumptions should be clearly stated or referenced in the statement of any theorems.•The proofs can either appear in the main paper or the supplemental material, but ifthey appear in the supplemental material, the authors are encouraged to provide a shortproof sketch to provide intuition.•Inversely, any informal proof provided in the core of the paper should be complementedby formal proofs provided in appendix or supplemental material.• Theorems and Lemmas that the proof relies upon should be properly referenced.4.Experimental Result ReproducibilityQuestion: Does the paper fully disclose all the information needed to reproduce the main ex-perimental results of the paper to the extent that it affects the main claims and/or conclusionsof the paper (regardless of whether the code and data are provided or not)?Answer: [Yes]43Justification: We describe all details of our method and benchmark dataset creation. We alsoprovide benchmark datasets in anonymous github. We also provide complete prompt forall the methods in Appendix. We also mention the closed source LLM names and settingsneeded. We also provide prompts as well.Guidelines:• The answer NA means that the paper does not include experiments.•If the paper includes experiments, a No answer to this question will not be perceivedwell by the reviewers: Making the paper reproducible is important, regardless ofwhether the code and data are provided or not.•If the contribution is a dataset and/or model, the authors should describe the steps takento make their results reproducible or verifiable.•Depending on the contribution, reproducibility can be accomplished in various ways.For example, if the contribution is a novel architecture, describing the architecture fullymight suffice, or if the contribution is a specific model and empirical evaluation, it maybe necessary to either make it possible for others to replicate the model with the samedataset, or provide access to the model. In general. releasing code and data is oftenone good way to accomplish this, but reproducibility can also be provided via detailedinstructions for how to replicate the results, access to a hosted model (e.g., in the caseof a large language model), releasing of a model checkpoint, or other means that areappropriate to the research performed.•While NeurIPS does not require releasing code, the conference does require all submis-sions to provide some reasonable avenue for reproducibility, which may depend on thenature of the contribution. For example(a)If the contribution is primarily a new algorithm, the paper should make it clear howto reproduce that algorithm.(b)If the contribution is primarily a new model architecture, the paper should describethe architecture clearly and fully.(c)If the contribution is a new model (e.g., a large language model), then there shouldeither be a way to access this model for reproducing the results or a way to reproducethe model (e.g., with an open-source dataset or instructions for how to constructthe dataset).(d)We recognize that reproducibility may be tricky in some cases, in which caseauthors are welcome to describe the particular way they provide for reproducibility.In the case of closed-source models, it may be that access to the model is limited insome way (e.g., to registered users), but it should be possible for other researchersto have some path to reproducing or verifying the results.5.Open access to data and codeQuestion: Does the paper provide open access to the data and code, with sufficient instruc-tions to faithfully reproduce the main experimental results, as described in supplementalmaterial?Answer: [Yes]Justification: We provide the exemplars (in appendix) and benchmark datasets scripts (in aanonymous github repository) required to reproduce our results. We have also outlined allthe settings and model name for making the api call in the paper.Guidelines:• The answer NA means that paper does not include experiments requiring code.•Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•While we encourage the release of code and data, we understand that this might not bepossible, so “No” is an acceptable answer. Papers cannot be rejected simply for notincluding code, unless this is central to the contribution (e.g., for a new open-sourcebenchmark).•The instructions should contain the exact command and environment needed to run toreproduce the results. See the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.44•The authors should provide instructions on data access and preparation, including howto access the raw data, preprocessed data, intermediate data, and generated data, etc.•The authors should provide scripts to reproduce all experimental results for the newproposed method and baselines. If only a subset of experiments are reproducible, theyshould state which ones are omitted from the script and why.•At submission time, to preserve anonymity, the authors should release anonymizedversions (if applicable).•Providing as much information as possible in supplemental material (appended to thepaper) is recommended, but including URLs to data and code is permitted.6.Experimental Setting/DetailsQuestion: Does the paper specify all the training and test details (e.g., data splits, hyper-parameters, how they were chosen, type of optimizer, etc.) necessary to understand theresults?Answer: [Yes]Justification: We do not train any model. We specify all details on configuration of modelwhile inference. We clearly explain the creation of benchmark datasets and examples usedfor few-shot approach. We clearly outline the experiment details.Guidelines:• The answer NA means that the paper does not include experiments.•The experimental setting should be presented in the core of the paper to a level of detailthat is necessary to appreciate the results and make sense of them.•The full details can be provided either with the code, in appendix, or as supplementalmaterial.7.Experiment Statistical SignificanceQuestion: Does the paper report error bars suitably and correctly defined or other appropriateinformation about the statistical significance of the experiments?Answer: [Yes]Justification: We ran all the experiments multiple times and report average. In fact, we alsobenchmark against 7 runs of other SOTA methods against. We also do ablations to measuresensitivity. We also report standard deviation valuesGuidelines:• The answer NA means that the paper does not include experiments.•The authors should answer "Yes" if the results are accompanied by error bars, confi-dence intervals, or statistical significance tests, at least for the experiments that supportthe main claims of the paper.•The factors of variability that the error bars are capturing should be clearly stated (forexample, train/test split, initialization, random drawing of some parameter, or overallrun with given experimental conditions).•The method for calculating the error bars should be explained (closed form formula,call to a library function, bootstrap, etc.)• The assumptions made should be given (e.g., Normally distributed errors).•It should be clear whether the error bar is the standard deviation or the standard errorof the mean.•It is OK to report 1-sigma error bars, but one should state it. The authors shouldpreferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesisof Normality of errors is not verified.•For asymmetric distributions, the authors should be careful not to show in tables orfigures symmetric error bars that would yield results that are out of range (e.g. negativeerror rates).•If error bars are reported in tables or plots, The authors should explain in the text howthey were calculated and reference the corresponding figures or tables in the text.8.Experiments Compute Resources45Question: For each experiment, does the paper provide sufficient information on the com-puter resources (type of compute workers, memory, time of execution) needed to reproducethe experiments?Answer: [Yes]Justification: We mainly make api calls which we have mentioned. so hence outline therequirement of internet access.Guidelines:• The answer NA means that the paper does not include experiments.•The paper should indicate the type of compute workers CPU or GPU, internal cluster,or cloud provider, including relevant memory and storage.•The paper should provide the amount of compute required for each of the individualexperimental runs as well as estimate the total compute.•The paper should disclose whether the full research project required more computethan the experiments reported in the paper (e.g., preliminary or failed experiments thatdidn’t make it into the paper).9.Code Of EthicsQuestion: Does the research conducted in the paper conform, in every respect, with theNeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ?Answer: [Yes]Justification: Our paper follows the ethics mentioned.Guidelines:•The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.•If the authors answer No, they should explain the special circumstances that require adeviation from the Code of Ethics.•The authors should make sure to preserve anonymity (e.g., if there is a special consid-eration due to laws or regulations in their jurisdiction).10.Broader ImpactsQuestion: Does the paper discuss both potential positive societal impacts and negativesocietal impacts of the work performed?Answer: [NA]Justification: Our paper focus on math solving abilities of AI. It wont help in misinformationGuidelines:• The answer NA means that there is no societal impact of the work performed.•If the authors answer NA or No, they should explain why their work has no societalimpact or why the paper does not address societal impact.•Examples of negative societal impacts include potential malicious or unintended uses(e.g., disinformation, generating fake profiles, surveillance), fairness considerations(e.g., deployment of technologies that could make decisions that unfairly impact specificgroups), privacy considerations, and security considerations.•The conference expects that many papers will be foundational research and not tiedto particular applications, let alone deployments. However, if there is a direct path toany negative applications, the authors should point it out. For example, it is legitimateto point out that an improvement in the quality of generative models could be used togenerate deepfakes for disinformation. On the other hand, it is not needed to point outthat a generic algorithm for optimizing neural networks could enable people to trainmodels that generate Deepfakes faster.•The authors should consider possible harms that could arise when the technology isbeing used as intended and functioning correctly, harms that could arise when thetechnology is being used as intended but gives incorrect results, and harms followingfrom (intentional or unintentional) misuse of the technology.46•If there are negative societal impacts, the authors could also discuss possible mitigationstrategies (e.g., gated release of models, providing defenses in addition to attacks,mechanisms for monitoring misuse, mechanisms to monitor how a system learns fromfeedback over time, improving the efficiency and accessibility of ML).11.SafeguardsQuestion: Does the paper describe safeguards that have been put in place for responsiblerelease of data or models that have a high risk for misuse (e.g., pretrained language models,image generators, or scraped datasets)?Answer: [NA]Justification: The datasets we created are scraped from reputed sites and consists mainly ofmath problems for students.Guidelines:• The answer NA means that the paper poses no such risks.•Released models that have a high risk for misuse or dual-use should be released withnecessary safeguards to allow for controlled use of the model, for example by requiringthat users adhere to usage guidelines or restrictions to access the model or implementingsafety filters.•Datasets that have been scraped from the Internet could pose safety risks. The authorsshould describe how they avoided releasing unsafe images.•We recognize that providing effective safeguards is challenging, and many papers donot require this, but we encourage authors to take this into account and make a bestfaith effort.12.Licenses for existing assetsQuestion: Are the creators or original owners of assets (e.g., code, data, models), used inthe paper, properly credited and are the license and terms of use explicitly mentioned andproperly respected?Answer: [Yes]Justification: We cite all the related works and follow the licences.Guidelines:• The answer NA means that the paper does not use existing assets.• The authors should cite the original paper that produced the code package or dataset.•The authors should state which version of the asset is used and, if possible, include aURL.• The name of the license (e.g., CC-BY 4.0) should be included for each asset.•For scraped data from a particular source (e.g., website), the copyright and terms ofservice of that source should be provided.•If assets are released, the license, copyright information, and terms of use in the packageshould be provided. For popular datasets, paperswithcode.com/datasets hascurated licenses for some datasets. Their licensing guide can help determine the licenseof a dataset.•For existing datasets that are re-packaged, both the original license and the license ofthe derived asset (if it has changed) should be provided.•If this information is not available online, the authors are encouraged to reach out tothe asset’s creators.13.New AssetsQuestion: Are new assets introduced in the paper well documented and is the documentationprovided alongside the assets?Answer: [Yes]Justification: The new assets are mainly benchmark datasets provided in the anonymousgithub repository are of standard dataset format.Guidelines:47• The answer NA means that the paper does not release new assets.•Researchers should communicate the details of the dataset/code/model as part of theirsubmissions via structured templates. This includes details about training, license,limitations, etc.•The paper should discuss whether and how consent was obtained from people whoseasset is used.•At submission time, remember to anonymize your assets (if applicable). You can eithercreate an anonymized URL or include an anonymized zip file.14.Crowdsourcing and Research with Human SubjectsQuestion: For crowdsourcing experiments and research with human subjects, does the paperinclude the full text of instructions given to participants and screenshots, if applicable, aswell as details about compensation (if any)?Answer: [NA]Justification: No crowdsourcing or research with human subjectsGuidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Including this information in the supplemental material is fine, but if the main contribu-tion of the paper involves human subjects, then as much detail as possible should beincluded in the main paper.•According to the NeurIPS Code of Ethics, workers involved in data collection, curation,or other labor should be paid at least the minimum wage in the country of the datacollector.15.Institutional Review Board (IRB) Approvals or Equivalent for Research with HumanSubjectsQuestion: Does the paper describe potential risks incurred by study participants, whethersuch risks were disclosed to the subjects, and whether Institutional Review Board (IRB)approvals (or an equivalent approval/review based on the requirements of your country orinstitution) were obtained?Answer: [NA]Justification: No crowdsourcing or research with human subjectsGuidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Depending on the country in which research is conducted, IRB approval (or equivalent)may be required for any human subjects research. If you obtained IRB approval, youshould clearly state this in the paper.•We recognize that the procedures for this may vary significantly between institutionsand locations, and we expect authors to adhere to the NeurIPS Code of Ethics and theguidelines for their institution.•For initial submissions, do not include any information that would break anonymity (ifapplicable), such as the institution conducting the review.48 |
IV7BPdg1mM | Models Can and Should Embrace the CommunicativeNature of Human-Generated MathSasha Boguraevy, Ben Lipkinz, Leonie Weissweilery, Kyle MahowaldyyDepartment of Linguistics, The University of Texas at AustinzDepartment of Brain and Cognitive Sciences, Massachusetts Institute of Technology{sasha.boguraev, weissweiler, kyle}@[email protected] is constructed by people for people: just as natural language corpora reflectnot just propositions but the communicative goals of language users, the mathdata that models are trained on reflects not just idealized mathematical entities butrich communicative intentions. While there are important advantages to treatingmath in a purely symbolic manner, we here hypothesize that there are comple-mentary benefits to treating math as situated linguistic communication and thatlanguage models are well suited for this goal, in ways that are not fully appreciated.We illustrate these points with two case studies. First, we ran an experiment inwhich we found that language models interpret the equals sign in a humanlikeway—generating systematically different word problems for the same underlyingequation arranged in different ways. Second, we found that language modelsprefer proofs to be ordered in naturalistic ways, even though other orders would belogically equivalent. We advocate for AI systems that learn from and represent thecommunicative intentions latent in human-generated math.Mathematical propositions are first of all English sentences; not only English sentences,but each mathematical proposition has a resemblance to certain non-mathematical propositions.—Ludwig Wittgenstein, Lectures on the Foundations of Mathematics, 19391 IntroductionLanguage Models sometimes rely on heuristics and statistics rather than being perfectly compositionalidealized reasoners, especially in domains like math and logic [ 27,30,33,34,36,42,45,47]. Whereaslanguage production and comprehension involve some idealized composition using abstract rules[8,20], in tandem with memorization and pragmatic inference [ 9,16], math and logic reflect domainswhere one might expect an idealized compositional system to be required for obtaining precisesolutions. Indeed, whether an expression is written 5 +x= 7 or75 = xor “What is 5 lessthan 7?” or “Seven frogs were sitting on a log. Five left. How many are there now?”, there is anunderlying computation that can be extracted and performed (namely, the expression 75). Toproperly solve these problems, the thinking goes, systems should abstract away from their situatedformat into symbolic space.There is an intuitive, and well-justified, idea that competent human mathematical reasoners employexactly this kind of abstraction. By contrast, less competent mathematical reasoners (e.g., childrenstruggling to learn math) are often shown to rely on heuristics, schemas, and keywords [ 6,10,24,38,48]. For instance, kids might learn that every time they see the phrase “in total” in a word problem,they should add up all the numbers [ 39]. While the “heuristic” keyword-based direct translationapproach may be less cognitively taxing, it is also prone to translation errors [ 49]. Students who38th Conference on Neural Information Processing Systems (NeurIPS 2024).Forward Equation ( e!")Reverse Equation ( e!")Forward Word Problem ( w!")Reverse Word Problem ( w!")Recovered Equation ( e!"#)Recovered Equation ( e!"#)Sallyhas4candiesandreceives3candiesforeverystickershehas.Jimmyhas9candiesforeverystickerhehasbutloses2candies.Iftheyendupwiththesamenumberofcandies,howmanystickersdotheyeachhave?Alexhasninepacksoftradingcards,eachwiththesamenumberofcards.Hegivesaway2cardsandnowhas4cardsmorethanthreepacksoftradingcards.Howmanycardsareineachpack?Recovered Equation ( e!"#)Recovered Equation ( e!"#)4+3x=9x−29x−2=4+3x4+3x=9x−29x−2=4+3x4+3x=9x−29x−2=4+3xFigure 1: For each pair of equations, we generate corresponding word problems and then try torecover the equations from those problems. The model often recovered the original ordering.report adopting the more involved strategy of first parsing a math word problem into a structuredmental model, then planning computation and finally evaluating the solution in that space, are moresuccessful problem solvers [19].Taken together, these ideas might make it seem like the goal of AI math models should be to leavethe messy domain of language behind and translate expressions into symbolic representations. And,indeed, combining language models with symbolic solvers has proven successful in a variety of mathand reasoning domains [4, 14, 18, 32, 46, 53].Here, we argue that something is lost when disregarding the original context. We introduce theCommunicative Math Hypothesis :Math is constructed by people, for people. As such, there areconventions and pragmatics that people bring to the production and comprehension of mathematicalexpressions—communicative interpretations that go beyond the purely symbolic. Such traces ofinformation are particularly well suited for study via the tools of linguistics and cognitive science.The choice to write 3x+ 9instead of 3(x+ 3) conveys something to the reader, even though they areequivalent. Similarly, the proof of a theorem is not only a formalization that could be computationallyverified, but is a communicative act, with intention of being internalized and understood by others.Drawing on research in math education that we believe is underappreciated in machine learning, wemake the case for AI researchers to take the Communicative Math Hypothesis seriously. We presentsome initial proof-of-concept experiments showing that LLMs pick up on these communicativeregularities. We argue that this information should not always be ignored or explained away, but is acrucial component of human mathematics.2 Case Study One: Equations are AsymmetricAsymmetry in human mathematical interpretation has long been studied in math education. Inparticular, there is a wealth of literature on the perils of grade-school-aged children’s asymmetricalunderstanding of math – that is, a difficulty in reasoning with a problem such as = 2 + 4 , despiterelative comfort with the complementary equation of 2 + 4 = [2,37]. But, such sensitivity toasymmetry is not isolated to students. Even expert mathematicians understand math asymmetrically[29], offering different interpretations of equivalent expressions based on what is on the left orright of the equals sign. Here, we present results from a case study demonstrating that LLMs aresensitive to such asymmetries in equations as well and, like humans, do not learn a purely symmetricalinterpretation of the equals sign.Methods To test LLMs’ sensitivity to symmetry, we conduct an experiment assessing their ability toreconstruct the equations they used to create a specific word problem, as shown in Figure 1. Formally,we perform a three-step experiment. We first generate a set of npaired forward and reverse equations,denoted as E=fe1; e2; : : : ; e ng, where each paired equation eiconsists of the forward equation eifand the reverse equation eir. Thus, we can express each eiasei=feif; eirg. Next, for each of ournpairs, we pass both equations in eito GPT-4o, and prompt it to generate a corresponding pair ofword problems, wi=fwif,wirg, that could be solved by ei, with W=fw1: : : w ng. We finallyask the LLM to extract the equations e0i=fe0if,e0irgfor each wi2W, with E0=fe01: : : e0ng. Ourhypothesis is that across all nequations, the LLMs will more often recover e0iffrom eifande0irfromeir. For details on the equations used, their generation, and model prompting, see Appendix A.2Results and Discussion We measure the recovery rate of an equation’s original order and therespective reverse order using GPT-4o across 5 different sets of 200 pairs of randomly generatedstarting equations. We found that the original equation was recovered on average 52% of the timewith a 95% CI of [51%, 54%] across 5 runs. The reverse equation was nearly never recovered: 0.2%of the time, with a 95% CI of [0.0%, 0.4%] – a mere 3 times over the 1000 samples.These results suggest a difference between the word problems generated from a “forward” equationand word problems generated from a logically equivalent “reverse” equation, and that this differenceis itself recoverable by GPT-4o. We posit that this information, which a purely symbolic solver wouldbe agnostic to, is crucial information for systems that aim to use math in collaboration with humansor in human-like ways. These findings are consistent with work showing that premise order mattersin LLMs’ ability to reason [ 3,7,51], although they frame this order sensitivity as primarily revealingLLMs’ brittleness. We interpret these findings (and theirs) as revealing sensitivity to importantcommunicative factors inherent in the data.3 Case Study Two: Mathematical Rules and Proofs Have OrdersOur second case study focuses on mathematical communication of the sort more likely to take placeamong professional mathematicians: mathematical rules and proofs. Proofs, in particular, are widelyused in academic math, as well as related fields, and are duly an area of major focus for AI for math.Proofs are written to communicate truths that are, in some sense, tautological. Nonetheless, math-ematicians have strong expectations and interpretations about the directionality of equation. Forinstance, there are generalized principles associated with equal signs, like that the right side of theequation expounds upon or explains the left side [ 29]. Thus, while a=bandb=aare equivalentstatements by our agreed-upon set of axioms and inference rules, the choice of one or the other mightcommunicate a different message when used in a proof.To explore the preferred orderings used by mathematicians in proofs and rules, Mirin and Dawkins[29]utilize a set of breaching experiments . Breaching experiments are a class of experiments whichtry to break rules in an attempt to confirm their existence [ 41]. In particular, the authors first providedexpert mathematicians with a host of formal mathematical equations, such as the distributive ruleor an inductive proof. However, these equations were ordered in an unnatural manner – that is, inthe case of rules, orders which are not commonly encountered in formal mathematical texts, or inthe case of proofs, orders in which steps do not sequentially build from one to the next. The authorsmeasured whether these mathematicians reported any perceived breaches, with any such breachesproviding evidence for the existence of the mathematicians’ ordering preferences. Our case studyinto LLM ordering preferences in formal mathematics follows in this vein, measuring LLM surprisalsfor various natural (extant) and unnatural (unobserved) equation orderings.Methods Our set of mathematical equations consists of all examples used in the breaching ex-periments of Mirin and Dawkins [29]. This totals ten different examples, six of which are one lineequivalences, expressing common mathematical rules, and the other four of which are a series ofequivalences comprising a longer proofs. Each example further contains a brief textual introductionbefore the series of equivalences. All examples are reported in Appendix B.We first split each equation into its individual expressions. We then generate every possible orderingof a given equation by permuting the order of these individual expressions. Finally, for each modelwe calculate the average per-token surprisal for every ordering of expressions in a given equation,conditioned on that equation’s textual introductions. Our calculations are performed using theminicons package [31], a wrapper around Huggingface’s transformers package [50].In this case study, we use the instruction-tuned variants of four models: LLaMa 3.1 8B [ 12], Mistral7B v0.3 [ 23], Mathstral 7B [ 1], and Qwen2-Math 7B[ 52]. Two of these models were trained ongeneral corpora (LLaMa and Mistral), the other two fine-tuned on math (Mathstral and Qwen2-Math).Equation Variants To control for our ten equations potentially being within the evaluated model’straining data, we also performed evaluations with three sets of modified, but equivalent, variantsof each equation. Our first variants consist of all proofs reworded in a logically equivalent, butexpressively different, manner. Our second variants systematically replace all equation variablenames, and some rule names, with emojis – maintaining the correctness of equations, but presenting3differencequotientdistributiveexponentsdiff ruleexponentspower ruleexponentsprod rulehomomorphism inductionproductruleproofsettheoryOriginal Equations Reworded Emoji Substitution Reworded + EmojisLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwenLlamaMistralMathstralQwen0246024602460246per token surprisalFigure 2: We compare average per-token surprisal for different, logically equivalent orderings ofexpressions in proofs from Mirin and Dawkins [29](first row), and corresponding variants (secondthrough fourth row). We find that the original order ( ) has lower per-token surprisals on average(more probable) than equivalent counterfactual orders.them in a unique, unseen, manner. Our last set of variants combines the two previous variants,substituting emojis into the reworded variants. All variants are reported in Appendix C.Results and Discussion As seen in Figure 2, the evaluated models display clear and consistentpreferences for the natural ordering in nine of ten equations. In seven of these, all models display uni-form preference for each equation’s natural ordering. Of the remaining two equations (D IFFERENCEQUOTIENT and P ROOF ), only a few nearby orders had a lower surprisal than the natural orders (99.6thand 98.3rdpercentiles, respectively). The only equation for which there is no clear model preferencefor the natural form is PRODUCT RULE , but this was also a rule noted as unusual by participants inMirin and Dawkins [29]: mathematicians expressed surprise at seeing fandginstead of f(x)andg(x). When we instead use the latter notation, we see consistent preferences for the natural order. Wedo not find significant differences between the performances of math-fine-tuned models and moregeneralized language models across all equations (paired t= 0:606,p= 0:548).These results are further consistent across our equation variants (Figure 2). While there is minorvariability here (e.g., after rewording proofs, models no longer display clear preferences in E XPONENTPROD RULEbut do display clear preferences for natural orderings in PRODUCT RULE ) the evaluatedLLMs maintain clear and consistent preferences for natural equation orderings even when modified.These results suggest that LLMs agree with expert mathematicians in their preferences for orderingof proofs and rules, that is in a manner which expresses clear communicative intent. Further, our4work with equation variants suggest that they are aligned due to more than just memorizing trainingdata. This alignment leads to AI systems able to produce math interpretable by those using them,which in comparison to much of the uninterpretable math produced by symbolic solvers and logicprogramming systems, is a highly desirable quality. As such, while the proofs LLMs produce in theircurrent iteration may not always be correct, any remedies attempting to improve on that correctnessshould not do so to the detriment of this alignment, if the goal is human use.4 Practical ApplicationsWe focused our experiments on equation asymmetry and proof ordering, showing that LLMs learnextra-symbolic communicative information in both domains. But these principles encompass a muchbroader class of phenomena. For instance, several patterns identified as reflecting LLMs’ brittlenessmay instead be fruitfully seen as contributing to the communicative interpretation of math.•Even though they don’t matter logically, variable names matter for communicating math (e.g.,functions are often fandg). This pattern extends to programming as well [21, 28].•Logically extraneous or pragmatically anomalous information can matter for inferences about howexpressions are interpreted [35, 45].•Notation choice and instruction/prompt phrasing can matter for how problems are solved [ 17,22].Seeing these aspects of LLMs as possible features, and not bugs, could be an important step indeveloping AI systems that can work with humans. For instance, working mathematicians werelong limited to purely symbolic theorem provers. Such systems in isolation neglect the more humanaspects of math, ignoring differences in style and comprehensibility. We recommend developing proofassistants that are sensitive to these regularities in human proof-writing and other communicativecues. LLM-based proof systems offer the promise of mathematical assistants that can work withpeople [ 11], alongside them and not just forthem as blackbox tools. Below we discuss this idea intwo particularly relevant domains.Math Education Math educators leverage their explicit and implicit understanding of mathematicalcommunicative signaling to enhance teaching. They carefully choose problems presented in mannersthat probe the intended concepts [ 26,44]. They can identify subtle misunderstandings in theirstudents’ reasoning just by observing how they discuss mathematical concepts and use them inpractice [ 5,25,40,43]. These are key skills for educators, allowing for more efficient and effectiveteaching. As we move towards building AI assistants for math education, it is pertinent to developsystems that, like math educators, can both produce and identify these rich communicative signals.Math Research The furthering of knowledge in any field depends upon the ability to communicatenew ideas. If AI math systems are unable to communicate with those using them, we risk merelydeveloping powerful systems that remain limited in their benefits. Instead, we advocate for buildingsystems that produce math in a manner which is communicative and human-like by design, offeringpromise of furthering our collective mathematical knowledge base. While there is some benefit tosystems that can solve problems and prove theorems that humans cannot, what we gain is limitedif little information from their methods can be communicated and thusly understood. Of course,AI systems augmented with symbolic solvers do possess the necessary qualities of correctness androbustness which current, non-augmented, LLMs do not. We are not arguing that future AI mathsystems should lose these qualities, merely that they should also possess communicative sensitivity,something that many current approaches lack. Developing a new generation of hybrid systems, thatwork through problems via human-interpretable traces, while in parallel formalizing and verifyingvia symbolic means, appears to be a fruitful path forward.5 ConclusionWhile necessarily fuzzier than purely symbolic representations, the communicative principles inhuman-generated math are not lawless or illogical but can be studied, systematized, and modeled asrational behavior—as they are in linguistics and cognitive science [ 9,13,15]. We join Zhang et al.[54]in their call for a cognitive science perspective on AI and mathematics, centering the role ofmath as a group activity and communicative endeavor. The math of the people, by the people, for thepeople, shall not perish from our models.5AcknowledgmentsWe would like to thank Paul Dawkins for valuable discussions and insights on mathematical asymme-try and, more generally, the math education literature. We would further like to thank Qing Yao andthe computational linguistics research group at UT Austin for their valuable discussions and insightson this work. We would also like to thank Kanishka Misra for assistance with the minicons package,and comments on the manuscript. We acknowledge funding from NSF CAREER grant 2339729 (toKyle Mahowald).References[1]AI, M. (2024). Mathstral.[2]Behr, M., Erlwanger, S., and Nichols, E. (1980). How Children View the Equals Sign. Mathe-matics Teaching , 92(1):13–15.[3]Berglund, L., Tong, M., Kaufmann, M., Balesni, M., Stickland, A. C., Korbak, T., and Evans, O.(2024). The Reversal Curse: LLMs Trained on "A is B" fail to Learn "B is A".[4]Borazjanizadeh, N. and Piantadosi, S. T. (2024). Reliable Reasoning Beyond Natural Language.arXiv preprint arXiv:2407.11373 .[5]Bray, W. S. (2011). A Collective Case Study of the Influence of Teachers’ Beliefs and Knowledgeon Error-Handling Practices During Class Discussion of Mathematics. Journal for Research inMathematics Education , 42(1):2–38.[6]Briars, D. and Larkin, J. (1984). An Integrated Model of Skill in Solving Elementary WordWroblems. Cognition and Instruction , 1(3):245296.[7]Chen, X., Chi, R. A., Wang, X., and Zhou, D. (2024). Premise Order Matters in Reasoning withLarge Language Models. In International Conference on Machine Learning . PMLR.[8]Chomsky, N. (1957). Syntactic Structures . The Hague: Mouton.[9]Clark, H. H. (1996). Using Language . Cambridge university press.[10] Clement, L. and Bernhard, J. (2005). A Problem-Solving Alternative to Using Key Words.Mathematics Teaching in the Middle School , 10(7):360365.[11] Collins, K. M., Sucholutsky, I., Bhatt, U., Chandra, K., Wong, L., Lee, M., Zhang, C. E.,Zhi-Xuan, T., Ho, M., Mansinghka, V., et al. (2024). Building Machines that Learn and Thinkwith People. arXiv preprint arXiv:2408.03943 .[12] Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten,A., Yang, A., Fan, A., Goyal, A., Hartshorn, A., Yang, A., Mitra, A., Sravankumar, A., Korenev, A.,Hinsvark, A., Rao, A., Zhang, A., Rodriguez, A., Gregerson, A., Spataru, A., Roziere, B., Biron,B., Tang, B., Chern, B., Caucheteux, C., Nayak, C., Bi, C., Marra, C., McConnell, C., Keller, C.,Touret, C., Wu, C., Wong, C., Ferrer, C. C., Nikolaidis, C., Allonsius, D., Song, D., Pintz, D.,Livshits, D., Esiobu, D., Choudhary, D., Mahajan, D., Garcia-Olano, D., Perino, D., Hupkes, D.,Lakomkin, E., AlBadawy, E., Lobanova, E., Dinan, E., Smith, E. M., Radenovic, F., Zhang, F.,Synnaeve, G., Lee, G., Anderson, G. L., Nail, G., Mialon, G., Pang, G., Cucurell, G., Nguyen,H., Korevaar, H., Xu, H., Touvron, H., Zarov, I., Ibarra, I. A., Kloumann, I., Misra, I., Evtimov,I., Copet, J., Lee, J., Geffert, J., Vranes, J., Park, J., Mahadeokar, J., Shah, J., van der Linde, J.,Billock, J., Hong, J., Lee, J., Fu, J., Chi, J., Huang, J., Liu, J., Wang, J., Yu, J., Bitton, J., Spisak,J., Park, J., Rocca, J., Johnstun, J., Saxe, J., Jia, J., Alwala, K. V., Upasani, K., Plawiak, K., Li,K., Heafield, K., Stone, K., El-Arini, K., Iyer, K., Malik, K., Chiu, K., Bhalla, K., Rantala-Yeary,L., van der Maaten, L., Chen, L., Tan, L., Jenkins, L., Martin, L., Madaan, L., Malo, L., Blecher,L., Landzaat, L., de Oliveira, L., Muzzi, M., Pasupuleti, M., Singh, M., Paluri, M., Kardas, M.,Oldham, M., Rita, M., Pavlova, M., Kambadur, M., Lewis, M., Si, M., Singh, M. K., Hassan, M.,Goyal, N., Torabi, N., Bashlykov, N., Bogoychev, N., Chatterji, N., Duchenne, O., Çelebi, O.,Alrassy, P., Zhang, P., Li, P., Vasic, P., Weng, P., Bhargava, P., Dubal, P., Krishnan, P., Koura, P. S.,Xu, P., He, Q., Dong, Q., Srinivasan, R., Ganapathy, R., Calderer, R., Cabral, R. S., Stojnic, R.,6Raileanu, R., Girdhar, R., Patel, R., Sauvestre, R., Polidoro, R., Sumbaly, R., Taylor, R., Silva,R., Hou, R., Wang, R., Hosseini, S., Chennabasappa, S., Singh, S., Bell, S., Kim, S. S., Edunov,S., Nie, S., Narang, S., Raparthy, S., Shen, S., Wan, S., Bhosale, S., Zhang, S., Vandenhende, S.,Batra, S., Whitman, S., Sootla, S., Collot, S., Gururangan, S., Borodinsky, S., Herman, T., Fowler,T., Sheasha, T., Georgiou, T., Scialom, T., Speckbacher, T., Mihaylov, T., Xiao, T., Karn, U.,Goswami, V ., Gupta, V ., Ramanathan, V ., Kerkez, V ., Gonguet, V ., Do, V ., V ogeti, V ., Petrovic, V .,Chu, W., Xiong, W., Fu, W., Meers, W., Martinet, X., Wang, X., Tan, X. E., Xie, X., Jia, X., Wang,X., Goldschlag, Y., Gaur, Y., Babaei, Y., Wen, Y., Song, Y., Zhang, Y., Li, Y., Mao, Y., Coudert,Z. D., Yan, Z., Chen, Z., Papakipos, Z., Singh, A., Grattafiori, A., Jain, A., Kelsey, A., Shajnfeld,A., Gangidi, A., Victoria, A., Goldstand, A., Menon, A., Sharma, A., Boesenberg, A., Vaughan,A., Baevski, A., Feinstein, A., Kallet, A., Sangani, A., Yunus, A., Lupu, A., Alvarado, A., Caples,A., Gu, A., Ho, A., Poulton, A., Ryan, A., Ramchandani, A., Franco, A., Saraf, A., Chowdhury,A., Gabriel, A., Bharambe, A., Eisenman, A., Yazdan, A., James, B., Maurer, B., Leonhardi, B.,Huang, B., Loyd, B., Paola, B. D., Paranjape, B., Liu, B., Wu, B., Ni, B., Hancock, B., Wasti, B.,Spence, B., Stojkovic, B., Gamido, B., Montalvo, B., Parker, C., Burton, C., Mejia, C., Wang, C.,Kim, C., Zhou, C., Hu, C., Chu, C.-H., Cai, C., Tindal, C., Feichtenhofer, C., Civin, D., Beaty,D., Kreymer, D., Li, D., Wyatt, D., Adkins, D., Xu, D., Testuggine, D., David, D., Parikh, D.,Liskovich, D., Foss, D., Wang, D., Le, D., Holland, D., Dowling, E., Jamil, E., Montgomery, E.,Presani, E., Hahn, E., Wood, E., Brinkman, E., Arcaute, E., Dunbar, E., Smothers, E., Sun, F.,Kreuk, F., Tian, F., Ozgenel, F., Caggioni, F., Guzmán, F., Kanayet, F., Seide, F., Florez, G. M.,Schwarz, G., Badeer, G., Swee, G., Halpern, G., Thattai, G., Herman, G., Sizov, G., Guangyi,Zhang, Lakshminarayanan, G., Shojanazeri, H., Zou, H., Wang, H., Zha, H., Habeeb, H., Rudolph,H., Suk, H., Aspegren, H., Goldman, H., Damlaj, I., Molybog, I., Tufanov, I., Veliche, I.-E.,Gat, I., Weissman, J., Geboski, J., Kohli, J., Asher, J., Gaya, J.-B., Marcus, J., Tang, J., Chan,J., Zhen, J., Reizenstein, J., Teboul, J., Zhong, J., Jin, J., Yang, J., Cummings, J., Carvill, J.,Shepard, J., McPhie, J., Torres, J., Ginsburg, J., Wang, J., Wu, K., U, K. H., Saxena, K., Prasad,K., Khandelwal, K., Zand, K., Matosich, K., Veeraraghavan, K., Michelena, K., Li, K., Huang, K.,Chawla, K., Lakhotia, K., Huang, K., Chen, L., Garg, L., A, L., Silva, L., Bell, L., Zhang, L., Guo,L., Yu, L., Moshkovich, L., Wehrstedt, L., Khabsa, M., Avalani, M., Bhatt, M., Tsimpoukelli, M.,Mankus, M., Hasson, M., Lennie, M., Reso, M., Groshev, M., Naumov, M., Lathi, M., Keneally,M., Seltzer, M. L., Valko, M., Restrepo, M., Patel, M., Vyatskov, M., Samvelyan, M., Clark, M.,Macey, M., Wang, M., Hermoso, M. J., Metanat, M., Rastegari, M., Bansal, M., Santhanam, N.,Parks, N., White, N., Bawa, N., Singhal, N., Egebo, N., Usunier, N., Laptev, N. P., Dong, N.,Zhang, N., Cheng, N., Chernoguz, O., Hart, O., Salpekar, O., Kalinli, O., Kent, P., Parekh, P., Saab,P., Balaji, P., Rittner, P., Bontrager, P., Roux, P., Dollar, P., Zvyagina, P., Ratanchandani, P., Yuvraj,P., Liang, Q., Alao, R., Rodriguez, R., Ayub, R., Murthy, R., Nayani, R., Mitra, R., Li, R., Hogan,R., Battey, R., Wang, R., Maheswari, R., Howes, R., Rinott, R., Bondu, S. J., Datta, S., Chugh, S.,Hunt, S., Dhillon, S., Sidorov, S., Pan, S., Verma, S., Yamamoto, S., Ramaswamy, S., Lindsay, S.,Lindsay, S., Feng, S., Lin, S., Zha, S. C., Shankar, S., Zhang, S., Zhang, S., Wang, S., Agarwal, S.,Sajuyigbe, S., Chintala, S., Max, S., Chen, S., Kehoe, S., Satterfield, S., Govindaprasad, S., Gupta,S., Cho, S., Virk, S., Subramanian, S., Choudhury, S., Goldman, S., Remez, T., Glaser, T., Best,T., Kohler, T., Robinson, T., Li, T., Zhang, T., Matthews, T., Chou, T., Shaked, T., V ontimitta, V .,Ajayi, V., Montanez, V., Mohan, V ., Kumar, V . S., Mangla, V ., Albiero, V ., Ionescu, V., Poenaru,V ., Mihailescu, V. T., Ivanov, V ., Li, W., Wang, W., Jiang, W., Bouaziz, W., Constable, W., Tang,X., Wang, X., Wu, X., Wang, X., Xia, X., Wu, X., Gao, X., Chen, Y ., Hu, Y ., Jia, Y ., Qi, Y ., Li, Y .,Zhang, Y ., Zhang, Y ., Adi, Y ., Nam, Y ., Yu, Wang, Hao, Y ., Qian, Y ., He, Y ., Rait, Z., DeVito, Z.,Rosnbrick, Z., Wen, Z., Yang, Z., and Zhao, Z. (2024). The Llama 3 Herd of Models.[13] Frank, M. C. and Goodman, N. D. (2012). Predicting Pragmatic Reasoning in Language Games.Science , 336(6084):998–998. Publisher: American Association for the Advancement of Science.[14] Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan, J., and Neubig, G. (2023).PAL: Program-aided Language Models. In International Conference on Machine Learning , pages10764–10799. PMLR.[15] Gibson, E., Futrell, R., Piandadosi, S. T., Dautriche, I., Mahowald, K., Bergen, L., and Levy, R.(2019). How Efficiency Shapes Human Language. Trends in Cognitive Sciences .[16] Goldberg, Y . (2019). Assessing BERT’s Syntactic Abilities. arXiv preprint arXiv:1901.05287 .7[17] Güçler, B. (2014). The Role of Symbols in Mathematical Communication: The Case of theLimit Notation. Research in Mathematics Education , 16(3):251–268.[18] He-Yueya, J., Poesia, G., Wang, R., and Goodman, N. (2023). Solving Math Word Problemsby Combining Language Models With Symbolic Solvers. In The 3rd Workshop on MathematicalReasoning and AI at NeurIPS’23 .[19] Hegarty, M., Mayer, R. E., and Monk, C. A. (1995). Comprehension of arithmetic wordproblems: A comparison of successful and unsuccessful problem solvers. Journal of EducationalPsychology , 87(1):18.[20] Heim, I. and Kratzer, A. (1998). Semantics in Generative Grammar . Wiley-Blackwell, Malden,MA.[21] Hersh, R. (1998). What is Mathematics, Really? Mitteilungen der Deutschen Mathematiker-Vereinigung , 6(2):13–14.[22] Iverson, K. E. (1979). Notation as a Tool of Thought. Communications of the ACM ,23(8):444–465. ACM Turing Award Lecture, Delivered at ACM ’79, Detroit, Oct. 29, 1979.[23] Jiang, A. Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D. S., de las Casas, D.,Bressand, F., Lengyel, G., Lample, G., Saulnier, L., Lavaud, L. R., Lachaux, M.-A., Stock, P.,Scao, T. L., Lavril, T., Wang, T., Lacroix, T., and Sayed, W. E. (2023). Mistral 7B.[24] Karp, K. S., Bush, S. B., and Dougherty, B. J. (2019). Avoiding the Ineffective KeywordStrategy. Teaching Children Mathematics , 25(7):428–435.[25] Kingsdorf, S. and Krawec, J. (2014). Error Analysis of Mathematical Word Problem SolvingAcross Students with and without Learning Disabilities. Learning Disabilities Research & Practice ,29(2):66–74.[26] Liz, B., Dreyfus, T., Mason, J., Tsamir, P., Watson, A., and Zaslavsky, O. (2006). Exemplificationin Mathematics Education. In Proceedings of the 30th Conference of the International Group forthe Psychology of Mathematics Education , volume 1, pages 126–154. Citeseer.[27] McCoy, R. T., Yao, S., Friedman, D., Hardy, M., and Griffiths, T. L. (2023). Embers ofAutoregression: Understanding Large Language Models Through the Problem They are Trainedto Solve. arXiv preprint arXiv:2309.13638 .[28] Miceli-Barone, A. V ., Barez, F., Cohen, S. B., and Konstas, I. (2023). The Larger they are, theHarder they Fail: Language Models do not Recognize Identifier Swaps in Python. In Findings ofthe Association for Computational Linguistics: ACL 2023 , pages 272–292.[29] Mirin, A. and Dawkins, P. C. (2022). Do Mathematicians Interpret Equations Asymmetrically?The Journal of Mathematical Behavior , 66:100959.[30] Mirzadeh, I., Alizadeh, K., Shahrokhi, H., Tuzel, O., Bengio, S., and Farajtabar, M. (2024).GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large LanguageModels. arXiv preprint arXiv:2410.05229 .[31] Misra, K. (2022). minicons: Enabling Flexible Behavioral and Representational Analyses ofTransformer Language Models. arXiv preprint arXiv:2203.13112 .[32] Olausson, T. X., Gu, A., Lipkin, B., Zhang, C. E., Solar-Lezama, A., Tenenbaum, J. B., and Levy,R. (2023). LINC: A Neurosymbolic Approach for Logical Reasoning by Combining LanguageModels with First-Order Logic Provers. arXiv preprint arXiv:2310.15164 .[33] Opedal, A., Shirakami, H., Schölkopf, B., Saparov, A., and Sachan, M. (2024a). MathGAP:Out-of-Distribution Evaluation on Problems with Arbitrarily Complex Proofs. arXiv preprintarXiv:2410.13502 .[34] Opedal, A., Stolfo, A., Shirakami, H., Jiao, Y., Cotterell, R., Schölkopf, B., Saparov, A., andSachan, M. (2024b). Do Language Models Exhibit the Same Cognitive Biases in Problem Solvingas Human Learners? In Forty-first International Conference on Machine Learning .8[35] Pasolunghi, M. C., Cornoldi, C., and De Liberto, S. (1999). Working memory and intrusionsof irrelevant information in a group of specific poor problem solvers. Memory & Cognition ,27:779–790.[36] Patel, A., Bhattamishra, S., and Goyal, N. (2021). Are NLP models really able to solve simplemath word problems? In Toutanova, K., Rumshisky, A., Zettlemoyer, L., Hakkani-Tur, D., Beltagy,I., Bethard, S., Cotterell, R., Chakraborty, T., and Zhou, Y ., editors, Proceedings of the 2021 Con-ference of the North American Chapter of the Association for Computational Linguistics: HumanLanguage Technologies , pages 2080–2094, Online. Association for Computational Linguistics.[37] Powell, S. R. (2012). Equations and the Equal Sign in Elementary Mathematics Textbooks. TheElementary School Journal , 112(4):627–648.[38] Powell, S. R. and Fuchs, L. S. (2018). Effective Word-Problem Instruction: Using Schemas toFacilitate Mathematical Reasoning. Teaching Exceptional Children , 51(1):31–42.[39] Powell, S. R., Namkung, J. M., and Lin, X. (2022). An Investigation of Using Keywords toSolve Word Problems. The Elementary School Journal , 122(3):452–473.[40] Radatz, H. (1979). Error Analysis In Mathematics Education. Journal for Research in Mathe-matics Education , 10(3):163–172.[41] Rafalovich, A. (2006). Making Sociology Relevant: The Assignment and Application ofBreaching Experiments. Teaching Sociology , 34(2):156–163.[42] Razeghi, Y., Logan IV, R. L., Gardner, M., and Singh, S. (2022). Impact of Pretraining TermFrequencies on Few-Shot Numerical Reasoning. In Goldberg, Y., Kozareva, Z., and Zhang, Y.,editors, Findings of the Association for Computational Linguistics: EMNLP 2022 , pages 840–854,Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.[43] Riccomini, P. J. (2005). Identification and Remediation of Systematic Error Patterns in Subtrac-tion. Learning Disability Quarterly , 28(3):233–242.[44] Rowland, T. (2008). The purpose, design and use of examples in the teaching of elementarymathematics. Educational Studies in Mathematics , 69(2):149–163.[45] Shi, F., Chen, X., Misra, K., Scales, N., Dohan, D., Chi, E. H., Schärli, N., and Zhou, D.(2023). Large Language Models Can Be Easily Distracted by Irrelevant Context. In InternationalConference on Machine Learning , pages 31210–31227. PMLR.[46] Sprague, Z., Yin, F., Rodriguez, J. D., Jiang, D., Wadhwa, M., Singhal, P., Zhao, X., Ye, X.,Mahowald, K., and Durrett, G. (2024). To CoT or not to CoT? Chain-of-thought helps mainly onmath and symbolic reasoning.[47] Stolfo, A., Jin, Z., Shridhar, K., Schoelkopf, B., and Sachan, M. (2023). A Causal Frameworkto Quantify the Robustness of Mathematical Reasoning with Language Models. In Proceedingsof the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: LongPapers) , pages 545–561.[48] Verschaffel, L., Greer, B., and De Corte, E. (2000). Making Sense of Word Problems. Lisse,The Netherlands , 224:224.[49] Verschaffel, L., Schukajlow, S., Star, J., and Van Dooren, W. (2020). Word problems inmathematics education: A survey. Zdm, 52:1–16.[50] Wolf, T., Debut, L., Sanh, V ., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf,R., Funtowicz, M., Davison, J., Shleifer, S., von Platen, P., Ma, C., Jernite, Y., Plu, J., Xu, C.,Le Scao, T., Gugger, S., Drame, M., Lhoest, Q., and Rush, A. (2020). Transformers: State-of-the-art natural language processing. In Liu, Q. and Schlangen, D., editors, Proceedings of the2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations ,pages 38–45, Online. Association for Computational Linguistics.[51] Wu, D., Yang, J., and Wang, K. (2024). Exploring the reversal curse and other deductive logicalreasoning in BERT and GPT-based large language models. Patterns , 5(9).9[52] Yang, A., Yang, B., Hui, B., Zheng, B., Yu, B., Zhou, C., Li, C., Li, C., Liu, D., Huang, F., et al.(2024). Qwen2 Technical Report. arXiv preprint arXiv:2407.10671 .[53] Ye, X., Chen, Q., Dillig, I., and Durrett, G. (2024). SatLM: Satisfiability-Aided LanguageModels Using Declarative Prompting. Advances in Neural Information Processing Systems , 36.[54] Zhang, C., Collins, K., Weller, A., and Tenenbaum, J. (2023). AI for Mathematics: A CognitiveScience Perspective. In The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS’23 .10A Equation Generation and Prompting in Case Study OneA.1 Equation GenerationFor step one of this experiment, we create our equation sets as follows. We first create two independentexpressions, each of which consists of two operands, either added or subtracted to each other. One ofthese operands is a single digit number, with the other being a variable quantity in xwith a singledigit coefficient. All operands, operations, and choice of which operands are the variable quantityare selected at random. We then form our pair of complimentary equations by placing an equalssign between these two expressions, in both orders. That is, given the two expressions aandb, ourpair of complimentary equations would be a=bandb=a. To illustrate further, a generated setof expressions may include, for example, 2x+ 3 = 4 5xor85x= 2 + 3 x, but not 2x= 3,4x+ 5y= 8x2, or9x2 =x4.A.2 Prompting MethodsOur experimental methodology necessitates prompting GPT-4o twice for each equation in ourevaluation set: once to create a word problem from a given equation, and once to try and recover anequation given a math word problem. Below, we describe the prompts used for each of these steps.A.2.1 Prompting for Word Problem CreationFor a given equation, EQUATION , we first prime GPT-4o with the following command:"You are a helpful middle school math teacher."We then prompt the model to generate a word problem using the following prompt:"Create a grade-school math problem representing the following equation:{EQUATION }. Make sure your problem is clear, concise, represents every term ofthe equation, and ends in a question mark. Generate just the problem and nothingelse."A.2.2 Prompting for Equation RecoveryFor a given math word problem, PROBLEM , we first prime GPT-4o with the following command:You are a helpful assistant.We then prompt the model to recover the equation that is represented by PROBLEM with the followingprompt:"What is the underlying math equation represented by the following situation:{PROBLEM }. Use the letter ’x’ for the unknown quantity. Please do not explain, orwrite any accompanying text, give just a single equation and nothing else."B Equation Set for Case Study TwoWe use the following set of equations from Mirin and Dawkins [29] for evaluating model orderpreferences in formal mathematics. We name each following subsection as they are labeled in Figure2. Each equation is presented below in its "natural" form. All T EXformatting used to render thefollowing sections, up-to the end of the equations, is included in our experiment.B.1 D IFFERENCE QUOTIENTThedifference quotient of a function gis defined to beg(x+h)g(x)(x+h)x11where his nonzero. Let f:R!Rbe the function defined by f(x) =x2. The following shows thedifference quotient:f(x+h)f(x)(x+h)x=f(x+h)f(x)h=(x+h)2x2h=x2+ 2xh+h2x2h=2xhh2= 2x+hB.2 D ISTRIBUTIVEThe distributive law tells us that for all numbers x,y, and z,x(y+z) =xy+xzB.3 E XPONENTS DIFFRULERecall the Properties of Exponents:bxby=bxyB.4 E XPONENTS POWER RULERecall the Properties of Exponents:(bx)y=bxyB.5 E XPONENTS PROD RULERecall the Properties of Exponents:bxby=bx+yB.6 H OMOMORPHISMLethS; ?iandhS0; ?0ibe binary algebraic structures. A homomorphism from hS; ?itohS0; ?0iis afunction :S!S0such that for all x,y2S,(x ? y ) =(x)?0(y)B.7 I NDUCTIONThe following is a portion of a proof by induction that for all natural numbers k,k3kis divisibleby 6. At this point in the proof, it has been assumed that n3nis divisible by 6, and it is beingshown that (n+ 1)3(n+ 1) is therefore also divisible by 6.(n+ 1)3(n+ 1) = ( n3+ 3n2+ 3n+ 1)(n+ 1)= (n3+ 3n2+ 3n+ 1)(n+ 1)= (n3n) + (3 n2+ 3n)= (n3n) + 3 n(n+ 1)B.8 P RODUCT RULETheproduct rule for derivatives says that if fandgare differentiable functions, thenfg0+f0g= (fg)012B.9 P ROOFTheorem 1. Suppose hS; ?iandhS0; ?0ibe binary algebraic structures, and is an isomorphismfromhS; ?iontohS0; ?0i. Further suppose that eis a left identity element in hS; ?i. Then (e)is aleft identity element in hS0; ?0i.Proof. Lets0be an element of S0. Since is onto, there exists some s2Ssuch that (s) =s0.Hences0=(s) =(e ? s) =(e)?0(s) =(e)?0s0B.10 S ETTHEORYThe following is a proof in a set theory textbook that if ais a transitive set, thenS(a+) =a. Notethat a transitive set is defined to be a set asuch that all members of aare subsets of a, and a+isdefined to be a[ fagProof.([a+) =[(a[ fag)= ([a)[([fag)= ([a)[a=aC Equation VariantsC.1 Reworded VariantsC.1.1 D IFFERENCE QUOTIENTLetf:R!Rbe the function f(x) =x2. The following shows the difference quotient:f(x+h)f(x)(x+h)x=f(x+h)f(x)h=(x+h)2x2h=x2+ 2xh+h2x2h=2xhh2= 2x+hC.1.2 D ISTRIBUTIVEFor all numbers x,y, and z, the distributive law states thatx(y+z) =xy+xzC.1.3 E XPONENTS DIFFRULEHere are some exponent properties:bxby=bxy13C.1.4 E XPONENTS POWER RULEHere are some exponent properties:(bx)y=bxyC.1.5 E XPONENTS PROD RULEHere are some exponent properties:bxby=bx+yC.1.6 H OMOMORPHISMIfhS; ?iandhS0; ?0iare binary algebraic structures, a homomorphism from hS; ?itohS0; ?0iis afunction :S!S0such that 8x,y2S,(x ? y ) =(x)?0(y)C.1.7 I NDUCTIONInductively prove that 8k2N,k3kis divisible by 6. We will show that (n+ 1)3(n+ 1) isdivisible by 6, with the prior assumption that n3nis divisible by 6.(n+ 1)3(n+ 1) = ( n3+ 3n2+ 3n+ 1)(n+ 1)= (n3n) + (3 n2+ 3n)= (n3n) + 3 n(n+ 1)C.1.8 P RODUCT RULEIffandgare differentiable functions, then the product rule states that(fg)0=fg0+f0gC.1.9 P ROOFTheorem 2. is an isomorphism from hS; ?iontohS0; ?0iwhere hS; ?iandhS0; ?0iare both binaryalgebraic structures. If eis a left identity element in hS; ?i, then (e)is a left identity element inhS0; ?0i.Proof. Lets02S0. Due to being onto, 9s2Ssuch that (s) =s0. Hences0=(s) =(e ? s) =(e)?0(s) =(e)?0s0C.1.10 S ETTHEORYA transitive set is defined to be a set asuch that all members of aare subsets of a, and a+is definedto be a[ fag. We show a proof that if ais a transitive set, thenS(a+) =a.Proof.([a+) =[(a[ fag)= ([a)[([fag)= ([a)[a=a14C.2 Emoji-Substituted Original Proof VariantsC.2.1 D IFFERENCE QUOTIENTThe of a function is defined to be(+)()(+)where is nonzero. Let :R!Rbe the function defined by () =2. The following shows the:(+)()(+)=(+)()=(+)22=2+ 2 +22=22= 2 +C.2.2 D ISTRIBUTIVEThe distributive law tells us that for all numbers ,, and ,(+) = +C.2.3 E XPONENTS DIFFRULERecall the Properties of Exponents:=-C.2.4 E XPONENTS POWER RULERecall the Properties of Exponents:()=C.2.5 E XPONENTS PROD RULERecall the Properties of Exponents: =+C.2.6 H OMOMORPHISMLeth;iandh0;0ibe binary algebraic structures. A from h;itoh0;0iis a function :!0such that for all ,2,( ) = ()0()C.2.7 I NDUCTIONThe following is a portion of a proof by induction that for all natural numbers ,3is divisibleby 6. At this point in the proof, it has been assumed that3is divisible by 6, and it is beingshown that (+ 1)3(+ 1) is therefore also divisible by 6.(+ 1)3(+ 1) = (3+ 32+ 3 + 1)(+ 1)= (3) + (32+ 3 )= (3) + 3 (+ 1)15C.2.8 P RODUCT RULETheproduct rule for derivatives says that if and are differentiable functions, then()0=0+0C.2.9 P ROOFTheorem 3. Suppose h;iandh0;0ibe binary algebraic structures, and is an isomorphismfromh;iontoh0;0i. Further suppose that is a left identity element in h;i. Then ()is aleft identity element in h0;0i.Proof. Let0be an element of0. Since is onto, there exists some 2such that () =0.Hence0=() = ( ) = ()0() = ()00C.2.10 S ETTHEORYThe following is a proof in a set theory textbook that if is a transitive set, thenS(+) = . Notethat a transitive set is defined to be a set such that all members of are subsets of , and+isdefined to be [ fgProof.([+) =[([ fg)= ([)[([fg)= ([)[=C.3 Emoji-Substituted Reworded Proof VariantsC.3.1 D IFFERENCE QUOTIENTLet :R!Rbe the function () =2. The following shows the difference quotient:(+)()(+)=(+)()=(+)22=2+ 2 +22=22= 2 +C.3.2 D ISTRIBUTIVEFor all numbers ,, and , the distributive law states that(+) = +C.3.3 E XPONENTS DIFFRULEHere are some exponent properties:=-16C.3.4 E XPONENTS POWER RULEHere are some exponent properties:()=C.3.5 E XPONENTS PROD RULEHere are some exponent properties: =+C.3.6 H OMOMORPHISMIfh;iandh0;0iare binary algebraic structures, a from h;itoh0;0iis a function :!0such that 8,2,( ) = ()0()C.3.7 I NDUCTIONInductively prove that 82N,3is divisible by 6. We will show that (+ 1)3(+ 1) isdivisible by 6, with the prior assumption that3is divisible by 6.(+ 1)3(+ 1) = (3+ 32+ 3 + 1)(+ 1)= (3) + (32+ 3 )= (3) + 3 (+ 1)C.3.8 P RODUCT RULEIfand are differentiable functions, then the product rule states that()0=0+0C.3.9 P ROOFTheorem 4. is an isomorphism from h;iontoh0;0iwhere h;iandh0;0iare both binaryalgebraic structures. If is a left identity element in h;i, then ()is a left identity element inh0;0i.Proof. Let020. Due to being onto, 92such that () =0. Hence0=() = ( ) = ()0() = ()00C.3.10 S ETTHEORYA transitive set is defined to be a set such that all members of aare subsets of , and+is definedto be [ fg. We show a proof that if is a transitive set, thenS(+) = .Proof.([+) =[([ fg)= ([)[([fg)= ([)[=17 |
ICYF91UPjE | MathCAMPS: Fine-grained Synthesis ofMathematical Problems From Human CurriculaShubhra Mishra∗,1Gabriel Poesia∗,1Belinda Mo1Noah D. Goodman1,2{shubhra,poesia,ngoodman}@stanford.edu [email protected] of Computer Science1and Psychology2, Stanford UniversityAbstractMathematical problem solving is an important skill for Large Language Models(LLMs), both as an important capability and a proxy for a range of reasoningabilities. Existing benchmarks probe a diverse set of skills, but they yield aggregateaccuracy metrics, obscuring specific abilities or weaknesses. Furthermore, they aredifficult to extend with new problems, risking data contamination over time. Toaddress these challenges, we propose MathCAMPS: a method to synthesize high-quality mathematical problems at scale, grounded on 44 fine-grained “standards”from the Mathematics Common Core (CC) Standard for K-8 grades. We encodeeach standard in a formal grammar, allowing us to sample diverse symbolic prob-lems and their answers. We then use LLMs to realize the symbolic problems intoword problems. We propose a cycle-consistency method for validating problemfaithfulness. Finally, we derive follow-up questions from symbolic structures andconvert them into follow-up word problems—a novel task of mathematical dia-logue that probes for robustness in understanding. Experiments on 23 LLMs showsurprising failures even in the strongest models (in particular when asked simplefollow-up questions). Moreover, we evaluate training checkpoints of Pythia 12B onMathCAMPS, allowing us to analyze when particular mathematical skills developduring its training. Our framework enables the community to reproduce and extendour pipeline for a fraction of the typical cost of building new high-quality datasets.Project page: https://mathcamps.cc .1 IntroductionAs Large Language Models (LLMs) become increasingly capable, mathematical reasoning hasbecome a key benchmark for evaluating their abilities. Traditional benchmarking, which relies onfixed sets of human-generated problems (e.g., GSM8k[ 8], or MATH [ 11]), now faces new challenges.Many LLMs are trained on vast public datasets that may include these benchmarks, raising concernsabout data contamination [ 20,7,4]. This issue is amplified by the lack of transparency in the trainingdata of most state-of-the-art models, including GPT-4 [ 1], Claude [ 2], and LLaMA [ 19]. Whilecreating novel problems could mitigate contamination concerns but is resource-intensive. Moreover,current benchmarks offer limited insights into the specific mathematical skills of LLMs, as aggregateaccuracy alone does not reveal where models excel or struggle, and how this has changed over time.To address these issues, we introduce the Mathematics Common Core Assessment of ProblemSolving (MathCAMPS), a framework for generating high-quality mathematical problems based onthe Common Core (CC) standards. MathCAMPS enables detailed analysis of LLMs’ mathematicalproficiency, aligned with skills taught in schools. Our pipeline employs a composable grammarfor generating problems, symbolic solvers (e.g. SymPy) to get final solutions, and an LLM fortransforming them into word problems. We ensure problem faithfulness through a cycle-consistencycheck, where the LLM back-translates word problems into symbolic form.38th Conference on Neural Information Processing Systems (NeurIPS 2024).Figure 1: Overview of the MathCAMPS generation pipeline. We start from a grammar ( A) thatrepresents problems tied to a Common Core Standard - a specific mathematical ability drawn from ahuman curriculum. We sample problems in a symbolic form ( B), and use a language model to realizeit in natural language ( C), applying a cycle-consistency where we back-translate the problem intosymbolic form and ensure the answer remains the same, validating truthfulness. We also synthesizeincremental and counterfactual follow-up problemsWe also propose a novel "mathematical dialogue" task, where the model answers follow-up questionsafter solving a problem. These follow-ups can be either counterfactual , modifying an aspect of theoriginal problem, or incremental , providing additional information and changing the question.Using our framework, we synthesize problems for each of 44 CC standards (Appendix C), resulting ina dataset of 4,900 initial problems and 4707 total follow-ups. Our results reveal surprising weaknesses,particularly in response to follow-up responses, highlighting significant gaps in even the strongestmodels. Additionally, we provide a first-of-its-kind analysis of learning dynamics of mathematicalabilities in LLM training using checkpoints from Pythia 12B [6] (Appendix B).2 MathCAMPSWe now describe our pipeline for automatically generating mathematical problems and follow-upquestions that are grounded in a human curriculum – the Mathematics Common Core ( https://www.thecorestandards.org ). Figure 1 overviews our pipeline. We describe how we representCC standards in a grammar, sample symbolic problems, generate follow-ups, realize those in naturallanguage, and finally improve quality by checking for cycle consistency.Representing Common Core Standards We represent CC standards using an attribute grammar [ 10],allowing both syntactic and semantic rules. This formalism supports context-sensitive constraints,enabling encoding of information like numerical bounds directly in production rules.From Symbolic to Word Problems To convert symbolic problems into natural language, we usefew-shot prompting with GPT-4 (Figure 1 (C)). For each standard, we manually create word problemsfrom two symbolic examples. For word problems requiring cover stories, we randomly select a themefrom a set of 188. These examples guide GPT-4 in generating diverse, natural problems. To ensurefaithfulness to the original structure, we apply a cycle consistency approach: GPT-4 converts itsgenerated word problem back into a symbolic structure, which is solved and compared to the original.Problems failing this test are discarded.Generating Follow-Up Questions We leverage symbolic representations to generate two types offollow-up questions: counterfactual (altering a constant) and incremental (adding information). Foreach CC standard, we identify applicable follow-up types. Symbolically, follow-up questions aremodeled as differences applied to the original problem, which we solve to produce ground-truthanswers. We use few-shot prompting to translate these changes into natural language questions andapply cycle consistency to verify accuracy.2Table 1: Final answer accuracy of LLMs on MathCAMPS, both over all problems ( All) and consider-ing only standards in each grade we cover ( Kto8). Highlights compare to gradewise avg.Vendor Model All K 1 2 3 4 5 6 7 8OpenAI GPT-4o [1] 0.92 0.98 0.98 0.98 0.98 0.92 0.88 0.95 0.89 0.64Anthropic Claude-3 Opus [2] 0.89 0.97 0.99 0.96 0.98 0.89 0.83 0.96 0.73 0.56Google Gemini-1.5 Pro [17] 0.89 0.95 0.98 0.97 0.97 0.89 0.83 0.93 0.78 0.54Google Gemini-1.5 Flash [17] 0.87 0.98 0.98 0.97 0.98 0.80 0.80 0.90 0.84 0.56OpenAI GPT-3.5 Turbo [1] 0.87 0.96 0.98 0.98 0.97 0.86 0.77 0.90 0.77 0.56Anthropic Claude-3 Sonnet [2] 0.86 0.96 0.98 0.97 0.98 0.88 0.74 0.94 0.66 0.49Anthropic Claude-3 Haiku [2] 0.84 0.97 0.98 0.97 0.98 0.87 0.69 0.92 0.59 0.51Meta Llama 3 70B [19] 0.85 0.96 0.97 0.97 0.97 0.85 0.71 0.87 0.73 0.50Mistral Mixtral 8x22B [13] 0.84 0.96 0.99 0.98 0.96 0.79 0.69 0.88 0.73 0.61DeepSeek DeepSeek 67B [5] 0.80 0.95 0.99 0.96 0.93 0.82 0.60 0.84 0.61 0.47Meta Llama 3 8B [19] 0.77 0.94 0.97 0.96 0.94 0.78 0.55 0.79 0.53 0.43Mistral Mixtral 8x7B [13] 0.76 0.94 0.96 0.93 0.91 0.75 0.52 0.80 0.53 0.45EleutherAI Llemma 34B [3] 0.71 0.95 0.96 0.93 0.87 0.61 0.47 0.77 0.46 0.44Mistral Mistral 7B [12] 0.68 0.89 0.94 0.91 0.84 0.61 0.42 0.66 0.45 0.42DeepSeek DeepSeek Coder 33B [9] 0.65 0.88 0.93 0.92 0.83 0.54 0.36 0.66 0.44 0.38Meta CodeLlama 34B [15] 0.64 0.90 0.94 0.92 0.85 0.51 0.38 0.70 0.37 0.30Microsoft phi-2 [14] 0.63 0.95 0.96 0.89 0.78 0.46 0.38 0.61 0.37 0.41EleutherAI Llemma 7B [3] 0.62 0.88 0.90 0.85 0.79 0.48 0.41 0.67 0.41 0.36Google Gemma 7B [18] 0.62 0.83 0.92 0.90 0.82 0.47 0.36 0.65 0.36 0.30Meta CodeLlama 13B [15] 0.58 0.87 0.92 0.87 0.75 0.41 0.30 0.61 0.32 0.34Meta CodeLlama 7B [15] 0.52 0.85 0.92 0.84 0.69 0.37 0.25 0.57 0.25 0.16Google Gemma 2B [18] 0.51 0.66 0.76 0.74 0.67 0.42 0.28 0.55 0.30 0.27- Avg. Performance 0.74 0.87 0.91 0.89 0.87 0.70 0.59 0.78 0.57 0.383 ExperimentsWe now evaluate a suite of 23 LLMs from 8 different vendors on MathCAMPS. We evaluate allmodels by sampling with temperature 0, using a fixed 1-shot prompt with the first example fromGSM8K, mostly to demonstrate the format. For all models (most of them instruction-tuned), asingle example was enough for to adhere to the task and the format we specify. The rich structurein MathCAMPS allows us to perform a number of unique analyses on LLMs relating to specificmathematical abilities and their corresponding grade levels for human students.Table 1 shows both aggregate accuracy on MathCAMPS, as well as accuracy across standardspartitioned by grade, whereas Figure 3 compares the aggregate accuracies on MathCAMPS andGSM8K. Closed-weights models are shown above the line, with open-weights models below. GPT-4oranks at the top in overall accuracy. Since we used GPT-4 to generate the problems, we must rule outfamiliarity bias [16] in this result, which we do in Appendix D.Models of similar overall performance can have large disparities in specific abilities or grades.Several models that have comparable overall accuracies show large differences when comparedon specific mathematical skills. As an example, Claude-3 Opus and Claude-3 Sonnet have similaroverall accuracy both in MathCAMPS (.89 vs .86) and in GSM8K (.95 vs .923). However, wefind that Claude-3 Opus is significantly better at manipulating fractions. For instance, in the CCstandard 5.NF.A.2 , described as “Solve word problems involving addition and subtraction offractions referring to the same whole, including cases of unlike denominators” , Opus has a 36%advantage over Sonnet, scoring a 70% accuracy for this standard, whereas Sonnet only achieves 34%.Similarly, while Gemma 7B and phi-2 have comparable overall performance (.62 vs .63 accuracy onMathCAMPS), some capabilities in each model seem nearly absent from the other. Gemma 7B ishighly accurate when performing multi-digit multiplication (4.NBT.B.4), which phi-2 struggles with.And while phi-2 performs well while comparing fractions (4.NF.A.2), Gemma 7B struggles. Suchstark differences are obscured when only analyzing aggregate metrics, whereas MathCAMPS allowsfor a much more nuanced understanding of mathematical reasoning capabilities.Overall ranking between models is largely a function of which skills we choose to evaluate.Overall accuracies in any dataset induce a single performance ranking of models. However, whenwe look at individual CC standards in MathCAMPS, rankings are largely a function of which skillswe choose to evaluate. Comparing pairs of models across all standards, rarely we find cases where3one model Pareto-dominates another (i.e. is better on all standards): only 23.08% of all pairs ofmodels have a Pareto winner. Table 3 shows how the ranking of a model in individual skills canoften deviate strongly from its overall ranking. Here, the first ordinal in each cell shows the model’sranking on overall performance on MathCAMPS, whereas the second shows the model’s ranking onthat particular CC standard. We find many cases of large discrepancies. For instance, on systems ofequations, GPT-4o tends to excessively rely on decimal approximations when operating with fractions,resulting in poor performance. Llemma 34B, which places 13th overall, is the best performing modelon a simple kindergarten-level word problems on adding to complete 10.Follow-up tasks We now evaluate the performance of LLMs on follow-up questions. Here, we firstgive a problem, and in case the model answers correctly we ask either an incremental follow-up, acounterfactual follow-up, or both (in separate contexts), depending on the standard (some standardsdon’t have follow-ups, and for some problems we failed to find a cycle-consistent follow-up withinthe max attempts). Here, we’re interested in analyzing the (lack of) robustness that LMs might havewhen probed with extra questions — our follow-ups are generally answerable using the same coremathematical knowledge involved in the initial problem but require longer range attention and dialogunderstanding.Table 3 (full table with all models in the Appendix) shows overall accuracies when we only considera model successful on a problem when it also answers its follow-up questions correctly. We alsoshow the major accuracy drops across CC standards for each model (last two columns). We findmany notable cases, in both stronger and weaker models. GPT-4o, for instance, is 90% accuratein evaluating expressions of addition of fractions with multi-digit numerators and denominators(5.NF.A.1 — notably, this requires putting fractions in the same denominator). When asked toadd another fraction to the result, or change one of the original fractions to a new one and re-dothe computation, its success rate when evaluated at correctly answering both follow-ups drops to61%, or a 29% decrease. Other models drop even more dramatically. For instance, phi-2 solves57% of the problems in 7.NS.A.2 , which are about multiplying two fractions (only requires twomulti-digit multiplications — we do not require the result to be in lowest terms). However, whenasked to multiply the result by a further third fraction, phi-2 tends to not reuse its previous (correct)result, and instead proceeds by writing down the product of the three numerators (and denominators),and attempt to directly evaluate this product. This strategy is rarely successful, and it only achieves8% accuracy when accounting for the follow-ups (an absolute 49% drop). Overall, we find manycases where models are not robust to simple follow-up questions. We hypothesize that this setup ofmathematical dialogue is much less frequent in pre-training data, and that follow-up problems inMathCAMPS can be a rich source of further analyses for future work.Table 2: Model performance on our mathematical dialogue task, where the model must answerfollow-up questions besides the initial problem. Results for all models are shown in the Appendix.Model Acc. with follow-ups Largest accuracy drop w/ follow-upsGPT-4o 0.82 5.NF.A.1 - Add/sub fractions 0.90 0.61)Claude-3 Opus 0.76 7.NS.A.1-fraction - Add/sub with fractions 0.57 0.25)Gemini-1.5 Pro 0.77 5.NF.A.1 - Add/sub fractions 0.60 0.35)GPT-3.5 Turbo 0.71 7.NS.A.1-fraction - Add/sub with fractions 0.73 0.22)Llama 3 70B 0.69 4.NF.A.2 - Compare two fractions 0.99 0.66)Mixtral 8x22B 0.69 7.NS.A.1-fraction - Add/sub with fractions 0.69 0.18)DeepSeek 67B 0.68 6.NS.B.3 - Add/sub/mult/div decimals 0.59 0.37)phi-2 0.39 7.NS.A.2 - Mult/div with fractions 0.57 0.08)Gemma 7B 0.33 7.NS.A.1-decimal - Add/sub with decimals 0.91 0.32)4 ConclusionWe introduce MathCAMPS, a fine-grained synthetic benchmark of mathematical reasoning in LLMs.MathCAMPS is directly grounded on the Common Core Standards, a widely used curriculum inhuman education. By tying our problems to a human curriculum, we enable a much wider rangeof analyses to understand mathematical reasoning capabilities and weaknesses of LLMs. We showanalyses of performance by grade level and identify particularly challenging skills for a range ofmodels, though we believe these are only a few examples of analyses that MathCAMPS permits.4References[1]Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia LeoniAleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4technical report. arXiv preprint arXiv:2303.08774 , 2023.[2] AI Anthropic. The claude 3 model family: Opus, sonnet, haiku. Claude-3 Model Card , 2024.[3]Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer,Albert Q Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open languagemodel for mathematics. arXiv preprint arXiv:2310.10631 , 2023.[4]Simone Balloccu, Patrícia Schmidtová, Mateusz Lango, and Ond ˇrej Dušek. Leak, cheat,repeat: Data contamination and evaluation malpractices in closed-source llms. arXiv preprintarXiv:2402.03927 , 2024.[5]Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen, Damai Dai, Chengqi Deng, HonghuiDing, Kai Dong, Qiushi Du, Zhe Fu, et al. Deepseek llm: Scaling open-source language modelswith longtermism. arXiv preprint arXiv:2401.02954 , 2024.[6]Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O’Brien,Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, EdwardRaff, et al. Pythia: A suite for analyzing large language models across training and scaling. InInternational Conference on Machine Learning , pages 2397–2430. PMLR, 2023.[7]Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, EceKamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi,Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experimentswith gpt-4, 2023.[8]Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and JohnSchulman. Training verifiers to solve math word problems, 2021.[9]Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen,Xiao Bi, Y Wu, YK Li, et al. Deepseek-coder: When the large language model meetsprogramming–the rise of code intelligence. arXiv preprint arXiv:2401.14196 , 2024.[10] Bernd Heine and Tania Kuteva. The genesis of grammar: A reconstruction , volume 9. OxfordUniversity Press, USA, 2007.[11] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, DawnSong, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset,2021.[12] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra SinghChaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, LucileSaulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825 , 2023.[13] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, ChrisBamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand,et al. Mixtral of experts. arXiv preprint arXiv:2401.04088 , 2024.[14] Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin TatLee. Textbooks are all you need ii: phi-1.5 technical report. arXiv preprint arXiv:2309.05463 ,2023.[15] Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan,Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation modelsfor code. arXiv preprint arXiv:2308.12950 , 2023.[16] Rickard Stureborg, Dimitris Alikaniotis, and Yoshi Suhara. Large language models are incon-sistent and biased evaluators. arXiv preprint arXiv:2405.01724 , 2024.5[17] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highlycapable multimodal models. arXiv preprint arXiv:2312.11805 , 2023.[18] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, ShreyaPathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Openmodels based on gemini research and technology. arXiv preprint arXiv:2403.08295 , 2024.[19] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Openfoundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 , 2023.[20] Hugh Zhang, Jeff Da, Dean Lee, Vaughn Robinson, Catherine Wu, Will Song, Tiffany Zhao,Pranav Raja, Dylan Slack, Qin Lyu, Sean Hendryx, Russell Kaplan, Michele Lunati, andSummer Yue. A careful examination of large language model performance on grade schoolarithmetic, 2024.6A TablesTable 3: Largest model rank changes when focusing on one CC standard. Here, A B indicates thatthe model ranks Athon MathCAMPS overall, but ranks Bthwhen only evaluating on problems fromthe indicated CC standard. Conversely, marks notable cases where a model’s performance on theindicated CC standard is lower than its overall performance on MathCAMPS. We show selected rowshere, the complete table can be found in the Appendix.Model Top outlier skill Rank changeGPT-4o 8.EE.C.8 - Solve two-variable systems ( 1st22th)Claude-3 Opus 2.MD.B.5 - Add/sub within 100 ( 2nd13th)Gemini-1.5 Pro K.OA.A.4 - Adding to equal 10 ( 3rd19th)Gemini-1.5 Flash 4.OA.B.4 - Factor pairs within 100 ( 4th20th)Claude-3 Haiku 3.OA.A.4 - Determine unknowns in mul/div probs ( 9th1st)Llama 3 70B K.OA.A.4 - Adding to equal 10 ( 7th17th)DeepSeek 67B K.NBT.A.1 - Decompose into 10s ( 10th1st)Llemma 34B K.OA.A.4 - Adding to equal 10 ( 13th1st)Mistral 7B 1.OA.A.1 - Add/sub within 20 ( 14th21th)DeepSeek Coder 33B 6.EE.A.1 - Evaluate exponents ( 15th3rd)Llemma 7B 6.EE.A.1 - Evaluate exponents ( 18th5th)Gemma 2B 8.EE.C.8 - Solve two-variable systems ( 22th11th)Table 4: Model performance on our mathematical dialogue task, where the model must answerfollow-up questions besides the initial problem. The second column, Accuracy with follow-ups ,shows overall success rate across standards that contain follow-up questions, considering a modelsuccessful only when it answers a problem and its follow-up questions correctly. The third and fourthcolumns show the hardest standard for each model when it comes to follow-up questions, showinga standard’s code and abbreviated description, the model’s accuracy ignoring follow-ups, and afterfollow-ups.Model Acc. with follow-ups Largest accuracy drop w/ follow-upsGPT-4o 0.82 5.NF.A.1 - Add/sub fractions 0.90 0.61)Claude-3 Opus 0.76 7.NS.A.1-fraction - Add/sub with fractions 0.57 0.25)Gemini-1.5 Pro 0.77 5.NF.A.1 - Add/sub fractions 0.60 0.35)Gemini-1.5 Flash 0.76 7.NS.A.1-fraction - Add/sub with fractions 0.78 0.38)GPT-3.5 Turbo 0.71 7.NS.A.1-fraction - Add/sub with fractions 0.73 0.22)Claude-3 Sonnet 0.72 5.NF.A.1 - Add/sub fractions 0.41 0.07)Claude-3 Haiku 0.70 3.OA.A.3 - Mul/div within 100 1.00 0.73)Llama 3 70B 0.69 4.NF.A.2 - Compare two fractions 0.99 0.66)Mixtral 8x22B 0.69 7.NS.A.1-fraction - Add/sub with fractions 0.69 0.18)DeepSeek 67B 0.68 6.NS.B.3 - Add/sub/mult/div decimals 0.59 0.37)Llama 3 8B 0.58 4.NF.A.2 - Compare two fractions 0.90 0.52)Mixtral 8x7B 0.58 5.NF.B.4 - Mult fractions 0.61 0.31)Llemma 34B 0.55 5.NF.B.4 - Mult fractions 0.69 0.33)Mistral 7B 0.48 7.NS.A.1-decimal - Add/sub with decimals 0.91 0.50)DeepSeek Coder 33B 0.60 3.OA.A.3 - Mul/div within 100 0.95 0.81)CodeLlama 34B 0.60 5.NF.B.4 - Mult fractions 0.52 0.39)phi-2 0.39 7.NS.A.2 - Mult/div with fractions 0.57 0.08)Llemma 7B 0.43 5.NF.B.4 - Mult fractions 0.61 0.22)Gemma 7B 0.33 7.NS.A.1-decimal - Add/sub with decimals 0.91 0.32)CodeLlama 13B 0.43 4.NBT.B.4 - Add/sub multi-digit nums 0.81 0.49)CodeLlama 7B 0.49 2.NBT.B.7 - Add/sub within 100 0.80 0.67)Gemma 2B 0.24 3.NBT.A.2 - Add/sub within 1000 0.93 0.26)7Figure 2: Performance of Pythia 12B checkpoints on MathCAMPS standards as it evolves duringtraining. We show all 7 standards where the last checkpoint has at least 30% accuracy.B Learning dynamicsWe use Pythia [ 6] to showcase another analysis that MathCAMPS enables: understanding the learningdynamics of mathematical skills during LM training. We evaluate checkpoints of Pythia 12B onall standards, and track the performance change as the model was trained. Figure 2 shows Pythia’sperformance evolving during training on all 7 CC standards where the last checkpoint achievesat least 30% accuracy. Early in training, after 28k steps, Pythia performs best in a Kindergartenstandard, K.OA.A.5 — “Fluently add and subtract within 5.”. At 57k steps, its performance is bestin both K.OA.A.5 (37% accuracy) and two first-grade standards, 1.OA.A.1 and 1.OA.A.2 — bothstandards involve simple word problems with addition and subtraction within 20. Pythia starts tobecome proficient at a sixth-grade standard around midway during training: 6.EE.A.1, which involvesevaluating simple expressions using whole-number exponents (e.g, computing squares and cubes).These skills develop in tandem with its linguistic competence – at first, Pythia repeats questionsverbatim often, but at 57k steps it already often produces responses . Overall, the high-resolution ofMathCAMPS as a reasoning benchmark can support future work to deepen our understanding of howlanguage models acquire capabilities during training, and how specific factors (such as data, or scale)contribute to their learning.C Common Core Standards in MathCAMPSMathCAMPS is available on Github at https://github.com/gpoesia/mathcamps . All of theCommon Core standards we implement are described in a configuration file, commoncore.yaml ,where standards are instantiated by composing high-level components from the Common Coreattribute grammar. Moreover, we provide our prompts used to generate the dataset and modelresponses, as well as all problems and model responses for all LLMs we evaluated.We list the Common Core standards we represent in MathCAMPS in Tables 5 through 13, segregatedby grade. Standards 3.MD.D.8, 4.MD.A.2, 7.NS.A.1, and 7.NS.A.3 are split up into sub-standards.This was done for ease of implementation of the grammar.D Familiarity biasMathCAMPS was generated using GPT-4. GPT-4o, a model of the same family, was also the bestperformer overall (Table 1). To test whether this might be due to a familiarity bias — problems being8Standard ID DescriptionK.CC.C.7 Compare two numbers between 1 and 10 presented as written numerals.K.OA.A.4 For any number from 1 to 9, find the number that makes 10 when addedto the given number, e.g., by using objects or drawings, and record theanswer with a drawing or equation.K.OA.A.5 Fluently add and subtract within 5.K.NBT.A.1 Compose and decompose numbers from 11 to 19 Into ten ones andsome further ones, e.g., by using objects or drawings, and record eachcomposition or decomposition by a drawing or equation (e.g., 18 = 10+ 8); understand that these numbers are composed of ten ones and one,two, three, four, five, six, seven, eight, or nine ones.Table 5: CC Standards for Grade KStandard ID Description1.OA.A.1 Use addition and subtraction within 20 to solve word problems involvingsituations of adding to, taking from, putting together, taking apart, andcomparing, with unknowns in all positions, e.g., by using objects, draw-ings, and equations with a symbol for the unknown number to representthe problem.1.OA.A.2 Solve word problems that call for addition of three whole numberswhose sum is less than or equal to 20, e.g., by using objects, drawings,and equations with a symbol for the unknown number to represent theproblem.1.OA.D.8 Determine the unknown whole number in an addition or subtractionequation relating three whole numbers.Table 6: CC Standards for Grade 1Standard ID Description2.OA.A.1 Use addition and subtraction within 100 to solve one- and two-step wordproblems involving situations of adding to, taking from, putting together,taking apart, and comparing, with unknowns in all positions, e.g., byusing drawings and equations with a symbol for the unknown number torepresent the problem.2.NBT.B.5 Fluently add and subtract within 100 using strategies based on placevalue, properties of operations, and/or the relationship between additionand subtraction.2.NBT.B.6 Add up to four two-digit numbers using strategies based on place valueand properties of operations.2.NBT.B.7 Add and subtract within 1000, using concrete models or drawings andstrategies based on place value, properties of operations, and/or therelationship between addition and subtraction; relate the strategy to awritten method. Understand that in adding or subtracting three-digitnumbers, one adds or subtracts hundreds and hundreds, tens and tens,ones and ones; and sometimes it is necessary to compose or decomposetens or hundreds.2.MD.B.5 Use addition and subtraction within 100 to solve word problems involv-ing lengths that are given in the same units, e.g., by using drawings (suchas drawings of rulers) and equations with a symbol for the unknownnumber to represent the problem.2.MD.C.8 Solve word problems involving dollar bills, quarters, dimes, nickels, andpennies, using $ and ¢ symbols appropriately.Table 7: CC Standards for Grade 29Standard ID Description3.OA.A.3 Use multiplication and division within 100 to solve word problems insituations involving equal groups, arrays, and measurement quantities,e.g., by using drawings and equations with a symbol for the unknownnumber to represent the problem.3.OA.A.4 Determine the unknown whole number in a multiplication or divisionequation relating three whole numbers.3.OA.C.7 Fluently multiply and divide within 100, using strategies such as therelationship between multiplication and division (e.g., knowing that 8 ×5 = 40, one knows 40 ÷5 = 8) or properties of operations. By the end ofGrade 3, know from memory all products of two one-digit numbers.3.OA.D.8 Solve two-step word problems using the four operations. Represent theseproblems using equations with a letter standing for the unknown quantity.Assess the reasonableness of answers using mental computation andestimation strategies including rounding.3.MD.D.8-triangleSolve real world and mathematical problems involving perimeters ofpolygons, including finding the perimeter given the side lengths, findingan unknown side length, and exhibiting rectangles with the same perime-ter and different areas or with the same area and different perimeters.3.MD.D.8-quadrilateralSolve real world and mathematical problems involving perimeters ofpolygons, including finding the perimeter given the side lengths, findingan unknown side length, and exhibiting rectangles with the same perime-ter and different areas or with the same area and different perimeters.3.MD.D.8-polygonSolve real world and mathematical problems involving perimeters ofpolygons, including finding the perimeter given the side lengths, findingan unknown side length, and exhibiting rectangles with the same perime-ter and different areas or with the same area and different perimeters.3.NBT.A.2 Fluently add and subtract within 1000 using strategies and algorithmsbased on place value, properties of operations, and/or the relationshipbetween addition and subtraction.Table 8: CC Standards for Grade 3in-distribution for GPT-4o, but out-of-distribution for other models —, we generated a 10%-scaledataset using the exact same pipeline, but using Claude 3 Opus for both generating word problemsand testing cycle consistency. This dataset has the same distribution of standards as MathCAMPS.We evaluated GPT-4o and Claude 3 Opus on this dataset — accuracies are reported in Table 14.GPT-4o also performs better in this dataset, suggesting that its performance in MathCAMPS was notdue to a higher relative familiarity with the problems.10Standard ID Description4.OA.A.3 Solve multistep word problems posed with whole numbers and havingwhole-number answers using the four operations, including problemsin which remainders must be Interpreted. Represent these problemsusing equations with a letter standing for the unknown quantity. Assessthe reasonableness of answers using mental computation and estimationstrategies including rounding.4.OA.B.4 Find all factor pairs for a whole number in the range 1-100. Recognizethat a whole number is a multiple of each of its factors. Determinewhether a given whole number in the range 1-100 is a multiple of a givenone-digit number. Determine whether a given whole number in the range1-100 is prime or composite.4.NBT.B.4 Fluently add and subtract multi-digit whole numbers using the standardalgorithm.4.NBT.B.5 Multiply a whole number of up to four digits by a one-digit wholenumber, and multiply two two-digit numbers, using strategies based onplace value and the properties of operations. Illustrate and explain thecalculation by using equations, rectangular arrays, and/or area models.4.NBT.B.6 Find whole-number quotients and remainders with up to four-digit divi-dends and one-digit divisors, using strategies based on place value, theproperties of operations, and/or the relationship between multiplicationand division. Illustrate and explain the calculation by using equations,rectangular arrays, and/or area models.4.NF.A.2 Compare two fractions with different numerators and different denom-inators, e.g., by creating common denominators or numerators, or bycomparing to a benchmark fraction such as 1/2. Recognize that com-parisons are valid only when the two fractions refer to the same whole.Record the results of comparisons with symbols >, =, or <, and justifythe conclusions, e.g., by using a visual fraction model.4.MD.A.2-decimalUse the four operations to solve word problems involving distances,Intervals of time, liquid volumes, masses of objects, and money, includ-ing problems involving simple fractions or decimals, and problems thatrequire expressing measurements given in a larger unit in terms of asmaller unit. Represent measurement quantities using diagrams such asnumber line diagrams that feature a measurement scale.4.MD.A.2-fractionUse the four operations to solve word problems involving distances,Intervals of time, liquid volumes, masses of objects, and money, includ-ing problems involving simple fractions or decimals, and problems thatrequire expressing measurements given in a larger unit in terms of asmaller unit. Represent measurement quantities using diagrams such asnumber line diagrams that feature a measurement scale.4.MD.A.3 Apply the area and perimeter formulas for rectangles in real world andmathematical problems.Table 9: CC Standards for Grade 411Standard ID Description5.OA.A.1 Use parentheses, brackets, or braces in numerical expressions, and eval-uate expressions with these symbols.5.NBT.B.5 Fluently multiply multi-digit whole numbers using the standard algo-rithm.5.NBT.B.6 Find whole-number quotients of whole numbers with up to four-digit div-idends and two-digit divisors, using strategies based on place value, theproperties of operations, and/or the relationship between multiplicationand division. Illustrate and explain the calculation by using equations,rectangular arrays, and/or area models.5.NBT.B.7 Add, subtract, multiply, and divide decimals to hundredths, using con-crete models or drawings and strategies based on place value, propertiesof operations, and/or the relationship between addition and subtraction;relate the strategy to a written method and explain the reasoning used.5.NF.A.1 Add and subtract fractions with unlike denominators (including mixednumbers) by replacing given fractions with equivalent fractions in such away as to produce an equivalent sum or difference of fractions with likedenominators.5.NF.A.2 Solve word problems involving addition and subtraction of fractionsreferring to the same whole, including cases of unlike denominators, e.g.,by using visual fraction models or equations to represent the problem.Use benchmark fractions and number sense of fractions to estimatementally and assess the reasonableness of answers.5.NF.B.4 Apply and extend previous understandings of multiplication to multiplya fraction or whole number by a fraction.Table 10: CC Standards for Grade 5Standard ID Description6.NS.B.2 Fluently divide multi-digit numbers using the standard algorithm.6.NS.B.3 Add, subtract, multiply, and divide decimals to hundredths, using con-crete models or drawings and strategies based on place value, propertiesof operations, and/or the relationship between addition and subtraction;relate the strategy to a written method and explain the reasoning used.6.EE.A.1 Write and evaluate numerical expressions involving whole-number ex-ponents.6.EE.B.7 Solve real-world and mathematical problems by writing and solvingequations of the form x + p = q and px = q for cases in which p, q and xare all nonnegative rational numbers.Table 11: CC Standards for Grade 6Standard ID Description7.NS.A.1-fractionApply and extend previous understandings of addition and subtractionto add and subtract rational numbers; represent addition and subtractionon a horizontal or vertical number line diagram.7.NS.A.1-decimalApply and extend previous understandings of addition and subtractionto add and subtract rational numbers; represent addition and subtractionon a horizontal or vertical number line diagram.7.NS.A.2 Apply and extend previous understandings of multiplication and divisionand of fractions to multiply and divide rational numbers.7.NS.A.3-fractionSolve real-world and mathematical problems involving the four opera-tions with rational numbers.7.NS.A.3-decimalSolve real-world and mathematical problems involving the four opera-tions with rational numbers.Table 12: CC Standards for Grade 712Standard ID Description8.EE.A.2 Use square root and cube root symbols to represent solutions to equationsof the form x 2= p and x 3= p, where p is a positive rational number.Evaluate square roots of small perfect squares and cube roots of smallperfect cubes. Know that the square root of 2 is irrational.8.EE.C.7 Solve linear equations in one variable.8.EE.C.8 Analyze and solve pairs of simultaneous linear equations.Table 13: CC Standards for Grade 8Model GPT4-generated MathCAMPS accuracy Claude-generated MathCAMPS accuracyGPT-4o 0.910 0.954Claude 3 Opus 0.887 0.909Table 14: Performance of GPT-4o and Claude 3 Opus on the dataset genreated using ClaudeE Data generation pipeline detailsE.1 GrammarWe implemented a global attribute grammar in Python, where production rules are implemented asrecursive Python functions. Effectively, each CC standard has its own grammar, composed of piecesfrom components from the global CC grammar, as well as possibly adding unique non-terminals.Each CC standard contains the following parameters:Description: The description of the CC standard.Short description: A shortened description of the CC standard.Filters: A list of problem filters to ensure that all problems in this standard satisfy some requirementgiven in the Common Core description of the standard. The ProblemLength filter makessure that the problem is within the desired length. CheckIntermediateValues filters outany problems with intermediate values greater or lesser than max_value or min_value,respectively. The ChainsOfVariables filter eliminates any problems where variables areassigned to equal exactly another variable, and nothing else. The ContainsTen filter checksif the math word problem contains numbers adding up to 10, or contains a 10 in the problem(for standards K.OA.A.4 and K.NBT.A.1, respectively).Transforms: List of problem transformations applied to all symbolic structures from this standard.The NoUselessVariables transform performs dead code elimination — it removes anyvariables that do not contribute to the final answer by applying a simple graph reachabilityalgorithm on a dependency graph between statements, removing statements that the answerdoes not depend on. The Simplify transform essentially inlines variables that are used onlyonce.Expressions: Lists non-terminals available to generate expressions in symbolic structures for thisstandard. For example, this can make specific binary operations (e.g. addition, division)available on that particular standard.Min/max value: Specifies bounds on values for both the final answer and all intermediate values inthe solution.Min/max number: Specifies bounds on numeric constants sampled in the symbolic structure.Max depth: Sets a maximum depth for expressions in the symbolic structure.Samples: We include 2+ hand-written, standard-relevant examples of a symbolic problem followedby a relevant natural language problem generation, which we use as few-shot prompts duringproblem generation. We also use these prompts, but in reverse (natural language followedby symbolic problem), when we prompt GPT-4 during cycle consistency.13E.2 Answer Grading During EvaluationGiven a solution in natural language, we first use a rule-based answer extractor to extract any model’snumerical answer. In cases where a language model doesn’t answer in the required format, oranswers in an unexpected format, the answer is initially marked as incorrect. For all problems withincorrect answers, we use Llama-3 70B to re-extract the final answer. We few-shot prompt it withhand-generated examples of solutions and extracted final answers, and ask it to extract the finalanswer from the new solution. If a problem that was previously incorrect is marked as correct (giventhe newly extracted answer), we rerun the model on any followups the problem might have. Notethat this “regrading” step can only improve accuracy from the base result, since we only run it onproblems that failed under the rule-based evaluation. In practice, we found this process to havenegligible false-positive rate — only in a handful of cases across all models we observed eitheranswer extraction processes extracting the correct answer out of a wrong response (e.g., if the answerto a problem is 2, and the model responds “On day 2, Sally bought 9 dolls”, the rule-based parserextracts 2 as being the model’s answer, though the sentence implies its answer to be 9). On the otherhand, the LLaMA-3 70B extractor greatly reduces our false negative rate in a handful of models(especially DeepSeek) which are more likely to respond in a format different from what our promptasks for.E.3 Cost estimateAll problems in MathCAMPS were generated using OpenAI gpt-4-0613 , in May 2024. We estimatean approximate cost of 330 USD to generate 9607 problems (including main problems and follow-ups). This includes the cost to perform cycle consistency, and problems that are discarded by cycleconsistency. This gives an average cost of 0.034 USD (3.4 cents) per cycle-consistent problem orfollow-up question.F Correlation between MathCAMPS and GSM8kFigure 3 shows accuracies of several models on both GSM8k and MathCAMPS, along with theline of best fit. There is a strong correlation between overall accuracy in both datasets ( ρ= 0.91,p < 10−6), though MathCAMPS allows for many fine-grained analysis besides overall performance.G Largest Model Rank Changes When Focusing on One CC Standard(Complete Table)Table 15 shows the full table from which Table 3 was extracted.H Followup AnalysisTable 16 lists model accuracies when only looking at the main problems (Main Acc.), their accuracieswhen only looking at the incremental followups (IFUP Acc.), their accuracies when only lookingat the counterfactual followups (CFUP Acc.), and finally, the total number of followups seen byeach model. The total number of followups a model sees relies on whether or not they get the mainquestion for that followup correct. If a model does not correctly solve the main question, it is notprompted with follow-ups. Note that each followup serves as a followup to the main question, asopposed to a followup to each other.14Model Top outlier skill Rank changeGPT-4o 8.EE.C.8 - Solve two-variable systems ( 1st22th)Claude-3 Opus 2.MD.B.5 - Add/sub within 100 ( 2nd13th)Gemini-1.5 Pro K.OA.A.4 - Adding to equal 10 ( 3rd19th)Gemini-1.5 Flash 4.OA.B.4 - Factor pairs within 100 ( 4th20th)GPT-3.5 Turbo 6.EE.A.1 - Evaluate exponents ( 5th21th)Claude-3 Sonnet 2.NBT.B.5 - Add/sub within 100 ( 6th12th)Claude-3 Haiku 3.OA.A.4 - Determine unknowns in mul/div probs ( 9th1st)Llama 3 70B K.OA.A.4 - Adding to equal 10 ( 7th17th)Mixtral 8x22B 8.EE.C.8 - Solve two-variable systems ( 8th21th)DeepSeek 67B K.NBT.A.1 - Decompose into 10s ( 10th1st)Llama 3 8B 4.NBT.B.4 - Add/sub multi-digit nums ( 11th21th)Mixtral 8x7B 6.EE.A.1 - Evaluate exponents ( 12th20th)Llemma 34B K.OA.A.4 - Adding to equal 10 ( 13th1st)Mistral 7B 1.OA.A.1 - Add/sub within 20 ( 14th21th)DeepSeek Coder 33B 6.EE.A.1 - Evaluate exponents ( 15th3rd)CodeLlama 34B 5.NF.A.1 - Add/sub fractions ( 16th22th)phi-2 K.OA.A.4 - Adding to equal 10 ( 17th4th)Llemma 7B 6.EE.A.1 - Evaluate exponents ( 18th5th)Gemma 7B K.OA.A.5 - Add/sub within 5 ( 19th6th)CodeLlama 7B 8.EE.C.8 - Solve two-variable systems ( 21th15th)Gemma 2B 8.EE.C.8 - Solve two-variable systems ( 22th11th)Table 15: Largest changes in a model’s ranking when comparing its performance in a particular CCstandard, in contrast to only overall performance. This is a complete version of Table 3, which onlyshowed some models for brevity.Vendor Model Main Acc. IFUP Acc. CFUP Acc. Total FUPs seenAnthropic Claude-3 Opus 0.89 0.90 0.88 4142Anthropic Claude-3 Sonnet 0.86 0.86 0.87 3964Anthropic Claude-3 Haiku 0.84 0.88 0.87 3819DeepSeek DeepSeek Coder 33B 0.65 0.79 0.85 1022DeepSeek DeepSeek 67B 0.80 0.87 0.88 3286EleutherAI LLemma 7B 0.62 0.68 0.80 2890Google Gemini-1.5 Pro 0.89 0.91 0.89 4140Google Gemini-1.5 Flash 0.87 0.89 0.87 4083Google Gemma 2B 0.51 0.29 0.54 2044Google Gemma 7B 0.62 0.55 0.60 2786Meta Llama 3 8B 0.77 0.84 0.80 3476Meta Llama 3 70B 0.85 0.87 0.84 3939Meta CodeLlama 7B 0.52 0.69 0.86 617Meta CodeLlama 13B 0.58 0.75 0.80 2451Meta CodeLlama 34B 0.64 0.82 0.88 844Microsoft phi-2 0.63 0.48 0.78 2873Mistral Mistral 7B 0.68 0.72 0.80 3090Mistral Mixtral 8x7B 0.76 0.80 0.82 3439Mistral Mixtral 8x22B 0.84 0.86 0.83 3948OpenAI GPT-4o 0.92 0.92 0.90 4358OpenAI GPT-3.5 Turbo 0.87 0.85 0.86 4063Table 16: Accuracy of each model on incremental follow-up questions ( IFUP ) as well as on counter-factual follow-ups ( CFUP ). Note that these accuracies are not directly comparable, since models areonly evaluated on follow-ups to problems that they respond correctly to; thus, each accuracy shownhere is over a different subset of follow-up problems in MathCAMPS.15Figure 3: Relation between accuracy on GSM8k and on MathCAMPS.16 |
HuYSURUxs2 | Smaller, Weaker, Yet Better: Training LLM Reasonersvia Compute-Optimal SamplingHritik Bansal1,2, Arian Hosseini1,3, Rishabh Agarwal1,3, Vinh Q. Tran1, Mehran Kazemi11Google DeepMind,2UCLA,3MilaAbstractTraining on high-quality synthetic data from strong language models (LMs) is acommon strategy to improve the reasoning performance of LMs.In this work, werevisit whether this strategy is compute-optimal under a fixed inference budget (e.g.,FLOPs). To do so, we investigate the trade-offs between generating synthetic datausing a stronger but more expensive (SE) model versus a weaker but cheaper (WC)model. We evaluate the generated data across three key metrics: coverage, diversity,and false positive rate, and show that the data from WC models may have highercoverage and diversity, but also exhibit higher false positive rates. We then finetuneLMs on data from SE and WC models in different settings: knowledge distillation,self-improvement, and a novel weak-to-strong improvement setup where a weakerLM teaches reasoning to a stronger LM. Our findings reveal that models finetunedon WC-generated data consistently outperform those trained on SE-generated dataacross multiple benchmarks and multiple choices of WC and SE models. Theseresults challenge the prevailing practice of relying on SE models for synthetic datageneration, suggesting that WC may be the compute-optimal approach for trainingadvanced LM reasoners.24262830Pass@1 Accuracy (%)+6.0%Gemma-7B Finetuning (Knowledge distillation)363840+3.8%Gemma-9B Finetuning (Self-improvement)39414345+5.8%Gemma-27B Finetuning (Weak-to-strong improvement)MATH DatasetGround-truth data 27B data 9B data (compute-matched)(a) Finetuning LMs with Gemma2 data.Gemma-7B Gemma2-9B Gemma2-27BFinetuned Models26303438424650Pass@1 Accuracy (%)+31.6%+14.4%+10.9%Knowledge distillation with Gemini-1.5 data for MATHPro dataFlash data (price-matched) (b) Finetuning LMs with Gemini 1.5 data.Figure 1: Summary of the results. (a) We finetune Gemma-7B, Gemma2-9B, and Gemma2-27B onthe synthetic data collected from a stronger but more expensive LM (Gemma2-27B) and a weaker butcheaper LM (Gemma2-9B) in a compute-matched setup for the MATH dataset. We find that trainingwith Gemma2-9B data is a more compute-optimal setting across diverse finetuning paradigms –knowledge distillation, self-improvement, and weak-to-strong improvement (i.e. using a weakermodel to improve a stronger model). (b) We finetune Gemma models (7B/9B/27B) on synthetic datagenerated by the state-of-the-art LMs Gemini-1.5-Pro and Gemini-1.5-Flash in a price-matched setup.We find that finetuning with Flash-generated data consistently outperforms Pro-generated data.38th Conference on Neural Information Processing Systems (NeurIPS 2024) Workshop on MATH-AI.1 IntroductionLanguage models (LMs) have demonstrated impressive capabilities in reasoning tasks, but theirsuccess heavily relies on being trained on vast amounts of (problem, solution) pairs. Collecting thisdata from humans is a costly and time-consuming process. Recent studies have demonstrated thefeasibility of synthetically generating this data using LMs themselves, offering a potentially morescalable and efficient approach to training data acquisition. One such widely-adopted approachis to sample multiple candidate solutions for a problem from an LM, filters them for final answercorrectness, and finetune models on the correct solutions [ 55,34]. Practitioners often sample solutionsfrom strong LMs to ensure high quality [ 41,32,29,47]. However, sampling from strong LMs isexpensive which limits the number of solutions that can be generated for practical sampling budgets.In this paper, we explore an alternative sampling approach. Given a fixed compute budget, weinvestigate sampling from a weaker but cheaper (WC) model as opposed to the commonly-usedapproach of sampling from a stronger but more expensive (SE) model. We start by comparingdata from WC vs SE across three axes that play crucial roles in the utility of such synthetic data: 1-coverage , the number of unique problems that are solved, 2- diversity , the average number of uniquesolutions we obtain per problem, and 3- false positive rate (FPR) , the percentage of problems thatarrive at the correct final answer but with a wrong reasoning. We find that since we can generate moresamples from the WC model compared to the SE model under a fixed budget, the data from WCmay exhibit higher coverage and diversity. However, due to the lower quality of the WC model, itmay also have a higher FPR. As a particular example for the Gemma2 family [ 38,39] on the MATHdataset [ 15], Gemma2-9B achieves 11% higher coverage and 86% higher diversity, but also with 7%higher FPR compared to Gemma2-27B.We then fine-tune models on data from SE and WC at a fixed compute budget (see AppendixFigure 3 for illustration). We consider diverse setups corresponding to three paradigms: 1) knowledgedistillation , where a student LM learns from a teacher LM [ 18]; 2) self-improvement , where an LMlearns from self-generated data [ 20]; and 3) a new paradigm we introduce called Weak-to-StrongImprovement , where a strong student LM improves using synthetic data from a weaker teacher LM.Using two (WC, SE) model pairs, one from the Gemma2 family and another from the Gemini-1.5family [ 31], we show on multiple benchmarks that training on WC-generated data consistentlyoutperforms training on SE-generated data under the three setups, with relative gains of up to 31.6%percent (see Figure 1 for a summary of the results). Our results indicate that it is more compute-optimal to sample from a WC model as opposed to the common-practice of sampling from a SE model.With the performance gap between small and large LMs getting narrower over time (especially atlarger scales), our results establish a solid foundation for training the next generation of LM reasoners.2 Compute-Matched Sampling and TrainingWe present the background on synthetic data generation and supervised finetuning in Appendix §A.To generate synthetic solutions for the problems in the dataset, one can leverage different models asdata generators. Specifically, at a fixed sampling budget (FLOPs), one can generate more samplesfrom a weaker but cheaper (WC) model or fewer samples from a stronger but more expensive (SE)model. Given a WC model with PWCparameters and SE with PSEparameters, we compute thesampling ratio at a fix budget for the two models, focusing on decoder-only transformer models [ 43].Following [ 23], we note that the FLOPs per inference token is 2P, for a model with Pparameters. Asa result, the FLOPs for Tinference tokens is 2PT. Further, we assume that generating each solutionrequires an average of Winference tokens for both models. Let SWCandSSErepresent the numberof samples we generate per question for the two models. The total cost of generating samples for thedataset Dwill then be Cost WC=n×SWC×W×(2PWC)andCost SE=n×SSE×W×(2PSE)for the cheap and expensive models, respectively. At a fixed sampling budget, we have:n×SWC×W×(2PWC) =n×SSE×W×(2PSE)⇒ SWC=PSEPWCSSE (1)Equation 1 indicates that at a fixed sampling budget, for each question we can generate PSE/PWCmore samples from WC; the ratio scales linearly with the model parameters ratio. Sampling moresolutions from WC may increase the likelihood of correctly solving a larger subset of the problems(high coverage) and obtaining more correct solutions per question (high diversity).2Low HighSampling budget25334149576573coverage@cost (%)Coverage (MATH)27B9B (compute-matched)(a) Coverage on MATH.Low HighSampling budget024681012# correct solutions per questionDiversity (MATH)27B9B (compute-matched) (b) Diversity on MATH.Human Gemini-1.5Annotator1315171921232527Percentage (%)False Positive Rate (MATH)27B9B (compute-matched) (c) False Positive Rate on MATH.Figure 2: Synthetic data analysis for MATH dataset. The (a) coverage, (b) diversity, and (c) falsepositive rates for Gemma2-27B and Gemma2-9B on the MATH dataset, at two sampling budgets.3 Experiments and ResultsWe experiment with MATH and GSM-8K datasets, and collect synthetic data from Gemma2-9B(WC) and Gemma2-27B (SE) models (see Appendix C for other experimental details).3.1 Synthetic Data AnalysisCoverage: Here, we aim to understand the pros and cons of generating solutions from the WCand SE models at a fixed sampling budget. We present the coverage, diversity, and FPR for theMATH at the low and high sampling budgets in Figure 2. The results for GSM-8K are presentedin the Appendix – Figure 15. We find that in terms of coverage, the data from Gemma2-9B (WC)outperforms Gemma2-27B (SE) by 11% and6%at the low and high sampling budgets, respectively,for the MATH dataset, and 8%and1%for GSM-8K. This highlights that the higher number ofsamples for the WC model aids in solving more unique problems for both the reasoning datasets.We provide the coverage trends for diverse sampling budgets in Appendix J. Further, we provide aqualitative example that gets solved by repeated sampling from Gemma2-9B but remains unsolvedby Gemma2-27B at the fixed high sampling budget (Table 5).Diversity: The diversity for the data from Gemma2-9B is higher than Gemma2-27B by 86% and125% at the low and high sampling budgets for the MATH dataset, and 134% and158% at for theGSM-8K dataset. This implies that many unique reasoning chains in the synthetic data from the WCmodel lead to the correct solutions. We also observe that the absolute diversity scores are lower forMATH compared to GSM-8K at high sampling budget, indicating that models generate fewer correctsolutions for the more challenging datasets when using repeated sampling.FPR: Since we utilize the final answer correctness for filtering the synthetic data, it does not removethe solutions with incorrect intermediate reasoning steps. Our human evaluations suggest that theFPR for the WC-generated solutions is 7%and2%higher than SE-generated solutions on the MATHand GSM-8K, respectively. The trends from the automatic evaluation are similar to that of humanevaluation. Due to the differences in the difficulty of the problems, we note that the absolute FPRsare much lower for the GSM-8K dataset as compared to the MATH dataset. We also note that theautomatic verification of the reasoning steps can also have errors and is still an open problem [28].Given the mixed signals of high coverage and diversity coupled with a high FPR, it remains unclearwhether it is compute-optimal to sample from the WC model or the SE model for training strongreasoners. We study this in the next section.3.2 Compute-Optimality Results for TrainingStudent-LM Finetuning. Here, we aim to understand the merit of WC data compared to SE datafor distillation to a student LM (Gemma 7B in our experiment), given a fixed sampling budget. Theresults presented in Figure 1a show that the Gemma-7B finetuned with the synthetic data from WCconsistently outperforms the one finetuned on data from SC. We observe relative gains of 6%for3the MATH dataset. Contrary to the common belief of stronger models being better for knowledgedistillation, our results indicate that finetuning on data from WC is more compute-optimal than SE.WC-LM Finetuning. Prior work [ 34] has shown that finetuning a WC model through self-generateddata lags behind distillation from SE data. Here, we revisit this setup under the fixed samplingbudget and compare WC models finetuned with WC data vs SE data. Specifically, we compare theperformance of Gemma2-9B finetuned with the WC data (i.e. self-generated data) and SE data (i.e.data from Gemma2-27B). In Figure 1a, we observe that the self-generated data (WC data) improvesover knowledge distillation from a strong model (SE data), achieving relative gains of 3.8%for theMATH dataset. Interestingly, our findings suggest that training a WC model on its own synthetic datais more compute-optimal than distillation from a stronger model.SE-LM finetuning. It is commonly believed that to improve a SE model, we either need syntheticdata from the SE model itself or from an even stronger (and perhaps more expensive) model than SE.Here, we test an alternative novel approach, which we term weak-to-strong improvement (W2S-I) , tounderstand whether the synthetic data from the WC model can improve the SE model. We presentthe results for finetuning Gemma2-27B with the Gemma2-9B generated data and self-generated data(Figure 1a). Surprisingly, we observe that the model finetuned with the WC data outperforms theSE data, achieving relative gains of 5.8%for the MATH dataset. Contrary to the common beliefof self-generated data or data from a stronger model being better, our empirical findings show thattraining a model in a W2S-I setup from a WC data may be more compute-optimal than training it in aself-improvement setup on its own data. This result also establishes a new paradigm for improvingfrontier models in a compute-efficient way, by generating synthetic data from much smaller models.We present the results for more sampling budgets and datasets in Appendix Figure 4 and 5. Wepresent the generalization and ablation results in Appendix §D and §E, respectively.4 Scaling to state-of-the-art language modelsWe scale our method to sampling data from Gemini-1.5-Pro and Gemini-1.5-Flash. As the modelsizes are not publicly available, we utilize the ratio between their pricing per output token as a proxyto perform compute-matched sampling. As of August 2024, we note that the price per million outputtokens is $10.5and$0.3for Gemini-1.5-Pro and Gemini-1.5-Flash, respectively. Hence, we sample 1and35solutions per problem from 1.5-Pro and 1.5-Flash, respectively, for MATH dataset.We perform knowledge distillation on the Gemma-7B, Gemma2-9B, and Gemma2-27B LMs withthe synthetic data from the Pro (SE) and Flash (WC) models. We present the results in Figure 1b.Interestingly, we find that finetuning with the WC data outperforms the SE data, achieving relativegains of 31.6%,14.4%, and 10.9%for Gemma-7B, Gemma2-9B, and Gemma2-27B, respectively.This can be attributed to the difference in the coverage of the models at the fixed sampling budget,which is 61.1%and81% for 1.5-Pro and 1.5-Flash, respectively.Further, we investigate training the LMs with the WC data that is less expensive than collecting 1solution per problem from the SE model. Specifically, we create a dataset by sampling 5solutionsper problem from the Flash (WC) model, which is 7×more economical than generating 1solutionfrom the Pro (SE) model, in terms of the price ($). Upon training the LMs on the 0.15×cost dataregime, according to Figure 7 in the Appendix, we find that training on this data can also outperformtraining with SC data, achieving relative gains of 19.1%,9.8%, and 5.7%for finetuning Gemma-7B,Gemma2-9B, and Gemma2-27B, respectively. This can be attributed to higher coverage of the weakermodel ( 69%), even in the more economical scenario, in comparison to the stronger model ( 61.1%).5 ConclusionIn this work, we provide a framework for compute-optimal sampling from weak but cheap LM forreasoning tasks. We show that at a fixed sampling compute budget, repeated sampling from a smallermodel can achieve higher coverage and diversity than from a strong but more expensive model.Furthermore, we find that finetuning LMs with data from the small LM can consistently outperformdata from the large LM under the same compute budget. Our results can serve as a foundation fortraining LM reasoners, especially as the performance gap between small and large LMs continues tonarrow over time.4References[1]Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia LeoniAleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4technical report. arXiv preprint arXiv:2303.08774 , 2023.[2] Mistral AI. Au Large — mistral.ai. https://mistral.ai/news/mistral-large/ , 2024.[3] Anthropic. Claude 3.5 sonnet model card addendum. 2024.[4]Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, DavidDohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with largelanguage models. arXiv preprint arXiv:2108.07732 , 2021.[5]Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer,Albert Q Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open languagemodel for mathematics. arXiv preprint arXiv:2310.10631 , 2023.[6]Samuel R Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamil ̇eLukoši ̄ut ̇e, Amanda Askell, Andy Jones, Anna Chen, et al. Measuring progress on scalableoversight for large language models. arXiv preprint arXiv:2211.03540 , 2022.[7]Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré,and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeatedsampling. arXiv preprint arXiv:2407.21787 , 2024.[8]Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschen-brenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, et al. Weak-to-strong gener-alization: Eliciting strong capabilities with weak supervision. arXiv preprint arXiv:2312.09390 ,2023.[9]Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, JaredKaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating largelanguage models trained on code. arXiv preprint arXiv:2107.03374 , 2021.[10] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers tosolve math word problems. arXiv preprint arXiv:2110.14168 , 2021.[11] Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle,Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herdof models. arXiv preprint arXiv:2407.21783 , 2024.[12] Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts,Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, et al. Reinforcedself-training (rest) for language modeling. arXiv preprint arXiv:2308.08998 , 2023.[13] Tom Gunter, Zirui Wang, Chong Wang, Ruoming Pang, Andy Narayanan, Aonan Zhang, BowenZhang, Chen Chen, Chung-Cheng Chiu, David Qiu, et al. Apple intelligence foundationlanguage models. arXiv preprint arXiv:2407.21075 , 2024.[14] Michael Hassid, Tal Remez, Jonas Gehring, Roy Schwartz, and Yossi Adi. The larger the better?improved llm code-generation via budget reallocation. arXiv preprint arXiv:2404.00725 , 2024.[15] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, DawnSong, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset.arXiv preprint arXiv:2103.03874 , 2021.[16] HF. Open LLM Leaderboard 2 - a Hugging Face Space by open-llm-leaderboard — hug-gingface.co. https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard , 2024.[17] HF. SmolLM - blazingly fast and remarkably powerful — huggingface.co. https://huggingface.co/blog/smollm , 2024.5[18] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network.arXiv preprint arXiv:1503.02531 , 2015.[19] Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, andRishabh Agarwal. V-star: Training verifiers for self-taught reasoners. arXiv preprintarXiv:2402.06457 , 2024.[20] Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and JiaweiHan. Large language models can self-improve. arXiv preprint arXiv:2210.11610 , 2022.[21] Mojan Javaheripi, Sébastien Bubeck, Marah Abdin, Jyoti Aneja, Sebastien Bubeck, CaioCésar Teodoro Mendes, Weizhu Chen, Allie Del Giorno, Ronen Eldan, Sivakanth Gopi, et al.Phi-2: The surprising power of small language models. Microsoft Research Blog , 2023.[22] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, ChrisBamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand,et al. Mixtral of experts. arXiv preprint arXiv:2401.04088 , 2024.[23] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child,Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural languagemodels. arXiv preprint arXiv:2001.08361 , 2020.[24] Mehran Kazemi, Nishanth Dikkala, Ankit Anand, Petar Devic, Ishita Dasgupta, Fangyu Liu,Bahare Fatemi, Pranjal Awasthi, Dee Guo, Sreenivas Gollapudi, et al. Remi: A dataset forreasoning with multiple images. arXiv preprint arXiv:2406.09175 , 2024.[25] Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin Xu, and Deepak Ramachandran. Lam-bada: Backward chaining for automated reasoning in natural language. arXiv preprintarXiv:2212.13894 , 2022.[26] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Largelanguage models are zero-shot reasoners. Advances in neural information processing systems ,35:22199–22213, 2022.[27] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, VinayRamasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solvingquantitative reasoning problems with language models. Advances in Neural InformationProcessing Systems , 35:3843–3857, 2022.[28] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, JanLeike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprintarXiv:2305.20050 , 2023.[29] Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, andAhmed Awadallah. Orca: Progressive learning from complex explanation traces of gpt-4. arXivpreprint arXiv:2306.02707 , 2023.[30] Saurav Muralidharan, Sharath Turuvekere Sreenivas, Raviraj Joshi, Marcin Chochowski,Mostofa Patwary, Mohammad Shoeybi, Bryan Catanzaro, Jan Kautz, and Pavlo Molchanov.Compact language models via pruning and knowledge distillation. arXiv preprintarXiv:2407.14679 , 2024.[31] Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al.Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXivpreprint arXiv:2403.05530 , 2024.[32] Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan,Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation modelsfor code. arXiv preprint arXiv:2308.12950 , 2023.[33] Zhihong Shao, Damai Dai, Daya Guo, Bo Liu, and Zihan Wang. Deepseek-v2: A strong,economical, and efficient mixture-of-experts language model. ArXiv , abs/2405.04434, 2024.6[34] Avi Singh, John D Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Peter J Liu, JamesHarrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, et al. Beyond human data: Scaling self-trainingfor problem-solving with language models. arXiv preprint arXiv:2312.06585 , 2023.[35] Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute opti-mally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314 ,2024.[36] Yifan Song, Guoyin Wang, Sujian Li, and Bill Yuchen Lin. The good, the bad, and the greedy:Evaluation of llms should not ignore non-determinism. arXiv preprint arXiv:2407.10457 , 2024.[37] Saurabh Srivastava, Anto PV , Shashank Menon, Ajay Sukumar, Alan Philipose, Stevin Prince,Sooraj Thomas, et al. Functional benchmarks for robust evaluation of reasoning performance,and the reasoning gap. arXiv preprint arXiv:2402.19450 , 2024.[38] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, ShreyaPathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Openmodels based on gemini research and technology. arXiv preprint arXiv:2403.08295 , 2024.[39] Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin,Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé,et al. Gemma 2: Improving open language models at a practical size. arXiv preprintarXiv:2408.00118 , 2024.[40] Qwen Team. Introducing Qwen1.5 — qwenlm.github.io. https://qwenlm.github.io/blog/qwen1.5/ , 2024.[41] Teknium. Openhermes 2.5: An open dataset of synthetic data for generalist llm assistants, 2023.[42] Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang,Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems withprocess-and outcome-based feedback. arXiv preprint arXiv:2211.14275 , 2022.[43] Ashish Vaswani. Attention is all you need. arXiv preprint arXiv:1706.03762 , 2017.[44] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, AakankshaChowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in languagemodels. arXiv preprint arXiv:2203.11171 , 2022.[45] Tianhao Wu, Weizhe Yuan, Olga Golovneva, Jing Xu, Yuandong Tian, Jiantao Jiao, JasonWeston, and Sainbayar Sukhbaatar. Meta-rewarding language models: Self-improving alignmentwith llm-as-a-meta-judge. arXiv preprint arXiv:2407.19594 , 2024.[46] xAI. Grok-1 Model Card — x.ai. https://x.ai/blog/grok/model-card , 2024.[47] Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, andDaxin Jiang. Wizardlm: Empowering large language models to follow complex instructions.arXiv preprint arXiv:2304.12244 , 2023.[48] An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li,Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2 technical report. arXiv preprintarXiv:2407.10671 , 2024.[49] Yuqing Yang, Yan Ma, and Pengfei Liu. Weak-to-strong reasoning. arXiv preprintarXiv:2407.13647 , 2024.[50] Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li,Jiangcheng Zhu, Jianqun Chen, Jing Chang, et al. Yi: Open foundation models by 01. ai. arXivpreprint arXiv:2403.04652 , 2024.[51] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok,Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematicalquestions for large language models. arXiv preprint arXiv:2309.12284 , 2023.7[52] Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, andJason Weston. Self-rewarding language models. arXiv preprint arXiv:2401.10020 , 2024.[53] Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and WenhuChen. Mammoth: Building math generalist models through hybrid instruction tuning. arXivpreprint arXiv:2309.05653 , 2023.[54] Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, and Noah D Goodman.Quiet-star: Language models can teach themselves to think before speaking. arXiv preprintarXiv:2403.09629 , 2024.[55] Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning withreasoning. Advances in Neural Information Processing Systems , 35:15476–15488, 2022.[56] Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H Chi, Quoc VLe, and Denny Zhou. Take a step back: Evoking reasoning via abstraction in large languagemodels. arXiv preprint arXiv:2310.06117 , 2023.8A PreliminariesLetD={qi, ai}i=ni=1be a training dataset of size nwith reasoning questions qiand final answers(aka labels) ai. A successful approach to leverage such data to improve models for reasoning is asfollows. We sample multiple solutions for each qiat a non-zero temperature and create the syntheticdataDG={qi,{(ˆrij,ˆaij)j=kj=1}}, where kis the number of samples, ˆrijis the j-th reasoning chain(i.e. solution) generated by the model for qi, and ˆaijis the model’s final answer for qiin the j-thsample. Then, we filter the incorrect solutions by comparing ˆaijtoaiand removing the solutionswhose final answer do not match that of the gold answer1. Finally, we supervise finetune a model onthe remaining data ̃DG. This approach was first proposed in [ 55] and was then extended in multipleworks including [54, 34].For a dataset DG, we compute coverage @k(akapass@k) [9] asEDGh1−M−ck/Mkiwhere cisthe number of solutions, out of M, with correct answers and EDG[.]denotes the expectation over theproblems and solutions in the generated dataset. Conceptually, coverage @kmeasures the fractionofunique questions that have at least one correct solution, assuming that we sample ksolutionsper question from the model. We also define diversity @kas the average number of unique correctsolutions we obtain per question when we sample ksolutions per question. Finally, we define falsepositive rate (FPR) as the percentage of solutions in ̃DGwhere the reasoning is incorrect, despite thefinal answer being correct.# samples = KWeaker and CheapLM (PWC params)Stronger and Expensive LM (PSE params)# samples= N x KN = PSE/PWCFinetuned LM (FSE)Finetuned LM (FWC)Accuracy of FWC > FSEFigure 3: Illustration of the approach. Given a fixed sampling budget, one can either generate fewer samplesfrom a stronger but more expensive (SE) model or more samples from a weaker but cheaper (WC) model. Thelatter may lead to solving a wider range of problems and also more correct solutions per question. We comparethe utility of these two synthetically generated datasets for training LM reasoners in various supervised finetuningsetups and show that training with the data from WC consistently outperforms training on data from SE.Data (↓) / Finetuning setup ( →) Student-LM WC-LM SE-LMWC (Compute-matched) Knowledge distillation Self-improvement Weak-to-strong improvementSE Knowledge distillation Knowledge distillation Self-improvementTable 1: Summary of the supervised finetuning setups. We finetuned the language models under threesetups: (a) Student LM, (b) Weak-Cheap (WC) LM, and (c) Strong-Expensive (SE) LM. For each setup, weemployed different finetuning paradigms based on the source of the synthetic data. For example, training aseparate student LM with data from both WC and SE models falls under the knowledge distillation paradigm.In contrast, training a WC model with its own samples is self-improvement. Finally, we also introduce a newparadigm, weak-to-strong improvement, where the samples from the WC model is used to improve the reasoningcapabilities of the SE model at the fixed compute budget.1While it is possible to use more sophisticated approaches for filtering (e.g., process-based or outcome-basedreward model [ 42]), in this work we focus on final answer correctness for filtering as it has shown to be strong.9B Finetuning setups•Student-LM finetuning : Conventionally, the supervised finetuning data for training studentLM is acquired from SE models to ensure high-quality [ 41]. However, we aim to understandwhether WC models can replace SE models for distillation at the fixed sampling budget. Todo so, we finetune a student LM separate from the WC and SE models on the WC and SEdata, which corresponds to distillation in both the cases.•WC-LM finetuning : Prior work [ 34] has shown that finetuning a WC model throughself-generated data lags behind distillation from SE data. However, their setup spends ahigher sampling budget (FLOPs) on collecting data from the SE model than collecting SIdata from the WC model. In this work, we revisit this finetuning setup under the fixedsampling budget and finetune the WC model on the WC and SE data at a fixed budget forboth. Note that training the WC model on its own data corresponds to self-improvementwhereas training WC on the data from SE corresponds to distillation. Hence, this setupcompares self-improvement on WC data with distillation from SE data.•SE-LM finetuning : It is commonly believed that to improve a SE model, we either needsynthetic data from the SE model itself or from an even stronger (and perhaps more ex-pensive) model than SE. Here, we test an alternative approach to understand whether thesynthetic data from the WC model can improve the SE model. To this end, we finetune theSE model on the WC and SE data. Training SE on data from WC corresponds to W2S-I andtraining SE on data from SE corresponds to self-improvement. Overall, this setup comparesW2S-I by WC data with self-improvement by SE data.A summary of the three setups and the finetuning paradigms that each case corresponds to can befound in Table 1.C Experimental SetupDatasets: We utilize MATH [ 15] and GSM-8K [ 10] as the reasoning datasets due to their wideadoption for mathematical problem solving. Specifically, MATH consists of competition levelproblems with various levels of difficulty (Level 1-5), and GSM-8K comprises of grade school levelmath problems. Each dataset contains 7500 math problems in their training split. We evaluate themodels on 500problems from the MATH test split [ 28] and 1319 problems from the GSM-8K testsplit. Further, we use 500problems from the MATH test split and 500problems from GSM-8K asthe validation dataset. We also use the Functional MATH dataset [ 37] for a transfer study. Further,we present the results for a coding task in Appendix H.Data Generation: We use Gemma2 models for synthetic data generation, with pretrained Gemma2-9B and Gemma2-27B acting as the WC and SE models respectively. We generate the solutions forthe problems in the MATH using a 4-shot prompt and for GSM-8K using an 8-shot prompt. Since the9B model is roughly 3 times smaller than the 27B model, at a fixed sampling compute budget we cansample 3×more sample solutions per problem for Gemma2-9B. For our experiments, we considertwo sampling budgets: a low budget , where we generate 1 and 3 candidate solutions per problemfrom Gemma2-27B and Gemma2-9B, respectively, and a high budget , where we generate 10 and 30candidate solutions per problem. Further, we study the transfer of the reasoning capabilities for themodels trained on MATH at the high sampling budget on the Functional MATH dataset.Model Finetuning: We summarize the details for our finetuning setups in the Table 1. In theStudent-LM finetuning setup, we finetune the Gemma-7B model [ 38] on data from Gemma2-9B(WC) and Gemma2-27B (SE). In addition, we use Gemma2-9B and Gemma2-27B for the WC-LMand SE-LM finetuning setups, respectively. Further, we train the LMs across different setups with thehuman-written solutions as a ground-truth baseline. We provide the finetuning details in Appendix M.Synthetic Data Evaluation: To assess the quality of the synthetic data from the SE and WC models,we measure the false positive rate, as well as coverage anddiversity at a fixed cost. From Equation 1,we know that sampling one solution from SE takes the same FLOPs as sampling PSE/PWCsolutionsfrom WC. Therefore, we compare coverage @kfor SE to coverage @(PSEPWCk)for WC to allow asimilar budget to both models. Specifically, we compare coverage @kandcoverage @3kfor ourSE and WC models. Similarly we compare diversity @kanddiversity @3kfor our SE and WC10Low HighSampling Budget242628303234Pass@1 Accuracy (%)Student-LM Finetuning (MATH)Ground-truth27B9B (compute-matched)(a) Finetuning Gemma-7B.Low HighSampling Budget36373839404142Pass@1 Accuracy (%)WC-LM Finetuning (MATH)Ground-truth27B9B (compute-matched) (b) Finetuning Gemma2-9B.Low HighSampling Budget394143454749Pass@1 Accuracy (%)SE-LM Finetuning (MATH)Ground-truth27B9B (compute-matched) (c) Finetuning Gemma2-27B.Figure 4: Supervised-finetuning results (MATH). The results for finetuning various LMs on theMATH synthetic data from the WC (Gemma2-9B) and SE (Gemma2-27B) models, at a fixed samplingbudget. We observe that training with the samples from the WC model consistently outperformstraining with SE data.Low HighSampling Budget68707274767880Pass@1 Accuracy (%)Sep-LM Finetuning (GSM-8K)Ground-truth27B9B (compute-matched)(a) Finetuning Gemma-7B.Low HighSampling Budget77798183Pass@1 Accuracy (%)WC-LM Finetuning (GSM-8K)Ground-truth27B9B (compute-matched) (b) Finetuning Gemma2-9B.Low HighSampling Budget8082848688Pass@1 Accuracy (%)SE-LM Finetuning (GSM-8K)Ground-truth27B9B (compute-matched) (c) Finetuning Gemma2-27B.Figure 5: Supervised-finetuning results (GSM-8K). The results for finetuning various LMs onthe GSM-8K synthetic data from the WC (Gemma2-9B) and SE (Gemma2-27B) models, at a fixedsampling budget. We observe that training with samples from the WC model leads to strongerreasoners than training with SE data.models. Since FPR cannot be computed automatically, we compute it using two proxies: 1- a humanevaluation on a subset of the data, where 50solutions from each model were selected randomly andrated for reasoning correctness by the authors, and 2- automatic evaluation where we sampled 500solutions and prompted Gemini-Pro-1.5 [ 31] to rate the correctness of the reasoning paths. To samplesolutions, for the MATH dataset we selected uniformly from each diversity level. In our experiments,we find that the FPR estimates are close to each other for the human and automatic evaluation. Weprovide a few qualitative examples for the false positive instances in Appendix I.Evaluating Finetuned Models: We use pass@1 accuracy to evaluate the performance of the finetunedLMs. Specifically, we generate a single solution for the problem (zero-shot) from the test split, usinga sampling temperature of 0.0(greedy decoding) for the fine-tuned LM and measure the percentageof problems that where the final answer matches the golden final answer. We also report maj@k(k= 1,4,8,16) for part of our experiments, where we generate ksolutions per problem at a samplingtemperature of 0.7 and select the final answer that appears most among the ksamples.D GeneralizationHere, we aim to study the transfer capabilities of the models trained with the WC and SE data.Specifically, we evaluate the models finetuned with the synthetic solutions for the MATH datasets atthe high sampling budget on the Functional MATH dataset. The results in Figure 6 show that theGemma-7B finetuned with the WC data consistently outperforms the SE data, where the relativegains range from 5.8%−6.5%at different values of k. In addition, we observe that the Gemma2-9B111 4 8 16k283134374043Maj@k (%)Student-LM Finetuning (Functional MATH)27B9B (compute-matched)(a) Gemma-7B evaluation.1 4 8 16k384144475053Maj@k (%)WC-LM Finetuning (Functional MATH)27B9B (compute-matched) (b) Gemma2-9B evaluation.1 4 8 16k4447505356Maj@k (%)SE-LM Finetuning (Functional MATH)27B9B (compute-matched) (c) Gemma2-27B evaluation.Figure 6: Generalization Results (Functional MATH). The performance of the models trained withthe synthetic data from the MATH data at high sampling budget on the Functional MATH dataset.The results suggest that training with WC data enhances the generalization capabilities over the SEdata, at a fixed sampling budget.finetuned with the self-generated data outperforms knowledge distillation with the Gemma2-27B dataachieving relative gains ranging from 2.5%−4.5%at different values of k. Moreover, finetuningGemma2-27B with WC data matches closely with the SE data, except for k= 8where the gap is arelative gain of 2%. Our results highlight that finetuning the LMs with the WC data enhances thegeneralization capabilities over the SE data at the fixed sampling budget.Gemma-7B Gemma2-9B Gemma2-27BFinetuned Models26303438424650Pass@1 Accuracy (%)+19.1%+31.6%+9.8%+14.4%+5.7%+10.9%Knowledge distillation with Gemini-1.5 data for MATHPro dataFlash data (cost: 0.15x of Pro)Flash data (cost: 1x of Pro)Figure 7: We finetune Gemma models (7B/9B/27B) on synthetic data generated by the state-of-the-art LMs Gemini-1.5-Pro and Gemini-1.5-Flash. We find that finetuning with Flash-generateddata consistently outperforms Pro-generated data not only at the same sampling monetary cost asGemini-1.5-Pro, but also at ≈0.15×of the cost.E Ablation StudiesImpact of Dataset Size: We study whether the benefits of the synthetic data from the WC model holdat different dataset sizes. We repeat our experiments for the MATH dataset at the high budget, butwhen only having access to 500training data (selected randomly from the training set). We presentthe results for the finetuned models in Figure 8. We observe that models trained with the WC dataoutperform those trained with the SE data, achieving relative gains of 12.93%,11.4%, and 5.1%forthe three paradigms, respectively. This highlights the utility of generating more data from the WCmodel instead of the SE model in the low-problem regimes at the fixed sampling budget.Default vs Compute-Optimal Sampling from Cheap LMs: We anticipate that the reason whydata from SE models has been previously preferred over data from WC is because they have beentested in a setup where an equal number of samples have been generated from the two models(e.g., see [ 34]), as opposed to a compute-matched setup. To verify this, we generated 1solution perproblem (number-matched) from the WC model for the MATH and GSM-8K datasets and trainedthe models under the three fine-tuning setups on this generated data, after filtering for final answercorrectness. We then compare the performance of the models trained with synthetic data, where12500 7500# problems in the dataset22242628303234Pass@1 Accuracy (%)Student-LM Finetuning27B9B (compute-matched)(a) Finetuning Gemma-7B.500 7500# problems in the dataset35373941Pass@1 Accuracy (%)WC-LM Finetuning27B9B (compute-matched) (b) Finetuning Gemma2-9B.500 7500# problems in the dataset394143454749Pass@1 Accuracy (%)SE-LM Finetuning27B9B (compute-matched) (c) Finetuning Gemma2-27B.Figure 8: Impact of the dataset size. The performance of finetuned LMs on the synthetic data fromWC and SE models, at different sizes of the training set. Training with the WC data leads to bettermodels than training with the SE data at both dataset sizes.Student-LM WC-LM SE-LMFinetuning setups20242832364044Pass@1 Accuracy (%)Ablation: number vs compute-matched (MATH)9B (number-matched)27B9B (compute-matched)(a) Finetuning LMs on MATH data.Student-LM WC-LM SE-LMFinetuning Setups6467707376798285Pass@1 Accuracy (%)Ablation: number vs compute-matched (GSM-8K)9B (number-matched)27B9B (compute-matched) (b) Finetuning LMs on GSM-8K data.Figure 9: Comparison between number-matched sampling and compute-matched sampling from the WCmodel. We report the results for finetuning diverse LMs with synthetic data from WC and SE model at the lowsampling budget. Conventionally, practitioners would compare the performance of the models trained with WCdata and SE data at the fixed number of samples from both models. However, we observe larger gains using thesamples from WC model that acquired at the fixed sampling budget as that of SE model.we generate 3solutions per problem from the WC model, matched in sampling compute to the SEmodel. We present the results in Figure 9. We see that the models trained with the number-matchedWC data are sub-optimal in comparison to the models trained with the compute-matched WC data,and lead to worse models compared to training with the SE data. This highlights that the futurecomparisons between synthetic data from weak and strong models should be made in the samplingcompute-matched regime.Coverage and Diversity: We aim to understand the role of coverage and diversity in enhancing theperformance of models trained with WC-generated synthetic data. To this end, for the MATH dataset,we consider the original high-sampling (30 solutions per problem) WC dataset as a (high coverage,high diversity) dataset. We then construct a (high coverage, low diversity) version by only selectingone correct solution per question from our samples. This reduces the diversity of the original WCdataset from 11 to 1, while maintaining the coverage. We also create a (low coverage, low diversity)dataset where we generate just one solution per problem from the WC model and filter it for thecorrectness of the final answer. The coverage of this dataset ( 27%) is lower than that of the WCdataset with 30 solutions per problem ( 43%). We train models across the three finetuning setups onthese sets and present the results in Figure 10. Our results indicate that across all setups, the highcoverage and high diversity data is better than high coverage and low diversity, and high coverageand low diversity is better than low coverage and low diversity. This reveals that both the coverageand diversity play a critical role in training strong reasoners from the smaller LMs.13Student-LM WC-LM SE-LMFinetuning setups2125293337414549Pass@1 Accuracy (%)Ablation: Role of coverage and diversitylow coverage, low diversityhigh coverage, low diversityhigh coverage, high diversityFigure 10: Understanding the role of coverage and diversity for training strong reasoners with WC model.We compare the performance of training the LMs with synthetic data acquired by collecting (a) 1 solution perproblem (low diversity, low coverage), (b) 30 solutions per problem (high diversity, high coverage), and (c) 30solutions per problem but keeping just one correct solution (high coverage, low diversity). We find that bothhigh diversity and coverage are helpful for training strong reasoners.Nov-2023 Feb-2024 Mar-2024April-2024Jun-2024Jul-2024Aug-2024203040506070MATH Performance (%)Qwen1.5 (7B)Gemma1 (7B)Qwen2 (7B)Qwen2 (1B)LLaMA3 (8B)Gemma2 (9B)LLaMA3 (70B)Gemma2 (27B)Grok-1 (78B)Mixtral (22B)DeepSeekv2 (21B)Qwen1.5 (72B)Qwen2 (72B)Yi (34B)Variation in reasoning capabilities over time for open language modelsSmall LM (1B-9B)Large LM (20B-80B)Figure 11: Variation in the performance of open language models on the MATH dataset overtime. The fitted trendlines suggest that the quality of smaller LMs is improving more rapidly thanthat of larger LMs over time. This highlights that our findings on utilizing smaller LMs for trainingstrong reasoners will become increasingly relevant in the future.F A Future PerspectiveWe showed that for the current WC and SE models, training reasoners through sampling from WCmodels may be more compute-optimal. Here, we aim to discuss the relevance of these results for thefuture set of WC and SE models. To do so, we surveyed 17LMs that pass the following criteria: 1-the model size is known and falls within [1B, 9B] or [20B, 80B] range, 2- the model is released in thepast one year, 2- the technical report of the model reports results on the MATH dataset and the modelis capable on it ( >20%), 4- ranks high on the OpenLLM leaderboard under the pretrained modelscategory [ 16]. This resulted in models from seven families including Gemma-2 [ 39], LLaMA-3 [ 11],Mixtral [ 22], Qwen [ 40,48], Grok-1 [ 46], DeepSeek-v2 [ 33], and Yi [ 50]. We grouped these modelsinto small LM (1B to 9B) and large LMs (20B to 80B). We then plotted in Figure 11 the modelperformances on the MATH dataset against their date of the publication release on arxiv and fittedtrendlines to the data points representing the small and large LMs using the least squares method2.2We consider the number of active model parameters for mixture-of-experts LMs.14Our analysis reveals that, despite the variance, the trendline for the smaller LMs is steeper than thatof the larger LMs. This indicates that the reasoning performance of the small LMs may be improvingmore rapidly over time compared to the larger LMs. The rapid rise in the performance of the smallLMs can be attributed to factors such as the enhanced quality and scale of the pretraining data (e.g.,LLaMA-3 employs 15T tokens), pruning and knowledge distillation [ 30]. With the performance gapbetween small and large LMs narrowing over time, we anticipate that our results will become evenmore relevant in the future.G Related WorkLMs for reasoning. The ability to solve reasoning tasks has been a long standing goal of artificialintelligence [ 31,1,11,40,3,2]. In this regard, LMs trained on the internet-scale data have achievedgreat success for math, code, and other reasoning tasks [ 27,5,24]. There have been several worksthat aim to enhance the reasoning capabilities of the LMs either via prompting [ 26,44,56,25] orfinetuning [ 53,51]. In this work, we focus on finetuning the LMs with task-specific datasets to buildstrong reasoners. Specifically, our method closely aligns with the widely adopted STaR [ 55] wherethe synthetic data from the LMs are used to elicit strong reasoning capabilities.Finetuning LMs. Within the finetuning paradigm, there have been several works that improvereasoning with synthetic data. Broadly, these works focus on knowledge distillation from a strong butexpensive LM [ 45,53] or self-improvement [ 12,34]. While it is common to filter the synthetic datafor the final answer correctness (akin to [ 55]), there are several works that aim to build task-specificverifiers to train strong reasoners [ 28,45,19,52]. In this work, we explore the utility of the syntheticdata from the weak but cheap LMs for training strong reasoners via knowledge distillation as well asself-improvement. However, we do not explore using model-based verifiers with the synthetic datafor enhanced reasoning, and leave it as a future work.Our weak-to-strong improvement paradigm, where a strong model is trained with the generationsfrom the weak model, is related to several prior work [ 6,8,49] which study the ability of a strongLM to learn from the data generated by a weaker LM. However, the aim of these works is to recoverthe full capabilities of the strong model from weaker data, whereas we aim to enhance the strongmodel capabilities further. Additionally, our work studies compute-optimal sampling from weak andstrong models, which is absent in previous work.Large and small LMs. While training large LMs has led to significant advancements acrossvarious tasks, there has recently been a growing interest in developing capable small LMs [ 17,21].Specifically, a capable small LM is faster to run, and easier to serve to millions of users on the edgedevices [ 13]. As a result, several recent works aim to understand the utility of the weak but cheaperLMs in comparison to the strong but expensive LMs for reasoning. Specifically, [ 7,36,35] show thatthe solve rate of the small LMs can increase significantly with repeated sampling. In addition, [ 14]demonstrate that repeated generations from smaller LMs can outperform the data generated by largerLMs at a fixed sampling computational budget during inference for coding tasks. In this work, we gobeyond these works and show the utility of the synthetic data from the small LMs for training strongreasoners across a diverse set of supervised finetuning setups.H Extending our results to coding tasksHere, we aim to understand the utility of the synthetic data from the Gemma2-9B (WC) and Gemma2-27B (SE) model on coding tasks. To this end, we generate candidate solutions for the MBPP [ 4]dataset from WC and SE models at the low and high sampling budgets and finetune models in threesetups on these data. We use the santizied version of MBPP3containing 427problems overall; weused 3problems for fewshot prompting (used for sampling from the models), 324problems forsynthetic training data generation, and 100problems for validation. The candidate solutions arefiltered by the unit tests that accompany each instance of the dataset. After finetuning, we evaluatethe LMs on 164problems from the HumanEval dataset [9].We compare the coverage and diversity of the synthetic datasets in Figure 12 and observe that thecoverage of the WC model is higher than SE at low data regime while it is similar to SE in the high3https://huggingface.co/datasets/google-research-datasets/mbpp/viewer/sanitized15Low HighSampling budget5559636771757983879195coverage@cost (%)Coverage (MBPP)27B9B (compute-matched)(a) Coverage on MBPP.Low HighSampling budget0369121518# correct solutions per questionDiversity (MBPP)27B9B (compute-matched) (b) Diversity on MBPP.Figure 12: Synthetic data analysis for MBPP dataset. We present the (a) coverage, and (b) diversityfor a subset of the santized MBPP dataset for Gemma2-27B and Gemma2-9B at two fixed samplingbudgets.Low HighSampling Budget404346495255586164Pass@1 Accuracy (%)Student-LM Finetuning (MBPP->HumanEval)Ground-truth27B9B (compute-matched)(a) Finetuning Gemma-7B.Low HighSampling Budget52545658606264Pass@1 Accuracy (%)WC Finetuning (MBPP->HumanEval)Ground-truth27B9B (compute-matched) (b) Finetuning Gemma2-9B.Low HighSampling Budget56596265687174Pass@1 Accuracy (%)SE-LM Finetuning (MBPP->HumanEval)Ground-truth27B9B (compute-matched)(c) Finetuning Gemma2-27B.Figure 13: Supervised-finetuning with MBPP and evaluation on HumanEval. We report theresults for finetuning diverse language models on the MBPP synthetic data from the SE model(Gemma2-9B) and WC model (Gemma2-27B) at the fixed sampling budgets.sampling budget regime. In addition, we find that the diversity of the WC model is more than thatof the SE model for the low and high sampling budgets. Subsequently, we finetune Gemma-7B,Gemma2-9B, and Gemma2-27B models with the ground-truth and synthetic datasets and evaluate onHumanEval (Figure 13). Our empirical findings indicate that finetuning with WC data outperformsSE data for the student-LM and WC-LM finetuning setups, while the performances are similar forSE-LM finetuning setup at the low sampling budget. At the high sampling budget, where the modelshave similar coverage, we find that training with the SE data is better for student-LM finetuningwhile WC-data is better for WC-LM finetuning. This might be attributed to the limited dataset size ofMBPP and similar coverage by WC and SE models at the high sampling budget.I Qualitative ExamplesWe present a few qualitative examples for model-generated solutions that lead to the correct finalanswer with incorrect (or correct) reasoning for the MATH dataset. We provide two bad reasoningexamples in Table 2 and Table 3. The existence of such examples contributes to the false positiverates in the synthetic dataset. In addition, we provide a good reasoning example in Table 4.J Coverage TrendsWe present the coverage trends for diverse sampling budgets on the MATH and GSM-8K dataset inFigure 14a and Figure 14b, respectively.16Problem:When three standard dice are tossed, the numbers a, b, care obtained. Find the probability that abc= 180.Ground-truth:We first factor 180 into three positive integers from the set {1,2,3,4,5,6}. Since 180>53= 125 ,at least oneof the integers must be 6. Since 180>52·6 = 150 , at least two integers must equal 6. Indeed, 180 = 5 ·6·6is the only such way to factor 180. Therefore, (a, b, c ) = (5 ,6,6),(6,5,6),(6,6,5)are the only possibilities fora, b, c. Each occurs with probability163=1216, so the probability that abc= 180is3·1216=172.Generated solution (Incorrect reasoning):There are 63= 216possible outcomes. The numbers 1,3,6are the only possible values for a, b, c. If a= 1, thenb= 3andc= 6, so the probability of this outcome is1216. If a= 3, then b= 1andc= 6, so the probabilityof this outcome is also1216. If a= 6, then b= 3andc= 1, so the probability of this outcome is also1216.Therefore, the probability that abc= 180is1+1+1216=172.Final Answer: The final answer is172. I hope it iscorrect.Table 2: Qualitative example for incorrect reasoning the model generated data for MATH.Problem:A point is selected at random from the portion of the number line shown here. What is the probability that thepoint is closer to 4 than to 0? Express your answer as a decimal to the nearest tenth. [asy]unitsize(0.2inch);draw((0,0)–(5,0)); draw((0,-0.1)–(0,0.1)); draw((1,-0.1)–(1,0.1)); draw((2,-0.1)–(2,0.1)); draw((3,-0.1)–(3,0.1));draw((4,-0.1)–(4,0.1)); draw((5,-0.1)–(5,0.1)); label(""0"",(0,-0.1),S); label(""1"",(1,-0.1),S);label(""2"",(2,-0.1),S); label(""3"",(3,-0.1),S); label(""4"",(4,-0.1),S); label(""5"",(5,-0.1),S); [/asy].Ground-truth:We find the point which is equidistant from 0 and 4. Clearly, this occurs at 2. So, for all x > 2,xis closer to4 than 0. So, the probability is equal to the length of this region5−25=.6.Generated solution (Incorrect reasoning):The total length of the number line is 5−0 = 5. The total length of the portion of the number line from 1 to 4 is4−1 = 3. Therefore, the probability is35= 0.6 = 0 .60 = 0 .6. Final Answer: The final answer is 0.6. I hopeit is correct.Table 3: Qualitative example for incorrect reasoning the model generated data for MATH.K Data analysis: GSM-8KWe presented the coverage, diversity, and false positive rate of the synthetic data from Gemma2-27Band Gemma2-9B on the MATH dataset in the main text. In Figure 15, we present these metrics forthe GSM-8K dataset.L Solving problems across levels for MATHWe present the effect of repeated sampling from the weak but cheaper LM and stronger but expensiveLM on solving the problems across different levels for the MATH dataset in Figure 16.M Finetuning DetailsWe generated the candidate solutions in the synthetic dataset using TopK (K = 3) strategy with atemperature of 0.7. We finetuned the Gemma2-9B and Gemma2-27B models with a batch size of 32for600and6000 steps under the low and high sampling budget, respectively. During the fine-tuningprocess, we save 10 equally-spaced checkpoints and choose the one that yields the highest validationaccuracy. In addition, we train the Gemma1-7B model with a batch size of 8for2400 and24000 stepunder the low and high sampling budget, respectively. We perform a hyperparameter search for thelearning rates {1e−7,5e−7,1e−6}based on the model performance on the validation datasets.17Problem:How many ways are there to put 4 distinguishable balls into 2 indistinguishable boxes?Ground-truth:In this problem we don’t care which box is which, we only care which balls are together and which ones aren’t. Foreach ball, there are 2 choices of which box to place it in. Since this choice is independent for each of the 4balls, we multiply the number of choices together. Hence there are 24= 16ways to place 4 distinguishable ballsinto 2 distinguishable boxes. We then divide by the number of ways to arrange the boxes. There are 2! = 2 ways toarrange the 2 boxes, so there are162= 8ways to arrange 4 distinguishable balls into 2 indistinguishable boxes.Note: This method does not generalize if there are more than 2 boxes.Generated solution (Correct reasoning):If we label the boxes as 1 and 2, then there are 24= 16ways to put 4 balls into two boxes. However, since theboxes are indistinguishable, we must divide by 2!, so the answer is162= 8. Final Answer: The final answer is8. I hope it is correct.Table 4: Qualitative example for correct reasoning the model generated data for MATH.1 2 3 4 5 6 7 8 910K0.30.40.50.60.7Pass value (%)Coverage of MATH at different sampling budgetsPass@K Gemma2-27BPass@K Gemma2-9BPass@3K Gemma2-9B(a) MATH dataset.1 2 3 4 5 6 7 8 910K0.650.700.750.800.850.900.95Pass value (%)Coverage of GSM-8K at different sampling budgetsPass@K Gemma2-27BPass@K Gemma2-9BPass@3K Gemma2-9B (b) GSM-8K dataset.Figure 14: Coverage (Pass@K) trends for synthetic data acquisition from Gemma2-9B and Gemma2-27B on the (a) MATH and (b) GSM-8K datasets. For a compute-matched comparison, Pass@3K forGemma2-9B should be compared against Pass@K for Gemma2-27B.Low HighSampling budget7074788286909498coverage@cost (%)Coverage (GSM-8K)27B9B (compute-matched)(a) Coverage on GSM-8K.Low HighSampling budget03691215182124# correct solutions per questionDiversity (GSM-8K)27B9B (compute-matched) (b) Diversity on GSM-8K.Human Gemini-1.5Annotator02468Percentage (%)False Positive Rate (GSM-8K)27B9B (compute-matched)(c) False Positive Rate on GSM-8K.Figure 15: Synthetic data analysis for GSM-8K. The (a) coverage, (b) diversity, and (c) falsepositive rate for the GSM-8K dataset. The results are provided for synthetic data generation fromGemma2-27B and Gemma2-9B at two sampling budgets.181 2 3 4 5Level2030405060708090100Percent of Solved Problems (%)Solving levels from MATH-Train dataset27B9B (compute-matched)Figure 16: Coverage across different levels of the MATH dataset. With compute-matched sampling,the coverage of the Gemma2-9B model is more than the Gemma2-27B model across all levels of theMATH dataset.Problem:Solve for n:2−nn+1+2n−42−n= 1.9B-generated solution:Letx=2−nn+1. Then we havex+2n−42−n= 1⇒x+2(n−2)2−n= 1⇒x+−2(2−n)2−n= 1⇒x−2 = 1⇒x= 3.Therefore,2−nn+1= 3, so 2−n= 3n+ 3and4n=−1andn=−14. Final Answer: The final answer is −14. Ihope it is correct.Table 5: Qualitative example from Level 5 of MATH dataset that gets solved by repeatedsampling from Gemma2-9B (30 solutions) but remains unsolved by Gemma2-27B (10 solutions)at fixed sampling budget.19 |
HmB9uZTzaD | VerMCTS: Synthesizing Multi-Step Programs usinga Verifier, a Large Language Model, and Tree SearchDavid Brandfonbrener∗Simon Henniger†Sibi Raja‡Tarun Prasad‡Chloe Loughridge‡Federico Cassano§Sabrina Ruixin Hu‡Jianang Yang¶William E. Byrd∥Robert Zinkov††Nada Amin‡AbstractLarge Language Models (LLMs) can generate useful code, but often the codethey generate cannot be trusted to be sound. In this paper, we present VerMCTS,an approach to begin to resolve this issue by generating verified programs inDafny and Coq. VerMCTS uses a logical verifier in concert with an LLM toguide a modified Monte Carlo Tree Search (MCTS). This approach leverages theverifier to gain intermediate feedback inside the search algorithm by checkingpartial programs at each step to estimate an upper bound on the value function.To measure the performance of VerMCTS, we develop a new suite of multi-stepverified programming problems in Dafny and Coq. In terms of pass@ T, a newmetric which computes the pass rate given a budget of Ttokens sampled from theLLM, VerMCTS leads to more than a 30% absolute increase in average pass@5000across the suite over repeated sampling from the base language model.1 IntroductionLarge Language Models (LLMs) are increasingly used for generating code, but the code needs to beinspected and possibly re-generated if it doesn’t satisfy the user [Zhong and Wang, 2023]. We proposeto partially shift the burden of checking code, from the user to the LLM, by generating code in averification-aware programming language like Dafny or Coq, prompting for specifications and proofsof correctness in addition to code that can then be formally verified. In such a system, the user canfocus their attention on the specifications, and less on the code and proofs with the assurance that thegenerated output has passed the verifier. Our approach couples imprecise generative reasoning froman LLM with logical reasoning from a program verifier. The LLM contributes fruitful suggestionsand the verifier ensures soundness.As a motivating example, consider this prompt: In Dafny, write an ADT for arithmetic expressionscomprising constants, variables, and binary additions. Then write an evaluator taking an expressionand an environment (a function that takes a variable name and returns a number) and returning thenumber resulting from evaluation. Then write an optimizer taking an expression and returning anexpression with all additions by 0 removed. Then prove that the optimizer preserves the semantics asdefined by the evaluation function.To aid a language model to tackle this task, we introduce VerMCTS, an algorithm that combines averifier and tree search with a language model to synthesize verified programs. An overview of the∗Kempner Institute at Harvard University,‡Harvard University,†TU München,§Northeastern University,¶Million.js,∥University of Alabama at Birmingham,††University of OxfordCorrespondence to [email protected] Workshop at the 38th Conference on Neural Information Processing Systems (NeurIPS 2024).wwwroot(a) Selectwwwroot(b) Evaluate and maybe expandwwwroot(c) Backpropagate-1Figure 1: Overview of VerMCTS. The search tree is visualized with “widen” nodes marked withw. (a) A leaf node is selected as in standard MCTS. (b) The selected node is evaluated and maybeexpanded. If the selected node is a widen node, then it’s parent is selected and maybe expanded (i.e.made wider). See Figure 2 for a detailed description. (c) Once we have a value from the evaluate andmaybe expand algorithm, we backpropagate that value up the tree. This figure illustrates the specialcase where we observed a failure, so no node is added and the score is -1.algorithm is described in Figure 1 and Figure 2 and the details are presented in Section 2. VerMCTScreates a search tree with progressive widening so it is capable of handling large action spaces definedby lines of code. Within this search tree both expansion and evaluation are guided by the verifierwhich acts as a computationally cheap (relative to the LLM) upper bound on the value function in thecode synthesis MDP, as we show in Section 2.LLM VerifierPrefix ExpansionNotscorableFailPassIncomplete CompleteAdd to treescore = 1Success!Do not add,score = -1Figure 2: Evaluate and maybe expand.Given a prefix, we query the LLM forexpansions until the verifier is able toreturn a score. If that score is a failure,we do not add the node to the tree, butupdate the parent with a value of -1. Ifthe score is pass, then we check if theprogram is complete. If incomplete, weadd the expansion to the tree with a scoreof 1. If complete, we have found a suc-cessful program to return.To evaluate VerMCTS we introduce a suite of 15 challengeproblems (9 in Dafny and 6 in Coq). This suite probesessential skills needed for general verified programminglike constructing algebraic data types, defining functions,and writing inductive proofs.On this suite of problems we compare VerMCTS withseveral baselines including repeated sampling of full pro-grams from the base model, an advanced prompting tech-nique that uses access to the error messages generated bythe verifier called Reflexion [Shinn et al., 2023], and atraditional version of MCTS. We quantify performancein terms of pass@ T, the pass rate within a budget of Ttokens. VerMCTS outperforms the baselines substantially,leading to a 30% absolute average performance improve-ment over repeated sampling from the base model. Notethis repeated sampling is a strong baseline, similar to apass@ kevaluation often used as a skyline in program gen-eration. Moreover, for several problems VerMCTS solvesproblems that are not solved at all by other methods withinthe given budget.2 Method: VerMCTSOur main contribution is to define a search algorithm inspired by MCTS that leverages a verifier andLLM to search for verified programs. We call this method VerMCTS . In this section, we first presentthe Markov Decision Process that we consider as the environment for verified program synthesis andthen present VerMCTS in detail. VerMCTS is a variant of traditional MCTS that incorporates theLLM as a prior to generate candidates and the verifier as a heuristic to evaluate partial programs.2.1 MDP for verified program synthesisWe formulate our multi-step verified synthesis problem as a Markov Decision Process (MDP)M:= (S,A, T, r, H )defined by the LLM and the verifier. Here, Srefers to the state space, Arefers to the action space, T:S × A → S refers to the (deterministic) transition dynamics of theenvironment, r:S →Rrefers to the reward function, and His the finite horizon (i.e. a limit onthe number of transitions). Defining the MDP just consists of defining these four objects. The state,action, transition dynamics, and reward are defined as follows:2• Each state s∈ S is a string consisting of the initial user prompt and a partial program.•Actions a∈ A are strings that represent a unit of a program. In Dafny each line is an action.In Coq each “command” (ending with a dot ‘.’) is an action. We also limit the number oftokens in an action.• The transition dynamics are just defined by string concatentation: T(s, a) =s+a.•The reward function ris defined by the verifier for a given verified programming languageand is only defined on complete programs. This terminal reward is 1 if the complete programis accepted and -1 if it is rejected. The reward is 0 for incomplete programs.With this simple MDP in place, we can define our search algorithm for finding verified programs.2.2 VerMCTSGiven this MDP with finite actions and deterministic dynamics, it would be possible to run standardMCTS to learn a stochastic policy, but the action space is much too large for this to be practical.Instead, we build a search algorithm inspired by MCTS that can leverage the LLM as a prior forprogram synthesis and the verifier to evaluate partial programs. Both components are key for asuccessful search in this large space.Algorithm 1 Evaluate and (maybe) expand1:Input: string s, depth limit LLLM: string →completionVerifier : string → {− 1,0,+1}2:Output: value v(s), (optional) childnode3:score ←04:depth ←05:a←""6:while score = 0 and depth < L do7:a←a+LLM(s+a)8: score ←Verifier (s+a)9: depth ←depth + 110:end while11:ifscore =−1ordepth =Lthen12: return −1,None13:else14: return +1, s+a15:end ifStandard MCTS consists of four steps: select, ex-pand, evaluate, and backpropagate. Our algorithmleaves the select and backpropagate steps essentiallyunchanged. We modify and combine the expand andevaluate steps to leverage the power of the LLM andthe verifier in tandem. Our full algorithm is illus-trated in Figure 1. In this section we first discussprogressive widening and then go through each stepof VerMCTS in turn.Progressive widening. To allow for potentially in-finite width while still efficiently conducting deepsearches, we adapt an idea from classical workon MCTS to progressivly widen nodes in the tree[Chaslot et al., 2008, Couëtoux et al., 2011]. In thatwork, the number of children available at a givennode scales explicitly with the number of visits. Inour setting since adding a child node requires an ex-pensive call to the LLM, we instead opt to add a“widen” child to each node that is assigned 0 value and can be selected via the selection proceduredescribed below. This allows the scoring mechanism to prioritize when to expand a node by essen-tially setting a prior that unexplored branches have 0 value. When the widen node wwith parent sisselected, instead of adding a child to w, we add a child to s(i.e. add a sibling to w). In this way, thetree can grow wider over the search process.Selection: priors and UCT. We use a standard MCTS selection step, but we set a prior for theUCT (upper confidence bound for trees) bonus as in PUCT [Rosin, 2011, Silver et al., 2016]. Wechoose to let the prior p= 1.0for standard nodes and let p=pwiden <1.0for widen nodes be ahyperparameter that we tune. This basic heuristic gives the model a preference to select the standardnodes which encourages deeper search trees while still allowing for potentially infinite width ifneeded. With this choice, the score of a node sis:score (s) =ps·cUCTslogNparentNs+PNi=1viNs(1)where psis the prior at this node, cUCTis a global exploration coefficient, Nparent is the number ofvisits at the parent node, Nsis the number of visits at this node, and viis the estimated value at theith visit to s. Note that this selection procedure has two hyperparameters: cUCT andpwiden thatencourage selecting more rarely visited nodes and widen nodes respectively.3(a) Dafny (b) CoqFigure 3: Average results for pass@T vs. T (the number of tokens) for various baseline methods onour suite of programming problems in Dafny and Coq.Combining expansion and evaluation. Traditionally, MCTS will first expand a node into childrenand evaluate it either by simulated rollouts [Chaslot et al., 2008, Zhang et al., 2023a] or a learnedvalue function [Silver et al., 2016]. Neither of these methods is a good fit for our problem becausegenerating rollouts requires many expensive calls to the LLM and learning a value requires largeamounts of training data. Moreover, both methods give noisy signal, but in our setting we have accessto the ground truth verifier.Beyond being noiseless, the verifier has one more important property: if a partial program fails theverifier, no subsequent completion can yield success. So, we want to make sure that we never add tothe tree any expansion that is a guaranteed failure. Doing this require explicitly linking expansion toevaluation where we evaluate the node and maybe expand it, as formalized in Algorithm 1.In addition to only adding nodes with potential to the tree, we want to leverage the verifier to cheaplyevaluate partial programs without extra calls to the LLM. Explicitly, from a node containing the stringswe continue to extend awith the LLM until the verifier is able to return a valid score. At this point,we can return the estimated value v(s)ofsas follows:v(s) =Verifier (s+a) =+1 verified, but may be incomplete.−1verified as a failure.(2)Ifv(s) = +1 , we also add s+aas a child in the tree, while if v(s) =−1, we do not add s+asinceit is a verified failure. Appendix H gives explicit examples of scoring partial programs.Backpropagation. The last step of an iteration of MCTS is to backpropagate the observed valuefrom leaf back up to root. We do this in the standard way so that signal is propagated up the tree. Thealgorithm terminates when it finds a complete solution that verifies or when it exceeds some tokenlimit or time limit.Appendix A presents theory that VerMCTS optimizes an upper bound on the value function.3 ResultsA full description of the problem suite can be found in Appendix B and experimental methods inAppendix C respectively. Here we present the main results.We run VerMCTS and our three baselines across our full suite of problems. The aggregate resultsare illustrated in Figure 3. In both programming languages VerMCTS convincingly outperforms thebaselines. Generally, MCTS rollout is second best, followed by whole sampling and then Reflexion.As previewed in the introduction, we see about a 30% absolute improvement in pass@5000 forVerMCTS relative to whole sampling. Note that Coq is substantially more challenging since theverifier is less automated.Examining the performance of the baselines more closely, we see that MCTS rollout does outperformwhole sampling, even though the verifier is not used to guide the search at intermediate steps. But,using the verifier in VerMCTS provides even better performance. Looking at Reflexion, we see thatperformance is poor on these tasks. This could be due to many reasons including: (1) the base modelis not good at responding to errors in low resource languages like Dafny and Coq, (2) the base modeldoes not do well integrating the long contexts created by the Reflexion prompts, and (3) Reflexiondoes not make it as easy to backtrack.Due to space constraints, extended results are in Appendix D, extended related work in Appendix E,and further discussion in Appendix F.4ReferencesJacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan,Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. Program synthesis with largelanguage models. arXiv preprint arXiv:2108.07732 , 2021.Matthew Bowers, Theo X. Olausson, Lionel Wong, Gabriel Grand, Joshua B. Tenenbaum, Kevin Ellis,and Armando Solar-Lezama. Top-down synthesis for library learning. Proc. ACM Program. Lang. ,7(POPL), jan 2023. doi: 10.1145/3571234. URL https://doi.org/10.1145/3571234 .Federico Cassano, John Gouwar, Daniel Nguyen, Sydney Nguyen, Luna Phipps-Costin, DonaldPinckney, Ming-Ho Yee, Yangtian Zi, Carolyn Jane Anderson, Molly Q. Feldman, Arjun Guha,Michael Greenberg, and Abhinav Jangda. MultiPL-E: A Scalable and Polyglot Approach toBenchmarking Neural Code Generation. IEEE Transactions on Software Engineering (TSE) , 49(7):3675–3691, 2023a.Federico Cassano, Ming-Ho Yee, Noah Shinn, Arjun Guha, and Steven Holtzen. Type predictionwith program decomposition and fill-in-the-type training, 2023b.Guillaume Chaslot, Mark H. M. Winands, H Jaap Van Den Herik, Jos Uiterwijk, and Bruno Bouzy.Progressive strategies for monte-carlo tree search. New Mathematics and Natural Computation ,04:343–357, 2008. URL https://api.semanticscholar.org/CorpusID:1719063 .Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen.Codet: Code generation with generated tests, 2022.Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, JaredKaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating largelanguage models trained on code. arXiv preprint arXiv:2107.03374 , 2021.Adrien Couëtoux, Jean-Baptiste Hoock, Nataliya Sokolovska, Olivier Teytaud, and Nicolas Bonnard.Continuous upper confidence trees. In Learning and Intelligent Optimization , 2011. URL https://api.semanticscholar.org/CorpusID:13463524 .Aditya Desai, Sumit Gulwani, Vineet Hingorani, Nidhi Jain, Amey Karkare, Mark Marron, SaileshR, and Subhajit Roy. Program synthesis using natural language. In Proceedings of the 38thInternational Conference on Software Engineering , ICSE ’16, page 345–356, New York, NY ,USA, 2016. Association for Computing Machinery. ISBN 9781450339001. doi: 10.1145/2884781.2884786. URL https://doi.org/10.1145/2884781.2884786 .Emily First, Markus Rabe, Talia Ringer, and Yuriy Brun. Baldur: Whole-proof generation andrepair with large language models. In Proceedings of the 31st ACM Joint European SoftwareEngineering Conference and Symposium on the Foundations of Software Engineering , ESEC/FSE2023, page 1229–1241, New York, NY , USA, 2023. Association for Computing Machinery.ISBN 9798400703270. doi: 10.1145/3611643.3616243. URL https://doi.org/10.1145/3611643.3616243 .Michael Glass, Gaetano Rossiello, Md Faisal Mahbub Chowdhury, Ankita Naik, Pengshan Cai, andAlfio Gliozzo. Re2G: Retrieve, rerank, generate. In Marine Carpuat, Marie-Catherine de Marneffe,and Ivan Vladimir Meza Ruiz, editors, Proceedings of the 2022 Conference of the North AmericanChapter of the Association for Computational Linguistics: Human Language Technologies , pages2701–2715, Seattle, United States, July 2022. Association for Computational Linguistics. doi:10.18653/v1/2022.naacl-main.194. URL https://aclanthology.org/2022.naacl-main.194.Gabriel Grand, Lionel Wong, Matthew Bowers, Theo X. Olausson, Muxin Liu, Joshua B. Tenenbaum,and Jacob Andreas. Lilo: Learning interpretable libraries by compressing and documenting code,2023.Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward Ayers, and Stanislas Polu. Proof artifact co-training for theorem proving with language models. In International Conference on LearningRepresentations , 2022. URL https://openreview.net/forum?id=rpxJc9j04U .5Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural textdegeneration. arXiv preprint arXiv:1904.09751 , 2019.ImparaAI. Monte carlo tree search. https://github.com/ImparaAI/monte-carlo-tree-search , 2024.Albert Qiaochu Jiang, Sean Welleck, Jin Peng Zhou, Timothee Lacroix, Jiacheng Liu, Wenda Li,Mateja Jamnik, Guillaume Lample, and Yuhuai Wu. Draft, sketch, and prove: Guiding formaltheorem provers with informal proofs. In The Eleventh International Conference on LearningRepresentations , 2023. URL https://openreview.net/forum?id=SMa9EAovKMC .Guillaume Lample, Timothee Lacroix, Marie anne Lachaux, Aurelien Rodriguez, Amaury Hayat,Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. Hypertree proof search for neural theoremproving. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors,Advances in Neural Information Processing Systems , 2022. URL https://openreview.net/forum?id=J4pX8Q8cxHH .Ansong Ni, Srini Iyer, Dragomir Radev, Ves Stoyanov, Wen-tau Yih, Sida I Wang, and Xi VictoriaLin. Lever: Learning to verify language-to-code generation with execution. In Proceedings of the40th International Conference on Machine Learning (ICML’23) , 2023.Phind. Beating gpt-4 on humaneval with a fine-tuned codellama-34b. https://www.phind.com/blog/code-llama-beats-gpt4 , 2023.Christopher D. Rosin. Multi-armed bandits with episode context. Annals of Mathematics and Artifi-cial Intelligence , 61:203–230, 2011. URL https://api.semanticscholar.org/CorpusID:207081359 .Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, YossiAdi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code.arXiv preprint arXiv:2308.12950 , 2023.Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik R Narasimhan, and Shunyu Yao. Reflexion:Language agents with verbal reinforcement learning. In Thirty-seventh Conference on NeuralInformation Processing Systems , 2023.Atsushi Shirafuji, Yusuke Oda, Jun Suzuki, Makoto Morishita, and Yutaka Watanobe. Refactoringprograms using large language models with few-shot examples, 2023.David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, L. Sifre, George van den Driessche, JulianSchrittwieser, Ioannis Antonoglou, Vedavyas Panneershelvam, Marc Lanctot, Sander Dieleman,Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, MadeleineLeach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of gowith deep neural networks and tree search. Nature , 529:484–489, 2016. URL https://api.semanticscholar.org/CorpusID:515925 .Amitayush Thakur, Yeming Wen, and Swarat Chaudhuri. A language-agent approach to formaltheorem-proving, 2023.Edwin Bidwell Wilson. Probable inference, the law of succession, and statistical inference.Journal of the American Statistical Association , 22:209–212, 1927. URL https://api.semanticscholar.org/CorpusID:121572396 .Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi,Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick vonPlaten, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, MariamaDrame, Quentin Lhoest, and Alexander M. Rush. Transformers: State-of-the-art natural languageprocessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural LanguageProcessing: System Demonstrations , pages 38–45, Online, October 2020. Association for Compu-tational Linguistics. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6 .Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, RyanPrenger, and Anima Anandkumar. LeanDojo: Theorem proving with retrieval-augmented languagemodels. In Neural Information Processing Systems (NeurIPS) , 2023.6Xi Ye, Qiaochu Chen, Isil Dillig, and Greg Durrett. Optimal neural program synthesis from multi-modal specifications. In Marie-Francine Moens, Xuanjing Huang, Lucia Specia, and Scott Wen-tauYih, editors, Findings of the Association for Computational Linguistics: EMNLP 2021 , pages1691–1704, Punta Cana, Dominican Republic, November 2021. Association for ComputationalLinguistics. doi: 10.18653/v1/2021.findings-emnlp.146. URL https://aclanthology.org/2021.findings-emnlp.146 .Shun Zhang, Zhenfang Chen, Yikang Shen, Mingyu Ding, Joshua B. Tenenbaum, and ChuangGan. Planning with large language models for code generation. In The Eleventh InternationalConference on Learning Representations , 2023a. URL https://openreview.net/forum?id=Lr8cOOtYbfL .Tianyi Zhang, Tao Yu, Tatsunori B. Hashimoto, Mike Lewis, Wen-tau Yih, Daniel Fried, and Sida I.Wang. Coder reviewer reranking for code generation. In Proceedings of the 40th InternationalConference on Machine Learning , ICML’23. JMLR.org, 2023b.Li Zhong and Zilong Wang. A study on robustness and reliability of large language model codegeneration. arXiv preprint arXiv:2308.10335 , 2023.Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, and Yu-Xiong Wang. Languageagent tree search unifies reasoning acting and planning in language models, 2023.7A Connecting the partial program score to the MDPImportantly, while the verifier gives us ground truth information about whether the program verifiesso far, it does not give an unbiased estimate of the true value of a state in the MDP defined above.Instead, we can view our use of the verifier as a heuristic that quickly returns an upper bound onthe value function of a potential child. Recall that the value function V∗of the optimal policyin a deterministic MDP with state-based rewards like ours is defined by the Bellman equationV∗(s) = max ar(s) +V∗(s+a). With this definition, we can formally describe the optimismproperty of our estimates values as follows:Lemma A.1. The value v(s)returned by Algorithm 1 satisfies the following:v(s)≥Ea∼LLM+Verifier |s[V∗(s+a)] (3)This is fairly straightforward to prove. If v(s) =−1, then we know that the sampled completion ais a failure no matter what happens afterwards, so v(s) =V∗(s+a) =−1. On the other hand, ifv(s) = 1 then we are assigning the maximal possible value in this MDP, so v(s)≥V∗(s+a).In this way, our value estimate is explicitly an optimistic estimate of the value. This is even beyondthe UCT score computed by MCTS. We hypothesize that this encourages deeper exploration of thesearch trees which can be beneficial in the multi-step problems we consider.B A problem suite for multi-step verified programmingB.1 Defining the problemsWe are not aware of any existing collections of problems that are designed for multi-step programsynthesis and checked using verifiers. That is why we have created our own problem suite ofnine problems. The problems represent meaningful scenarios in verified programming. Theyrequire creating Algebraic Data Types (ADTs), defining functions on them using pattern matching,and proving properties using induction. Compared to prior benchmarks, the problems requiremore intricate multi-step reasoning and test capabilities that are specifically important for verifiedprogramming. The problems are defined as follows:Factorial asks to define the factorial function and to prove that it is always strictly positive.Opt0 asks to define an ADT for arithmetic expressions, an optimizer, and to prove that the optimizerpreserves semantics.Opt0 Opt asks to define an ADT for arithmetic expressions, an optimizer, an optimal predicate, andto prove that the optimizer is optimal.BST asks to define a tree, the binary search tree (BST) property, insertion, and to prove two propertiesof insertions (membership and BST preservation).Repeat asks to define a function returning a list with a given element repeated a given number oftimes, and to prove two properties related to length and membership.Lights asks to define an ADT for traffic lights, then write a function ensuring that red and greenlights are always separated by yellow lights, and then to prove its correctness.Food asks to define an ADT that represents different foods with toppings, and a predicate about theamount of toppings, and to prove a property of this predicate.Days asks to define an ADT that represents days of the week, two functions that iterate throughbusiness days, and then to prove a property of weekdays.Reverse asks to define a function that reverses a list, and prove two properties of list reversals(permutation and involution).All problems are implemented in Dafny, and all but the last three are implemented in Coq, giving atotal of 15 problems. Since the Coq verifier has substantially less automation than Dafny which leadsto longer proofs and since the model is not always very consistent at Coq syntax, just for Coq weprovide some syntax hints in the prompt. The full prompts can be found in Appendix G.8B.2 Criteria for SuccessIn order to be considered successful, a program must first pass the verifier and some syntactic checks(e.g. the presence of a proof marker and a problem-specific minimum number of lines of code). Theseinitial checks are meant to ensure the model has made a successful attempt to prove a lemma.A second check ensures that the model has proven the correct lemma: In order to check whether amodel has proven a property, we inject a second lemma with it, and prove it by referring to the lemmawe asked the model to write. If the model has proven this lemma as directed, this new code includingcheck lemma will verify successfully. If the model has proven an incorrect lemma, a verifier errorwill be produced. Note that the check lemma is only injected into the verifier input. The model doesnot get to see it, so this check does not provide additional hints to the model.A full description of each problem including the prompts and lemmas used for checking success canbe found in Appendix G.C Experimental setupC.1 Pass@ Tevaluation metricWe report all of our results in terms of pass@ T, which is, to our knowledge, a novel metric inspiredby pass@ kthat is often used in code generation benchmarks [Chen et al., 2021]. While pass@ kcomputes the probability of generating a success when we sample kprograms, pass@ Tcomputes theprobability of success if we allow the model to sample Ttokens. Pass@ Thas several benefits:1.Pass@ Tfairly compares methods. One run of MCTS can be much more expensive thansampling one program from a model, so using pass@ kis not fair. In contrast pass@ Treallyestimates the dominant cost of generation, namely how many tokens need to be generated toyield success.2.Pass@ Tcontrols for hardware and implementation variability. Compared to using wall-clock time, using pass@ Tdoes not depend on the underlying hardware and system-leveloptimizations.To estimate pass@ T, we generate nruns per problem of up to Tmax tokens per run (where if the runterminates successfully before Tmax we stop the run). Then for each T≤Tmax, we have nbinarytrials indicating whether that run terminated successfully in ≤Ttokens. In the results, we report themean of these nbinary variables and also 95% Wilson intervals [Wilson, 1927].C.2 Base modelVerMCTS is compatible with any base model and only requires sampling from the model (no trainingis needed). We opt to use an open-weights model as the base language model and then comparedifferent sampling procedures on top of this base model. Specifically, we use Phind-CodeLLama-34B-v2 [Phind, 2023, Roziere et al., 2023]. This model has been trained explicitly for code generation, butthe verified programming languages we use are relatively “low resource” languages, so the modelswill perform worse than at high-resource languages [Cassano et al., 2023a].C.3 BaselinesWe consider a variety of baseline methods to illustrate the benefits of leveraging the verifier inside ofVerMCTS.•Whole sampling. The most naive baseline just samples entire programs from the basemodel. To compute pass@ Twe just continue generating new samples until success or untilthe token limit is reached.•Rollout MCTS. Related work on MCTS uses rollouts to evaluate a node [Chaslot et al.,2008, Zhang et al., 2023a]. We ablate the importance of using the verifier by replacing the“evaluate and maybe expand” step with separate expand and evaluate steps. We expand bysampling a fixed number of actions kfrom the LLM and evaluate by rolling out with theLLM to a terminal node before querying the reward function.9Figure 4: Hyperparameter ablations for VerMCTS on opt0 in Dafny. We find that performance isgenerally fairly stable to hyperparameter choices.•Reflexion. Finally, to show how VerMCTS is efficient at incorporating information from theverifier we also compare to a Reflexion [Shinn et al., 2023] baseline where the LLM gets toview the errors produced by the verifier on failed attempts.C.4 HyperparamtersWhen sampling from the LLM, we always use nucleus sampling [Holtzman et al., 2019] withp= 0.95following Roziere et al. [2023]. For every method, we sweep over temperature onone representative problem and use that temperature for the rest. Our VerMCTS algorithm alsointroduces two hyperparameters that govern exploration: cUCT andpwiden which we found fairlystraightforward to set. We tune hyperparameters on one particular problem (opt0) in Dafny, but onlychecking for verification and not additionally checking for correctness. Each method has slightlydifferent hyperparameters, but we generally tune temperature of the LLM, the MCTS explorationcoefficient, and the MCTS prior for widen nodes. Hyperaparameters are then fixed for all otherexperiments. Each algorithm’s parameters are described below.VerMCTS. We sweep over temperature in [0.6, 0.8, 1.0, 1.0, 1.4] and find 1.0 to be best, explorationcoefficient in [1, 3, 10, 30] and find 3 to be best, and the “widen policy value”, i.e. the prior value ofthe widen nodes in [0.1, 0.2, 0.5] and find 0.1 to be best. See Figure 4.MCTS rollout. We also sweep over temperature in [0.6, 0.8, 1.0, 1.0, 1.4] and find 0.8 to be bestand exploration coefficient in [1, 3, 10, 30] and find 1 to be best. Note, instead of widen nodes, eachnode has a fixed number of children (3 in our experiments).Reflexion. We sweep over temperature in [0.2, 0.4, 0.6, 0.8, 1.0] and find 0.4 to be best.Whole sampling. We sweep over temperature in [0.2, 0.4, 0.6, 0.8, 1.0] and find 0.6 to be best.We use the Transformers library Wolf et al. [2020] to query the LLMs. For the MCTS, we adapt ageneric open-source library ImparaAI [2024].D Extended resultsD.1 Per-problem resultsIn Figures Figure 5 and Figure 6, we present the per-problem results on our problem suite. Thereis substantial variation across problems, but across all problems VerMCTS is the best approach orwithin the margin of error, often exceeding the baselines by a large margin and sometimes solvingproblems that no baseline solves at all. That said, some problems are clearly challenging: on oneproblem in Dafny and three in Coq, none of the algorithms find a solution within 5000 tokens.D.2 Examining the VerMCTS search treesIn Figure 7 we provide an experiment to probe for a mechanistic understanding of how VerMCTSworks in Dafny. We consider the number of nodes (excluding widen nodes), the depth and the widthof the search trees as the number of tokens generated increases. Note that since we do not add10Figure 5: Pass@T results for all algorithms on our suite of problems in Dafny.Figure 6: Pass@T results for all algorithms on our suite of problems in Coq.failed expansions to the tree, sometimes more tokens are generated without adding nodes to the tree.Generally, we observe that the more challenging problems (with lower pass rates) tend to lead tolarger search trees, indicating that the algorithm is successfull. We also notice that while the numberof nodes grows fairly linearly across time for most problems, the depth grows earlier and then flattensout. This suggests that the VerMCTS search is closer to “depth first“, first pushing an expansionbranch to a terminal node before going back and widening the tree.E Related WorkNeural Program Synthesis with Large Language Models Austin et al. [2021] and Chen et al.[2021] demonstrated that Large Language Models (LLMs) can generate correct Python programsfrom natural language descriptions. These studies introduced the MBPP and HumanEval datasets,respectively, which are widely used for evaluating LLMs in program synthesis tasks. Cassanoet al. [2023a] extended this concept by showing that LLMs can also generate programs in over 20languages other than Python. This was achieved by translating the MBPP and HumanEval datasetsusing their system, MultiPL-E. Their findings indicate that generating accurate programs in lowerresource languages is more challenging compared to higher resource languages, such as Python.In our experiments, for proof synthesis, we have another dimension of challenge: some languages(Coq) are inherently more challenging than others (Dafny), depending on how much automation theverifiers provide. However, none of these works explored the generation of programs that are correctby construction.Symbolic Algorithms for Neural Program Synthesis Grand et al. [2023] integrated a classicsymbolic top-down synthesis algorithm for library learning Bowers et al. [2023] with LLMs. Cassanoet al. [2023b] employed program decomposition and a bottom-up tree-search algorithm to infermissing TypeScript types. Zhou et al. [2023] used Monte Carlo Tree Search (MCTS) to createsingle-function programs in Python. Zhang et al. [2023a] applied a tree-based planning algorithmfor decoding LLM token sequences, which were then evaluated for correctness using a test suite.Lample et al. [2022] adapted MCTS for neural theorem proving by employing a tree-based search11Figure 7: Average number of nodes, depth, and width of the VerMCTS search tree as the number oftokens increases across the full suite of Dafny problems. Recall that failed expansions are not addedto the tree. Harder problems tend to lead to larger trees.algorithm to generate proof trees in Lean. Different from these closely related works, we (1) focuson verified program synthesis in Dafny and Coq, and (2) leverage the verifier inside the loop of thesearch algorithm to efficiently guide the search.Theorem Proving with Large Language Models Han et al. [2022] demonstrated that LLMs canbe trained to generate proofs in Lean through self-supervision. Yang et al. [2023] presented thatRetrieval-Augmented Generation (RAG) Glass et al. [2022] models significantly enhance LLMs’performance in theorem proving tasks. First et al. [2023] employed a methodology akin to that of Hanet al. [2022] to generate and repair complete proofs in Isabelle/HOL. Jiang et al. [2023] introducedmethods to first map natural language proofs to formal proof sketches in Isabelle and then fill in thegaps using an automated prover. These studies predominantly used LLMs to iteratively generateindividual proof steps, which were then verified using a theorem prover. Thakur et al. [2023] proposea language-agent approach to formal theorem-proving, alternating selection and execution steps. Incontrast, we focus on verified program synthesis and developing a method that effectively integratesa verifier and LLM without any additional training.Scoring Partial Programs Desai et al. [2016], one of the first to effectively tackle the problem ofprogram synthesis using natural language, used a scoring function to rank candidate partial programs.Cassano et al. [2023b] similarly used a scoring function to rank candidate partial programs based ontheir types in order to aid the tree search process, and provided multiple solutions to the user rankedby their score. Ye et al. [2021] used abstract interpretation to rule out partial programs that do notsatisfy some constraints, typically on input/output examples. Chen et al. [2022] used LLM-generatedunit tests suites and their pass rates to score candidate programs, and provided the user with the top-scoring program. Ni et al. [2023] further utilized execution information to rank candidate programs.Shirafuji et al. [2023] used a scoring function to rank example refactoring programs generated by anLLM before applying them to the given code. Zhang et al. [2023b] studies using scoring functionsto rank candidate partial programs in-depth, and proposes the use of a reviewer model to scorecandidate programs based on how closely they match the given instruction. Most of these works havescored partial programs specified as grammatical programs with holes as opposed to our left-to-rightgeneration of partial programs, and have not considered verified programming languages.F DiscussionWe have demonstrated that relatively weak language models can reliably produce verified codeby guiding a search process that verifies partial programs at each step. Our technique shines onmulti-step problems, made of dependent sub-problems. Our technique can be adapted to a settingwhere the interfaces and specifications are given, and the code is verified at each step by additionalcode containing assertions or proofs.Limitations. A key aspect of our approach resides in the scoring of partial programs. However, thescoring is limited by coarse granularity and lack of lookahead in the scoring function. The granularity12of the verification step is a whole unit, e.g. a function in Dafny and a command in Coq. For Dafny, thecoarse granularity means we have to wait multiple lines to get feedback. For Coq, the fine granularitydoesn’t help much with bigger proofs, which require planning.Future work. What we find most interesting and promising about our approach is that so much ispossible by a “blind” search that only uses scalar reward signal. In future work, it would be fruitfulto find ways of allowing the search to rely on richer feedback while maintaining the efficiency ofleveraging the verifier to avoid doing costly rollouts or reflection steps. Moreover, it will be interestingto see if the basic idea of VerMCTS, using a cheap and provable upper bound on the value function toguide search, can be applied beyond the verified programming setting.13G PromptsG.1 Repeat PromptCoq. In Coq: (1) Write a function ‘repeat‘ that takes an integer ‘x‘ and a natural number ‘n‘ as inputs,and returns a list of length ‘n‘ in which every element is ‘x‘. (2) Then write a lemma ‘repeat_correct‘that checks that for any ‘x‘ and ‘n‘, ‘repeat‘ returns a list of length ‘n‘ and that every element of thelist is ‘x‘.Dafny. In Dafny: (1) Write a function ‘repeat‘ that takes an integer ‘x‘ and a natural number ‘n‘as inputs, and returns a list of length ‘n‘ in which every element is ‘x‘. (2) Then write a lemma‘repeat_correct‘ that checks that for any ‘x‘ and ‘n‘, ‘repeat‘ returns a list of length ‘n‘ and that everyelement of the list is ‘x‘.Hints for Coq.### Hint: Start with ‘Require Import List. Import ListNotations.‘Check lemma for Coq.Lemma CHECK_repeat_correct :∀( x:i n t) ( n :nat) ,length ( repeat x n ) = n / ∀i , 0≤i −> i < n −> nth ( repeat x n ) i = x .Proof .i n t r o s .eapply repeat_correct ; eauto .Qed.Check lemma for Dafny.lemma CHECK_repeat_correct ( x :int, n:nat)ensures | repeat ( x , n ) | =nensures ∀i•0≤i < n =⇒repeat ( x , n ) [ i ] =x{repeat_correct ( x , n ) ;}G.2 Opt0 Opt PromptCoq. In Coq, write an ADT ‘Expr‘ for arithmetic expressions comprising constants, variables andbinary addition. Then write a predicate ‘optimal‘ that holds on an expression if it has no additionsby 0. Then write an optimizer ‘optimize‘ that removes all additions by 0. Then write a lemma‘OptimizerOptimal‘ that ensures ‘optimal(optimize(e))‘ for all expressions ‘e‘.Dafny. In Dafny, write an ADT ‘Expr‘ for arithmetic expressions comprising constants, variables andbinary addition. Then write a predicate ‘optimal‘ that holds on an expression if it has no additionsby 0. Then write an optimizer ‘optimize‘ that removes all additions by 0. Then write a lemma‘OptimizerOptimal‘ that ensures ‘optimal(optimize(e))‘ for all expressions ‘e‘.Hints for Coq.### Hint: In the addition case, the ‘optimize‘ function should recursively optimize the sub-expressionsand then match on the optimized sub-expressions.### Hint: You can import the ‘string‘ datatype with the line ‘Require Import Coq.Strings.String.‘### Hint: Use Fixpoint instead of Definition for recursive functions.### Hint: If you do induction on ‘e‘ with sub-expressions ‘e1‘ and ‘e2‘, the two inductive hypothesesare called ‘IHe1‘ and ‘IHe2‘.Check lemma for Coq.lemma CHECK_OptimizerOptimal ( e :Expr ) ensures optimal ( optimize ( e ) ) { OptimizerOptimal ( e ) ; }Check lemma for Dafny.lemma CHECK_OptimizerOptimal ( e :Expr ) ensures optimal ( optimize ( e ) ) { OptimizerOptimal ( e ) ; }14G.3 Lights PromptCoq. In Coq: (1) Write a datatype ‘light‘ for traffic lights with cases ‘Red‘, ‘Yellow‘, ‘Green‘. (2)Write a function ‘activation‘ which takes two lights, source and target, and returns a list of lights,the first element being the source and the last element being the target. If the source and target arenot yellow and are distinct, then the returned list has a middle element of yellow. (3) Write a helper‘adjacent_ok‘ that takes two lights, and checks that they are not one red and the other green. (4)Write a helper ‘all_adjacent_ok‘ that takes a list of lights, and checks that all adjacent elements are‘adjacent_ok‘. (5) Write a lemma ‘check_activation‘ to prove that forall source and target lights, areturned list never has adjacent elements that are distinct and red or green. The proposition shouldbe ‘all_adjacent_ok (activation source target)‘.Dafny. In Dafny: (1) Write a datatype ‘light‘ for traffic lights with cases ‘Red‘, ‘Yellow‘, ‘Green‘. (2)Write a function ‘activation‘ which takes two lights, source and target, and returns a list of lights,the first element being the source and the last element being the target. If the source and target arenot yellow and are distinct, then the returned list has a middle element of yellow. (3) Write a helper‘adjacent_ok‘ that takes two lights, and checks that they are not one red and the other green. (4)Write a helper ‘all_adjacent_ok‘ that takes a list of lights, and checks that all adjacent elementsare ‘adjacent_ok‘. (5) Write a lemma ‘check_activation(source: light, target: light)‘ to prove that areturned list never has adjacent elements that are distinct and red or green. The ‘ensures‘ clauseshould be ‘all_adjacent_ok(activation(source, target))‘.Hints for Coq.### Hint: Start with ‘Require Import List. Import ListNotations.‘Check lemma for Coq.Lemma CHECK__check_activation :∀( source :l i g h t ) ( t a r g e t :l i g h t ) ,all_adjacent_ok ( a c t i v a t i o n ( source t a r g e t ) .Proof .i n t r o s .eapply check_activation ; eauto .Qed.Check lemma for Dafny.lemma CHECK__check_activation ( source :l i g h t , t a r g e t :l i g h t )ensures all_adjacent_ok ( a c t i v a t i o n ( source , t a r g e t ) ){check_activation ( source , t a r g e t ) ;}G.4 BST PromptCoq. In Coq, (1) write an ADT for a tree of natural numbers. Call it ‘Tree‘. Then (2) write apredicate ‘IsBST‘ that checks whether a given tree is a binary search tree (BST). Then (3) write afunction ‘insert‘ that inserts an element into a binary search tree while preserving the BST property.Then (4) write a predicate ‘Contains‘ that checks whether a given tree contains a given element. Then(5) write a lemma ‘InsertContains‘ about the insert function that ensures that the tree resulting frominserting an element contains that element (without requiring nor ensuring the BST property). Then(6) write another lemma ‘InsertPreservesBST‘ about the insert function that checks the BST propertycontinues to hold after insertion. This lemma should take bounds on the BST, and require that theelement to be inserted is within those bounds.Dafny. In Dafny, (1) write an ADT for a tree of natural numbers. Call it ‘Tree‘. Then (2) write apredicate ‘IsBST‘ that checks whether a given tree is a binary search tree (BST). Then (3) write afunction ‘insert‘ that inserts an element into a binary search tree while preserving the BST property.Then (4) write a predicate ‘Contains‘ that checks whether a given tree contains a given element. Then(5) write a lemma ‘InsertContains‘ about the insert function that ensures that the tree resulting frominserting an element contains that element (without requiring nor ensuring the BST property). Then(6) write another lemma ‘InsertPreservesBST‘ about the insert function that checks the BST propertycontinues to hold after insertion. This lemma should take bounds on the BST, and require that theelement to be inserted is within those bounds.15Hints for Coq.### Hint: Start with ‘Require Import List. Import ListNotations.‘### Hint: Use Fixpoint instead of Definition for recursive functions.### Hint: Use ‘l‘ and ‘r‘ for variable names instead of ‘left‘ and ‘right‘ to avoid name clashes.Check lemma for Coq./ / ( 5 ) Lemma about the i n s e r t f u n c t i o n t h a t ensures the t r e e r e s u l t i n g/ / from i n s e r t i n g an element contains t h a t elementLemma CHECK_InsertContains :∀( t:Tree ) ( x :nat) ,Contains ( i n s e r t t x ) x .Proof .i n t r o s .eapply InsertContains ; eauto .Qed ./ / ( 6 ) Lemma about the i n s e r t f u n c t i o n t h a t checks the BST property/ / continues to hold a f t e r i n s e r t i o nlemma CHECK_InsertPreservesBST :∀( t:Tree ) ( x :nat) ( min :nat) (max :nat) ,( IsBST t min max) −> min ≤x≤max −>IsBST ( i n s e r t t x ) min max .Proof .i n t r o s .eapply InsertPreservesBST ; eauto .Qed .Check lemma for Dafny./ / ( 5 ) Lemma about the i n s e r t f u n c t i o n t h a t ensures the t r e e r e s u l t i n g from/ / i n s e r t i n g an element contains t h a t elementlemma CHECK_InsertContains ( t :Tree , x :nat)ensures Contains ( i n s e r t ( t , x ) , x ){InsertContains ( t , x ) ;}/ / ( 6 ) Lemma about the i n s e r t f u n c t i o n t h a t checks the BST property continues/ / to hold a f t e r i n s e r t i o nlemma CHECK_InsertPreservesBST ( t :Tree , x :nat, min :nat, max :nat)requires IsBST ( t , min , max) ∧min≤x≤maxensures IsBST ( i n s e r t ( t , x ) , min , max){InsertPreservesBST ( t , x , min , max ) ;}G.5 Opt0 PromptCoq. In Coq, write an ADT for arithmetic expressions (called ‘Expr‘) comprising constants, variablesand binary additions. Then write an evaluator (called ‘Eval‘) taking an expression and an environment(a function that takes a variable name and returns a number) and returning the number resultingfrom evaluation. Then write an optimizer (called ‘Optimize‘) taking an expression and returning anexpression with all additions by 0 removed. Then prove that the optimizer preserves the semantics asdefined by the evaluation function. Do so by proving the lemma ‘OptimizePreservesSemantics: forall(e: Expr) (env: string -> nat), Eval(Optimize(e), env) = Eval(e, env)‘.Dafny. In Dafny, write an ADT for arithmetic expressions (called ‘Expr‘) comprising constants,variables and binary additions. Then write an evaluator (called ‘Eval‘) taking an expression andan environment (a function that takes a variable name and returns a number) and returning thenumber resulting from evaluation. Then write an optimizer (called ‘Optimize‘) taking an expres-sion and returning an expression with all additions by 0 removed. Then prove that the optimizerpreserves the semantics as defined by the evaluation function. Do so by proving the lemma ‘Op-timizePreservesSemantics(e: Expr, env: string -> int) ensures Eval(Optimize(e), env) == Eval(e,env)‘.Hints for Coq.### Hint: In the optimizer, recursively optimize the sub-expressions.### Hint: You can import the ‘string‘ datatype with the line ‘Require Import Coq.Strings.String.‘.16### Hint: Use Fixpoint instead of Definition for recursive functions.### Hint: With tactics like ‘induction‘ and ‘destruct‘, _avoid_ naming with ‘as‘ and let Coq pick thenames for you. For example, use ‘induction e.‘ but _not_ ‘induction e as [...]‘.### Hint: For the proof, do ‘induction e.‘. Do NOT name the hypotheses with ‘as‘.### Hint: The simple cases are by ‘simpl. reflexivity.‘.### Hint: The addition case is by ‘simpl. rewrite <- IHe1. rewrite <- IHe2. destruct (optimize e1);destruct (optimize e2); try destruct n; try destruct n0; eauto using PeanoNat.Nat.add_0_r.‘.### Hint: You’ll need ‘Require Import Arith‘.Check lemma for Coq.Lemma CHECK_OPS :∀( e:Expr ) ( env :string −>nat) , Eval ( Optimize e ) env = Eval e env .Proof .i n t r o s .apply OptimizePreservesSemantics ; eauto .Qed.Check lemma for Dafny.lemma CHECK_OPS( e :Expr , env :string −>i n t)requires trueensures Eval ( Optimize ( e ) , env ) =Eval ( e , env ){OptimizePreservesSemantics ( e , env ) ;}G.6 Factorial PromptCoq. In Coq, write a factorial function, called ‘fac‘, and prove (in a lemma ‘FacPositive: forall (n:nat), fac n > 0‘) that the factorial is always strictly positive.Dafny. In Dafny, write a factorial function, called ‘fac‘, and prove (in a lemma called ‘FacPositive(n:nat)‘) that the factorial is always strictly positive.Hints for Coq.### Hint: Don’t forget to import the Arith module.### Hint: use ‘Nat.lt_0_1‘ in the base case of the proof.### Hint: use ‘Nat.lt_lt_add_r‘ in the inductive case of the proof.Check lemma for Coq.Lemma CHECK_FacPositive :∀( n:nat) , fac n > 0. Proof . i n t r o s . apply FacPositive ; eauto . Qed.Check lemma for Dafny.lemma CHECK_FacPositive ( n :nat)ensures fac ( n ) > 0 { FacPositive ( n ) ; }G.7 Food PromptIn Dafny: (1) Write a datatype for ‘Food‘: ‘Pasta‘ or ‘Pizza‘. Each Pasta or Pizza has a list oftoppings. Each ‘Topping‘ is one of: ‘tomato‘, ‘cheese‘, ‘olive‘, ‘broccoli‘, ‘mushroom‘, ‘pepper‘.(2) Write a predicate ‘ok‘ that accepts any pizza with five toppings or fewer, and any pasta with twotoppings or fewer. (3) Write a lemma ‘ok3_pizza‘ that proves that an accepted food with three ormore toppings must be a pizza.Hints for Dafny.### Hint: The length of a list or sequence ‘s‘ is ‘|s|‘.17Check lemma for Dafny.lemma CHECK_ok3_pizza ( x :Food )requires ok ( x )requires | x . toppings | ≥3ensures match x { case Pizza ( _ ) ⇒true case _⇒false }{ok3_pizza ( x ) ;}G.8 Reverse PromptIn Dafny: (1) Write a function ‘reverse‘ that takes a list as input and reverses it. (2) Then write alemma ‘reverse_permutes‘ that checks that for any list ‘l‘, an element exists in ‘l‘ if and only if itexists in the result of calling ‘reverse‘ on ‘l‘. (3) Then write a lemma ‘reverse_involutes‘ that checksthat for any list ‘l‘, calling ‘reverse‘ twice on ‘l‘ yields ‘l‘.Hints for Dafny.### Hint: The length of a list or sequence ‘s‘ is ‘|s|‘.### Hint: Use a plain ‘function‘ to define ‘reverse‘, not a ‘function method‘ or a ‘method‘.Check lemma for Dafny.lemma CHECK__reverse_permutes ( l :seq<int>)/ / TODO{}lemma CHECK__reverse_involutes ( l :seq<int>)ensures reverse ( reverse ( l ) ) =l ;{reverse_involutes ( l ) ;}G.9 Days PromptIn Dafny: (1) Write an ADT ‘Day‘ for the days of the week: ‘Sunday‘ to ‘Saturday‘. (2) Write afunction ‘next_biz_day‘ that gives the next business day. (3) Write a function‘iter_biz_day(d: Day, n:nat): Day‘ that iterates the next business day function, for an arbitrary number n of business days.(4) Write a lemma ‘iter5_biz_day_idempotent‘ that ensures that starting with a business day, takingthe next five business days is idempotent.Check lemma for Dafny.lemma CHECK_iter5_biz_day_idempotent ( d :Day )requires d̸=Saturdayrequires d̸=Sundayensures i t e r _ b i z _ d a y ( d , 5) =d{iter5_biz_day_idempotent ( d ) ;}H Examples of Scoring Partial ProgramsPartial program with a score of 0:datatype Expr =Partial program with a score of +1:datatype Expr =| Const ( val :i n t)Partial program with a score of −1:18datatype Expr =| Const ( val :i n t)| Var (name :string )| Add ( e1 :Expr , e2 :Expr )function Evaluate ( e :Expr ,env:string −>i n t):i n treads env{match ecase Const ( val ) ⇒valcase Var (name) ⇒env (name)case Add ( e1 , e2 ) ⇒Evaluate ( e1 , env ) +Evaluate ( e2 , env )}The negative score is due to the reads clause, which shouldn’t be there. Unfortunately, we onlyconfirm the error once the whole function is generated.I Broader ImpactsThe development of algorithms that allow generation of verified code using smaller models hasnotable broader impacts on both machine learning and society. We increase the efficiency per tokenin code language model usage, and allow for the usage of smaller models. This further reducesenergy consumption and allows for a usage of cheaper hardware, thereby democratizing access tothis technology. Our approach, which is asking models to prove their work is correct, and thenimmediately and externally checking whether the proof is correct, can mitigate some of the openissues with trusting LLMs.19NeurIPS Paper Checklist1.ClaimsQuestion: Do the main claims made in the abstract and introduction accurately reflect thepaper’s contributions and scope?Answer: [Yes]Justification: The abstract and intro clearly state the key results directly.Guidelines:•The answer NA means that the abstract and introduction do not include the claimsmade in the paper.•The abstract and/or introduction should clearly state the claims made, including thecontributions made in the paper and important assumptions and limitations. A No orNA answer to this question will not be perceived well by the reviewers.•The claims made should match theoretical and experimental results, and reflect howmuch the results can be expected to generalize to other settings.•It is fine to include aspirational goals as motivation as long as it is clear that these goalsare not attained by the paper.2.LimitationsQuestion: Does the paper discuss the limitations of the work performed by the authors?Answer: [Yes]Justification: Limitations is a subsection of our Discussion section.Guidelines:•The answer NA means that the paper has no limitation while the answer No means thatthe paper has limitations, but those are not discussed in the paper.• The authors are encouraged to create a separate "Limitations" section in their paper.•The paper should point out any strong assumptions and how robust the results are toviolations of these assumptions (e.g., independence assumptions, noiseless settings,model well-specification, asymptotic approximations only holding locally). The authorsshould reflect on how these assumptions might be violated in practice and what theimplications would be.•The authors should reflect on the scope of the claims made, e.g., if the approach wasonly tested on a few datasets or with a few runs. In general, empirical results oftendepend on implicit assumptions, which should be articulated.•The authors should reflect on the factors that influence the performance of the approach.For example, a facial recognition algorithm may perform poorly when image resolutionis low or images are taken in low lighting. Or a speech-to-text system might not beused reliably to provide closed captions for online lectures because it fails to handletechnical jargon.•The authors should discuss the computational efficiency of the proposed algorithmsand how they scale with dataset size.•If applicable, the authors should discuss possible limitations of their approach toaddress problems of privacy and fairness.•While the authors might fear that complete honesty about limitations might be used byreviewers as grounds for rejection, a worse outcome might be that reviewers discoverlimitations that aren’t acknowledged in the paper. The authors should use their bestjudgment and recognize that individual actions in favor of transparency play an impor-tant role in developing norms that preserve the integrity of the community. Reviewerswill be specifically instructed to not penalize honesty concerning limitations.3.Theory Assumptions and ProofsQuestion: For each theoretical result, does the paper provide the full set of assumptions anda complete (and correct) proof?Answer: [NA]20Justification: Our paper does not include theoretical results.Guidelines:• The answer NA means that the paper does not include theoretical results.•All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.•All assumptions should be clearly stated or referenced in the statement of any theorems.•The proofs can either appear in the main paper or the supplemental material, but ifthey appear in the supplemental material, the authors are encouraged to provide a shortproof sketch to provide intuition.•Inversely, any informal proof provided in the core of the paper should be complementedby formal proofs provided in appendix or supplemental material.• Theorems and Lemmas that the proof relies upon should be properly referenced.4.Experimental Result ReproducibilityQuestion: Does the paper fully disclose all the information needed to reproduce the main ex-perimental results of the paper to the extent that it affects the main claims and/or conclusionsof the paper (regardless of whether the code and data are provided or not)?Answer: [Yes]Justification: We discuss our algorithms in detail with pseudocode in section 2 and provideall used hyperparameters in Appendix C.Guidelines:• The answer NA means that the paper does not include experiments.•If the paper includes experiments, a No answer to this question will not be perceivedwell by the reviewers: Making the paper reproducible is important, regardless ofwhether the code and data are provided or not.•If the contribution is a dataset and/or model, the authors should describe the steps takento make their results reproducible or verifiable.•Depending on the contribution, reproducibility can be accomplished in various ways.For example, if the contribution is a novel architecture, describing the architecture fullymight suffice, or if the contribution is a specific model and empirical evaluation, it maybe necessary to either make it possible for others to replicate the model with the samedataset, or provide access to the model. In general. releasing code and data is oftenone good way to accomplish this, but reproducibility can also be provided via detailedinstructions for how to replicate the results, access to a hosted model (e.g., in the caseof a large language model), releasing of a model checkpoint, or other means that areappropriate to the research performed.•While NeurIPS does not require releasing code, the conference does require all submis-sions to provide some reasonable avenue for reproducibility, which may depend on thenature of the contribution. For example(a)If the contribution is primarily a new algorithm, the paper should make it clear howto reproduce that algorithm.(b)If the contribution is primarily a new model architecture, the paper should describethe architecture clearly and fully.(c)If the contribution is a new model (e.g., a large language model), then there shouldeither be a way to access this model for reproducing the results or a way to reproducethe model (e.g., with an open-source dataset or instructions for how to constructthe dataset).(d)We recognize that reproducibility may be tricky in some cases, in which caseauthors are welcome to describe the particular way they provide for reproducibility.In the case of closed-source models, it may be that access to the model is limited insome way (e.g., to registered users), but it should be possible for other researchersto have some path to reproducing or verifying the results.5.Open access to data and codeQuestion: Does the paper provide open access to the data and code, with sufficient instruc-tions to faithfully reproduce the main experimental results, as described in supplementalmaterial?21Answer: [Yes]Justification: We provide our code, including instructions on how to run it and reproduceour experimental results.Guidelines:• The answer NA means that paper does not include experiments requiring code.•Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•While we encourage the release of code and data, we understand that this might not bepossible, so “No” is an acceptable answer. Papers cannot be rejected simply for notincluding code, unless this is central to the contribution (e.g., for a new open-sourcebenchmark).•The instructions should contain the exact command and environment needed to run toreproduce the results. See the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•The authors should provide instructions on data access and preparation, including howto access the raw data, preprocessed data, intermediate data, and generated data, etc.•The authors should provide scripts to reproduce all experimental results for the newproposed method and baselines. If only a subset of experiments are reproducible, theyshould state which ones are omitted from the script and why.•At submission time, to preserve anonymity, the authors should release anonymizedversions (if applicable).•Providing as much information as possible in supplemental material (appended to thepaper) is recommended, but including URLs to data and code is permitted.6.Experimental Setting/DetailsQuestion: Does the paper specify all the training and test details (e.g., data splits, hyper-parameters, how they were chosen, type of optimizer, etc.) necessary to understand theresults?Answer: [Yes]Justification: Yes, we specify all hyperparameters, algorithms, and and used models.Guidelines:• The answer NA means that the paper does not include experiments.•The experimental setting should be presented in the core of the paper to a level of detailthat is necessary to appreciate the results and make sense of them.•The full details can be provided either with the code, in appendix, or as supplementalmaterial.7.Experiment Statistical SignificanceQuestion: Does the paper report error bars suitably and correctly defined or other appropriateinformation about the statistical significance of the experiments?Answer: [Yes]Justification: The error bars report Wilson intervals as described in C.1.Guidelines:• The answer NA means that the paper does not include experiments.•The authors should answer "Yes" if the results are accompanied by error bars, confi-dence intervals, or statistical significance tests, at least for the experiments that supportthe main claims of the paper.•The factors of variability that the error bars are capturing should be clearly stated (forexample, train/test split, initialization, random drawing of some parameter, or overallrun with given experimental conditions).•The method for calculating the error bars should be explained (closed form formula,call to a library function, bootstrap, etc.)• The assumptions made should be given (e.g., Normally distributed errors).22•It should be clear whether the error bar is the standard deviation or the standard errorof the mean.•It is OK to report 1-sigma error bars, but one should state it. The authors shouldpreferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesisof Normality of errors is not verified.•For asymmetric distributions, the authors should be careful not to show in tables orfigures symmetric error bars that would yield results that are out of range (e.g. negativeerror rates).•If error bars are reported in tables or plots, The authors should explain in the text howthey were calculated and reference the corresponding figures or tables in the text.8.Experiments Compute ResourcesQuestion: For each experiment, does the paper provide sufficient information on the com-puter resources (type of compute workers, memory, time of execution) needed to reproducethe experiments?Answer: [Yes]Justification: Experiments are explicitly reported in terms of token counts which can bedirectly converted to compute requirements on your hardware. We use an internal clusterwith A100 and H100 GPUs.Guidelines:• The answer NA means that the paper does not include experiments.•The paper should indicate the type of compute workers CPU or GPU, internal cluster,or cloud provider, including relevant memory and storage.•The paper should provide the amount of compute required for each of the individualexperimental runs as well as estimate the total compute.•The paper should disclose whether the full research project required more computethan the experiments reported in the paper (e.g., preliminary or failed experiments thatdidn’t make it into the paper).9.Code Of EthicsQuestion: Does the research conducted in the paper conform, in every respect, with theNeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ?Answer: [Yes]Justification: The paper conforms with the code of ethics.Guidelines:•The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.•If the authors answer No, they should explain the special circumstances that require adeviation from the Code of Ethics.•The authors should make sure to preserve anonymity (e.g., if there is a special consid-eration due to laws or regulations in their jurisdiction).10.Broader ImpactsQuestion: Does the paper discuss both potential positive societal impacts and negativesocietal impacts of the work performed?Answer: [Yes]Justification: We discuss broader impacts in Appendix I.Guidelines:• The answer NA means that there is no societal impact of the work performed.•If the authors answer NA or No, they should explain why their work has no societalimpact or why the paper does not address societal impact.•Examples of negative societal impacts include potential malicious or unintended uses(e.g., disinformation, generating fake profiles, surveillance), fairness considerations(e.g., deployment of technologies that could make decisions that unfairly impact specificgroups), privacy considerations, and security considerations.23•The conference expects that many papers will be foundational research and not tiedto particular applications, let alone deployments. However, if there is a direct path toany negative applications, the authors should point it out. For example, it is legitimateto point out that an improvement in the quality of generative models could be used togenerate deepfakes for disinformation. On the other hand, it is not needed to point outthat a generic algorithm for optimizing neural networks could enable people to trainmodels that generate Deepfakes faster.•The authors should consider possible harms that could arise when the technology isbeing used as intended and functioning correctly, harms that could arise when thetechnology is being used as intended but gives incorrect results, and harms followingfrom (intentional or unintentional) misuse of the technology.•If there are negative societal impacts, the authors could also discuss possible mitigationstrategies (e.g., gated release of models, providing defenses in addition to attacks,mechanisms for monitoring misuse, mechanisms to monitor how a system learns fromfeedback over time, improving the efficiency and accessibility of ML).11.SafeguardsQuestion: Does the paper describe safeguards that have been put in place for responsiblerelease of data or models that have a high risk for misuse (e.g., pretrained language models,image generators, or scraped datasets)?Answer: [NA]Justification: This paper is about optimizing results from existing models, and does notintroduce new models. Hence, we believe our paper poses no such risks.Guidelines:• The answer NA means that the paper poses no such risks.•Released models that have a high risk for misuse or dual-use should be released withnecessary safeguards to allow for controlled use of the model, for example by requiringthat users adhere to usage guidelines or restrictions to access the model or implementingsafety filters.•Datasets that have been scraped from the Internet could pose safety risks. The authorsshould describe how they avoided releasing unsafe images.•We recognize that providing effective safeguards is challenging, and many papers donot require this, but we encourage authors to take this into account and make a bestfaith effort.12.Licenses for existing assetsQuestion: Are the creators or original owners of assets (e.g., code, data, models), used inthe paper, properly credited and are the license and terms of use explicitly mentioned andproperly respected?Answer: [Yes]Justification: The model we use is correctly cited in section C.2.Guidelines:• The answer NA means that the paper does not use existing assets.• The authors should cite the original paper that produced the code package or dataset.•The authors should state which version of the asset is used and, if possible, include aURL.• The name of the license (e.g., CC-BY 4.0) should be included for each asset.•For scraped data from a particular source (e.g., website), the copyright and terms ofservice of that source should be provided.•If assets are released, the license, copyright information, and terms of use in the packageshould be provided. For popular datasets, paperswithcode.com/datasets hascurated licenses for some datasets. Their licensing guide can help determine the licenseof a dataset.•For existing datasets that are re-packaged, both the original license and the license ofthe derived asset (if it has changed) should be provided.24•If this information is not available online, the authors are encouraged to reach out tothe asset’s creators.13.New AssetsQuestion: Are new assets introduced in the paper well documented and is the documentationprovided alongside the assets?Answer: [NA]Justification: The paper does not release new assets.Guidelines:• The answer NA means that the paper does not release new assets.•Researchers should communicate the details of the dataset/code/model as part of theirsubmissions via structured templates. This includes details about training, license,limitations, etc.•The paper should discuss whether and how consent was obtained from people whoseasset is used.•At submission time, remember to anonymize your assets (if applicable). You can eithercreate an anonymized URL or include an anonymized zip file.14.Crowdsourcing and Research with Human SubjectsQuestion: For crowdsourcing experiments and research with human subjects, does the paperinclude the full text of instructions given to participants and screenshots, if applicable, aswell as details about compensation (if any)?Answer: [NA]Justification: Our paper does not involve crowdsourcing nor research with human subjects.Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Including this information in the supplemental material is fine, but if the main contribu-tion of the paper involves human subjects, then as much detail as possible should beincluded in the main paper.•According to the NeurIPS Code of Ethics, workers involved in data collection, curation,or other labor should be paid at least the minimum wage in the country of the datacollector.15.Institutional Review Board (IRB) Approvals or Equivalent for Research with HumanSubjectsQuestion: Does the paper describe potential risks incurred by study participants, whethersuch risks were disclosed to the subjects, and whether Institutional Review Board (IRB)approvals (or an equivalent approval/review based on the requirements of your country orinstitution) were obtained?Answer: [NA]Justification: Our paper does not involve crowdsourcing nor research with human subjects.Guidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Depending on the country in which research is conducted, IRB approval (or equivalent)may be required for any human subjects research. If you obtained IRB approval, youshould clearly state this in the paper.•We recognize that the procedures for this may vary significantly between institutionsand locations, and we expect authors to adhere to the NeurIPS Code of Ethics and theguidelines for their institution.•For initial submissions, do not include any information that would break anonymity (ifapplicable), such as the institution conducting the review.25 |
H5hePMXKht | Reasoning in ReasoningA Hierarchical Framework for Neural Theorem ProvingZiyu Ye∗1, Jiacheng Chen2, Jonathan Light3, Yifei Wang4, Jiankai Sun5, Mac Schwager5,Philip Torr6,7,Guohao Li7,8,Yuxin Chen1,Kaiyu Yang9,Yisong Yue2,Ziniu Hu21The University of Chicago,2California Institute of Technology,3RPI,4MIT CSAIL,5Stanford University,6University of Oxford,7Eigent AI,8Camel-AI.org,9Meta FAIRCode:github.com/ziyu-deep/reasoning-in-reasoningAbstractLearning to do complex reasoning is the central objective of artificial intelligence.Autoregressive language models have shown promise in generating intermediatesteps for problem solving; however, complex reasoning tasks such as theoremproving still present challenges due to the vast search spaces. Classical works haveconsidered reasoning by searching, e.g.,expanding the reasoning space with treesearch to explore intermediate steps; and reasoning by decomposing, i.e.,breakingdown the problem into higher-level thoughts that prompt lower-level actions. Inthis work, we develop Reasoning in Reasoning (RiR), a hierarchical frameworkthat formally unifies decomposing andsearch by a planner-actor game. Using neu-ral theorem proving as a representative task, our approach breaks down complextheorem proving problems into achievable sub-goals for abstraction over formalproofsteps, giving models: (i) improved generalizability for reasoning step genera-tion, (ii) a more compact and informative search space for reasoning trajectories,and (iii) an efficient mechanism for learning to plan. We empirically show thatRiR achieves concrete performance gain on popular theorem proving datasetsincluding LeanDojo and miniF2F while being highly efficient ( e.g.,RiR is nearly3x faster over the existing state-of-the-art baseline on miniF2F). We further presentinformation-theoretic conjectures on the principles driving RiR’s effectiveness.A very powerful approach is to attempt to eliminate everything from the problem except the essentials;that is, cut it down to size. Very often, if you can solve the simple problem, you can add refinements tothe solution of this, until you get back to the solution of the one you started with.– Claude ShannonPlanner: high-level reasoning (search width = 2)Actor: low-level reasoning (search width = 2)PROOF FINISHEDr = 0.6r = 0.4r = 0.5r = 0.2r = -infr = -infr = -infr = -infr = 0.6; QEDr = -infr = 0.3r = -inf r = 0.8r = -infr = 0.6r = -infr* += 0.8r* += 0.6r* += 0.6r* += 0.5r* += -infr* -infr* += 0.6QEDr* += 0.3: action (i.e., tactic or proofstep): candidate/queued/chosen/pruned target goal/////: Lean invalidated /validated / validated, and prioritized by valueFigure 1: An illustrative example of Algorithm 1 and 2 on decomposing and search of RiR.∗Part of this work was done at Eigent AI. Correspondence to: [email protected], [email protected] @ NeurIPS’24. All experiments and processing was conducted in Eigent AI and Caltech.1 IntroductionThe main question we aim to address in this work is: what is an effective learning mechanism forlanguage models to solve complex reasoning problems, such as mathematical theorem proving?Recent progress in language models have shown promises in generating intermediate steps bynext-token prediction [Wei et al., 2022], yet the performance often deteriorates when facing longtrajectories or vast spaces. This challenge is particularly evident in automated theorem proving, a taskthat has been at the core of artificial intelligence research since the field’s early days [Simon, 1969].The process of crafting a proof is a classical example of reasoning [Wang, 1961]: just as a learningagent needs generalize from a limited set of examples to the broader set of all possible worlds, aprover must navigate from a given set of known theorems to the vast space of provable statements.An effective strategy from human mathematicians is the decomposition of problems with a sequenceof target goals. This provides a more informative direction for subsequent reasoning steps, potentiallyreducing the effective search space.We hereby introduce Reasoning in Reasoning (RiR), a fundamental hierarchical framework unifyingdecomposing andsearch . In the context of theorem proving, our framework consists of an offlineco-training stage followed by an online goal-driven bi-level planning stage. Our contributions are:•Framework : We develop Reasoning in Reasoning (RiR), a new and general reasoningframework, that is practically implemented with goal-driven hierarchical learning via aplanner-actor game for neural theorem proving.•Experiments : We show that RiR achieves both state-of-the-art performance and efficiencyon popular benchmarks of LeanDojo [Yang et al., 2023] and miniF2F [Zheng et al., 2021].2 Preliminaries: Classical Neural Theorem Proving with Language ModelsWe here introduce classical methods. The glossary used throughout the paper is in Appendix A.Setup. We frame formal theorem proving as a Markov Decision Process. Starting with a to-provestatement qwhose initial state is s0, we sequentially apply tactics ytto prove it. Each tactic appliedwill make the current state sttransit to the next state st+1. Each state is associated with a scalarreward, r(st), provided by the environment. Below we show an example in Lean4.theorem (p q: Prop): p v q →q v p := byintro hcases h withinl hp => apply Or.inr; exact hpinr hq => apply Or.inl; exact hq-- goal s0: (p q: Prop) p v q →q v p-- goal s1: (p q: Prop)(h: p v q) →q v p-- goal s2: (p q: Prop)(hp: p) →q v p-- goal s3: (p q: Prop)(hq: q) →q v p-- goal s4: NoneNeural theorem proving. A neural network parameterized by θcan act as a policy that samplessingle tactic yt+1∼πθ(· |st)at step t. The objective is to find the optimal trajectory which leads tosolved for each statement q, that is to find a sequence of tactics y1, . . . , yTsuch that:s0y1− →s1y2− →s2y3− →. . .yT− − →sT.The problem of automated theorem proving is often solved via a two-stage framework as follows.Stage 1: offline learning for proofstep generation. Classical approaches [Han et al., 2021,Welleck et al., 2022, Yang et al., 2023, Li et al., 2024] fine-tune a model pθ(y⋆|s)to sample thenext proofstep yconditional only on current goal s. The classical prompt format is:> Input: {$current goal s} > Output :{$proofstep y⋆}Stage 2: online search for complete proof. Classically, given a statement q, a full proof ̄y1:Tisfound by constructing a tree [Yang et al., 2023, Li et al., 2024] with only low-level tactic search . Acommon choice is best-first search , where there is a priority queue Qof partial proofs, ordered bysome value function v(·). At step t, we pop one partial proof ̄y1:t(each associated with its currentstatest) with the highest value. We then expand ̄y1:tby generating Mcandidate proofsteps, and eachresulting partial proof ̄y1:t+1∈ St+1( ̄y1:t)is inserted into the queue Q ̄yprioritized by the value.The search continues until a full proof ̄y1:Tis found, or termination criteria is reached.23 Method3.1 Offline Learning Stage: Goal-Driven Co-TrainingUnlike classical approaches which learn to minimize the loss with regard to the conditional distributionp(y⋆|s), we propose to learn the joint distribution p⋆(s⋆t+1,y⋆t+1|st), where s⋆t+1is the target goalstate achieved by applying y⋆t+1. Our strategy is simple: we co-train a goal predictor model p(s⋆|s)and a goal-driven tactic predictor model p(y⋆|s,s⋆), with the co-training loss below:Lco(θ) =−1NX(s,y⋆,s⋆)∼Dtrain|{z}triplet sethlogpθ(s⋆|s)|{z }goal planner+ logpθ(y⋆|s,s⋆)| {z }goal-driven actori. (1)We use the following input-output prompt format in training for the theorem proving task:Planner (Target Goal Generation):> Input: [CURRENT GOAL] {$current goal s}[TARGET GOAL]> Output :{$target goal s⋆}Actor (Goal-Driven Tactic Generation):> Input: [CURRENT GOAL] {$current goal s}[TARGET GOAL] {$target goal s⋆}[PROOFSTEP]> Output :{$tactic y⋆}By decomposing the decision making process into goal state generation and goal-driven proofstepgeneration, RiR naturally captures the hierarchical structure of the reasoning.3.2 Online Planning Stage: Goal-Driven Hierarchical SearchAlgorithm 1 is a general design for RiR during the planning phase, where we can plug in variouspractical tree search policies. The high-level search explores promising target goals, while the low-level search finds the tactics to achieve each target goal, similar to the classical leader-follower game.A key feature is the joint update of both trees. An illustrative example is in Fig 1, and a concretealgorithm with best-first search (BFS) that we currently deploy for experiments is in Appendix D.Algorithm 1 RiR –A Unified Reasoning Mechanism with Decomposing and SearchInput: problem statement q, a language model w/ parameter θ1:tree←Tree (θ,q)2:repeat3: s⋆l←tree.policy () ▷ /⋆planner⋆/4:tree←Tree (θ,s⋆l)5: repeat6: ytl←tree.policy () ▷ /⋆goal-driven actor⋆/7: until STOP LOW8:{tree ,tree}.update () ▷ /⋆joint update⋆/9:until STOP HIGH10:returntree.solution4 ExperimentsSetups: Datasets and Models. We use the random split of LeanDojo Benchmark 4 [Yang et al.,2023] as the training dataset. We use BYT5-0.3B [Xue et al., 2021] as our base model, which is apretrained byte-level encoder-decoder Transformer model, and was adopted in Yang et al. [2023] withthe state-of-the-art performance in theorem proving. We refer to this trained checkpoint of Reprover(w/o retrieval) as our baseline, and evaluate it with the same setting as RiR. We train the above modelfor 500K steps, with the learning rate as 5.0×10−4and batch size as 8. For evaluation, we use bothLeanDojo Benchmark 4 and miniF2F [Zheng et al., 2021]. We use the Pass@1 metric with 10-mintimeout limit for evaluation. The search width for the high-level and the low-level is 5 and 64.3/uni00000087 /uni00000088/uni00000087 /uni00000089/uni00000087 /uni0000008a/uni00000087 /uni0000008b/uni00000087 /uni0000008c/uni00000087 /uni000000a4/uni00000087 /uni0000008d/uni00000087 /uni000000a5/uni00000087/uni00000016/uni000000a3/uni00000021/uni00000032/uni00000032/uni00000027/uni00000038/uni00000002/uni0000001a/uni0000002d/uni00000031/uni00000027/uni00000002/uni0000006c/uni00000039/uni0000006d/uni00000087/uni00000088/uni00000087/uni00000087/uni00000089/uni00000087/uni00000087/uni0000008a/uni00000087/uni00000087/uni0000008b/uni00000087/uni00000087/uni0000008c/uni00000087/uni00000087/uni00000003/uni00000024/uni0000003b/uni00000033/uni00000038/uni00000002/uni0000001a/uni0000002d/uni00000031/uni00000027/uni00000002/uni0000006c/uni00000039/uni0000006d/uni00000018/uni00000027/uni00000036/uni00000038/uni00000033/uni0000003d/uni00000027/uni00000038/uni00000021/uni0000003d/uni0000002b/uni00000057/uni00000002/uni00000021/uni00000024/uni0000003b/uni00000033/uni00000038/uni00000002/uni0000003b/uni0000002d/uni00000031/uni00000027/uni00000056/uni00000002/uni0000008d/uni000000a5/uni00000057/uni00000089/uni00000088/uni00000039/uni0000006c/uni00000032/uni00000033/uni00000002/uni00000036/uni000000a3/uni00000021/uni00000032/uni00000032/uni00000027/uni00000038/uni0000006d/uni00000018/uni0000002d/uni00000018/uni00000021/uni0000003d/uni0000002b/uni00000057/uni00000002/uni00000021/uni00000024/uni0000003b/uni00000033/uni00000038/uni00000002/uni0000003b/uni0000002d/uni00000031/uni00000027/uni00000056/uni00000002/uni00000002/uni00000089/uni0000008a/uni00000057/uni0000008a/uni0000008e/uni00000039/uni00000021/uni0000003d/uni0000002b/uni00000057/uni00000002/uni00000036/uni000000a3/uni00000021/uni00000032/uni00000032/uni00000027/uni00000038/uni00000002/uni0000003b/uni0000002d/uni00000031/uni00000027/uni00000056/uni00000002/uni0000008a/uni00000057/uni0000008e/uni0000008a/uni00000039/uni00000002Figure 2: Efficiency . The scatter plot for actorand planner time spent for proved theorems onminiF2F. RiR significantly reduces the actortime via the goal guidance from the planner./uni00000087 /uni00000088/uni00000087/uni00000087 /uni00000089/uni00000087/uni00000087 /uni0000008a/uni00000087/uni00000087 /uni0000008b/uni00000087/uni00000087 /uni0000008c/uni00000087/uni00000087/uni00000019/uni00000027/uni00000021/uni00000038/uni00000024/uni0000002c/uni00000002/uni0000001a/uni0000002d/uni00000031/uni00000027/uni00000002/uni0000006c/uni00000039/uni0000006d/uni00000087/uni00000089/uni00000087/uni0000008b/uni00000087/uni000000a4/uni00000087/uni000000a5/uni00000087/uni00000006/uni0000003c/uni00000031/uni0000003c/uni00000057/uni00000002/uni00000016/uni00000038/uni00000033/uni0000003d/uni00000027/uni00000026/uni00000002/uni0000001a/uni0000002c/uni00000027/uni00000033/uni00000038/uni00000027/uni00000031/uni00000039/uni00000018/uni00000027/uni00000036/uni00000038/uni00000033/uni0000003d/uni00000027/uni00000038/uni00000018/uni0000002d/uni00000018Figure 3: Efficiency . The CDF plot for searchtime spent for proved theorems on miniF2F Bench-mark. RiR is significantly faster (nearly 3x) thanthe existing state-of-the-art baseline.Results: Performance Gain. We present the performance comparison of RiR with existing base-lines in Table 1. RiR also proved 1 more AIME and 2 more AMC problems compared to the currentstate-of-the-art Reprover [Yang et al., 2023].Dataset ( →) miniF2F-test2LeanDojo-testMethod ( ↓) / Model ( →) BYT5-0.3B B YT5-0.3BReprover (BFS) 34.43 % 50.16%RiR (BFS) 36.89 % 53.73%Table 1: Performance. Pass@1 rate on LeanDojo and miniF2F.Results: Efficiency Gain. RiR is significantly faster in searching for the optimal reasoning trajec-tories via a more compact and information-directed search space with the goal-driven planner, asillustrated in Figure 3 on miniF2F benchmark. RiR is more time efficient in the sense that we achievebetter results in small computational budget. Specifically, as shown in Figure 2, while the classicalReprover has an average actor time ( i.e.,time spent for low-level proofstep search) of 78.21s, RiRreduces this to only 23.39 s, with additional 3.93s for planner time ( i.e.,time spent for high-level goalsearch) on average, setting the new efficiency benchmark for neural theorem proving.Remarks. We present logs showing how RiR found hard proofs fast while classical approaches failin Appendix F; take Finset.union subsetleft for example, while the classical methodexpanded more than 8914 nodes yet still failed after 10 minutes, RiR proved the theoremwithin 5 seconds and only searched 1 node . We believe the significant improvement in efficiencyand effectiveness comes from RiR’s ability to generalize better and to explore better in the morecompact and informative search space, empirically supporting the Conjecture 2 and 3 in Appendix B.5 ConclusionsWe have developed Reasoning in Reasoning (RiR), an easy-to-implement framework unifyingreasoning by search and reasoning by decomposing for language models. In the domain of automatedtheorem proving, RiR is practically implemented with goal-driven offline pretraining and hierarchicalonline planning, where reasoning takes place in different semantic levels. We explore RiR withinitial information-theoretical analysis, discussing the Co-Training Advantage Conjecture and theHierarchical Planning Advantage Conjecture in Appendix B, and present detailed discussions inrelated works and limitations in Appendix C and E. We hope RiR can shed light on the fundamentalway for reasoning with language models.2Additional data points on miniF2F-test for readers’ reference:• Azerbayev et al. [2023] L LEMMA -7B achieves 26.2%;• Welleck and Saha [2023] LLMS TEP-1.8B achieves 27.9%;• Polu and Sutskever [2020] GPT- fachieves 36.6%.4AcknowledgementsThis research was also supported in part through grants from the U.S. Department of Energy underGrant No. DOE DE-EE0009505, the National Science Foundation under Grant No. IIS 2332475,with additional acknowledgment to the University of Chicago’s Research Computing Center. Allexperimentation and processing were conducted solely on Eigent AI and Caltech servers.ReferencesZhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert QJiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model formathematics. arXiv preprint arXiv:2310.10631 , 2023.Bram Bakker, J ̈urgen Schmidhuber, et al. Hierarchical reinforcement learning based on subgoaldiscovery and subpolicy specialization. In Proc. of the 8-th Conf. on Intelligent AutonomousSystems , pages 438–445. Citeseer, 2004.Kshitij Bansal, Sarah Loos, Markus Rabe, Christian Szegedy, and Stewart Wilcox. Holist: An envi-ronment for machine learning of higher order logic theorem proving. In International Conferenceon Machine Learning , pages 454–463. PMLR, 2019.Ching-An Cheng, Andrey Kolobov, Dipendra Misra, Allen Nie, and Adith Swaminathan. LLF-Bench:Benchmark for Interactive Learning from Language Feedback. arXiv preprint arXiv:2312.06853 ,2023.Rohan Chitnis, Tom Silver, Joshua B Tenenbaum, Tomas Lozano-Perez, and Leslie Pack Kaelbling.Learning neuro-symbolic relational transition models for bilevel planning. In 2022 IEEE/RSJInternational Conference on Intelligent Robots and Systems (IROS) , pages 4166–4173. IEEE,2022.R ́emi Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In Internationalconference on computers and games , pages 72–83. Springer, 2006.Thomas M Cover. Elements of information theory . John Wiley & Sons, 1999.Konrad Czechowski, Tomasz Odrzygozdz, Marek Zbysinski, Michal Zawalski, Krzysztof Olejnik,Yuhuai Wu, Lukasz Kucinski, and Piotr Milos. Subgoal search for complex reasoning tasks.Advances in Neural Information Processing Systems , 34:624–638, 2021.Murtaza Dalal, Tarun Chiruvolu, Devendra Chaplot, and Ruslan Salakhutdinov. Plan-seq-learn:Language model guided rl for solving long horizon robotics tasks. arXiv preprint arXiv:2405.01534 ,2024.Xidong Feng, Ziyu Wan, Muning Wen, Ying Wen, Weinan Zhang, and Jun Wang. Alphazero-like tree-search can guide large language model decoding and training. arXiv preprint arXiv:2309.17179 ,2023.Arnaud Fickinger, Hengyuan Hu, Brandon Amos, Stuart Russell, and Noam Brown. Scalable OnlinePlanning via Reinforcement Learning Fine-Tuning. In A. Beygelzimer, Y . Dauphin, P. Liang, andJ. Wortman Vaughan, editors, Advances in Neural Information Processing Systems , 2021. URLhttps://openreview.net/forum?id=D0xGh031I9m .Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Devin, Benjamin Eysenbach,and Sergey Levine. Learning to reach goals via iterated supervised learning. arXiv preprintarXiv:1912.06088 , 2019.Dibya Ghosh, Abhishek Gupta, Justin Fu, Ashwin Reddy, Coline Devin, Benjamin Eysenbach, andSergey Levine. Learning to reach goals without reinforcement learning, 2020. URL https://openreview.net/forum?id=ByxoqJrtvr .Raj Ghugare, Matthieu Geist, Glen Berseth, and Benjamin Eysenbach. Closing the Gap be-tween TD Learning and Supervised Learning–A Generalisation Point of View. arXiv preprintarXiv:2401.11237 , 2024.5Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W Ayers, and Stanislas Polu. Proof artifactco-training for theorem proving with language models. arXiv preprint arXiv:2102.06203 , 2021.Joey Hejna, Rafael Rafailov, Harshit Sikchi, Chelsea Finn, Scott Niekum, W Bradley Knox, andDorsa Sadigh. Contrastive prefence learning: Learning from human feedback without rl. arXivpreprint arXiv:2310.13639 , 2023.Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and RishabhAgarwal. V-star: Training verifiers for self-taught reasoners. arXiv preprint arXiv:2402.06457 ,2024.Ziniu Hu, Ahmet Iscen, Aashi Jain, Thomas Kipf, Yisong Yue, David A Ross, Cordelia Schmid, andAlireza Fathi. Scenecraft: An llm agent for synthesizing 3d scene as blender code. In ICLR 2024Workshop on Large Language Model (LLM) Agents , 2024.Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shotplanners: Extracting actionable knowledge for embodied agents. In International Conference onMachine Learning , pages 9118–9147. PMLR, 2022.Nan Jiang. Notes on state abstractions, 2018.Nishanth Kumar, Willie McClinton, Rohan Chitnis, Tom Silver, Tom ́as Lozano-P ́erez, and Leslie PackKaelbling. Learning Efficient Abstract Planning Models that Choose What to Predict. In Conferenceon Robot Learning , pages 2070–2095. PMLR, 2023.Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury Hayat,Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. Hypertree proof search for neural theoremproving. Advances in Neural Information Processing Systems , 35:26337–26349, 2022.Hoang Le, Nan Jiang, Alekh Agarwal, Miroslav Dud ́ık, Yisong Yue, and Hal Daum ́e III. Hierarchicalimitation and reinforcement learning. In International conference on machine learning , pages2917–2926. PMLR, 2018.Zhaoyu Li, Jialiang Sun, Logan Murphy, Qidong Su, Zenan Li, Xian Zhang, Kaiyu Yang, and XujieSi. A Survey on Deep Learning for Theorem Proving. arXiv preprint arXiv:2404.09939 , 2024.Kaiqu Liang, Zixu Zhang, and Jaime Fern ́andez Fisac. Introspective Planning: Guiding Language-Enabled Agents to Refine Their Own Uncertainty. arXiv preprint arXiv:2402.06529 , 2024.Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone.Llm+ p: Empowering large language models with optimal planning proficiency. arXiv preprintarXiv:2304.11477 , 2023a.Zhihan Liu, Hao Hu, Shenao Zhang, Hongyi Guo, Shuqi Ke, Boyi Liu, and Zhaoran Wang. Reasonfor future, act for now: A principled framework for autonomous llm agents with provable sampleefficiency. arXiv preprint arXiv:2309.17382 , 2023b.Giambattista Parascandolo, Lars Buesing, Josh Merel, Leonard Hasenclever, John Aslanides, Jes-sica B Hamrick, Nicolas Heess, Alexander Neitz, and Theophane Weber. Divide-and-conquermonte carlo tree search for goal-directed planning. arXiv preprint arXiv:2004.11410 , 2020.Sujoy Paul, Jeroen Vanbaar, and Amit Roy-Chowdhury. Learning from trajectories via subgoaldiscovery. Advances in Neural Information Processing Systems , 32, 2019.Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.arXiv preprint arXiv:2009.03393 , 2020.Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and IlyaSutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344 ,2022.Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information.arXiv preprint arXiv:1703.00810 , 2017.6Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tom ́as Lozano-P ́erez, Leslie Kael-bling, and Joshua B Tenenbaum. Predicate invention for bilevel planning. In Proceedings of theAAAI Conference on Artificial Intelligence , 2023.Herbert A Simon. The Sciences of the Artificial . MIT press, 1969.Hao Wang. Proving theorems by pattern recognition—ii. Bell system technical journal , 40(1):1–41,1961.Tongzhou Wang, Antonio Torralba, Phillip Isola, and Amy Zhang. Optimal goal-reaching reinforce-ment learning via quasimetric learning. In International Conference on Machine Learning , pages36411–36430. PMLR, 2023.Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, DennyZhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances inNeural Information Processing Systems , 35:24824–24837, 2022.Sean Welleck and Rahul Saha. LLMSTEP: LLM proofstep suggestions in Lean. arXiv preprintarXiv:2310.18457 , 2023.Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi. Naturalprover:Grounded mathematical proof generation with language models. Advances in Neural InformationProcessing Systems , 35:4913–4927, 2022.David Wilkins. Using patterns and plans in chess. Artificial intelligence , 14(2):165–203, 1980.Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, AdamRoberts, and Colin Raffel. ByT5: Towards a token-free future with pre-trained byte-to-byte models2021. arXiv preprint arXiv:2105.13626 , 2021.Kaiyu Yang, Aidan M Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil,Ryan Prenger, and Anima Anandkumar. Leandojo: Theorem proving with retrieval-augmentedlanguage models. arXiv preprint arXiv:2306.15626 , 2023.Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan.Tree of thoughts: Deliberate problem solving with large language models. Advances in NeuralInformation Processing Systems , 36, 2024.Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan, HuiminChen, Ruobing Xie, Yankai Lin, et al. Advancing llm reasoning generalists with preference trees.arXiv preprint arXiv:2404.02078 , 2024.Michał Zawalski, Michał Tyrolski, Konrad Czechowski, Tomasz Odrzygozdz, Damian Stachura,Piotr Pikekos, Yuhuai Wu, Lukasz Kucinski, and Piotr Milos. Fast and precise: Adjusting planninghorizon with adaptive subgoal search. arXiv preprint arXiv:2206.00702 , 2022.Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H Chi, Quoc V Le, andDenny Zhou. Take a step back: evoking reasoning via abstraction in large language models. arXivpreprint arXiv:2310.06117 , 2023.Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark forformal olympiad-level mathematics. arXiv preprint arXiv:2109.00110 , 2021.Denny Zhou, Nathanael Sch ̈arli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans,Claire Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most prompting enables complex reasoningin large language models. arXiv preprint arXiv:2205.10625 , 2022.7AppendixA Glossarytheorem statement (q) a mathematical statement.goal (s) a statement in the context of a proofsearch, denoted as s.state a representation containing contexts ( e.g., hypotheses) and goals for the proof;for simplicity, we also use this term interchangeably with goal.proofstep / tactic (y) a reasoning step that uses established assumptions etc to achieve the goal.reasoning the process of deriving intermediate steps to solve a problem.planning a sub-type of reasoning on deriving high-level goals that trigger low-level steps.low-level search the sampling and pruning for proofsteps .high-level search the sampling and pruning for goals , see Section 3 for details.B Theoretical Conjectures: An Information Gain PerspectiveThe simple insight is that the new mechanism of RiR increases information learned from environments ,improving both generalization for reasoning step learning and exploration for reasoning path planning.B.1 IntuitionTo recap, we propose a hierarchical approach for learning in theorem proving:1. Planner step: predicting the target state via p(s⋆t+1|st).2. Actor step: predicting the proofstep via p(y⋆t|st,s⋆t+1).This contrasts with the traditional approach of solely predicting p(y⋆t|st).Let’s think in an information-theoretic way: st+1acts as an information bottleneck [Shwartz-Zivand Tishby, 2017], by abstracting different possible proofsteps or sequences of proofsteps ytinto asingle, more compact representation. Consider a simplified example below:-- goal st= 3*(2 + 1) = 9-- goal st+1= 9 = 9There exist multiple different proofsteps to reach st+1fromst, for instance:•ring – algebraic normalization.•normnum – direct numeric evaluation.•simp; rfl – simplification followed by reflexivity.•calc ···(omitted) – step-by-step calculation.It is important that in this way, st+1could generalize beyond our current setting ( i.e.,the next formalgoal in Lean4). Essentially, it can be any abstraction of the formal proofsteps , for example:• A high-level thought expressed in informal natural language.• A discrete code representing a proof strategy.• A latent vector in a learned numeric representation space.Intuitively, the abstraction brought by s⋆t+1helps capture essential proof structure and compress awayirrelevant details, which can also be considered as a way to reduce estimation errors [Jiang, 2018].Information never hurts [Cover, 1999] – there is H(y⋆t|st,s⋆t+1)≤H(y⋆t|st),i.e.,knowing s⋆t+1helps reduces uncertainty about y⋆t, providing a more focused direction for action search. In thetheorem-proving scenario, one may assume a lower bound on this additional knowledge. We nowpresent preliminary theoretical conjectures as follows.8B.2 Generalization Guarantee for Goal-Driven Policy Co-TrainingAssumption 1 The conditional mutual information between the optimal action y⋆and the optimaltarget goal s⋆, given the current state s, is bounded by a constant γI>0:I(y⋆;s⋆|s)≥γI. (2)Assumption 2 Letp⋆(s,s⋆,y⋆)be the true joint distribution over triplets {(si,y⋆i,s′i)}Ni=1. Letpθc(y⋆|s)andpθco(y⋆|s)be the learned distributions for the classical and the co-trainingapproach from minimizing the empirical loss Lc(θ)of classical method and Lco(θ)in Eq. 1. Weassume:1. The hypothesis classes ΘcandΘcohave VC dimensions dcanddco, and are such that:Lc(θ∗c)≤infθ∈ΘcLc(θ) +O rdc+ log(1 /δ)N!,Lco(θ∗co)≤infθ∈ΘcoLco(θ) +O rdco+ log(1 /δ)N!,with probability at least 1−δover the choice of the training set, where θ∗c=arg minθ∈ΘcLc(θ)andθ∗co= arg minθ∈ΘcoLco(θ).2. The number of training examples Nis sufficiently large such that N≥32(dco+log(1 /δ))γI.Conjecture 1 (Loss Decomposition with Information Gain) LetLco(θ∗co)be the optimal co-training loss and Lc(θ∗c)be the optimal classical loss. Suppose the conditional mutual informationsatisfies I(y⋆;s⋆|s)≥γIfor some constant γI>0. Then, there exists a constant C >0such that:Lco(θ∗co)≤ Lc(θ∗c) +CγI.Conjecture 2 (Co-Training Advantage) By Assumption 1 and 2, with probability at least 1−2δover the choice of the training set, the following inequality holds:Ep⋆(s)[∥p⋆(y⋆|s)−pθc(y⋆|s)∥TV]≥Ep⋆(s)[∥p⋆(y⋆|s)−pθco(y⋆|s)∥TV] +|f(CγI)|,where f(·)is assumed to be monotonic.B.3 Efficiency Guarantee for Goal-Driven Hierarchical PlanningIn our hierarchical approach for theorem proving, we introduce a target goal space ̃S=S. At stept, given the current state st, we first search for target goals given the current goal; next, conditionalon the chosen target goals, we search for tactics, and apply the chosen tactic to transit to new states;the process repeats until termination. In contrast, the classical single-level planning approach onlysamples low-level tactics without any high-level guidance; we refer to this as the flat planning:• (Classical) Flat planning : we have a policy πf:S → A that maps states to actions.• (RiR) Hierarchical planning : we have:–Ahigh-level planner policy πh:S → ̃S, that maps goals to target goals.–Alow-level actor policy πl:S × ̃S → A , that maps goals and target goals to actions.The simple intuition is that the introduction of the goal state creates a partitioning over the raw actionspace, reducing the search space and making the search more efficient.Conjecture 3 (Hierarchical Planning Advantage) Consider a hierarchical planning approachwith a high-level policy πhand a low-level policy πl, and a flat planning approach with a pol-icyπf. LetNh(ε)andNf(ε)be the number of node expansions required by the hierarchical and flatplanning approaches to find an ε-optimal solution w.p. at least 1−δ. Under mild assumptions, thereexist constants c1, c2, γ > 0such that:9E[Nh(ε)]≤c1e−I(A; ̃S|S)·log1δ·(E[Nf(ε)])γ+c2·ψ(εh, εl), (3)where γIis the conditional mutual information between the optimal action and the optimal target goal,andεhandεlare the ε-optimality gaps of the learned high-level and low-level policies, respectively.In essence, RiR is helpful when target goals effectively decompose the problem into smaller subprob-lems while preserving the essential information about the optimal solution. Intuitively, if the targetgoals selected by the high-level policy provide useful information towards the optimal actions, thelow-level policy can focus on a smaller set of relevant actions, leading to more efficient search.C Related WorksReasoning with language models. In language modeling, reasoning typically refers to generatingintermediate steps within the language space to reach a final solution to a problem [Wei et al., 2022].Solving complex or novel reasoning problems remains as an open challenge. One promising directionisreasoning by searching ,e.g.,expanding the reasoning space by tree search for intermediatesteps [Yao et al., 2024, Feng et al., 2023, Liu et al., 2023a, Yuan et al., 2024]. Another researchdirection is reasoning by decomposition ,i.e.,generating higher-level goals that trigger a single or asequence of lower-level steps [Zhou et al., 2022, Liu et al., 2023b, Zheng et al., 2023, Liang et al.,2024, Dalal et al., 2024, Huang et al., 2022, Hu et al., 2024]. The most similar line of literature toours is subgoal search [Wilkins, 1980, Czechowski et al., 2021, Zawalski et al., 2022, Parascandoloet al., 2020, Paul et al., 2019], while we put specific focus on the theorem proving benchmarks, andpresent initial theoretical conjectures, and formally unifies search anddecomposing in a hierarchicalframework for large language model training and inference.Automatic Theorem Proving with language models. As a representative reasoning task, automatictheorem proving (ATP) is often characterized as a tree search problem, i.e.,constructing a (tactic-based) proof tree and traversing it to find the correct proof [Li et al., 2024]. In the context of languagemodeling, proofstep generation forms the edges of the proof tree; the common standard in prior worksis to generate single proof steps with the input format similar to [GOAL]$ {goal}[PROOFSTEP] ,i.e.,conditional on the current goal, generating the next tactic [Polu and Sutskever, 2020, Yang et al.,2023, Azerbayev et al., 2023, Lample et al., 2022]. For the proof search stage, while people have beenusing simple heuristics like breadth-first search [Bansal et al., 2019], or MCTS-like search guided bylearned value functions [Lample et al., 2022, Polu et al., 2022], designing better search algorithmsremains an active area [Li et al., 2024]. The key challenge is that the tactic-based proof space iscombinatorially large. Distinguished from prior works , RiR introduces the goal-driven co-trainingfor proofstep generation with a bi-level search framework for generalization and efficiency advantage.Hierarchical and goal-conditioned RL. Planning and learning is hard when the decision-makingspace scales up [Bakker et al., 2004]. Hierarhical RL intends to address this issue by learning ahierarchy of policies operating on different levels of abstraction ( e.g.,subgoals over the state space).This mitigates the scaling issues by improving exploration for the environment [Ghosh et al., 2020,Chitnis et al., 2022, Kumar et al., 2023, Silver et al., 2023, Le et al., 2018]. There is another line ofresearch termed as goal-conditioned RL [Ghosh et al., 2019, Wang et al., 2023, Ghugare et al., 2024],which trains offline RL policy in a supervised manner conditioning on goal or return. Unlike mostprior works that rely on a predefined goal structure, we train models to learn to generate goals in thelanguage space, and refine the goal planning via low-level tree search and joint update.D A Practical Implementation of RiR with Best-First SearchHere, we propose a bi-level best-first search algorithm which maintains a priority queue of trajectories,where the priority of a trajectory is determined by its joint negative log-likelihood, defined as:−logp(τ) =−tXi=1logp(yi+1,s⋆i+1|si) =−tXi=1logp(s⋆i|si−1) + log p(yi+1|s⋆i+1,si).At each iteration, the algorithm pops the highest-priority trajectory. It performs high-level search tosample target goals, and low-level search to sample tactics conditioned on both the target and the10current goal. By prioritizing trajectories with policy heuristics, RiR efficiently explores the mostpromising reasoning paths, using the learned model to guide both goal planning and tactic generation.Algorithm 2 RiR – Best-First SearchInput: problem statement q, a language model with parameter θ1:Q ← QUEUE (q)2:whileQ ̸=∅and not B UDGET EXHAUSTED ()do3: τ= (s0,(ˆs⋆1,ˆy1), . . . ,st−1,(ˆs⋆t,ˆyt),st)← Q .POP()4: ifstis P ROOF FINISHED then return τ5: end if6:Gt←SAMPLE TARGET GOALS (st,θ) ▷ /⋆high-level search⋆/7: forˆs⋆(i)t∈ Gtdo8: Y(i)t←SAMPLE TACTICS (ˆs⋆(i)t+1,st,θ) ▷ /⋆low-level search⋆/9: forˆy(i,j)t+1∈ Y(i)tdo10: s(i,j)t+1←APPLY TACTIC (ˆy(i,j)t+1,st)11: τ′←(s0,(ˆs⋆1,ˆy1), . . . ,st,(ˆs⋆t+1,ˆyt+1),s(i,j)t+1)12: Q.PUSH(τ′,−logp(τ′)) ▷ /⋆joint update⋆/13: end for14: end for15:end while16:return FAILURENote that the Best-First Search here can be easily switched to other search algorithms, e.g.,MonteCarlo Tree Search [Coulom, 2006] or scalable RL finetuning [Fickinger et al., 2021], which weencourage the community to explore in more depth.E Limitations and Future StepsWhile we have shown the effectiveness of RiR on neural theorem proving benchmarks, there area lot more to be built upon our framework. Future directions may include: (i) incorporating dedi-cated reward models in the planning phase (rather than using the likehood heuristics); (ii) addingpost-training during planning using techniques like contrastive preference learning [Hejna et al.,2023], to further tune the model with pairs of successful and failed reasoning trajectories for self-improvement [Hosseini et al., 2024]; (iii) integrating language feedback [Cheng et al., 2023], contex-tual information [Welleck and Saha, 2023], and other broader goals for co-training and planning; (v)investigating in-context learning alternatives for co-traning to apply RiR in black-box models; (vii)adding goal rollout and lookahead to further improve RiR’s performance and efficiency.F Detailed Experimental Results and LogsWe are open-sourcing all our codes, training scripts, evaluation logs, and checkpoints at this link:•github.com/ziyu-deep/reasoning-in-reasoning .For evaluation on LeanDojo, we use:• Repository: https://github.com/leanprover-community/mathlib4 .• Commit: fe4454af900584467d21f4fd4fe951d29d9332a7 .For evaluation on miniF2F, we use:• Repository: https://github.com/yangky11/miniF2F-lean4 .• Commit: 9e445f5435407f014b88b44a98436d50dd7abd00 .11We hereby present some example proofs from logs, showing how RiR succeeded with significantlyfewer nodes to search. More examples can be found in our released repository.Example 0: Proof Found by RiRTheorem:File Path: Mathlib/Order/ConditionallyCompleteLattice/Basic.leanFull Name: OrderIso.map_ciSupStatus: Status.PROVEDProof:simp [iSup, hf]rw [e.map_csSup']swapassumption'apply Set.range_nonemptyrw [ ←Set.range_comp]rflSearch Statistics:Planner Time: 150.2634212092962Actor Time: 315.0649007729953Environment Time: 38.92193151102401Total Time: 505.9431369260419Total Nodes: 2207Searched Nodes: 37Example 0: Failure by Reprover (w/o retrieval)Theorem:File Path: Mathlib/Order/ConditionallyCompleteLattice/Basic.leanFull Name: OrderIso.map_ciSupStatus: Status.OPENProof: NoneSearch Statistics:Actor Time: 512.3867035790754Environment Time: 89.58101247090963Total Time: 602.1384408420126Total Nodes: 4082Searched Nodes: 16012Example 1: Proof Found by RiRTheorem:File Path: Mathlib/Data/Finset/Basic.leanFull Name: Finset.union_subset_leftStatus: Status.PROVEDProof:exact Finset.Subset.trans (Finset.subset_union_left s t) hSearch Statistics:Planner Time: 1.3937767379684374Actor Time: 3.304290219093673Environment Time: 0.07375576300546527Total Time: 4.774586059036665Total Nodes: 7Searched Nodes: 1Example 1: Failure by Reprover (w/o retrieval)Theorem:File Path: Mathlib/Data/Finset/Basic.leanFull Name: Finset.union_subset_leftStatus: Status.OPENProof: NoneSearch Statistics:Actor Time: 491.1531239761098Environment Time: 110.1171304465338Total Time: 601.520013278001Total Nodes: 8914Searched Nodes: 23313Example 2: Proof Found by RiRTheorem:File Path: Mathlib/Data/Nat/PrimeFin.leanFull Name: Nat.Prime.primeFactorsStatus: Status.PROVEDProof:extsimp [hp.ne_zero]simp [hp, Nat.dvd_prime hp]aesopSearch Statistics:Planner Time: 150.2634212092962Actor Time: 315.0649007729953Environment Time: 38.92193151102401Total Time: 505.9431369260419Total Nodes: 2207Searched Nodes: 37Example 2: Failure by Reprover (w/o retrieval)Theorem:File Path: Mathlib/Data/Nat/PrimeFin.leanFull Name: Nat.Prime.primeFactorsStatus: Status.OPENProof: NoneSearch Statistics:Actor Time: 474.4240076234564Environment Time: 127.5987611755263Total Time: 602.1851601980161Total Nodes: 4231Searched Nodes: 13314Example 3: Proof Found by RiRTheorem:File Path: Mathlib/Order/SuccPred/Basic.leanFull Name: exists_succ_iterate_orStatus: Status.PROVEDProof:obtain h |h := le_total a bexacts [Or.inl (IsSuccArchimedean.exists_succ_iterate_of_le h),Or.inr (IsSuccArchimedean.exists_succ_iterate_of_le h)]Search Statistics:Planner Time: 15.921687303110957Actor Time: 44.464585242792964Environment Time: 8.429574175737798Total Time: 68.86368872597814Total Nodes: 377Searched Nodes: 3Example 3: Failure by Reprover (w/o retrieval)Theorem:File Path: Mathlib/Order/SuccPred/Basic.leanFull Name: exists_succ_iterate_orStatus: Status.OPENProof: NoneSearch Statistics:Actor Time: 519.0408471203409Environment Time: 86.30267171841115Total Time: 605.4483464460354Total Nodes: 2819Searched Nodes: 951516 |
Ei4bzOt8NG | Attention Bias as an Inductive Bias: How to TeachTransformers Simple ArithmeticShaoxiong Duan, Yining ShiICC, [email protected] XuInstitute for Interdisciplinary Information SciencesTsinghua UniversityAbstractIn this paper, we study the Transformer model’s capability in learning arithmeticfrom an inductive learning perspective and draw attention to the importance ofinductive biases. We first introduce a definition of length generalization, requiringthe model to maintain near perfect accuracy on samples with length at least 10times the training length, as an indicator of successful learning. Through experi-ments and attention analysis, we show that the failure of the vanilla Transformeron learning arithmetic is due to inadequate inductive biasing. We then presentAttention Bias Scaffolding (ABS) which uses attention masking to enforce thenecessary inductive bias, making it the first Transformer-based architecture toachieve complete generalization on several arithmetic tasks such as addition andparity. Additionally, we introduce Attention Bias Calibration (ABC), a calibrationstage that allows the model to learn the proper attention biases, and obtain completelength generalization automatically on tasks that could interpolate.11 IntroductionTransformers have been the fundamental building blocks of many SOTA solutions across a widerange of machine learning tasks, yet they struggle to model many simple formal languages, suchas addition and parity. In this paper we study the problem from an inductive learning perspective,since the tasks are, by nature, inductive learning: the process of inferring general rules from finitenumber of observations. Successful learning of the desired generation rules allows the model toproduce correct results regardless of the input length, as long as resources permit. Thus we uselength generalization, defined as the model’s ability to maintain at least 99% accuracy when tested onsamples with lengths at least 10 times the training length, as an indicator to differentiate successfullearning from memorization of surface statistics.Arithmetic has been known to be hard for Transformers. There are some works that achieve acertain level of generalization on some of the tasks we study but they all use specially constructedarchitectures [ 6,9]. The extensive empirical studies conducted by Deletang et al. [8]and Ruoss et al.[21], which consider five major position encodings, obtain only slightly better results than randomguessing on parity, and 54.3% and 64.5% on binary addition, respectively. Neither of the studiesshow signs of generalization.In fact, some works even imply a certain theoretical impossibility. Bhattamishra et al. [2]providesevidence that Transformers are relatively more biased towards functions of low sensitivity, whichdoes not include parity. The RASP-Generalization Conjecture in Zhou et al. [28] indicates thatTransformers tend to learn a length-generalizing solution if there exists a short RASP-L program thatworks for all input lengths. Again this condition excludes both addition and parity.1Code available at https://github.com/shaoxiongduan/AttentionBiasCalibration .38th Conference on Neural Information Processing Systems (NeurIPS 2024) Workshop on MATH-AI.Parity is the simplest non-counter-free regular language, the lowest layer of the Chomsky hierarchy.This limitation may imply an impossibility for Transformers to solve any regular language that is notcounter-free [ 11]. Attempts to overcome the limitations include scratchpads, index hints, reversingthe output order, as well as allowing model weights to include ±∞ [28] etc. They only achieve mildgeneralization (e.g., from 30 digits to 50 [28], or 2.5 ×extrapolation [29]).Our Contributions . It is known that inductive learning requires inductive biases [ 17], additionalassumptions that are independent of the data. This is because any finite number of training sampleshas infinite possible continuations corresponding to different generation rules. Failures of previousworks indicate that, while the model architecture is an important source of inductive bias, it may notbe adequate to enable true learning.We make the following contribution in addressing this limitation: (1) We show that attention biasingis an effective way to enforce inductive bias for Transformers; (2) We propose attention biasingscaffolding (ABS) which biases the attention directly to introduce proper inductive bias, making itthefirstTransformer-based architecture to obtain complete generalization on a number of arithmetictasks; (3) We extend ABS to attention bias calibration (ABC), a process that collects attentionpatterns learned from training data and extends them to long lengths, enabling Transformers to learnthe algorithms automatically. We also show ABC’s relation to RPE [ 22] and LoRA [ 12], whichindicates the potential for its applications to more complicated tasks.10203040506000.20.40.60.81LengthModel AccuracySuccessorVanillaALiBiABCRoPE10203040506000.20.40.60.81Length (# of digits of both operand)AdditionVanillaALiBiABCRPERoPEFigure 1: Generalization results for models trained on 6-digit samples.Figure 1 summarizes the generalization that our ABC scheme achieves on two of the tasks we study,with comparisons against popular alternatives such as ALiBi [ 20], RoPE [ 23], etc. Ours is the onlysolution achieving perfect generalization. We obtain similar results on other tasks.2 SetupTasks . LetNbe the set of natural numbers. We consider the following 4 arithmetic tasks: (1)Successor :S(n) =n+ 1forn∈N; (2) Addition :y=x1+x2forx1, x2∈N.; (3) Parity :y=⊗ni=1x[i]where x[i]denotes the i-th bit of xin a binary representation and ⊗is bitwise xor; and(4)N×1:y=x1×x2forx1∈Nandx2∈ {0,1, . . . , 9}., i.e., a restricted form of multiplicationwhere one of the operands is restricted to single-digit.These tasks are well-known examples in the theory of computation. The seemingly trivial Successoris the basic component of Peano axioms, which formalize the structure of the natural numbers.Successor ,Addition andN×1all belong to the Type-1 context-sensitive (CS) category of theChomsky hierarchy, while Parity is in Type-3 Regular (R) category [ 8].N×1is a task that, unlikeaddition , involves more complex carry, which can be any digit among {0,1, . . . , 9}.Problem Representation . We tokenize the sequences into digits, which can represent infinitenumber of integers using a finite set of tokens, enabling unbounded extrapolation testing. With thistokenization, all tasks are naturally sequence-to-sequence except for Parity which is classification.We turn Parity into a sequence-to-sequence task using the scratch pad approach similar to [ 28]. Wealso reverse the ordering of the output sequence to match the natural generation process.2Model Configuration . We train a small encoder-decoder Transformer from scratch using cross-entropy loss and Adam optimizer. The training length is restricted to 6 digits and we test the modelusing lengths up to 60 digits. We use greedy decoding for all inferences. Exact match is used as thecriteria for accuracy. The detailed model configuration and training setup is provided in appendix B.3 (The Right) Attention is All You NeedWe first train vanilla Transformers with some commonly used positional encodings. The resultsonSuccessor andAddition have been shown in figure 1. All models achieve some levels ofinterpolation but none could extrapolate. RoPE and the vanilla Transformer perform almost identically,dropping precipitously to almost 0 accuracy once the length goes beyond 6. We observe similarpatterns with other tasks.To figure out the causes of failure, we extract and analyze the model’s attention weights. Figure2 shows the attention heat maps of one specific head in the last decoder layer when decodingSuccessor , and two heads for Addition . Detailed analysis is presented in appendix C.3 but thepatterns are very clear: the vanilla Transformer correctly learns the right attention patterns up to thetraining length and fails beyond that. This correlates perfectly with the extrapolation performanceshown in figure 1.(a) Cross Attn (b) Self Attn (c) Decoder Cross Attention (d) Decoder Self AttentionFigure 2: Attention heat maps for Successor (Left) and Addition (Right).3.1 Attention Bias ScaffoldingWe then introduce several methods that guide the model to attend to the right places. Assisting modellearning is a common practice. Relevant techniques include inputs combination [ 16], “arithmeticprompting” [ 27], representation transformation [ 18], scratch pad [ 15], etc. Indeed, most of ourmethods are drawn from these toolboxes as well. However, we use them to target directly at theattention, thus we call our approach Attention Bias Scaffolding (ABS).Self Attention0 0 00 0 00 0 00 0 00 0000 000 0 0 00 0 0 00 0 0 00 0 0 00 0 0 00 0(Unary Ops):Step = 1,Cross Attention(Binary Ops):Step = 2,00 00 00 00 00 0Left: Bias matrices for cross attention withwindow size 1 for unary (top) and binary(bottom) ops.Right: Bias matrix for self attention withwindow size 1 for both unary and binaryops.Figure 3: Attention bias matrices for unary and binary operations.We briefly summarize two of the main components in Attention Bias Scaffolding, leaving detailedtreatment to appendix C.3Windowed Attention Biasing . The idea was developed by Longformer [ 1]. The intuition is that themost important local dependency is typically restricted by a limited range, which can be capturedby a sliding window of width w[1]. Specifically, let A0=QKT√dbe the original attention weightmatrix before softmax, we bias the weights by A=A0+Bw.Bwis constructed via the slidingwindow mechanism and is detailed in appendix C. Figure 3 visualizes this process for unary andbinary operations.Cyclic Position Indexing . Position indexing refers to how we identify each individual position. Thesimplest way is just to index them 0,1, . . .. As our tasks have very restricted dependency contextswhich are localized by the windowed attention biases, the model only needs a way to differentiatepositions within the window. Long position indexing is not necessary and even harmful sometimes,as our empirical study shows. Therefore we propose Cyclic Position Indexing: Let ibe the positionindex of a token and Ta period parameter, the token position is converted to imod Tbeforeentering into the model.3.2 Results of ABSTable 1: Extrapolation results measured as percent accuracy (%). Numbers in bold show the bestaccuracies achieved for the corresponding input length limit.Length (Number of Digits)Task Model 6 10 15 20 60Vanilla 100.0 0.0 0 .0 0 .0 0 .0+w= 1 100.0 100.0 100.0 100.0 100.0+T= 3 100.0 100.0 100.0 100.0 100.0Successor NoPE + w= 1 100.0 100.0 100.0 100.0 100.0ALiBi 1.3 0 .0 0 .0 0 .0 0 .0RoPE 100.0 0.0 0 .0 0 .0 0 .0Vanilla 100.0 0.0 0 .0 0 .0 0 .0+w= 1 100.0 0.0 0.0 0.0 0.0+T= 3 100.0 100.0 100.0 100.0 100.0Addition NoPE + w= 1 99.95 99.81 99.84 99.76 99.35ALiBi 0.0 0 .0 0 .0 0 .0 0 .0RoPE 100.0 0 0 0 0RPE∗100.0 99.9 97 .2 21 .3 N/ATransformer†52.00†/52.60‡+scratchpad 29.23 0.29 0.0 0.0 0.0Parity + w= 1 100.0 100.0 100.0 100.0 100.0+T= 3 100.0 100.0 100.0 100.0 100.0NoPE + w= 1 100.0 100.0 100.0 100.0 100.0Vanilla 100.0 0.0 0 .0 0 .0 0 .0+w= 1 100.0 6.0 0.19 0.0 0.0N×1 +T= 3 100.0 100.0 100.0 100.0 100.0NoPE + w= 1 99.89 99.63 99.49 99.39 98.31RoPE 100.0 0 0 0 0* Data taken from Jelassi et al. [13] which is an encoder-only architecture with shared layers.†Data taken from Deletang et al. [8]which evaluates five encodings (none, sin/cos, RoPE, ALiBi,and the relative positional encoding from Transformer-XL) and reports the best-performing variant.‡Data taken from Ruoss et al. [21] which uses randomized positional encodings to boost lengthgeneralization.We conduct extensive experiments on each of the arithmetic tasks with various configurations, withresults shown in table 1. More detailed discussions are presented in appendix C.5. We summarize themain findings here:• None of the previous works achieves extrapolation on any of the tasks.4•Attention bias scaffolding achieve complete length generalization on all tasks, maintaining100% accuracy up to 60 digits.•Unary tasks ( Successor andParity ) appear to be not relying on any positional embeddingat all once the windowed bias is in place. The cyclic positional indexing is not necessaryeither.•For binary tasks ( Addition andN×1), on the other hand, a windowed attention bias alonedoes not guarantee success. It must be combined with cyclic position indexing to achievecomplete generalization. Interestingly, the model obtains slightly imperfect generalization(99+% accuracies up to 60 digits) with windowed biases and no positional embedding at all.Sinusoidal positional encoding does not work with a windowed attention bias, achievingonly interpolation but not extrapolation. Cyclic position indexing is necessary to enforcestronger localization for complete generalization.The findings suggest that the right attention is the key to achieving good generalization (thus thetitle of this section). The different reliance on positional encoding between unary and binary tasks isinteresting and we believe it is caused by different attention pattens (i.e., inductive bias) the two typesof tasks require. See figures 7 and 8.4 Attention Bias Calibration (ABC)We now introduce Attention Bias Calibration (ABC), an automatic process that extends the workingattention patterns of a model that achieves successful interpolation to arbitrary lengths while preserv-ing its near-perfect performance. The idea is that a model trained to full interpolation must be able toproduce the right attention pattern on interpolation data (see section C.3), which captures the localdependencies for recurrent arithmetic algorithms. ABC extracts and aggregates the attention weightsand uses them as attention bias, like Press et al. [20], to fine-tune the model for long inputs. Similarto the scaffolding in section 3.1, ABC is also a kind of inductive bias, but it is fully automatic.Detailed development, the actual algorithm, and evaluation results are presented in appendix D. Wesummarize the main findings here.•ABC could solve all the tasks we study except Parity . The failure with Parity is due tothe fact that the vanilla model does not interpolate, even with a scratchpad. This is one ofthe limitations of ABC.•The attention patterns show that diagonal and vertical extensions are most useful. The formerechos the finds of numerous previous works on positional encoding that favor recency, whilethe latter appears to be connected to the “attention sink” phenomenon [26].•ABC has connections to other effective schemes of position and attention manipulation. Weelaborate on two specific examples, RPE and LoRA, in section E.5 ConclusionThis work aims to show the importance of inductive biases by approaching arithmetic tasks throughthe perspective of inductive learning. Our solution solves a few long-standing difficult or even“impossible” tasks (e.g., Parity ). In its current form, we do not expect ABS or ABC to work directlyon more complex tasks. Our works show that LLM’s embarrassing failures in solving simple taskssuch as multi-digit addition may not be solvable by current methods. As these tasks are, by nature,inductive learning, and current length generalization methods may not provide proper inductive biases.How to achieve this for LLMs to solve arithmetic tasks while maintaining their general performancesin multi-task settings is an important line of future work.References[1]Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer.arXiv preprint arXiv:2004.05150 , 2020.[2]Satwik Bhattamishra, Arkil Patel, Varun Kanade, and Phil Blunsom. Simplicity bias in trans-formers and their ability to learn sparse Boolean functions. In Anna Rogers, Jordan Boyd-Graber,5and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association forComputational Linguistics (Volume 1: Long Papers) , pages 5767–5791, Toronto, Canada, July2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.317. URLhttps://aclanthology.org/2023.acl-long.317 .[3]Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. Extending contextwindow of large language models via positional interpolation, 2023.[4]Ta-Chung Chi, Ting-Han Fan, Peter J. Ramadge, and Alexander I. Rudnicky. Kerple: Kernelizedrelative positional embedding for length extrapolation, 2022.[5]Ta-Chung Chi, Ting-Han Fan, Alexander Rudnicky, and Peter Ramadge. Dissecting transformerlength extrapolation via the lens of receptive field analysis. In Proceedings of the 61st AnnualMeeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages13522–13537, Toronto, Canada, July 2023. Association for Computational Linguistics. doi:10.18653/v1/2023.acl-long.756. URL https://aclanthology.org/2023.acl-long.756 .[6]David Chiang and Peter Cholak. Overcoming a theoretical limitation of self-attention. InProceedings of the 60th Annual Meeting of the Association for Computational Linguis-tics (Volume 1: Long Papers) , pages 7654–7664, Dublin, Ireland, May 2022. Associa-tion for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.527. URL https://aclanthology.org/2022.acl-long.527 .[7]Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V . Le, and Ruslan Salakhutdinov.Transformer-xl: Attentive language models beyond a fixed-length context, 2019.[8]Gregoire Deletang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, Li Kevin Wenliang, ElliotCatt, Chris Cundy, Marcus Hutter, Shane Legg, Joel Veness, and Pedro A Ortega. Neuralnetworks and the chomsky hierarchy. In The Eleventh International Conference on LearningRepresentations , 2023. URL https://openreview.net/forum?id=WbxHAzkeQcn .[9]Rohan Deshpande, Jerry Chen, and Isabelle G. Lee. Rect: A recursive transformer architecturefor generalizable mathematical reasoning. In International Workshop on Neural-SymbolicLearning and Reasoning , 2021. URL https://api.semanticscholar.org/CorpusID:239029449 .[10] Philipp Dufter, Martin Schmitt, and Hinrich Schütze. Position Information in Transformers:An Overview. Computational Linguistics , 48(3):733–763, 09 2022. ISSN 0891-2017. doi:10.1162/coli_a_00445. URL https://doi.org/10.1162/coli_a_00445 .[11] Michael Hahn. Theoretical limitations of self-attention in neural sequence models. Transactionsof the Association for Computational Linguistics , 8:156–171, 2020. doi: 10.1162/tacl_a_00306.URL https://aclanthology.org/2020.tacl-1.11 .[12] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang,Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models, 2021.[13] Samy Jelassi, Stéphane d’Ascoli, Carles Domingo-Enrich, Yuhuai Wu, Yuanzhi Li, and FrançoisCharton. Length generalization in arithmetic transformers, 2023.[14] Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and SivaReddy. The impact of positional encoding on length generalization in transformers, 2023.[15] Nayoung Lee, Kartik Sreenivasan, Jason D. Lee, Kangwook Lee, and Dimitris Papailiopoulos.Teaching arithmetic to small transformers, 2023.[16] Jindˇrich Libovický, Jind ˇrich Helcl, and David Mare ˇcek. Input combination strategies for multi-source transformer decoder. In Proceedings of the Third Conference on Machine Translation,Volume 1: Research Papers , pages 253–260, Stroudsburg, PA, USA, 2018. Association forComputational Linguistics. ISBN 978-1-948087-81-0.[17] Tom M. Mitchell. The need for biases in learning generalizations. Technical report, RutgersUniversity, New Brunswick, NJ, 1980. URL http://dml.cs.byu.edu/~cgc/docs/mldm_tools/Reading/Need%20for%20Bias.pdf .6[18] Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. Investigating the limitations of transformerswith simple arithmetic tasks, 2021.[19] Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. Yarn: Efficient contextwindow extension of large language models, 2023.[20] Ofir Press, Noah Smith, and Mike Lewis. Train short, test long: Attention with linear biasesenables input length extrapolation. In International Conference on Learning Representations ,2022. URL https://openreview.net/forum?id=R8sQPpGCv0 .[21] Anian Ruoss, Grégoire Delétang, Tim Genewein, Jordi Grau-Moya, Róbert Csordás, MehdiBennani, Shane Legg, and Joel Veness. Randomized positional encodings boost length gen-eralization of transformers. In Proceedings of the 61st Annual Meeting of the Association forComputational Linguistics (Volume 2: Short Papers) , pages 1889–1903, Toronto, Canada, July2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-short.161. URLhttps://aclanthology.org/2023.acl-short.161 .[22] Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position rep-resentations. In Proceedings of the 2018 Conference of the North American Chapter of theAssociation for Computational Linguistics: Human Language Technologies, Volume 2 (ShortPapers) , pages 464–468, New Orleans, Louisiana, June 2018. Association for ComputationalLinguistics. doi: 10.18653/v1/N18-2074. URL https://aclanthology.org/N18-2074 .[23] Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer:Enhanced transformer with rotary position embedding, 2022.[24] Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, and RuslanSalakhutdinov. Transformer dissection: An unified understanding for transformer’s atten-tion via the lens of kernel. In Proceedings of the 2019 Conference on Empirical Methodsin Natural Language Processing and the 9th International Joint Conference on NaturalLanguage Processing (EMNLP-IJCNLP) , pages 4344–4353, Hong Kong, China, Novem-ber 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1443. URLhttps://aclanthology.org/D19-1443 .[25] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan NGomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon,U. V on Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, ed-itors, Advances in Neural Information Processing Systems , volume 30. Curran Associates,Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf .[26] Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. Efficient streaminglanguage models with attention sinks. In The Twelfth International Conference on LearningRepresentations , 2024. URL https://openreview.net/forum?id=NG7sS51zVF .[27] Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and HanieSedghi. Teaching algorithmic reasoning via in-context learning, 2022.[28] Hattie Zhou, Arwen Bradley, Etai Littwin, Noam Razin, Omid Saremi, Joshua M. Susskind,Samy Bengio, and Preetum Nakkiran. What algorithms can transformers learn? a study inlength generalization. In The Twelfth International Conference on Learning Representations ,2024. URL https://openreview.net/forum?id=AssIuHnmHX .[29] Yongchao Zhou, Uri Alon, Xinyun Chen, Xuezhi Wang, Rishabh Agarwal, and Denny Zhou.Transformers can achieve length generalization but not robustly, 2024. URL https://arxiv.org/abs/2402.09371 .7A Related WorkLength generalization for Transformers is a very hot topic. And indeed we draw on many of theirinspirations. In this section we briefly summarize some of the most influential works.Existing works on Transformer length generalization have been mainly focusing on two aspects:positional encoding or/and attention bias.Relative Positional Encoding . Relative positional encoding (RPE) relies on the relative distancebetween tokens to construct position embeddings. This approach is first proposed by Shaw et al.[22] and has been shown to produce significant improvements over absolute positional encoding inmachine translation tasks [ 22]. This leads to its application in numerous machine learning modelsand the development of multiple variations such as Transformer-XL [7] and RoPE [23].Attention Biasing . Attention biasing, on the other hand, adds a bias directly to the attention matrix,allowing the model to extrapolate to longer lengths efficiently. First introduced as ALiBi (Attentionwith Linear Biases) by Press et al. [20], it is quickly followed by similar models such as KERPLE[4], and Sandwich [ 5], all showing certain improvement in length extrapolation. Other forms ofbiases include sliding window [ 1] and its variations. Compared to other relative positional encodingschemes, attention biasing typically demands less computational resources.These two lines of work are closely related and there are extensive studies on their effectiveness.2However, the results are mixed. On one hand, the popular belief is that relative PEs [ 22,7,23] aremore effective in length generalization than absolute variants [ 25]. On the other hand, however, someworks (e.g., Kazemnejad et al. [14]) point out that such a conclusion is obtained by using languagemodeling perplexity as the sole metric, which may not reflect actual performances on downstreamtasks. In fact, Kazemnejad et al. [14] show that, on a collection of reasoning and mathematicaltasks, No Positional Encoding (NoPE) actually performs the best. Likewise, Deletang et al. [8]showthat state-of-the-art positional encoding or attention biasing methods do not help the Transformerextrapolate on arithmetic tasks.B Model ConfigurationSince our tasks are sequence-to-sequence, we choose an encoder-decoder architecture, with 1 encoderlayer and 6 decoder layers, all with 8 attention heads. The embedding size is 128 and feed forwardsize 512. We tried models with a number of different sizes and found no significant difference acrossall variations that could converge. We settled on the model above and did not pursue the configurationwith the optimal size.We train our models using cross-entropy loss and Adam optimizer, with learning rate 10−5and adropout of 0.3. For training for interpolation, we generate a random permutation Πof numbers in therange [0, 220] and split the set by a 7:1 ratio for training and validation. For binary operations such asAddition , both operands are drawn independently from Π. Thus both the training and validationdata sets consist of mainly 6-digit numbers, in base 10, with less than 5% 7-digit numbers. We denoteLintthe length of input, measured as the maximum number of digits in the operand(s), during theinterpolation phase. Note that length refers to the number of digits in the operands, not the total inputsequence length.For extrapolation testing, for each length L, we randomly sample min(10L−10L−1,10000) numbersof length Land compute the accuracy on these samples. For Parity which deals with binarysequences, we still generate train and test numbers in the way described above and convert theminto binary sequences for training and testing. Since a number’s length in decimal is proportionalto its length in binary, the 10x length expansion is preserved in either base. The model’s output isconsidered accurate if and only if it exactly matches the correct label sequence.2Please see Dufter et al. [10] for a comprehensive review of methods to incorporate position information intoTransformer models.8C Attention Bias Scaffolding DetailsIn this section we provide more details on the importance of attention and our attention bias scaffoldingmethods. We develop our ideas through a series of initial experimentations, attention weights analysis,and final verification.Existing approaches to optimizing length generalization of Transformer models have been focusingon two aspects: positional encoding or/and attention bias. The two concepts are closely related. Infact, we believe they should be treated as two sides of the same coin: All PEs influence the attention,and almost all ABs, with the exception of no positional encoding at all such as Kazemnejad et al.[14] and ours, rely on position information to determine the bias. However, the best-performingAB methods’ dependency on positional information is indirect: the bias is often determined by thedistance between tokens, instead of their positions. Examples include ALiBi [ 20] and RPE [ 22]. Inaddition, as our ABS and ABC schemes show, AB can work well without any position information.This is consistent with the findings of some previous works. For example, although Transformer’sattention mechanism is order-invariant, decoder-only Transformers with causal attention mask arenotand can model sequences without explicit position information [24].C.1 Our Thoughts and FindingsWe have an interesting finding in a similar tune. That is, with our mechanism that enables the modelto attend to the correct tokens, explicit positional encoding is indeed not always necessary, even forachieving perfect generalization. With our architecture, cross-attention allows the model to attend tothe correct input while self-attention relays the information from the previous step.This leads us to believe that positional encoding or embedding is not the key to achieving goodgeneralization. The right attention is. Positional encoding and attention biasing are just means toattain the latter. Since there is no universal positional encoding or attention biasing that generalizeswell on all tasks, for the tasks that we study in this work, auxiliary means that target directly at theattention could be used to achieve better generalization.C.2 Initial ExperimentationTo develop our ideas, we first train vanilla Transformers with some commonly used length generaliza-tion methods, including the original sinusoidal positional encoding, ALiBi, and RoPE, and examinethe results.Figure 4 shows the results on Successor andAddition . All models achieve some levels ofinterpolation but none could extrapolate beyond training length. Among them, RoPE and vanillaTransformer perform almost identically, dropping precipitously to almost 0 accuracy once the lengthgoes beyond 6. Note that the RoPE implementation for Addition must use an embedding size of512 otherwise it converges very slowly.0 5 10 15 2000.20.40.60.81LengthModel AccuracySuccessorVanillaALiBiRoPE0 5 10 15 2000.20.40.60.81LengthModel AccuracyAdditionVanillaALiBiRoPEFigure 4: Extrapolation results for models trained on Lint≤6onSuccessor andAddition . Lengthis measured in the number of digits of both operands.We observe similar patterns with other tasks. Table 2 summarizes the vanilla (sinusoidal positionalencoding) Transformer’s capabilities for interpolation and extrapolation capabilities on these tasks.9We single out the Vanilla model because our ABC scheme works only when the Vanilla model caninterpolate.Interpolation ExtrapolationSuccessor ✓ ✗Addition ✓ ✗Parity ✗ ✗N×1 ✓ ✗Table 2: Vanilla Transformer’s interpolation and extrapolation capabilities.C.3 Attention AnalysisTo figure out the causes of failure to extrapolate, we extract and analyze the attention weights of thevanilla model on Successor andAddition . Figure 5 gives an example of the attention heat map ofone specific head in the last decoder layer during a Successor task. Lighter colors represent higherweights.(a) Cross Attention (b) Self AttentionFigure 5: Attention heat map on “ 03611451449241919819 ” for Successor .For the sequence 03611451449241919819 , the correct output should be 03611451449241919820 .Note that we reverse the output digits during training so the model also generates output starting fromthe lowest digit and working upwards. The model is correct until the hundred-thousands digit. For aninput sequence of length n, to generate the i-th digit for Successor correctly, the crucial informationlies in the (n−i+ 1)-th input token and the (i−1)-th output token (for possible carry).3This meansthat the correct attention pattern should light up the “anti-diagonal” (the diagonal from top-right tobottom-left) for the cross attention matrix and “subdiagonal” (the diagonal directly under the maindiagonal) for self-attention. From figure 5 it is clear that the Vanilla Transformer correctly learns theattention pattern up to the hundred-thousands digit and fails beyond that. This correlates perfectlywith the extrapolation performance shown in figure 4.ForAddition , we look at individual heads. Figure 6 shows an example of the attention heat maps oftwo specific heads in the last decoder layer during an addition task.(a) Decoder Cross Attention (b) Decoder Self AttentionFigure 6: Attention heat map on “ 078114514+0241919810 ” for Addition .3Note that the tokens are generated in a lowest-digit first order.10In this case we find that there appears to be a sort of differentiation of tasks, where one head looksat the first operand and the other looks at the second. The results are consistent with those found inSuccessor , that the model does a good job identifying which token to attend to up to the maximumtraining length. Again this echoes with the extrapolation performance of figure 4.C.4 Attention Bias ScaffoldingTo future validate our hypothesis, we introduce a number of methods that guide the model to attendto the right places. The ideas are inspired by existing methods for assisting model learning. Those wefind effective in arithmetic learning include the following:Input AlignmentWhen we humans perform arithmetic computations, input alignment is a common practice thatfacilitates the process. For example, for multi-digit addition, we write the numbers one belowthe other, aligning them based on place value. We then add from the rightmost digit, propagatingthrough the left, memorizing carries. Without positional encoding and attention biasing, the originalTransformer’s attention is order-invariant, and, theoretically, the importance of context does notdepend on recency. However, certain input representations result in simplified attention patterns thatcan be captured by the windowed biasing introduced next. Therefore we interleave the digits fromthe two operands for binary operations so that digits from each operand that should be attended totogether are adjacent. Specifically, for a binary operator ⊕(such as +), and two n-digit numbersa=anan−1. . . a 1andb=bnbn−1. . . b 1where aiandbiare their digits in the proper baserepresentation, the input sequence is transformed asanan−1. . . a 1⊕bnbn−1. . . b 1−→ ⊕ anbnan−1bn−1. . . a 1b1N×1is different since the second operand, say b, is single-digit. In this case, we just insert bintothe right side of each digit of a:anan−1. . . a 1×b−→ × anban−1b . . . a 1bNote that input alignment is only used for ABS, to make the attention pattern simple so subsequentmethods could “scaffold” the attention to longer inputs more easily. We do notneed to use it for ABCbecause ABC could automatically learn the correct patterns. The input to the ABC model is simplythe “natural” expression (e.g., 0123+0456 or0123*6 ).Windowed Attention BiasingBiasing towards recency and penalizing attention scores between distant query-key pairs is the basicidea of ABs such as ALiBi [ 20]. The windowed attention biasing developed by Longformer [ 1] usesa sliding window to control which parts of the attention matrix are “open”. We can customize itaccording to the attention patterns we want to enforce.Specifically, recall that omitting head indexing, given query, key, and value matrices, the Transformermodel [25] computes attention scores as:Attention (Q,K,V) = softmax(QKT√d)VLetA0=QKT√dbe the original attention weight matrix before softmax, we bias the weights byA=A0+Bw, where wis a parameter specifying the window width. The basic idea for constructingBwis setting a subset of its elements to 0, and the rest to −inf. This essentially masks out certainelements of Ato−inf, which, after softmax, results in 0 weights for corresponding tokens,The construction of Bwdepends on the recurrent pattern that encodes the inductive bias about thetask [ 1]. Figure 7 shows the patterns for our tasks. For unary operations, such as Successor andParity , generating the current output token depends on the previous output token and one inputtoken at the corresponding position, shown by figure 7 (a). Binary operations, such as addition andN×1, share the same output token dependency but different input token dependency. In this case,since we align digits from the two operands, as shown in figure 7 (b), the context window spans twoconsecutive input tokens and also slides two positions at a time.118 ... 0 ... 9 9 3 6 ...Cross Attn0 ... 0 ... 9 9 3 6 ...Cross AttnSelf AttnSelf AttnInput Sequence Output Sequence(a) Step = 1 (Unary Ops):(b) Step = 2 (Binary Ops):Figure 7: Attention patterns for unary and binary operations.For an input length Sand output length L, the bias for decoder self-attention isBw=0, ifi−k=jfori, j= 1, . . . , L, k = 0, . . . , w−inf,otherwiseThat is, the elements of the matrix are all set to −infexcept those on the main diagonal and welements below. Note that, following the traditional practice [ 25] of decoder masking, all elementsabove the main diagonal are set to −infto prevent the decoder from seeing future tokens.Cross attention bias is similar, with three differences: (1) Since the order of output sequence isreversed, the “open” context windows go along the anti-diagonal direction; (2) Since we align theinput digits, the window spans, and also steps over, two positions for binary operations; (3) The opencontext window extends to both left and right wpositions.4Figure 8 is a visualization for the case of w= 1.Self Attention0 0 00 0 00 0 00 0 00 0000 000 0 0 00 0 0 00 0 0 00 0 0 00 0 0 00 0(Unary Ops):Step = 1,Cross Attention(Binary Ops):Step = 2,00 00 00 00 00 0Left: Bias matrices for cross attention withwindow size 1 for unary (top) and binary(bottom) ops.Right: Bias matrix for self attention withwindow size 1 for both unary and binaryops.Figure 8: Attention bias matrices for unary and binary operations.Cyclic Position Indexing (CPI)Position indexing refers to how we identify each individual position. The simplest way is just to indexthem 0,1, . . .. Positional embedding mechanisms are then constructed based on this indexing. Veryrecently, manipulating position indexing has become an effective and trending method for expanding4Self attention bias only extends to the left.12context windows for Transformer-based LLMs. For example, Chen et al. [3]and its NTK-awarevariant [ 19] modify RoPE with “interpolated” position indices to increase the “density” of positionswithin the pre-trained context window, thus effectively extending its length.The motivation for using CPI in our tasks is that large position indices unseen during training mayconfuse the model. And for arithmetic tasks that admit recurrent generation rules, it is not necessaryto identify tokens that are not being currently attended to either. As long as the period is compatiblewith the context window, it should provide the model with a clear mechanism to differentiate therelevant tokens without diverting its attention. For arithmetic tasks, our empirical study shows thatthe model is not sensitive to the value of Tas long as it produces an open window whose widthis approximately that of the bias context window as shown in figure 7. We believe it might be ofindependent interest for other application scenarios.C.5 Validation ResultsTo evaluate the effectiveness of the above mechanisms, we conduct extensive experiments on each ofthe arithmetic tasks with the following configurations:(A)Vanilla : Vanilla Transformer with sinusoidal positional encoding.(B)+w= 1: (A) + windowed attention biasing with w= 1.(C)+T= 3: (B) + additional CPI with a period of T= 3.(D) NoPE + w= 1: Windowed attention biasing only, without positional encoding at all, withw= 1.We experimented with a few different wandTvalues and found that slight variations do not producevery different results thus we report the best-performing configurations above.Results are presented in table 1. None of the previous works achieves extrapolation on any of the tasks.RPE [ 13] maintains 90+% accuracy up to 20 digits but does not go beyond. Vanilla Transformer andRoPE could interpolate, achieving 100% accuracy for 6-digit inputs, for all the tasks. ALiBi does noteven interpolate. Its accuracies drop to near 0s on all tasks beyond 3 or 4 digits (figure 4).On the other hand, our solutions (windowed attention biasing + CPI) achieve complete lengthgeneralization on all tasks, maintaining 100% accuracy up to 60 digits. Unary tasks ( Successor andParity ) appear to be not relying on any positional embedding at all once the windowed attentionbiasing is in place, which is also robust against possible perturbation of any positional encoding.For binary tasks ( Addition andN×1), on the other hand, there appears to be some bad interactionbetween the original sinusoidal positional encoding and windowed attention biasing. Both the originalsinusoidal positional encoding and +w= 1(sinusoidal positional encoding with windowed bias)configurations only achieve interpolation but not extrapolation. Windowed biasing without anypositional encoding at all (NoPE+ w= 1) results in a slightly imperfect generalization for both binarytasks.For the Parity task, we list results from the vanilla Transformer attacking it both as a classificationproblem (outputting 0 or 1), and as a sequence-to-sequence problem (+ scratch pad). Neither worksvery well. The classification performance is close to random guess while the sequence-to-sequenceresults are much worse, with accuracy dropping to near zero at length 10. We believe both areattributed to the limitation of Hahn [11]. Even with a scratch pad, without any attention bias, thegeneration of each intermediate bit still depends on all the tokens of the input sequence that areprocessed previously. Furthermore, since obtaining the correct final result depends on all intermediatebits being correct, the task is actually harder due to the compounding-of-errors effect.Our results complement that of Zhou et al. [28], which achieves mild generalization (from 30 digitsto 50) using both scratch pad and index hints, as well as allowing weights to include ±∞. Thus theempirical results confirm that using scratch pad along cannot solve Parity .13A1,1A1,2A1,3A2,1A2,2A2,3A3,1A3,2A3,3d0 d1 d2···d−1d−2...(a) Diagonals.A1,1A1,2A1,3A2,1A2,2A2,3A3,1A3,2A3,3d0d1d2··· d−1 d−2...(b) Anti-diagonals.A1,1A1,2A1,3A2,1A2,2A2,3A3,1A3,2A3,3d0 d1 d2···(c) Verticals.Figure 9: Examples of the different directions ABC explores.D Attention Bias Calibration (ABC)Letm×nbe the dimensions of the attention matrix of a model that has interpolated and M×Nthe dimensions that we would like to extrapolate to. It should hold that m < M andn < N . ABCproceeds in the following steps:1. Training for Interpolation: First we train a vanilla Transformer model Tinton the dataset Sintuntil it is capable of interpolation. By this point, the accuracy of Tintshould be near perfect. Thenwe useTintto decode a random subset of training samples Sgen⊂RSintand extract the attentionweights. Because this process is identical for all heads, to simplify notation, we omit their indices.Letxk[i]be the embedding vector for the i-th token in sample k, the attention matrix is extracted asAki,j=xk[i]WQWK⊤xk[j]⊤whereWQ,WKare parameter matrices in the last decoder layer of model Tint.2. Attention Biases Computation: We then average the attention weights for all data in Sgen: ̄A=1|Sgen||Sgen|Xk=1AkThe next steps average attention weights along a number of lines within the elements of the matrix andextend them along those particular directions. We observe that attention patterns manifest themselvesalong lines of the attention matrix and these are the directions we expand them. Theoretically, wecould explore any direction but empirically we find it suffices to only try the diagonal, the anti-diagonal, and the vertical lines. Figure 9 visualizes the said directions, with line sums annotated onthe sides.For all directions we consider, let lbe the set of elements on a line, we perform the following steps:2.1. Averaging across Lines:dl=1|l|X(i,j)∈l ̄Ai,jThis step effectively “summarizes” each line into a single value.2.2. Bias Matrix Extension: Next we extend ̄Ainto any arbitrary size ̃A∈RM×Nvia: ̃Ai,j=dropoff (dl−dmax),iflexists in ̄A−inf, otherwise(1)where dmax is the maximum value of dl’s among all the lines of ̄A, anddropoff (x) =x, ifx >threshold−inf,otherwiseWhat this process does is actually very simple: for the elements along the extension of existing linesof ̄A, it first subtracts dmax fromdl, then cuts off at a threshold . Elements not on the extensions of14 ̄A’s lines will be set to −inf. For our task, the dropout threshold is set to κσ+μ, where σandμarethe standard deviation and the mean of all the dl’s, respectively, and κis an empirically determinedfactor. We set κ= 4.5 and 0.87 for cross and self attention, respectively. This results in very strictthresholds, meaning that it only preserves really strong patterns. For other tasks where patterns arenot that obvious, a softer threshold value or even no dropout may be used.2.4. Finalization: The final bias matrix ̃Ais obtained by performing an element-wise max operationamong the matrices from equation 1 across all directions. We then repeat for each of the heads,equipping them with independent biases. If the final bias matrix consists of only −inf’s, meaningthat no pattern is picked up, we replace every −infwith0, effectively leaving it “transparent”.The complete and detailed algorithm is presented in appendix F.3. Re-training with Attention Biases: After the attention biases for each head have been constructed,we train another model on the same input sequences Sintwith the constructed attention biases addedto the attention weights.Ai,j=xiWQWK⊤x⊤j+ ̃Ai,j, ̃A∈RM×N(2)Note that in this work the bias matrices are obtained from the last decoder layer and applied to alllayers during re-train. More flexible configurations such as per-layer bias could work better for morecomplex tasks.D.1 ResultsA prerequisite of ABC is that the vanilla Transformer must be able to train to interpolate. Amongthe tasks we study, as discussed in section 2, Parity is apparently a failure. Thus we implement thevanilla Transformer (with sinusoidal positional encoding), ALiBi, RoPE, and ABC, and test on therest of the tasks. Note that we do not use the input alignment method developed for ABS in section C.The inputs to the model are in their “natural” form such as 0123 + 0748 →1780 .The accuracy vs. input length curves of different models on Successor andAddition have beenplotted in figure 1 at the beginning of this paper. The overall performance on all tasks is summarizedin table 3. We observe that ABC performs vastly superior to other models across all tasks, achievingnear-perfect accuracies up to 60 digits.Figure 10 and 11 visualize the cross attention bias matrices, one for each head, learned by ABC,forAddition andN×1, respectively. Since the most meaningful attention activities happen incross-attention, where the model is attending to the input sequence, we do not show self-attentionbiases. Each color map is plotted using a colorbar scaling of [ minh( ̃Ah),max h( ̃Ah)] for eachindividual head. Head bias with a small variance will result in a “transparent” bias matrix with all 0’safter drop-off, in which case the 0’s are painted black. Note that addition is a binary operation sothe input length is twice the output sequence thus the matrices in figure 10 are rectangles instead ofsquares.Figure 10: ABC cross attention bias for AdditionA few interesting patterns emerge. First, since the model generates output tokens in a reversed order,most of the open elements are along the anti-diagonal direction for both tasks. Second, there is a cleardivision of labor among the heads, which is consistent with the findings in C.3. More specifically, inAddition , heads 1,4,7attend to the first operand, while the remaining heads attend to the second.InN×1, most heads attend to the multi-digit number and the multiplication sign while one of theheads, head 4, attends to the single-digit operand. Note that there are vertical lines in heads 1, 3,and 7 as well. Third, the different patterns show that our bias generation process is effective: the5An encoder-only architecture with shared layers.15Table 3: Extrapolation results measured as percent accuracy (%). Numbers in bold show the bestaccuracies achieved for the corresponding input length limit.Length (Number of Digits)Task Model 6 10 20 60Vanilla 100.0 0.0 0.0 0.0Successor ALiBi 1.3 0.0 0.0 0.0RoPE 100.0 0.0 0.0 0.0ABC 100.0 100.0 100.0 100.0Vanilla 100.0 0.0 0.0 0.0ALiBi 0.0 0.0 0.0 0.0Addition RoPE 100.0 0.0 0.0 0.0RPE∗100.0 99.9 21 .3 N/AABC 100.0 100.0 99.9 99.8Vanilla 100.0 0.0 0.0 0.0N×1 RoPE 100.0 0.0 0.0 0.0ABC 100.0 100.0 100.0 100.0* Data taken from Jelassi et al. [13].5Figure 11: ABC cross attention bias for N×1anti-diagonal and vertical patterns are learned by searching the corresponding directions. Note thatthere is an empty bias consisting of all 0s in 11 (head 5). This indicates that ABC did not pick up anypatterns in that head.Running Time . ABC requires a retraining stage. However, with the help of attention bias masks,this stage converges very fast. We observe that the time needed to retrain the model is only 1/100 to1/10 of the time for training the model to interpolate.E Connections to Other SchemesIt turns out that ABC has close ties to other schemes of manipulating attention. We elaborate on twoin the following.E.1 ABC as a Generalized RPEThe relative positional encoding (RPE) of Shaw et al. [22] has been shown to be a very robustpositional encoding and the foundation of many other variants [ 10]. Shaw et al. [22] biases theattention at two places: (1) when computing the dot-product between query and key; and (2) whenproducing the weighted sum of value vectors. (2) has been shown to be not very useful [ 22]. Let xibe the embedding vector of the i-th token, (1) is implemented as follows:eij=(xiWQ)(xjWK+aKij)⊤√dkaKij=wclip(j−i,k).16w0w1w2w−1w0w1w−2w−1w0d0d1d2d−1d0d1d−2d−1d0Figure 12: Factors determining bias weights in RPE (left) and ABC (right).where w’s are a set of learned vectors and the bias vector aKijis selected from the set by a clippedindexing scheme: clip(x, k) = max( −k,min(k, x)). That is, tokens more than kunits from thecurrent query token will be clipped to k. Note that the selection of wvector depends solely on therelative distance between the query token iand the key token j.Both RPE and ABC bias the attention matrix. In the case of RPE, this is done by a vector insidethe dot-product, whereas ABC achieves this with a scalar bias on the exterior. If we view elementsin the bias matrices and which parameter determines each of them, then we can see the remarkablesimilarities between RPE and ABC. Figure 12 shows a comparison between attention bias matricesof RPE and ABC for the case extending along the diagonal. ABC averages along each of the k-diagonals at step 2.1 during its procedure. Thus for query iand key j, the bias is dj−i. The indexingscheme is exactly the same as that of RPE. And there is an implicit clipping too: for an attentionmatrix of dimensions m×nwithm≤n, the set of possible kvalues for valid k-diagonals are{−(m−1),−(m−2), . . . ,−1,0,1, . . . , (n−1)}, a total of m+n−1. When extending to M×N,any elements outside those lines are set to −inf. Effectively, this is an asymmetric clipping function:clip(j−i, m−1, n−1).E.2 ABC and LoRALow-Rank Adaptation, or LoRA [ 12] is a prevailing method for fine-tuning LLMs for domainadaptation. LoRA freezes the pre-trained model weights and implements trainable weights as theproducts of low-rank matrices for each layer of the Transformer architecture, greatly reducing thenumber of parameters that need to be trained for downstream tasks. Interestingly, LoRA also usesadditive components to adapt the attention matrices. If LoRA is applied to the attention matricesWQandWK, the attention weights becomeAi,j=xi(WQ+ ∆WQ)(WK+ ∆WK)⊤x⊤j (3)where ∆WQand∆WKare implemented as the products of two low rank matrices which areobtained via training.Again, comparing equations 2 and 3, it is clear that both ABC and LoRA bias the attention weights.The difference between ABC from both RPE and LoRA is that the latter two learn their biases fromtraining while ABC computes it from decoding a selected set of “good” inputs. How the differenceleads to different model performances on practical tasks is an interesting research question.17F Algorithm for ABCAlgorithm 1 Attention Bias Calibration (ABC) for non-negative ∆6Input :Ain: The attention tensor with dimensions [H, m, n ], where hrepresents the number of heads andm,nrepresents the number of rows and columns in each attention matrix, respectively.M, N : The dimensions of the output bias matrixD: A set of tuples (1,∆). It represents the set of all directions we want to search for patterns.Output : ̃A, a tensor with the dimensions [H, M, N ], representing the bias matrix for each head.forh= 1toHdofor(1,∆)∈Ddo{Iterate Directions}fori= 1toMdoforj= 1toNdowhile k+i≤mandk∆ +j≤n, k∈Zdo ̃Atmp[h][(1,∆)][i][j]+ =Ain[h][k+i][k∆ +j]size+ = 1end while ̃Atmp[h][(1,∆)][i][j]/=size {Average diagonals (if size̸= 0)}end forend forfori←1toMdoforj←1toNdo ̃Atmp[h][(1,∆)][i][j]← ̃A[h][i][j]−max( ̃A){Normalize}end forend forfori←1toMdoforj←1toNdo ̃Atmp[h][(1,∆)][i][j]←dropout ( ̃A[h][i][j]){Dropout}end forend forend forfori←1toMdoforj←1toNdo ̃A[h][i][j]←max( ̃Atmp[h][(1,∆)][i][j], ̃A[h][i][j){Merge directions}end forend forend forreturn ̃A6The algorithm for negative ∆is identical except that, before invoking the same procedure, we translate AinN−n+ 1elements to the right so that the top-right corners of Ainand ̃Aalign.18 |
D83tiHiNfF | Formal Theorem Proving by Rewarding LLMs toDecompose Proofs HierarchicallyKefan Dong Arvind Mahankali Tengyu MaStanford University{kefandong,amahank,tengyuma}@stanford.eduAbstractMathematical theorem proving is an important testbed for large language mod-els’ deep and abstract reasoning capability. This paper focuses on improvingLLMs’ ability to write proofs in formal languages that permit automated proofverification/evaluation. Most previous results provide human-written lemmas tothe theorem prover, which is an arguably oversimplified setting that does not suf-ficiently test the provers’ planning and decomposition capabilities. Instead, wework in a more natural setup where the lemmas that are directly relevant to thetheorem are not given to the theorem prover at test time. We design an RL-basedtraining algorithm that encourages the model to decompose a theorem into lemmas,prove the lemmas, and then prove the theorem by using the lemmas. Our rewardmechanism is inspired by how mathematicians train themselves: even if a theoremis too challenging to be proved by the current model, a positive reward is still givento the model for any correct and novel lemmas that are proposed and proved in thisprocess. During training, our model proposes and proves lemmas that are not inthe training dataset. In fact, these newly-proposed correct lemmas consist of 37.7%of the training replay buffer when we train on the dataset extracted from Archiveof Formal Proofs (AFP). The model trained by our RL algorithm outperforms thattrained by supervised finetuning, improving the pass rate from 40.8% to 45.5% onAFP test set, and from 36.5% to 39.5% on an out-of-distribution test set.1 IntroductionThe reasoning abilities of large language models (LLMs) are a significant marker of artificialintelligence and critical for complex and safety-sensitive applications. Yet recent studies highlightthe limited performance of LLMs on reasoning tasks (e.g., Mündler et al. [2023], Valmeekam et al.[2023] and references therein).Automated theorem proving by LLMs is an excellent reasoning task that abstracts away the needfor numerical manipulation or tool use (e.g., using a calculator) and allows for precise correctnessevaluation with an automatic verifier (such as Isabelle [Nipkow et al., 2002] and Lean [De Mouraet al., 2015]), even without ground truth. Thanks to tools such as Sledgehammer [Paulsson andBlanchette, 2012] that can automatically complete low-level details, the granularity of formal proofsis similar to natural language proofs (see Fig. 1 Left for an illustrative example). Note that verifyinga proof is fundamentally much easier than generating the proof.1Thus, learning to prove theoremsfrom verifiers’ supervision is reminiscent of weak-to-strong generalization [Burns et al., 2023].Previous results in this area largely focus on setting where the theorem prover can use all the lemmasin the formal proof library, including those particularly written to decompose a specific theorem’s1The former is in P whereas the latter is undecidable in the worst case Turing et al. [1936], Church [1936].38th Conference on Neural Information Processing Systems (NeurIPS 2024).Figure 1: Left: in the dashed callout block, we show an example of an Isabelle proof and itsexplanation in natural language. Right : an example of a proof tree. The two child nodes correspondto the two new lemmas proposed in the proof of the root node.proof [Jiang et al., 2021, Polu and Sutskever, 2020] This setting arguably oversimplifies the problemand doesn’t sufficiently test the models’ planning and decomposition capabilities, and it is unclearwhether the resulting models can be used to prove new theorems from scratch when such lemmas arenot available at test time. Instead, we work in a more natural setup where the theorem prover needsto propose and prove lemmas to decompose the proof hierarchically itself (see Section 2 for moredetails). In Section C.2, we demonstrate that this task is indeed much more challenging.Figure 2: Illustration of our algorithm Proof Tree Generator ( ProD )-RL. In step 2b, locally correctmeans that the statement is proved correctly using the proposed lemmas, and globally correct meansthat all the proposed lemmas are also proved correctly. As an important feature of our algorithm,even if Theorem 1 is not proved by the model because the proof to Lemmas 1 is incorrect, we stilltrain on the correct lemma (Lemma 2) by setting its reward r= 1.In addition, most existing proof-generation algorithms leverage the formal verifier by (a) providingthe verifier’s current proof state to the LLMs step-by-step, and (b) using best-first search algorithmssuch as A⋆to build a multi-step proof from many LLM-generated steps [Jiang et al., 2022a, Hanet al., 2021]. The major challenge of these method is the high computation cost incurred in both (a)and (b) because (a) requires re-running LLMs on a different context that contains a long verifier’sproof state at every step, and (b) the search is expensive. Consequently, the best method along thisline of research requires more than 1k GPU days with A100s to train a model with 600M parameters[Lample et al., 2022], whereas our method only takes less than 36 GPU days to train a 7B model.To address these issues, we design a method, called Proof Decomposer ( ProD ), that uses LLMs topropose and prove new lemmas hierarchically and generate whole proofs directly without searching.We augment the vanilla formal proof syntax so that the model can propose new lemmas by statingtheir statements during the proof, and then prove the lemmas separately. Hence, a complete proof ofa theorem will form a tree structure where the child nodes are the lemmas proposed in the proof ofthe parent node (Fig. 1 Right), and the theorem is proved only if all the proofs in the tree are correct.2We train our LLMs with reinforcement learning (RL) in a way that somewhat imitates the mathemati-cian’s process: we reward correct partial proofs (i.e., proof sub-trees) even if the original theorem (i.e.,the root node) is not proved entirely. Since our model can generate and prove novel lemmas duringtraining, it could still make progress even if the model is given a very challenging theorem. This issimilar to mathematicians publishing papers on interesting lemmas that do not solve the original goal.We illustrate our algorithm in Fig. 2, and defer the details to Section B.2.We test our model ProD -RL by generating proof trees on holdout theorems that the model is nevertrained on, and we show that our model ProD -RL outperforms several other baselines. Comparedwith the supervised fine-tuned (SFT) model on the same training set, our model improves the passrate from 40.8% to 47.5% on the holdout test set, whereas vanilla reinforcement learning withoutlemma proposals during training does not improve the corresponding SFT model (see Section 3.2).This is partly because our method encourages the model to propose and prove additional lemmas —in fact, 37.7% of the proved lemmas during training are not in the dataset. As a result, the model stillimproves even if it’s already fine-tuned on the same dataset with human-written ground-truth proofs.2 SetupConditional proofs. We use the term conditional proof to denote a proof that, in addition to thestandard formal proof syntax, can propose new lemmas by enclosing their statements by <invoke>and</invoke> tokens (examples shown in the blue boxes of Fig. 1). A conditional proof has thefollowing format:t1<invoke> l1</invoke> t2<invoke> l2</invoke> ···tk<invoke> lk</invoke> tk+1where t1,···, tk+1denote proof segments in the original formal proof syntax (see e.g., Fig. 1, prooftexts in black), and l1,···, ltdenote proposed lemma statements (see e.g. Fig. 1, proof texts in red).2Proof tree nodes. With the proposed lemmas, a complete proof forms a tree structure (as shown inFig. 1). A node in a proof tree is a tuple of premises, context, a theorem statement, and a conditionalproof. Premises represent the lemmas that are treated as common knowledge, which are typically notdirectly relevant to the proof. We allow the model to directly use them in the proof so that it does nothave to repetitively prove all the fundamental facts, such as properties of continuous functions andnatural numbers. Context represents the necessary contents to prepare the theorem statement, such asthe definition of specific objects/functions. We use the context as part of the prompt for the LLMs togenerate proofs, and to prepare the proof verifier to check the generated proofs.Correctness of conditional proofs and proof trees. A proof tree node nwith conditional prooft1<invoke> l1</invoke> t2<invoke> l2</invoke> ···tk<invoke> lk</invoke> tk+1islocallycorrect if, after adding l1, . . . , l kto the set of premises, t1. . . tk+1, is a proof to the statement of nacceptable by the formal verifier under the context of n.We consider a proof tree valid if, for every node, each of its child nodes corresponds to one proposedlemma and shares the same premises and context with its parent node. A tree node nisgloballycorrect with respect to a given set of tree nodes Nif we can construct a valid proof tree with root nusing the locally correct tree nodes in N. Intuitively, global correctness corresponds to the standardnotion of correctness (i.e., whether the theorem is proved), and local correctness is a weaker concept,referring to the correctness of conditional proofs assuming the proposed lemmas.Dataset construction. We construct the datasets by parsing raw proof-library files into examplesof the form (premises, context, statement, conditional proof). For any theorem s, we construct anexample where the premises are all the theorems from predecessor files. To compute the context, weiteratively remove all the theorem statement and its proof from the proof-library files if the theoremis not referred to in the remaining file contents (in other words, in the first iteration we peel off theroot nodes of the proof trees from the file, and then the nodes in the next level, etc.). Finally, toaugment the original proof of theorem sto a conditional proof with lemma proposal, for every proofsteptjthat calls lemmas lj,1, . . . , l j,nj(and these lemmas were removed from the context), we insertthe statements of these lemmas enclosed by the <invoke> and</invoke> tokens into the proof rightbefore tj. We defer additional details of our dataset construction process to Appendix A.2In this paper, we use the terms ‘lemma’ and ‘theorem’ relatively — theorem refers to the statement that weare currently focusing on, and lemma refers to the statement proposed during the proof. In other words, there isno fundamental difference between a lemma and a theorem.3We split the training and test set (AFP test) based on the dependency of the files in the proof libraryso that the examples in the training set never refer to any files in the test set. We also construct anadditional test set AFP 2023 by parsing the AFP files submitted after the knowledge cutoff date ofthe Llemma model (April 2023) to eliminate potential data leakage issues.Compared with prior works [Jiang et al., 2021, First et al., 2023], the two major differences in oursetup are the availability of lemmas from the same file and the training/test split. In Section C.2, wediscuss and test their effects in detail.3 Experiments3.1 Reinforcement learning algorithmDue to limited space, we present a sketch of our RL algorithm here and defer the details to Appendix B.Proof tree generation. To generate proof trees using an autoregressive model πθ, we need to firstfine-tune the model to follow a specific format:(a) the input xto the model πθis the concatenation of a context and a theorem statement, and(b)the expected output yof the model is a special token t0followed by a conditional proof, wheret0is either <use_invoke> or<no_invoke> , denoting whether the following conditional proofshould propose new lemmas.We let the model generate the special token t0before a conditional proof so that we can upscale theprobability of the <use_invoke> token during RL to propose more lemmas for better exploration. Us-ing the fine-tuned model, we can generate conditional proofs and complete the proof tree recursively,as shown in Alg. 1.Reinforcement learning. Our method is illustrated in Fig. 2. We start with a supervised fine-tunedmodel so that it can generate conditional proofs in the desired format. At every round, we sample abatch of examples from the training dataset and generate proof trees using the current model. Then wecall the proof verifier to test the generated proofs, and assign rewards based on the global correctnessof the conditional proofs. Finally, we train the model using REINFORCE with a replay buffer.We update the model using partial proofs (i.e., proof sub-trees) even if the original theorem fromthe dataset (i.e, the root of the proof tree) is not proved. Hence, our method can also be viewed asan instantiation of hindsight experience replay [Andrychowicz et al., 2017], where the hindsighttrajectories are correct proof sub-trees.3.2 Main resultsThis section reports the models’ pass@k performance on AFP test and AFP 2023 datasets. For agiven dataset, pass@k measures the percentage of the theorems proved by at least one of kproofsgenerated by the model. Recall that the theorems in the test set are selected based on the dependenciesof the AFP files and are not used in any proofs from the training set (see Section C.2 for more details).In other words, we do not train the model on test datasets using reinforcement learning. Instead, wetest whether ProD-RL is a fundamentally better model when tested on new theorems.Table 1: Pass@16 of different models on AFP test sets. Our model with reinforcement learning(ProD-RL) improves upon the SFT model and outperforms baseline methods.Test setSFT w/olemma proposalRL w/olemma proposal ProD-SFT ProD-RLAFP test 43.4 42.4 40.8 45.5AFP 2023 39.4 37.7 36.5 39.5As a baseline method, we train a model on a variant of the SFT dataset where all lemmas are kept inthe context. It can be seen as a reproduction of First et al. [2023] with a slightly different way to obtainthe context — First et al. [2023] includes all the file content before the statement of the theorem,4whereas we only keep the statement of previous lemmas. We also run reinforcement learning on thesame RL dataset as our method (see Section E.2 for more details).Figure 3: Left: The pass rate of different models on AFP test. Our RL model improves upon the SFTmodel whereas without proposing new lemmas (RL w/o lemma proposal), we do not observe anyimprovement. Right :The pass rate of theorems in AFP test grouped by the depth of their ground-truthproof. Grey bars represent the proof generated by the model SFT w/o lemma proposal, and thecolored bars represent the proof trees generated by ProD-RL with various depths.Table 1 shows the performance of our model on the AFP test sets. For a fair comparison, the baselinemodels are tested in our new setup without human-written lemmas. Note that the SFT model withoutlemma proposal outperforms the SFT model with lemma proposal. We hypothesize that it is becauseproposing correct lemmas itself is challenging, which distracts the model from learning to generatedirect proofs. However, RL with lemma proposal improves the SFT model and outperforms othersbecause the model proposes and proves additional lemmas that are not in the training dataset, whereasRL without lemma proposal yields no improvement.In Fig. 3 (Left), we plot the pass rates with different numbers of samples per theorem on both AFPtest and AFP 2023. Fig. 3 shows that on AFP test, the ProD -RL model significantly improves uponbaseline methods as well as the ProD -SFT. However, on AFP 2023, the improvement is minor overSFT w/o lemma proposal, while ProD -RL still outperforms ProD -SFT. The results suggest thatthe baseline methods are more robust to heavier distribution shifts, while our method has a largerimprovement when the test distribution is closer to the training distribution.In Fig. 3 (Right), we decompose the proved theorems by the depth of their ground-truth proofs (shownon the x-axis) and the depth of generated proof trees (indicated by color). When there are multiplecorrect proof trees, we plot the one with the maximum depth. As a comparison, we also plot thesuccess rates of the proofs generated by the RL model trained w/o lemma proposal. Fig. 3 (Right)shows that the improvement of ProD -RL mostly comes from proving theorems with low-to-mediumdifficulty where the depth of the ground-truth proof is at most 2. For more complex theorems, bothmodels’ pass rates are low and the improvement of our method is not significant, meaning that theyare currently beyond the models’ capability.Due to limited space, we defer the case study for the generated lemmas to Appendix C.4.4 ConclusionIn this paper, we design a reinforcement learning algorithm that encourages LLMs to write formalproofs by decomposing them hierarchically. We also design a more natural testing setup by removingthe directly relevant lemmas from the context. We show that, by proposing and proving new lemmasthat are not present in the training dataset, the resulting model ProD -RL outperforms or achievescomparable performance to baseline methods trained on the same dataset.5AcknowledgmentThe authors would like to thank Neil Band, Zhizhou Ren, and Yuanhao Wang for their helpfuldiscussions. The authors would also like to thank the support from NSF CIF 2212263.ReferencesM. Andrychowicz, F. Wolski, A. Ray, J. Schneider, R. Fong, P. Welinder, B. McGrew, J. Tobin,P. Abbeel, and W. Zaremba. Hindsight experience replay. In Proceedings of the 31st InternationalConference on Neural Information Processing Systems , pages 5055–5065, 2017.Z. Azerbayev, H. Schoelkopf, K. Paster, M. Dos Santos, S. McAleer, A. Jiang, J. Deng, S. Biderman,and S. Welleck. Llemma: An open language model for mathematics. In The 3rd Workshop onMathematical Reasoning and AI at NeurIPS’23 , 2023.C. Burns, P. Izmailov, J. H. Kirchner, B. Baker, L. Gao, L. Aschenbrenner, Y . Chen, A. Ecoffet,M. Joglekar, J. Leike, et al. Weak-to-strong generalization: Eliciting strong capabilities with weaksupervision. arXiv preprint arXiv:2312.09390 , 2023.A. Church. A note on the entscheidungsproblem. The journal of symbolic logic , 1(1):40–41, 1936.K. Cobbe, V . Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton,R. Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 ,2021.L. De Moura, S. Kong, J. Avigad, F. Van Doorn, and J. von Raumer. The lean theorem prover (systemdescription). In Automated Deduction-CADE-25: 25th International Conference on AutomatedDeduction, Berlin, Germany, August 1-7, 2015, Proceedings 25 , pages 378–388. Springer, 2015.E. First, M. Rabe, T. Ringer, and Y . Brun. Baldur: Whole-proof generation and repair with large lan-guage models. In Proceedings of the 31st ACM Joint European Software Engineering Conferenceand Symposium on the Foundations of Software Engineering , pages 1229–1241, 2023.J. M. Han, J. Rute, Y . Wu, E. Ayers, and S. Polu. Proof artifact co-training for theorem proving withlanguage models. In International Conference on Learning Representations , 2021.D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt.Measuring mathematical problem solving with the math dataset. In Thirty-fifth Conference onNeural Information Processing Systems Datasets and Benchmarks Track (Round 2) , 2021.A. Q. Jiang, W. Li, J. M. Han, and Y . Wu. Lisa: Language models of isabelle proofs. In 6thConference on Artificial Intelligence and Theorem Proving , pages 378–392, 2021.A. Q. Jiang, W. Li, S. Tworkowski, K. Czechowski, T. Odrzygó ́ zd ́ z, P. Miło ́s, Y . Wu, and M. Jamnik.Thor: Wielding hammers to integrate language models and automated theorem provers. Advancesin Neural Information Processing Systems , 35:8360–8373, 2022a.A. Q. Jiang, S. Welleck, J. P. Zhou, W. Li, J. Liu, M. Jamnik, T. Lacroix, Y . Wu, and G. Lample.Draft, sketch, and prove: Guiding formal theorem provers with informal proofs. arXiv preprintarXiv:2210.12283 , 2022b.G. Lample, T. Lacroix, M.-A. Lachaux, A. Rodriguez, A. Hayat, T. Lavril, G. Ebner, and X. Martinet.Hypertree proof search for neural theorem proving. Advances in neural information processingsystems , 35:26337–26349, 2022.H. Lightman, V . Kosaraju, Y . Burda, H. Edwards, B. Baker, T. Lee, J. Leike, J. Schulman, I. Sutskever,and K. Cobbe. Let’s verify step by step. arXiv preprint arXiv:2305.20050 , 2023.I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In International Conference onLearning Representations , 2018.M. Mikuła, S. Antoniak, S. Tworkowski, A. Q. Jiang, J. P. Zhou, C. Szegedy, Ł. Kuci ́nski, P. Miło ́s,and Y . Wu. Magnushammer: A transformer-based approach to premise selection. arXiv preprintarXiv:2303.04488 , 2023.6N. Mündler, J. He, S. Jenko, and M. Vechev. Self-contradictory hallucinations of large languagemodels: Evaluation, detection and mitigation. In The Twelfth International Conference on LearningRepresentations , 2023.T. Nipkow, M. Wenzel, and L. C. Paulson. Isabelle/HOL: a proof assistant for higher-order logic .Springer, 2002.L. C. Paulsson and J. C. Blanchette. Three years of experience with sledgehammer, a practicallink between automatic and interactive theorem provers. In Proceedings of the 8th InternationalWorkshop on the Implementation of Logics (IWIL-2010), Yogyakarta, Indonesia. EPiC , volume 2,2012.S. Polu and I. Sutskever. Generative language modeling for automated theorem proving. arXivpreprint arXiv:2009.03393 , 2020.S. Polu, J. M. Han, K. Zheng, M. Baksys, I. Babuschkin, and I. Sutskever. Formal mathematicsstatement curriculum learning. arXiv preprint arXiv:2202.01344 , 2022.Z. Shao, P. Wang, Q. Zhu, R. Xu, J. Song, M. Zhang, Y . Li, Y . Wu, and D. Guo. Deepseek-math: Pushing the limits of mathematical reasoning in open language models. arXiv preprintarXiv:2402.03300 , 2024.G. Tsoukalas, J. Lee, J. Jennings, J. Xin, M. Ding, M. Jennings, A. Thakur, and S. Chaudhuri.Putnambench: Evaluating neural theorem-provers on the putnam mathematical competition. arXivpreprint arXiv:2407.11214 , 2024.A. M. Turing et al. On computable numbers, with an application to the entscheidungsproblem. J. ofMath , 58(345-363):5, 1936.K. Valmeekam, M. Marquez, S. Sreedharan, and S. Kambhampati. On the planning abilities of largelanguage models-a critical investigation. Advances in Neural Information Processing Systems , 36:75993–76005, 2023.S. Welleck, J. Liu, R. Le Bras, H. Hajishirzi, Y . Choi, and K. Cho. Naturalproofs: Mathematicaltheorem proving in natural language. In Thirty-fifth Conference on Neural Information ProcessingSystems Datasets and Benchmarks Track (Round 1) , 2021.Y . Wu, A. Q. Jiang, W. Li, M. Rabe, C. Staats, M. Jamnik, and C. Szegedy. Autoformalization withlarge language models. Advances in Neural Information Processing Systems , 35:32353–32368,2022.H. Xin, H. Wang, C. Zheng, L. Li, Z. Liu, Q. Cao, Y . Huang, J. Xiong, H. Shi, E. Xie, et al.Lego-prover: Neural theorem proving with growing libraries. arXiv preprint arXiv:2310.00656 ,2023.C. Zheng, H. Wang, E. Xie, Z. Liu, J. Sun, H. Xin, J. Shen, Z. Li, and Y . Li. Lyra: Orchestrating dualcorrection in automated theorem proving. arXiv preprint arXiv:2309.15806 , 2023.K. Zheng, J. M. Han, and S. Polu. minif2f: a cross-system benchmark for formal olympiad-levelmathematics. In International Conference on Learning Representations , 2021.7A Dataset ConstructionIn this section, we present the dataset construction process in detail.Recall that we construct the datasets by parsing raw proof-library files into examples of the form(premises, context, statement, conditional proof). Hence, we first segment each of the files into blocksc1s1p1···clslplwhere the siare theorem statements, the piare the corresponding proofs, andtheciare the file contents between proofs, such as object definitions and local assumptions. Next,we build proof trees from the segmented file by iteratively removing ( si,pi) pairs from the file ifthe theorem siis not referred to in the remaining file contents (in other words, in the first iterationwe peel off the root nodes of the proof trees from the file, and then the nodes in the next level, etc.).Note that some theorems cannot be peeled off by this process because they are referred to in somefile content cj(e.g., lemmas used to instantiate local objects). We use Ttreeto denote the subset oftheorems peeled off during the process.For every theorem si, we construct an example where the context is the concatenation of {cj:j < i}and{sj:j < i, s j̸∈ T tree}in the order they appear in the file. That is, we exclude the lemmas peeledoff after we peel off si— these are the lemmas that directly contributes to the proof of si.To construct the conditional proof of theorem si, we add the proposed lemma statements tothe original proof pi. In particular, we split the proof piinto steps t1,···, tkusing the for-mal language parser. Then for every step tjthat uses lemmas lj,1, . . . , l j,njfromTtree, we in-sert the statements of these lemmas enclosed by the <invoke> and</invoke> tokens, denoted byζj=<invoke> lj,1</invoke> ···<invoke> lj,nj</invoke> , into the proof right before tj. In otherwords, the conditional proof is the concatenation ζ1t1···ζktk. Similar to Jiang et al. [2022a], weuse Sledgehammer, a premise selection tool that automatically searches for proofs to the current goal,to replace proof steps that are originally generated by it (see Section E.3 for more details) so that themode can focus less on the tedious low-level details.The premises are all the theorems from predecessor files, which are typically not directly relevantto the theorem (otherwise they will be stated in the same file). Theorems in the premises set can beused directly in the proof, or they can be selected by Sledgehammer to search for proof steps. In ourimplementation, the premises are implicitly defined by the dependency graphs of the files.For every example in the training dataset, if the conditional proof proposes at least one lemma, wealso add one augmented example by moving the proposed lemmas from the conditional proof to thecontext — this augmented example does not propose new lemmas and is always locally correct.B MethodsIn this section, we first describe how to use LLMs to generate proof trees, and then introduce ourreinforcement learning method ( ProD -RL) that rewards the model to decompose proofs hierarchically.B.1 Generating proof-trees using LLMsTo generate proof trees using an autoregressive model πθ, we need to first fine-tune the model tofollow a specific format:(a) the input xto the model πθis the concatenation of a context and a theorem statement, and(b)the expected output yof the model is a special token t0followed by a conditional proof, wheret0is either <use_invoke> or<no_invoke> , denoting whether the following conditional proofshould propose new lemmas.We let the model generate the special token t0before a conditional proof so that we can upscale theprobability of the <use_invoke> token during reinforcement learning to force the model to proposemore lemmas for better exploration.We summarize our proof-tree generation algorithm in Alg. 1. Given a theorem statement sand thecorresponding context c, we first sample πθautoregressively starting with the prompt x=c s, andideally the model will output a special token t0followed by a conditional proof ρ(Line 3). Next,we parse the conditional proof ρand collect the proposed lemmas l1,···, lkfor the next round ofgeneration (Line 5). We will force the model to generate conditional proofs without proposing new8lemmas at a certain depth so that the proof tree doesn’t grow indefinitely, which can be implementedeasily by replacing the prompt with x′=c s<no_invoke> (Line 6).Algorithm 1 Generate proof trees (test time)1:Inputs: Model πθ, a set of contexts and statements G0={(ci, si)}i, maximum depth d.2:forι←0,1,···, d−1do3: Sample proofs (t0,iρi)∼πθ(· |cisi)for lemmas (ci, si)inGι, where t0,iis the tokenrepresents whether the proof should use invoke and ρiis the conditional proof.4: Pι← {(ci, si, ρi)| ∀(ci, si)∈Gι}.5: Collect proposed lemmas (a condition proof ρimight propose more than one lemma lj):Gι+1← {(ci, lj)|(ci, si, ρi)∈Pιandljis proposed in ρi}.6:Sample proofs ρi∼πθ(· |cisi<no_invoke> )for(ci, si)inGd. (▷)Truncate at depth d.7:Pd← {(ci, si, ρi)| ∀(ci, si)∈Gd}.8:Return ∪dι=0Pι.B.2 Reinforcement learning with lemma proposalOur reinforcement learning method is illustrated in Fig. 2. We start with a supervised fine-tunedmodel so that it can generate conditional proofs in the desired format. Then at every round, werandomly sample a batch of examples from the training dataset and perform the following steps.Step 1: Generate proofs. To help exploration, we generate proof trees using a modified version ofAlg. 1 (shown in Alg. 2 of Appendix E.1) with the following differences.(a)For the theorems where the probability πθ(<use_invoke> |x)is among the top 25% in the batch,we will force the model to generate conditional proofs with t0=<use_invoke> . Otherwise, wesample t0according to the probability of πθ(· |x).(b)For every theorem where the model generates a conditional proof with new lemmas, we also letthe model generate another conditional proof without proposing lemmas. If any of these twoconditional proofs is globally correct, it can be used to construct proof trees for other theorems.We also mix the ground-truth conditional proofs with proofs generated during RL by generatingadditional proof trees starting with the ground-truth conditional proof using the current model πθ. Inother words, the root node of a proof tree constructed in this manner is a ground-truth conditionalproof, but its descendants are generated by the model.Step 2: Determine the reward of an example. In this step, we first check the local correctness ofeach conditional proof using the formal verifier (Step 2a in Fig. 2).In addition to the verifiers’ output, we apply two filters to help train the model: (a) we filter out triviallemma proposals — if a proposed lemma can directly imply the theorem (e.g., if the proposed lemmahas exactly the same statement as the theorem), we will simply discard this example, and (b) weremove unnecessary lemma proposals — if the conditional proof is still correct after removing all thereferences to a proposed lemma, we will delete this lemma from the conditional proof.Then we compute the global correctness of the generated proofs using its definition. Finally, weassign a binary reward r(c, s, ρ )to each tree node with context c, statement s, and conditional proofρbased on its global correctness (Step 2b in Fig. 2).Step 3: Update the model by REINFORCE. In this step, we will first construct a training datasetconsisting of examples with format (prompt, target, weight) from the conditional proofs collected inStep 1 and then update the model πθusing the weighted cross-entropy loss.For each generated conditional proof, we add one example to the training dataset where the prompt isthe context concatenated with the theorem statement, and the target is the conditional proof prependedby the <use_invoke> or<no_invoke> token. Note that the reward of a conditional proof depends notonly on the correctness of the conditional proof, but also on the correctness of the proposed lemmas.To reduce the variance of our gradient updates, we train a value function Vφthat predicts the expectedreward of the current policy on a given proof tree node (i.e., Vφ≈Eρ∼π(·|c,s)[r(c, s, ρ )]). The weightof an example is the product of the value function outputs on invoked lemmas multiplied with a9length penalization to prefer shorter proofs — for proof tree node with conditional proof length hand proposed lemmas l1,···, lk, the weight wof this example is w=γhQki=1Vφ(li)with discountfactor γ∈(0,1), orw= 0if the proof tree node is not locally correct.For easier implementation, we train the value function to predict two special tokens, <true> and<false> , conditioned on the context and theorem statement. Let ptbe the probability of the <true>token conditioned on the context and theorem statement, and pfthe probability of the <false> token.The output of the value function is then pt/(pt+pf).Similar to the SFT dataset, we add one augmented example by moving the proposed lemma fromthe conditional proof to the context for any locally correct conditional proof with new lemmas. Inaddition, we will use a replay buffer to stabilize the training.Remarks. We will update the model using partial proofs (i.e., proof sub-trees) even if the originaltheorem from the dataset (i.e, the root of the proof tree) is not proved. Hence, our method can alsobe viewed as an instantiation of hindsight experience replay [Andrychowicz et al., 2017], where thehindsight trajectories are correct proof sub-trees.C Experiment DetailsThis section presents our experimental results. We will first list additional experiment details(Section C.1) and then compare our setup with prior works (Section C.2.C.1 Additional experiment detailsProof verification software. We use Isabelle [Nipkow et al., 2002] as our proof verification softwaresince the proofs are declarative and human-readable without knowing the verifier’s proof state, andwe use PISA (Portal to ISAbelle, Jiang et al. [2021]) to interact with Isabelle. To check whether aproof tree node is locally correct, we import all the theorems from its premises, move each of theproposed lemmas from the conditional proof to the context, and then add a fake proof indicated bythe keyword ‘sorry’ to every lemma statement in the context (In Isabelle, ‘sorry’ will register thestatement as a fact even without any actual proof.) The remaining proof steps will follow the originalIsabelle syntax, and we can check its correctness directly. We set a 10s timeout for each proof step.Datasets. Our SFT dataset is the combination of theorems from Archive of Formal Proof3(AFP,retrieved on 2022-12-06) and Isabelle built-in files (such as HOL which contains the theorems thatdefine natural numbers, etc.). The resulting dataset contains 312k examples.To construct the test set, we parse the AFP files submitted after the knowledge cutoff date of ourpretrained model (April 2023) to eliminate possible data leakages.4The test dataset is handledsimilarly, except that we only keep the theorems in Ttree. This AFP 2023 test set contains 2k theorems.Testing setup. To measure the performance of the models, we sample kproof trees per theoremindependently on the test set and report the pass@k performance (that is, a theorem is proved if atleast one of the conditional proofs is globally correct with respect to all the generated tree nodes).When generating proofs, we use temperature 0.7 and truncate the context to only include the last 1ktokens. The proof trees are truncated at depth 2.Supervised fine-tuning. We start from the Llemma 7b model [Azerbayev et al., 2023] and fine-tunethe model for 2 epochs with the standard cross entropy loss.5On theorems from AFP, we computethe loss only on the special token and the proof, but not on the context and statement. On Isabellebuilt-in theorems, we compute the loss on the statement to help the model internalize basic facts. Weuse the AdamW optimizer [Loshchilov and Hutter, 2018] with linear warmup, constant learning rate1e-5, macro batch size 128, and context window 2048.Reinforcement learning. The dataset we use for the reinforcement learning stage is Ttree, the set oftheorems that are iteratively peeled off when parsing AFP files, which contains 109k examples. Weinitialize the model by supervised fine-tuning on a subset of AFP files, and then run RL for 20 rounds3https://www.isa-afp.org/4Here we use the archive of AFP retrieved on 2023-11-22.5In our preliminary experiments, we observe that the model overfits after 2 epoch10with a batch of 5k random examples per round. We truncate the proof tree at depth 3 and sample withtemperature 0.7 during training. We use the same hyperparameters as the SFT stage to update thepolicy πθ, We initialize the value function Vφby a Llemma 7b model fine-tuned on our SFT dataset.C.2 Comparison of our new setup with prior worksIn this section, we concretely compare our new setup with prior works [Jiang et al., 2021, First et al.,2023]. Recall that there are two main differences in how we process our dataset:(a)we split the train/test set based on file dependencies so that no theorems in the test set are referredto in the training set, whereas PISA split theorems randomly, and(b) when testing a proof, we remove certain lemmas from the context.To show that our setup is indeed more challenging, we first construct datasets formatted similarlyto those in [First et al., 2023]. Specifically, we parse the AFP files into examples using the methoddescribed in Section 2, with the only exception that all human-written lemmas are kept in the context.Then we select a subset of theorems as the test set Dw/ ltestbased on the dependency of the AFP files,so that the examples in the test set are never used by the remaining theorems (see Section E.3 formore details). Then we split the remaining examples randomly into training and validation datasets,denoted by Dw/ ltrainandDw/ lvalrespectively. We use Dw/o ltestto denote the test dataset of our setup wherethe lemmas are removed from the context. The validation dataset Dw/ lvalmimics prior works’ setup[Jiang et al., 2021, First et al., 2023], and Dw/ ltestis an interpolation between prior works’ setup and oursetup. Table 2 shows the performance of the model supervised fine-tuned on Dw/ ltrainplus all Isabellebuilt-in theorems. The results suggest that both features of our setup, removing the lemmas andsplitting the training/test set by file dependency, increase the difficulty of the task.Table 2: Pass rate on different dataset formats and partitions of the SFT model trained on the Dw/ ltrain.The validation dataset Dw/ lvalmimics the test setup of prior works, and our setup is Dw/o ltestwhere thesame model performs much worse. The results suggest that our setup is indeed more challenging.Test setupDw/ lval: w/ lemmas Dw/ ltest: w/ lemmas Dw/o ltest: w/o lemmassplit randomly split by dependency split by dependencypass@4 45.7 39.7 35.7C.3 Additional resultsThe effect of sampling temperature. In our preliminary experiments, We tune the samplingtemperature using the models trained on the AFP training sets Dw/ ltrainandDw/o ltrain(that is, training setsconstructed with and without helper lemmas, respectively). We test the model on the AFP test setDw/o ltestwith different temperatures to decide the best choice for testing our models. Fig. 4 shows theperformance of the SFT model without lemma proposal using different sampling temperatures. Weconclude that the temperature 0.7 is best for testing both models.Results on miniF2F. We observe that the improvement of ProD -RL over SFT w/o lemma proposal issignificant only when the test distribution is close to the training distribution. On miniF2F [Zhenget al., 2021] where the theorems are very different from theorems in the training dataset, ProD -RLperforms worse than SFT w/o lemma proposal, as shown in Table 3. We also observe that whentested on miniF2F theorems, our model failed to propose meaningful lemmas. This may be becauseproving to miniF2F-level mathematics questions typically does not require hierarchical decomposition.Therefore we leave it as future work to extend our methods to other domains such as miniF2F andPutnamBench [Tsoukalas et al., 2024].C.4 Case study of proposed lemmasIn this section, we manually examine the new lemmas proposed during RL and list the typical caseswhere new lemmas are proposed. Note that many AFP files focus on complex concepts and results inmathematics or computer science, making manual examination challenging. Therefore, examples inthis section are biased toward easier theorems.11Figure 4: Pass rate of the SFT model without lemma proposal tested with different samplingtemperature. We observe that lower temperature leads to better performance with 1 sample pertheorem, and mildly larger temperature have better performance with more samples.Table 3: Pass@64 of different models on miniF2F. ProD -RL performs worse than SFT w/o lemmaproposal.Test setSFT w/olemma proposalRL w/olemma proposal ProD-SFT ProD-RLminiF2F valid 46.3 40.6 44.7 41.4miniF2F test 40.6 38.9 39.3 39.3Case 1: Model decomposes theorems into lemmas. In this case, the model correctly decom-poses the proof of a theorem into several lemmas. The following example belongs to the AFP fileList-Infinite , which focuses on lists and sets with infinite size. The theorem (Line 1) statesthat the cardinality of the set A∪ {x}equals |A|ifx∈A, or the successor integer of |A|otherwise(i.e.,|A|+ 1for finite Aand∞otherwise). During the proof (Lines 2-4), our model proposes twolemmas in Lines 2 and 3 to deal with the two possible cases ( x̸∈Aorx∈A) respectively. Finally,Line 4 proves the original theorem using the two proposed lemmas.1 theorem icard_insert_if: "icard (insert x A) = (if x ∈A then icard A else eSuc (icard A))"2<invoke> lemma icard_insert_disjoint: "x /∈A=⇒icard (insert x A) = eSuc (icard A)" </invoke>3<invoke> lemma icard_insert_eq: "x ∈A=⇒icard (insert x A) = icard A" </invoke>4 by(simp add: icard_insert_eq icard_insert_disjoint)Case 2: The proposed lemma is a rephrase of an existing lemma. We also find that someproposed lemmas are rephrases of existing lemmas in the training dataset. Although in this case, theproposed lemma is not fundamentally useful for proving new theorems, they can be viewed as dataaugmentation to enhance the models’ performance. In the following example, the model produces alemma equivalent to one in an AFP file. Line 1 shows the original form of the lemma stated in theAFP file, while Lines 2-4 show an equivalent lemma proposed by our model during RL.1lemma icard_mono: "A ⊆B=⇒icard A ≤icard B"2lemma icard_mono:3assumes "A ⊆B"4shows "icard A ≤icard B"Case 3: The proposed lemma is novel but not useful to the original proof. We also observe caseswhere the proposed lemma is novel, but the conditional proof of the theorem is incorrect. In thefollowing example, the proposed lemma states that the shortest path between vertices u, vis a lowerbound for the length of any path that connects u, v(in an unweighted and undirected graph):1lemma shortest_path_lower_bound:2assumes "p ∈connecting_paths u v"3shows "shortest_path u v ≤enat (walk_length p)"12This lemma is proposed to prove that the shortest path between the vertex uand itself has length 0(which is a theorem in the AFP file). However, the conditional proof of the theorem contains a fewmistakes while the proposed lemma is proved separately. In this case, we still train on the correctlemma even though it might not be directly useful to the theorem in the training set.Remarks. We observe that the lemmas proposed by the model typically do not involve complex ideas.We attribute this to two main factors: (a) the limited size of our model and formal proof dataset,and (b) the fact that many human-written lemmas in the AFP file are indeed about basic facts andbasic properties (which are often used to prove more complex theorems later). Nevertheless, ourmodel still proposes and proves reasonable lemmas that are not present in the training dataset, andour experiments demonstrate that with these proposed lemmas, ProD -RL outperforms ProD -SFT onholdout test sets. We leave it to future work to scale up our method and force the model to focus onmore challenging theorems.D Related worksMost existing methods use search-based algorithms to generate formal proofs with language models[Polu and Sutskever, 2020, Jiang et al., 2021, Han et al., 2021, Polu et al., 2022, Jiang et al., 2022a].Prior works also show that RL can improve the search-based models’ performance [Polu et al., 2022,Wu et al., 2022, Lample et al., 2022]. The major drawback of these methods is their high computationcost at test time. A recent work [First et al., 2023] trains LLMs to generate a whole proof directlywithout knowing the verifier’s state. Our baseline model, SFT without lemma proposal, can be viewedas a reproduction of their method with a slightly different format. Orthogonally, Mikuła et al. [2023]improve premise selection tools (such as sledgehammer) using transformer-based retrieval models.Magnushammer could potentially be combined with our methods, which we leave as future works.Another line of research aims to translate natural language proofs into formal proofs [Jiang et al.,2022b, Zheng et al., 2023]. Xin et al. [2023] build a library of useful lemmas by decomposing naturallanguage proofs into lemmas with an LLM and then formalizing the decomposed proofs. In contrast,we propose new lemmas entirely in formal language.In general, mathematical question-answering tasks (such as GSM8K [Cobbe et al., 2021] and MATH[Hendrycks et al., 2021]) and theorem-proving tasks (such as Welleck et al. [2021]) are well-acceptedbenchmarks for the reasoning capability of large language models. Prior works show that instructiontuning or RL can significantly improve the models’ performance [Shao et al., 2024]. However,evaluation on these tasks is either performed by another language model (which is prone to errors)[Lightman et al., 2023], or requires ground-truth answers that are hard to acquire at scale.E Additional experiments detailsE.1 Generating proof trees for RLIn Alg. 2, we present the algorithm for generating proof trees during RL training. Recall that,compared with Alg. 1, there are two major differences:(a)for the theorems where the probability πθ(<use_invoke> |x)is among the top 25% in the batch,we will force the model to generate conditional proofs with t0=<use_invoke> (Line 4-6), and(b)for every theorem where the model generates a conditional proof with new lemmas, we also letthe model generate another conditional proof without proposing new lemmas (Line 11).E.2 Training details of baseline modelsIn this section, we describe the additional details for training the baseline models using reinforcementlearning.Our RL training pipeline for the baseline models is similar to that of ProD -RL, except that the modelsonly generate proofs without lemma proposal. For RL baselines, we use the same dataset and the samehyperparameters as our method. To mix the ground-truth conditional proofs with generated proofs,we convert the conditional proofs to proofs without lemma proposal by moving all the proposed13Algorithm 2 Generate proof trees (train)1:Inputs: Model πθ, theorems (represented by tuples of context, statement, and condition proof)G={(ci, si, ρ⋆i)}i, maximum depth d.2:forι←0,1,···, ddo3: Compute the invoke probability ∀i, pi=πθ(<use_invoke> |ci,si)πθ(<use_invoke> |ci,si)+πθ(<no_invoke> |ci,si).4: Letκbe the 75% quantile of {pi}i.5: ifι < d then6: bG={(ci, si, ρ⋆i)|pi≥κorui< piwhere ui∼Unif[0,1]}.7: else8: bG=∅.9: Sample proofs ρi∼πθ(· |cisi<no_invoke> )for lemmas (ci, si, ρ⋆i)inG.10: Sample proofs bρi∼πθ(· |cisi<use_invoke> )for lemmas (ci, si, ρ⋆i)inbG.11: Pι← {(ci, si, ρi)| ∀is.t.(ci, si, ρ⋆i)∈G} ∪ { (ci, si,bρi)| ∀is.t.(ci, si, ρ⋆i)∈bG}.12: ifι= 0then13: Pι←Pι∪G. (▷)In training, we also complete proof trees for ground-truth proofs.14: Extract proposed lemmas (note that a condition proof ρmight propose more than one lemmal):G← {(c, l,Null)|(c, s, ̄ρ)∈Pιandlis proposed in ̄ρ}.15:Return ∪dι=0Pι.lemma in the conditional proof to the context, and the resulting proofs are always acceptable by theverifier.E.3 Additional experiment detailsDetails of using sledgehammer in the proof. Sledgehammer is a premise selection tool that canautomatically generate proofs to solve the current goal. Although sledgehammer is not alwaysapplicable, Jiang et al. [2022a] shows that letting the model to call sledgehammer whenever it isapplicable greatly improves the model’s performance.To let the model use sledgehammer, we replace the actual proof steps in the training dataset by a callto sledgehammer if the proof step either (a) contains the proof tactics ‘meson, metis, and smt’ (thesetactics are typically generated by sledgehammer), or (b) belongs to a predefined simple set of prooftactics that can be easily generated. In particular, they are[by auto, by simp, by blast, by fastforce, by force,by eval, by presburger, by sos, by arith, by linarith,by (auto simp: field_simps)]When testing a generated proof with calls to sledgehammer, we follow the pipeline of [Jiang et al.,2022b] — first, we try to replace the ‘sledgehammer’ command by one of the predefined tactics. Ifall the attempts fail, we call the actual premises selection tool in Isabelle with a 10s timeout. If thetool does not return a valid proof, we consider this step incorrect.Note that Jiang et al. [2022a] decide when to replace the actual proof step by a call to sledgehammermore aggressively. They attempt to call sledgehammer at every proof step, and replace the actualproof step by sledgehammer if the attempt is successful. In contrast, our decision is made withoutinteracting with the formal verifier. This is because applying sledgehammer to every proof steprequires a lot of compute, which would significantly slow down the reinforcement learning process.Dataset split. Here we describe how to split the training and test data based on the dependencyof the AFP files. We first compute the dependency graph by crawling the AFP website https://www.isa-afp.org/entries/ , which lists the dependency of the AFP entries. Then we findthe set of AFP entries that all other entries do not depend on using the dependency graph, in whichwe randomly sample 10% of the entries as the holdout test set. The resulting holdout entries are:[Verified_SAT_Based_AI_Planning, SIFPL, Khovanskii_Theorem,Bondy, Rewriting_Z, Decreasing-Diagrams-II, Registers,LocalLexing, FeatherweightJava, FFT, Knot_Theory, Eval_FO,14Saturation_Framework_Extensions, Hales_Jewett, SPARCv8,CoSMeDis, LP_Duality, PAPP_Impossibility, Groebner_Macaulay,Abstract-Hoare-Logics, PCF, Jordan_Hoelder, Knights_Tour,FOL_Seq_Calc3, Cartan_FP, InformationFlowSlicing_Inter, LOFT,Diophantine_Eqns_Lin_Hom, Dynamic_Tables, Schutz_Spacetime,Elliptic_Curves_Group_Law, ArrowImpossibilityGS,Goodstein_Lambda, XML, GenClock, Topological_Semantics].Additional training detail. We use the Llemma code base ( https://github.com/EleutherAI/math-lm ) for finetuning and updating the model in reinforcement learning. Thediscount factor used to compute the weight is γ= exp( −0.0005) .E.4 Compute resourcesFor supervised finetuning and reinforcement learning, we use a machine with 8 A100-80G GPUs.It takes approximately 8 GPU days in total (i.e., 1 day wall-clock time on a single machine with 8GPUs) to finetune a 7B model on 300k examples for 2 epochs. It takes approximately 30 GPU hoursto run a single RL experiment.To generate proofs using the trained model, we use a mix of A100-80G and A5000 GPUs. On 8A5000 GPUs, generating proof trees of depth 2 for 4k test examples takes about 1-2 hours, dependingon the length of the proof and the number of new lemmas proposed.E.5 Licenses for existing assetsIn this section we list the licenses for existing assets used in this paper.• LLemma [Azerbayev et al., 2023]: Llama 2 Community License Agreement• Archive of Formal Proofs: GNU LGPL• Portal to ISAbelle [Jiang et al., 2021]: BSD 3-Clause License• Isabelle [Nipkow et al., 2002]: BSD licenses• miniF2F [Zheng et al., 2021]: MIT License15 |
CxHRoTLmPX | 2024-10-14Generative Verifiers: Reward Modeling asNext-Token PredictionLunjun Zhang1,2, Arian Hosseini1,3*, Hritik Bansal1,4*, Mehran Kazemi1, Aviral Kumar1,5and Rishabh Agarwal1*Core Contribution,1Google DeepMind,2University of Toronto,3Mila,4UCLA,5Carnegie Mellon UniversityVerifiers or reward models are often used to enhance the reasoning performance of large language models(LLMs). A common approach is the Best-of-N method, where N candidate solutions generated by the LLMare ranked by a verifier, and the best one is selected. While LLM-based verifiers are typically trained asdiscriminative classifiers to score solutions, they do notutilize the text generation capabilities of pretrainedLLMs. To overcome this limitation, we instead propose training verifiers using the ubiquitous next-tokenprediction objective, jointly on verification and solution generation. Compared to standard verifiers, suchgenerative verifiers ( GenRM) can benefit from several advantages of LLMs: they integrate seamlessly withinstruction tuning, enable chain-of-thought reasoning, and can utilize additional test-time compute viamajority voting for better verification. We demonstrate that GenRM outperforms discriminative, DPOverifiers, and LLM-as-a-Judge, resulting in a 16−40% improvement in the number of problems solved withBest-of-Nonalgorithmicandmathreasoningtasks. Furthermore, wefindthattraining GenRMwithsyntheticverification rationales is sufficient to pick out subtle errors on math problems. Finally, we demonstrate thatgenerative verifiers scale favorably with model size and inference-time compute.1. Introduction10%20%30%40%% Problem Solved(Best-of-32)Algorithmic Reasoning (2 tasks) =94.2%84%86%88%90%92%Best-of-16Grade-School Math (GSM8K) =11.6%35%38%40%43%45%Best-of-32Transfer to MATH (GSM-Verifiers) =16.2%LLM-as-a-Judge DPO Discriminative RM GenRM GenRM-CoTFigure 1|Generative verifiers outperform standard verification approaches in terms of Best-of-N on reasoningtasks, with a fixed generator. Here, Δ=(GenRM-CoT−Disc-RM)/(Pass@N−Disc-RM)measures how much bettergenerative CoT verifiers perform than discriminative RM, as a fraction of the maximum achievable gains overdiscriminative RM from an oracle verifier (Pass@N). GenRM-CoT leverages the generation capabilities of LLMs,enabling afinetuned verifiertoutilize chain-of-thoughtverification todetectsubtlereasoning errors. Foralgorithmictasks, we report average performance using Gemma-2B on Last Letter Concat (Wei et al., 2022) and BBH WordSorting (Suzgun et al., 2022). For math reasoning, we train Gemma2-9B verifiers on GSM8K and evaluate theirperformance on GSM8K test (middle) and easy-to-hard generalization on MATH500 (Lightman et al., 2023). Formath tasks, LLM-as-a-Judge utilizes Gemini 1.0 Pro, which we used for synthetic verification rationales for GSMtraining. The algorithmic reasoning tasks use programmatically generated oracle verification rationales as trainingdata for GenRM-CoT. Math tasks use model-generated verification rationales for training GenRM-CoT. Data will bereleased at: https://sites.google.com/view/generative-reward-models .Corresponding author(s): [email protected], {aviralkumar, rishabhagarwal}@google.com©2024 Google DeepMind. All rights reservedarXiv:2408.15240v2 [cs.LG] 11 Oct 2024Generative Verifiers: Reward Modeling as Next-Token PredictionProblem : Tim decides to cancel his cable subscription and get streaming services. He gets Netflix for $10 a month. Huluand Disney Plus normally cost $10 a month eachbut he saves 20% for bundling. How much money does he save bycancelling his $60 cable package?Solution : Tim pays $60 for cable. He gets Netflix for 10and the bundle of Hulu and Disney Plus costs $10 * 80% = $8.So he pays $10 + $8 = $18 for the bundle. Now he saves $60 - $18 = $42. The answer is 42.Discriminative RM correctness score: 0.999GenRM-CoT : Let’s verify step by step. ···Step 3. **The bundle of Hulu and Disney Plus costs $10 * 80% = $8.** (Incorrect)- Expected Answer: The bundle of Hulu and Disney Plus costs $10 + $10 = $20. With the 20% discount, the total cost is$20 * 0.8 = $16. ···Verification: Is the answer correct (Yes/No)? NoGenRM-CoT (Majority Voting) score: 0.0015Figure2|ExampleusinggenerativeCoTverifieronGSM8Ktest . LLM-generatedsolutionsoftensoundconvincingeven when they are wrong, making verification a challenging task. Here, the solution is incorrect because it hasignored the word ‘each’in the problem. While the discriminative RM fails to recognize this subtle mistake inthe solution, our GenRM-CoT verifier reliably detects the error. This is because GenRM-CoT was trained withnext-token prediction on synthetic chain-of-thought rationales, enabling it to explicitly reason about the solution.The full verification output can be found in Table E.12.While large language models (LLMs) demonstrate remarkable capabilities, they often confidently makelogicalandfactualmistakes(Zhangetal.,2023). Thesemistakesposeasignificantchallengeforreasoningproblems, where a single mistake can invalidate the solution. A common strategy to address this issue isBest-of-N (Charniak and Johnson, 2005; Cobbe et al., 2021): the LLM generates N candidate solutions fora given problem, and a learned reward model, referred to as a “verifier”, ranks these solutions and picksthe most suitable one. The effectiveness of this strategy hinges on how accurate the verifier is, making itcrucial to identify better approaches for training verifiers.LLM-based verifiers for reasoning are typically trained as discriminative reward models (RMs) to assignnumerical scores to candidate solutions, which is then used to classify them as correct or incorrect (Cobbeet al., 2021; Lightman et al., 2023; Wang et al., 2023). However, this scoring approach does not utilize thetext-generationcapabilitiesthatLLMsarefundamentallydesignedfor. Asaresult,discriminativeRMsmissout on the inherent strengths of generative LLMs, such as unified instruction tuning (Chung et al., 2022),chain-of-thought (CoT) reasoning (Wei et al., 2022), and utilizing additional inference-time computationfor better performance (Brown et al., 2024; Wang et al., 2022). While LLM-as-a-Judge (Zheng et al.,2024), which simply prompts off-the-shelf generative LLMs, also offers the above advantages, it typicallyunderperforms trained LLMs-based verifiers on reasoning tasks, which we also observe in Figure 1.In this work, we propose training verifiers with next-token prediction, which we call GenRM, to leveragethe text generation capabilities of LLMs (Figure 2). Concretely, to produce a numerical score for a solution,the verifier now uses a prompt such as ‘Is the answer correct?’, and represents the score as the probabilityof a single text token (e.g., ‘Yes’ or ‘No’). GenRM naturally supports CoT reasoning (Wei et al., 2022):it can be trained to reason explicitly by generating a verbalized rationale before predicting correctnessusing ‘Yes’ or ‘No’ token (Figure 3), assuming rationales are available during training. We can furtherboost verification accuracy of CoT verifiers using majority-voting (Wang et al., 2022): sampling multipleCoT rationales and calculating the average score of the ‘Yes’ token across all rationales, enabling the useof inference-time compute for verification. Moreover, GenRM’s next-token prediction training enablesunifying solution generation with verification, which has been difficult with DPO verifiers (Hosseini et al.,2024; Rafailov et al., 2024), potentially improving verification through positive transfer from solution.2Generative Verifiers: Reward Modeling as Next-Token Prediction“Let’s verify step by step.” GenRM Finetuned Verifier Problem Solution “Is the answer correct (Yes/No)?” YesNoOther tokens GenRM-CoT Finetuned Verifier Problem Solution Token Probability Verification CoT1...Verification CoTNNoY es Yes0.40.20.90.8Averag er r Figure 3|An illustration of generative verifiers , namely GenRM andGenRM-CoT. Given a question and acandidate solution, GenRM directly finetunes an LLM to answer the question ‘Is the answer correct (Yes/No)?’via SFT on the next-token response corresponding to either ‘Yes’ or ‘No’. During inference, the verifier score isobtained by extracting the probability of the ‘Yes’ token (4). In comparison, GenRM-CoT finetunes a LLM toproduce verification chain-of-thought (CoT) rationale before yielding the final Yes/No token. At test-time, wesample multiple CoT rationales and use majority voting to compute the average probability of ‘Yes’, enablingGenRM-CoT to utilize additional inference-compute for better verification.Our results show that GenRM outperforms discriminative RMs, LLM-as-a-Judge, and self-consistency onalgorithmic string manipulation and math reasoning tasks (Figure 1). Best-of-N performance furtherimproves with GenRM-CoT that uses majority-voting, nearly matching performance with oracle verifieron algorithmic tasks. On GSM8K, when using a Gemma2-9B GenRM-CoT verifier on solutions fromGemini 1.0 Pro, we observe an improvement from 73%→93.4%in terms of the number of problemssolved, surpassing GPT-4 and Gemini 1.5 Pro. Furthermore, GenRM-CoT trained on grade-school mathproblems exhibit easy-to-hard generalization, solving 17% more high-school competition problems inMATH500 (Lightman et al., 2023) with Best-of-32. Moreover, we find that generative verifiers scale morefavorably than discriminative verifiers as we increase model capacity, and outperform LLM-as-a-Judgeas we scale inference-time compute with majority voting. Overall, generative verifiers hold significantpotential for improving the reasoning capabilities of LLMs.2. PreliminariesAn autoregressive language model generates an output sequence y=(y1,y2,...,yT)given a input contextx(e.g., math problem) by predicting tokens one at a time, based on the previously generated tokens.Assuming that the language model is parameterized by θ, the conditional probability distribution ofgenerating a sequence ygiven context xispθ(y|x)=TÖt=1pθ(yt|x,y<t) (1)with the convention y<1=∅andy<t=(y1,y2,...,yt−1). For ease of notation, we define pθ(yt|x):=pθ(yt|y<t,x). For a vocabulary size M, the probability of predicting the t-th token yt,pθ(yt|x),is determined using a softmax with temperature γon logit scores zof all the tokens: pθ(yt|x)=exp(zt/γ)ÍMi=1exp(zi/γ),wherezt=logitθ(yt|x,y<t). Higher values of temperature γintroduce more randomness,while setting temperature τ=0makes the output deterministic, which corresponds to greedy decoding.3Generative Verifiers: Reward Modeling as Next-Token PredictionNext-token prediction is the typical approach for pre-training and fine-tuning LLMs. In particular,supervised fine-tuning ( SFT) minimizes the cross-entropy loss between the model’s predicted next tokenand the actual target token in a given sequence. Given a dataset D={(x,y)}of input context xandtarget response y, the SFT loss is given by:LSFT(θ,D)=−E(x,y)∼D"|y|∑︁t=1logpθ(yt|x,y<t)#. (2)Best-of-N is a widely-used approach to improve the reasoning performance of LLMs (Cobbe et al., 2021;Lightman et al., 2023). Specifically, given a test problem, we sample N candidate solutions from agenerator LLM. These candidates are then scored using a learned verifier or reward model, and thehighest-scoring solution is selected as the final answer. A better verifier increases the chance of selectingthe correct solution, improving test accuracy.Discriminative Verifiers . The prevalent approach of training verifiers for reasoning domains is tofine-tune an LLM as a classifier on a dataset of correct and incorrect solutions generated from a fixedLLM, using the binary cross-entropy loss. To do so, these verifiers directly assign a numerical scorerθ(x,y)∈[0,1]to estimate the probability that a solution yis correct for a problem x. As such, theseverifiers do not utilize the text generation the capabilities of LLMs. Given a reward-modeling (RM) datasetDRM=DincorrectÐDcorrect, we train discriminative RMs as follows:L(θ,DRM)=−E(x,y+)∼Dcorrectlogrθ(x,y+)−E(x,y−)∼Dincorrect[log(1−rθ(x,y−))],whererθ(x,y)=sigmoid(zcls),andzcls=logitθ(cls|y,x) (3)where y+are correct and y−are incorrect solutions, and clscorresponds to a special vocabulary token. Inthis work, we always use a balanced data mixture between correct ( Dcorrect) and incorrect (Dincorrect)problem-solution pairs.3. GenRM: Verification as Next-Token PredictionDiscriminative LLM-based verifiers (3) do not utilize the text generation capabilities of pretrained LLMs.To address this issue, we propose training generative verifiers, which we call GenRM, using standardnext-token prediction (2). To do so, GenRM represents solution correctness using the LLM’s probabilitydistribution over tokens, instead of predicting a separate numerical score. This keeps the generationabilities of GenRM intact as the verification decision is just another token, while also enabling severaladvantagesthatcomefor“free”withLLMs,suchasunifiedtrainingforsolutiongenerationandverification,chain-of-thought reasoning, and inference-time computation.3.1. Direct VerifierInitssimplestform, GenRM predictswhetherasolutioniscorrectusingasingle‘Yes’or‘No’token(Figure3,top). This can be done by maximizing logpθ(‘Yes’|(x,y+))for correct solutions y+andlogpθ(‘No’|(x,y−))for incorrect solutions y−. To do so, we minimize the SFT loss in (2) on the dataset DDirectcontaining problem-solution pairs and a ‘Yes‘ or ‘No’ verification token:DDirect ={(x,y+,I),‘Yes’}Ð{(x,y−,I),‘No’},I=‘Is the answer correct (Yes/No)?’4Generative Verifiers: Reward Modeling as Next-Token PredictionAt inference, we use the likelihood of the ‘Yes’ token as the verifier’s score for re-ranking solutions:rDirect(x,y)=pθ(Yes|x,y,I). (4)This score takes into account the verifier’s confidence about its correctness prediction, which reduces thechance of being wrong at test-time when using a binary ‘Yes’ or ‘No’ prediction.3.2. Unifying Generation and VerificationGenRM seamlessly integrates reward modeling, which distinguishes between correct and incorrectsolutions, with SFT for generating correct solutions. This can be done by simply changing the datamixture in the SFT loss (2) to include both verification and generation tasks. Given a verification datasetDverify, which can beDDirectorDCoT(discussed below) of problems-solution pairs with correctness tokens(optionally with CoT rationales), GenRM minimizes the loss:LGenRM(θ,Dverify)=LSFT(θ,Dverify)+λLSFT(θ,Dcorrect), (5)whereλ >0is a hyperparameter that controls the mixture ratio between verification ( Dverify) and gener-ating correct solutions ( Dcorrect). This unified training can improve verifier and generation performancevia positive transfer between these two related tasks: how to generate a correct solution, and whether asolution is correct. By default, we train GenRM verifiers using the unified loss in (5).3.3. Chain-of-Thought Verifiers (GenRM-CoT)Since verification often involves nuanced reasoning, generative verifiers can naturally benefit fromCoT (Wei et al., 2022). Specifically, we can generate intermediate reasoning steps or critique (CoT) beforemaking a decision about the solution correctness, which may identify subtle reasoning errors missed bydirect verifiers (Figure 3, bottom). To train CoT verifiers, we can minimize the SFT loss LGenRMon thedatasetDCoTcontaining problem-solution pairs as inputs, and corresponding verification rationales vCoTappended with a final question Iand ‘Yes’ or ‘No’ token as targets:DCoT={x,y+,ICoT,(vCoT,I,‘Yes’)}Ð{(x,y−,ICoT),(vCoT,I,‘No’)}where ICoT=‘Let’s verify step by step.’. Notably, these rationales can either be human or LLM-generated,both of which we explore in this work. During inference, we first generate a CoT rationale vCoTfromGenRM-CoT and then use the probability of ‘Yes’ for assigning the correctness score:rCoT(x,y)=pθ(Yes|x,y,ICoT,vCoT,I),where vCoT∼pθ(·|x,y,ICoT), (6)Compared to (4) that only uses the instruction Ito produce a score, the above CoT reward additionallyconditions on ICoTand self-generated vCoTbefore getting a score via instruction I.Inference-time compute for CoT verifier. When sampling verification CoTs, the generative verifier canuse different reasoning paths and yield different correctness probabilities for the same problem-solutionpair. As such, we would like to marginalize out these reasoning paths to select the most consistentcorrectness answer (Wang et al., 2022). To do so, we use majority voting where we first generate Kverification CoT rationales, and average the CoT-verifier score for these rationales:rMajV@K(x,y)=1KK∑︁i=1pθYes|x,y,ICoT,v(i)CoT,I,where v(i)CoT∼pθ(·|x,y,ICoT) (7)5Generative Verifiers: Reward Modeling as Next-Token PredictionProblem : A line is parameterized byxy=8−1+t23. The equation of the line can be expressed in the form y=mx+b.Enter the ordered pair (m,b).Solution : We have x=8+2tandy=−1+3t. Solving the first equation for t, we have t=x−82. Substituting this into thesecond equation, we get y=−1+3x−82=32x−232. Thus,m=32andb=−232. So, the answer is32,−232.Discriminative RM correctness score: 0.827GenRM-CoT : Let’s verify step by step. ···**Step 3:**Solution: Substituting this into the second equation, we get y=−1+3x−82=32x−232.Expected Answer: Substituting this into the second equation, we get y=−1+3∗ (x−8)/2=−1+(3x−24)/2=−1+(3x/2)−12=(3x/2)−13.**Verification:** No···Verification: Is the answer correct (Yes/No)? NoGenRM-CoT (Majority Voting) score: 0.438Figure 4|An example on MATH where GenRM-CoT (trained only on GSM) detects a reasoning error. The solution madea mistake in simplifying an intermediate step. Both Discriminative RM and GenRM-CoT models have only been trained onGSM8K. In this case, discriminative RM fails to classify the solution as incorrect, whereas GenRM-CoT utilizes chain of thoughtsto catch this mistake. See Table E.14 for details.Since individual verification rationales from CoT verifiers can have reasoning errors, majority voting canmitigate the impact of such errors by averaging correctness scores across multiple rationales. Importantly,this means that GenRM-CoT can leverage additional inference-time compute to improve its accuracy,which discriminative verifiers cannot do. Unless otherwise specified, we report GenRM-CoT performancebased on majority voting with 32 votes, that is, K=32in (7).Synthetic verification CoT rationales for training. Verifying LLM solutions with human-generatedrationales can become increasingly expensive and challenging as LLMs surpass human reasoning abilities.To address this challenge, we explore using synthetically-generated rationales on GSM8K. One naiveapproach is to simply use the ‘Let’s verify step by step’ prompt given a problem-solution pair, and keep thegenerated rationales only when they accurately verify the correctness of a solution (Singh et al., 2023;Zelikman et al., 2022). However, such rationales (after filtering based on final yes/no responses) are stilloften of poor quality, due to 50% accuracy from random guessing.Instead, we use reference-guided grading (Zheng et al., 2024) to improve the quality of syntheticrationales, which we filter using their verification correctness. Specifically, we provide a reference solutionin addition to the problem and solution to verify (see Table A.2), making it easier for an LLM to point outany reasoning error in the provided solution. Here, a reference solution could be any model-generatedsolution that arrives at the correct final answer. Note that reference-guided grading can only be usedduring training, as we do nothave reference solutions for test problems.4. ExperimentsIn this section, we evaluate the efficacy of next-token prediction and chain-of-thought reasoning forverification compared to standard verification approaches. To this end, we compare GenRM and standardverifiers on a number of reasoning tasks to answer the following questions: (1) How does GenRMcompare to discriminative verifiers and other approaches? (2) Does unified training of GenRM improve6Generative Verifiers: Reward Modeling as Next-Token Predictiongeneration and verification performance? (3) Can GenRM effectively utilize CoT reasoning to improve itsperformance? (4) How does GenRM scale with model size and inference-time compute?Tasks.We focus on the following tasks and put details about data generation in Appendix A:•Algorithmic reasoning . We use two difficult string manipulation tasks, namely Last Letter Concate-nation (Wei et al., 2022) and Word Sorting from Big-Bench (Suzgun et al., 2022). We train verifierson word lists of length {2,3,4}, and evaluate their generalization on length {5,6}.•Math reasoning . We train grade-school math verifiers on the GSM8K dataset from Cobbe et al. (2021)that popularized test-time verification. We evaluate these verifiers on the GSM8K test set as well astheireasy-to-hard generalization on much harder MATH dataset (Hendrycks et al., 2021), using thesame held-out set of 500 MATH problems as Lightman et al. (2023).Baselines. We compare GenRM to the following verification approaches:•Discriminative RM (Cobbe et al., 2021) or ORM is the prevalent approach for training verifiers fortest-time re-ranking on reasoning tasks (§2), and serves as our main baseline.•LLM-as-a-Judge (Zheng et al., 2024) uses an off-the-shelf pretrained LLM for verification. To do so,we use a CoT prompt to produce 32 verification rationales that is used for correctness prediction andpick the majority-vote correctness answer.•DPO(Rafailov et al., 2024): Following Hosseini et al. (2024), we use this preference optimizationapproach for training verifiers on preference pairs with incorrect and correct solutions.•Self-consistency (Wang et al., 2022): A simple approach to use test-time compute without verifiers:sample multiple solutions from the LLM generator and pick the most common answer.Note that self-consistency and test-time verification are complementary approaches, and can be oftencombined via weighted self-consistency to further boost performance, as shown in Figure 5.Evaluation protocol. Following Cobbe et al. (2021); Lightman et al. (2023), we primarily use Best-of-Nperformance in terms of the percentage of problems solved using a fixed generator (§2) with learnedverifiers, and report average accuracy on the test set. We also report test RM accuracy , which measureswhether the verifier accurately classifies incorrect and correct solutions. While these two metrics arecorrelated, RM accuracy only evaluates the verifier’s point-wise accuracy, while Best-of-N evaluates theverifier’s ability to rank solutions for choosing the correct one.Models & Training. For training verifiers, we use open-weights Gemma models (Gemma Team et al.,2024a,b), specifically Gemma-2B for algorithmic tasks, and Gemma 2B, 7B, and Gemma-2 9B for GSM8K.For solution generation as well as LLM-as-a-Judge, we use Gemma 2B for algorithmic tasks and Gemini1.0 Pro (Google et al., 2023) for GSM8K. For verification CoT rationales, we generate oracle rationalesfor algorithmic tasks programmatically (Table A.1); for GSM8K, we generate synthetic rationales usingGemini 1.0 Pro with reference-guided grading (Table A.2). See Appendix B for hyperparameter details.4.1. Generative Verifiers Outperform Standard Verification ApproachesGenRM outperforms LLM-as-a-Judge and DPO verifiers (Figure 1), while performing comparably orslightly better than discriminative verifiers (Figure D.1). GenRM-CoT substantially improves the Best-of-N performance over GenRM. In particular, on the algorithmic tasks with oracle verification CoTs,7Generative Verifiers: Reward Modeling as Next-Token Prediction1 2 4 816 32Number of Solutions (N)8%16%24%32%40%% Problem Solved (Best-of-N) 1.5× efficientAlgorithmic Reasoning (2 tasks)1 2 4 8 16Number of Solutions (N)72%76%80%84%88%92% 1.2× efficientGrade-School Math (GSM8K)1 2 4 816 32Number of Solutions (N)28%32%36%40%44% 6.4× efficientTransfer to MATH (GSM-Verifiers)LLM-as-a-Judge Self-Consistency DPO Discriminative RM GenRM-CoTFigure 6|Sample-Efficient Scaling with Generative Verifiers .GenRM-CoT outperforms other methods, especially for lengthgeneralization on algorithmic tasks (Gemma-2B verifiers) and easy-to-hard generalization on MATH (Gemma2-9B verifiers).Specifically, GenRM-CoT nearly matches the oracle verifier’s Best-of-N performance on algorithmic tasks. On MATH, it matchesdiscriminative verifier’s Best-of-32 performance using 6.4×fewer solutions .Pre-Calculus Intermediate AlgebraNumber TheoryCounting and ProbabilityGeometryAlgebraPre-Algebra20%40%60%% Problems Solved(Best-of-32)MATH: Score Breakdown by Subject AreasDiscriminative RMGenRM-CoT1 2 3 4 5Difficulty Level20%40%60%80%MATH: Difficulty BreakdownFigure 7|Easy-to-Hard Generalization on MATH , with Gemma2-9B verifiers trained only on significantly easier grade-schoolmath problems. Compared to discriminative RMs, GenRM-CoT performs especially well on Pre-Algebra, Algebra, and Pre-Calculus, and obtains superior performance across all difficulty levels.GenRM-CoT nearly matches the oracle verifier performance.1 2 4 816 32Number of Solutions (N)28%32%36%40%44%48%% Problems Solved(Best-of-N) 2.5× efficientMATH: Effect of Weighted SCSC SC + Disc RM SC + GenRM-CoTFigure 5|Weighted Self-Consistency on MATH.On GSM8K, GenRM-CoT consistently outperforms othermethods (Figure 6, middle), even though the synthetic CoTrationales for training may contain errors. Qualitatively,GenRM-CoT is able to detect subtle reasoning errors thatare missed by discriminative or direct GenRM verifiers (seeFigure 2, 4, and 14).Easy-to-Hard Generalization . Without any training onMATH,GenRM-CoTresultsina 6.4×bettersampleefficiencythan discriminative verifiers as we increase the number ofsolutions to verify, and surpasses the strong self-consistencybaseline (Figure 6, right). While Sun et al. (2024) demon-8Generative Verifiers: Reward Modeling as Next-Token Prediction10%12%15%18%20%% Problems Solved(Best-of-32)Last Letter Concat (2B model)40%60%(Best-of-32)Word Sorting (2B model)88%90%92%(Best-of-16)GSM8K (7B model)GenRM (Verification Only) GenRM GenRM-CoT (Verification Only) GenRM-CoTFigure 8|SFT on correct solutions enhances verification , both for GenRM andGenRM-CoT, across all tasks.‘Verification Only’ corresponds to verifiers trained only on verification data, by setting λ=0in (5).2123Number of Solutions (N)0%20%40%60%80%Best-of-N (Oracle Verifier)Last Letter Concat (=1/3)2123Number of Solutions (N)20%40%60%80%Word Sorting (=1/3)2123Number of Solutions (N)70%80%90%GSM8K (=1/4)Base LLM SFT (Generation) GenRM-CoT (Verification + Generation)Figure 9|Unifying generation and verification boosts generation performance compared to SFT on correctsolutions, in terms of Best-of-N with oracle verifier. The improvement is larger on algorithmic tasks, which useground-truth verification data, than on GSM8K that relies on synthetic rationales, which may be inaccurate.strate that discriminative verifiers trained on easy MATH problems can generalize to harder MATHproblems, GenRM-CoT exhibits a much stronger generalization from grade-school math problems tohigh-school competition problems in MATH (see Figure 7 for a score breakdown by subject areas anddifficulty levels).Leveraging Self-Consistency with Verifiers . Self-consistency and test-time verification can be easilycombined to boost Best-of-N performance. To do so, we use weighted self-consistency or majority-voting (Liu et al., 2023; Sun et al., 2024; Uesato et al., 2022) where we weight each solution according totheverifier’sscore,andselectthefinalanswerwiththelargestweight(seeAppendixCfordetails). Figure5shows that weighted SC can indeed improve the vanilla self-consistency (SC); in particular, weighted SCbased on GenRM-CoT requires 2.5x fewer solutions than its counterpart based on Discriminative RM toreach the same performance.4.2. Synergy Between Generation and VerificationUnifyingsolutiongenerationwithverification,asdoneby GenRM usingnext-tokenprediction,consistentlyimprovesverificationperformanceacrossalltasks, asillustratedinFigure8. Thisimprovementisobservedfor both direct and CoT-based generative verifiers, suggesting that teaching the verifier to imitate correctsolutions generally helps. However, adding too much solution generation data can decrease verificationperformance of GenRM (Figure D.3).9Generative Verifiers: Reward Modeling as Next-Token Prediction12481632# Sampled CoT Rationales (K)80%85%90%95%% GSM8K Problems Solved (Best-of-16)Gemma-2B12481632# Sampled CoT Rationales (K)Gemma-7B12481632# Sampled CoT Rationales (K)Gemma-9BLLM-as-a-Judge (MajVote@K) GenRM-CoT (MajVote@K) GenRM-CoT (Greedy)Figure 10|ScalingInference-timeComputeforVerification on GSM8K. By posing reward modeling as next-tokenprediction, GenRM-CoT can utilize Chain-of-Thought and Majority Voting, to turn additional test-time computeinto higher percentage of problems solved under Best-of-N. Here, the horizontal line corresponds to performanceof GenRM-CoT verifier with greedy decoding in Eq (6).2B 7B 9BParameter Count (Gemma)35.0%40.0%45.0%% Problems Solved (Best-of-32)MATH: Model Scaling2B 7B 9BParameter Count (Gemma)67.5%70.0%72.5%75.0%RM AccuracyMATH: Model ScalingGenRM GenRM-CoT Discriminative RMFigure 11|Model Scaling for Generative Verifiers. We evaluate MATH performance of Gemma 2B, 7B, andGemma2 9B verifiers trained on GSM8K. We observe positive scaling trends for GenRM (direct) and GenRM-CoTas well as Discriminative RM, both for ( Left) Best-of-N performance, and ( Right) RM accuracy on the test set.Generative verifiers outperform discriminative counterparts in all model regimes.Incorporating CoT verification data into the generator’s training mix leads to better solution generationperformance for the GenRM-CoT verifier itself, as evidenced in Figure 9 by the improved Best-of-N scoreswith the oracle verifier (Pass@N). This suggests that teaching a generator to perform CoT verificationusing next-token prediction can deepen its understanding of the generation process itself. Overall,unifying solution generation and verification is mutually beneficial.4.3. Scaling Model Size and Inference-time ComputeScaling Test-Time Compute with GenRM-CoTcan be done by sampling multiple CoTs and applyingmajority voting, as described in Eq (7). As shown in Figure 10, GenRM-CoT verifier’s performancescales gracefully with number of votes at test time, under all three Gemma model sizes (2B, 7B, 9B),outperforming greedy decoding performance within 2 votes. Notably, across model scales, the finetunedGenRM-CoTverifieroutperformsLLM-as-a-Judge,whichalsoutilizesthesameCoTapproachandnumberof majority votes, but prompts a more capable Gemini 1.0 Pro model than Gemma models which wefinetune as verifiers.10Generative Verifiers: Reward Modeling as Next-Token Prediction1 2 4 8 16Number of Solutions (N)80%90%% Problems Solved (Best-of-N)GenRM-CoT: Synthetic RationalesReference GuidanceNo GuidanceOracle (pass@N)Figure 12|Quality of synthetic rationales mat-ters. Using reference guidance for synthetic ratio-nale generation is crucial for GenRM-CoT to per-form well on GSM8K: 91.7% with guidance vs.87.8% without for Gemma-7B verifiers.1 2 488%89%90%91%92%RM AccuracyGemma-7B1 2 4Training CoT rationales Per Solution85%86%87%88%89%% Problems Solved(Best-of-16)Gemma-7BGSM8K: Scaling Number of Rationales Figure 13|Quantity of synthetic rationales matter . Scaling thenumber of rationales per solution for GenRM-CoT on GSM8K improvesboth RM accuracy and Best-of-N performance. Here, we use fine-tunedGemma-7B verifier, with greedy decoding at inference (6).Scaling model size. In Figure 11, we show that generative verifiers, especially GenRM-CoT, exhibitbetter scaling behavior than discriminative RMs, both in terms of reward modeling accuracy and Best-of-N performance. Intuitively, bigger models are more capable of text generation, allowing GenRM-CoTfinetuning to better tap into its chain-of-thought reasoning ability for verification. Furthermore, theseresults demonstrate that larger models generalize better using the same data, which matches what weexpect from scaling model parameter counts under the next-token prediction loss.4.4. Synthetic Rationales: Quantity and Quality MatterOur results on math reasoning tasks indicate that CoT verifiers can outperform discriminative anddirect verifiers without requiring human-written verification rationales, highlighting the potential ofLLM-generated rationales. We find that both the quality and quantity of these synthetic rationales matter.As shown in Figure 12, using reference-guided grading during rationale generation (§3.3) significantlyimproves verification performance. Furthermore, using multiple rationales per solution also improvesperformance, as shown in Figure 13. We suspect that this is because model-generated rationales maycontain errors, such that training on multiple rationales per solution can result in an “ensembling” effectthat prevents overfitting to such errors (Zhang et al., 2024).Importantly, unlike prior work, our results on math reasoning tasks do not require a more capablemodel (Ankner et al., 2024; Ye et al., 2024) or humans (McAleese et al., 2024; Saunders et al., 2022) forgenerating verification rationales: we use the same model (Gemini 1.0 Pro) to both generate solutions toverify and synthetic verification rationales for training.5. Related WorkReward models (RMs) and verifiers. Conventionally, RMs and verifiers are trained as discriminativemodels via binary classification: given a prompt and a corresponding solution or a pair of solutions), themodel iseithertrainedtopredict thecorrectness ofthesolution(Cobbeet al.,2021;Lightman etal., 2023;Luo et al., 2024; Uesato et al., 2022; Wang et al., 2023; Yu et al., 2024) or a preference between the two11Generative Verifiers: Reward Modeling as Next-Token PredictionProblem : Carol spends 4 hours writing a song, half that much time recording it, and 90 minutes editing it. Whatpercentage of her total work time did she spend editing?Solution : Half of 4 hours is 2 hours. So the time spent recording is 2 hours. Add the editing time of 90 minutes and thetotal time for the entire project is 4 + 2 + 1.5 = 7.5 hours. That means she spent (90 / 7.5) * 100 = 12% of the totaltime editing. The answer is 12.Discriminative RM correctness score: 0.8414GenRM-CoT : Let’s verify step by step.···Step 3: That means she spent (90 / 7.5) * 100 = 12% of the total time editing.Expected Answer: Incorrect. The editing time should be in hours, so it should be (1.5 / 7.5) * 100 = 20%.Verification: Is the answer correct (Yes/No)? NoGenRM-CoT (Majority Voting) score: 0.2498Figure 14|An example where GenRM-CoT catches a subtle mistake that the discriminative verifier is unable tocatch. The candidate solution did not convert 90 minutes into 1.5 hours before dividing it by 7.5. However, thediscriminative verifier was not able to detect this mistake likely because the solution does still appear to produce avalid-sounding percentage 90/7.5 = 12. Our proposed GenRM-CoT model is able to identify this mistake usingstep-by-step generative verification. The full verification output can be found in Table E.1.solutions (Nakano et al., 2021; Stiennon et al., 2020). Concretely, the RM directly produces a numericalcontinuous-valued score, which is then plugged into a classification objective (3). As such, discriminativeverifiers do not utilize the generation capabilities of LLMs. In contrast to discriminative RMs, GenRMrepresents the correctness decision using the log probability of specific tokens, for example ‘Yes’ and ‘No’.Posing verification as generating “yet another token” allows it to tap better into the generation capabilitiesof LLMs, by making it straightforward to employ CoT reasoning and additional inference-time computefor better verification.LLM-as-a-Judge. Another line of work that poses verification as next-token prediction simply promptsoff-the-shelf LLMs to act as a verifier when provided with a rubric and a template for grading (Bai et al.,2022; Kim et al., 2023; Ling et al., 2024; Zheng et al., 2024) or many-shot ICL examples (Agarwal et al.,2024), but without any specific training for the same. Perhaps unsurprisingly, we find in our experimentsthat using more powerful LLMs (Gemini 1.0 Pro) as a judge is worse than our trained GenRM usingweaker Gemma models (Figure 1, 10), highlighting the necessity of training generative verifiers. Ourgenerative verifiers also exhibit good out-of-distribution generalization, which might be due to bettercalibrated uncertainty estimates from training (Kapoor et al., 2024). More generally, even the strongproprietary LLMs, such as GPT-4 (Achiam et al., 2023) and Gemini (Team et al., 2024), fall behindtrained RMs on popular leaderboards (Lambert et al., 2024), and this gap is much larger for reasoning.Using CoTs for reward models. Prior works have also used critiques or CoT to extract preference andverification signals using LLM-as-a-Judge (Wang et al., 2024; Wu et al., 2024; Yuan et al., 2024); incontrast to these works, GenRM utilizes model-generated CoTs directly for training the verifier. Uponinference, a GenRM-CoT produces its own CoTs, which it then uses to make decisions on correctness,unlike Ye et al. (2024) that simply uses CoTs from a separate highly-capable LLM. In contrast to priorwork that utilizes high-quality data from humans to train critique models (Saunders et al., 2022) ortraindiscriminative RMs for generating code critiques (McAleese et al., 2024), we show that GenRMcan be trained from purely synthetic, model-generated critiques. Concurrent work (Ankner et al., 2024)trains an RM to produce response critiques for preference pairs generated using a much more capable12Generative Verifiers: Reward Modeling as Next-Token PredictionLLM, which are then passed as input into a RM head, separate from the base LLM. Unlike GenRM whichuses next-token prediction, their RM head is trained discriminatively akin to standard RMs. While thisapproach allows them to leverage CoT, it does notallow them to unify solution generation and verificationas a result of a discriminative RM head, which GenRM seamlessly enables (Section 4.2). Moreover,their synthetic critiques are not filtered for correctness, which would lead to poor verification CoTs onreasoning tasks (§3.3).Unifiedgenerationandverification. Oneofthehallmarkpropertiesof GenRM isthatthesamegenerativeverifier can be co-trained with a generation objective (5): when given a problem, the model is trainedto produce a solution, whereas when given a problem and a candidate solution, it is trained to verifythis candidate. This is related to DPO (Rafailov et al., 2024) and its application to learning verifiersin reasoning (Hosseini et al., 2024), which aims to unify generation (policy) and verification (rewardmodels) by representing the reward implicitly using the logits of a policy and training the policy witha reward-modeling loss. For reasoning, this type of model tying has been shown to exhibit erroneousextrapolation and degradation in learned representations, which prior work has attempted to addresswith additional techniques (Pal et al., 2024; Pang et al., 2024; Setlur et al., 2024; Yang et al., 2024). Ofthese, while Yang et al. (2024) train a reward model with an auxiliary generative SFT loss, note thatthis loss is applied on a separate head for regularization purposes and is discarded after training; unlikeGenRM no text is produced when querying the RM. In addition, compared to DPO, GenRM uses a simplernext-token prediction loss, does not require a reference policy, and obtains significantly better verificationperformance (Figure 1, 6).6. Conclusion & Future WorkIn this paper, we have introduced Generative Verifiers ( GenRM), which recast verification as next-tokenprediction. GenRM is more performant than discriminative verifiers, and unlocks the use of chain-of-thought reasoning and inference-time compute for better verification. GenRM also unifies generationand verification into a single LLM, and demonstrates that such a unification benefits both generation andverification. Moreover, we show that synthetic model-generated rationales, which can be error-prone,are sufficient to teach GenRM how to use verification CoT to pick out tricky errors on math reasoningtasks (see Figure 2, 4, 14, and Appendix E).The framework of generative verification offers a solid foundation for future work. Promising directions in-clude extending this framework to broader tasks such as coding, alignment, text-to-image generation (Linet al., 2024), and open-ended generation (Besta et al., 2024). Furthermore, leveraging process-levelsupervision (Lightman et al., 2023) and training CoT verifiers with reinforcement learning (RL) can resultin more accurate generative verifiers. Given GenRM’s compatibility with all the existing tools designedto improve LLMs, exploring enhancements through techniques like retrieval-augmented generation(Borgeaud et al., 2022), many-shot learning (Agarwal et al., 2024), multi-staged prompting (Yao et al.,2024), and tool use (Schick et al., 2024) would be interesting. Finally, incorporating generative verifiersinto RL pipelines for LLMs warrants further investigation.AcknowledgementsThis work was done during LZ, AH, and HB’s internship at Google. We thank Hugo Larochelle, MinqiJiang, AleksandraFaust, AnkitAnand, GuillaumeDesjardins, DoinaPrecup, andCharlieSnellforfeedback13Generative Verifiers: Reward Modeling as Next-Token Predictionon an earlier version of this paper and informative discussions. We thank Chirag Nagpal and KatrinTomanek for support in setting up infrastructure that was crucial for running Gemma experiments.Author ContributionsLZ led the project, and ran almost all of the experiments and ablation studies, and wrote and edited thepaper. AH was responsible for the discriminative RM baselines and DPO baselines. HB was responsiblefor the word-sorting task, and helped set up evaluations and DPO. MK advised the project, and providedfeedback on writing. AK conceived the project with RA, advised LZ, and provided feedback on paper. RAhostedLZasastudentresearcher, proposedseveralideasandexperiments, implementedreference-guidedgrading, wrote the initial draft and advised the project.ReferencesJ. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman,S. Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 , 2023.R. Agarwal, A. Singh, L. M. Zhang, B. Bohnet, S. Chan, A. Anand, Z. Abbas, A. Nova, J. D. Co-Reyes,E. Chu, et al. Many-shot in-context learning. arXiv preprint arXiv:2404.11018 , 2024.Z. Ankner, M. Paul, B. Cui, J. D. Chang, and P. Ammanabrolu. Critique-out-loud reward models. arXivpreprint arXiv:2408.11791 , 2024.Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirhoseini, C. McK-innon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073 ,2022.M. Besta, L. Paleari, A. Kubicek, P. Nyczyk, R. Gerstenberger, P. Iff, T. Lehmann, H. Niewiadomski, andT. Hoefler. Checkembed: Effective verification of llm solutions to open-ended tasks. arXiv preprintarXiv:2406.02524 , 2024.S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driessche, J.-B.Lespiau, B. Damoc, A. Clark, et al. Improving language models by retrieving from trillions of tokens.InInternational conference on machine learning , pages 2206–2240. PMLR, 2022.B. Brown, J. Juravsky, R. Ehrlich, R. Clark, Q. V. Le, C. Ré, and A. Mirhoseini. Large language monkeys:Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787 , 2024.E. Charniak and M. Johnson. Coarse-to-fine n-best parsing and maxent discriminative reranking. InProceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05) , pages173–180, 2005.A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton,S. Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of Machine LearningResearch , 24(240):1–113, 2023.H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, Y. Li, X. Wang, M. Dehghani, S. Brahma,et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416 , 2022.14Generative Verifiers: Reward Modeling as Next-Token PredictionK.Cobbe,V.Kosaraju,M.Bavarian,M.Chen,H.Jun,L.Kaiser,M.Plappert,J.Tworek,J.Hilton,R.Nakano,et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 , 2021.Gemma Team, T. Mesnard, C. Hardin, R. Dadashi, S. Bhupatiraju, S. Pathak, L. Sifre, M. Rivière, M. S.Kale, J. Love, et al. Gemma: Open models based on gemini research and technology. arXiv preprintarXiv:2403.08295 , 2024a.Gemma Team, M. Riviere, S. Pathak, P. G. Sessa, C. Hardin, S. Bhupatiraju, L. Hussenot, T. Mesnard,B. Shahriari, A. Ramé, et al. Gemma 2: Improving open language models at a practical size. arXivpreprint arXiv:2408.00118 , 2024b.G.T.Google, R.Anil, S.Borgeaud, Y.Wu, J.-B.Alayrac, J.Yu, R.Soricut, J.Schalkwyk, A.M.Dai, A.Hauth,et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805 , 2023.D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuringmathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874 , 2021.A. Hosseini, X. Yuan, N. Malkin, A. Courville, A. Sordoni, and R. Agarwal. V-star: Training verifiers forself-taught reasoners. arXiv preprint arXiv:2402.06457 , 2024.S. Kapoor, N. Gruver, M. Roberts, K. Collins, A. Pal, U. Bhatt, A. Weller, S. Dooley, M. Goldblum, andA. G. Wilson. Large language models must be taught to know what they don’t know. arXiv preprintarXiv:2406.08391 , 2024.S. Kim, J. Shin, Y. Cho, J. Jang, S. Longpre, H. Lee, S. Yun, S. Shin, S. Kim, J. Thorne, et al. Prometheus:Inducing fine-grained evaluation capability in language models. In The Twelfth International Conferenceon Learning Representations , 2023.D. P. Kingma. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 , 2014.N. Lambert, V. Pyatkin, J. Morrison, L. Miranda, B. Y. Lin, K. Chandu, N. Dziri, S. Kumar, T. Zick, Y. Choi,etal. Rewardbench: Evaluatingrewardmodelsforlanguagemodeling. arXivpreprintarXiv:2403.13787 ,2024.H. Lightman, V. Kosaraju, Y. Burda, H. Edwards, B. Baker, T. Lee, J. Leike, J. Schulman, I. Sutskever, andK. Cobbe. Let’s verify step by step. arXiv preprint arXiv:2305.20050 , 2023.Z. Lin, D. Pathak, B. Li, J. Li, X. Xia, G. Neubig, P. Zhang, and D. Ramanan. Evaluating text-to-visualgeneration with image-to-text generation. arXiv preprint arXiv:2404.01291 , 2024.Z. Ling, Y. Fang, X. Li, Z. Huang, M. Lee, R. Memisevic, and H. Su. Deductive verification of chain-of-thought reasoning. Advances in Neural Information Processing Systems , 36, 2024.Y. Liu, A. Singh, C. D. Freeman, J. D. Co-Reyes, and P. J. Liu. Improving large language model fine-tuningfor solving math problems. arXiv preprint arXiv:2310.10047 , 2023.I. Loshchilov and F. Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 ,2017.L. Luo, Y. Liu, R. Liu, S. Phatale, H. Lara, Y. Li, L. Shu, Y. Zhu, L. Meng, J. Sun, et al. Improve mathematicalreasoning in language models by automated process supervision. arXiv preprint arXiv:2406.06592 ,2024.15Generative Verifiers: Reward Modeling as Next-Token PredictionN. McAleese, R. M. Pokorny, J. F. C. Uribe, E. Nitishinskaya, M. Trebacz, and J. Leike. Llm critics helpcatch llm bugs. arXiv preprint arXiv:2407.00215 , 2024.A. Meurer, C. P. Smith, M. Paprocki, O. Čertík, S. B. Kirpichev, M. Rocklin, A. Kumar, S. Ivanov, J. K.Moore, S. Singh, et al. Sympy: symbolic computing in python. PeerJ Computer Science , 3:e103, 2017.R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, et al.Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332 ,2021.A. Pal, D. Karkhanis, S. Dooley, M. Roberts, S. Naidu, and C. White. Smaug: Fixing failure modes ofpreference optimisation with dpo-positive. arXiv preprint arXiv:2402.13228 , 2024.R. Y. Pang, W. Yuan, K. Cho, H. He, S. Sukhbaatar, and J. Weston. Iterative reasoning preferenceoptimization. arXiv preprint arXiv:2404.19733 , 2024.R. Rafailov, A. Sharma, E. Mitchell, C. D. Manning, S. Ermon, and C. Finn. Direct preference optimization:Your language model is secretly a reward model. Advances in Neural Information Processing Systems ,36, 2024.A. Roberts, H. W. Chung, A. Levskaya, G. Mishra, J. Bradbury, D. Andor, S. Narang, B. Lester, C. Gaffney,A. Mohiuddin, C. Hawthorne, A. Lewkowycz, A. Salcianu, M. van Zee, J. Austin, S. Goodman, L. B.Soares, H. Hu, S. Tsvyashchenko, A. Chowdhery, J. Bastings, J. Bulian, X. Garcia, J. Ni, A. Chen,K. Kenealy, J. H. Clark, S. Lee, D. Garrette, J. Lee-Thorp, C. Raffel, N. Shazeer, M. Ritter, M. Bosma,A. Passos, J. Maitin-Shepard, N. Fiedel, M. Omernick, B. Saeta, R. Sepassi, A. Spiridonov, J. Newlan,and A. Gesmundo. Scaling up models and data with t5xandseqio.arXiv preprint arXiv:2203.17189 ,2022. URL https://arxiv.org/abs/2203.17189 .W. Saunders, C. Yeh, J. Wu, S. Bills, L. Ouyang, J. Ward, and J. Leike. Self-critiquing models for assistinghuman evaluators. arXiv preprint arXiv:2206.05802 , 2022.T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, E. Hambro, L. Zettlemoyer, N. Cancedda,and T. Scialom. Toolformer: Language models can teach themselves to use tools. Advances in NeuralInformation Processing Systems , 36, 2024.A. Setlur, S. Garg, X. Geng, N. Garg, V. Smith, and A. Kumar. Rl on incorrect synthetic data scales theefficiency of llm math reasoning by eight-fold. arXiv preprint arXiv:2406.14532 , 2024.A. Singh, J. D. Co-Reyes, R. Agarwal, A. Anand, P. Patil, P. J. Liu, J. Harrison, J. Lee, K. Xu, A. Parisi, et al.Beyond human data: Scaling self-training for problem-solving with language models. arXiv preprintarXiv:2312.06585 , 2023.N. Stiennon, L. Ouyang, J. Wu, D. Ziegler, R. Lowe, C. Voss, A. Radford, D. Amodei, and P. F. Christiano.Learning to summarize with human feedback. Advances in Neural Information Processing Systems , 33:3008–3021, 2020.Z. Sun, L. Yu, Y. Shen, W. Liu, Y. Yang, S. Welleck, and C. Gan. Easy-to-hard generalization: Scalablealignment beyond human supervision. arXiv preprint arXiv:2403.09472 , 2024.16Generative Verifiers: Reward Modeling as Next-Token PredictionM. Suzgun, N. Scales, N. Schärli, S. Gehrmann, Y. Tay, H. W. Chung, A. Chowdhery, Q. V. Le, E. H. Chi,D. Zhou, et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXivpreprint arXiv:2210.09261 , 2022.G. Team, M. Reid, N. Savinov, D. Teplyashin, T. Lillicrap, J.-b. Alayrac, R. Soricut, A. Lazaridou, O. Firat,J. Schrittwieser, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens ofcontext. arXiv e-prints , pages arXiv–2403, 2024.J.Uesato,N.Kushman,R.Kumar,F.Song,N.Siegel,L.Wang,A.Creswell,G.Irving,andI.Higgins. Solvingmath word problems with process-and outcome-based feedback. arXiv preprint arXiv:2211.14275 ,2022.P. Wang, L. Li, Z. Shao, R. Xu, D. Dai, Y. Li, D. Chen, Y. Wu, and Z. Sui. Math-shepherd: A label-freestep-by-step verifier for llms in mathematical reasoning. arXiv preprint arXiv:2312.08935 , 2023.T. Wang, I. Kulikov, O. Golovneva, P. Yu, W. Yuan, J. Dwivedi-Yu, R. Y. Pang, M. Fazel-Zarandi, J. Weston,and X. Li. Self-taught evaluators. arXiv preprint arXiv:2408.02666 , 2024.X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou. Self-consistencyimproves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171 , 2022.J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al. Chain-of-thoughtprompting elicits reasoning in large language models. Advances in neural information processing systems ,35:24824–24837, 2022.M. Wortsman, P. J. Liu, L. Xiao, K. Everett, A. Alemi, B. Adlam, J. D. Co-Reyes, I. Gur, A. Kumar,R. Novak, et al. Small-scale proxies for large-scale transformer training instabilities. arXiv preprintarXiv:2309.14322 , 2023.T. Wu, W. Yuan, O. Golovneva, J. Xu, Y. Tian, J. Jiao, J. Weston, and S. Sukhbaatar. Meta-rewardinglanguagemodels: Self-improvingalignmentwithllm-as-a-meta-judge. arXivpreprintarXiv:2407.19594 ,2024.R.Yang,R.Ding,Y.Lin,H.Zhang,andT.Zhang. Regularizinghiddenstatesenableslearninggeneralizablereward model for llms. arXiv preprint arXiv:2406.10216 , 2024.S. Yao, D. Yu, J. Zhao, I. Shafran, T. Griffiths, Y. Cao, and K. Narasimhan. Tree of thoughts: Deliberateproblem solving with large language models. Advances in Neural Information Processing Systems , 36,2024.Z. Ye, F. Greenlee-Scott, M. Bartolo, P. Blunsom, J. A. Campos, and M. Gallé. Improving reward modelswith synthetic critiques. arXiv preprint arXiv:2405.20850 , 2024.F. Yu, A. Gao, and B. Wang. Ovm, outcome-supervised value models for planning in mathematicalreasoning. In Findings of the Association for Computational Linguistics: NAACL 2024 , pages 858–875,2024.W. Yuan, R. Y. Pang, K. Cho, S. Sukhbaatar, J. Xu, and J. Weston. Self-rewarding language models. arXivpreprint arXiv:2401.10020 , 2024.17Generative Verifiers: Reward Modeling as Next-Token PredictionE. Zelikman, Y. Wu, J. Mu, and N. Goodman. Star: Bootstrapping reasoning with reasoning. Advances inNeural Information Processing Systems , 35:15476–15488, 2022.E.Zhang, V.Zhu, N.Saphra, A.Kleiman, B.L.Edelman, M.Tambe, S.M.Kakade, andE.Malach. Transcen-dence: Generative models can outperform the experts that train them. arXiv preprint arXiv:2406.11741 ,2024.Y. Zhang, Y. Li, L. Cui, D. Cai, L. Liu, T. Fu, X. Huang, E. Zhao, Y. Zhang, Y. Chen, et al. Siren’s song in theai ocean: a survey on hallucination in large language models. arXiv preprint arXiv:2309.01219 , 2023.L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. Xing, et al. Judgingllm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems ,36, 2024.18Generative Verifiers: Reward Modeling as Next-Token PredictionAppendicesA. Training Data Generation for VerifiersProblem Solution to VerifyLast LetterConcatenationTask: take the last letters of each words andconcatenate them.Q: Jacob , VivianA: Let’s think step by step.J a c o b -> b (List so far: [b])V i v i a n -> n (List so far: [b n])The answer is: b nLet’s verify step by step.The given answer is: b , n. The input is: Jacob , Vivian.Is the length of the final answer the same as the number of input words? YesIs ‘b’ the last letter of ‘J a c o b’? YesIs ‘n’ the last letter of ‘V i v i a n’? YesIs the final answer correct? YesWord Sorting Task: sort a list of words alphabetically.Q: cell, apple, courageA: Let’s think step by step.Identify the alphabetical locations of 1st letters:cell -> c: 3. apple -> a: 1. courage -> c: 3.We need to sort the numbers: 3, 1, 3Sorting the numbers: 1 < 3 = 3The answer is: apple, courage, cellLet’s verify step by step.The given answer is: apple, courage, cell. The input is: cell, apple, courage.Does the final answer contain the same words as the original input? YesAre ‘a p p l e’, ‘c o u r a g e’ in the correct order? YesAre ‘c o u r a g e’, ‘c e l l’ in the correct order? NoIs the answer correct? NoTable A.1|Algorithmic reasoning tasks that we consider. In thes tasks, we can generate ground-truth verificationchain-of-thoughts as the training data for a generative verifier. Those synthetic tasks help us understand whethera generative verifier can outperform a discriminative verifier in the ideal scenario where there is no noise in theverification CoT training data.•Last Letter Concatenation (Wei et al., 2022): Given a list of words, the task is to concatenatethe last letters of each word (for instance, “Noah Paul Elisha Rebecca” →“hlaa”). To generate thetraining data, for each length {2,3,4}, we generate 350problem queries by randomly samplingfrom the set of words in original training set; for each problem query, we generate 128 attemptsfrom Gemma-2B (Gemma Team et al., 2024a) model. This gives us a total of about 50K trainingdata points after de-duplication. We train verifiers on examples of lengths {2,3,4}(here the lengthrefers to how many words are in the input list), and evaluate the verifier performance on length6. We use the format in Table A.1 to algorithmically generate ground-truth verification CoT fortraining.•Word Sorting (Suzgun et al., 2022): Given a list of words, sort them in alphabetical order. We trainverifiers on a dataset comprised of {2,3,4}words in each example, and evaluate the performanceon length 5. For each length, we generate 4096 lists of words as the problem queries; for eachproblem, we generate 64 attempts from Gemma-2B. After de-duplication and filtering out invalidresponses, we have a total of about 100K training data points. We also algorithmically generate19Generative Verifiers: Reward Modeling as Next-Token Predictionground-truth verification CoT for training (see Table A.1).•Grade School Math (Cobbe et al., 2021): We follow the original train/test split and use 1.3Kproblems for test, 128 problems for validation, and about 7.2K problems for training. We generate50 solutions per problem, and randomly sample at max 16 correct solutions and 16 incorrectsolutions per problem as the training set. We evaluate the verifier performance on 16 solutions perproblem in the test set.Table A.2|We use model-generated rationales as CoT training data on GSM with the above prompt with Gemini1.0 Pro. Specifically, we show the model another solution that arrives at the correct answer, which is privilegedinformation that does not exist at test time. This does not require a more capable model: we use the same model togenerate solutions and synthetic rationales in the training data.Prompt for Generating Synthetic Rationales for CoT Verifier on GSMYou are a math teacher. Grade the Solution, verifying correctness step by step.Use Expected Answer to find any erroneous step in the Solution.At the end of the Solution verification, when you give your final grade, write it inthe form "Verification: Is the answer correct (Yes/No)? X", where X is either Yesor No.Question: {problem}Solution: {solution}Expected Answer: {a solution that arrives at the correct answer}Table A.3|Zero-shot prompt for our LLM-as-a-Judge evaluation results based on Gemini 1.0 Pro.Prompt for LLM-as-a-Judge on GSM and MATHYou are a math teacher. Grade the Solution, verifying correctness step by step.At the end of the Solution verification, when you give your final grade, write it inthe form "Verification: Is the answer correct (Yes/No)? X", where X is either Yesor No.Question: {problem}Solution: {solution}B. Hyper-parameters for Verifier TrainingFor Gemma-based verifiers, we pick the best checkpoint based on validation accuracy of verification onheld out problems and solutions. We always use data balancing between 50% correct solutions and 50%incorrect solutions in training.GenRMverifiers After doing a sweep of learning rates (LR), we find that an LR of [2e−6,1e−6,5e−7]works well for our tasks considered (with LR= 2e−6generally being the best). We use a weight decay of1e−2, and do not apply any dropout. We use the Adam optimizer (Kingma, 2014) with decoupled weightdecay (Loshchilov and Hutter, 2017) and a gradient norm clipping of 1.0. We use a linear warmup of1000gradient steps, and a cosine decay schedule that decays to 10%of the peak learning rate after adecay period. We finetune for 300K steps with a batch size of 64, and use seqio (Roberts et al., 2022)library to create data mixtures.20Generative Verifiers: Reward Modeling as Next-Token PredictionDiscriminative RMs We finetune Gemma-based discriminative RMs by using a special token’s logitfor classification. We chose the best performing ORM on our validation sets by launching a large sweepover learning rates [1e−7,5e−7,1e−6,2e−6,3e−6,5e−6], weight decay[1e−3,1e−2,1e−1]anddropouts[1e−3,5e−3,1e−2,0]. We also schedule the learning rate with a linear ramp up and a cosinedecay. We use a Z-loss =10−4·log2Z(whereZis the softmax normalizer of all logits) for regularizationpurposes (Chowdhery et al., 2023; Wortsman et al., 2023). Results obtained with learning rate 1e−7and dropout= 0.DPOWe first finetune Gemma-based generative models using SFT on correct solutions to obtain areference policy πref, and then initialize from this reference policy to train generator πDPOwith the DPOloss on a dataset of pairs of correct and incorrect solutions. We conduct a hyper-parameter sweep for boththe learning rate (LR) and the βcoefficient in DPO loss: for LR we sweeped [1e−7,5e−7,1e−6,2e−6]and found 1e−6to work best; for βwe considered[0.01,0.1,0.5,1.0,2.0]and used 0.1. After DPO istrained, instead of using r=logπDPO(solution|question)−logπref(solution|question)as the score (asdefined in DPO’s derivation), we find that directly the sequence log probability of the final DPO policylogπDPO(solution|question)as the score (without subtracting the log prob from reference policy) resultsin better performance in verification (see Figure D.5); a similar finding was also noted in (Hosseini et al.,2024).C. Additional DetailsDatafilteringforsyntheticverificationCoT Since the answer checker (either based on string matchingor Sympy library (Meurer et al., 2017)) is not perfect, there will inevitably be false negatives in themodel-generated solutions. Besides, it is possible for a solution to arrive at the right answer with anincorrect reasoning path, so there will also be false positives in solutions. We use the following strategy tomitigate the issue of false negatives and false positives: when selecting the synthetic verification rationales(generated under reference guidance) for training, we only keep the rationales from solutions wheremore than 50% of verification rationales agree with the correctness returned by the answer checker.Weighted Self-Consistency typically sums the verifier scores (across solutions) for each answer, andpicks the answer with the highest summed scores. We find that summing the top-K scores (rather thansumming all scores) for each answer slightly improves performance. This means that for each answer, weonly consider the correctness of its top-K solutions. We use K=6 for GSM and K=4 for MATH.D. Additional ResultsAblating generation loss weight ( λ) inGenRM. Adding too much generation data negatively impactsverification, while intermediate values yield the best results, as shown in Figure D.3. By default, allGenRM experiments use unified training for verification with solution generation (5), with λ=1/3foralgorithmic tasks and λ=1/4for GSM8K.Data scaling for CoT verifiers. GenRM-CoT shows that the GenRM-CoT performance improves as weincrease the number of solutions per problem from 8 to 32, in terms of RM accuracy and Best-of-NAccuracy, as shown in Figure D.2.21Generative Verifiers: Reward Modeling as Next-Token Prediction1 2 4 816 32Number of Solutions (N)6%12%18%24%30%36%% Problem Solved (Best-of-N)Algorithmic Reasoning (2 tasks)1 2 4 8 16Number of Solutions (N)72%76%80%84%88%92%Grade-School Math (GSM8K)1 2 4 816 32Number of Solutions (N)28%30%32%35%38%Transfer to MATH (GSM-Verifiers)DPO Discriminative RM GenRMFigure D.1|GenRM (without using CoT) performs slightly better or comparable to Discriminative RM acrossdifferent tasks, while outperforming DPO verifiers.8 16 3285%86%87%88%89%RM AccuracyGemma-7B8 16 32Training Solutions Per Problem85%86%87%88%89%% Problems Solved(Best-of-16)Gemma-7BGSM8K: Scaling Number of SolutionsFigure D.2|Data scaling for GenRM-CoT on GSM8K withGemma-7B. We observe that both the RM accuracy and Best-of-N performance improve as we scale up the number of ratio-nales per solution and solutions per problem. When addingmore solutions, we use 4 rationales per solution. Here, wecompute GenRM-CoT scores with CoT rationales generatedusing greedy decoding, as discussed in (6).0 1/4 1/3 1/2 1Generation Data Fraction ()88%89%90%% of Problems Solved (Best-of-16)Impact of on GenRMFigure D.3|Impact of generation loss coefficient ( λ)onGenRM verifier with Gemma-7B on GSM8K test results.Adding a solution generation loss ( λ >0) can further helpGenRM, with λ=1/4being a good value for GSM.1 2 4 8 16Number of Solutions (N)72%76%80%84%88%92%% Problems Solved(Best-of-16)GSM8K: Effect of Weighted SCGenRM-CoTSC + GenRM-CoTFigure D.4|Weighted Self-Consistency on GSM8K : UnlikeMATH, GSM8K shows no visible gain from Weighted SC: thepercentage of problems solved increases only slightly from93.4% to 93.5% with 16 solutions, likely because improve-ment potential has saturated.1 2 4 8 16Number of Solutions (N)75%78%81%84%87%% Problems Solved(Best-of-16)GSM8K: Ablation on DPO RewardDPO reward (without reference policy)DPO reward (with reference policy)Figure D.5|AblationofDPOrewardfunction . We find thatdirectly the sequence log probability of the final DPO policyas the score (without subtracting the log prob from referencepolicy) results in better performance.22Generative Verifiers: Reward Modeling as Next-Token PredictionE.Examples Verification rationales from GenRM-CoT: GSM8K Test and MATH500Example of GenRM-CoT on Grade School MathQuestion : Carol spends 4 hours writing a song, half that much time recording it, and 90 minutes editing it. Whatpercentage of her total work time did she spend editing?Answer: Half of 4 hours is 2 hours. So the time spent recording is 2 hours. Add the editing time of 90 minutes and thetotal time for the entire project is 4 + 2 + 1.5 = 7.5 hours. That means she spent (90 / 7.5) * 100 = 12% of the totaltime editing. The answer is 12.Discriminative RM score: 0.8414GenRM-CoT : Let’s verify step by step.**Verification:**Step 1: Half of 4 hours is 2 hours.Expected Answer: Correct.Step 2: Add the editing time of 90 minutes and the total time for the entire project is 4 + 2 + 1.5 = 7.5 hours.Expected Answer: Incorrect. The editing time should be converted to hours first: 90 minutes = 90 / 60 = 1.5 hours.Then, the total time is 4 + 2 + 1.5 = 7.5 hours.Step 3: That means she spent (90 / 7.5) * 100 = 12% of the total time editing.Expected Answer: Incorrect. The editing time should be in hours, so it should be (1.5 / 7.5) * 100 = 20%.Verification: Is the answer correct (Yes/No)? NoGen-RM (Majority Voting) score: 0.2498Table E.1|GenRM CoT Example 123Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School MathQuestion : In a 60-item quiz, 40%of the questions are easy, and the rest are equally divided as average and difficultquestions. If Aries is sure to get 75%of the easy questions, and half of the average and difficult questions correctly, howmany points is she sure to get?Answer:40%of 60 items are easy questions, so there are 60 * 40 / 100 = 24 easy questions. Aries is sure to get 75%ofthose, so she will get 24 * 75 / 100 = 18 easy questions correct. The rest of the questions are equally divided as averageand difficult questions. So, there are (60 - 24) / 2 = 18 average and difficult questions each. Aries is sure to get half ofthose correct, so she will get 18 * 50 / 100 = 9 average and difficult questions correct. In total, she is sure to get 18 + 9= 27 questions correct. Each question is worth one point, so she is sure to get 27 points. The answer is 27.Discriminative RM score: 0.6107GenRM-CoT : Let’s verify step by step.<b>Step 1:</b>40%of 60 items are easy questions, so there are 60 * 40 / 100 = 24 easy questions.<b>Expected Answer:</b>40%of 60 = 24 easy questions.<b>Verification:</b> Correct<b>Step 2:</b>Aries is sure to get 75% of those, so she will get 24 * 75 / 100 = 18 easy questions correct.<b>Expected Answer:</b>75% of 24 = 18 easy questions.<b>Verification:</b> Correct<b>Step 3:</b>The rest of the questions are equally divided as average and difficult questions. So, there are (60 - 24) / 2 = 18 averageand difficult questions each.<b>Expected Answer:</b>(60 - 24) / 2 = 18 average and difficult questions each.<b>Verification:</b> Correct<b>Step 4:</b>Aries is sure to get half of those correct, so she will get 18 * 50 / 100 = 9 average and difficult questions correct.<b>Expected Answer:</b>50%of (60 - 24) = 18 average and difficult questions.<b>Verification:</b> Erroneous<b>Step 5:</b>In total, she is sure to get 18 + 9 = 27 questions correct. Each question is worth one point, so she is sure to get 27 points.The answer is 27.<b>Expected Answer:</b>18 easy + 9 average + 9 difficult = 36 total correct questions.<b>Verification:</b> Erroneous<b>Final Grade:</b>Verification: Is the answer correct (Yes/No)? NoGen-RM (Majority Voting) score: 0.1151Table E.2|GenRM CoT Example 224Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School MathQuestion : Sandy wants to lose as much weight as Joey does but needs 4 weeks to lose the same amount of weight thatJoey loses in a single week. If Joey loses 8 pounds in 4 weeks, how many weeks will it take Sandy to lose the sameamount of weight?Answer: Since Joey loses 8 pounds in 4 weeks, Sandy will take 4 times as long to lose the same amount of weight. Thatmeans she will take 4 * 4 = 16 weeks. The answer is 16.Discriminative RM score: 0.0399GenRM-CoT : Let’s verify step by step.**Solution Verification:****Step 1:**Solution: Since Joey loses 8 pounds in 4 weeks, Sandy will take 4 times as long to lose the same amount of weight.Expected Answer: We are given that Sandy needs 4 weeks for what Joey does in 1 week. Therefore, we can multiply thetime Joey takes by 4 to find out how much time Sandy will take.**Grade:** Correct**Step 2:**Solution: That means she will take 4 * 4 = 16 weeks.Expected Answer: So, we have 4 * 4 = 16 weeks.**Grade:** Correct**Final Grade:**Verification: Is the answer correct (Yes/No)? YesGen-RM (Majority Voting) score: 0.5131Table E.3|GenRM CoT Example 325Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School MathQuestion : Rory is retrieving tennis balls from the court after a tennis match. In the first of three sets, he had to retrievefour more balls than in the second set. In the third set, he retrieved half as many balls as in the second. He retrieved 19tennis balls in all. How many tennis balls did he retrieve in the first set of the match?Answer: Let the number of balls retrieved in the second set be x. In the first set, he retrieved four more, so he retrieved x+ 4 balls. In the third set, he retrieved half as many, which would be x / 2. Since he retrieved 19 balls in all, we have:x + x + 4 + x / 2 = 193x / 2 + 4 = 193x + 8 = 383x = 30x = 10So, he retrieved 10 balls in the second set. In the first set, he retrieved x + 4 = 10 + 4 = 14 tennis balls.The answer is 14.Discriminative RM score: 0.9413GenRM-CoT : Let’s verify step by step.(See next page···)Table E.4|GenRM CoT Example 426Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School Math···ContinuedGenRM-CoT : Let’s verify step by step.Step 1:Solution: Let the number of balls retrieved in the second set be x.Expected Answer: Let the number of balls in the second set be x.Verification: YesStep 2:Solution: In the first set, he retrieved four more, so he retrieved x + 4 balls.Expected Answer: Then in the first set he had to retrieve x + 4 balls.Verification: YesStep 3:Solution: In the third set, he retrieved half as many, which would be x / 2.Expected Answer: In the third set he retrieved half as many as in the second set, that is x/2.Verification: YesStep 4:Solution: Since he retrieved 19 balls in all, we have:x + x + 4 + x / 2 = 19Expected Answer: In all, this should add up to 19. Therefore, x + x + 4 + x/2 = 19.Verification: YesStep 5:Solution: 3x / 2 + 4 = 19Expected Answer: Simplifying this equation: 2.5x + 4 = 19Verification: NoStep 6:Solution: 3x + 8 = 38Expected Answer: 2.5x = 15Verification: NoStep 7:Solution: 3x = 30Expected Answer: x = 6Verification: NoStep 8:Solution: x = 10Expected Answer: Therefore, the number of balls in the first set was x + 4 = 6 + 4 = 10.Verification: NoFinal Answer:Solution: The answer is 14.Expected Answer: The answer is 10.Verification: Is the answer correct (Yes/No)? NoGen-RM (Majority Voting) score: 0.3267Table E.5|GenRM CoT Example 4 (Continued)27Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School MathQuestion : When the water is cold Ray swims a mile in 16 minutes. When the water is warm Ray swims a mile in 2minutes more than twice as long. How much longer does Ray take to swim 3 miles on a hot day than a cold day?Answer: When the water is warm Ray swims a mile in 2 minutes more than twice as long as when the water is cold. Thismeans he takes 2 + 2 * 16 = 2 + 32 = 34 minutes to swim a mile on a hot day.Toswim3milesonahotdayhewilltake3*34=102minutes. Onacolddayhetakes16*3=48minutestoswim3miles.The difference in time is 102 - 48 = 54 minutes. The answer is 54.Discriminative RM score: 0.2452GenRM-CoT : Let’s verify step by step.**Verification:**1. "When the water is warm Ray swims a mile in 2 minutes more than twice as long as when the water is cold."- This step is correct.2. "This means he takes 2 + 2 * 16 = 2 + 32 = 34 minutes to swim a mile on a hot day."- This step is correct.3. "To swim 3 miles on a hot day he will take 3 * 34 = 102 minutes."- This step is correct.4. "On a cold day he takes 16 * 3 = 48 minutes to swim 3 miles."- This step is correct.5. "The difference in time is 102 - 48 = 54 minutes."- This step is correct.6. "The answer is 54."- This step is correct.**Expected Answer:**- The expected answer is also 54 minutes.Verification: Is the answer correct (Yes/No)? YesGen-RM (Majority Voting) score: 0.9963Table E.6|GenRM CoT Example 528Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School MathQuestion : The ice cream parlor was offering a deal, buy 2 scoops of ice cream, get 1 scoop free. Each scoop cost $1.50.If Erin had $6.00, how many scoops of ice cream should she buy?Answer : Erin had $6.00. First, she should buy 2 scoops of ice cream. Since each scoop costs $1.50, she will spend 2 *$1.50 = $3.00 on the two scoops. After that, she can get 1 scoop free. Since the third scoop was free, she will not spendanything on the third scoop. So, she can afford three scoops. The answer is 3.Discriminative RM score: 0.9973GenRM-CoT : Let’s verify step by step.**Solution Verification:**Step 1: Erin had $6.00. (Correct)Step 2: She bought 2 scoops of ice cream at $1.50 each, spending $3.00. (Correct)Step 3: She received 1 scoop free. (Correct)Step 4: She can afford three scoops. (Incorrect)Step 5: The answer is 3. (Incorrect)**Expected Answer:**Step 1: Erin had $6.00. (Correct)Step 2: She can buy 6/$1.50 = 4 scoops of ice cream. (Correct)Step 3: She will get 2 free scoops for buying 2. (Correct)Step 4: She will get a total of 4 + 2 = 6 scoops. (Correct)Step 5: The answer is 4. (Correct)Verification: Is the answer correct (Yes/No)? NoGen-RM (Majority Voting) score: 0.0487Table E.7|GenRM CoT Example 629Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School MathQuestion : Cherrie wants to buy Christmas gifts for her 5 friends. 2 of her friends want 5 gifts and the other 3 friendswant 2 gifts for themselves. Cherrie goes ahead and buys them and decides to further surprise them by getting 10 morerandom gifts. How many gifts did Cherrie buy in total?Answer : 2 of her friends want 5 gifts each. So, they want 2 * 5 = 10 gifts. 3 other friends want 2 gifts each. So, theywant 3 * 2 = 6 gifts. Add those 6 gifts to the 10 gifts from the beginning, that’s 16 gifts. Now, Cherrie decided to buy 10more gifts, so in total she bought 16 + 10 = 26 gifts. The answer is 26.Discriminative RM score: 0.1710GenRM-CoT : Let’s verify step by step.<b>Verification:</b><b>Step 1:</b>Solution: 2 of her friends want 5 gifts each. So, they want 2 * 5 = 10 gifts.Expected Answer: Cherrie has 5 friends. 2 of them want 5 gifts, so in total they want 2 * 5 = 10 gifts.<b>Verification:</b> This step is correct.<b>Step 2:</b>Solution: 3 other friends want 2 gifts each. So, they want 3 * 2 = 6 gifts.Expected Answer: The other 3 friends want 2 gifts each, so in total they want 3 * 2 = 6 gifts.<b>Verification:</b> This step is correct.<b>Step 3:</b>Solution: Add those 6 gifts to the 10 gifts from the beginning, that’s 16 gifts.Expected Answer: So in total, she needs to buy 10 + 6 = 16 gifts for her friends.<b>Verification:</b> This step is correct.<b>Step 4:</b>Solution: Now, Cherrie decided to buy 10 more gifts, so in total she bought 16 + 10 = 26 gifts.Expected Answer: On top of that, she wants to get 10 more random gifts. Therefore, in total she needs to buy 16 + 10 =26 gifts.<b>Verification:</b> This step is correct.<b>Final Grade:</b>Verification: Is the answer correct (Yes/No)? YesGen-RM (Majority Voting) score: 0.8017Table E.8|GenRM CoT Example 730Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School MathQuestion : Juan wants to add croissants to his bakery menu. It takes 1/4 pound of butter to make 1 dozen croissants. Hewants to start with making 4 dozen a day for a week. How many pounds of butter will he need to make these croissants?Answer: 1 dozen is equal to 12 croissants. He wants to make 4 dozen a day, which is 4 * 12 = 48 croissants. He wants tomake this number of croissants for a week, which is 7 days. That means in total he needs to make 48 * 7 = 336 croissants.It takes 1/4 pound of butter to make 1 dozen croissants. So in total he will need 336 / 12 = 28 pounds of butter. Theanswer is 28.Discriminative RM score: 0.9999GenRM-CoT : Let’s verify step by step.**Solution Verification**Step 1: 1 dozen is equal to 12 croissants.Expected Answer: Correct.Step 2: He wants to make 4 dozen a day, which is 4 * 12 = 48 croissants.Expected Answer: Correct.Step 3: He wants to make this number of croissants for a week, which is 7 days. That means in total he needs to make 48* 7 = 336 croissants.Expected Answer: Correct.Step 4: It takes 1/4 pound of butter to make 1 dozen croissants. So in total he will need 336 / 12 = 28 pounds of butter.Expected Answer: Incorrect. The correct calculation should be 336 / 12 * 1/4 = 7 pounds of butter.Verification: Is the answer correct (Yes/No)? NoGen-RM (Majority Voting) score: 0.0000Table E.9|GenRM CoT Example 831Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School MathQuestion : Susan earns $5 every 10 minutes for an online task she does. If she works between 8 a.m. and 11 a.m. andpauses in between for half an hour, how much money does she earn for the online task?Answer: It is from 8 am to 11 am, which is 3 hours. However, she pauses for 30 minutes or half an hour. So she works for2.5 hours. Every 10 minutes, she earns $5. The number of 10-minute periods in 2.5 hours can be found by dividing 2.5 by1/10. That is 2.5 / (1/10) = 25. She earns $5 every 10 minutes. So in 25 periods, she earns 25 * $5 = $125. The answeris $125.Discriminative RM score: 0.9920GenRM-CoT : Let’s verify step by step.**Step 1:**Solution: It is from 8 am to 11 am, which is 3 hours.Expected Answer: It is from 8 am to 11 am, which is 3 hours.**Verification:** The step is correct.**Step 2:**Solution: However, she pauses for 30 minutes or half an hour. So she works for 2.5 hours.Expected Answer: She pauses for half an hour, which is 1/2 hour. This means she works for 3 - 1/2 = 2.5 hours.**Verification:** The step is correct.**Step 3:**Solution: Every 10 minutes, she earns $5. The number of 10-minute periods in 2.5 hours can be found by dividing 2.5 by1/10. That is 2.5 / (1/10) = 25.Expected Answer: Since she earns $5 every 10 minutes, we can find the number of 10-minute periods in 2.5 hours usingthe formula 2.5 * 60 / 10 = 15.**Verification:** The step is incorrect. The correct number of 10-minute periods should be 15, not 25.**Step 4:**Solution: She earns $5 every 10 minutes. So in 25 periods, she earns 25 * $5 = $125. The answer is $125.Expected Answer: Now we can find the amount she earns by multiplying the number of periods by the amount she earnseach period: 15 * $5 = $75. The answer is $75.**Verification:** The step is incorrect. The amount she earns should be $75, not $125.**Final Grade:**Verification: Is the answer correct (Yes/No)? NoGen-RM (Majority Voting) score: 0.3926Table E.10|GenRM CoT Example 932Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School MathQuestion : Mark is a copy-editor. He edits an equal number of sentences each week for two different publishers, whoeach pay him a different rate per sentence. Publisher B pays Mark twice what Publisher A pays. Mark edits a totalnumber of 1000 sentences each week, and Publisher A pays him 5 cents per sentence. How much does Mark make in aweek, in cents?Answer: Mark edits a total number of 1000 sentences each week. Since Publisher A pays Mark 5 cents per sentence, Markmakes 1000 * 5 = $50 for Publisher A. However, Publisher B pays Mark twice what Publisher A pays, so Mark makes 2 *$50 = $100 for Publisher B. In total, Mark makes $50 + $100 = $150 each week. In cents, this is 150 * 100 = 15000cents. The answer is 15000.Discriminative RM score: 0.9907GenRM-CoT : Let’s verify step by step.**Verification:****Step 1:**Solution: Mark edits a total number of 1000 sentences each week.Expected Answer: Mark edits a total of 1000 sentences each week.**This step is correct.****Step 2:**Solution: Since Publisher A pays Mark 5 cents per sentence, Mark makes 1000 * 5 = $50 for Publisher A.Expected Answer: We know that Publisher A pays him 5 cents per sentence, so for 500 sentences he makes 5 * 500 =2500 cents.**This step is incorrect.****Step 3:**Solution: However, Publisher B pays Mark twice what Publisher A pays, so Mark makes 2 * $50 = $100 for Publisher B.Expected Answer: Publisher B pays him twice as much, so he makes 2 * 2500 = 5000 cents for the 500 sentences fromPublisher B.**This step is incorrect.****Step 4:**Solution: In total, Mark makes $50 + $100 = $150 each week.Expected Answer: In total he makes 2500 + 5000 = 7500 cents.**This step is incorrect.****Step 5:**Solution: In cents, this is 150 * 100 = 15000 cents.Expected Answer: The answer is 7500.**This step is incorrect.****Final Grade:**Verification: Is the answer correct (Yes/No)? NoGen-RM (Majority Voting) score: 0.4293Table E.11|GenRM CoT Example 1033Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School MathQuestion : Tim decides to cancel his cable subscription and get streaming services. He gets Netflix for $10 a month.Hulu and Disney Plus normally cost $10 a month eachbut he saves 20% for bundling. How much money does he saveby cancelling his $60 cable package?Answer: Tim pays $60 for cable. He gets Netflix for$10 and the bundle of Hulu and Disney Plus costs $10 * 80% = $8.So he pays $10 + $8 = $18 for Netflix and the bundle. Now he saves $60 - $18 = $42 compared to the cable package.The answer is 42.Discriminative RM score: 0.9999GenRM-CoT : Let’s verify step by step.**Verification:**1. **Tim pays $60 for cable.** (Correct)2. **He gets Netflix for $10.** (Correct)3. **The bundle of Hulu and Disney Plus costs $10 * 80% = $8.** (Incorrect)- Expected Answer: The bundle of Hulu and Disney Plus costs $10 + $10 = $20. With the 20% discount, the total cost is$20 * 0.8 = $16.4. **So he pays $10 + $8 = $18 for Netflix and the bundle.** (Incorrect)- Expected Answer: So he pays $10 + $16 = $26 for Netflix and the bundle.5. **Now he saves $60 - $18 = $42 compared to the cable package.** (Incorrect)- Expected Answer: Now he saves $60 - $26 = $34 compared to the cable package.Verification: Is the answer correct (Yes/No)? NoGen-RM (Majority Voting) score: 0.0015Table E.12|GenRM CoT Example 1134Generative Verifiers: Reward Modeling as Next-Token PredictionExample of GenRM-CoT on Grade School MathProblem : Digimon had its 20th anniversary. When it came out John was twice as old as Jim. If John is 28 now how oldis Jim?Solution : Let Jim’s age be x when Digimon came out. That means John’s age is 2x when Digimon came out.20 years ago, John’s age was 2x - 20 (current age - 20 years) and Jim’s age was x - 20 (current age - 20 years).We know that 2x - 20 = 28 (since John’s current age is 28)2x = 48x = 24So, when Digimon came out, Jim was 24. Since it’s been 20 years, Jim is now 24 + 20 = 44 years old.The answer is 44.Discriminative RM score: 0.9778GenRM-CoT : Let’s verify step by step.**Solution Verification:**1. Let Jim’s age be x when Digimon came out. That means John’s age is 2x when Digimon came out. (Cor-rect)2. 20 years ago, John’s age was 2x - 20 (current age - 20 years) and Jim’s age was x - 20 (current age - 20years). (Correct)3. We know that 2x - 20 = 28 (since John’s current age is 28) (Correct)4. 2x = 48 (Correct)5. x = 24 (Correct)6. So, when Digimon came out, Jim was 24. Since it’s been 20 years, Jim is now 24 + 20 = 44 years old.(Incorrect)**Expected Answer:**Since John was twice as old as Jim when Digimon came out 20 years ago, and John is 28 now, we can calcu-late Jim’s age. Let Jim’s age 20 years ago be x. Then we know that 2x = 28 - 20 = 8, so x = 4. Therefore, Jim’s age nowis 4 + 20 = 24. The answer is 24.**Erroneous Step in the Solution:**The erroneous step in the Solution is step 6, where it concludes that Jim is 44 years old now. According to theExpected Answer, Jim should be 24 years old now.Verification: Is the answer correct (Yes/No)? NoGen-RM (Majority Voting) score: 0.0000Table E.13|GenRM CoT Example 1235Generative Verifiers: Reward Modeling as Next-Token PredictionProblem : A line is parameterized byxy=8−1+t23.The equation of the line can be expressed in the form y=mx+b.Enter the ordered pair (m,b).Solution : We have x=8+2tandy=−1+3t. Solving the first equation for t, we have t=x−82. Substituting this intothe second equation, we get y=−1+3x−82=32x−232. Thus,m=32andb=−232. So, the answer is32,−232. FinalAnswer:32,−232Discriminative RM correctness score: 0.8272GenRM-CoT : Let’s verify step by step.**Verification:****Step 1:**Solution: We have x=8+2tandy=−1+3t.Expected Answer: The equation of the line in matrix form isxy=8−1+t23.This means that the line can be expressed as x = 8 + 2t, y = -1 + 3t.**Verification:** Yes**Step 2:**Solution: Solving the first equation for t, we have t=x−82.Expected Answer: Solving the first equation for t, we have t = (x - 8) / 2.**Verification:** Yes**Step 3:**Solution: Substituting this into the second equation, we get y=−1+3x−82=32x−232.Expected Answer: Substituting this into the second equation, we get y = -1 + 3 * (x - 8) / 2 = -1 + (3x - 24) / 2 = -1 +(3x / 2) - 12 = (3x / 2) - 13.**Verification:** No**Step 4:**Solution: Thus, m=32andb=−232. So, the answer is32,−232.Expected Answer: The equation of the line in the form y = mx + b is therefore y = (3 / 2)x - 13. The answer is(3/2,−13).**Verification:** NoVerification: Is the answer correct (Yes/No)? NoGenRM-CoT (Majority Voting) score: 0.4375Table E.14|MATH (Transfer from GSM): GenRM-CoT Example 136Generative Verifiers: Reward Modeling as Next-Token PredictionProblem : What is√53in simplest radical form?Solution :√53=√49+4=√49+√4=7+2=9Final Answer: 9Discriminative RM correctness score: 0.5974GenRM-CoT : Let’s verify step by step.**Step 1:**Solution:√53=√49+4Expected Answer:√53is a prime number so it cannot be simplified further.**Verification:**The solution incorrectly breaks down 53 into 49 and 4.√53is a prime number, so it cannot be simplified further.**Step 2:**Solution:=√49+√4Expected Answer:√53is a prime number so it cannot be simplified further.**Verification:**Since Step 1 was incorrect, this step is also incorrect.**Step 3:**Solution:=7+2=9Expected Answer:√53is a prime number so it cannot be simplified further.**Verification:**Since Step 1 was incorrect, this step is also incorrect.**Final Grade:**Verification: Is the answer correct (Yes/No)? NoGenRM-CoT (Majority Voting) score: 0.157Table E.15|MATH (Transfer from GSM): GenRM-CoT Example 237Generative Verifiers: Reward Modeling as Next-Token PredictionProblem : Simplify:√2.52−0.722.7−2.5.Solution : √︁2.52−0.722.7−2.5=√︁(2.5)2−(0.7)22.7−2.5=√︁2.52−0.720.2=|2.5−0.7|0.2=1.80.2=9.Final Answer: 9Discriminative RM correctness score: 0.9993GenRM-CoT : Let’s verify step by step.**Step 1:**Solution: √︁2.52−0.722.7−2.5=√︁(2.5)2−(0.7)22.7−2.5Expected Answer:√︁2.52−0.722.7−2.5=√︁(2.5)2−(0.7)22.7−2.5=√6.25−0.490.2Verification: Yes**Step 2:**Solution:=√︁2.52−0.720.2Expected Answer:=√6.25−0.490.2Verification: Yes**Step 3:**Solution:=|2.5−0.7|0.2Expected Answer:=√5.760.2Verification: No**Step 4:**Solution:=1.80.2Expected Answer:=2.40.2Verification: No**Step 5:**Solution: =9.Expected Answer: =12. Verification: NoVerification: Is the answer correct (Yes/No)? NoGenRM-CoT (Majority Voting) score: 0.1233Table E.16|MATH (Transfer from GSM): GenRM-CoT Example 338 |
Cb8RP9KLyh | Regress, Don’t Guess – A Regression-like Loss onNumber Tokens for Language ModelsJonas Zausinger1,2Lars Pennig1,2Kacper Chlodny1,2Vincent Limbach1,2Anna Ketteler1,2Thorben Prein1,2Vishwa Mohan Singh2,3Michael Morris Danziger4Jannis Born4,∗1TU Munich, Germany;2TUM.AI, Germany;3LMU Munich, Germany4IBM Research Europe, Switzerland;Corresponding author: [email protected] language models have exceptional capabilities at text generation, they lacka natural inductive bias for emitting numbers and thus struggle in tasks involvingreasoning over quantities, especially arithmetics. This has particular relevance inscientific datasets where combinations of text and numerical data are abundant.One fundamental limitation is the nature of the CE loss, which assumes a nominal(categorical) scale and thus cannot convey proximity between generated numbertokens. As a remedy, we here present two versions of a number token loss. The firstis based on an Lploss between the ground truth token value and the weighted sumof the predicted class probabilities. The second loss minimizes the Wasserstein-1distance between the distribution of the predicted output probabilities and theground truth distribution. These regression-like losses can easily be added to anylanguage model and extend the CE objective during training. We compare theproposed schemes on a mathematics dataset against existing tokenization, encoding,and decoding schemes for improving number representation in language models.Our results reveal a significant improvement in numerical accuracy when equippinga standard T5 model with the proposed loss schemes.1 IntroductionAs coined by Thawani et al. [14], numbers in natural texts are ubiquitous andimportant , yet system-atically neglected by language models (LMs). Even worse, while Transformers [ 15] were inventedfor NLP, they have permeated various scientific domains (chemistry, biology, etc [ 2,8,1]), wheretabular/numerical data is more prevalent than in NLP and often even fundamental for constructing taskdefinitions: Molecules are labeled with drug efficacy, chemical reactions with yield, and synthesisprocedures are natural text interspersed with quantities and times. Still, LMs notoriously struggleeven with simple arithmetic tasks like three-digit multiplication [5] for multiple reasons:1.Tokenization : Standard subword tokenization splits numbers into arbitrary tokens, disruptingtheir structure. Mitigation strategies include scientific notation [ 18] or digit-level tokenization [ 6],which may also preserve the decimal order of each digit [1].MATH-AI Workshop @38th Conference on Neural Information Processing Systems (NeurIPS 2024).XValNumber[NUM]1TokenHeadNumberHeadLogitsTransformerBlock ×Ntrials5 trials each with 1 V ocabTransformerBlock ×Ntrials ... 1A...Z01...Text tokensNumbertokens92MSE(1, 1.3).0.1*0 + 0.5*1 + 0.4*2 ... = 1.3...LCELCE + λ LNTL5 ...TransformerBlock ×NTexttokens NumbertokensA...Z01...92Number Token LossToken Head01...909 0V ocabynŷW1(yn,ŷn) 9LNTL-MSELNTL-WAS0.0...0.00.10.50.4...0.0LogitsOption 1 Option 2Figure 1: Left: xVal [ 7] decodes numbers through a regression head carried alongside the regular token head,gated through the [NUM] token (figure reproduced with permission). Right : Instead, the Number Token Loss(NTL) circumvents the need for two heads and allows the computation of a regression loss directly on the tokenhead. We propose two schemes to achieve this: LNTL-MSE (right) leverages a dot product of the values of thenumber tokens and their class probabilities. The LNTL-WAS (left) uses the Wasserstein-1 distance of the (sorted)number token labels and their class probabilities.2.Embedding : Canonically, the model has to recover the structure of numbers from data becausethe embeddings of numerical tokens are learned like any other token. Countless flavors ofnumeracy-preserving word embeddings exist [13, 1, 7], often akin to positional encodings.3.Training objective : The standard cross-entropy (CE) loss assumes a nominal scale, thus it failsto convey the proximity between numbers, effectively inducing a semi-supervised setting. Forexample, predicting a [3]instead of a [2]token will not generally induce lower loss than a [9].This problem has been surprisingly neglected and is the focus of this work.Here, we aim to equip LMs with better inductive biases to handle combinations of textual andnumerical data, such as math word problems or scientific datasets. In particular, we propose twoversions of a regression loss on number tokens that respect numerical proximity (cf. Figure 1 right)and can be effectively combined with regular CE. The first version of this loss computes the MeanSquared Error (MSE) between the sum of the predicted class probabilities, weighted by their respectivenumerical token value, and the numerical token value of the label. The second version computes theWasserstein distance between the distribution of the predicted number probabilities and the groundtruth distribution, which is the one-hot encoding of the label. We integrate these improved trainingobjectives with existing solutions for tokenization and embedding, in particular the RegressionTransformer [ 1]. We evaluate all methods on a subset of the mathematical-question-answer datasetfrom DeepMind [12].Prior art for joint language-number modeling suggested the use of verifiers [ 3,10], calculators(typically: Python interpreters), or chain-of-thought (CoT) reasoning [ 19] to yield improved perfor-mance in Large Language Models (LLMs). We argue that all such strategies avoid the fundamental,underlying problem (i.e., number representation in LMs is poor) by reformulating the task, trying tocorrect answers a posteriori with calculators, or using significantly more compute (CoR). Therefore,we herein intentionally attempt to improve a classic, relatively small encoder-decoder LM with up to220M parameters, namely T5 [11].2 Methods2.1 Number Token LossThe idea of the Number Token Loss (NTL) is to add an additional loss term to the CE, which isonly applied to number tokens and takes their numerical proximity into account. To achieve this, wepropose two versions.2Number Token Loss with Mean Squared Error (NTL-MSE) This loss compares the numericalvalue of the ground truth token with the weighted sum of the respective numerical token values, withthe weights corresponding to the predicted class probabilities (cf. Figure 1 right). Given a modelf(·), input tokens x≤i(where i≤N), the numerical value ˆyiof ground truth token yiand a vocab Vconsisting of tokens (with indices j, ..., k representing the number tokens), we compute NTL-MSE:LNTL-MSE =1NNXi(ˆyi−f(x≤i)j:k◦Vj:k)2(1)Instead of a nominal-scale loss with regular CE, the NTL-MSE effectively conveys proximity betweennumbers. For example, if the label is [4]and the LM predicts a [5]instead of [9], the loss willbe lower, matching our intuitive expectation, unlike the CE which gives constant loss no matter theproximity of the number (cf. Figure 2). This is sufficient for the vast majority of cases, however,since the NTL is not injective (like CE), it can return spuriously low loss for incorrect predictions.Consider e.g., a label [4]with50% of the mass on [0]and50% on[8]token, then NTL will be zero(Figure 3). While such cases are rare due to the softmax emphasizing logit differences, combiningNTL with CE loss helps correct spurious cases, as CE continues refining predictions without reducingits value in these instances. However, to address this non-injectiveness, we propose a second versionbased on the Wasserstein-1 distance.Number Token Loss with Wasserstein-1 distance (NTL-WAS) This loss calculates theWasserstein-1 distance between the predicted probability distribution of the (sorted) number to-kens and the ground truth probability distribution, which is 1 for the label token and 0 for all othertokens. Given the ground truth yi, a vocab Vwith number tokens ordered from indices jtokand thecumulative distribution function CDF(·), we compute NTL-WAS:LNTL-WAS =1NNXi=1|CDF ( yi)−CDF ( f(x≤i)j:k)| (2)As one can see in Figure 2, this version of the NTL not only conveys proximity between numberscorrectly but also eliminates the non-injectiveness problem, shown in Figure 3. Both versions of theNTL are scaled with λ(0.3unless mentioned otherwise) and added to the regular CE loss:L=LCE+λLNTL (3)Note that both versions of the NTL shall be 0 for all non-numerical tokens. By changing the p-order inNTL-MSE, different Lp-norm losses can be obtained (e.g., NTL-MAE). Huber loss is also compatible.In Appendix A.2, we provide pseudo-code for both versions of the NTL.2.2 Backbone T5 and model variantsWe use a T5 backbone [ 11] (Appendix A.3) for our experiments and extend it with both versions ofthe NTL and the Regression-Transformer tokenization scheme[ 1], due to its flexible encoder-decoderarchitecture and its success in various natural language processing tasks.Regression Transformer (RT). The Regression Transformer [ 1] tokenizes numbers on digit level,considering both the position and value of each digit. Since standard learnable embeddings may notadequately preserve the inherent structure of numbers, it leverages an inductive bias to account for therelative proximity of the numbers through numerical encodings, further explained in Appendix A.5.xVal encoding and decoding scheme. The xVal method [ 7] encodes real numbers using a single[NUM] token multiplied by its numerical value. For decoding (see Figure 1), a number head predictsthe value while the token head outputs the sequence, replacing [NUM] during inference. However,this scheme is incompatible with T5 (see Appendix A.6). We thus use the xVal encoder and maskedlanguage modeling in our experiments.Integration of the Number Token Loss Both versions of our proposed NTL, depicted in the rightpanel of Figure 1, can be integrated into any model that treats numbers as clearly separated tokens ofsingle digits by applying it as an additional loss term. Therefore, we adapt the tokenization scheme30 1 2 3 4 5 6 7 8 9Predicted value0246810LossLoss comparison for different logit distributionsLCE0.5∗LNTL−MSELNTL−WASFigure 2: CE, NTL-MSE, and NTL-WAS for different pre-dicted number values with ground truth label [4]. The under-lying logit distribution over the simplified vocabulary (num-bers 0-9) peaks at the respective predicted value and is uniformelsewhere.Figure 3: The heatmap plot shows the re-spective loss for a given combination of theclass probabilities for token 3 and 5, where theground truth is token 4. The behavior of theNTL-WAS is closest to the intuitive desiredbehavior of the loss function, while the NTL-MSE does not have a unique minimum.of the standard T5 model to tokenize all numbers on the digit level to make it compatible with theNTL. As RT already tokenizes numbers on digit level by default, we can integrate the NTL withoutany changes. Integrating NTL into xVal is not feasible, as xVal encodes every number with the sametoken. Moreover, xVal already uses both MSE and CE loss.3 Experiments and resultsTo test the mathematical capabilities of the methods, we use a dataset with more than 25 millionsamples from the mathematical Q&A dataset from DeepMind [ 12]. The dataset comes with twosets of tests: interpolation tests, one for each type of question occurring in the training set, andextrapolation tests, which measure generalization along various axes of difficulty beyond that seenduring training. We provide more information about the dataset in Appendix A.4. We evaluate all fivemodels on the two test sets of this dataset and report the accuracy (how often the model predicts thenumber exactly), as well as the Mean Absolute Error (MAE) and the R2-score. Since the dataset isskewed with some very high values, we perform a log10transformation on the predicted and groundtruth numbers before calculating MAE and R2-score.All experiments except the one with xVal are built upon the T5 implementation and language modelingtrainer based on the Hugging Face transformers library [ 16]. We use the T5-base model as a pretrainedbase for our respective models. All models were trained for approximately one million steps witha batch size of 32 over a period of approximately 3 days. More details on the models’ traininghyperparameters can be found in Appendix A.7.Table 1: Evaluation metrics on test data.(a) Interpolated Test DataModel Acc. MAE R2Standard T5 .6448 .1303 .9688Standard + NTL-MSE .7189 .1091 .9739Standard + NTL-WAS .7460 .0980 .9766RT .7136 .1135 .9701RT + NTL-MSE .6990 .1291 .9580xVal .0000 .2581 .9735(b) Extrapolated Test DataModel Acc. MAE R2Standard T5 .3686 0.7847 .9127Standard + NTL-MSE .4278 0.7789 .9091Standard + NTL-WAS .4324 0.7438 .9132RT .4042 0.9868 .7377RT + NTL-MSE .4282 1.0988 .6473xVal .0000 0.8259 .818640.0 0.2 0.4 0.6 0.8xValRT + NTLMSERTStandard + NTLWASStandard + NTLMSEStandard T50.7460Accuracy0.00 0.05 0.10 0.15 0.20 0.25 0.30xValRT + NTLMSERTStandard + NTLWASStandard + NTLMSEStandard T50.0980MAE0.0 0.2 0.4 0.6 0.8 1.0 1.2xValRT + NTLMSERTStandard + NTLWASStandard + NTLMSEStandard T50.9766R2(a)Evaluation metrics on interpolated test data.0.0 0.1 0.2 0.3 0.4 0.5xValRT + NTLMSERTStandard + NTLWASStandard + NTLMSEStandard T50.4324Accuracy0.0 0.2 0.4 0.6 0.8 1.0 1.2xValRT + NTLMSERTStandard + NTLWASStandard + NTLMSEStandard T50.7438MAE0.0 0.2 0.4 0.6 0.8 1.0xValRT + NTLMSERTStandard + NTLWASStandard + NTLMSEStandard T50.9132R2 (b)Evaluation metrics on extrapolated test data.Figure 4: Comparison of evaluation metrics on interpolated and extrapolated test data.The results can be seen in Table 1 and Figure 4. They show that vanilla T5 clearly benefits fromboth our loss variants. Indeed, accuracy increases by more than 10% for NTL-WAS compared tovanilla T5 in the interpolation tasks. The NTL-WAS was found to have the best performance acrossall three metrics and both interpolation and extrapolation tasks. This confirms our hypothesis thatnumber representation in LMs can be effectively improved through a minor, architecture-agnosticmodification of the loss function. The RT consistently surpasses vanilla T5 on interpolation, howeverno further benefit was found by augmenting RT tokenization with NTL-MSE, potentially due to thecustom number embeddings conveying numerical proximity. The limited performance of xVal [ 7]is explained by the extensive range of numbers in the used dataset. The dynamic range of xVal islimited due to the combination of its scaling of the number token embeddings and the pre-layer-normin the backbone [ 17]. As a result, the effective number range of xVal is limited to [-5, 5]. To take thisinto account, we scale our dataset for xVal with log(1 +x). However, this means that large numberscan no longer be adequately distinguished by the model, as their embeddings become very similar.4 ConclusionWe introduced the Number Token Loss (NTL) for LMs to enhance their ability to handle numericaldata by considering the numerical proximity between tokens. Our experiments unambiguouslydemonstrate the effectiveness of the NTL-WAS loss. This confirms our hypothesis that numberrepresentation in LMs can be effectively improved through a minor, architecture-agnostic modificationof the loss function. By augmenting the standard CE loss with NTL, we provide a simple yeteffective method that integrates seamlessly into existing architectures without requiring additionalcomputational overhead. Experiments on the DeepMind Mathematics Dataset demonstrated thatNTL significantly improves numerical reasoning, especially in models without specialized numericalembeddings. This approach offers a practical solution for enhancing language models in numericallyrich domains, paving the way for more accurate and reliable applications in mathematics and science.5References[1]Jannis Born and Matteo Manica. Regression transformer enables concurrent sequence regressionand generation for molecular language modelling. Nature Machine Intelligence , 5(4):432–444,2023.[2]Dimitrios Christofidellis, Giorgio Giannone, Jannis Born, Ole Winther, Teodoro Laino, andMatteo Manica. Unifying molecular and textual representations via multi-task language mod-elling. In Proceedings of the 40th International Conference on Machine Learning , volume202 of Proceedings of Machine Learning Research , pages 6140–6157. PMLR, 2023. URLhttps://proceedings.mlr.press/v202/christofidellis23a.html .[3]Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers tosolve math word problems. arXiv preprint arXiv:2110.14168 , 2021.[4]Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. Advances in neuralinformation processing systems , 28, 2015.[5]Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, SeanWelleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, et al. Faith and fate: Limits oftransformers on compositionality. Advances in Neural Information Processing Systems , 36,2024.[6]Mor Geva, Ankit Gupta, and Jonathan Berant. Injecting numerical reasoning skills into languagemodels. In Proceedings of the 58th Annual Meeting of the Association for ComputationalLinguistics . Association for Computational Linguistics, 2020.[7]Siavash Golkar, Mariel Pettee, Michael Eickenberg, Alberto Bietti, Miles Cranmer, GeraudKrawezik, Francois Lanusse, Michael McCabe, Ruben Ohana, Liam Parker, et al. xval: Acontinuous number encoding for large language models. arXiv preprint arXiv:2310.02989 ,2023.[8]John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ron-neberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, et al.Highly accurate protein structure prediction with alphafold. Nature , 596(7873):583–589, 2021.[9]Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. Bert: Pre-training of deepbidirectional transformers for language understanding. In Proceedings of naacL-HLT , volume 1,page 2, 2019.[10] Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen.Making language models better reasoners with step-aware verifier. In Proceedings of the 61stAnnual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) ,pages 5315–5333, 2023.[11] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena,Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unifiedtext-to-text transformer. Journal of machine learning research , 21(140):1–67, 2020.[12] David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematicalreasoning abilities of neural models, 2019. URL https://arxiv.org/abs/1904.01557 .[13] Dhanasekar Sundararaman, Shijing Si, Vivek Subramanian, Guoyin Wang, Devamanyu Haz-arika, and Lawrence Carin. Methods for numeracy-preserving word embeddings. In Proceedingsof the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) ,pages 4742–4753, 2020.[14] Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro Szekely. Representing numbers in nlp: asurvey and a vision. In Proceedings of the 2021 Conference of the North American Chapter ofthe Association for Computational Linguistics: Human Language Technologies , pages 644–656,2021.6[15] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,et al. Attention is all you need. Advances in neural information processing systems , 30(1):261–272, 2017.[16] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, AnthonyMoi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 conference on empiricalmethods in natural language processing: system demonstrations , pages 38–45, 2020.[17] Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang,Yanyan Lan, Liwei Wang, and Tieyan Liu. On layer normalization in the transformer architecture.InInternational Conference on Machine Learning , pages 10524–10533. PMLR, 2020.[18] Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, and Dan Roth. Do languageembeddings capture scales? In Trevor Cohn, Yulan He, and Yang Liu, editors, Findings of theAssociation for Computational Linguistics: EMNLP 2020 , pages 4889–4896, Online, November2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.439.URL https://aclanthology.org/2020.findings-emnlp.439 .[19] Qihuang Zhong, Kang Wang, Ziyang Xu, Juhua Liu, Liang Ding, Bo Du, and Dacheng Tao.Achieving> 97% on gsm8k: Deeply understanding the problems makes llms perfect reasoners.arXiv preprint arXiv:2404.14963 , 2024.7A AppendixA.1 Statement on codeThe code for this paper is available at https://github.com/tum-ai/ibm_impact_project.A.2 Algorithm for the Number Token LossAlgorithm A1 Pseudo-code to compute NTL-MSE1:Initialize: n_vocab ←int(vocab [i])if vocab [i]∈RNaN otherwiseVi=12:3:function FORWARD (logits ∈RB×T×V, labels ∈RB×T) : Float4: ntl←05: n_logits ←logits [:,:,¬n_vocab.isnan() ] ▷Ignore non-number tokens6: n_probs ←Softmax (logits )7: ˆy←Pin_probs [:,:, i]·n_vocab ▷ˆyisB×T8: y←n_vocab [labels ] ▷ yisB×T9: ntl←MSE(y,ˆy) ▷Can be any regression loss10: return ntl11:end functionAlgorithm A2 Pseudo-code to compute NTL-WAS1:Initialize: n_vocab ←int(vocab [i])if vocab [i]∈RNaN otherwiseVi=12:iforder_numbers is True then3: Sort the numbers in n_vocab by their numerical values4:end if5:6:function FORWARD (logits ∈RB×T×V, labels ∈NB×T) : Float7: n_logits ←logits [:,:,¬n_vocab.isnan() ] ▷Ignore non-number tokens8: n_probs ←Softmax (logits )9: y←n_vocab [labels ] ▷Retrieve true numerical values10: y_distr [b, t]←one_hot (y[b, t],num_classes=len(n_vocab) ) ▷One hot encode y11:12: wasserstein_distance [b, t] =PVv=1|CDF(n_probs [b, t])[v]−CDF(y_distr [b, t])[v]|13:14: ntl←Mean (wasserstein_distance [¬y.isnan() ])15: return ntl16:end functionA.3 T5 architectureThe T5 model is built upon the Transformer architecture [ 15], consisting of stacked self-attentionand feed-forward layers in both the encoder and decoder. The encoder processes the input tokens tocreate contextualized representations, while the decoder generates the output tokens autoregressively,attending to both the encoder’s outputs and the previously generated tokens. The model can be trainedwith both Masked Language Modelling (MLM) [ 9] and Causal/Auto-Regressive Language Modelling(CLM) [4], whereby we chose to use CLM.A.4 DatasetTo test the mathematical capabilities of the methods, we use a subset of the mathematical question-answer dataset from DeepMind [ 12]. The dataset was generated synthetically and therefore contains8limited linguistic variability, but is sufficient for our purposes to compare the mathematical capabilitiesof the different methods.The dataset contains different modules and difficulty levels. For training and testing the models,we chose all difficulty levels but excluded modules where the answer contains complex fractions orvariables. This allows us to focus on purely numeric answers to simplify the evaluation of the modeland still leaves us with a large enough dataset of ∼26 million samples.For training, validation, and interpolation tests, we selected the following modules from the DeepMindmathematical question-answer dataset:•algebra__linear_1d.txt•algebra__linear_1d_composed.txt•algebra__linear_2d.txt•algebra__linear_2d_composed.txt•algebra__sequence_next_term.txt•arithmetic__add_or_sub.txt•arithmetic__add_sub_multiple.txt•arithmetic__mul.txt•numbers__div_remainder.txt•numbers__div_remainder_composed.txt•numbers__place_value.txt•numbers__round_number.txt•numbers__round_number_composed.txtFor extrapolation tests, we selected the following modules:•arithmetic__add_or_sub_big.txt•arithmetic__add_sub_multiple_longer.txt•arithmetic__mixed_longer.txt•arithmetic__mul_big.txt•arithmetic__mul_div_multiple_longer.txt•numbers__place_value_big.txt•numbers__round_number_big.txtThis resulted in a training dataset of 25,986,948 samples, a validation dataset of 13,026 samples, aninterpolation test set of 130,000 samples, and an extrapolation test set of 70,000 samples.A.5 Regression TransformerThe Regression Transformer [ 1] preserves the inherent structure of numbers by inducing informationon relative proximity through numerical encodings that are set deterministically for all tokens. Forevery combination of a decimal place and digit value, a corresponding numerical token is added tothe vocabulary. For instance, the number 11.4is tokenized to [1_1, 1_0, 4_-1] .Non-numeric tokens are set to zero vectors. The numerical encodings are designed so that theirpairwise distances are symmetric and monotonically decreasing with the float value. The finalencoding of the input tokens is obtained by summing over numerical and regular word encodings.The Regression Transformer numerical encodings NE at dimension jfor numerical token tv,pwithvalue vand decimal place pcan be determined byNE Float(v, p, j ) = (−1)j·v·10pj+ 1. (4)9A.6 Challenges with Integrating xVal in Transformer Models like T5In transformer models like T5, integrating numerical encoding schemes like xVal presents challenges.Relative positional encodings and pre-layer normalization disrupt the numerical scaling. This makesit difficult to preserve distinctions between values.In T5, instead of using absolute positions for each token, relative positions between tokens areencoded. This helps the model understand relationships between tokens based on their distance,regardless of where they appear in the sequence. However, this relative encoding is applied uniformlyacross all tokens, including numerical tokens. Since relative position encoding doesn’t account forthe magnitude of numerical values, it essentially ignores the scaling factor introduced by the xValmethod.Pre-layer normalization is applied to the inputs before they enter each transformer layer. Normaliza-tion typically scales the inputs to a standard range, effectively reducing the impact of the differencesin numerical embeddings introduced by the xVal method. As a result, even though the xVal methodmultiplies the [NUM] token embedding by its corresponding numerical value, this scaling gets neu-tralized by the normalization step, making the embeddings of different numbers more similar thanthey should be.A.7 Training hyperparametersWe train each model for 1050000 iterations with a batch size of 32 using transformers [16] 4.42.4.We train with a learning rate of 1e-4 and weight decay of 0.01. All models were trained on singlegraphics processing units (GPUs) (NVIDIA RTX A6000). For the Number Token Loss, we trainedwith the hyperparameter λset to 0.3.10 |
BfLVCoov0b | A Hessian View of Grokking in MathematicalReasoningZhenshuo Zhang†Jerry W. Liu‡Christopher Ré‡Hongyang R. Zhang†‡Department of Computer Science, Stanford University, Stanford, CA†Khoury College of Computer Sciences, Northeastern University, Boston, MAAbstractMathematical reasoning is a central problem in developing more intelligent lan-guage models. An intriguing phenomenon observed in mathematical arithmeticsis grokking, where the training loss of a transformer model stays near zero foran extended period until the validation loss finally reduces to near zero. In thiswork, we approach this phenomenon through a view of the Hessian of the losssurface. The Hessian relates to the generalization properties of neural networksas it can capture geometric properties of the loss surface, such as the sharpness oflocal minima. We begin by noting in our experiments that high weight decay isessential for grokking to occur in several arithmetic tasks (trained with a GPT-2style transformer model). However, we also find that the training loss is highlyunstable and exhibits strong oscillations. To address this issue, we consider addingregularization to the Hessian by injecting isotropic Gaussian noise to the weightsof the transformer network, and find that this combination of high weight decayand Hessian regularization can smooth out the training loss during grokking. Wealso find that this approach can accelerate the grokking stage compared to existingmethods by at least 50% measured on seven arithmetic tasks. Finally, to under-stand the precise cause of grokking, we consider a Hessian-based measurement formulti-layer networks and find that this measure yields non-vacuous estimates ofthe generalization errors observed in practice. We hope these empirical findingscan facilitate future research towards understanding grokking (and generalization)in mathematical reasoning.1 IntroductionMathematical reasoning in large neural networks [ 1] is a central issue in the design of more intelli-gent, interactive language models, especially in scenarios that require precise, step-by-step logicaloperations. An intriguing phenomenon that has been observed in arithmetic tasks is grokking [ 19],where a transformer model exhibits a delayed yet sudden generalization of training data even as thetraining curve has converged. In this paper, we analyze grokking behavior in arithmetic tasks byexamining the Hessian of the transformer model’s loss surface.Power et al. [ 19] demonstrate that for a range of modular arithmetic tasks, grokking can occuron two-layer transformers, whereby the training loss remains near zero for a long period until thevalidation loss also converges to zero. However, the training loss can exhibit dramatic oscillations.To motivate this work, we begin by applying regularization methods to SGD and examine their effecton grokking. First, we find that high weight decay is crucial for grokking in arithmetic tasks. In otherwords, we find that the validation accuracy does not increase to near perfect when weight decay ismoderate (Fig. 1a and 1b). Moreover, even after adding high decay, the training curve still exhibitsnotable variations, leading to unstable training (with training accuracy reduced to zero; see Fig. 1c).The 4th Workshop on Mathematical Reasoning and AI at NeurIPS 2024 (MATH-AI 24).102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2TrainingValidation(a) SGD, λ= 0102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2TrainingValidation (b) SGD, λ= 0.1102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2TrainingValidation (c) SGD, λ= 1102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2TrainingValidation (d) Noise Injection, λ= 1Figure 1: Training behavior with different weight decay (denoted as λ) and Hessian regularization.To address these issues, we consider regularization methods that regularize the Hessian. Thesemethods are known to improve generalization by reducing the sharpness of solutions in the losssurface [ 6,22]. In particular, we consider regularizing the Hessian by first adding noise to the weightsof the transformer before computing its gradient. We observe that this noise injection provides anapproximately unbiased estimate of the trace of the Hessian. Surprisingly, we find that, along withhigh weight decay, this noise injection algorithm can now smooth out the oscillations in the trainingcurve. Moreover, we also find that this Hessian regularization can reduce the number of grokkingsteps during training by at least 28%, measured on three arithmetic tasks.Finally, to understand these results, we develop a preliminary theoretical analysis. We examine aHessian-based generalization measure motivated by PAC-Bayes analysis [ 17,11,2]. We find that bymeasuring a Hessian-vector product on the weight space, we can provide a non-vacuous estimateof the generalization errors. Note that the phenomenon of delayed generalization is known sinceclassical works on boosting [ 3]. Our contribution is to provide a Hessian view of this phenomenonsince the Hessian can be measured from data.In summary, we find that by using high weight decay and noise injection to regularize the Hessian,we can effectively reduce the instability that has commonly been observed for training transformerson arithmetic tasks. Second, this combined regularization can further reduce the number of grokkingsteps. Third, we develop a Hessian-based measurement that can give a non-vacuous estimate of thegeneralization error. We hope these findings can facilitate future research on understanding grokkingin the mathematical reasoning of large models.2 Regularization of Loss Surface HessianPrevious works have indicated that grokking requires an appropriate choice of weight decay [ 18,14].However, using weight decay alone can still lead to oscillations of the training curve. To address thisissue, we explore an alternative approach, where we regularize the loss Hessian matrix, which canprovide more fine-grained control on the loss surface, such as sharpness. To instantiate the Hessianregularization, we add a random noise variable to the weight matrices of a transformer network. Inparticular, let l(fW(x), y)denote the loss of a neural net fW(parameterized by W), given an inputpair(x, y). LetUbe a random sample from an isotropic Gaussian (with the same dimension as W),whose variance has been scaled by σ2. We consider the following noise injection update:W←W−η2(∇l(fW+U(x), y) +∇l(fW−U(x), y)), (1)for some learning rate η. In particular, σ2determines the level of regularization in this procedure.To see that this update regularizes the Hessian, we notice that equation (1)is equivalent to applyingSGD to the stochastic optimization objective ofEU[l(fW+U(x), y)]≈l(fW(x), y) +σ22∇2l(fW(x), y) +O(σ3).In practice, we add the noise injection along both the positive and negative directions of U. Thishelps eliminate the variance that appears from the first-order Taylor’s expansion term above [22].23 Experimental Results3.1 Results on arithmetic tasksOur experimental setup follows the work of Power et al. [ 19]. We focus on evaluating the grokkingphenomena with arithmetic tasks, which correspond to equations of the form (a◦b)modp=c. “a,”“◦,” “b,” “=,” and “ c” are separated tokens, where “ c” is the prediction goal. For each task, we generatea∈[p]andb∈[p], resulting in a total of p2(in our settings, p= 97 ) unique data. Specifically, weselect a2+ab+b2=cto illustrate our findings, and more experiments of different equations areshown in Appendix A. We compare our regularization method to naive SGD and SAM [ 6]. SAM isbased on a constrained minimax optimization formulation that penalizes the worst-case perturbations.#1: Stabilizing the training curves. We observed that all the approaches can induce grokkingin our experiments, as shown in Figure 2. After the training loss converges, we observe a suddenincrease in validation accuracy to over 99%. Although all these methods can exhibit stable trainingbefore convergence, we find that during the grokking phase, SGD experiences sharp fluctuations inthe training curve, with training accuracy dropping close to zero, which also caused the validationaccuracy to drop to nearly zero. By contrast, both SAM and our noise-injection method can maintainstable training loss values during the grokking phase, which avoids dramatic fluctuations.102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2TrainingValidation(a) SGD, λ= 1102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2TrainingValidation (b) SAM, λ= 1102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2TrainingValidation (c) Noise Injection, λ= 1Figure 2: Illustrating the grokking phenomenon of SGD, SAM, and Hessian regularization.#2: Reducing the steps of grokking steps. We also report the comparison of the number of trainingsteps from the point where the training accuracy has converged to near 100% to the point wherevalidation accuracy convergences to near 100%. We report the results for different approaches inTable 1. We observe that our approach requires fewer steps than SGD. In some tasks, our approachdoesn’t even need grokking steps to generalize. We also note that SAM, which penalizes the largesteigenvalue of the Hessian, requires more steps than noise injection.Table 1: Number of grokking steps observed for different methods.a+ba×b a/b a2+b2a2+ab+b2a2+ab+b2+aa3+abSGD 36480 5040 89868 7890 48280 150263 83776SAM 30240 3620 44718 9170 26452 220116 0Ours 14826 0 24576 3700 0 51187 03.2 Results on algorithmic tasksWe also evaluate our findings on the Needle-in-a-Haystack task, following the setting of Zhonget al. [ 23]. Specifically, we have an input sequence [m1, c1, m2, c2, ..., m k, ck, mu], where miaredifferent markers and ciare corresponding values. The last element is a marker mu, u∈[1, k]whichindicates the goal marker. The model is trained to learn to search for the marker in the previoussequence and give the corresponding value cu.More surprisingly, we observe that the grokking phenomenon does not occur when using SGD,although the training curve also experiences slight fluctuations, and the validation accuracy remainslow. The number of grokking steps of SAM and noise injection are 12288 and 5632, respectively:see Figure 3.3102103104105Number of steps0.00.20.40.60.81.0AccuracyTrainingValidation(a) SGD, λ= 1102103104105Number of steps0.00.20.40.60.81.0AccuracyTrainingValidation (b) SAM, λ= 1102103104105Number of steps0.00.20.40.60.81.0AccuracyTrainingValidation (c) Noise Injection, λ= 1Figure 3: Illustrating the grokking phenomenon of SGD, SAM, and Hessian regularization.4 Nonvacuous Generalization Error Estimates with HessianToward rigorously understanding the above empirical results, we consider the PAC-Bayes analysisframework. In particular, we consider a linear PAC-Bayes bound [ 16,2], which holds with probability1−δfor any δ >0:LQ(fW)≤1βˆLQ(fW) +CKL(Q||P ) + log1δ2β(1−β)n,for any β∈(0,1). (2)Above, Qis the posterior hypothesis distribution of the learning algorithm. Pis the prior distributionof the learning algorithm. C >0is an upper bound on the loss value. LandˆLrefer to the expectedand empirical risks. For example, in the context of fine-tuning foundation models, one may view Pas the weight of the pretrained model (plus some small perturbations), and Qis the fine-tuned modelweight [11]. For a complete statement, see Lemma B.2 in Appendix B.Derivation of a Hessian-based measure: LetU∼ Q be a random variable drawn from a posteriordistribution Q. We are interested in the perturbed loss, lQ(fU(x), y), which is the expectation ofl(fU(x), y)overU. Using Taylor’s expansion, we get thatlQ(fW(x), y)−l(fW(x), y)≤ LXi=1Σi,∇2i[l(fW(x), y)]+C1∥Σ∥3/2F!, (3)where Σiis the population covariance matrix of the perturbation added to layer i, and∇2iis theHessian matrix with respect to the weights at layer ioffW. See Lemma B.1 in Appendix B for thecomplete statement of this result.Based on equation (3), next, we apply the PAC-Bayes bound from equation (2)to an L-layertransformer neural network fWparameterized by W. We note that the KL divergence between theprior and posterior distributions, which are both Gaussian, is equal toPLi=1Σ−1i, viv⊤i, where viis the distance between the initialized weight and the trained weight at layer i.Next, it remains to minimize the sum of the Hessian estimate, and the above KL divergence in thePAC-Bayes bound will lead to a different covariance matrix for every layer. Let ∇2i+denote thetruncated Hessian matrix where we set the negative eigenvalues of ∇2ito zero. We have thatLXi=1Σi,∇2i[l(fW(x), y)]+1nΣ−1i, viv⊤i≤LXi=1DΣi,∇2i+[l(fW(x), y)]E+1nΣ−1i, viv⊤i. (4)By applying equations (4)and(3)back to equation (2), and minimizing over β, we will derive anupper bound on the generalization error (between L(fW)andˆL(fW)) that is equal to:α:= max(x,y)∈DLXi=1qv⊤i∇2i+l(fW(x), y)vi√n, (5)4where nis the size of the sample set and Dis the unknown distribution where the samples are drawn.Having introduced the Hessian measure, we now report the results from measuring the above αinthe grokking experiments and compare αwith the empirically observed generalization errors. Theresults are shown in Figure 4 below. We can see that αnow gives a nonvacuous upper bound onthe generalization error. Importantly, while this can also be achieved with standard methods such ask-fold cross-validation, the Hessian can reveal more structures (e.g., sharpness) of loss surfaces.102103104105Number of steps10−310−210−1100101Measurementa2+ab+b2Loss Gapα(a) SGD, λ= 1102103104105Number of steps10−310−210−1100101Measurementa2+ab+b2Loss Gapα (b) SAM, λ= 1102103104105Number of steps10−310−210−1100101Measurementa2+ab+b2Loss Gapα (c) Noise Injection, λ= 1Figure 4: The Hessian measurement αcorrelates with the empirically observed generalization errorsfor training neural networks while grokking.5 Related WorkGrokking. The grokking phenomenon, first proposed by Power et al. [ 19], illustrates that withcontinued training over several epochs, the validation loss eventually decreases and convergesafter training loss does not decrease further after converging. Extending the study of grokking,Liu et al. [ 15] conducted experiments across diverse datasets, including images, language, andgraphs, expanding the area of grokking. Previous research predominantly focused on how trainingconfigurations influence grokking. Davies et al. [ 5] explored the relationship between grokkingand double descent concerning pattern learning. Huang et al. [ 9] examined the impact of modeland dataset sizes on grokking. Nanda et al. [ 18] highlighted the critical role of weight decay ingrokking, noting that insufficient weight decay prolongs the process of grokking. Thilak et al. [ 20]linked grokking to the slingshot mechanism, interpreting it as a form of implicit regularization. Morerecently, Lee et al. [ 14] introduced Grokfast, a method designed to accelerate grokking by amplifyingthe gradients’ low-frequency components. Theoretical investigations of why grokking occurs haverecently been studied [ 21]. In particular, Xu et al. [ 21] provably demonstrate grokking in two-layerReLU networks trained by gradient descent on XOR cluster data where a constant fraction of thetraining labels are flipped.Hessian and optimization algorithms. Historical studies on second-order methods for trainingmulti-layer networks primarily focus on optimization methods like Newton or quasi-Newton andemploy the Hessian matrix to adjust learning rates [ 13,12,4]. Although they estimate the spectralinformation of the Hessian by computing Hessian-vector products, they do not explore the dynamics ofthe Hessian throughout training. In the Neural Tangent Kernel (NTK) analysis [ 10], the Hessian matrixis treated as a random features matrix, which remains fixed during training. The estimation of spectraldensity through Stochastic Lanczos Quadrature is discussed by Ghorbani et al. [ 7]. Additionally,Grosse et al. [ 8] have investigated scaling up influence functions in large neural networks, whichincludes innovative techniques for computing Hessian-inverse vector products.6 ConclusionThis paper explored the issue of generalization in arithmetic tasks. We analyze the grokking phe-nomenon through a view of the Hessian matrix of the loss surface. We find that using a high weightdecay and noise injection can smooth out the oscillations commonly observed in SGD training ofarithmetic tasks. Another benefit of this regularization is that we could accelerate the grokking stage,reducing the number of training steps required for model generalization. Finally, we find that aHessian-based measurement can give a nonvacuous estimate of the generalization errors in variousmodular arithmetic tasks.5References[1]Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui Zhang, and Wenpeng Yin. Large languagemodels for mathematical reasoning: Progresses and challenges. In The 18th Conference of theEuropean Chapter of the Association for Computational Linguistics , page 225, 2024.[2]Pierre Alquier. User-friendly introduction to pac-bayes bounds. Foundations and Trends ®inMachine Learning , 17(2):174–303, 2024.[3]Peter L Bartlett. Learning theory and generalization for neural networks and other supervisedlearning techniques. Neural Information Processing Systems Tutorial , 1998.[4]Sue Becker and Yann Le Cun. Improving the convergence of back-propagation learning withsecond order methods. In Proceedings of the 1988 connectionist models summer school , pages29–37, 1988.[5]Xander Davies, Lauro Langosco, and David Krueger. Unifying grokking and double descent.arXiv preprint arXiv:2303.06173 , 2023.[6]Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware mini-mization for efficiently improving generalization. arXiv preprint arXiv:2010.01412 , 2020.[7]Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net opti-mization via hessian eigenvalue density. In International Conference on Machine Learning ,pages 2232–2241. PMLR, 2019.[8]Roger Grosse, Juhan Bae, Cem Anil, Nelson Elhage, Alex Tamkin, Amirhossein Tajdini,Benoit Steiner, Dustin Li, Esin Durmus, Ethan Perez, et al. Studying large language modelgeneralization with influence functions. arXiv preprint arXiv:2308.03296 , 2023.[9]Yufei Huang, Shengding Hu, Xu Han, Zhiyuan Liu, and Maosong Sun. Unified view ofgrokking, double descent and emergent abilities: A perspective from circuits competition. arXivpreprint arXiv:2402.15175 , 2024.[10] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence andgeneralization in neural networks. Advances in neural information processing systems , 31,2018.[11] Haotian Ju, Dongyue Li, and Hongyang R Zhang. Robust fine-tuning of deep neural networkswith hessian-based generalization guarantees. ICML , 2022.[12] Yann LeCun. Efficient learning and second-order methods. A tutorial at NIPS , 93:61, 1993.[13] Yann LeCun, Ido Kanter, and Sara Solla. Second order properties of error surfaces: Learningtime and generalization. Advances in neural information processing systems , 3, 1990.[14] Jaerin Lee, Bong Gyun Kang, Kihoon Kim, and Kyoung Mu Lee. Grokfast: Acceleratedgrokking by amplifying slow gradients. arXiv preprint arXiv:2405.20233 , 2024.[15] Ziming Liu, Eric J Michaud, and Max Tegmark. Omnigrok: Grokking beyond algorithmic data.InThe Eleventh International Conference on Learning Representations , 2022.[16] David McAllester. A pac-bayesian tutorial with a dropout bound. arXiv preprintarXiv:1307.2118 , 2013.[17] David A McAllester. Pac-bayesian model averaging. In Proceedings of the twelfth annualconference on Computational learning theory , pages 164–170, 1999.[18] Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, and Jacob Steinhardt. Progressmeasures for grokking via mechanistic interpretability. arXiv preprint arXiv:2301.05217 , 2023.[19] Alethea Power, Yuri Burda, Harri Edwards, Igor Babuschkin, and Vedant Misra. Grokking: Gen-eralization beyond overfitting on small algorithmic datasets. arXiv preprint arXiv:2201.02177 ,2022.6[20] Vimal Thilak, Etai Littwin, Shuangfei Zhai, Omid Saremi, Roni Paiss, and Joshua Susskind. Theslingshot mechanism: An empirical study of adaptive optimizers and the grokking phenomenon.arXiv preprint arXiv:2206.04817 , 2022.[21] Zhiwei Xu, Yutong Wang, Spencer Frei, Gal Vardi, and Wei Hu. Benign overfitting andgrokking in relu networks for xor cluster data. The Twelfth International Conference onLearning Representations , 2024.[22] Hongyang R. Zhang, Dongyue Li, and Haotian Ju. Noise stability optimization for finding flatminima: A hessian-based regularization approach. Transactions on Machine Learning Research ,2024.[23] Ziqian Zhong and Jacob Andreas. Algorithmic capabilities of random transformers. arXivpreprint arXiv:2410.04368 , 2024.A Experiment DetailsA.1 TasksModular Arithmetic We consider the following list of modulo tasks where p= 97 :•a+b(mod p ) =c,•a×b(mod p ) =c,•a/b(mod p ) =c,•a2+b2(mod p ) =c,•a2+ab+b2(mod p ) =c,•a2+ab+b2+a(mod p ) =c,•a3+ab(mod p ) =c.Needle-in-a-Haystack This task assesses model performance on long input sequences. The inputconsists of a sequence [m1, c1, m2, c2, ..., m k, ck, mu], where m1, . . . , m kare distinct markers withcorresponding values c1, . . . , c k. The final marker murequires the model to locate its prior occurrenceand output the associated value cu. In our task, following [ 23], our sequences contain between 1and30markers, and we uniformly select each mi∈ {1, . . . , 127}andci∈ {128, . . . , 158}.A.2 ParametersWe include the parameters we use to define our modular arithmetic and needle-in-a-haystack tasksbelow.Optimizers For all the methods, we use a weight decay of λ= 1, a learning rate equal to 10−4,and a maximum number of epochs of 1.4×105, with batch size 512. For SAM, we set ρ= 0.05.For our noise-injection method, we set σ= 0.01.Model For all modular arithmetic tasks, we use a model dimension of 128, whereas for the needle-in-a-haystack task, we use a model dimension of 256. For all modular arithmetic tasks except a3+ab,we use a 1-layer model. For a3+aband needle-in-a-haystack tasks, we use a 2-layer model instead.We use 4 attention heads for all experiments.7TasksTrainingratioBatchsizeLearningrateWeightdecayLayersModeldimensionAttentionheadsa+b 0.3 512 1e-4 1 1 128 4a×b 0.5 512 1e-4 1 1 128 4a/b 0.3 512 1e-4 1 1 128 4a2+b20.5 512 1e-4 1 1 128 4a2+ab+b20.9 512 1e-4 1 1 128 4a2+ab+b2+a 0.9 512 1e-4 1 1 128 4a3+ab 0.9 512 1e-4 1 2 128 4Needle-in-a-haystack 0.9 256 1e-4 1 2 256 4A.3 Additional ResultsIn Figures 5-11, we illustrate the training and validation accuracy for the arithmetic tasks from above.101102103104105Number of steps0.00.20.40.60.81.0Accuracya+bTrainingValidation(a) SGD101102103104105Number of steps0.00.20.40.60.81.0Accuracya+bTrainingValidation (b) SAM101102103104105Number of steps0.00.20.40.60.81.0Accuracya+bTrainingValidation (c) Noise InjectionFigure 5: Comparison of SGD, Grokfast, SAM, and our noise-injection method in a+b(mod p ) =c.101102103104105Number of steps0.00.20.40.60.81.0Accuracya×bTrainingValidation(a) SGD101102103104105Number of steps0.00.20.40.60.81.0Accuracya×bTrainingValidation (b) SAM101102103104Number of steps0.00.20.40.60.81.0Accuracya×bTrainingValidation (c) Noise InjectionFigure 6: Comparison of SGD, Grokfast, SAM, and our noise-injection method in a×b(mod p ) =c.8101102103104105Number of steps0.00.20.40.60.81.0Accuracya/bTrainingValidation(a) SGD101102103104105Number of steps0.00.20.40.60.81.0Accuracya/bTrainingValidation (b) SAM101102103104105Number of steps0.00.20.40.60.81.0Accuracya/bTrainingValidation (c) Noise InjectionFigure 7: Comparison of SGD, Grokfast, SAM, and our noise-injection method in a/b(mod p ) =c.101102103104105Number of steps0.00.20.40.60.81.0Accuracya2+b2TrainingValidation(a) SGD101102103104105Number of steps0.00.20.40.60.81.0Accuracya2+b2TrainingValidation (b) SAM101102103104Number of steps0.00.20.40.60.81.0Accuracya2+b2TrainingValidation (c) Noise InjectionFigure 8: Comparison of SGD, Grokfast, SAM, and our method in a2+b2(mod p ) =c.102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2TrainingValidation(a) SGD102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2TrainingValidation (b) SAM102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2TrainingValidation (c) Noise InjectionFigure 9: Comparison of SGD, Grokfast, SAM, and our method in a2+ab+b2(mod p ) =c.102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2+aTrainingValidation(a) SGD102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2+aTrainingValidation (b) SAM102103104105Number of steps0.00.20.40.60.81.0Accuracya2+ab+b2+aTrainingValidation (c) Noise InjectionFigure 10: Comparison of SGD, Grokfast, SAM, and our method in a2+ab+b2+a(mod p ) =c.9102103104105Number of steps0.00.20.40.60.81.0Accuracya3+abTrainingValidation(a) SGD102103104105Number of steps0.00.20.40.60.81.0Accuracya3+abTrainingValidation (b) SAM102103104105Number of steps0.00.20.40.60.81.0Accuracya3+abTrainingValidation (c) Noise InjectionFigure 11: Comparison of SGD, Grokfast, SAM, and our method in a3+ab(mod p ) =c.B Technical LemmasFirst, we state the result of Taylor’s expansion of the perturbed loss.Lemma B.1. For any i= 1,2,···, L, letUi∈Rdidi−1be a random vector sampled from a Gaussiandistribution with mean zero and variance Σi. Let the posterior distribution Qbe centered at Wiandperturbed with an appropriately reshaped Uiat every layer. Then, there exists a fixed value C1>0that does not grow with n, such that the following holds for any x∈ X andy∈ {1, . . . , k }:lQ(fW(x), y)−l(fW(x), y)≤LXi=1Σi,∇2i[l(fW(x), y)]+C1∥Σi∥3/2F. (6)Next, we state the PAC-Bayes bound, which can be found in the PAC-Bayes literature (e.g., [ 16,2]).Lemma B.2. Suppose the loss function l(x, y)lies in a bounded range [0, C]given any x∈ X withlabel y. For any β∈(0,1)andδ∈(0,1], with probability at least 1−δ, the following holdsLQ(fW)≤1βˆLQ(fW) +CKL(Q||P ) + log1δ2β(1−β)n. (7)10 |
AfyZTKGhUi | Lean-STaR:Learning to Interleave Thinking and ProvingHaohan Lin2∗Zhiqing Sun1Sean Welleck1Yiming Yang11Language Technologies Institute, Carnegie Mellon University2Institute for Interdisciplinary Information Sciences, Tsinghua University{haohanl,zhiqings,yiming,swelleck}@cs.cmu.eduhttps://leanstar.github.io/AbstractTraditional language model-based theorem proving assumes that by training on asufficient amount of formal proof data, a model will learn to prove theorems. Ourkey observation is that a wealth of informal information that is not present in formalproofs can be useful for learning to prove theorems. For instance, humans thinkthrough steps of a proof, but this thought process is not visible in the resulting code.We present Lean-STaR, a framework for training language models to produceinformal thoughts prior to each step of a proof, thereby boosting the model’stheorem-proving capabilities. Lean-STaR uses retrospective ground-truth tacticsto generate synthetic thoughts for training the language model. At inference time,the trained model directly generates the thoughts prior to the prediction of thetactics in each proof step. Building on the self-taught reasoner framework, wethen apply expert iteration to further fine-tune the model on the correct proofsit samples and verifies using the Lean solver. Lean-STaR achieves better resultson the miniF2F-test benchmark within the Lean theorem proving environment,significantly outperforming base models ( 43.4%→46.3%,Pass@64). We alsoanalyze the impact of the augmented thoughts on various aspects of the theoremproving process, providing insights into their effectiveness.1 IntroductionWe introduce Lean-STaR, a framework for learning to interleave informal thoughts with steps offormal proving. Building on the Self-Taught Reasoner (STaR) framework [ 27], we enable languagemodels to interleave step-by-step rationales (i.e., thoughts) [ 15,23] with formal proving in a two-stageprocess. In an initial phase, we prompt a sufficiently capable language model, such as GPT-4 [ 1],and generate retrospective thoughts based on a dataset of human-written proofs, such as Mathlib,the largest collection of human-written proofs in Lean [ 14]. Subsequently, we fine-tune a thought-augmented tactic predictor [ 6,5,11,9] that, given a Lean state, can generate a thought and predictthe subsequent tactic. In a second phase, we optimize this thought-augmented tactic predictor withthe expert iteration algorithm [ 2,20], using multi-step success rate in theorem proving as the reward.∗Work done during the visit at CMU.38th Conference on Neural Information Processing Systems (NeurIPS 2024).Figure 1: An example of Lean proof and thoughts generated by Lean-STaR . Note that there is acalculation error in the thought (in red), but this does not affect the correctness of the proof becausethe calculation task is actually completed by the interactive theorem prover (i.e., Lean’s nlinarith )instead of the language model. This shows a benefit of combining neural and symbolic systems.We instantiate Lean-STaR by generating roughly 50,000 thought-augmented examples from Lean’sMathlib [ 14], then synthesize an additional 50k examples through two iterations of expert iteration.To the best of our knowledge, this yields the first thought-augmented dataset for theorem proving.After fine-tuning an InternLM2-7b base model [ 26] on our thought-augmented data, our final Lean-STaR model can solve 34.8%(pass@32) or 36.1%(pass@64) of the problems on miniF2F-test[28]. Using stronger base model InternLM2-7b-plus, Lean-STaR can achieve 45.4%(pass@32),significantly surpassing the previous results of 43.4%(pass@32). In summary, Lean-STaR offers aframework for teaching language models to interleave informal thoughts with formal verification,advancing the capabilities of language models in automated theorem proving.2 Our Method: Lean-STaRWe introduce Lean-STaR, a new method for combining informal thoughts with formal theoremproving.We describe the data generation and training of the direct tactic prediction model (SFT), the thought-augmented tactic prediction model trained with synthetic data (Lean-CoT), and the final model trainedwith expert iteration (Lean-STaR).2.1 Direct Tactic PredictionWe define the theorem-proving problem as a Markov Decision Process (MDP) (S,A, Pa, Ra)whereproof states serve as states in MDP and tactics serve as actions. From this perspective, a proof is atrajectory (s1, a1, r1),(s2, a2, r2),···of states si, tactics ai, and rewards ri∈R, and the ITP (e.g.,Lean) provides each new state si+1.In the typical setting [ 18], proving a theorem consists of providing a proof state sto the languagemodel and then generating a tactic from the language model M, i.e.,πM(a|s). The language modelcan be fine-tuned for this task using a dataset of (proof state, next-tactic) pairs from successful prooftrajectories, i.e. D={(si, ai) :i= 1,···, M}, where final states have a reward of 1. We refer to alanguage model fine-tuned on such a dataset as a supervised fine-tuning (SFT) model.2.2 Thought-augmented Tactic PredictionExisting approaches typically train only on formal states and tactics [ 18]. We hypothesize thatincorporating a latent thought can improve a model’s ability to predict the next tactic. Formally, we2introduce a hidden “thought” variable tiprior to each tactic, and then extend the model to the formπM(ai, ti|si) =πM(ai|ti, si)πM(ti|si). In thought-augmented tactic prediction, the distributionover the next tactic can then be expressed as: πM(ai|si) =PtiπM(ai|ti, si)πM(ti|si).The key challenge is obtaining (state, thought, tactic) pairs for training a model. To this end, weintroduce retrospective rationale generation . Our motivating observation is that the distribution ofnatural language thoughts in theorem-proving πM(ti|si)is scarce in the pre-training corpus of largelanguage models. In turn, we find that even the most powerful GPT-4 model does not perform well ingenerating the correct rationale through few-shot prompting [ 7]. Given a powerful large languagemodel G, which we refer to as the oracle model2, we give the oracle model the ground-truth tacticaiand let the oracle model produce the thought tigiven the current state siand ground-truth tacticai. This helps improve the pass rate and produce thought-augmented data more efficiently. Ourfew-shot prompt is provided in Appendix F. The design principle of the prompt is to prevent theoracle model from generating hindsight-like thoughts. With a new retrospectively annotated datasetby the oracle model DT, we obtained our first thought-augmented tactic prediction model, Lean-CoT,by fine-tuning from the SFT model.2.3 Bootstrapping Thought-augmented Theorem ProvingWe propose to apply expert iteration to further improve the performance of Lean-CoT. Specifically,we start from the initial Lean-CoT model M0and the initial dataset D={si:i= 1,···, M},which consists of all initial states siof the theorems to be proved. In iteration 1, we usemodel Mto sample Ktimes per problem. Each time the model will produce a proof trajectory[(s0, t0, a0),(s1, t1, a1),···,(sn, tn, an)]. Then we create a new dataset D1by filtering the gener-ated trajectories to include only the successful ones. De-duplication is then applied to the collectedtrajectories. Now, we can further fine-tune the SFT model Mon dataset DT∪D1to produceLean-STaR model M1. Then we can similarly produce Lean-STaR model M2fromM1.3 ExperimentsWe instantiate Lean-STaR using the best available open language model pre-trained on the Leancorpus (InternLM2-Math-base-7b [ 26]), and follow standard practice in using Lean’s Mathlib as theunderlying training set (via the Lean Dojo dataset [ 25]). Our experimental results show that bothretrospective rationale generation and expert iteration significantly improve the theorem-provingcapabilities of language models in this setting. We describe our setup and findings in detail below.3.1 Experimental SetupWe use LeanDojo Benchmark 4 v9 as the supervised fine-tuning (SFT) dataset containing 231,240data examples. We fine-tune for 1epoch to obtain the SFT model. For the learning rate, we use awarmup in the first 20% steps from 0to2×10−5, followed by a cosine schedule decaying to zero.We randomly select 17,256different successful proof trajectories from LeanDojo Benchmark 4dataset [25], and use GPT-4-0125 [ 17] to annotate 52,438thoughts from those proof trajectories. Wefiltered out all proof steps (si, ai)for which aicontains the newline symbol “\n” before annotating.We perform two iterations of expert iteration, and provide the details in Appendix A.1 due to space.We evaluate our method on the MiniF2F benchmark [ 28]. We use a similar evaluation setting asprevious works [ 25,24,26], but use our sampling method instead of best-first search for the evaluationof our thought-augmented theorem proving model. We choose these settings to resemble the inferencebudget used in our baselines, which follow previous work [24, 4, 26].3.2 Main ResultsOur main results are reported in Table 1. Lean-STaR gives a significant improvement. For instance,with a similar inference budget, Lean-STaR achieves 34.8% versus 30.3%in InternLM2 [ 26] usingbest-first search and 30.7%in COPRA [ 22] using GPT-4. With a larger compute budget, Lean-STaR’sperformance improves further to 36.1%.2For instance, in our experiments we use the best available large language model, GPT-4.3Table 1: Pass rates on the minif2f-test dataset with Lean. This table shows the pass rates ofprevious works and our work. Sis the number of tactics attempted at each expanded node (assumedto be 1in sampling) and Kis the total number of search or sampling attempts per problem. Insampling we use temperature 0.7, and in search we use beam search when generating the next tactic.Note that we sample 32examples twice when K= 64 in sampling.APPROACH DECODING N K S PASS RATEGPT-3.5 [1] (F EW-SHOT) S AMPLING 50 1 1 2.8%GPT-4 [1] (F EW-SHOT) S AMPLING 50 1 1 11.9%TRANSFORMER [19] ( W/ORL) S EARCH 512 1 8 24.6%LLEMMA -7B[4] (F EW-SHOT) S EARCH 50 1 32 26.2%REPROVER [25] S EARCH 50 1 64 26.5%TRANSFORMER [19] ( W/ RL) S EARCH 512 1 8 29.6%INTERN LM2-20 B[26] (F EW-SHOT) S EARCH 50 1 32 29.5%COPRA ( WITH GPT-4) [22] C USTOMIZED - 100 1 30.7%INTERN LM2-7 B[26] (F EW-SHOT) S AMPLING 50 32 1 28.7%INTERN LM2-7 B[26] (F EW-SHOT) S EARCH 50 1 32 30.3%SFT (I NTERN LM2-7 B) S AMPLING 50 32 1 29.5%LEAN-COT(INTERN LM2-7 B) S AMPLING 50 32 1 32.8%LEAN-ST AR (I TER-1)(INTERN LM2-7 B) S AMPLING 50 32 1 34.0%LEAN-ST AR (I TER-2)(INTERN LM2-7 B) S AMPLING 50 32 1 34.8%LEAN-ST AR (I TER-2)(INTERN LM2-7 B) S AMPLING 50 64 1 36.1%INTERN LM2- PLUS -7B[26] (F EW-SHOT) (FROMPAPER )SEARCH 1000 1 32 43.4%INTERN LM2- PLUS -7B[26] (F EW-SHOT) (REPRO -DUCED )SEARCH 1000 1 32 42.6%INTERN LM2- PLUS -7B[26] (F EW-SHOT) S AMPLING 50 32 1 40.9%SFT (I NTERN LM2- PLUS -7B) [26] (F EW-SHOT) S AMPLING 50 32 1 41.3%LEAN-COT(INTERN LM2- PLUS -7B) S AMPLING 50 32 1 43.4%LEAN-ST AR (I TER-1)(INTERN LM2-7 B) S AMPLING 50 32 1 45.4%LEAN-ST AR (I TER-1)(INTERN LM2- PLUS -7B) S AMPLING 50 64 1 46.3%Thought augmentation improves theorem proving. The first phase of Lean-STaR trains a modelto interleave thoughts and tactics, by fine-tuning on a synthesized dataset of thought-augmentedexamples. The fine-tuned model from this phase, denoted LEAN-COTin Table 1, achieves a pass rateof32.8%, which is higher than the model prior to this phase, denoted SFT (29.5%). We concludethat the first phase of Lean-STaR can improve the theorem proving ability of a language model, evenone that is already specialized for generating tactics in Lean such as the SFT model.Bootstrapping improves thought-augmented theorem proving. The second phase of Lean-STaRconsists of generating new thoughts and tactics with the current language model, saving those thatresult in correct proofs, and training on the union of the initial thought-augmented dataset and thesaved examples (i.e., expert iteration [19, 27, 20]). Refer to Appendix A.1 for details.We perform two iterations of expert iteration, and present the results in Table 1, denoted LEAN-ST AR.Each iteration improves the model’s theorem proving performance, from 32.8% (the initial model)to 34% ( LEAN-ST ARafter iteration 1) to 34.8% ( LEAN-ST ARafter iteration 2). Furthermore, wefind that the model is amenable to further improvement via additional sampling, achieving 36.1%by doubling the sampling budget. We conclude that Lean-STaR’s second phase can further improvea model’s ability to generate thoughts and tactics that lead to correct proofs. We include threequalitative examples in the Appendix, which show the model interleaving thoughts and proof steps.3.3 Experiments with stronger base model and more dataWe also instantiate Lean-STaR using a stronger language model (InternLM2-Math-plus-7b [ 26]),which was released after the experiment above. Our new results are also reported in Table 1. We can4Table 2: Results for the InternLM2-plus-7b and our Lean-CoT, Lean-STaR, and expert iterationwithout CoT. We use sampling with N= 50, K= 32,&T= 0.7.APPROACH Pass@32 OFINTERN LM-B ASE Pass@32 OFINTERN LM-P LUSFEW-SHOT 28.7% 40 .9%SFT 29.5%(+0 .8%) 41 .3%(+0 .4%)LEAN-COT 32.8%(+ 3.3%) 43 .4%(+ 2.1%)LEAN-ST AR 34.0%(+1 .2%) 45 .5%(+ 2.1%)EXPERT ITERATION (SFT) 30.7%(+1 .2%) 43 .0%(+1 .7%)see that Lean-STaR still gives a significant improvement over the baseline. For instance, Lean-STaRachieves 45.4%versus 39.8%in InternLM-plus using sampling with a similar inference budget and43.4%using best-first search with more inference budget reported in [ 26]. This results show that bothretrospective rationale generation and expert iteration can improve the theorem-proving capabilitieson a stronger base model.3.4 Experiments on expert iteration without CoTTable 2 shows the result of expert iteration without CoT (i.e., using (state, tactic) pairs only) as wellas the result of Lean-CoT and Lean-STaR. Expert iteration alone achieves 43.0%, which is less thanLean-STaR (45.4%) in InternLM-plus and achieves 30.7% verus 39.8% in InternLM-base. Thisshows that Lean-STaR’s performance gains do not only come from the use of expert iteration.4 Conclusion & LimitationsIn this paper, we presented Lean-STaR, a novel approach that significantly enhances the theorem-proving capabilities of language models in formal mathematics by integrating Chain-of-Thought(CoT) rationales into each proof step. We further improved this model using expert iteration,fine-tuning it on correct proofs it samples and verifies using the Lean solver. Our contributionsinclude the introduction of the first thought-augmented theorem proving dataset, demonstratingthat expert iteration can further improve performance, and achieving much better results on theminiF2F-test benchmark, increasing the pass rate from 30.3%to36.1%. These advancements arenot only about improving the accuracy of automated theorem proving, but also offer a scalableand efficient framework for advancing human understanding of mathematics, which may lead tosignificant impacts in education, scientific discovery, and program verification [8, 12, 21, 3, 10, 16].The primary limitation of our method is that its performance may be constrained by issues ofcomputational scalability. Both Lean-CoT and Lean-STaR have been fine-tuned on a dataset thatis not very large. Additionally, the use of GPT-4 to generate synthetic data may incur a significantcost and possibly introduce biases. Also, expert iteration could face a bottleneck due to CPU and IOlimitations, which might slow down the process due to a sluggish speed of Lean ITP.References[1]Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia LeoniAleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4technical report. arXiv preprint arXiv:2303.08774 , 2023.[2]Thomas Anthony, Zheng Tian, and David Barber. Thinking fast and slow with deep learningand tree search. Advances in neural information processing systems , 30, 2017.[3] Jeremy Avigad. Mathematics and the formal turn, 2023.[4]Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer,Albert Q Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open languagemodel for mathematics. arXiv preprint arXiv:2310.10631 , 2023.5[5]Jasmin Christian Blanchette, Cezary Kaliszyk, Lawrence C Paulson, and Josef Urban. Hammer-ing towards qed. Journal of Formalized Reasoning , 9(1):101–148, 2016.[6] Sascha Bohme and Tobias Nipkow. Sledgehammer: judgement day. In Automated Reasoning:5th International Joint Conference, IJCAR 2010, Edinburgh, UK, July 16-19, 2010. Proceedings5, pages 107–121. Springer, 2010.[7]Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models arefew-shot learners. Advances in neural information processing systems , 33:1877–1901, 2020.[8]Nathan C Carter and Kenneth G Monks. Lurch: a word processor that can grade students’proofs. In CICM Workshops , 2013.[9]Lukasz Czajka and Cezary Kaliszyk. Hammer for coq: Automation for dependent type theory.Journal of automated reasoning , 61:423–453, 2018.[10] Emily First. Automating the Formal Verification of Software . PhD thesis, 2023. URL https://scholarworks.umass.edu/dissertations_2/2812 .[11] Fabian Gloeckle, Baptiste Roziere, Amaury Hayat, and Gabriel Synnaeve. Temperature-scaledlarge language models for lean proofstep prediction. In The 3rd Workshop on MathematicalReasoning and AI at NeurIPS’23 , 2023.[12] Dongyeop Kang, Andrew Head, Risham Sidhu, Kyle Lo, Daniel S Weld, and Marti A Hearst.Document-level definition detection in scholarly documents: Existing models, error analyses,and future directions. arXiv preprint arXiv:2010.05129 , 2020.[13] Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, AmauryHayat, Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. Hypertree proof search for neuraltheorem proving. Advances in neural information processing systems , 35:26337–26349, 2022.[14] The mathlib Community. The lean mathematical library. In Proceedings of the 9th ACMSIGPLAN International Conference on Certified Programs and Proofs , CPP 2020, pages 367–381, New York, NY , USA, 2020. Association for Computing Machinery. ISBN 9781450370974.doi: 10.1145/3372885.3373824. URL https://doi.org/10.1145/3372885.3373824 .[15] Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin,David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, et al. Showyour work: Scratchpads for intermediate computation with language models. arXiv preprintarXiv:2112.00114 , 2021.[16] National Academies of Sciences. Artificial intelligence to assist mathematical reasoning:Proceedings of a workshop, 2023.[17] OpenAI. OpenAI: GPT-4, 2023. URL https://openai.com/research/gpt-4 .[18] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.arXiv preprint arXiv:2009.03393 , 2020.[19] Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and IlyaSutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344 ,2022.[20] Avi Singh, John D Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Peter J Liu, JamesHarrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, et al. Beyond human data: Scaling self-trainingfor problem-solving with language models. arXiv preprint arXiv:2312.06585 , 2023.[21] Christian Szegedy. A promising path towards autoformalization and general artificial intel-ligence. In Intelligent Computer Mathematics: 13th International Conference, CICM 2020,Bertinoro, Italy, July 26–31, 2020, Proceedings 13 , pages 3–20. Springer, 2020.[22] Amitayush Thakur, Yeming Wen, and Swarat Chaudhuri. A language-agent approach to formaltheorem-proving. arXiv preprint arXiv:2310.04353 , 2023.6[23] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le,Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.Advances in neural information processing systems , 35:24824–24837, 2022.[24] Sean Welleck and Rahul Saha. Llmstep: Llm proofstep suggestions in lean. arXiv preprintarXiv:2310.18457 , 2023.[25] Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil,Ryan Prenger, and Anima Anandkumar. LeanDojo: Theorem proving with retrieval-augmentedlanguage models. In Neural Information Processing Systems (NeurIPS) , 2023.[26] Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, YichuanMa, Jiawei Hong, Kuikun Liu, Ziyi Wang, Yudong Wang, Zijian Wu, Shuaibin Li, FengzheZhou, Hongwei Liu, Songyang Zhang, Wenwei Zhang, Hang Yan, Xipeng Qiu, Jiayu Wang,Kai Chen, and Dahua Lin. Internlm-math: Open math large language models toward verifiablereasoning, 2024.[27] Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. Star: Bootstrapping reasoning withreasoning. Advances in Neural Information Processing Systems , 35:15476–15488, 2022.[28] Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark forformal olympiad-level mathematics. arXiv preprint arXiv:2109.00110 , 2021.7A Additional Experiment SetupA.1 Lean-STaR Expert IterationThe second phase of Lean-STaR consists of generating new thoughts and tactics with the currentlanguage model, saving those that result in correct proofs, and training on the union of the initialthought-augmented dataset and the saved examples (i.e., expert iteration [ 19,27,20]). We performtwo iterations of expert iteration, and provide details on our specific experimental setup below.In each iteration we use sampling on the LeanDojo Benchmark 4 dataset, and save the (state, thought,tactic) examples that are part of successful proofs. For each problem, we sample K= 32 timesin parallel with temperature T= 1.0, and limit the number of times a tactic can be generated to atotal of N= 5per problem. Also, sampling is limited to 1minute per problem. In this setup, eachproblem needs on average about 0.5A100 minutes. We collect successfully sampled trajectories toproduce a “STaR dataset” D1, and up to 3proof trajectories were collected for each problem. Wecollected 32,231different (proof state, thoughts, next-tactic) pairs in successful proof trajectoriesduring expert iteration, which takes about 4days with 8×A100GPUs. Then, we further fine-tuneSFT model for 1epoch on the combination of GPT-4 annotated reasoning data and expert iterationdataDT∪D1to get the Lean-STaR model. We use the same learning rate setup that was used for theSFT model. In the second iteration, we generate a dataset D2in a similar fashion. Then, we chose tofurther fine-tune model from iteration 1,M1, on the generated dataset D2(roughly 19k pairs).The setup of experiment about InternLM2-plus is slightly different. The details are shown in Section3.3 and Appendix E.B Statistics for our methods as well as the baselinesTable 3: Statistics for the baselines and our Lean-CoT, Lean-STaR on MiniF2F dataset. We usesampling method with hyperparameters N= 50 & K= 32 & T= 0.7.APPROACH # (C ONTINUAL ) TRAINING DATA Pass@32INTERN LM2-M ATH-7B(FEW-SHOT) - 28.7% -SFT 231,240 29 .5% +0 .8%LEAN-COT 52,438 32 .8% +3 .3%LEAN-ST AR (I TER-1) 32,231 34 .0% +1 .2%LEAN-ST AR (I TER-2) 19,324 34.8% +0 .8%8C An Example and Explanation of A Formal Proof in LeanAn example of a formal proof in Lean with its visualization is shown in Figure 2, taken from [ 13]. Inthe proof, the tactic induction k is is applied to the initial state ( n≤m⇒n+k≤m+k) andthe ITP converts the current state to subgoals case 0 ∧case ih :n≤m∧n+k≤m+k⇒n+ (k+ 1)≤m+ (k+ 1) . The case 0: n≤mis our hypothesis h0so it can be proven by case0:exact h0tactic. Then, we rewrite the case ih through the nat.succ_le_succ_iff which isa theorem in Lean library means n≤m⇔n+ 1≤m+ 1. After tactics case 0:exact h0andcase ih:rw nat.succ_le_succ_iff , the goal state is converted to n+k≤m+kwhich is thehypothesis introduced by induction. Therefore, we can complete this proof using tactic exact k_ih .theorem add_le_add_right (m n k : N) (h 0: n≤m): n + k ≤m + k :=induction k with| zero =>exact h 0| succ k ih =>rw Nat.succ_le_succ_iffexact ihFigure 2: A example proof and its visualization of n≤m⇒n+k≤m+kin Lean, takenfrom [ 13].Theinduction tactic reduces the initial statement to two subgoals. Then tactics case0:exact h0andcase ih:rw nat.succ_le_succ_iff ,case ih:exact k_ih can be appliedin turn to complete the proof.9D Performance Analysis by Types and Difficulties using InternLM2-plus-7bTable 4 reports the number of problems successfully proved, partitioned by type and difficulty usingInternLM2-plus. We see that Lean-CoT improves performance mainly in Number Theory and Lean-STaR improves performance in solving difficult problems on all categories, which is the opposite ofthe performance of the InternLM2-base.Table 4: Counts of problems successfully proved in minif2f-test benchmark using InternLM2-plus-7b,split by type and difficulty. The methods use sampling with N= 50, K= 32 .TOTALTESTSETSIZEINTERN LM2- PLUS -7BLEAN-COTLEAN-ST AR(ITER-1)IMO 20 0 0 0AIME 15 3 3 4AMC 45 9 9 10MATHALGEBRALEVEL 5 14 6 6 6LEVEL 4 14 9 9 9LEVEL 3 14 11 13 13LEVEL 2 14 11 11 11LEVEL 1 14 10 10 10NUMBER THEORYLEVEL 5 16 7 7 7LEVEL 4 11 6 8 8LEVEL 3 11 6 7 9LEVEL 2 11 7 9 9LEVEL 1 11 10 10 10CUSTOMALGEBRA 18 4 3 4NUMBER THEORY 8 0 0 0INDUCTION 8 1 1 110E Performance difference of joint training and continue trainingAs shown in Table 5, the joint training method performs better using InternLM2-base but trainingmethod performs much better using InternLM2-plus. It seems that there are no difference betweenthese two methods. Therefore, this performance can be depend on the quantity of data or the model.(We use much more data when using InternLM2-plus and the quantity of "STaR data" is relativelysmall.)Table 5: Performance difference of joint training and continue training on Lean-STaR. We usesampling method with hyperparameters N= 50 & K= 32 & T= 0.7. In continue training, wefurther fine-tune the Lean-CoT model on "STaR data" to get Lean-STaR model and in joint trainingwe fine-tune the SFT model on combination of GPT-4 annotated reasoning data and "STaR data".APPROACH INTERN LM2- BASE -7B INTERN LM2- PLUS -7BLEAN-COT 32.8% 43 .4%LEAN-ST AR (I TER-1)(JOINT TRAINING )34.0% 43.9%LEAN-ST AR (I TER-1)(CONTINUE TRAINING )33.2% 45.5%F Retrospective Rationale Generation Prompt of GPT-4Please act as a professional mathematician.Your goal is to accurately prove a math theorem in Lean4.You are given the first tactic that should be taken to prove the Given Theorem.# Example:Given Theorem:```lean4X : Type u_1Y : Type u_2inst ✝1: TopologicalSpace Xinst ✝: TopologicalSpace Yf : X →. Yh : PContinuous f⊢IsOpen (PFun.preimage f Set.univ)```Given Tactic:```lean4exact h _ isOpen_univ```Assistant Output:### PLANNING OF REASONINGGiven the above goal and the given the target tactic `exact h _ isOpen_univ `, Ishould produce a reasoning that only based on the goal, and conclude to the `exact h _ isOpen_univ `, but not in the hindsight format.### REASONINGBy directly using the hypothesis `h : PContinuous f `, we state that for any open set, its preimage via `f`is also open, and this applies to `Set.univ `as well.### TACTIC11```lean4exact h _ isOpen_univ```To achieve the goal, you have three jobs.# Write down the planning of reasoning.# Write down the reasoning that can result in the selection of the Given Tactic.# Write down the Given Tactic.You have four principles to do this.# Craft reasoning that is comprehensive yet concise, contained ideally within twosentences.# Begin reasoning with the objective of the Given Theorem, ensuring logicalprogression, rather than constructing it retrospectively from the Given Tactic.# Avoid mentioning the Given Tactic and specific statement names from the GivenTactic throughout your reasoning.# Position the conclusion of the tactic after presenting all reasons, ensuring thatdetails regarding the Given Tactic are not introduced at the start of yourreasoning.Your output should be strictly in the following format and should not contain extracontent:### PLANNING OF REASONINGGiven the above goal and the given the target tactic <Given Tactic>, I shouldproduce a reasoning that only based on the goal, and conclude to the exact <Given Tactic>, but not in the hindsight format.### REASONING<your reasoning to the Given Tactic>### TACTIC<Given Tactic># Given Theorem:```lean4{theorem["state_before"]}```# Given Tactic:```lean4{theorem["tactic"]}```12G Examples of generated Lean proofs with thoughtstheorem aime_1990_p4 (x : R) (h 0: 0 < x) (h 1: x ^ 2 - 10 * x - 29 ̸=0)(h2: x ^ 2 - 10 * x - 45 ̸=0) (h 3: x ^ 2 - 10 * x - 69 ̸=0)(h4: 1 / (x ^ 2 - 10 * x - 29) + 1 / (x ^ 2 - 10 * x - 45) - 2 / (x ^ 2 - 10 *x - 69) = 0): x = 13 :=norm_num at h 1h2h3-- Reasoning: To prove `x = 13 `, we examine the conditionsunder which the given equation holds, specifically focusing on thenon-equality of the expressions involving `x ^ 2 - 10 * x - 29 `,`x ^ 2 - 10 *x - 45 `, and `x ^ 2 - 10 * x - 69 `. Simplifying these expressions usingproperties of real numbers and algebra allows us to directly compare them withtheir respective values, ultimately identifying `x = 13 `as the uniquesolution that satisfies all constraints.field_simp at * -- Reasoning: To prove \(x = 13\), we leverage the givenequalities involving rational fractions of \(x^2 - 10x - c\) for variousconstants \(c\), where \(0 < x\) and each \(x^2 - 10x - c\) is non-zero. Byalgebraically manipulating these expressions, we aim to derive a relationshipamong \(x\) and these constants, specifically focusing on \(13\), under thegiven non-zero conditions for the quadratic expressions.ring_nf at h 4⊢-- Reasoning: To establish the equality `x = 13 `under thegiven conditions, we need to manipulate the algebraic expression given by `h4`to reveal its dependence on `x`, recognizing that the presence of `x`in theexpressions may affect the equation 's truth value. The application ofalgebraic transformations on both sides of the equation will facilitate theidentification of `x`'s value, taking into consideration the inequalityconditions provided ( `h1`,`h2`,`h3`).nlinarith -- Reasoning: To prove \(x = 13\), we must manipulate the equationgiven by \(h 4\) to isolate \(x\), using the algebraic properties of equalityand the assumption \(h 0\) which states \(0 < x\). This requires carefulrearrangement and application of algebraic identities to simplify andrearrange terms, facilitating the direct application of the properties of realnumbers to reach the desired conclusion.theorem amc12a_2002_p6 (n : N) (h 0: 0 < n):∃m, m > n ∧ ∃ p, m * p ≤m + p :=use n + 1, by linarith only [h 0]-- Reasoning: To demonstrate the existence ofsuch an `m`and `p`that satisfy the given inequalities, we leverage theproperty of natural numbers that ensures an increase in `n`from `0`leads tothe possibility of finding such `m`and `p`. This approach is supported by thefundamental property of natural numbers and their arithmetic operations,suggesting that for a given `n`greater than `0`, there exists a successor `m`and a number `p`that satisfy the conditions outlined.exact 〈0, by simp 〉-- Reasoning: To demonstrate that there exists an \(n \in N\) such that \((n + 1) * p \leq n + 1 + p\) holds for some \(p \in N\), werecognize that setting \(p = 0\) satisfies the inequality by the properties ofzero in arithmetic. This is because \(n + 1 > 0\) by assumption, ensuring theleft-hand side is non-negative and the right-hand side simplifies to \(n +1\), making the inequality trivially true.13theorem aime_1983_p2 (x p : R) (f : R→R) (h 0: 0 < p ∧p < 15) (h 1: p≤x∧x≤15)(h2: f x = abs (x - p) + abs (x - 15) + abs (x - p - 15)): 15≤f x :=cases 'h1with h 1h2-- Reasoning: Given a function \(f\) defined over the realnumbers, where \(f(x)\) equals the sum of the absolute values of thedifferences \(x - p\), \(x - 15\), and \(x - p - 15\), along with conditionson \(p\) and \(x\) that ensure \(x\) lies between \(p\) and \(15\) and \(p\)is positive, we aim to show that \(f(x) \geq 15\). To demonstrate this, weconsider all possible cases for the position of \(x\) relative to \(p\) and\(15\), specifically at \(p\) itself, between \(p\) and \(15\), and at \(15\).By evaluating \(f(x)\) for each case, we can establish a lower bound of\(f(x)\) that is greater than or equal to \(15\).simp [h 0, abs_of_nonneg (sub_nonneg.2 h 1), abs_of_nonneg (sub_nonneg.2 h 2), *]-- Reasoning: The objective is to show that \(f(x)\) is at least 15 for \(x\)within the bounds \(p \le x \le 15\), where \(p\) is positive but less than15. The provided expression for \(f(x)\) suggests using properties of absolutevalues and inequalities to simplify and analyze \(f(x)\). By applyingsimplification rules for absolute values and inequalities, along with thegiven bounds, we aim to demonstrate the desired inequality \(f(x) \ge 15\).simp only [abs_of_nonneg] at * -- Reasoning: The objective is to establish aninequality involving an expression with absolute values and a sum, where allterms are functions of \(x\). By recognizing that the absolute value functionsare always non-negative, we can simplify the given expression by removing theabsolute value symbols, thus focusing on the arithmetic manipulation of theterms to demonstrate the inequality.simp only [abs_of_nonneg, sub_add_eq_add_sub] at * -- Reasoning: The objectiveis to demonstrate that the expression `15≤x - p + |x - 15| + |x - p - 15| `holds given the conditions on `p`,`x`, and the function `f`. Since theexpression in question involves absolute values which inherently involvenon-negativity, manipulating these expressions to highlight theirnon-negativity or equality is a logical initial step. This involvesrecognizing that the absolute value of a difference can be rewritten as thedifference of absolute values, which simplifies the manipulation of theexpression to prove the inequality.congr 1 -- Reasoning: To establish the inequality `15≤x + |x - 15| + |x - p- 15| - p `, we recognize that the left-hand side of the inequality involves amix of additions and subtractions of real numbers. By introducing a congruencetransformation that simplifies operations involving absolute values, we aim toalign both sides of the inequality in a form that directly reflects theproperties of real numbers under these operations.simp only [abs_of_nonpos (sub_nonpos_of_le h 2), abs_of_nonneg (sub_nonneg_of_leh1)]-- Reasoning: The objective is to prove an inequality involving absolutevalues and subtractions that aligns the given conditions and constraints.Simplification using the properties of absolute values and the provided boundscan directly lead to the desired conclusion by expressing the absolute valuesin terms of their underlying expressions and applying the given constraints.simp [abs_of_pos, *] -- Reasoning: To prove the given inequality involvingabsolute values, simplification using algebraic properties and givenhypotheses will directly lead us to the desired inequality by recognizing thealgebraic manipulation and application of the known bounds `h1`and `h2`, aswell as the non-negativity of `p`.rw [abs_of_nonpos] -- Reasoning: The objective is to demonstrate that `15≤15+ |x - p - 15| - p `under the given constraints on `p`and `x`. Given `p < 15 `and `p≤x≤15`, it 's apparent that `p - 15 `is non-positive. Thisobservation allows us to apply the property that the absolute value of anon-positive number is equal to its negative value, thereby simplifying theexpression on the right-hand side of the inequality.all_goals linarith -- Reasoning: The goal involves demonstrating a relationshipbetween expressions involving inequalities and subtraction. The simplificationof these expressions into a form that directly compares their numerical valuescan lead to a straightforward application of known inequalities and algebraicproperties, demonstrating the inequality 's validity under the given conditions.14 |
8cNJyqs45T | Interleaving Text and Number Embeddings to SolveMathemathics ProblemsMarvin Alberts1,2,3Gianmarco Gabrieli1Irina Espejo Morales11IBM Research2University of Zürich3NCCR Catalysis{marvin.alberts, irina.espejo.morales}@[email protected] text and numbers effectively is a crucial step towards enhancing LargeLanguage Models (LLMs) capabilities in assisting in scientific tasks. While mostcurrent approaches rely on discrete tokenization of numbers, for instance, conver-sion to scientific notation or base 10-decomposition, a recent approach proposed acontinuous numerical encoding as an inductive bias. In this paper, we build uponthis approach by introducing more expressive numerical embeddings. Our methodaddresses key shortcomings, including the elimination of numerical artefacts andthe ability to handle a wide range of magnitudes without clipping.Our work presents two key contributions. First, we employ an MLP to assigndistinct directions in the embedding space to different numbers. Our second contri-bution is the introduction of a routing layer that differentiates between numericaland text embeddings. We hypothesise that this combined approach enables themodel to distinguish between text and number distributions while maintaining itscapacity for arithmetic operations.Using only a 45 M parameter encoder-decoder architecture our method achieves aR2=0.9988 over a wide range of magnitude ( 10−3,108). In addition, we empiri-cally observe a reduction of the numerical artefacts and biases observed comparedto the baselines.1 IntroductionTransformer-based autoregressive language models have revolutionized machine learning over thepast years [ 1,2,3]. Originally designed for neural machine translation, these models have foundapplications across diverse domains, including natural language processing, image analysis, andmathematical problem-solving [ 4,5,6,7]. Despite their versatility in natural language tasks, accuratemathematical reasoning remains a challenge [ 8], especially when Chain-of-Thought prompting or ascratchpad are not employed [ 9]. The limits of the transformer architecture to perform mathematicalcomputations and predictions from first principles have been extensively explored in the literature[ 8,10,11,12,13]. However, the majority of approaches leverage digit-by-digit, scientific notation orbase 10 formats to encode and decode numbers [ 14]. This discretization of numbers leads to inherentprediction artefacts, especially for unstructured and non-synthetic numerical input data. For thisreason, XVALhas proposed to introduce continuity as an inductive bias [ 15], to reduce the impact ofdiscontinuous tokenization. Despite the improved performance in regression tasks and the reductionof artefacts, XVALlimits the magnitude by normalising the numerical inputs. In this work, we builtupon XVALincorporating continuity as an inductive bias, and pose the question: Does increasingthe expressivity of the numerical embeddings, by allowing each number to attend to its surroundingtext, improve the performance on number prediction in mathematical questions? For instance, it isintuitive that the number "2" should attend to words like "eggs" or "apple" and the number "0.0001"to words like "infinitesimal" or "fraction". The second question we aim to address: Can a routing38th Conference on Neural Information Processing Systems (NeurIPS 2024).layer (as in Mixture-of-Experts) differentiating between text and numbers induce structure in theprediction of the model to improve the generation of numbers?Related WorkRegarding number tokenization, a large number of LLMs use variants of Byte Pair Encoding (BPE)[16] to tokenize numbers in the same way as text. As shown by [ 8] and [ 12] this introduces a limit tothe number understanding of these models. Charton 10demonstrates that transformer models canperform complex matrix operations and proposes a scheme tokenizing numbers as a series of tokensfor sign, mantissa and exponent with varying levels of precision. As noted by [ 15], this approach doesnot take into consideration the continuous nature of numbers which is important in scientific domains,and instead, the authors introduce an architecture called XVALwhich bypasses the need for numbertokenization but not for number preprocessing and assigns a to each number a single numericalembedding modulated by the magnitude of the number itself. In the arithmetics applications [ 11] thatconcern this paper [ 14], the work by [ 17] shows that by adding positional encodings to digits it ispossible to generalize out of the maximum training length of digits.2 Our approachIn this work, we introduce Multimodal Decoding (MMD) which builds on the XVAL[15] architecture.XVALembeds numbers by introducing a number token, <num> . The embedding of this numbertoken is multiplied by the actual value of the number, effectively scaling the embedding of thenumber token. To decode numbers XVALintroduces two heads, one for tokens and one for numbers.During decoding if the <num> -token is predicted the model embedding is routed to the number headpredicting a number.Our architecture is similar. We also embed numbers separately but instead of scaling an embedding,we use an MLP allowing each number a distinct embedding direction learned by attending thesurrounding text and as such providing more expressivity (see Figure 1 (a)). Another major changeconsists of the introduction of a routing layer inspired by Mixture-of-Experts (MoE) models [ 18].Instead of relying on the text head to predict the <num> -token and then passing the model embeddingto the number head, the routing layer predicts whether a given model embedding is routed to thetext or number head (see Figure 1 (b)). This approach is motivated by the following insights: i)An embedding for a given number will be attending to the text that surrounds it, and differentnumbers will attend to different words; ii) The routing layer will force text and number embeddings0...1bos eosThe result of the multiplication is Interleaved embedding sequenceTransformer Block x NTextpreprocessingNumberpreprocessingTextembeddingNumberembedding (MLP) 1234 Interleaved tokensequence(a) (b)Transformer Block ×N Routing LayerTEXT NUMTEXT Head NUM HeadThe result of the multiplication is1234 EncodingThe result of the multiplication is 1234Figure 1: Diagram summarizing the MMD decoding method presented in this paper. (Left) High-levelworkflow of the encoder, routing layer classifying the modality, and the decoder. (Right) Zoom in onencoding an interleaved text and number sequence where, for text, the usual tokenization scheme isfollowed and for each number a new embedding vector is trained end-to-end.2Figure 2: Comparison of log-log prediction vs ground truth values for arithmetic computations in thetest set for baselines BPE and World-level (bottom row from left to right) and our MMD method (toprow). The colour of each point indicates the relative error between prediction and ground truth withdarker being a lower error.to have a different distribution so that arithmetic calculations can be performed with numericalembeddings inside the transformer blocks; and iii) Instead of hard-coding an inductive bias fornumerical embeddings with this method allows for finding optimal embedding distributions thatmight otherwise be missed.3 ExperimentsTo evaluate the efficacy of our architecture, we conduct experiments on two progressively morecomplex tasks. We begin with a relatively straightforward problem: predicting numerical answersto arithmetic problems. In this task, we constrain the arithmetic problems such that all answers arenumbers ( Numbers Only ).We then extend our evaluation to more challenging math problems. In this case, we introduceproblems that require either numerical or textual responses, or a combination of both ( Text andNumbers ).Since we propose fundamental changes on numerical embeddings and different losses, we selected asimple dataset, the Mathematics dataset [ 19], see Appendix C. We use two versions of the dataset.One containing only numerical answers and the second containing prompts with both numbers astext as possible answers.Table 1: Results for the numbers-only prediction experiments for baselines (BPE and Word-level)and for three versions of our MMD method.MAE↓ RMSE ↓ MRE↓ R2↑BPE 8.28·1041.05·1070.1230.1230.123 −168.22Word-level 5.07·1045.02·1060.833 0 .616XVAL 1.13·1044.51·1066.04·1040.9948MMD (Ours) 8.59·1034.94·1051.04·1040.9962MMD-log (Ours) 2.22·1041.93·1060.143 0 .9434Multi-MLP MMD (Ours) 5.60·1035.60·1035.60·1032.86·1042.86·1042.86·1044.84·1030.99880.99880.99883Table 2: Results for the text and numbers prediction experiments for baselines (BPE and Word-level)and for three versions of our MMD method.F1- SCORE ↑ MAE↓ RMSE ↓ MRE↓ R2↑BPE - 2.41·10153.92·10174.02·1014−5.34·1020Word-level - 4.94·10118.12·10133.52·108−2.32·1014XVAL 0.764 4.98·1063.59·1076.72·1070.141MMD (Ours) 0.766 1.41·1072.8·1075.85·1060.6790.6790.679MMD-log (Ours) 0.880 4.72·1054.72·1054.72·1056.00·1066.00·1066.00·1067.33·1037.33·1037.33·103−6.70Multi-MLP MMD (Ours) 0.769 1.78·1074.51·1071.53·107−0.230Numbers OnlyWe train a total of three variants of our model. The first with a one-layer MLP to encode numbers andsimilarly a one-layer MLP as the number head. In the second variant, we predict the log-transformednumbers and in the third variant, the number head consists of a three-layered MLP. We compare thesemodels to three baselines: One transformer model trained with a BPE tokenizer, i.e. numbers aretokenised digit by digit, a word level tokenizer and lastly XVAL. All three of the variants of the modeloutperform the baseline on MAE, RMSE and R2(see Table 1). Especially encouraging is analysingthe log-log plot of the predictions vs the targets (see Figure 2). Here we observe specifically forBPE and word-level artefacts with the model predicting multiple magnitudes off the target. On theother hand, we also observe failure modes in our architecture. While the predictions and targets trackwell starting from around 102, the model fails to predict small values. This problem is addressed bylog-transforming the numbers yielding a log-log curve closely following the identity.Text and NumbersWe then extend the evaluation of our method to mathematical problems beyond pure arithmetic,encompassing tasks which comprise interleaved text and numbers in various problems (e.g., algebra).We leverage the same methods and baselines reported in Section 3 and report the results in Table 2.In this case, our methods obtain performance metrics that are several orders of magnitude better thanthe baselines, demonstrating the challenge of text-only models to process interleaved sequences. Inparticular, log-transformed numerical representations result in improved accuracy compared to theother two variants, with the only expection of the R2score for the base MMD version. Moreover,the results show that logarithmic pre-processing is pivotal for differentiating text and numbers, withan F1-score on the modality classification that is 0.11higher than the other methods. As expected,the results are generally worse than in Section 3, due to the higher complexity of the interleavedmulti-task dataset.4 ConclusionIn this paper we investigated two strategies to interleave text and numbers as input to a standardencoder-decoder transformer architecture. The first strategy allows more expressive numerical em-beddings by assigning a distinct vector space to different numbers. The second strategy introduces arouting layer that differentiates between text and number tokens. This serves to create a desirablestructure in the embedding space differentiating between text and numbers without explicitly intro-ducing any inductive bias. From our experiments on predicting the numerical outcomes of arithmeticcomputations, we observed that our approach exhibited the most balanced set of metrics and whencomparing predicted vs true values, our method was the closest to the identity showing consistentperformance along magnitudes ranging from 10−3to108. We also note that even if a baselineperformed well for a particular metric, the true vs predicted plots reveal artifacts and undesirablebiases. The main limitation of this study is the simplicity of the arithmetic’s dataset, which we intendto address in future work by incorporating more complex datasets and perform experiments against awider set of numerical encodings and architectures in the literature.4References[1]Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal,Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, ArielHerbert-V oss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, MateuszLitwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, AlecRadford, Ilya Sutskever, and Dario Amodei. Language Models are Few-Shot Learners, May2020. URL https://arxiv.org/abs/2005.14165v4 .[2]Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle,Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, AnthonyHartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark,Arun Rao, Aston Zhang, Aurelien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Roziere,Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi,Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, CorinneWong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, DaniellePintz, Danny Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano,Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, EmilyDinan, Eric Michael Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee,Georgia Lewis Anderson, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, HaileyNguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, IsabelKloumann, Ishan Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, and et.al. The Llama 3 Herd of Models, August 2024. URL http://arxiv.org/abs/2407.21783 .arXiv:2407.21783 [cs].[3]Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, LukasBlecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes,Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, AnthonyHartshorn, Saghar Hosseini, and et. al. Llama 2: Open Foundation and Fine-Tuned Chat Models,2023. URL http://arxiv.org/abs/2307.09288 . arXiv:2307.09288 [cs].[4]Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need, August 2023. URL http://arxiv.org/abs/1706.03762 . arXiv:1706.03762 [cs].[5]Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena,Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the Limits of Transfer Learning with a UnifiedText-to-Text Transformer, September 2023. URL http://arxiv.org/abs/1910.10683 .arXiv:1910.10683 [cs, stat].[6]Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai,Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly,Jakob Uszkoreit, and Neil Houlsby. An Image is Worth 16x16 Words: Transformers for ImageRecognition at Scale, October 2020. URL https://arxiv.org/abs/2010.11929v2 .[7]Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer,Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An Open Lan-guage Model For Mathematics, March 2024. URL http://arxiv.org/abs/2310.10631 .arXiv:2310.10631 [cs].[8]Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. Do NLP modelsknow numbers? probing numeracy in embeddings. In Kentaro Inui, Jing Jiang, VincentNg, and Xiaojun Wan, editors, Proceedings of the 2019 Conference on Empirical Meth-ods in Natural Language Processing and the 9th International Joint Conference on Natu-ral Language Processing (EMNLP-IJCNLP) , pages 5307–5315, Hong Kong, China, Novem-ber 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1534. URLhttps://aclanthology.org/D19-1534 .[9]Flavio Petruzzellis, Alberto Testolin, and Alessandro Sperduti. Benchmarking gpt-4 on al-gorithmic problems: A systematic evaluation of prompting strategies, 2024. URL https://arxiv.org/abs/2402.17396 .5[10] François Charton. Linear algebra with transformers, 2022. URL https://arxiv.org/abs/2112.01898 .[11] Alberto Testolin. Can neural networks do arithmetic? a survey on the elementary numerical skillsof state-of-the-art deep learning models. Applied Sciences , 14(2):744, January 2024. ISSN 2076-3417. doi: 10.3390/app14020744. URL http://dx.doi.org/10.3390/app14020744 .[12] Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, and Dan Roth. Do languageembeddings capture scales? In Trevor Cohn, Yulan He, and Yang Liu, editors, Findings of theAssociation for Computational Linguistics: EMNLP 2020 , pages 4889–4896, Online, November2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.439.URL https://aclanthology.org/2020.findings-emnlp.439 .[13] Dhanasekar Sundararaman, Shijing Si, Vivek Subramanian, Guoyin Wang, Devamanyu Haz-arika, and Lawrence Carin. Methods for numeracy-preserving word embeddings. In Bonnie Web-ber, Trevor Cohn, Yulan He, and Yang Liu, editors, Proceedings of the 2020 Conference on Em-pirical Methods in Natural Language Processing (EMNLP) , pages 4742–4753, Online, Novem-ber 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.384.URL https://aclanthology.org/2020.emnlp-main.384 .[14] Avijit Thawani, Jay Pujara, Pedro A. Szekely, and Filip Ilievski. Representing numbers in nlp:a survey and a vision, 2021. URL https://arxiv.org/abs/2103.13136 .[15] Siavash Golkar, Mariel Pettee, Michael Eickenberg, Alberto Bietti, Miles Cranmer, GeraudKrawezik, Francois Lanusse, Michael McCabe, Ruben Ohana, Liam Parker, Bruno Régaldo-Saint Blancard, Tiberiu Tesileanu, Kyunghyun Cho, and Shirley Ho. xval: A continuous numberencoding for large language models, 2023. URL https://arxiv.org/abs/2310.02989 .[16] Philip Gage. A new algorithm for data compression. C Users J. , 12(2):23–38, 1994. ISSN0898-9788.[17] Sean McLeish, Arpit Bansal, Alex Stein, Neel Jain, John Kirchenbauer, Brian R. Bartoldson,Bhavya Kailkhura, Abhinav Bhatele, Jonas Geiping, Avi Schwarzschild, and Tom Goldstein.Transformers can do arithmetic with the right embeddings, 2024. URL https://arxiv.org/abs/2405.17399 .[18] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton,and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-expertslayer, 2017. URL https://arxiv.org/abs/1701.06538 .[19] David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematicalreasoning abilities of neural models, 2019. URL https://arxiv.org/abs/1904.01557 .6A Illustration of the routing layerSee Figure 3 for a graphical description of the different baselines and our methods. All experimentsstart with the same question and answer pair but text and numbers processed differently. For thebaselines BPE and Word-level in yellow, the routing layer is inactive because numbers are processedas text only.Figure 3: Schematic representation for the different baselines and our models indicating wether therouting layer is active or not when a number is in the input.B Experiment settingsModel: The backbone structure is a standard encoder-decoder architecture with 4 layers in theencoder transformer block and 4 layers for the text decoder block and a 2-layer fully-connectedMLP for the numeric head. The activation functions are Gaussian Error Linear Units (GELU). Allexperiments were performed using a single NVIDIA A100 40GB GPU. Models were trained for 50epochs by setting the learning rate to 1e−4for all the experiments.Metrics: The metrics we use to evaluate number predictions for the result of arithmetic calculationsare Mean Absolute Error (MAE), Mean Squared Error (MSE), Mean Relative Error (MRE), MedianRelative Error (MedRE) and the R2coefficient. Instead of reporting one of them, together theypresent a better picture of the distribution and biases of errors with respect to the range of the trueanswer.C Data samplesExamples of question and answer pairs of the Mathematics dataset [ 19] that were used for training ofNumbers Only andText and Numbers experiments presented in the paper.7Question: Calculate -971810940.335 + 612120.Answer: -970586700.335Question: Solve -12*t - 4482 = 64*t - 383*t + 141*t for t.Answer: 27Question: Does 15 divide 8287819?Answer: False8 |
6pUdfsJmd1 | AI-Assisted Generation of Difficult Math QuestionsVedant Shah*1,2Dingli Yu3Kaifeng Lyu3Simon Park3Jiatong Yu3Yinghui He3Nan Rosemary Ke1Michael Mozer4Yoshua Bengio1,2Sanjeev Arora3Anirudh Goyal*11Mila – Quebec AI Institute2Université de Montréal3Princeton University4University of Colorado, BoulderAbstractCurrent LLM training positions mathematical reasoning as a core capability. Withpublicly available sources fully tapped, there is an unmet demand for diverse andchallenging mathematics questions. Relying solely on human experts is both time-consuming and costly, while LLM-generated questions often lack the requisitediversity and difficulty. We present a design framework that combines the strengthsof LLMs with a human-in-the-loop approach to generate a diverse array of chal-lenging math questions. Initially, leveraging LLM metacognition skills [Didolkaret al., 2024], a strong LLM is used to extract core “skills” from existing mathdatasets. These skills serve as the basis for generating novel and difficult questionsby prompting the LLM with random pairs of core skills that must be utilized in thequestion. The use of two very different skills within each question makes findingsuch questions an “out of distribution” task for both LLMs and humans. Ourpipeline employs LLMs to iteratively generate and refine questions and solutionsthrough multi-turn prompting. Human annotators then verify and further refine thequestions, with their efficiency enhanced via further LLM interactions. Applyingthis pipeline on skills extracted from MATH dataset [Hendrycks et al., 2021] re-sulted in MATH2- a dataset of higher quality math questions, as evidenced bylower performance of all models on MATH2than on MATH. Although focusedon mathematics, our methodology seems applicable to other domains requiringstructured reasoning, and potentially as a component of scalable oversight . Also ofinterest is a striking relationship observed between models’ performance on thenew dataset: the success rate on MATH2is the square on MATH. This suggeststhat successfully solving the question in MATH2requires a nontrivial combinationof two distinct math skills.1 IntroductionSignificant improvement in the capabilities of LLMs [Chowdhery et al., 2023, Anil et al., 2023, Team,2023, Team et al., 2023, Abdin et al., 2024, Achiam et al., 2023, Touvron et al., 2023] to understandand generate complex mathematical content has been achieved by leveraging all the public data and afair bit of private data. Sources of high-quality, varied, and difficult mathematical questions are dryingup. Even finding new questions for evaluation is getting difficult since newly-released human examsare somewhat similar to past exams, which are potentially present in the LLMs’ training datasets.Hence, there is a pressing need for innovative methods to create new, diverse, and challengingquestions.∗Correspondance: [email protected], [email protected] Conference on Neural Information Processing Systems (NeurIPS 2024).Expert mathematicians and educators possess the deep understanding required to create questions thatnot only test a wide range of skills but also push the boundaries of what the learners, and by extension,the models, can handle. However, relying solely on human experts is not scalable. Generatingsynthetic questions using LLMs is feasible at scale [Trinh et al., 2024, Li et al., 2024, Gunasekaret al., 2023, Patel et al., 2024, Toshniwal et al., 2024, Gupta et al., 2023, Lu et al., 2024, Honovichet al., 2022], but often fall short in terms of the necessary difficulty. Huang et al. [2024] employs asimilar approach as ours where they extract topics and corresponding keypoints from a set of seedproblems using GPT-4, and then combine the topic to generate new questions, again using GPT-4).However, the generated data is meant to be used for the finetuning of models as compared to servingas an evaluation set in our case. As a result, the questions generated in Huang et al. [2024] arenot sufficiently difficult. Similarly, limited work exists on ensuring the necessary diversity in thegenerated synthetic data. Chan et al. [2024] proposes prompting frontier models to generate questionswhere each question is generated in the context of a persona as a way of ensuring diversity. They use1M different personas to generate questions, which are then used for finetuning models, leading tosignificant improvements. This dichotomy between the quality of human-generated questions and thescalability of LLM-generated questions presents a significant challenge [Yu et al., 2024].1.1 Evaluation Saturation PhenomenonLLM evaluations are becoming saturated due to improvements from better training and largerdatasets, but also from evaluation-specific optimizations like supervised fine-tuning (SFT) on syntheticquestion-answer pairs. These pairs, generated by leading proprietary models or filtered from themodel’s responses, can significantly boost performance. For instance, just 1 million syntheticexamples raised Llama2 7B’s MATH dataset performance to GPT-4 levels [Li et al., 2024].The distinction between general and evaluation-specific improvements is key, as the latter can lead tooverfitting rather than real skill acquisition. This issue was evident when models showed performancedrops on a new GSM8K dataset version and on newer Chinese GaoKao exams, suggesting shallowunderstanding of mathematics [Zhang et al., 2024].1.2 Proposed Framework: AI-assisted Generation of Difficult Math QuestionsRecent research [Arora and Goyal, 2023, Didolkar et al., 2024] demonstrated that top LLMs possessa robust understanding of mathematical skills, including the capability to identify the skills requiredto solve given questions [Reid et al., 2024, Achiam et al., 2023]. This naturally raises the question:can LLMs operate in the reverse direction, i.e., generate math problems when given a list of skills thathave to be tested? Our initial attempts yielded mixed results. While leading models could producecreative math questions when provided with a list of skills, the majority of these questions exhibitedone or more of the following shortcomings: too similar to existing questions in datasets; have errorsor nonsensical elements; are too tedious or mechanical to be engaging for human annotators. (SeeSection A.5.) Moreover, they often conflate “difficulty” with tedious calculations, which actuallywould play to the strength of machines to leverage external tools such as calculators or Pythoninterpreters.Nevertheless, there were promising instances where LLMs generated interesting and correct questionsthat they were unable to solve, due to incomplete or incorrect reasoning. This observation led usto the concept of AI-assisted creation of evaluation datasets . Our process may also be of interestfor human pedagogy since it begins with the extraction of core "skills" from existing math datasets,which serve as the foundational elements of mathematical questions. The current paper focuses onthe MATH dataset [Hendrycks et al., 2021], a mainstay of LLM evaluation in recent years.In our AI-assisted process, human experts played a crucial role. Using the (question, answer)pairs generated by LLMs and leveraging API access to leading models, experts identified promisingquestions—-often those incorrectly answered by the LLMs but containing many correct ideas. Expertsthen refined these questions to enhance their engagement value and provided gold-standard answers.The AI-assisted process not only boosted human productivity but also resulted in high-quality, novelquestions distinct from those in existing datasets.2SkillDescriptionsSkill 1Question 1Solution 1Question 2Solution 2Question 3Solution 3SkillDescriptionsHuman-AI Conversation ExemplarsGenerated Question Skill Description sValidationExemplarsVerified QuestionSkill DescriptionsValid / Invalid Generated QuestionAttempted SolutionGenerated Question Attempted SolutionValid / InvalidFinal Solution(A) Skill Pair Validation (B) Question Generation(C) Attempted Solution (D) Question Validation (E) Final SolutionSkill 2Question 1Solution 1Question 2Solution 2Question 3Solution 3(a)GPT-4 OmniClaude 3.5 SonnetGPT-4 TurboGemini-1.5-Pro Claude-3 OpusLlama-3.1-70B-InstructLlama-3-70B-InstructMetaMath-70BMAmmoTH-70BMixtral-8×7B-InstructMetaMath-13BMAmmoTH-13Bdeepseek-math-7b-instructLlama-3.1-8B-InstructLlama-3-8B-Instructgemma-1.1-7b-InstructMetaMath-7BMAmmoTH-7BPhi-3-mini-128k-instructgemma-1.1-2b-InstructModel020406080100Accuracy [%]Model Performance ComparisonMATH2MATH (b)Figure 1: (a) AI-assisted question generation : The five-step pipeline includes: (i) Skill PairValidation, ensuring distinct skills; (ii) Question Generation, producing a problem that combines bothskills; (iii) Attempted Solution, where the model solves using a defeatist approach; (iv) QuestionValidation, assessing correctness, rigor, and quality; and (v) Final Solution, applying advancedtechniques to enhance accuracy. (b) Comparison of Zero-Shot Performance : This figure showszero-shot Chain of Thought (CoT) performance on MATH and MATH2. Proprietary models showthe smallest performance drop on MATH2, while smaller models experience larger drops. Detailedresults are in Table 1.2 Pipeline for AI-Assisted Question GenerationWe present a structured approach to generating challenging mathematics questions by combiningthe capabilities of large language models (LLMs) and human expertise. Given below is a high-leveloverview of the process before delving into the details of each step.We begin our pipeline with skill extraction - identifying and cataloging distinct mathematical skillsfrom a dataset, as described in Didolkar et al. [2024]. This step creates a repository of skills linked tospecific questions. The motivation behind this is to systematically generate and analyze questionsthat require specific skills, ensuring a comprehensive evaluation framework.We employ a five-step approach to generate difficult math questions using advanced models. For eachround of generation, we randomly sample a pair of skills and three sample question-solution pairscorresponding to each skill from the skill repository. These reference examples are sourced from theMATH dataset.Step 1: Skill Pair Validation. We begin by asking the LLM (GPT-4 or Claude) to validate a randomlysampled skill pair by assessing the qualitative similarity of the two skills. Reference examples areprovided in-context to enrich the model’s understanding of the skills. If the model deems the skillstoo similar, they are flagged and excluded from question generation, as similar skills might lead tosimpler questions.Step 2: Question Generation. Next, we prompt the LLM to generate a question and a brief solutionrequiring the application of both skills in the sampled pair. We specify two conditions to ensure high-quality questions: the question should either require an exact answer or specify that an approximateanswer is acceptable, and it should ask for only a single final result. In-context, we provide twomulti-turn conversations between a human and an AI assistant. These conversations demonstratethe human providing feedback on the AI-generated questions, which the AI then refines. This helpsthe model anticipate and avoid practical issues, such as insufficient involvement of skills or logicalinconsistencies. Appendix A.6 provides examples of the responses of different models in the questiongeneration step.Step 3: Solution Attempt. The model then attempts a solution to the generated question, adopting anadversarial approach to identify flaws such as insufficient information, ambiguity, self-contradiction,or excessive computation. If any issues are found, the model stops solving and clearly states theproblems. Otherwise, it completes the solution. During this step, the model does not receive the skillnames or reference examples to ensure unbiased problem-solving.Step 4: Question Validation. We give LLM the generated question and its solution for validationagainst a fixed rubric consisting of seven criteria. We detail the validation criteria in Appendix A.1.The model uses reference examples and validation exemplars - model generated examples of validatingquestions, to facilitate this step. We employ majority voting (maj @ 4) to enhance robustness.3Step 5: Final Solution and Re-validation. For questions classified as valid, we ask the LLMto re-solve the question to obtain a final solution. Reference examples are provided in-context toimprove the model’s understanding. We use majority voting (maj @ 4) to ensure consistency. If allthe answers obtained in this step are unique, indicating potential ambiguity, the question is discarded.The questions obtained from the above pipeline are further screened by humans. This structuredapproach not only generates challenging and novel math questions but also ensures their qualitythrough rigorous validation, effectively combining the strengths of AI and human oversight. Fordetailed examples of prompts used at each step, refer to Appendix A.7.3 Experiments and FindingsThrough our experiments, we demonstrate the difficulty and quality of the MATH2while alsoanalyzing the behavior of different models on this task of compositional generalization . Firstly,we evaluate a wide range of models spanning a large range of parameter counts on MATH2andcompare against their performance on MATH [Hendrycks et al., 2021] which is the base datasetused for extracting skills, showing that the MATH2is necessarily harder than MATH. Next, wefurther demonstrate the difficulty and quality of questions in MATH2by showing that they are betterin-context exemplars as compared to standardly used exemplars. We describe the experimental setupbelow.3.1 Experimental SetupWe follow the pipeline proposed in [Didolkar et al., 2024] to extract skills from the MATH dataset[Hendrycks et al., 2021]. The MATH dataset encompasses seven high-level topics, allowing us toidentify and extract finer-grained skills within each topic and label each question accordingly. Atthe end of the skill-extraction process, we identify a set of 114 skills. We then remove a few simpleskills, such as basic_arithmetic andarithmetic_operations , before using the remaining setto generate questions using the proposed approach. We generate and verify 210 difficult questionsto create the MATH2dataset. Out of the 210 questions, 116 questions were generated using GPT-4Turbo, 3 using GPT-4 Omni, 51 using Claude-3 Opus and 40 using Gemini-1.5-Pro. Figure 3 showsthe distribution of skills in MATH2.Table 1: Comparison of Zero-Shot CoT Performance (Accuracy)on the Generated Dataset vs. MATH Test Set : GPT-4 Omnidemonstrates the least drop in percentage terms (16.73%) whereasMAmmoTH-7B shows the highest relative drop (93.92%).Model MATH2(Y) MATH (X) % DropGPT-4 Omni 64.29% 77.21% 16.73%Claude 3.5 Sonnet 46.15% 73.54% 37.24%GPT-4 Turbo 53.11% 73.27% 27.51%Gemini-1.5-Pro 39.71% 67.70% 41.34%Claude 3 Opus 37.14% 61.20 % 39.31%Llama-3.1-70B-Instruct 50.48% 67.40% 25.10%Llama-3-70B-Instruct 18.09% 47.89% 62.23%MetaMath-70B 8.61% 26.27% 67.22%MAmmoTH-70B 6.19% 19.31% 67.94%Mixtral-8 ×7B-Instruct 10.00% 31.52% 68.27%MetaMath-13B 6.19% 21.32% 70.96%MAmmoTH-13B 2.38% 10.99% 78.34%Deepseek-math-7b-instruct 16.83% 45.05% 62.64%Llama-3.1-8B-Instruct 28.09% 50.92% 44.83%Llama-3-8B-Instruct 9.05% 28.62% 68.38%Gemma-1.1-7B-Instruct 6.19% 23.36% 73.50%MetaMath-7B 1.91% 18.69% 89.78%MAmmoTH-7B 0.48% 7.90% 93.92%Phi-3-mini-128k-instruct 23.34% 48.29% 51.67%Gemma-1.1-2B-Instruct 2.38% 7.52% 68.35%Table 2 presents details ofthe changes made to thequestions during the humanverification process. In to-tal, 33.81% of the question-answer pairs in MATH2ap-pear exactly as phrased bytheir LLM creator.We evaluate the generatedset of questions on a varietyof language models, bothsmall and large. Specifi-cally, we assess the Meta-Math [Yu et al., 2023],MAmmoTH [Yue et al.,2023], Gemmma [Teamet al., 2024], and Llama-3 series, Phi-3, deepseek-math as well as one Mixture-of-Experts model Mixtral-8×7B-Instruct. Addition-ally, we include evaluationsof larger proprietary mod-els such as GPT-4o, GPT-4 Turbo2[OpenAI, 2023],Gemini-1.5-Pro, Claude 3.52Points to gpt-4-turbo-2024-04-09 at the time of writing4Sonnet3and Claude 3 Opus4. We compare the performances of these models on our generatedquestions against their performance on the MATH dataset [Hendrycks et al., 2021].We observe that the performance of models on MATH2follows a quadratic relationship with theperformance of models on MATH. We also demonstrate the high quality of the generated questionsby showing that they act as superior in-context exemplars for MATH dataset. We refer the reader toAppendix A.4 for more discussion on these observations and for further implementation and computedetails.4 ConclusionsWe introduced a framework that leverages the complementary strengths of humans and AI to generatenew, challenging mathematics questions. Building on recent insights into LLM metaknowledge,we use LLMs to extract and name key skills necessary for solving math problems. Using theseinsights, we developed a pipeline that employs named skills from the well-known MATH dataset,and leverages multi-turn interactions with advanced LLMs to generate questions that combine pairsof skills. These questions were subsequently reviewed and refined by human raters. The proposedpipeline produced questions with greater novelty and difficulty compared to those in the originalMATH dataset. This framework also resulted in a new math evaluation MATH2, that assesses thesame skills as the MATH dataset but is significantly more challenging for leading models becauseeach question involves two skills from different parts of MATH.We plan to release detailed information about our pipeline to encourage further research and develop-ment in the field of open-source math models.Limitations and Future Work. Our pipeline incurs moderately high costs due to extensive API-baseduse of frontier models as well as significant human verification. To improve efficiency, future workshould focus on using open weights models and optimizing prompting strategies to produce higher-quality questions initially, thereby reducing the need for extensive filtering. Additionally, reducinghuman verification through the development of automated validation tools is crucial. This couldinclude leveraging code generation and autoformalization capabilities of LLMs to generate responseswhich can be compiled using compilers or interpreters. Enhancing our pipeline by integrating atraining-based feedback loop, where the model is trained on the questions that pass human verification,could further streamline the process by progressively improving question quality. These measures willreduce dependency on expensive proprietary models, lower overall operational costs, and maintain oreven enhance the quality of the generated math evaluation benchmarks.Looking ahead, an even more exciting prospect is the potential application of the proposed frameworkto efficiently produce high-quality data in domains beyond mathematics.5 AcknowledgementsThis research used compute resources provided by Mila (mila.quebec) and GPT4 access as well ascompute resources provided by Princeton Language and Intelligence (PLI). The Princeton partici-pants were funded by NSF, DARPA, and PLI. VS would like to thank Aniket Didolkar for helpfuldiscussions throughout the project and for proof reading the paper. AG would like to thank MelvinJohnson, James McClelland and Yoram Bachrach for helpful discussions and useful feedback. AGwould also like to thank Daan Wierstra, Melvin Johnson, Siamak Shakeri, Murray Shanahan, JohnQuan, Theophane Weber, Olivier Tieleman, David Silver, Charles Blundell, Behnam Neyshabur,Ethan Dyer and Nicolas Heess for support and guidance.ReferencesM. Abdin, S. A. Jacobs, A. A. Awan, J. Aneja, A. Awadallah, H. Awadalla, N. Bach, A. Bahree,A. Bakhtiari, H. Behl, et al. Phi-3 technical report: A highly capable language model locally onyour phone. arXiv preprint arXiv:2404.14219 , 2024.J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt,S. Altman, S. Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 , 2023.3Points to claude-3-5-sonnet-20240620 at the time of writing4Points to claude-3-opus-20240229 at the time of writing5R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey,Z. Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403 , 2023.S. Arora and A. Goyal. A theory for emergence of complex skills in language models. arXiv preprintarXiv:2307.15936 , 2023.X. Chan, X. Wang, D. Yu, H. Mi, and D. Yu. Scaling synthetic data creation with 1,000,000,000personas. arXiv preprint arXiv:2406.20094 , 2024.A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung,C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes,Y . Tay, N. Shazeer, V . Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin,M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski,X. Garcia, V . Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph,A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat,A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz,O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel. Palm:Scaling language modeling with pathways. Journal of Machine Learning Research , 24(240):1–113,2023. URL http://jmlr.org/papers/v24/22-1144.html .A. Didolkar, A. Goyal, N. R. Ke, S. Guo, M. Valko, T. Lillicrap, D. Rezende, Y . Bengio, M. C. Mozer,and S. Arora. Metacognitive Capabilities of LLMs: An Exploration in Mathematical Problemsolving. 2024. URL https://api.semanticscholar.org/CorpusID:269921384 .A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Yang,A. Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 , 2024.S. Gunasekar, Y . Zhang, J. Aneja, C. C. T. Mendes, A. Del Giorno, S. Gopi, M. Javaheripi, P. Kauff-mann, G. de Rosa, O. Saarikivi, et al. Textbooks are all you need. arXiv preprint arXiv:2306.11644 ,2023.H. Gupta, K. Scaria, U. Anantheswaran, S. Verma, M. Parmar, S. A. Sawant, S. Mishra, and C. Baral.Targen: Targeted data generation with large language models. arXiv preprint arXiv:2310.17876 ,2023.D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt.Measuring mathematical problem solving with the math dataset, 2021.O. Honovich, T. Scialom, O. Levy, and T. Schick. Unnatural instructions: Tuning language modelswith (almost) no human labor. arXiv preprint arXiv:2212.09689 , 2022.Y . Huang, X. Liu, Y . Gong, Z. Gou, Y . Shen, N. Duan, and W. Chen. Key-point-driven data synthesiswith its enhancement on mathematical reasoning. arXiv preprint arXiv:2403.02333 , 2024.W. Kwon, Z. Li, S. Zhuang, Y . Sheng, L. Zheng, C. H. Yu, J. Gonzalez, H. Zhang, and I. Stoica. Effi-cient memory management for large language model serving with pagedattention. In Proceedingsof the 29th Symposium on Operating Systems Principles , pages 611–626, 2023.C. Li, W. Wang, J. Hu, Y . Wei, N. Zheng, H. Hu, Z. Zhang, and H. Peng. Common 7b languagemodels already possess strong math capabilities. arXiv preprint arXiv:2403.04706 , 2024.Z. Lu, A. Zhou, H. Ren, K. Wang, W. Shi, J. Pan, M. Zhan, and H. Li. Mathgenie: Generatingsynthetic data with question back-translation for enhancing mathematical reasoning of llms. arXivpreprint arXiv:2402.16352 , 2024.OpenAI. Gpt-4 technical report, 2023.A. Patel, C. Raffel, and C. Callison-Burch. Datadreamer: A tool for synthetic data generation andreproducible llm workflows. arXiv preprint arXiv:2402.10379 , 2024.M. Reid, N. Savinov, D. Teplyashin, D. Lepikhin, T. Lillicrap, J.-b. Alayrac, R. Soricut, A. Lazaridou,O. Firat, J. Schrittwieser, et al. Gemini 1.5: Unlocking multimodal understanding across millionsof tokens of context. arXiv preprint arXiv:2403.05530 , 2024.6G. Team. Gemini: A family of highly capable multimodal models, 2023.G. Team, R. Anil, S. Borgeaud, Y . Wu, J.-B. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M.Dai, A. Hauth, et al. Gemini: a family of highly capable multimodal models. arXiv preprintarXiv:2312.11805 , 2023.G. Team, T. Mesnard, C. Hardin, R. Dadashi, S. Bhupatiraju, S. Pathak, L. Sifre, M. Rivière, M. S.Kale, J. Love, et al. Gemma: Open models based on gemini research and technology. arXivpreprint arXiv:2403.08295 , 2024.S. Toshniwal, I. Moshkov, S. Narenthiran, D. Gitman, F. Jia, and I. Gitman. Openmathinstruct-1: A1.8 million math instruction tuning dataset. arXiv preprint arXiv:2402.10176 , 2024.H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y . Babaei, N. Bashlykov, S. Batra,P. Bhargava, S. Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXivpreprint arXiv:2307.09288 , 2023.T. H. Trinh, Y . Wu, Q. V . Le, H. He, and T. Luong. Solving olympiad geometry without humandemonstrations. Nature , 625(7995):476–482, 2024.J. Wei, X. Wang, Q. Liu, B. Yang, X. Dong, H. Huang, and W. Wang. Chain-of-thought promptingelicits reasoning in large language models. arXiv , abs/2201.11903, 2022. URL https://doi.org/10.48550/arXiv.2201.11903 .L. Yu, W. Jiang, H. Shi, J. Yu, Z. Liu, Y . Zhang, J. T. Kwok, Z. Li, A. Weller, and W. Liu. Meta-math: Bootstrap your own mathematical questions for large language models. arXiv preprintarXiv:2309.12284 , 2023.Y . Yu, Y . Zhuang, J. Zhang, Y . Meng, A. J. Ratner, R. Krishna, J. Shen, and C. Zhang. Large languagemodel as attributed training data generator: A tale of diversity and bias. Advances in NeuralInformation Processing Systems , 36, 2024.X. Yue, X. Qu, G. Zhang, Y . Fu, W. Huang, H. Sun, Y . Su, and W. Chen. Mammoth: Building mathgeneralist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653 , 2023.H. Zhang, J. Da, D. Lee, V . Robinson, C. Wu, W. Song, T. Zhao, P. Raja, D. Slack, Q. Lyu, et al.A careful examination of large language model performance on grade school arithmetic. arXivpreprint arXiv:2405.00332 , 2024.A AppendixA.1 Validation CriteriaWe detail the seven criteria used for validating a question in the question validation step of thepipeline.• Single Answer Requirement: The question should ask for only one final answer.•Exact Answer Requirement: There should be only one exact answer, unless approximationsare explicitly stated.•Dual Answer Requirement: The question must necessarily and sufficiently involve the appli-cation of both skills, with difficulty comparable to or greater than the reference examples.•Clarity and Completeness: The question should be clear and contain all necessary informa-tion.•Computational Tractability: The question should not require overly complex computations.• Realism and Logic: The scenario should be realistic and logically consistent.• Syntax and Grammar: The question should be grammatically correct and clearly written.7A.2 Failure Modes and Interesting BehaviorsInsufficient involvement of skills. Despite clearly specifying that solving the question shouldnecessarily require a rigorous application of both skills, the models often generate questions thateither miss one of the skills completely or require a very shallow application of one (while the otherone is sufficiently involved) or both skills. This is the most prominent failure mode of the models inthe context of question generation. This leads to potentially easy questions, defeating the purpose ofskill composition. Consider the question given below which was generated by Claude Opus whenasked to combine the skills ratio_and_proportion andgeometry .Example: Question: A square garden is to be divided into 4smaller square plots by two paths that are 1 meter wide andcross each other at right angles. The paths run North-Southand East-West, splitting the garden symmetrically. If thetotal area occupied by the paths is 36 square meters, find theside length of the original square garden.Upon careful examination of the question, we note that although the question tests geometry ,the involvement of ratio_and_proportions is practically non-existent. Further, the questionvalidation step in some cases also fails to identify these flaws. Supplying multi-turn human-AIinteractions where the user prompts a chatbot to generate a question combining two skills, in-contextduring the generation step helps the models to avoid such questions to a certain extent. Further, tomake the question validation step more robust to such questions, we prompt the model to ensure thatthe complexity of each skill application in the question being validated in similar to or more than thecomplexity of these skills in the reference examples present in the skill descriptions. The combinationof these two techniques helps us nearly eliminate questions where the absent one of the skills isabsent completely and reduce questions involving shallow application of skills to a significant extent.Insufficient information in the questions. Another common failure mode of the pipelineis the generated questions missing information or details essential for solving the ques-tion. For example in the question given below which is supposed to combine the skillsunderstanding_and_applying_floor_and_ceiling_functions andbasic_trigonometry ,lacks sufficient detail about the inclinations and elevations of the paths relative to the streetlight’sposition which is necessary to answer the question.Example: Question: Consider a scenario where you need toinstall a new streetlight at a point such that it illuminatestwo paths meeting at a point, each path making an angle of 45◦with the horizontal. The light from the streetlight reachesa maximum distance of 10meters on flat ground. You are toinstall the streetlight at the height of hmeters (wherehis the ceiling of the maximum distance the light reacheshorizontally) such that the edge of the light’s reach justtouches the ground at the end of each path. Determine theheight hat which the streetlight should be installed.To screen such questions, we include and explicit clause in the question validation prompt as describedin Section 2. Moreover, we also notice that the inclusion of the solution attempt step improves thechances of detecting such errors since the missing information may not always be apparent from justthe question itself. In such cases, attempting a solution (with a defeatist approach) can help detectsuch flaws.Unsolvable or Computationally Intractable Questions. There are instances when the modelgenerates questions which are unsolvable. For example the question given below has no solutionwhich satisfies all three constraints (i.e., the area of the rectangle being 360 and the sides belongingto the two arithmetic progressions defined in the question.)Example: Question 1: There’s a rectangle with an area of360 square units. The length of the rectangle is part of an8arithmetic sequence starting at 5 and with a common differenceof 7. If the other side of the rectangle is also part ofan arithmetic sequence with the first term 10 and commondifference 3, find the length of the shortest side of therectangle.In other instances, the model generates questions that are computationally intractable or requiremanually and tediously iterating through a long sequence of values. For example, solving the questiongiven below requires manually calculating the first 100 terms of the sequence to find the sumExample: Question 2: Consider an infinite series of numbersarranged in sections, where the nth section contains thefirstn+12positive integers that are divisible by nbutnot by any smaller positive integer (except 1). For example,the 1st section contains 1, the 2nd section starts with 2, 4,6, 10, 12, and 16 the 3rd section starts with 3, 9, 15, 21,33, ... and so on. Let Sbe the sum of the first 100 termsof this series. Find the sum of the digits of S.While technically not wrong, such questions are not ideal for evaluating the reasoning abilities of themodels since they mostly involve brute force calculations. Further, in cases where the sequence ofcalculations is very long, the LLM’s performance may be bottlenecked by other limitations such asthe context length of the model.Thus, we strive to filter such questions out. We add an explicit condition to check for computationaltractability and solvability of the generated questions in the verification prompt. This check is assistedby the solution attempt produced by the model which will potentially point out any such problems.Nonsensical Questions. In several cases, the model comes up with questions which are nonsensical- confusing, incomprehensible, logically inconsistent or ambiguous. Consider the question givenbelow.Given below is an example of a question which is logically inconsistent. More concretely, a squareplot of land whose side length is equal to the radius cannot fit inside the quarter-circle.Example: Question: A garden is designed in the shapeof a quarter-circle with a radius of 8 meters. A squareplot of land with a side length equal to the radius of thequarter-circle is placed inside this garden such that two ofits sides are along the straight edges of the quarter-circleboundary. If the square plot of land is to be tiled entirelywith square tiles each of area 64 square centimeters, what isthe total number of tiles required?We add checks for such cases in the question validation prompt. Further, at the end of the finalsolution step (maj @ 4), we further check for cases where the final answer produced in all the 4self-consistency trials are unique. If all answers are unique, we discard the question. The rationalebehind this being that it is highly likely that the model produces a different answer every time dueto some inherent ambiguity in the question which was not detected in the solution attempt and thequestion validation checks.Deceitful Solutions. Although rare, we encounter cases where the model makes up solutions eventhough the question is nonsensical or cannot be solved with the amount of information providedin the question. This happens very commonly in the solutions which are generated in the questiongeneration prompt. Thus, we do not use these solutions and include the final solution step where themodel is asked to solve the question again. Although most of such solutions and thus questions arescreened out in the question validation step and consistency check at the end of the final solutioncheck, in rare cases we see this behavior in the solution produced after the final solution step as well.Given below is one such example.9Example: Question: Consider the trigonometric identitysin2(x) + cos2(x) = 1 and the polynomial P(x) = x4−x2−12.Using x= sin( θ), solve P(x) = 0 forθin the interval [0,2π).While solving this question, the model arrives at the conditions sin(θ) = 2 orsin(θ) =−2. Clearly,these conditions have no solutions since −1≤sin(θ)≤1,∀θ∈[0,2π]. However, the model goeson to argue that the the closest values to 2 and -2 in the range of sin(θ)are 1 and -1, and thus, goeson to solve the question for sin(θ) = 1 orsin(θ) =−1.Finding a needle in the haystack. In an attempt to make the question validation step robustto as many failure modes as possible, we come up with a long and sophisticated prompt (seeAppendix A.7.4 for an example of this prompt). To elaborate, the validation prompt comprises of1.) skill descriptions consisting of 3 exemplars for each of the 2 skills, 2.) 6 very long validationexemplars 3.) rest of the prompt consisting of the 7 conditions described in Section 2. In prioriterations of the validation prompt, the names of the two skills which are supposed to be combinedin the question, were mentioned only once throughout the prompt. In such a case, we observed thatmore often than not, model confused the pair of skills at hand, with skills mentioned in the validationexemplars. We also observed cases where the model was unable to locate the original skill names,came up with its own skill names depending on the skill exemplars provided in the skill descriptions.More specifically, while checking the question for Dual Skill Requirement in Section 2, the modelswould check the question for the presence of skills distinct from the pair of skills at hand. We attributethis phenomenon to the the lost in the middle orfinding a needle in the haystack challenges faced byLLMs. We provide an example of the phenomenon below. Notice how in Point 3 of the reasoningtrace the model checks the question for the presence of two completely unrelated skills. We wereable to nearly eliminate this problem by replacing referential terms (such as above given skills ,theseskills , etc.) to the two skills by the skill names.Example: Skill Pair to be checked for:[understanding_and_manipulation_of_rational_functions, division_and_remainders]Question: A fair 6-sided die is rolled 4 times. What isthe probability that the product of the numbers rolledis divisible by 36? Express your answer as a simplifiedfraction.Reasoning trace of the model:# REASONINGLet’s evaluate the given question and solution against thespecified criteria:1. **Single Answer Requirement**: The question asks for asingle value - the probability of a specific event occurring.This satisfies the single answer requirement.2. **Exact Answer Requirement**: The solution provides anexact answer in the form of a simplified fraction (61/2386).No approximations or assumptions are made, fulfilling theexact answer requirement.3. **Dual Skill Requirement**: The question requiresknowledge and application of two distinct mathematicalskills: - Probability theory: Calculating the probability ofspecific outcomes in a series of die rolls. - Number theory:Understanding divisibility rules for 4 and 9, and how theyapply to the product of numbers.The solution demonstrates a rigorous application of bothskills, using probability calculations (such as the binomialprobability for rolling a 4 at least once) and numbertheoretic concepts (divisibility rules for 4 and 9) to arriveat the final answer. The complexity of skill application ison par with the provided examples.104. **Clarity and Completeness**: The question is clearlystated and provides all necessary information (fair 6-sideddie, rolled 4 times, product divisible by 36) to solve theproblem. No ambiguity or missing information is present.5. **Relevancy of Information**: All given information isrelevant and used in the solution. There are no extra detailsthat do not contribute to solving the problem.6. **Realism and Logic**: The scenario of rolling a diemultiple times and calculating probabilities is a realisticand commonly encountered problem in probability theory. Thedata (6-sided die, 4 rolls) is logically consistent.7. **Syntax and Grammar**: The question is grammaticallycorrect and clearly written, with no room formisinterpretation.The solution is well-structured and easy to follow, breakingdown the problem into clear steps and explaining the reasoningbehind each step. The mathematical reasoning is sound,correctly applying probability and number theory conceptsto arrive at the final answer.# FINAL ANSWERYesNote that none of the above failure modes are completely eliminated in the pipeline described inSection 2. Thus, human verification is required.Despite struggling with the failure modes described above, there also exist cases where the modelsexhibit positively surprising and creative behaviors. We talk about some of them below.Thinking out of the box. Although rare, we observe instances where the models get creative whilevalidating the question. Consider the question belowExample: Question: A class of students is learning aboutcombinatorics and geometry. They are given a probleminvolving colored beads: Red, Blue, and Green. If they needto form a necklace with 8 beads such that no two adjacentbeads have the same color and the necklace begins and endswith a bead of a different color, how many different necklacescan they create? Each necklace is counted up to rotation andreflection (considering the necklace can be flipped over).When validating this question using prior iterations of the question validation prompt, which didnot consist of the computational tractability check, the model output while validating the questionconsists of the following excerpt.Example: ...This might introduce a significant challenge not solely due to themethodology’s complexity but also due to the potential computational require-ment, which may not be feasible in a standard test environment without tools.Furthermore, while the connection to practical geometry (reflective and rotationalsymmetry) and combinatorics (color patterning and adjacency constraints) isstrong, the depth of understanding required to manually adjust for these symmetryconsiderations in a test question might be too intense or require more guidedlearning than a single evaluation question could provide....i.e, the model takes into consideration the fact that the question involves a lot of brute force computa-tion, despite there being no explicit check for computation complexity in the prompt, and classifiesthe question as invalid. We attribute such out of the box thinking behavior to the role-playing natureof our prompts. Our prompts consist of a math teacher evaluating the the fitness of the given questionfor being used for testing students’ reasoning and analytical skills in a math exam. This leaves roomopen for the model to detect potential problems not explicitly accounted for in the prompts whichmight make the question unfit for being used for evaluation.11A.3 Considerations for human-annotatersHuman annotators were tasked with double checking the validity of the question and the correctnessof the solution. They were asked to look out for any of the failure modes discussed in Section A.5.They were asked to check that the created question actually used the math skills it was supposedto exhibit and to improve the question with respect to readability, quality and difficulty. They wereencouraged to suggest changes that make the problem harder to solve using automated tools whileretaining easiness for the humans. We illustrate with an examples.GPT-4 created the following question given the skill-tags recursive_functions_and_sequencesandmultiplication_and_division :Example: Original Question: Consider the sequence definedrecursively by a1= 1 andan+1= 2an+nfor all n≥1. What isthe product of the first five terms of this sequence?An LLM can solve this by simple computation. The human modified the question so that solving theproblem requires understanding the underlying pattern.Example: Modified Question: A sequence is definedrecursively as follows: the first term a1is2, and for n≥2,an= 2n−1+n. What is the logarithm (base 2) of the averageof the first 50 terms of this sequence? Round down to thenearest integer.For the modified question, one leading model mentioned calculation difficulties for the inability togive any answer, and another resorted to an incorrect numerical approximation that led to an incorrectanswer.Human annotators were also asked to go through the solutions carefully and correct or improvethe solution for good questions if necessary. They were also asked to look out for questions thatcontain lot of enumeration, i.e. questions which are tedious and require significant amount ofbrute force computation. For such questions, the annotators were encouraged to reword them suchthat enumeration is not a feasible strategy below. For example, given below is an example of anenumerative question which was modified to avoid enumeration.Example: Original Question: Find the sum of the smallestprime divisor and the largest prime divisor of the numberN= 154+ 164.Modified Question: Find the sum of the two smallest primedivisors of 2317+ 1717.Models tend to adopt brute force approach on the original question calculating 154+ 164. Afterrephrasing the models cannot use brute force on 2317+ 1717, instead being forced to check thedivisors more analytically, in particular understanding of arithmetic modulo a prime.A.4 Further Experimental Details and ResultsFor generating responses, we use the MAmmoTH [Yue et al., 2023] evaluation suite. The responsesare graded using a GPT-4 grader, where GPT-4 Omni checks the correctness of a solution responseagainst the ground truth solution. During response generation, we set the temperature to 0 and top_pto 1 for all models. For open source LLMs, we use 2 80GB A100 GPUs and 72GB of RAM to runinference facilitated by vLLM [Kwon et al., 2023]. We use 25 workers while querying GPT-4 Omniand GPT-4 Turbo and 2 workers for querying Claude-3 Opus and Claude-3.5-Sonnet.A.4.1 Generated Data StatisticsOut of 210 question-solution pairs included in MATH2, 139 underwent some form of modification bythe human annotators before being included in the dataset. Out of the 64 questions modified, 3 wereminor modifications to improve the clarity of the question. Another 22 modifications were minormodifications, which nevertheless affected the meaning of the question and changed the final answer.120.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8X(MATH)0.00.10.20.30.40.50.6Y (MATH2)Performance on MATH2vs Performance on MATHModel PerformancesY=X2Y= 0.01101−0.00734X+ 0.9745X2(a)Figure 2: Relation between the performance of models on MATH2(Y) vs their performances onMATH ( X). As can be seen from the plot, the performance on models on generated questions roughlyfollows a quadratic relation with the performance of those models on MATH. The best quadratic fitfollows the relation: Y= 0.0166−0.0776X+ 1.096X2. This may be explained by the fact thatthe questions in MATH2consist of two skills at a time, as compared to questions in MATH, whichconsist of one skills.Skills2468101214Number of skillsSkill Occurence FrequencyFigure 3: Shows the distribution of different skills extracted during the skill extraction process inthe generated set of questions. The generated and human verified set of 210 questions consists of 97skills out of the 114 skills extracted via the skill extraction process as described in Didolkar et al.[2024], Each question in the generated set represents two skills. Note that the distribution of skills isnot uniform with there being multiple skills that are represented by one one question.But 39 modifications were significant; either making the given questions harder, or correcting them,or making them more interesting (i.e., less tedious) for humans.As for the solutions, 136 out of the 210 solutions originally generated by the model were modified tocorrect them or improve their clarity. This includes solutions which had to be modified because ofmodifications in the corresponding question.Table 2: Human Verification Statistics: Out of a total of 210 examples in MATH2, 139 (66.19%)were such that either the question or the solution generated by the model were modified by theannotator before being included in the final dataset. These modifications were made in order toincrease the difficulty of the questions or correct the questions or solutions.# of Modified Questions ( A)# of Modified Solutions ( B)# ofA∪B Dataset Size64 136 139 21013A.4.2 Performance across the two datasets: A surprising pattern0.0 0.2 0.4 0.6X2(MATH)0.00.10.20.30.40.50.6Y (MATH2)Perf. on MATH2vs Perf. on MATHModel PerformancesY=X2Figure 4: Relation between the performance of models on MATH2(Y) vs the square of theirperformances on MATH ( X2). As can be seen from the plot, Y≈X2. See Appendix A.4 for thebest-fit quadratic curve, which is slightly different.Table 1 shows that all tested models have significantly lower performance on MATH2than on the orig-inal MATH dataset. Denoting Yas the performance on MATH2andXas the performance on MATH,the percentage drop 100(X−Y)/Xfor frontier models — GPT-4 Omni, GPT-4 Turbo, Gemini-1.5-Pro, Claude-3.5-Sonnet and Claude 3 Opus — ranges from 16.73% to41.34%. MAmmoTH-7B, aspecialist math model, shows the largest drop at 93.92% .The fact that performance drops for all models should notbe too surprising, since as noted, theMATH2questions, by combining skills from different subareas of MATH, could be seen as “out ofdistribution (OOD).” This makes it tempting to interpret the percentage drop as a measure of a model’s(lack of) “OOD-resilience.” For instance, very large percentage drops seen with open-source modelsMetaMath and MAmmoTH feel understandable since their training used synthetic data generatedusing seed questions from MATH and GSM-8k. Lack of diversity in such synthetic data is known tocause overfitting to the dataset being imitated. Similarly, GPT-4o and Claude Sonnet 3.5 are suspectedto also have been extensively trained with synthetic data. Although their MATH performance issimilar, Sonnet 3.5 has worse MATH2performance, which might suggest lower quality/diversity inits synthetic data.However, in our opinion, the overall pattern among proprietary models of similar size does fit with theOOD story. A much simpler explanation pops out when we plot YvsX2(Figure 4 and Figure 2(a)):we find a linear relationship Y≈X2! This implies that the relative drop in performance of themodels is well-predictable from just their performance on MATH, and does not require taking theirtraining details into account!Why should the two scores be expected to have this relationship? Here is a natural (albeit heuristic)explanation. Suppose there are Nskills and sidenotes the success rate of the model at correctlyapplying the ith skill. Then, its Xvalue should reflect the average5of the si’s. Furthermore, on arandom question using the ith and jth skill, the probability that the model correctly answers it shouldbesisj, since it has to successfully apply both skills. If the questions are created using pairs of skillschosen randomly and independently, then the Yvalue will be the average value of sisj’s, which byindependence will be roughly X2.This reasoning in fact suggests that our pipeline has created questions that genuinely required applyingtwo very distinct skills (as opposed to, say, requiring primarily skill i, and mildly using skill j). Thediscovered relationship suggests further that if we could create questions where each combines kskills, we might see the relationship Y≈Xk, which would tend to further magnify performancedifferences between models.5With perhaps a small correction factor if the skills are not evenly distributed among the questions14Table 3: Performance of models on MATH under two different prompting strategies. MAmmoTH4-shot CoT prompting involves 4-shot prompting with exemplars taken from the MAmmoTH [Yueet al., 2023] evaluation suite. Skill Based 4-shot CoT [Didolkar et al., 2024] consists of using 4exemplars which are retrieved from the training set of MATH based on which skill is required tosolve the given question (as determined by GPT-4). Proposed 4-shot CoT prompting consists of4-shot prompting with exemplars taken from MATH2. These exemplars are retrieved such that one ofthe two skills in each exemplar is the same as the skill required by the question at hand, as labeled byGPT-4. In Proposed + Skill Based 4-shot CoT we supplement the exemplars retrieved from MATH2with exemplars from MATH training set, for skills that are present in < 4 questions in MATH2. Weshow that few-shot prompting with exemplars retrieved from the generated set of questions (MATH2)consistently outperforms vanilla few-shot prompting with relative gains of upto 12.79% over thebaseline (for for Llama-3.1-70B-Instruct [Dubey et al., 2024]).Method GPT-4O GPT-4T Llama-3.1-70B-Instruct MetaMath-70B MAmmoTH-70B Mixtral-8 ×7B-InstructMAmmoTH 4-shot CoT 76.67% 71.89% 58.15% 25.77% 18.45% 30.77%Skill Based 4-shot CoT 78.32% 72.77% 57.81% 25.42% 18.20% 30.31%Proposed 4-shot CoT 78.74% 72.96% 65.59% 27.54% 21.23% 34.23%Proposed + Skill Based 4-shot CoT 78.53% 71.51% 61.95% 27.54% 20.41% 33.95%A.4.3 Generated Questions are Effective In-Context Exemplars for MATH.A possible test for the quality of a Q&A pair on similar topics as MATH dataset is whether perfor-mance on MATH improves when using these as in-context exemplars.We test as follows. Recall that MATH has 7sections. Exemplars for a section are chosen from thesection area. However, by design, our new questions cross section boundaries. Furthermore, theyare higher quality than MATH questions. We implemented a new procedure to retrieve in-contextexemplars from MATH2based on the skill requirements of the current question.Since MATH2is limited in size, it does not cover all the skills extracted during the skill extractionprocess, containing 93 out of 114 skills. Figure 3 shows the distribution of different skills in thedataset. We filtered the MATH test set to remove examples requiring skills not present in the generateddataset, resulting in the removal of 913 test examples. During evaluation on the filtered MATHtest set, for each question Qlabeled with skill a(a∈ S, where Sis the set of extracted skills), weretrieved in-context exemplars from the MATH2, ensuring each exemplar involved skill a. We usedfour such exemplars per question (i.e., 4-shot CoT [Wei et al., 2022]). To handle skills representedby fewer than four examples in MATH2, we run two experiments: (A) Proposed 4-shot CoT : If agiven skill is represented by nexamples in the MATH2, where n <4, we use nin-context examplesinstead of 4 exemplars. (B) Proposed + Skill Based 4-shot CoT : If a given skill is represented bynexamples in MATH2, where n <4, we supplement 4−nexemplars for that skill from MATHtraining set. The relevant in-context exemplars in MATH training set are determined by followingthe methodology proposed in Didolkar et al. [2024]. We compared the performance of models usingthese targeted prompting strategies against two baselines: (C) MAmmoTH 4-shot CoT : The 4in-context exemplars are taken from the MAmmoTH evaluation suite [Yue et al., 2023]. (D) SkillBased 4-shot CoT : We use skill-based prompting as proposed in Didolkar et al. [2024], where thein-context exemplars are selected from the MATH training set, in accordance to the skill required bythe question at hand, as determined by GPT-4.Table 3 presents the results of this comparison. The two prompting strategies using questionsfrom MATH2as in-context exemplars, clearly outperform the two baselines. We conclude that theMATH2questions, due to their difficulty and skill relevance, serve as effective in-context exemplars.Performance gains would likely be more significant with larger datasets generated using our approach,reducing the need to supplement with external exemplars.A.4.4 Skill Proportional Comparison of MATH2and MATHFigure 3 shows the distribution of different skills in MATH2. To make a fairer comparison of MATHand MATH2, and to show empirically that MATH2benefits from the composition of two skills at thesame time as compared to MATH which consists of application of one skill at a time, we compare theperformance of models on MATH2to the performance of models on a subset of MATH which has as15similar skill distribution as MATH2(i.e. as shown in Figure 3). We form this subset by randomlysampling questions belonging to each skill in MATH. The subset consists of 4087 questions. Table 4compares the performance of some models on MATH2, MATH and the subset of MATH formedabove. From the performance of the models, we can conclude that a subset of MATH with a similardistribution of skills is not just easier than MATH2, but also MATH.Table 4: Comparison of the performance of various models on MATH, MATH2and a subset ofMATH which has a similar distribution of skills as MATH2, as shown in Figure 3Model MATH2(Y) MATH skill proportional subset MATH (X)GPT-4 Omni 64.29% 79.28% 77.21%GPT-4 Turbo 53.11% 74.49% 73.27%Deepseek-math-7b-instruct 16.83% 45.60% 45.05%Mixtral-8x7B-Instruct 10.00% 31.98% 31.52%A.4.5 Difficulty of questions generated by different modelsOut of the 210 questions, 116 questions were generated using GPT-4 Turbo, 3 using GPT-4 Omni,51 using Claude-3 Opus and 40 using Gemini-1.5-Pro. We consider individual subsets of datasetwherein the questions were generated by GPT-4o, GPT-4-Turbo and Gemini-1.5-Pro and evaluateGPT-4O, GPT-4 Turbo, Claude-3 Opus and Gemini-1.5-Pro on these subsets. The results are shownin Table 5Table 5: Performance of GPT-4 and Claude on questions generated using GPT-4 Turbo and Claude-3OpusSubset GPT-4 Omni GPT-4 Turbo Gemini-1.5-Pro Claude-3 OpusGPT-4 Turbo Subset 57.76% 54.31% 43.47% 40.52%Claude-3 Opus Subset 82.35% 66.67% 50.98% 45.10%Gemini-1.5-Pro Subset 36.58% 36.58% 39.02% 29.27%The results above show that the questions generated by Gemini-1.5-Pro ended up being significantlymore difficult than the questions generated by other models.A.4.6 Modified Questions vs Non-Modified QuestionsDuring the human verification process, the annotators were instructed to be on the look out for anyerrors in the questions and solutions generated by the models, and fix any lack of clarity, ambiguity,convoluted language, etc. in the generated questions which might confuse the model and reduce the“quality” of the questions. They were also instructed to look out for specific modifications whichcould make the questions more difficult. For further discussion on the human verification process,refer to Section A.3. In Table 6 we compare the performance of models on the questions which weremodified against their performance on the questions which were not modified. We see that dependingon the model being evaluated, they find either the modified or the unmodified questions more difficult.Table 6: Performance of models on human modified and non-modified questions from MATH2Model MATH2Unmodified MATH2ModifiedGPT-4 Omni 65.75% 50.00%GPT-4 Turbo 58.90% 59.37%Claude-3.5-Sonnet 47.94% 50.00%Gemini-1.5-Pro 44.52% 58.73%16A.5 Observations from the Question Generation ProcessThe question generation pipeline described in Section 2 was developed through an iterative processof refining prompts and design choices, and evaluating their impact on the quality of the finalquestions and solutions. Notably, the inclusion of the attempted solution andquestion validation stepssignificantly enhanced the pipeline’s effectiveness. Despite the sophistication of the pipeline andprompts, we still observe instances where models fail to follow the given instructions. This sectionhighlights prominent failure modes at various stages of the pipeline, which human raters need to beaware of. Additionally, we explore some intriguing behaviors of the models where they successfullycreate interesting and creative questions. Section A.5.1 details the role of human raters in improvingthese questions.A.5.1 Creative questions: Examples of Synergy from Human-AI interactionThe models frequently produced interesting and creative questions, although they often failed togenerate correct solutions. In these cases, the incorrect solutions usually contained enough correctideas for a human to quickly complete them.Human annotators were tasked with verifying the validity of the questions and the correctness of thesolutions. They were instructed to look out for any failure modes discussed in Section A.5.2. Theirresponsibilities included ensuring that the created questions actually employed the intended mathskills, and improving the questions in terms of readability, quality, and difficulty when possible. Theywere encouraged to suggest changes that would make the problems harder for automated tools tosolve while allowing easier or more elegant solutions for humans. The following examples illustratethis process:Example: Original Question: Find the smallest positiveinteger ksuch that k3− kis divisible by both 9and 10,and the sum of digits of kin its decimal representation isa prime number.Our human team had not encountered such questions before. It requires recognizing that k3−k=k(k−1)(k+ 1) is always divisible by 2and3. Thus, kmust be such that k(k−1)(k+ 1)/6isdivisible by 15(both 3and5). Additionally, the sum of the digits of kmust be a prime number, andensuring such conditions is challenging even for powerful LLMs.Example: Original Question: Consider a collection of red,blue, and green beads arranged in an infinite series. Thebeads alternate in color, starting with red, then blue, thengreen, and this pattern repeats indefinitely. The number ofbeads in each colored section follows the pattern of powers of2: the first red section has 2 beads, the first blue sectionhas 4 beads, the first green section has 8 beads, the secondred section has 16 beads, and so on. If a bracelet is madeusing a continuous, unbroken sequence of exactly 20 beads fromthis series, and each bead has a length of 0.5 units, how manydifferent bracelets can be made such that the perimeter of thebracelet is an integer value?The original question combined elements in a novel way. The human rater modified the question tochange the sequence size from 20 to 6 beads, maintaining the essential difficulty while making itmore elegant for humans. All tested models failed on the modified question.Example: Original Question: A container initially contains500 mL of water. A scientist adds water to the container14of the current amount every minute. After how many minuteswill the container first contain more than 1 L but less than 2L of water?Modified Question: A container starts with 500 mL of water.Each minute, the scientist adds water equal to 1/2of thecurrent amount. What is the smallest positive integer nsuch17that the number of liters of water in the container is neverin the interval [n, n+ 1]?This was one of many questions the models created about exponential growth and geometric series,possibly similar to standard math test questions. The human slightly altered it to simplify calculationsby hand and substituted a different condition that the models found challenging, while humans couldeasily estimate an approximate answer and then verify.Example: Original Question: Consider the sequence definedrecursively by a1= 1 andan+1= 2an+nfor all n≥1. What isthe product of the first five terms of this sequence?Modified Question: A sequence anis defined as follows: a1=2andan= 2n−1+an−1+n. What is the ⌊log2a500⌋?An LLM can solve the original question through simple computation. The modified question, however,requires understanding an underlying pattern.Example: Original Question: Find the sum of the smallestprime divisor and the largest prime divisor of the numberN= 154+ 164.Modified Question: Find the sum of the two smallest primedivisors of 2317+ 1717.Models tend to adopt a brute-force approach to the original question by calculating 154+ 164. Afterrephrasing, the number 2317+ 1717is too large for direct computation, requiring understanding ofarithmetic modulo a prime.These examples highlight the essential role of human oversight in refining and improving the questionsgenerated by LLMs, ensuring they are challenging, creative, and suitable for advanced mathematicalproblem-solving.A.5.2 Failure ModesDespite the sophistication of our pipeline, models frequently exhibit several failure modes: (a)Insufficient Involvement of Skills : Models often generate questions that either miss one of the skillscompletely or require a very shallow application of one or both skills. For example, a geometryquestion may fail to involve ratio and proportion adequately, (b) Insufficient Information : Questionsmay lack essential details needed for solving, making them incomplete or ambiguous. For instance, atrigonometry question might omit necessary angles or distances, (c) Unsolvable or ComputationallyIntractable Questions : Some questions generated are either unsolvable or require excessive brute-force calculations, which are impractical for evaluating reasoning abilities, (d) Nonsensical Questions :Models sometimes produce questions that are logically inconsistent, confusing, or ambiguous, suchas a probability problem with unclear parameters or an impossible geometry scenario, (e) DeceitfulSolutions : Occasionally, models fabricate solutions to nonsensical or unsolvable questions, presentingincorrect logic as plausible reasoning and (f) Finding a Needle in the Haystack : Long and complexvalidation prompts sometimes cause models to confuse or overlook the specified skills, leadingto incorrect evaluations. For a more detailed discussion and examples of questions in the variouscategories listed above, refer to Appendix A.2.A.6 Skill composition using different modelsIn this section, we provide examples of the responses of different models when asked to generatea question combining two different skills. We prompt GPT-4o, Claude 3 Opus, Gemini 1.5 Pro,and Llama-3-70B-Instruct and Llama-3-8B-Instruct to generate a question combining the skillsarea_calculation_skills andalgebraic_manipulation_skills . We use the same promptused in the Question Generation step (Step 2) of the proposed pipeline described in Section 2 (seeAppendix A.7.2). Given below are the responses of each model to the prompt.18GPT-4 OmniSure! Here is a challenging math question that tests both area calculation skills and algebraicmanipulation skills, formatted as requested.# QUESTIONA rectangle ABCD has vertices A(0,0),B(a,0),C(a, b), andD(0, b). Inside this rectangle, thereis a right triangle PQR where Pis at vertex A,Qis onBC, andRis onCD. The area of trianglePQR is given by16of the area of rectangle ABCD . Given that the coordinates of QandRare(a, kb )and(ka, b )respectively, where kis a constant, find the value of k.# SOLUTIONFirst, let’s find the area of the rectangle ABCD :Area of rectangle ABCD =a·bThe area of the triangle PQR is given as16of the area of the rectangle ABCD :Area of triangle PQR =16·(a·b) =ab6Next, we need to find the area of triangle PQR using the coordinates of P,Q, andR.The vertices of triangle PQR areP(0,0),Q(a, kb ), andR(ka, b ).The formula for the area of a triangle with vertices at (x1, y1),(x2, y2), and (x3, y3)is:Area =12|x1(y2−y3) +x2(y3−y1) +x3(y1−y2)|Substitute the coordinates of P,Q, andRinto the formula:Area of PQR =12|0(kb−b) +a(b−0) +ka(0−kb)|Area of PQR =120 +ab−ka2bArea of PQR =12|ab(1−k)|Area of PQR =ab2|1−k|We know that the area of triangle PQR isab6:ab2|1−k|=ab6To solve for k, we can divide both sides byab2:|1−k|=13This gives us two possible equations:1−k=13or 1−k=−1319Solving these equations for k:1−k=13⇒ k= 1−13=231−k=−13⇒ k= 1 +13=43Therefore, the possible values for kare23and43.# DETAILSThe question involves area calculation by requiring students to find the area of a triangle within arectangle, and algebraic manipulation by solving an equation involving a variable. The problemcomplexity is similar to the examples provided, ensuring that students are sufficiently challenged. Thefinal answer consists of the value(s) of k, making it straightforward to evaluate the students’ answers.Claude 3 OpusHere is my attempt at generating 1 (non multiple choice) difficult question which tests botharea_calculation_skills and algebraic_manipulation_skills:# QUESTIONConsider an equilateral triangle ABC with side length 2a. Let points D, E, F be the midpoints of sidesBC, CA, AB respectively.Now construct squares ACDE, ABFG, BCHI externally on sides AC, AB, BC respectively.Express the area of the shaded region (consisting of the equilateral triangle DEF and squares ACDE,ABFG, BCHI) in the form ka2, where kis a constant. Find the value of k.# SOLUTIONLet the side length of the equilateral triangle ABC be s= 2a.The area of equilateral triangle ABC is√34s2=√34(2a)2=√3a2.In an equilateral triangle, the distance from a vertex to the midpoint of the opposite side is√32times theside length. So DE = EF = FD =√32s=√3a.The area of equilateral triangle DEF is√34(√3a)2=3√34a2.The side length of each of the squares ACDE, ABFG, BCHI is equal to the side length of the equilateraltriangle ABC, which is s= 2a.So the area of each square is s2= (2a)2= 4a2.The total area of the shaded region is:Area of DEF + 3 * Area of each square=3√34a2+ 3(4 a2) =3√34a2+ 12a2= (12 +3√34)a2Therefore, the area of the shaded region is ka2, where k= 12 +3√34≈15.1981 .# DETAILSThis question combines area calculation of equilateral triangles and squares with algebraic manipulationof side lengths and areas. Key steps involve:1) Calculating areas of equilateral triangles ABC and DEF in terms of side length sandarespectively.This tests area_calculation_skills.2) Algebraically manipulating the expressions for side lengths DE, EF, FD and square side lengths interms of sanda. This requires algebraic_manipulation_skills.3) Adding up the component areas algebraically and simplifying the expression in the form ka2tosolve for constant k.20The question difficulty is calibrated to be similar to the provided examples. The solution is a singlenumerical constant kwhich avoids ambiguity in student answers. The equilateral triangle and squareconstructions provide an interesting geometric visualization.Let me know if you would like me to modify anything in the question or solution to better suit yourneeds! I’m happy to refine it further.Gemini 1.5 Pro# QUESTIONA circle is inscribed in equilateral triangle ABC , and a smaller circle is externally tangent to this circleand to sides ABandAC. If the side length of triangle ABC is 4, what is the area of the smaller circle?[asy] size(150); defaultpen(linewidth(0.8)); pair A=(0,2*sqrt(3)), B=(-2,0), C=(2,0), O=(0,sqrt(3)/3),P=(0,sqrt(3)); draw(A–B–C–A); draw(circle(O,sqrt(3))); draw(circle(P,sqrt(3)-1)); [/asy]# SOLUTIONLetObe the center of the larger circle, and let Pbe the center of the smaller circle. Let rbe the radiusof the smaller circle.[asy] size(150); defaultpen(linewidth(0.8)); pair A=(0,2*sqrt(3)), B=(-2,0), C=(2,0), O=(0,sqrt(3)/3),P=(0,sqrt(3)); draw(A–B–C–A); draw(circle(O,sqrt(3))); draw(circle(P,sqrt(3)-1)); draw(O–P);draw(O–C); draw(P–C); label(" A",A,N); label(" B",B,SW); label(" C",C,SE); label(" O",O,S);label(" P",P,NE); label(" 2",(1,0),S); label(" r",(P+C)/2,E); label(" 1",(C+O)/2,SE); [/asy]Since Ois the centroid of equilateral triangle ABC ,OC =23·2√3 =4√33. Also, PC =r+ 1, soby the Pythagorean Theorem on right triangle POC ,r2+ 2r+ 1 = PC2=OC2−OP2=163−(r+√3)2.This simplifies to 2r2+ (2 + 2√3)r−133= 0. By the quadratic formula,r=−(2 + 2√3)±q(2 + 2√3)2−4·2· −1334=−1−√3±√404.Since ris positive,r=−1−√3 + 2√104.Then the area of the smaller circle isπr2=π−1−√3 + 2√1042=27π8−13√3π8+5√30π4.# DETAILSThis question tests both area calculation skills (area of a circle) and algebraic manipulation skills(solving a quadratic equation with radicals). The problem is challenging because it involves nestedradicals, but the steps are all fairly standard.Llama-3-70B-InstructHere is a difficult question that tests both area calculation skills and algebraic manipulationskills:# QUESTIONA square with side length 4 is inscribed in a circle. A semicircle is drawn inside the square,with its diameter being one of the sides of the square. Find the area of the shaded region.21[asy]size(100);draw(unitcircle);draw((-2,0)–(2,0)–(2,2)–(-2,2)–cycle);filldraw((-2,0)–(2,0)–(2,2)–(-2,2)–cycle,lightgray);[/asy]# SOLUTIONThe area of the shaded region is equal to the area of the square minus the area of thesemicircle. The area of the square is 42= 16 . The area of the semicircle can be foundusing the formula12πr2, where ris the radius of the semicircle. Since the diameter of thesemicircle is 4, the radius is 2, so the area of the semicircle is12π(2)2= 2π. Therefore, thearea of the shaded region is 16−2π.# DETAILSThis question requires students to calculate the area of a square and a semicircle, and thensubtract one from the other to find the area of the shaded region. It also requires algebraicmanipulation skills to simplify the expression for the area of the semicircle. The answer isnot a simple numerical value, but rather an expression involving π, which adds an extra layerof complexity to the question.Llama-3-8B-InstructHere is a question that tests both area calculation skills and algebraic manipulation skills:# QUESTIONFind the area of the region bounded by the curve y=x2−4x+ 3and the x-axis, to thenearest hundredth.# SOLUTIONThe area of the region can be found by integrating the function y=x2−4x+ 3with respecttoxfrom 0to2, which isR20(x2−4x+3)dx= [x3/3−2x2+3x]20= (8/3)−8+6−[0] = 2 .The area can also be found by using the formula for the area of a region bounded by acurve and the x-axis, which isR20ydx=R20(x2−4x+ 3)dx= 2. Therefore, the area isapproximately 2square units, to the nearest hundredth.# DETAILSThis question tests both area calculation skills, as it requires the student to find the area of aregion bounded by a curve and the x-axis, and algebraic manipulation skills, as it requires thestudent to integrate a function with respect to x.Note: I’ve also reviewed the question and solution to ensure that they are accurate and clearfor students.A.7 Prompt ExamplesWe detail the prompts used in the various stages of question generation and evaluation in thefollowing sections. We present the skeleton of the prompts with placeholders for skill names, in-context exemplars, etc. in favor of keeping the prompt brief. The in-context exemplars and the skillexemplar repository are made available here.22A.7.1 Skill Pair ValidationPrompt for validating a pair of skillsI have two basic mathematical skills described below, each followed by some sample question - solutionpairs such that solving the questions involves the use of the particular skill.SKILL 1: <skill_1>Question 1: <skill_1_question_1>Answer 1: <skill_1_solution_1>Question 2: <skill_1_question_2>Answer 2: <skill_1_solution_2>Question 3: <skill_1_question_3>Answer 3: <skill_1_answer_3>SKILL 2: <skill_2>Question 1: <skill_2_question_1>Answer 1: <skill_2_solution_1>Question 2: <skill_2_question_2>Answer 2: <skill_2_solution_2>Question 3: <skill_2_question_3>Answer 3: <skill_2_solution_3>I am going to use these two skills for framing a new question such that the question requires an expertisein both the skills in order to be solved, i.e. the question will compose these two skills. However, I donot want the two skills to be very similar, i.e., they should not mean the same thing. Go through thedescriptions of the skills carefully. Based on your understanding of the skills, can you please tell mewhether the two skills are essentially entirely the same or not? Think step by step and give a detailedexplanation of your answer. The answer should begin with a prefix ’# EXPLANATION ’. Note thatyour understanding of the skills should not be restricted to the sample questions provided previously.They are just example questions. Use your own prior knowledge as well. End your response with a’Yes’ or ’No’ answer to whether the skills are similar or not. This final answer should be on a new lineand preceded by the prefix ’# FINAL ANSWER ’. Thank you very much!A.7.2 Question GenerationPrompt for question generationI am a math teacher trying to create challenging math questions for smart students. I was wondering ifyou could give me 1 (non multiple choice) question which tests both the following skills: (<skill_1>,<skill_2>) Please also provide a brief solution. Then please look over the question and the solution, andfix any issues so that my students do not get frustrated. This being a math exam, the answers shouldeither be exact, or if not possible, then the question should clearly say the answer is only expected tobe approximately correct. Further, for ease of evaluating the students’ answers, the question shouldask for a single final result.This process is difficult so I am attaching two sample conversations where(Agent) is an AI agent and (Query) is teacher feedback. The conversations revolve around framingsuch mathematical reasoning questions and using them for evaluating students. These should giveyou some idea of the expectations and the potential difficulties involved in this task. I am also givingthree example question - answer pairs for both <skill_1> and <skill_2> skills, such that the examplequestions test the corresponding skill. Please ensure that the complexity / difficulty of application of<skill_1> and <skill_2> skills in the generated question is similar to the complexity / difficulty of theskills in the example questions. Please format your output as’# QUESTION<question>23# SOLUTION<solution># DETAILS<all other text>’SKILL 1: <skill_1>Question 1: <skill_1_question_1>Answer 1: <skill_1_solution_1>Question 2: <skill_1_question_2>Answer 2: <skill_1_solution_2>Question 3: <skill_1_question_3>Answer 3: <skill_1_solution_1>SKILL 2: <skill_2>Question 1: <skill_2_question_1>Answer 1: <skill_2_solution_1>Question 2: <skill_2_question_2>Answer 2: <skill_2_solution_1>Question 3: <skill_2_question_3>Answer 3: <skill_2_solution_3># CONVERSATION 1<agent_convo_1># CONVERSATION 2<agent_convo_2>A.7.3 Attempted SolutionPrompt for solution attempt. Note that we instruct the model to take a defeatist approach towardssolving the questionPrompt for solution attemptYou are a professional math teacher and you are given a question which is supposed to test the analyticaland mathematical reasoning abilities of your students. You are supposed to provide a solution to thegiven question. However, the question may be flawed. For example, it might have problems likequestion being unsolvable using the information provided, question being self-contradictory, the finalanswer being computationally intractable, the question being ambiguous and confusing, questionhaving multiple possible interpretations, etc., which you may encounter while solving the problem.This question being used for evaluating students in math, the question should ideally have a single,exact answer, with no room for any deviations due to factors such as approximations, rounding errors,etc., unless explicitly specified in the question. Problems such as the ones described above, wouldprevent the students from solving the question properly, and thus, any question with either of theseproblems is unfit for testing the students. If you encounter any such problems, stop the solution rightthere and explain the problems. For example, if you encounter the need to make any approximations orrounding which is not specified in the question, stop solving the question along with the reason. You donot need to solve the question further once you encounter any such problem. If you do not encounterany such problem, solve the question to achieve the single exact answer which the question asks for.# QUESTION<question>24A.7.4 Question ValidationNote that how in the first paragraph, the names of the two skills are mentioned even time instead ofusing referential phrases. This is done to address the lost in the middle problemPrompt for validating the questionsYou are a professional math teacher. You want to evaluate the analytical and mathematical reasoningabilities of your students in a math exam. The students are supposed to sit in an examination halland solve the questions within a given time limit, without access to any computational devices.The evaluation is designed to test the students’ expertise in using two given mathematical skillssimultaneously, namely <skill_1> and <skill_2>. This is achieved by asking them to solve a questionthat necessitates expertise in both <skill_1> and <skill_2> skills, to be solved completely. Sinceevaluating the students is a critical task allowing very little margin for any error in the process, it is veryimportant to ensure that the questions used for evaluating are high quality and fit for being used toevaluate the students. You need to carefully review the question and a given attempt at solving it, andensure that the question is of high quality and fit to assess students. In order to do this, you shouldcheck the quality of the question with respect to several criteria, such as:- Single Answer Requirement: The question should ask for one and only one final result. It should notrequest multiple distinct answers or pieces of information.- Exact Answer Requirement: It should be possible to achieve one,exact answer to the question, withoutthe need of making any approximations or assumptions whatsoever, unless explicitly specified in thequestion. There should be no margin for the students to arrive at any other possible answer due to thingslike rounding errors, etc.- Dual Skill Requirement: The question must require rigorous expertise in both a: a) ’<skill_1>’ and b)’<skill_2>’, for resolution. Application of both <skill_1> and <skill_2> and their subskills should be,necessary and contribute directly to obtaining the final answer; <skill_1> and <skill_2> skill shouldbe applicable separately and critically during the problem-solving process. You are also given threeexample question - answer pairs for both <skill_1> and <skill_2> skills in order to help you betterunderstand the meaning of each skill. Please carefully review the question and its attempted solution,paying close attention to how well it aligns with the examples provided for each skill. Consider the depthand breadth of knowledge demonstrated in the examples. The complexity / difficulty of application ofboth <skill_1> and <skill_2> in the question should be similar or greater than the complexity / difficultyof <skill_1> and <skill_2> in the example question-answers given for that respective skill.- Clarity and Completeness: The question should be unambiguous and contain all the informationnecessary to complete the solution. Any required assumptions not common knowledge should beexplicitly stated. Check for any ambiguity that might confuse students. Carefully go through thesolution to check if it makes any assumption or approximation in order to solve the question.- Computational Tractability: Since the students are supposed to solve the questions within a given timelimit and without access to any computational devices such calculators, computer, mobile phones, etc.,you must ensure that the question is computationally tractable and all the computations involved can bedone by hand in a limited amount of time.- Relevancy of Information: The question should not have any extra details that do not contribute to thesolving of the problem.- Realism and Logic: The question should involve realistic scenarios or hypotheses with logicallyconsistent data. The specified operations and the contextual setup should reflect plausible mathematicalsituations. (e.g., positive amounts for transactions, integers for counts).- Syntax and Grammar: The question must be grammatically correct and clearly written to preventmisinterpretation.- etc. (any other problems which you think make the question not fit for being used for evaluating thestudents)Your task is to give a ’Yes’ or ’No’ assessment, indicating whether the question is high quality andsuitable for evaluating the students on simultaneous application of the skills <skill_1> and <skill_2>.Provide thorough reasoning for your assessment based on the conditions mentioned above and anyother relevant analytical points concerning mathematical reasoning and problem-solving. Your responseshould be structured as follows:# REASONING<Your detailed analysis justifying your decision>25# FINAL ANSWER<’Yes’ or ’No’. No other text should be present in this section>Ensure to review the combination of skills intended for assessment, and check the logical flowand mathematical correctness from the question’s setup to the solution’s conclusion. Look out forany problems in the question which are pointed out in the attempted solution. Account for all thepotential pitfalls such as logical inconsistencies, unnecessary complexity, or insufficient detail that mayobstruct the clarity or solvability of the question. Given below are the two skills and some examplequestion-answer pairs for the two skills. This process is difficult so I am attaching a few sampleconversations where (agent) is an AI agent who is trying to verify the questions and (query) is teacherfeedback. This should give you some idea of potential difficulties in this task. This is followed by thequestion which you need to check (preceded by ’# QUESTION TO BE CHECKED’) and its attemptedsolution (preceded by ’# SOLUTION ATTEMPT’).SKILL 1: <skill_1>Question 1: <skill_1_question_1>Answer 1: <skill_1_solution_1>Question 2: <skill_1_question_2>Answer 2: <skill_1_solution_2>Question 3: <skill_1_question_3>Answer 3: <skill_1_solution_3>SKILL 2: <skill_2>Question 1: <skill_2_question_1>Answer 1: <skill_2_solution_1>Question 2: <skill_2_question_2>Answer 2: <skill_2_solution_2>Question 3: <skill_2_question_3>Answer 3: <skill_2_solution_3># CONVERSATION 1<validation_exemplar_1># CONVERSATION 2<validation_exemplar_2>......# CONVERSATION 6<validation_exemplar_6># QUESTION TO BE CHECKED<question># SOLUTION ATTEMPT<solution>Thank you very much!A.7.5 Final SolutionFor the final solution, we make use in-context exemplars from MATH [Hendrycks et al., 2021] asopposed to the attempted solution step.26Prompt for the final solutionI have two basic mathematical skills described below, each followed by some sample question- solution pairs such that solving the questions involves the use of the particular skill in order to be solved.SKILL 1: <skill_1>Question 1: <skill_1_question_1>Answer 1: <skill_1_solution_1>Question 2: <skill_1_question_2>Answer 2: <skill_1_solution_2>Question 3: <skill_1_question_3>Answer 3: <skill_1_solution_3>SKILL 2: <skill_2>Question 1: <skill_2_question_1>Answer 1: <skill_2_solution_1>Question 2: <skill_2_question_2>Answer 2: <skill_2_solution_2>Question 3: <skill_2_question_3>Answer 3: <skill_2_solution_3>Go through the descriptions of the skills carefully. Now, here is a new question such that the question re-quires an expertise all both the skills in order to be solved. That is, the question composes these two skillsQUESTION: <question>Based on your understanding of the skills, can you please solve the question accurately? Think stepby step and explain the solution. Finally, end your response by stating the final numerical answerobtained using the solution. Note that your understanding of the skills should not be restricted to thesample questions provided in their description. They are just example questions. Use your own priorknowledge as well. The explanation of your solution and the final numerical answer should each be ona new line, and should be preceded by the prefixes ’# SOLUTION ’ and ’# ANSWER ’ respectively.Thus, your response should be in the format:’# SOLUTION<solution># ANSWER<final_answer; no other text should be present in this section>’.Thank you very much!A.7.6 EvaluationPrompt given to the GPT-4 for evaluating the model’s solutionYou are a professional math teacher and are tasked with evaluating your students on a math exam. Youare will be given a question, the correct solution to the question and the student’s solution. You need totell me whether the student solved the question correctly, thus matching the answer obtained by thecorrect solution. Think step-by-step and give a detailed explanation of your answer. At the end, give a’Yes’ or ’No’ answer to whether the student’s solution is correct. Your output should be in the followingformat:# STEP BY STEP EXPLANATION<detailed explanation of your thought process>27# CORRECTNESS<’Yes’ if the student’s solution is correct. ’No’ otherwise. This section should not contain any other text>Here are the question, correct solution to the question and the student’s solution:QUESTION: <question>CORRECT SOLUTION: <correct_solution>STUDENT’S SOLUTION: <student’s_solution>28 |
5ZlfgzLEjn | The Art of Knowing When to Stop: Analysis ofOptimal Stopping in People and MachinesFukun Evelene ZhangCognitive Science ProgramDepartment of Mathematics and StatisticsCarleton [email protected] ZhaoDepartment of Computer SciencePrinceton [email protected] combinatorial innovation, people face the decision problem of when to invest innew development, and when to stick with the currently best option. Zhao, Vélez,and Griffiths (2024) showed that under finite horizon, this equates to an optimalstopping problem, and provided analytical solutions. Interestingly, in behavioralexperiments, while people’s decisions aligned with the rational solutions overall,there were also systematic deviations. Here, we examine two heuristic models tothis optimal stopping problem in combinatorial innovation. Our approach assumesthat agents make decisions by running mental simulations that integrate priorbeliefs and past observations. We show that these models well-capture variouspatterns in empirical data, suggesting that people may rely on simple heuristics tomake fast decisions when solving computational problems involving sophisticatedcombinatorics. We also investigate whether Large Language Models (LLMs) canbe used as a cognitive model to study these processes, report preliminary findingsof LLM’s limitation in this task, but suggest that chain of thought prompting mayhelp mitigate these limitations.1 IntroductionInnovation often comes from recombination of previous technologies. This leads to an intriguingobservation: As the technology level goes up, the opportunity cost of developing a new technologygrows higher, and the space of existing technologies to attempt combination with increases rapidly.Knowing when to stop exploring new opportunities thus is as important as achieving one’s originalgoal, as over-persistence can waste time and resources [1, 5, 6]Zhao, Vélez, and Griffiths (2024) formalized this problem in a combinatorial discovery game. As asequential decision-making task between “innovate or not” under finite horizon, they showed thatthis forms an optimal stopping problem [7, 11] and offered an analytical solution. Interestingly, inbehavioral experiments, although participants showed good intuitions about following a stoppingrule, their stopping points varied compared to the rational solutions. Previous work has found outthat participants often deviate from rational solutions, persisting with suboptimal strategies longerthan necessary [4, 13, 14], and do so even when presented with the optimal strategy [13]. Moreover,participants’ performance did not improve over the course of time [10, 12]. These patterns, however,may be subject to training. For instance, Goldstein et al. (2020) observed significant learning leadingto near-optimal stopping behavior in a repeated secretary problem.To better understand the cognitive processes underlying optimal stopping, we explore several heuristicmodels to the computational problem in combinatorial innovation, drawing upon Bayesian inferenceand mental simulations. We also compare Large Language Models (LLMs) as agents to solve thesame task. To foreshadow, the heuristic models well-capture many aspects of the behavioral data,38th Conference on Neural Information Processing Systems (NeurIPS 2024).implying a converging process to optimum, and LLMs struggle to make either human-like predictionsor the rational solutions.2 Modeling optimal stopping2.1 Task and problemIn the discovery game defined by Zhao, Vélez, and Griffiths (2024), participants can either gainrewards from an existing item (extraction) or combine two items to create a new item with potentiallyhigher rewards (fusion). Each game is parameterized by the probability of success ( p) and the rewardincrease rate ( w). This setup forms a Markov decision process. Under finite horizon, the optimalpolicy is to keep doing fusion until a switching point d, after which keeps extracting the item withhighest rewards. The expected reward for switching at step disEπ(d)= (n−d) dXi=0di(pw)i(1−p)d−i!r. (1)Let the “remaining step” d′:=D−d+ 1, Equation 1 states thatd′≥1p(w−1)+ 1. (2)In an online behavioral experiment [16], 210 participants were randomly assigned to one of the fourconditions based on two parameters: p∈ {0.8,0.2}andw∈ {3,1.5}. The conditions included highp(= 0.8) with high w(= 3), high pwith low w(= 1.5), low p(= 0.2) with high w, and low pwithloww. Each participant completed 9 tasks–2 practice and 7 official. Each task consisted 10 steps. Ateach step, participants could choose to either fuse or extract. All participants were informed of therelevant parameter values in the official tasks but not in the practice rounds.Overall most participants followed a “switch-once” strategy as proposed by the rational model.However, the choice of switching points did not align perfectly. In the high- p-high- w, high- p-low-w,and low- p-high- wconditions, many participants exhibited under-exploration, switching too early;whereas those in the low- p-low-wcondition showed over-exploration, switching too late compared tothe predicted switch point in Equation 2. The most common switch points for the high- p-high- wandlow-p-low-wconditions coincided with the optimal switching point (step 9 and step 0, respectively),yet only 32% and 18% of participants in these conditions switched at the optimal point. In contrast,the most common switch points for the high- p-low-w, and low- p-high- wconditions did not alignwith the optimal switching point (step 7), instead being distributed evenly around the optimum.2.2 Bayesian heuristic modelsSolving the combinatorics in Equation 1 can be challenging for a bounded agent. Here, we treatparticipants as Bayesian learners, updating their switch point decisions based on the previous round’sreward and fusion feedback information. That is, we assume the player indeed switch once fromfusion to extraction in the game, but the switch step is drawn from a distribution P(d), d∈[0,10].Prior We use the practice round data to estimate the priors people brought into the official tasks,and approximate that empirical practice round distribution using a weighted combination of a uniformprior dU∼Unif(0,10)and and a Gaussian prior dN∼N(μ, σ), where μtakes the value of theaverage switch step of the first practice round for each condition and σ=|D|−14. Next, we usehyperparameter q∈[0,1]to control the relative contributions of the Gaussian and uniform priorsusing Equation 3, and the optimal value of qis fitted using Kullback-Leibler against the respectivepractice round data:P(d) =q∗P(dU) + (1 −q)∗P(dN). (3)The simulated prior and people’s first practice round distributions are plotted in Appendix A.1. Notethat people might adapt different exploratory and exploitative strategies in the practice rounds, andwe report those analysis in Appendix A.1.2Likelihoods. In task i, the player chooses a switch step di∼Pi(d), and follows a policy that fusesfor the first disteps and then extracts until the end. After this round of the game, the player observestotal reward Riand the total number of successes kfor this round. Then, the player could estimatethe amount of Ri+1,d′if switching at step d′for the next round of the game:P(R|d′) =Ri+1,d′=r×wk×(10−d′). (4)We consider two ways (belief update systems) of estimating the expected rewards for the next roundsof the game.Belief Update System 1 assumes agents lack predictive knowledge about rewards beyond the switchstep, expecting post-switch rewards to match those on the switch step. For example, if an agentswitches on step 6 after receiving 10 points, they expect earning 10 points on subsequent step. Theexpected rewards of switching before step 6 were calculated using the reward function in Equation 4.Belief Update System 2 assumes that agents estimate a fixed number of successful fusions ( s=2 or8) out of 10 steps, rather than evaluating each step’s success probability. If an agent switches at stepdand encounters zsuccessful fusions ( z < s ), they mentally simulate s−zsuccessful fusions forthe remaining steps ( d+ 1to 10). If z≥s, they assume no further successes will occur.Bayesian update Putting these together, the agent estimates an updated switching point distributionfollowing Bayes’ rule:P(ˆd|observation ) =P(R|d)P(d)Pd′∈DP(R|d)P(d), (5)where d∈D={0,1,· · ·,10}representing the possible switch points, Ris the estimated rewardswitching at step d. For task 2 to task 7, each prior is the posterior from the the previous task,Pj(d) =Pj−1(ˆd|observation ),1< j≤7 (6)Finally, a switch point is sampled from this posterior via applying a softmax function:σ(Pj)d=ePj,d/τP10d=0ePj,d/τ, (7)where τis the temperature parameter that we later fit with empirical data.2.3 ResultsWe compare the two heuristic models, Belief Update System 1 and 2, to the rational model inEquation 2 in capturing participants’ decisions in this optimal stopping problem. We ran 50 batchesof 10,000 simulations and reported the mean results after fitting the softmax function (Equation 7).To include the rational model in the comparison, we applied Equation 7 to a one-hot encoder withthe optimal switch point being 1 and all other steps being 0. Data and code are openly available at[17]. While the rational model is only able to predict a single optimal switch point, the two heuristicmodels provide a better account for the general shape of the empirical switch point distributionsfound in people. As shown in Figure 1a, both heuristic models accurately capture the left-skeweddistribution for high- p-high- w(HH), high- p-low-w(HL), and low- p-high- w(LH) condition and thenormal distribution shape but with a highest bar at step 0 for the low- p-low-wconditions (LL).Comparing the two Belief Update Systems, we find that Belief Update System 2 performs better inthe HH condition, accurately predicting the most common switching point (step 9) However, BeliefUpdate System 2 deviates from the most common switching point by one step (step 8). In the HLand LH conditions, both Belief Update Systems perform similarly, predicting the most commonswitching point as step 7 and step 6, respectively, very clost to most common switching point favoredby participants (step 6 and step 5). For the low- p-low-w(LL) condition, Belief Update System 1performs better, capturing the highest bar at step 0 and the second highest bar in the middle (step5). Evaluating the models using with the Bayesian Information Criterion (BIC) also confirms theseobservations (Appendix A.2).3Figure 1: Histogram of participants’ (black bar) and Bayesian Learners’ (colored dots) switch steps.Starts are the rational switch steps.3 LLM agentsWe now examine Large Language Models (LLMs) as simulated participants in the same combinatorialdiscovery game. We first prompted GPT (“gpt-3.5-turbo” and “gpt-4-turbo”) and Llama (“meta-llama/Llama-3.1-70B-Instruct”) models with the discovery game tasks with the same setup (DirectPlay), and in addition chain-of-thought prompting (COT) [3, 15]. Examples of each prompt type areprovided in Appendix A.3.1.3.1 Direct PlayFor Direct Play, our results revealed a striking difference between people’s and LLM models’ behavior(Figure 2). While about 80% participants switched only once per task [16], GPT-3.5 agents frequentlyswitched multiple times across conditions (switch-once proportions: HH: 71.4%; HL: 14.3%; LH:57.1%; LL: 28.6%). In contrast, GPT-4.0 and Llama-3.1 largely adhered to a switch-once strategy, theswitch step pattern differed substantially from human participants (see Figure 4 in Appendix A.3.4).In conditions where people typically under-explored (HH, HL, LH), LLMs under-explored evenmore. For HH, GPT-4.0 and Llama-3.1 most commonly switched at step 5, under-exploring by 4steps, while 32% of participants switched at the optimal step 9. In HL, GPT-4.0 stopped at step 5(2 steps early) and Llama-3.1 at step 6 (1 step early). In LH, GPT-4.0 switched one step early (atstep 6), while Llama-3.1 stopped at step 2, under-exploring by 5 steps. Conversely, in LL, whereoptimal switching is at step 0, both LLMs over-explored: GPT-4.0 switched at step 5 (5 steps late)and Llama-3.1 at step 3 (3 steps late), while 18% of participants switched optimally at step 0.3.2 Chain-of-thought promptingWe tested two variations of the chain-of-thought prompting: (1) explicitly informing the LLMs of therational model in Equation 2 with an example optimal play (MDP), and (2) in addition to providingthe equation and example optimal play, further asking the LLMs to explain why the example play isoptimal (COT). For instance, COT prompts included explanations such as: “At this early stage, wewant to attempt fusions to maximize future point potential. By fusing a and b, we create a new crystalworth 450 points, which can be used in future fusions,” or “The optimal action is still fusion. Eventhough it only succeeds 2 out of 10 times, each new crystal discovered is worth three times more thanthe previous one!”Our results showed that providing additional explanations through COT prompting significantlyoutperformed using only rational solutions and example of optimal plays (MDP). This suggests thatmathematical solutions alone (MDP) are insufficient for LLMs to determine the optimal switch point4Figure 2: Heatmap showing the average frequency of fusion attempts at each step over seven roundsfor LLM agents (GPT-3.5, GPT-4, and Llama-3.1-70B-Instruct) using three different promptingmethods (Direct Play, MDP Prompt, COT Prompt) and participants [16]. The rational switch stepsare indicated by stars, with the highest fusion rate matching the optimal switching point circled inyellow. The best-performing prompting methods for each model are highlighted in yellow.from fusion to extraction; reasoning prompts (COT) are necessary to help agents make more rationalchoices. COT prompting effectively guided the models to switch from fusion to extraction at theoptimal point. Comparing to the rational model (see Figure 2), for the HH condition, GPT-3.5 withCOT prompting has the highest fusion rate at the optimal switching point (step 9); for HL and LHcondition, Llama-3.1 with COT prompting have the highest fusion rate at the optimal switchingpoint (step 7); for the LL condition, both MDP and COT prompting led LLMs to maintain extractionthroughout all 10 steps, aligning with the rational model’s prediction in Equation 2. However, unlikeparticipants who progressively approach the optimal point, LLMs with COT and MDP prompttypically switch optimally or near-optimally from the start of tasks and deviate over time, implying alack of ongoing learning (see Figure 5 in Appendix A.3.4).4 DiscussionFinding the optimal stopping point in large combinatorial spaces is challenging to people. Ourheuristic models impute assumptions about approximating the optimal solution task-by-task viasimple update, and better capture the empirical distributions than the rational model. Moving forward,we hope to develop interventions that encourage people to be more rational in similar settings inspiredby the heuristic models. Testing the same experiments with GPT and Llama models revealed thatLLM agents may approach the task differently from people. In Direct Play, LLMs struggled toidentify the optimal strategy of switching once per task, often continuing to attempt fusions at thesame level, wasting opportunities for higher rewards. With chain-of-thought (COT) prompting, LLMslearn the optimal strategy more effectively, including switching from fusion to extraction at the rightmoment and consistently extracting or fusing the highest-value crystals. While COT promptinghelps LLMs achieve optimal solutions, their approach lacks the gradual adaptation seen in humanlearning. This suggests further research is needed to assess LLMs’ viability as cognitive models,especially examining how COT improves LLMs’ mathematical reasoning and its alignment withhuman cognition.5References[1] L. Alaoui and C. Fons-Rosen. “Know when to fold’em: The flip side of grit”. In: EuropeanEconomic Review 136 (2021), p. 103736.[2] J. R. Anderson. The Adaptive Character of Thought . Hillsdale, NJ: Erlbaum, 1990.[3] G. Bao et al. “LLMs with Chain-of-Thought are Non-Causal Reasoners”. In: arXiv preprintarXiv:2402.16048 (2024).[4] C. Baumann et al. “A linear threshold model for optimal stopping behavior”. In: PNAS 117.23(2020), pp. 12750–12755. DOI:insertDOIhere .[5] D. Bergemann and U. Hege. “The financing of innovation: Learning and stopping”. In: RANDJournal of Economics (2005), pp. 719–752.[6] R. Choi, M. Lévesque, and D. Shepherd. “An Optimal Stopping Model for the Explorationand Exploitation of a New Business Opportunity”. In: Creating Value: Winners in the NewBusiness Environment . 2017, pp. 127–143.[7] T. S. Ferguson. Optimal stopping and applications . 2006. URL:https://www.math.ucla.edu/~tom/Stopping/Contents.html .[8] D. G. Goldstein et al. “Learning when to stop searching”. In: Management Science 66.3 (2020),pp. 1375–1394.[9] T.L. Griffiths, F. Lieder, and N.D. Goodman. “Rational use of cognitive resources: Levels ofanalysis”. In: Topics in Cognitive Science 7 (2015), pp. 217–229. DOI:10.1111/tops.12142 .[10] M. Guan and M. D. Lee. “The effect of goals and environments on human performance inoptimal stopping problems”. In: Decision 5.4 (2018), p. 339. DOI:insertDOIhere .[11] T. P. Hill. “Knowing when to stop: How to gamble if you must-the mathematics of optimalstopping”. In: American Scientist 97.2 (2009), pp. 126–133.[12] M. D. Lee and S. Chong. “Strategies people use buying airline tickets: a cognitive modelinganalysis of optimal stopping in a changing environment”. In: Experimental Economics (2024).DOI:10.1007/s10683-024-09832-2 .[13] H. Singmann et al. “Full-Information Optimal-Stopping Problems: Providing People with theOptimal Policy Does not Improve Performance”. In: Proceedings of the Annual Meeting of theCognitive Science Society . V ol. 46. 2024.[14] N. Sukhov et al. When to Keep Trying and When to Let Go: Benchmarking Optimal Quitting .2023. DOI:https://doi.org/10.31234/osf.io/gjucy .[15] J. Wei et al. “Chain of thought prompting elicits reasoning in large language models”. In:Advances in Neural Information Processing Systems . V ol. 35. 2022, pp. 24824–24837.[16] B. Zhao, N. Vélez, and T. Griffiths. “A Rational Model of Innovation by Recombination”. In:Proceedings of the Annual Meeting of the Cognitive Science Society . V ol. 46. 2024.[17] B. Zhao and F.E. Zhang. Innovation game . 2024. URL:https://osf.io/8gwpv/ .A Supplemental materialA.1 Simulated prior distributionInitially, we consider two prior distributions to model agent’ initial switch point preference: anUniform prior dU∼Unif(0,10)representing agents with no initial preference and a Gaussian priordN∼N(μ, σ)which favors a specific initial switch point. However, instead of directly applyingthese distributions as the model prior, we draw inspiration from the human practice round distributionto understand when people tend to switch from fusion to extraction at the beginning of the discoverygame. We hypothesize that some individuals may prefer to switch randomly at first to gauge thepotential gains, while others may balance exploration and exploitation by choosing a middle point.To capture this idea, we introduce a hyperparameter q∈[0,1]that weights the contributions of theuniform and Gaussian priors. We use Kullback-Leibler divergence to find the optimal qthat best fitsthe practice rounds data, as described in Equation 3. The results of this process are visualized inFigure 3, which plots the practice round 1 data alongside the simulated distribution based on this data.6Figure 3: First practice round distribution and the simulated distribution with the optimal qvalue(qhh= 0, qhl= 1, qlh= 0.192, qll= 0.495)A.2 BIC score tableUsing the rational model (Equation 2) as the baseline after fitting a softmax function (Equation 7),we compute the Bayesian Information Criterion (BIC) for each model. The results are shown inTable 1, which summarizes the BIC improvements of each heuristic model over the rational model.The results show that the heuristic models outperform the rational model in all conditions except forBelief Update System 2 in the LL condition. Belief Update System 1 slightly performs best againstthe rational model in the all four conditions.Model HH HL LH LLBelief 1 10.851 4.368 14.782 0.978Belief 2 10.348 4.309 12.966 -7.336Table 1: BIC improvements of heuristic models over the baseline rational modelA.3 Large Language ModelA.3.1 LLM Direct Play promptWe used the OpenAI Completions API to engage GPT-turbo-3.5 and GPT-turbo-4.0 and HuggingFace Completion API to enage meta-llama/Llama-3.1-70B-Instruct in the combinatorial discoverygame defined by Zhao, Vélez, and Griffiths (2024). Below is an example of the game prompt forthe high- p-high- wcondition, which includes the game rules and a sample play for one task. Forhigh-p-low-w, low- p-high- w, and low- p-low-wconditions, the parameters p,w, and the base rewardis changed based based on the same empirical experiment setup. For high- p-low-w: fusion will work8 out of 10 times; each new crystal discovered is worth 1.5 times more points than the most valuablecrystal used to produce it; initially each crystal is worth 150 points. For low- p-high- w: fusion willwork 2 out of 10 times; each new crystal discovered is worth 3 times more points than the mostvaluable crystal used to produce it; initially each crystal is worth 150 points. For low- p-low-w: fusionwill work 2 out of 10 times; each new crystal discovered is worth 1.5 times more points than the mostvaluable crystal used to produce it; initially each crystal is worth 1 point. The game description andthe example play are modified based on the condition.7Game DescriptionYou are participating in a psychology experiment. In the experiment, you collect points fromsome alien crystals using a special machine. A production team will continuously supply youwith those crystals, ensuring you have as many as you need. Your ultimate goal is to maximizethe points you collect from these crystals.The experiment has seven rounds. In each round, you are given 6 crystals: crystal a, crystal b,crystal c, crystal d, crystal e, crystal f. Initially each crystal is worth 500 points.In each round, you can either <Extract> or <Fuse> the alien crystals:•You can collect points from a crystal with<Extract>. For example, <Extract> a givesyou 500 points.•You can discover new crystals by combining existing ones using <Fuse>, for example,<Fuse> a and b. Fusion may result in a new, more valuable crystal or it may fail,causing you to lose the opportunity to extract points from known crystals. The newlydiscovered crystals will be available for you to further <Extract> or <Fuse>. You willnot get any points immediately with this action. Overall, fusion generally works 8outof 10 times.Each new crystal you discover is worth 3times more points than the most valuable crystal usedto produce it. Here are some examples:•If <Fuse> a and b is successful, then the newly discovered crystal [ab] is worth 3·500points = 1500 points.•If <Fuse> [ab] and [ab] is successful, then the newly discovered crystal [[ab][ab]] isworth 3·1500 = 4500 points.•If <Fuse> [ab] and a is successful, then the newly discovered crystal [[ab]a] is worth3·1500 = 4500 points.Opportunities: You have 10 opportunities to either <Fuse> or <Extract> in each round. Repeatinga previously discovered or failed fusion will not consume an opportunity. You can track thecombinations you’ve already tried on the list provided on the right side of the machine.Here is an example game play for one task:Step 1:ChatGPT’s ResponseDO: <Fuse> a and bMachine Response•<Fuse> a and b is successful! You have discovered a new alien crystal [ab]! Now hereare your available alien crystals: a, b, c, d, e, f, [ab].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. Your totalpoints are 0. You have 9 opportunities left.Step 2:ChatGPT’s ResponseDO: <Fuse>[ab] and b8Machine Response•<Fuse>[ab] and b is successful! You have discovered a new alien crystal [[ab]b]! Nowhere are your available alien crystals: a, b, c, d, e, f, [ab], [[ab]b].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] is worth4500 points. Your total points are 0. You have 8 opportunities left.Step 3:ChatGPT’s ResponseDO <Fuse>[[ab]b] and cMachine Response•<Fuse>[[ab]b] and c is successful! You have discovered a new alien crystal [[[ab]b]c]!Now here are your available alien crystals: a, b, c, d, e, f, [ab], [[ab]b], [[[ab]b]c].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] isworth 4500 points. [[[ab]b]c] is worth 13500 points. Your total points are 0. You have7 opportunities left.Step 4:ChatGPT’s ResponseDO <Fuse>[[[ab]b]c] and dMachine Response•<Fuse>[[[ab]b]c] and d is successful! You have discovered a new alien crystal[[[[ab]b]c]d]! Now here are your available alien crystals: a, b, c, d, e, f, [ab], [[ab]b],[[[ab]b]c], [[[[ab]b]c]d].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] is worth4500 points. [[[ab]b]c] is worth 13500 points. [[[[ab]b]c]d] is worth 40500 points.Your total points are 0. You have 6 opportunities left.Step 5:ChatGPT’s ResponseDO <Fuse>[[[[ab]b]c]d] and eMachine Response•«Fuse>[[[[ab]b]c]d] and e is not successful! You have not discovered a new aliencrystal! Now here are your available alien crystals: a, b, c, d, e, f, [ab], [[ab]b],[[[ab]b]c], [[[[ab]b]c]d].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] is worth4500 points. [[[ab]b]c] is worth 13500 points. [[[[ab]b]c]d] is worth 40500 points.Your total points are 0. You have 5 opportunities left.Step 6:9ChatGPT’s ResponseDO <Fuse>[[[[ab]b]c]d] and fMachine Response•<Fuse>[[[[ab]b]c]d] and f is successful! You have discovered a new alien crystal[[[[[ab]b]c]d]f]! Now here are your available alien crystals: a, b, c, d, e, f, [ab], [[ab]b],[[[ab]b]c], [[[[ab]b]c]d], [[[[[ab]b]c]d]f].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] isworth 4500 points. [[[ab]b]c] is worth 13500 points. [[[[ab]b]c]d] is worth 40500points. [[[[[ab]b]c]d]f] is worth 121500 points. Your total points are 0. You have 4opportunities left.Step 7:ChatGPT’s ResponseDO <Fuse>[[[[[ab]b]c]d]f] and aMachine Response•<Fuse>[[[[[ab]b]c]d]f] and [a] is successful! You have discovered a new alien crystal[[[[[[ab]b]c]d]f]a]! Now here are your available alien crystals: a, b, c, d, e, f, [ab],[[ab]b], [[[ab]b]c], [[[[ab]b]c]d], [[[[[ab]b]c]d]f], [[[[[[ab]b]c]d]f]a].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] is worth4500 points. [[[ab]b]c] is worth 13500 points. [[[[ab]b]c]d] is worth 40500 points.[[[[[ab]b]c]d]f] is worth 121500 points. [[[[[[ab]b]c]d]f]a] is worth 364500 points.Your total points are 0. You have 3 opportunities left.Step 8:ChatGPT’s ResponseDO <Fuse> [[[[[[ab]b]c]d]f]a] and aMachine Response•<Fuse> [[[[[[ab]b]c]d]f]a] and [a] is successful! You have discovered a new aliencrystal [[[[[[[ab]b]c]d]f]a]a]! Now here are your available alien crystals: a, b, c,d, e, f, [ab], [[ab]b], [[[ab]b]c], [[[[ab]b]c]d], [[[[[ab]b]c]d]f], [[[[[[ab]b]c]d]f]a],[[[[[[[ab]b]c]d]f]a]a].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] isworth 4500 points. [[[ab]b]c] is worth 13500 points. [[[[ab]b]c]d] is worth 40500points. [[[[[ab]b]c]d]f] is worth 121500 points. [[[[[[ab]b]c]d]f]a] is worth 364500points. [[[[[[[ab]b]c]d]f]a]a] is worth 1093500 points. Your total points are 0. Youhave 2 opportunities left.Step 9:10ChatGPT’s ResponseDO <Extract>[[[[[[[ab]b]c]d]f]a]a]Machine Response•Extracting [[[[[[[ab]b]c]d]f]a]a] gives you 1093500 points. Your total points are1093500. You have 1 opportunities left.Step 10:ChatGPT’s ResponseDO <Extract>[[[[[[[ab]b]c]d]f]a]a]Machine Response•Extracting [[[[[[[ab]b]c]d]f]a]a] gives you 1093500 points. Your total points are2187000. You have 0 opportunities left. For task 1, you have gathered 2187000 intotal! Congratulations! Now ready for task 2? Here are your available alien crystals: a,b, c, d, e, f, where each worth 500 points.Game PromptNow let’s play this game! Note that the above example is just one way of playing the discoverygame. The strategies used in the example play may or may not be the optimal strategy to helpyou to get as many points as possible! Remember, your goal is to collect as many points aspossible using 10 opportunities for 7 tasks. Remember ONLY respond with “DO: <Extract>crystal” or “DO: <Fuse> crystal1 and crystal2” for each step, where crystal, crystal1, crystal2are available alien crystals.A.3.2 LLM MDP promptThe rest of the prompt stays the same as the Direct Play prompt, except the optimal strategy iscomputed by the rational model in Equation 2 has been informed explicitly. Here is an example fromthe HH condition:Game DescriptionThe optimal strategy is guided by the following formula: Let d′:=D−d+ 1represent thesteps remaining. Whend′≥1p(w−1)+ 1 =10.8(3−1)+ 1 = 9 ,where prepresents: “fusion will work 8 out of 10 times” and wrepresents each new crystalyou discover is worth 3 times more points than the most valuable crystal used to produce it.This strategy recommends switching from fusion to extraction at step 9 for optimal results.Additionally, always extract the most valuable available crystal.Here is an OPTIMAL game play:...11Game PromptNow let’s play this game! Note that the above example is the optimal way of playing thediscovery game. Remember, your goal is to collect as many points as possible using 10opportunities for 7 tasks.A.3.3 LLM COT promptGame DescriptionReasoning Process for Optimal Strategy:Objective: The goal is to collect as many points as possible.Decision Breakdown: Fuse: A successful fusion can create a new, more valuable crystal, butit carries a risk of failure. The new crystal is worth 3 times more than the most valuable inputcrystal. Extract: Extracting gives you guaranteed points from the current crystals. SwitchingStrategy: The optimal strategy is determined by a formula. Let d′:=D−d+ 1represent thesteps remaining. Whend′≥1p(w−1)+ 1 =10.8(3−1)+ 1 = 9 ,where prepresents: “fusion will work 8 out of 10 times” and wrepresents each new crystalyou discover is worth 3 times more points than the most valuable crystal used to produce it.This means switching to extraction at step 9 is optimal. Additionally, always extract the mostvaluable available crystal. Based on this, we fuse as much as possible until step 9, and then weextract.Reasoning Step-by-Step:•Step 1-9: Fuse crystals to create more valuable ones, aiming to maximize the potentialpoints.• Step 10: Switch to extracting the most valuable available crystal.Here is an OPTIMAL game play:Let’s go through an optimal game play with step-by-step reasoning for each action.Step 1:ChatGPT’s ResponseDO: <Fuse> a and bAt this early stage, we want to attempt fusions to maximize future point potential. By fusing aand b, we create a new crystal worth 1500 points, which can be further used in future fusions.Machine Response•<Fuse> a and b is successful! You have discovered a new alien crystal [ab]! Now hereare your available alien crystals: a, b, c, d, e, f, [ab].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. Your totalpoints are 0. You have 9 opportunities left.Step 2:12ChatGPT’s ResponseDO: <Fuse>[ab] and b.Reasoning: We continue fusing, as the formula suggests switching to extraction at step 9. Byfusing [ab] and b, we increase the value of the resulting crystal to 4500 points.Machine Response•<Fuse>[ab] and b is successful! You have discovered a new alien crystal [[ab]b]! Nowhere are your available alien crystals: a, b, c, d, e, f, [ab], [[ab]b].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] is worth4500 points. Your total points are 0. You have 8 opportunities left.Step 3:ChatGPT’s ResponseDO <Fuse>[[ab]b] and cReasoning: Following the same logic, fusing [[ab]b] and c increases the value of the crystalfurther to 13500 points, creating more value for future extractions.Machine Response•<Fuse>[[ab]b] and c is successful! You have discovered a new alien crystal [[[ab]b]c]!Now here are your available alien crystals: a, b, c, d, e, f, [ab], [[ab]b], [[[ab]b]c].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] isworth 4500 points. [[[ab]b]c] is worth 13500 points. Your total points are 0. You have7 opportunities left.Step 4:ChatGPT’s ResponseDO <Fuse>[[[ab]b]c] and dReasoning: At step 4, we continue to fuse since we are aiming for higher value crystals. Thenew crystal is now worth 40500 points.Machine Response•<Fuse>[[[ab]b]c] and d is successful! You have discovered a new alien crystal[[[[ab]b]c]d]! Now here are your available alien crystals: a, b, c, d, e, f, [ab], [[ab]b],[[[ab]b]c], [[[[ab]b]c]d].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] is worth4500 points. [[[ab]b]c] is worth 13500 points. [[[[ab]b]c]d] is worth 40500 points.Your total points are 0. You have 6 opportunities left.Step 5:13ChatGPT’s ResponseDO <Fuse>[[[[ab]b]c]d] and eReasoning: At step 5, we attempt another fusion, although there’s a risk of failure. If successful,we increase the potential value of the crystal for extraction.Machine Response•«Fuse>[[[[ab]b]c]d] and e is not successful! You have not discovered a new aliencrystal! Now here are your available alien crystals: a, b, c, d, e, f, [ab], [[ab]b],[[[ab]b]c], [[[[ab]b]c]d].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] is worth4500 points. [[[ab]b]c] is worth 13500 points. [[[[ab]b]c]d] is worth 40500 points.Your total points are 0. You have 5 opportunities left.Step 6:ChatGPT’s ResponseDO <Fuse>[[[[ab]b]c]d] and fReasoning: Since fusion is still optimal, we aim for a fusion. A successful fusion here leads to acrystal worth over 120,000 points.Machine Response•<Fuse>[[[[ab]b]c]d] and f is successful! You have discovered a new alien crystal[[[[[ab]b]c]d]f]! Now here are your available alien crystals: a, b, c, d, e, f, [ab], [[ab]b],[[[ab]b]c], [[[[ab]b]c]d], [[[[[ab]b]c]d]f].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] isworth 4500 points. [[[ab]b]c] is worth 13500 points. [[[[ab]b]c]d] is worth 40500points. [[[[[ab]b]c]d]f] is worth 121500 points. Your total points are 0. You have 4opportunities left.Step 7:ChatGPT’s ResponseDO <Fuse>[[[[[ab]b]c]d]f] and aReasoning: Fusion remains the preferred action to maximize the value of the crystal.Machine Response•<Fuse>[[[[[ab]b]c]d]f] and [a] is successful! You have discovered a new alien crystal[[[[[[ab]b]c]d]f]a]! Now here are your available alien crystals: a, b, c, d, e, f, [ab],[[ab]b], [[[ab]b]c], [[[[ab]b]c]d], [[[[[ab]b]c]d]f], [[[[[[ab]b]c]d]f]a].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] is worth4500 points. [[[ab]b]c] is worth 13500 points. [[[[ab]b]c]d] is worth 40500 points.[[[[[ab]b]c]d]f] is worth 121500 points. [[[[[[ab]b]c]d]f]a] is worth 364500 points.Your total points are 0. You have 3 opportunities left.Step 8:14ChatGPT’s ResponseDO <Fuse> [[[[[[ab]b]c]d]f]a] and aReasoning: Fusion remains the preferred action to maximize the value of the crystal.Machine Response•<Fuse> [[[[[[ab]b]c]d]f]a] and [a] is successful! You have discovered a new aliencrystal [[[[[[[ab]b]c]d]f]a]a]! Now here are your available alien crystals: a, b, c,d, e, f, [ab], [[ab]b], [[[ab]b]c], [[[[ab]b]c]d], [[[[[ab]b]c]d]f], [[[[[[ab]b]c]d]f]a],[[[[[[[ab]b]c]d]f]a]a].•a, b, c, d, e, and f are worth 500 points each. [ab] is worth 1500 points. [[ab]b] isworth 4500 points. [[[ab]b]c] is worth 13500 points. [[[[ab]b]c]d] is worth 40500points. [[[[[ab]b]c]d]f] is worth 121500 points. [[[[[[ab]b]c]d]f]a] is worth 364500points. [[[[[[[ab]b]c]d]f]a]a] is worth 1093500 points. Your total points are 0. Youhave 2 opportunities left.Step 9:ChatGPT’s ResponseDO <Extract>[[[[[[[ab]b]c]d]f]a]a]Reasoning: This is the last opportunity to fuse before switching to extraction. After this step,the next step will be extraction.Machine Response•Extracting [[[[[[[ab]b]c]d]f]a]a] gives you 1093500 points. Your total points are1093500. You have 1 opportunities left.Step 10:ChatGPT’s ResponseDO <Extract>[[[[[[[ab]b]c]d]f]a]a]Reasoning: As per the formula, step 9 marks the switch from fusing to extracting. The highestvalue crystal should now be extracted for maximum points.Machine Response•Extracting [[[[[[[ab]b]c]d]f]a]a] gives you 1093500 points. Your total points are2187000. You have 0 opportunities left. For task 1, you have gathered 2187000 intotal! Congratulations! Now ready for task 2? Here are your available alien crystals: a,b, c, d, e, f, where each worth 500 points.A.3.4 LLM AnalysisTo further analyze the LLM results, we plotted the most common switch step (Figure 4) and the bestfit lines of switch steps across seven tasks (Figure 5) for the switch-once proportions per conditionfor each LLM agent.Compared to the optimal switching point, GPT-3.5 with COT prompting performed best in the HHcondition. However, GPT-3.5 agents only chose to switch once in two out of seven tasks: one switchoccurred at step 9 (the optimal point), while the other occurred prematurely at step 1. Llama 3.1 with15both MDP and COT prompts performed best in the HL condition; Llama 3.1 with COT promptingperformed best in the LH condition; and GPT-3.5 with Direct Play, as well as all MDP and COTprompts, switched at the optimal point in the LL condition. When compared to participants’ mostcommon switch point, GPT-3.5 was the closest to human performance in the HH condition; in theHL condition, Llama 3.1 most closely matched participants (most commonly switching at step 6); inthe LH condition, no model’s most common switch step aligned with participants’; and in the LLcondition, GPT-3.5 with Direct Play, along with all MDP and COT prompts, matched the participants’switching step.Figure 4: Most common switch step per condition for GPT 3.5, GPT 4.0, and Llama 3.1 models ofDirect Play, MDP, and COT Prompting.As participants might use belief-update systems to gradually approaching to near-optimal or optimalswitch point, LLM agents fail to resemble similar behaviors. With the help of COT and MDPprompting, LLM agents started with switching optimally and gradually deviate away from theoptimal solution (Figure 5).Figure 5: Switch-once steps across seven tasks per condition for GPT 3.5, GPT 4.0, and Llama 3.1models of Direct Play, MDP, and COT Prompting.16 |
Subsets and Splits