Dataset Viewer

forum_id (string, 10 chars) | raw_ocr_text (string, 13.1k–129k chars)
---|---|
h7-XixPCAL | Structured Denoising Diffusion Models in DiscreteState-SpacesJacob Austin⇤, Daniel D. Johnson⇤, Jonathan Ho, Daniel Tarlow & Rianne van den Berg†Google Research, Brain Team{jaaustin,ddjohnson,jonathanho,dtarlow,riannevdberg}@google.comAbstractDenoising diffusion probabilistic models (DDPMs) [ 17] have shown impressiveresults on image and waveform generation in continuous state spaces. Here, weintroduce Discrete Denoising Diffusion Probabilistic Models (D3PMs), diffusion-like generative models for discrete data that generalize the multinomial diffusionmodel of Hoogeboom et al. [18], by going beyond corruption processes with uni-form transition probabilities. This includes corruption with transition matrices thatmimic Gaussian kernels in continuous space, matrices based on nearest neighborsin embedding space, and matrices that introduce absorbing states. The third al-lows us to draw a connection between diffusion models and autoregressive andmask-based generative models. We show that the choice of transition matrix is animportant design decision that leads to improved results in image and text domains.We also introduce a new loss function that combines the variational lower boundwith an auxiliary cross entropy loss. For text, this model class achieves strongresults on character-level text generation while scaling to large vocabularies onLM1B. On the image dataset CIFAR-10, our models approach the sample qualityand exceed the log-likelihood of the continuous-space DDPM model.1 IntroductionGenerative modeling is a core problem in machine learning, useful both for benchmarking our abilityto capture statistics of natural datasets and for downstream applications that require generatinghigh-dimensional data like images, text, and speech waveforms. There has been a great deal ofprogress with the development of methods like GANs [ 14,3], V AEs [ 22,32], large autoregressiveneural network models [ 43,42,44], normalizing flows [ 31,11,21,30], and others, each with theirown tradeoffs in terms of sample quality, sampling speed, log-likelihoods, and training stability.Recently, diffusion models [ 36] have emerged as a compelling alternative for image [ 17,39] and au-dio [7,23] generation, achieving comparable sample quality to GANs and log-likelihoods comparableto autoregressive models with fewer inference steps. A diffusion model is a parameterized Markovchain trained to reverse a predefined forward process, which is a stochastic process constructed togradually corrupt training data into pure noise. Diffusion models are trained using a stable objectiveclosely related to both maximum likelihood and score matching [ 19,45], and they admit fastersampling than autoregressive models by using parallel iterative refinement [28, 38, 40, 37].Although diffusion models have been proposed in both discrete and continuous state spaces [ 36],most recent work has focused on Gaussian diffusion processes that operate in continuous state spaces(e.g. for real-valued image and waveform data). Diffusion models with discrete state spaces have35th Conference on Neural Information Processing Systems (NeurIPS 2021).⇤Equal contributions†Now at Microsoft ResearchFigure 1: D3PM forward and (learned) reverse process applied to a quantized swiss roll. Each dotrepresents a 2D categorical variable. 
Top: samples from the uniform, discretized Gaussian, andabsorbing state D3PM model forward processes, along with corresponding transition matrices Q.Bottom: samples from a learned discretized Gaussian reverse process.been explored for text and image segmentation domains [ 18], but they have not yet been demonstratedas a competitive model class for large scale text or image generation.Our aim in this work is to improve and extend discrete diffusion models by using a more structuredcategorical corruption process to shape data generation, as illustrated in Figure 1. Our models do notrequire relaxing or embedding discrete data (including images) into continuous spaces, and can embedstructure or domain knowledge into the transition matrices used by the forward process. We achievesignificantly improved results by taking advantage of this flexibility. We develop structured corruptionprocesses appropriate for text data, using similarity between tokens to enable gradual corruptionand denoising. Expanding further, we also explore corruption processes that insert [MASK] tokens,which let us draw parallels to autoregressive and mask-based generative models. Finally, we studydiscrete diffusion models for quantized images, taking inspiration from the locality exploited bycontinuous diffusion models. This leads to a particular choice of discrete corruption process thatdiffuses preferentially to more similar states and leads to much better results in the image domain.Overall, we make a number of technical and conceptual contributions. Beyond designing several newstructured diffusion models, we introduce a new auxiliary loss which stabilizes training of D3PMsand a family of noise schedules based on mutual information that lead to improved performance. Westrongly outperform various non-autoregressive baselines for text generation on character-level textgeneration, and successfully scale discrete diffusion models to large vocabularies and long sequencelengths. We also achieve strong results on the image dataset CIFAR-10, approaching or exceedingthe Gaussian diffusion model from Ho et al. [17] on log-likelihoods and sample quality.2 Background: diffusion modelsDiffusion models [ 36] are latent variable generative models characterized by a forward and a reverseMarkov process. The forward process q(x1:T|x0)=QTt=1q(xt|xt1)corrupts the data x0⇠q(x0)into a sequence of increasingly noisy latent variables x1:T=x1,x2,. . . ,xT. The learnedreverse Markov process p✓(x0:T)=p(xT)QTt=1p✓(xt1|xt)gradually denoises the latent variablestowards the data distribution. For example, for continuous data, the forward process typically addsGaussian noise, which the reverse process learns to remove.2In order to optimize the generative model p✓(x0)to fit the data distribution q(x0), we typicallyoptimize a variational upper bound on the negative log-likelihood:Lvb=Eq(x0)DKL[q(xT|x0)||p(xT)]|{z}LT+TXt=2Eq(xt|x0)⇥DKL[q(xt1|xt,x0)||p✓(xt1|xt)]⇤| {z }Lt1Eq(x1|x0)[logp✓(x0|x1)]|{z}L0. (1)When the number of time steps Tgoes to infinity, both the forward process and the reverse processshare the same functional form [ 36,12], in the sense that the true posterior q(xt1|xt)becomesfully conditionally independent.3This motivates using a conditionally independent approximatereverse process p✓(xt1|xt)from the same class of distributions as that of the forward process.Furthermore, for several choices of the forward process the distribution q(xt|x0)converges to astationary distribution ⇡(x)in the limit t!1independent of the value of x0. 
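The display of Eq. (1) above was garbled during extraction. Reconstructed from the $L_T$, $L_{t-1}$, and $L_0$ terms named in the surrounding text, the variational bound reads:

$$
L_{\mathrm{vb}} = \mathbb{E}_{q(x_0)}\Big[\underbrace{D_{\mathrm{KL}}\big[q(x_T|x_0)\,\|\,p(x_T)\big]}_{L_T} \;+\; \sum_{t=2}^{T} \mathbb{E}_{q(x_t|x_0)}\Big[\underbrace{D_{\mathrm{KL}}\big[q(x_{t-1}|x_t,x_0)\,\|\,p_\theta(x_{t-1}|x_t)\big]}_{L_{t-1}}\Big] \;\underbrace{-\,\mathbb{E}_{q(x_1|x_0)}\big[\log p_\theta(x_0|x_1)\big]}_{L_0}\Big] \tag{1}
$$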
When the number of time steps $T$ is large enough and we choose $\pi(x)$ as the prior $p(x_T)$, we can guarantee that the $L_T$ term in (1) will approach zero regardless of the data distribution $q(x_0)$. (Alternatively, one can use a learned prior $p_\theta(x_T)$.)

While $q(x_t|x_{t-1})$ can in theory be arbitrary, efficient training of $p_\theta$ is possible when $q(x_t|x_{t-1})$:

1. Permits efficient sampling of $x_t$ from $q(x_t|x_0)$ for an arbitrary time $t$, allowing us to randomly sample timesteps and optimize each $L_{t-1}$ term individually with stochastic gradient descent,
2. Has a tractable expression for the forward process posterior $q(x_{t-1}|x_t, x_0)$, which allows us to compute the KL divergences present in the $L_{t-1}$ term of (1).

The majority of recent work in continuous spaces [17, 37, 7, 28] defines the forward and reverse distributions as $q(x_t|x_{t-1}) = \mathcal{N}\big(x_t \mid \sqrt{1-\beta_t}\,x_{t-1}, \beta_t I\big)$ and $p_\theta(x_{t-1}|x_t) = \mathcal{N}\big(x_{t-1} \mid \mu_\theta(x_t, t), \Sigma_\theta(x_t, t)\big)$, respectively. The aforementioned properties hold in the case of these Gaussian diffusion models: the forward process $q(x_t|x_0)$ converges to a stationary distribution, motivating the choice $p(x_T) = \mathcal{N}(x_T \mid 0, I)$, and both $q(x_t|x_0)$ and $q(x_{t-1}|x_t, x_0)$ are tractable Gaussian distributions for which the KL divergence can be computed analytically.

3 Diffusion models for discrete state spaces

Diffusion models with discrete state spaces were first introduced by Sohl-Dickstein et al. [36], who considered a diffusion process over binary random variables. Hoogeboom et al. [18] extended the model class to categorical random variables with transition matrices characterized by uniform transition probabilities. In their supplementary material, Song et al. [37] also derived this extension, although no experiments were performed with this model class. Here, we briefly describe a more general framework for diffusion with categorical random variables which includes these models as special cases.⁴

For scalar discrete random variables with $K$ categories $x_t, x_{t-1} \in \{1, \ldots, K\}$ the forward transition probabilities can be represented by matrices: $[Q_t]_{ij} = q(x_t = j \mid x_{t-1} = i)$. Denoting the one-hot version of $x$ with the row vector $\boldsymbol{x}$, we can write

$$q(x_t \mid x_{t-1}) = \mathrm{Cat}(x_t;\, p = \boldsymbol{x}_{t-1} Q_t), \tag{2}$$

where $\mathrm{Cat}(x; p)$ is a categorical distribution over the one-hot row vector $x$ with probabilities given by the row vector $p$, and $\boldsymbol{x}_{t-1} Q_t$ is to be understood as a row vector–matrix product. We assume that $Q_t$ is applied to each pixel of an image or each token in a sequence independently, and that $q$ factorizes over these higher dimensions as well; we thus write $q(x_t|x_{t-1})$ in terms of a single element.

³For continuous state spaces and Gaussian $q$, the limit $T \to \infty$ corresponds to a stochastic differential equation [40], whereas for discrete state spaces it corresponds to a Markov jump process.
⁴Our implementation of the D3PM framework is available at https://github.com/google-research/google-research/tree/master/d3pm.

Starting from $x_0$, we obtain the following $t$-step marginal and posterior at time $t-1$:

$$q(x_t \mid x_0) = \mathrm{Cat}\big(x_t;\, p = \boldsymbol{x}_0 \overline{Q}_t\big), \quad \text{with} \quad \overline{Q}_t = Q_1 Q_2 \cdots Q_t$$
$$q(x_{t-1} \mid x_t, x_0) = \frac{q(x_t \mid x_{t-1}, x_0)\, q(x_{t-1} \mid x_0)}{q(x_t \mid x_0)} = \mathrm{Cat}\left(x_{t-1};\, p = \frac{\boldsymbol{x}_t Q_t^\top \odot \boldsymbol{x}_0 \overline{Q}_{t-1}}{\boldsymbol{x}_0 \overline{Q}_t \boldsymbol{x}_t^\top}\right). \tag{3}$$

Note that due to the Markov property of the forward process $q(x_t|x_{t-1}, x_0) = q(x_t|x_{t-1})$. Assuming that the reverse process $p_\theta(x_{t-1}|x_t)$ is also factorized as conditionally independent over the image or sequence elements, the KL divergence between $q$ and $p_\theta$ can be computed by simply summing over all possible values of each random variable; we thus satisfy criteria 1 and 2 discussed in Section 2. Depending on $Q_t$, the cumulative products $\overline{Q}_t$ can often be computed in closed form, or simply precomputed for all $t$. However, for large $K$ and large $T$ this may be prohibitive.
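For concreteness, here is a minimal NumPy sketch of Eqs. (2)–(3) for a single categorical variable. The uniform transition matrix, the linear β schedule, the toy sizes, and all names below are illustrative assumptions, not the released implementation (linked in footnote 4).

```python
import numpy as np

K, T = 8, 100                              # categories and diffusion steps (toy sizes)
betas = np.linspace(1e-3, 0.1, T + 1)      # betas[t] for t = 1..T; index 0 unused

def uniform_Q(beta, K):
    """Q_t = (1 - beta_t) I + (beta_t / K) 11^T: uniform transitions, uniform stationary dist."""
    return (1.0 - beta) * np.eye(K) + beta * np.ones((K, K)) / K

Q = [None] + [uniform_Q(betas[t], K) for t in range(1, T + 1)]   # Q[t] = Q_t
Qbar = [np.eye(K)]                                               # Qbar[t] = Q_1 Q_2 ... Q_t
for t in range(1, T + 1):
    Qbar.append(Qbar[t - 1] @ Q[t])

def q_xt_given_x0(x0, t):
    """t-step marginal q(x_t | x_0) from Eq. (3): row x0 of Qbar_t."""
    return Qbar[t][x0]

def q_posterior(xt, x0, t):
    """Forward-process posterior q(x_{t-1} | x_t, x_0) from Eq. (3), for t >= 1."""
    probs = Q[t][:, xt] * Qbar[t - 1][x0]      # (x_t Q_t^T) elementwise (x_0 Qbar_{t-1})
    return probs / probs.sum()                 # normalizer equals x_0 Qbar_t x_t^T

# Corrupt one variable with the forward process, then inspect a marginal and posterior.
rng = np.random.default_rng(0)
x0, x = 3, 3
for t in range(1, T + 1):
    x = rng.choice(K, p=Q[t][x])               # x_t ~ Cat(p = x_{t-1} Q_t), Eq. (2)
print("x_T =", x)
print("q(x_10 | x_0=3)        =", np.round(q_xt_given_x0(x0, 10), 3))
print("q(x_9  | x_10=2, x_0=3) =", np.round(q_posterior(2, x0, 10), 3))
```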
InAppendix A.4 we discuss how to ensure Qtcan still be computed efficiently in this case, allowingthe framework to scale to a larger number of categories.In the next section we discuss the choice of the Markov transition matrices Qtand correspondingstationary distributions. From here on, we refer to the general class of diffusion models with discretestate spaces as Discrete Denoising Diffusion Probabilistic Models (D3PMs).3.1 Choice of Markov transition matrices for the forward processAn advantage of the D3PM framework described above is the ability to control the data corruptionand denoising process by choosing Qt, in notable contrast to continuous diffusion, for which onlyadditive Gaussian noise has received significant attention. Besides the constraint that the rows of Qtmust sum to one to conserve probability mass, the only other constraint in choosing Qtis that therows of Qt=Q1Q2...Qtmust converge to a known stationary distribution5when tbecomes large,which can be guaranteed while imposing minimal restrictions on Qt(see Appendix A.1).We argue that for most real-world discrete data, including images and text, it makes sense toadd domain-dependent structure to the transition matrices Qtas a way of controlling the forwardcorruption process and the learnable reverse denoising process. Below we briefly discuss the uniformtransition matrices that have been studied in prior work [ 18], along with a set of structured transitionmatrices we have explored for our image and text dataset experiments; see Appendix A.2 for moredetails on each matrix type. We also note that this set is not exhaustive, and many other transitionmatrices could also be used within the D3PM framework.Uniform (Appendix A.2.1). Sohl-Dickstein et al. [36]considered a simple 2⇥2transition matrix forbinary random variables. Hoogeboom et al. [18]later extended this to categorical variables, proposinga transition matrix Qt=( 1 t)I+t/KTwith t2[0,1]. Since this transition matrix isdoubly stochastic with strictly positive entries, the stationary distribution is uniform. Because thetransition probability to any other state is uniform, in this paper we equivalently refer to this discretediffusion instance as D3PM-uniform.Absorbing state (Appendix A.2.2). Motivated by the success of BERT [ 10] and recent work onConditional Masked Language Models (CMLMs) in text, we consider a transition matrix with anabsorbing state (called [MASK]), such that each token either stays the same or transitions to [MASK]with some probability t. This does not impose particular relationships between categories, similar touniform diffusion, but still allows corrupted tokens to be distinguished from original ones. Moreover,the stationary distribution is not uniform but has all the mass on the [MASK] token. For images, wereuse the grey pixel as the [MASK] absorbing token.Discretized Gaussian (Appendix A.2.3). Instead of transitioning uniformly to any other state, forordinal data we propose imitating a continuous space diffusion model by using a discretized, truncatedGaussian distribution. We choose a normalization such that the transition matrix is doubly stochastic,leading to a uniform stationary distribution. This transition matrix will transition between moresimilar states with higher probability, and is well suited for quantized ordinal data such as images.Token embedding distance (Appendix A.2.4). Textual data does not have ordinal structure, butthere may still be interesting semantic relationships. 
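To make the absorbing-state and ordinal (Gaussian-like) matrix families just described concrete, here is one way such matrices can be built in NumPy. The Gaussian-like construction is a simplified stand-in (symmetric off-diagonal weights, leftover mass on the diagonal) for the exact truncated and normalized version in Appendix A.2.3, and the mask index, bandwidth, and β values are illustrative assumptions.

```python
import numpy as np

def absorbing_Q(beta, K, mask_idx):
    """Each state stays put w.p. 1 - beta and jumps to [MASK] w.p. beta;
    [MASK] is absorbing, so the stationary distribution puts all mass on it."""
    Q = (1.0 - beta) * np.eye(K)
    Q[:, mask_idx] += beta
    return Q

def gaussian_like_Q(beta, K, bandwidth=2.0):
    """Doubly stochastic matrix that prefers transitions to nearby ordinal states
    (a simplified analogue of the discretized Gaussian of Appendix A.2.3)."""
    idx = np.arange(K)
    W = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2.0 * bandwidth ** 2))
    np.fill_diagonal(W, 0.0)
    W = beta * W / W.sum(axis=1, keepdims=True).max()   # scale so rows stay below 1
    return W + np.diag(1.0 - W.sum(axis=1))             # symmetric + rows sum to 1 => doubly stochastic

K, beta = 8, 0.1
Q_abs = absorbing_Q(beta, K, mask_idx=K - 1)
Q_gauss = gaussian_like_Q(beta, K)
assert np.allclose(Q_abs.sum(axis=1), 1.0)
assert np.allclose(Q_gauss.sum(axis=1), 1.0) and np.allclose(Q_gauss.sum(axis=0), 1.0)
```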
For instance, in a word-level vocabulary5If a stationary distribution is not known, we can introduce a learned prior p✓(xT); we note that this isequivalent to extending the forward process by appending a rank-one matrix QT+1that ignores xTand producesa deterministic xT+1, then learning the reverse step p✓(xT|xT+1)=p✓(xT).4synonyms or closely related words (like “dog" or “cat") may be more similar than other tokens. Asa demonstration of the generality of the D3PM framework, we explore using similarity in wordembedding space to guide the forward process, and construct a doubly-stochastic transition matrixthat transitions more frequently between tokens that have similar embeddings while maintaining auniform stationary distribution.For uniform and absorbing-state diffusion, the cumulative products Qtcan be computed in closedform (see Appendix A.4.1); the remainder can be precomputed.3.2 Noise schedulesWe consider several different options for the noise schedule of the forward process. For discretizedGaussian diffusion, we explore linearly increasing the variance of the Gaussian before discretizingit. (Note that a linear schedule for Qtleads to a nonlinear amount of cumulative noise in Qt.) Foruniform diffusion we use the cosine schedule which sets the cumulative probability of a transition toa cosine function, as introduced by Nichol and Dhariwal [28]and adapted by Hoogeboom et al. [18].For a general set of transition matrices Qt(such as the one based on token embeddings), previouslyproposed schedules may not be directly applicable. We consider linearly interpolating the mutualinformation between xtandx0to zero, i.e. I(xt;x0)⇡(1tT)H(x0). Interestingly, for thespecific case of absorbing-state D3PMs, this schedule reduces to exactly the (Tt+ 1)1scheduleproposed by Sohl-Dickstein et al. [36]for a Bernoulli diffusion process. See Appendix A.7 for moredetails.3.3 Parameterization of the reverse processWhile it is possible to directly predict the logits of p✓(xt1|xt)using a neural network nn✓(xt),we follow Ho et al. [17]and Hoogeboom et al. [18]and focus on using a neural network nn✓(xt)to predict the logits of a distribution ep✓(ex0|xt), which we combine with q(xt1|xt,x0)and asummation over one-hot representations of x0to obtain the following parameterizationp✓(xt1|xt)/Xex0q(xt1,xt|ex0)ep✓(ex0|xt). (4)We note that under this x0-parameterization the KL divergence DKL[q(xt1|xt,x0)||p✓(xt1|xt)]will be zero if ep✓(ex0|xt)places all of its probability mass on the original value x0. The decompositionofq(xt1|xt,x0)in(3)also provides us with a motivation for this parameterization. According to(3), in a given state xt, the optimal reverse process only takes into account transitions to states forwhich q(xt|xt1)is non-zero. Therefore, the sparsity pattern of Qtdetermines the sparsity patternof the ideal reverse transition probabilities in p✓(xt1|xt). The parameterization in (4)automaticallyensures that the learned reverse probability distribution p✓(xt1|xt)has the correct sparsity patterndictated by the choice of the Markov transition matrix Qt. This parameterization also lets us performinference with ksteps at a time, by predicting p✓(xtk|xt)=Pq(xtk,xt|ex0)ep✓(ex0|xt).Finally, when modeling ordinal discrete data, instead of predicting the logits of ep✓(ex0|xt)directlywith the output of a neural net, another option is to model the probabilities with a truncated discretizedlogistic distribution (see Appendix A.8). 
This provides an extra ordinal inductive bias to the reversemodel and boosts FID and log-likelihood scores for images.3.4 Loss functionWhile the original diffusion models introduced by Sohl-Dickstein et al. [36]were optimized withthe negative variational lower bound Lvbof(1), more recent diffusion models are optimized withdifferent objectives. For instance, Ho et al. [17] derive a simplified loss function ( Lsimple ) thatreweights the negative variational bound, and Nichol and Dhariwal [28] explore a hybrid lossLhybrid =Lsimple +Lvb(using one term to learn the predicted mean and the other to learnpredicted variance). Inspired by this recent work, we introduce an auxiliary denoising objective forthex0-parameterization of the reverse process, which encourages good predictions of the data x0ateach time step. We combine this with the negative variational lower bound, yielding the followingalternative loss function:L=Lvb+Eq(x0)Eq(xt|x0)[logep✓(x0|xt)]. (5)5We note that the auxiliary loss resembles the cross entropy term L0in(1)att=1, and so one mightexpect that it is a KL reweighting similar to the one described by Ho et al. [17]. However, our Ldirectly supervises the model output ep✓(ex0|xt). This is in general a stronger source of supervisionthan any reweighting of the terms in the lower bound (1), which only provides supervision throughthe sum in (4). To see this, note that for a fixed x0, both DKL[q(xt1|xt,x0)||p✓(xt1|xt)]andEq(xt|x0)[logep✓(x0|xt)]are minimized when ep✓(ex0|xt)has all its mass on the datapoint x0, butfor some choices of qthere may be a different setting ex06=x0that induces the same distributionp✓(xt1|xt). We find that training with this loss leads to improved quality of image samples.4 Connection to existing probabilistic models for textIn this section we expand on interesting connections between the D3PM framework and severalexisting probabilistic and language modeling approaches.BERT is a one-step diffusion model: One possible D3PM transition matrix is a combination of auniform transition matrix and an absorbing state at the [MASK] token (i.e. Q=↵eTm+T/K+(1↵)I, where emis a one-hot vector on the [MASK] token). For a one-step diffusion processin which q(x1|x0)replaces 10% of tokens with [MASK] and 5% uniformly at random, this leadsprecisely to the BERT denoising objective, i.e. LvbLT=Eq(x1|x0)[logp✓(x0|x1)] = LBERT ,since LTis a constant independent of ✓(assuming a fixed prior).Autoregressive models are (discrete) diffusion models: Consider a diffusion process that deter-ministically masks tokens one-by-one in a sequence of length N=T:q([xt]i|x0)=[ x0]iifi<Ntelse [MASK] . This is a deterministic forward process, so q(xt1|xt,x0)is a delta distributionon the xtsequence with one fewer mask: q([xt1]i|xt,x0)=[xt]iifi6=Ttelse[x0]i. Whilethis process is not applied independently to each token, it can be recast as an independently-applieddiffusion process on the product space [0...N]⇥V, where each token is tagged with its position inthe sequence, Vis the vocabulary, and Qis an N⇥| V|⇥ N⇥| V| sparse matrix.Because all tokens except the one at position i=Tthave deterministic posteriors, the KLdivergence DKL(q([xt1]j|xt,x0)||p✓([xt1]j|xt))is zero for all other positions. 
The onlytoken for which this is not true is the token at position i, for which DKL(q([xt1]i|xt,x0)||p✓([xt1]i|xt)) = logp✓([x0]i|xt), the standard cross entropy loss for an autoregressive model.(Generative) Masked Language-Models (MLMs) are diffusion models: Generative Masked Lan-guage Models ([ 13], [47]) are generative models that generate text from a sequence of [MASK]tokens. They are usually trained by sampling a sequence x0, masking ktokens according to someschedule, and learning to predict the masked tokens given context. It turns out that a D3PM absorbing([MASK]) model trained on the usual ELBO objective with the x0-parameterization from 3.3 reducesto a reweighted version of this MLM objective (see Appendix A.3 for a detailed derivation).5 Text generationFor text, we experiment with generation on two datasets: text8 [ 26], a character-level dataset extractedfrom English-language Wikipedia, and the One Billion Word dataset (LM1B) [ 6], a large dataset ofshuffled English-language sentences. For both, we train a D3PM uniform model based on the workby Hoogeboom et al. [18](D3PM uniform) and a model that masks tokens (D3PM absorbing). Wealso consider a model that transitions uniformly to nearest neighbors in a token embedding space(D3PM NN). We follow Hoogeboom et al. [18]and use T= 1000 timesteps, although we are alsoable to evaluate on fewer due to the parameterization in Section 3.3.5.1 Character-level generation on text8text8 is a character-level text dataset consisting of a small vocabulary of 27 tokens: the letters ‘a’-‘z’and the ‘_’ whitespace token. We follow the convention of training and evaluating text8 in chunks oflength 256 without any preprocessing [ 18]. For nearest-neighbor D3PM, our nearest neighbor graphin character-space is shown in Appendix B.2.1. D3PM uniform models were trained with a cosineschedule from Hoogeboom et al. [18](ablations in Appendix B.2.1), while D3PM absorbing andD3PM NN models were trained with a mutual information schedule.6Table 1 shows that for D3PM, the D3PM absorbing model performed the best, exceeding theuniform and NN diffusion models. We were able to improve upon the baseline result of [ 18] withhyperparameter tuning, and our uniform and NN results outperformed results from Hoogeboomet al. [18] across all inference steps, down to as few as 20. We found that L=0.01worked bestfor D3PM absorbing, while Lvbwas better for D3PM uniform. Our model outperforms all non-autoregressive baselines except one, the Discrete Flow model [ 41] (for which unfortunately noopen-source implementations exist), and is also faster than all but one method, the IAF/SCF model[49]. It is also nearly 20x faster than an autoregressive transformer of the same size. We note that whileour 20-step D3PM models in Table 1 are much faster than a comparable autoregressive transformers,this table only shows timings for batch size 1 (per device). For larger batches, autoregressive cachingallows transformers to perform inference relatively more quickly. We include additional benchmarksand a plot of inference time as a function of iterations in Appendix B.2.1. D3PM with the maskabsorbing token was by far the best performing model, which lends credibility to the use of masks indenoising auto-encoders. 
Nearest-neighbor diffusion only narrowly improves upon a D3PM-uniformmodel: this was a surprising negative result for us, suggesting that not all notions of structure aremeaningful.5.2 Text generation on LM1BText generation for large-scale text datasets and large vocabularies with discrete diffusion models hasnot been previously demonstrated. We include results from LM1B as a proof of concept, showingthat these models can indeed scale (as discussed in Appendix A.4), and that the D3PM absorbingmodel continues to excel. All models were trained and evaluated on packed sequences of length 128,using a sentencepiece6vocabulary of size 8192 .Table 2 contains results from experiments on LM1B. Overall, mask diffusion (D3PM absorbing)does relatively well, approaching the performance of a comparable autoregressive model of thesame size, and scaling to far fewer steps, while uniform diffusion performs significantly worse.We find, surprisingly, that the D3PM NN model performs worse than the uniform model in termsof log likelihoods (although it demonstrates unique qualitative behavior). This suggests that wordembedding similarity may not be a meaningful kind of locality in a diffusion process. We found thetheL=0.01loss worked best for the mask absorbing model, but reduced performance for the othermodels. We note the surprising scaling in perplexity in Figure 2, achieving strong results with asfew as 10 inference steps. We also show samples from our model and completions from corruptedsamples.Table 1: Quantitative results on text8. NLL is reported on the entire test set. Sample times are forgenerating a single example of length 256. Results are reported on two seeds. All models are standard12-layer transformers unless otherwise noted.†Transformer XL is a 24-layer transformer, using a784 context window.‡Results reported by [18] by running code from official repository.Model Model steps NLL (bits/char) ( #) Sample time (s) ( #)Discrete Flow [41] ( 8⇥3layers) - 1.23 0 .16Argmax Coupling Flow [18] - 1.80 0 .40±0.03IAF / SCF [49]‡- 1.88 0 .04±0.0004Multinomial Diffusion (D3PM uniform) [18] 1000 1.72 26 .6±2.2D3PM uniform [18] (ours) 1000 1.61±0.02 3 .6±0.4D3PM NN ( Lvb) (ours) 1000 1.59±0.03 3 .1474 ±0.0002D3PM absorbing ( L=0.01) (ours) 1000 1.45±0.02 3 .4±0.3D3PM uniform [18] (ours) 256 1.68±0.01 0 .5801 ±0.0001D3PM NN ( Lvb) (ours) 256 1.64±0.02 0 .813±0.002D3PM absorbing ( L=0.01) (ours) 256 1.47±0.03 0 .598±0.002Transformer decoder (ours) 256 1 .37 0 .3570 ±0.0002Transformer decoder [1] 256 1 .18 -Transformer XL [9]†256 1 .08 -D3PM uniform [18] (ours) 20 1.79±0.03 0 .0771 ±0.0005D3PM NN ( Lvb) (ours) 20 1.75±0.02 0 .1110 ±0.0001D3PM absorbing ( L=0.01) (ours) 20 1.56±0.04 0 .0785 ±0.00036https://github.com/google/sentencepiece7Figure 2: Left: perplexity v.s. sampling iterations for LM1B. Right: Using a trained D3PM absorbingmodel for LM1B to (top) generate new sentences and (bottom) reconstruct corrupted examples.Table 2: Quantitative results on LM1B. Perplexity reported on the test set. Results are reportedon two seeds. 
All models have context window length 128 and 12 layers unless otherwise noted.†Transformer XL is a 24 layer transformer.‡rounded for readability, see Appendix B.2.2.Metric: Perplexity ( #) Sample time‡(s) (#)inference steps: 1000 128 64 1000 128 64D3PM uniform 137.9 ±2.1 139.2 ±1.2 145.0 ±1.2 1.82 0.21 0.08D3PM NN 149.5 ±1.3 158.6 ±2.2 160.4 ±1.2 21.29 6.69 5.88D3PM absorbing 76.9 ±2.3 80.1 ±1.2 83.6 ±6.1 1.90 0.19 0.10Transformer (ours) - 43.6 - - 0.26 -Transformer XL [9]†- 21.8 - - - -6 Image generationWe evaluate the performance of several D3PM models on the task of unconditional image generationwith the dataset CIFAR-10 [ 25]. We follow Ho et al. [17]and use T= 1000 timesteps for all modelsand verify that for all models the forward process converges to the stationary distribution within Tsteps, yielding a value of at most LT⇡105bits per dimension. We train three versions of D3PMwith different transition matrices: doubly stochastic matrices with uniform transition probabilities(D3PM uniform) [ 18], transition matrices with an absorbing state located at R, G and B values of 128(D3PM absorbing) and doubly stochastic discretized Gaussian transition matrices (D3PM Gauss). Forthe D3PM uniform model we experimented with a linear tschedule as well as the cosine scheduleas proposed in [ 18], with the cosine schedule producing the best results. For D3PM absorbing weuse the schedule t=(Tt+ 1)1as also proposed in [ 36], which corresponds to increasing theprobability of being in the absorbing state linearly over time. For D3PM Gauss we use the samelinear schedule as in [17]. See Appendix B.1 for more details on the experimental setup.Table 3 shows that for D3PM models trained with the Lvbobjective, D3PM Gauss performs betterthan D3PM absorbing and uniform on all metrics: Inception score (IS), Frechet Inception Distance(FID) and negative log-likelihood (NLL). The IS score of the uniform and absorbing D3PM modelsare comparable, while the FID score and NLL of the D3PM absorbing model are slightly better. Wetrained both D3PM absorbing and D3PM Gauss with the alternative loss function Lof(5), andwe found =0.001to work best. We have also experimented with larger values of and a modeltrained only with the auxiliary denoising term in (5). Although this led to a more rapid increasein performance early on in training, the NLL leveled off at higher values for larger and the FIDeven started increasing again. The results show that the models trained with Lperform significantlybetter than their counterparts trained with Lvb. One explanation for this boost in performance is thatthe cross entropy term leads to gradient noise that varies less with the time step t, which is in contrastto the large change in magnitude of the Lt1terms in Lvbfor smaller t, as demonstrated by Nicholand Dhariwal [28]. Finally, we achieve our best results by combining D3PM Gauss trained on Lwith a truncated logistic parameterization of the reverse process distribution p✓(ex0|xt)(D3PM Gauss+ logistic). Figure 3 shows samples from our best model (D3PM Gauss + logistic), as well as theD3PM absorbing model.8Table 3: Inception scores (IS), Frechet Inception Distance (FID) and negative log-likehood (NLL) onthe image dataset CIFAR-10. The NLL is reported on the test set in bits per dimension. 
We report ourresults as averages with standard deviations, obtained by training five models with different seeds.Model IS ( ") FID ( #) NLL ( #)Sparse Transformer [8] 2.80NCSN [38] 8.87±0.12 25.32NCSNv2 [39] 8.40±0.07 10.87StyleGAN2 + ADA [20] 9.74±0.05 3.26Diffusion (original), Lvb[36] 5.40DDPM Lvb[17] 7.67±0.13 13 .51 3.70DDPM Lsimple [17] 9.46±0.11 3 .17 3.75Improved DDPM Lvb[28] 11.47 2.94Improved DDPM Lsimple [28] 2.90 3.37DDPM++ cont [40] 2.92 2 .99NCSN++ cont. [40] 9.89 2 .20D3PM uniform Lvb 5.99±0.14 51 .27±2.15 5.08±0.02D3PM absorbing Lvb 6.26±0.10 41 .28±0.65 4.83±0.02D3PM absorbing L=0.001 6.78±0.08 30 .97±0.64 4.40±0.02D3PM Gauss Lvb 7.75±0.13 15 .30±0.55 3.966±0.005D3PM Gauss L=0.001 8.54±0.12 8 .34±0.10 3.975±0.006D3PM Gauss + logistic L=0.001 8.56±0.10 7 .34±0.19 3.435±0.0077 Related WorkDiffusion generative models were first proposed by Sohl-Dickstein et al. [36] and have gainedrenewed attention recently due to strong results on image and waveform generation [ 17,7]. Recentworks have proposed improvements for diffusion model training, including importance sampling ofthe ELBO, better noise schedules [ 28] and implicit diffusion models [ 37]. Several works have alsodrawn connections to score matching [ 45,19,38], leading to improved sampling algorithms in thecontinuous-time limit [40].While most works have considered continuous diffusion models, discrete diffusion-like models weredescribed in [ 36] and applied to text generation and image segmentation data in [ 18]. Some works[29,27] have dealt with discrete data by embedding it in continuous space and leveraging Gaussiandiffusion, but have not applied this to text. Seff et al. [35]considered generation of discrete structuredobjects using a diffusion-like Markov corruption process. Goyal et al. [15]proposed a diffusion-likemodel for images with a more flexible family of learned corruption processes. Ho et al. [17]alsodraws connections between diffusion and autoregressive models for continuous data.For text, denoising autoencoders have a long history both in representation learning [ 2,10] and morerecently as generative models [ 47]. These closely resemble our absorbing state diffusion variants forFigure 3: Left: progressive sampling at t= 1000 ,900,800,. . . ,0for D3PM absorbing (top) andD3PM Gauss + logistic (bottom), trained with Lloss on CIFAR-10. These samples were cherrypicked. Right: (non cherry picked) samples from the D3PM Gauss + logistic model.9a particular schedule and transition matrix (see Section 4), although our framing allows us to computelog-likelihoods and experiment with alternative transition matrices. Other works have considerednon-autoregressive translation and speech transcription via insertion and deletion [ 16,33], masking[13], and iteratively-refined sequence alignments [5, 34].8 DiscussionWe have presented D3PMs, a class of models that improves diffusion models for discrete data bydefining new kinds of discrete corruption processes. We achieve strong empirical results relative toprevious work on discrete diffusion models, even surpassing performance of continuous diffusionmodels in terms of log-likelihoods for image generation. While these results are promising, onelimitation is that—like much other work on non-autoregressive generative models—our models arestill inferior to strong autoregressive models like Transformer XL for text generation, and continuousdiffusion models still yield stronger results on image quality. 
We expect that D3PMs can benefitfurther from the rapid development of continuous diffusion models [ 40,28]. For example, furtherresearch in alternative losses for D3PM’s can take inspiration from the reweighted Lsimple objectiveused in [ 17], or the resampled variational bound in Nichol and Dhariwal [28]. Furthermore, D3PM’smight benefit from increasing the number of timesteps and a more optimized noise schedule, asdiscussed in Nichol and Dhariwal [28]. Another limitation comes from the choice of evaluationmetrics that we use (and that are standard for evaluation of generative models). Inception scoreand Frechet Inception Distance are based on neural networks that have been trained on a particulardistribution of data, which is not representative for all use-cases, and focusing on average qualitymetrics may not accurately reflect performance across the wide diversity of settings where thesegenerative models may be applied. This creates a risk of negative social impacts where advancesdisproportionately favor a subset of the population. Text generation models, including D3PMs,also present many challenges for responsible and reliable use. Prior works have highlighted thepotential for misuse [ 24,4], bias [ 46], and hallucination [ 48] in neural language models. D3PMs,like autoregressive language models, should be carefully evaluated along these axes before beingdeployed in a production setting. Going forward, we are excited about the space of possibilities thatarise within the D3PM framework. We have found successes in leveraging the flexibility that comesfrom defining discrete corruption processes for discrete data, but we believe that there are many morepossibilities that make use of richer forms of structure to define even more powerful discrete diffusionmodels.Acknowledgments and Disclosure of FundingWe would like to thank Hugo Larochelle for providing high-level feedback during the project, andBen Poole for reviewing a draft version of this manuscript. We would also like to thank Julia Kreutzerand Xavier Garcia for helpful conversations about language experiments, and Daniel Watson for earlydiscussions about discrete diffusion. We, the authors, declare to have no competing interests. Theresearch conducted for this paper was entirely supported by Google.10References[1]Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. Character-Levellanguage modeling with deeper Self-Attention. arXiv preprint arXiv:1808.04444 , August 2018.[2]Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. Generalized denoising Auto-Encoders as generative models. arXiv preprint arXiv:1305.6663 , May 2013.[3]Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelitynatural image synthesis. In International Conference on Learning Representations , 2019.[4]Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Kather-ine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and ColinRaffel. Extracting training data from large language models. December 2020.[5]William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi, and Navdeep Jaitly.Imputer: Sequence modelling via imputation and dynamic programming. In InternationalConference on Machine Learning , pages 1403–1413. PMLR, 2020.[6]Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, andTony Robinson. One billion word benchmark for measuring progress in statistical languagemodeling. 
arXiv preprint arXiv:1312.3005 , December 2013.[7]Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan.WaveGrad: Estimating gradients for waveform generation. arXiv preprint arXiv:2009.00713 ,September 2020.[8]Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences withsparse transformers. arXiv preprint arXiv:1904.10509 , 2019.[9]Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov.Transformer-XL: Attentive language models beyond a Fixed-Length context. arXiv preprintarXiv:1901.02860 , January 2019.[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training ofdeep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 ,October 2018.[11] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP.arXiv preprint arXiv:1605.08803 , 2016.[12] W Feller. On the theory of stochastic processes, with particular reference to applications. InProceedings of the [First] Berkeley Symposium on Mathematical Statistics and Probability . TheRegents of the University of California, 1949.[13] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. Mask-Predict: Paralleldecoding of conditional masked language models. arXiv preprint arXiv:1904.09324 , April2019.[14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, SherjilOzair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in NeuralInformation Processing Systems , pages 2672–2680, 2014.[15] Anirudh Goyal, Nan Rosemary Ke, Surya Ganguli, and Yoshua Bengio. Variational walkback:Learning a transition operator as a stochastic recurrent net. November 2017.[16] Jiatao Gu, Changhan Wang, and Jake Zhao. Levenshtein transformer. arXiv preprintarXiv:1905.11006 , May 2019.[17] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. InAdvances in Neural Information Processing Systems , pages 6840–6851, 2020.[18] Emiel Hoogeboom, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. Argmaxflows and multinomial diffusion: Towards non-autoregressive language models. arXiv preprintarXiv:2102.05379 , 2021.11[19] Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. Independent component analysis , volume 46.John Wiley & Sons, 2004.[20] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila.Training generative adversarial networks with limited data. arXiv preprint arXiv:2006.06676v1 ,2020.[21] Diederik P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolu-tions. In Advances in Neural Information Processing Systems , pages 10215–10224, 2018.[22] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprintarXiv:1312.6114 , 2013.[23] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatilediffusion model for audio synthesis. arXiv preprint arXiv:2009.09761 , 2020.[24] Sarah Kreps, R Miles McCain, and Miles Brundage. All the news that’s fit to fabricate: AI-Generated text as a tool of media misinformation. Journal of Experimental Political Science ,pages 1–14.[25] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images.2009.[26] Matt Mahoney. Text8 dataset. http://mattmahoney.net/dc/textdata , 2011. Accessed:2021-5-24.[27] Gautam Mittal, Jesse Engel, Curtis Hawthorne, and Ian Simon. Symbolic music generationwith diffusion models. 
arXiv preprint arXiv:2103.16091 , March 2021.[28] Alex Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. arXivpreprint arXiv:2102.09672 , 2021.[29] Chenhao Niu, Yang Song, Jiaming Song, Shengjia Zhao, Aditya Grover, and Stefano Ermon.Permutation invariant graph generation via score-based generative modeling. arXiv preprintarXiv:2003.00638 , March 2020.[30] George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and BalajiLakshminarayanan. Normalizing flows for probabilistic modeling and inference. arXiv preprintarXiv:1912.02762 , 2019.[31] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. InInternational Conference on Machine Learning , pages 1530–1538, 2015.[32] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagationand approximate inference in deep generative models. In International Conference on MachineLearning , pages 1278–1286, 2014.[33] Laura Ruis, Mitchell Stern, Julia Proskurnia, and William Chan. Insertion-deletion transformer.arXiv preprint arXiv:2001.05540 , 2020.[34] Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. Non-autoregressivemachine translation with latent alignments. In Proceedings of the 2020 Conference on EmpiricalMethods in Natural Language Processing (EMNLP) , pages 1098–1108, 2020.[35] Ari Seff, Wenda Zhou, Farhan Damani, Abigail Doyle, and Ryan P Adams. Discrete objectgeneration with reversible inductive construction. arXiv preprint arXiv:1907.08268 , July 2019.[36] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsuper-vised learning using nonequilibrium thermodynamics. In International Conference on MachineLearning , pages 2256–2265, 2015.[37] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. InInternational Conference on Learning Representations , 2021.12[38] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the datadistribution. In Advances in Neural Information Processing Systems , pages 11895–11907, 2019.[39] Yang Song and Stefano Ermon. Improved techniques for training score-based generative models.arXiv preprint arXiv:2006.09011 , 2020.[40] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, andBen Poole. Score-based generative modeling through stochastic differential equations. arXivpreprint arXiv:2011.13456 , November 2020.[41] Dustin Tran, Keyon Vafa, Kumar Agrawal, Laurent Dinh, and Ben Poole. Discrete flows:Invertible generative models of discrete data. In Advances in Neural Information ProcessingSystems , volume 32, 2019.[42] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, AlexGraves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. WaveNet: A generativemodel for raw audio. arXiv preprint arXiv:1609.03499 , 2016.[43] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neuralnetworks. International Conference on Machine Learning , 2016.[44] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Informa-tion Processing Systems , pages 5998–6008, 2017.[45] Pascal Vincent. A connection between score matching and denoising autoencoders. NeuralComputation , 23(7):1661–1674, 2011.[46] Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. Universal adversarialtriggers for attacking and analyzing NLP. 
August 2019. [47] Alex Wang and Kyunghyun Cho. BERT has a mouth, and it must speak: BERT as a Markov random field language model. arXiv preprint arXiv:1902.04094, February 2019. [48] Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Paco Guzman, Luke Zettlemoyer, and Marjan Ghazvininejad. Detecting hallucinated content in conditional neural sequence generation. November 2020. [49] Zachary M Ziegler and Alexander M Rush. Latent normalizing flows for discrete sequences. arXiv preprint arXiv:1901.10548, January 2019. |
6cdYMkxxNt | Understanding the Transferability of Representationsvia Task-RelatednessAkshay Mehra, Yunbei Zhang, and Jihun HammTulane University{amehra, yzhang111, jhamm3}@tulane.eduAbstractThe growing popularity of transfer learning, due to the availability of modelspre-trained on vast amounts of data, makes it imperative to understand when theknowledge of these pre-trained models can be transferred to obtain high-performingmodels on downstream target tasks. However, the exact conditions under whichtransfer learning succeeds in a cross-domain cross-task setting are still poorlyunderstood. To bridge this gap, we propose a novel analysis that analyzes thetransferability of the representations of pre-trained models to downstream tasks interms of their relatedness to a given reference task. Our analysis leads to an upperbound on transferability in terms of task-relatedness, quantified using the differencebetween the class priors, label sets, and features of the two tasks. Our experimentsusing state-of-the-art pre-trained models show the effectiveness of task-relatednessin explaining transferability on various vision and language tasks. The efficientcomputability of task-relatedness even without labels of the target task and its highcorrelation with the model’s accuracy after end-to-end fine-tuning on the targettask makes it a useful metric for transferability estimation. Our empirical results ofusing task-relatedness on the problem of selecting the best pre-trained model froma model zoo for a target task highlight its utility for practical problems.1 IntroductionTransfer learning (TL) [ 42,59] is a powerful tool for developing high-performance machine learningmodels, especially in current times when large models [ 45,11,12,18] pre-trained on huge amountsof data are being fine-tuned for various downstream tasks. While large pre-trained models achieveimpressive performance on downstream tasks even in the zero-shot inference setting [ 45], theirperformance can often be improved by fine-tuning them on data from target tasks. However, ourunderstanding of when representations from these models lead to classifiers that achieve highperformance (i.e., high transferability) to downstream tasks is still lacking.Analytical works based on domain adaptation [ 8,7,52,36,32,37,38] can only explain cross-domaintasks (i.e., when only features/label priors change across tasks) but in the TL setting, label sets canalso change (i.e., cross-task setting). Recently, [ 56] showed that the relatedness between the label setsof the two tasks measured using conditional entropy can explain the difference in their transferability.However, [ 56] focused only on the cross-task setting, and analysis for transferability in a generalcross-domain cross-task setting is not addressed. Apart from these analytical works, another line ofwork focuses on proposing transferability metrics that correlate well with performance on downstreamtasks after end-to-end fine-tuning. We refer to these works as score-based transferability estimation(SbTE) metrics [ 61,55,29,40,19,51]. These works focus on developing scores for selecting apre-trained model from a model zoo, that achieves the best transferability on a target task. 
Whilethese works address a practical problem, they do not focus on providing an analysis of transferability.38th Conference on Neural Information Processing Systems (NeurIPS 2024).AircraftsTexturesDigitsPetsChest X-raysReference Taske.g., ImageNetDownstream Task e.g., PetsTask-relatednessFixedPre-trained Encodere.g.CLIPTransferabilityHigh?TrainableFlowersSuite of Downstream TasksFigure 1: Given a pre-trained encoder (e.g.,CLIP [ 45]), how does the performance afterfine-tuning it on a reference task (e.g., Ima-geNet) relate to the performance after fine-tuning it on other tasks? Through a rigorousbound on transferability (Theorem 3) in termsof the relatedness between a reference and atarget task, we show that tasks related to thereference task achieve provably better perfor-mance after fine-tuning.Thus, we first rigorously analyze the transferabilityof the representations in producing high-performingclassifiers and propose a novel approach that studiestransferability in terms of its relatedness to a refer-ence task (see Fig. 1). This is in line with previ-ous analytical works [ 7,2,56,55] which studiedthe model’s performance on target tasks in terms ofthe source task in different settings such as domainadaptation/generalization and recently TL. However,there’s a crucial difference: we study transferabilityin terms of a reference task instead of the source tasksince it is impractical to assume the knowledge of thesource task used to train large models such as CLIP[45] or GPT, commonly used for TL.Our approach works by transforming the distribution(and classifier) of a reference task, by transformingits class-prior distribution, label set, and feature spaceto obtain a new distribution that is similar to that ofthe target task (Fig. 2). Based on these transforma-tions, we show that transferability can be provablyexplained (and is tightly upper bounded) using threeinterpretable terms. A weighted reference loss term appearing due to the class prior distributiondifference between the tasks, a label mismatch term appearing as conditional entropy between thelabel distributions of the tasks, and a distribution mismatch term appearing as the Wasserstein distancebetween the transformed reference and target distributions (Theorem 3). We define task-relatednessas the sum of these three terms (a smaller value implies higher relatedness). We propose an opti-mization problem (Eq. 4) and an algorithm (Alg. 1) to learn the transformations to compute it. Usingstate-of-the-art (SOTA) pre-trained models, with different architectures, trained with various trainingmethods on computer vision (CV) and natural language processing (NLP) tasks, we show that task-relatedness achieves a small gap to transferability (Sec. 4.1). Our analysis also leads to new insightsinto learning in the TL setting such as to improve the transferability of an encoder on a downstreamtask, one can improve the encoder’s transferability on related reference tasks (Sec. 4.2). This isparticularly useful when practitioners intend to develop encoders that achieve high transferability toproprietary (and potentially inaccessible) datasets.We also demonstrate the utility of task-relatedness in estimating the accuracy of the model afterend-to-end fine-tuning. While the TL setting assumes access to target labels, the high computationalcost of end-to-end fine-tuning of a pre-trained model on a target task calls for developing metrics thatare efficiently computable and highly correlated with end-to-end fine-tuning accuracy. 
To this end,we propose to use task-relatedness computed in the penultimate layer of the pre-trained model as ourtransferability estimation metric. To further improve the computational efficiency of task-relatednesswe only measure the difference between the class-wise means and covariances of the distributionsin lieu of the Wasserstein distance as required in Theorem 3. This enables the computation oftask-relatedness with only the statistics of the reference/target tasks. Our empirical results (Sec. 4.3)attest that task-relatedness achieves a high correlation with the model’s accuracy after end-to-endfine-tuning on the target task making it an effective metric for selecting a pre-trained model froma model zoo that achieves the best accuracy on the target task. Moreover, unlike previous SbTEmetrics, task-relatedness can be estimated even without labeled target data, making it suitable forunsupervised transferability estimation, highlighting the advantage of a reference task as used in ouranalysis. Our main contributions are:•We rigorously analyze transferability for classification tasks. Our analysis, to the best of ourknowledge, leads to the first upper bound on transferability in terms of task-relatedness in across-domain cross-task setting.•We propose an optimization problem to efficiently compute task-relatedness, using a smallamount of target labels and show that it can even predict performance after end-to-end fine-tuning without requiring target labels.2•Using SOTA models and CV/NLP tasks, we show that task-relatedness accurately predictstransferability and show that transferability to unseen tasks can be improved by improvingtransferability to known (related) tasks.2 Related WorkTransfer learning (TL): TL [ 42,59,18,49,46,15,14] has been studied widely and consistsof various settings including transductive transfer, inductive transfer, and task transfer learning.The transductive setting also referred to as domain adaptation [ 8,7] focuses on reducing the shiftbetween two domains. The task transfer setting focuses on identifying the relationship between tasks,regardless of the model, to explain the transfer performance (see Appendix B for more details). Lastly,the inductive transfer setting focuses on using an inductive bias such as fine-tuning a pre-trainedmodel (trained via adversarial training [ 48], self-supervised learning [ 11,10,12] or by combininglanguage and image information [ 45]) to improve the performance on a target task. Our work focuseson the inductive transfer learning setting and proposes an upper bound on transferability of therepresentations of pre-trained models to downstream tasks.Analytical works for learning under distribution shifts: Prior works [ 8,7,52,36,32,37,38]analytically explained learning under distribution shifts using distributional divergence betweenthe marginal distributions and a label mismatch term. However, these results are applicable underassumptions such as covariate or label shift which need not be satisfied in TL where both the datadistribution and the label spaces can be different (see App. B for detailed comparison). Recently, [ 56]proposed an upper bound on transferability in a restrictive setting of the same features for both tasks,however, our analysis does not require such an assumption. Other works [ 9,47,41] analyzed therepresentation for the multi-task learning setting. These works showed that when tasks are weaklyrelated, a single representation space (model) may not perform well for all tasks. 
However, the TLsetting differs from both of these and our work aims to analyze transferability in this setting.Score-based transferability estimation (SbTE): These works [ 5,40,29,61,55,39] use data fromthe target task and produce a score correlated with transferability. Such a score is useful for selectingthe model from a model zoo that leads to the best transferability to a target task. [ 56] proposed theNegative Conditional Entropy (NCE) score that predicts transferability using the negative conditionalentropy between labels of the tasks but requires the two tasks to have the same input instances. [ 6]estimates transferability by solving the HGR maximum correlation problem and using normalizedHscore, in the same setting as [ 56]. [40] proposed the LEEP score and computed NCE using soft(pseudo) labels for the target task from a pre-trained model. OT-CE [ 55] combined Wassersteindistance [ 3] and NCE whereas [ 5,61] estimate likelihood and the marginalized likelihood of labeledtarget examples to estimate transferability. [33] proposes a model-agnostic approach that also relieson optimal transport to compute the distance between the tasks similar to OTDD [ 3]. In contrast, wefocus on analyzing transferability in terms of task-relatedness theoretically along with demonstratingits effectiveness as a transferability estimation metric for the pre-trained model selection problem.3 Analysis of TL using task-relatednessProblem setting and notations: LetPR(x, y)andPT(x, y)denote the distributions of the referenceand the target tasks, defined on XR×YRandXT×YTrespectively. We assume that the feature spacesare common ( XR=XT=X) such as RGB images, but the reference label set YR={1,2,···, KR}and the target label set YT={1,2,···, KT}can be entirely different. We assume the number ofreference task classes ( KR) are greater than or equal to the number of target classes ( KT). Inthe TL setting, an encoder (feature extractor) g:X → Z is pre-trained on a dataset with orwithout labels depending on the training method (e.g., supervised vs. self-supervised). We denotethe resultant push-forward distributions of RandTon the encoder output space as PR(z, y)andPT(z, y). With a fixed encoder g, a classifier (linear or non-linear), h(z) :Z → ∆, that outputs aprobability vector is learned for the reference ( hR) and the target ( hT) separately, where ∆R/T isaKR/KTsimplex for R/T . The classifier hR= arg min h∈HE(z,y)∈PR[l(h(z;g), y)]andhT=arg min h∈HE(z,y)∈PT[l(h(z;g), y)]whereHis the set of classifiers and l(h(z), y) =−log(h(z)y)is the cross-entropy loss. Table 3 in App. A summarizes the notations used in our work. Next, wedefine transferability as commonly used in the literature.3BeeLionWolfCatDogCatDogPrior TransformCLabel TransformBFeature TransformAhR(z)hR!zPR(z,y)CatDogDistributionmismatch Reference R(e.g., Imagenet)Target T(e.g., CIFAR-10)PR!(z,y)PR!!(z,y)PR!!!(z,y)PT(z,y)hR!!zhR!!!zhTzR!R!!R!!!LionWolfFigure 2: : Overview of our task transformation model: A series of transformations are applied tothe reference distribution PR(z, y)and classifier hRto produce the transformed distribution PR′′′and classifier hR′′′to explain transferability to the downstream target task. 
Class-prior transformation(R→R′) changes the class prior of the reference distribution (e.g., an irrelevant Bee class in Rnowhas smaller prior) followed by label set transformation ( R′→R′′) (e.g., to match {Lion, Wolf }with{Cat, Dog }), followed by feature space transformation ( R′′→R′′′) to match the feature distributionof the target task PT(z, y).Definition 1. (Transferability). Transferability of the representations from an encoder gon a targettaskTfor classifiers in His defined as E(z,y)∈PT[l(hT(z;g), y)].In the next section, we show the analysis with Has the class of linear classifiers for ease of explanationand discuss its extension to non-linear classifiers in App. A.5. Proofs for Sec. 3 are in App. A.3.1 Our task transformation modelThe reference and the target tasks share the same encoder but do not share label sets or datadistributions. Therefore, to relate the two tasks, we propose a chain of three simple transformations:1) prior transformation (from RtoR′), 2) label transformation (from R′toR′′), and 3) featuretransformation (from R′′toR′′′). The R′, R′′, R′′′are intermediate domain names after each of thetransformations are applied. The corresponding classifier in each domain is denoted by hR′,hR′′, andhR′′′as illustrated in Fig. 2. The distribution after the transformations ( PR′′′) has the same featureZR′′′=ZT=Zand label sets YR′′′=YTas the target task T, and consequently, the loss of thetransformed classifier hR′′′and the target classifier hTcan be related.Class-prior transformation (R→R′):Since the reference task has more classes than the targettask ( KR≥KT), many of the reference task classes are likely irrelevant for transfer to the targetclasses, e.g., while transferring from ImageNet to CIFAR10, only a small portion of ImageNet classesare relevant to CIFAR10 classes. The prior transformation accounts for the relative importance of thereference classes. This is illustrated in Fig. 2, where changing the class prior of Rreduces the prior ofthe Bee class and increases the priors of Wolf and Lion classes (shown by the changed size of classesWolf and Lion in R′). While transforming the prior of R, we keep the conditional distribution and theclassifier the same i.e., PR′(z|y) =PR(z|y)andhR′(z) =hR(z). Lemma 1 in App. A.2.1 showsthat the expected loss of the classifier hRonR′is a re-weighted version of the loss of hRonR.Label transformation (R′→R′′):Next, we use a label transformation to match the label sets ofthe new domain R′′and that of the target domain. To this end, we specify the conditional distributionBij:=P(yR′′=i|yR′=j)(Bij∈[0,1],∀i, j,PiBij= 1,∀j). The label yR′′of anexample from the domain R′′is obtained via BP(yR′). This generative process doesn’t requirethe feature, i.e., PR′′(yR′′|yR′, z) =PR′′(yR′′|yR′).Bwith sparse entries (i.e., only one entry ofa column is 1) models a deterministic map from YRtoYT;Bwith dense entries models a weakerassociation. This process is illustrated in Fig. 2 which shows the map from {Bee, Wolf, Lion } ⊂ Y R′to{Dog, Cat } ⊂ Y Tafter using B. Under this model, a reasonable choice of classifier for R′′ishR′′(z) =BhR′(z). Lemma 2 in App. A.2.2 shows that the expected loss of hR′′depends on theloss of hR′and the conditional entropy between the label sets of the tasks R′andR′′and Corollary 1shows the conditions for optimality of hR′′.Feature transformation ( R′′→R′′′):The final step involves changing the feature space of thedistribution R′′. 
We apply an invertible linear transformation Ato the distribution in R′′to obtain thenew distribution R′′′. After the transformation, the classifier associated with the new domain R′′′ishR′′′(z) =hR′′(A−1(z)). This is illustrated in Fig. 2 after feature transform using A. Lemma 3 in4App. A.2.3 shows that a linear transform of the space and classifier does not incur any additional lossand Corollary 2 shows that the optimality of hR′′implies optimality of hR′′′. Using these, we getTheorem 1 by defining conditional entropy as followsH(YR′′|YR′) =−XyR′∈YR′XyR′′∈YR′′PR′(yR′)ByR′′,yR′log(ByR′′,yR′). (1)Theorem 1. LetC:=hPR′(y)PR(y)iKRy=1be a vector of probability ratios , Bbe aKT×KRmatrix withBij=P(yR′′=i|yR′=j),A:Z → Z be an invertible linear map of features. Let the classifiershR′(z) :=hR(z),hR′′(z) :=BhR′(z),hR′′′(z) :=hR′′(A−1(z)). Assuming lis the cross-entropyloss, we haveEPR′′′(z,y)[l(hR′′′(z), y)]≤EPR(z,y)[C(y)l(hR(z), y)]| {z }Re-weighted reference loss+H(YR′′|YR′)|{z}Label mismatch.Theorem 1 provides an upper bound on the loss of the final transformed classifier/distribution interms of the loss of the reference classifier/distribution. The re-weighted reference loss shows that theperformance of the transformed classifier on the new domain is linked to the label-wise re-weightedloss of the reference classifier on R. This implies that one can use only the relevant reference classesto contribute to the bound. The label mismatch term shows that the performance of the distributionR′′′andRdepends on the conditional entropy H(YR′′|YR′;B)between the label distributions ofthe domain R′′andR′. A high value of Himplies that the labels of the reference task are unrelatedleading to lower transferability, whereas a low Himplies higher transferability. Corollary 3 inApp. A.2.4 shows when the bound in Theorem 1 becomes equality.3.2 Distribution mismatch between PR′′′andPTAfter the three transformations, the transformed reference PR′′′(z, y)can be compared with the targetPT(z, y). However, these are only simple transformations and PR′′′cannot be made identical toPTin general. This mismatch can be measured by the Wasserstein or Optimal Transport distance[44, 58]. Since our goal is to match two joint distributions defined on Z × Y we used((z, y),(z′, y′)) :=∥z−z′∥2+∞ ·1y̸=y′, (2)withz, z′∈ Z andy, y′∈ Y as our base distance [53] to define the (type-1) Wasserstein distanceWd(P, Q) := infπ∈Π(P,Q)E((z,y),(z′,y′))∼π[d((z, y),(z′, y′))]. (3)Using Eq. 2, the Wasserstein distance between the joint distributions is the weighted sum of theWasserstein distance between conditional distributions ( P(z|y)) (Lemma 4 in App. A). Theorem 2below explains the gap between the losses due to the distribution mismatch.Assumption 1. 1) The composition of the loss function and the classifier l◦his aτ−Lipschitzfunction w.r.t to ∥ · ∥ 2norm, i.e., |l(h(z), y)−l(h(z′), y)| ≤τ∥z−z′∥2for all y∈ Y,z, z′∈ Zwhere h∈ H. 2)PT(y) =PR′′′(y).The assumption 2), can be satisfied since we have full control on the prior PR′′′(y)viaBandC.Theorem 2. Let the distributions TandR′′′be defined on the same domain Z ×Y and assumption 1holds, thenEPT(z,y)[l(h(z), y)]−EPR′′′(z,y)[l(h(z), y)]≤τ Wd(PR′′′, PT)|{z }Distribution mismatch,withdas in Eq. 2.Theorem 2 shows that when l◦hisτ−Lipschitz then the performance gap between the R′′′andTisbounded by the type-1 Wasserstein distance between the two distributions. The Lipschitz coefficientof the composition can be bounded by τ, by penalizing the gradient norm w.r.t zat training time. 
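As a concrete illustration of this penalization, the PyTorch-style sketch below adds a hinge penalty on the per-sample feature-gradient norm to the classifier's cross-entropy loss. It assumes the encoder is frozen and that `z` holds a batch of its features; the function and variable names are hypothetical, and the snippet is a minimal sketch rather than our exact training code.

```python
import torch
import torch.nn.functional as F

def penalized_loss(head, z, y, tau):
    """Cross-entropy plus the hinge penalty max{0, ||grad_z l(h(z), y)||_2 - tau}."""
    z = z.clone().requires_grad_(True)
    losses = F.cross_entropy(head(z), y, reduction="none")   # per-sample l(h(z_i), y_i)
    # Per-sample gradients w.r.t. the features z (each z_i only affects its own loss term).
    (grad_z,) = torch.autograd.grad(losses.sum(), z, create_graph=True)
    grad_norms = grad_z.flatten(1).norm(dim=1)
    penalty = torch.clamp(grad_norms - tau, min=0.0).mean()
    return losses.mean() + penalty

# Usage sketch: head = torch.nn.Linear(feat_dim, num_classes);
# minimize penalized_loss(head, z, y, tau) over the head's parameters.
```

Only samples whose feature-gradient norm exceeds $\tau$ contribute to the penalty, which is one simple way to keep $l \circ h$ approximately $\tau$-Lipschitz during training.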
Thus, for linear fine-tuning, we train the classifiers $h_R$ and $h_T$ with an additional gradient norm penalty $\max\{0, \|\nabla_z l(h(z), y)\|_2 - \tau\}$ to make them conform to the Lipschitz assumption (see App. C.3). Note that constraining the Lipschitz constant restricts the hypothesis class. The trade-off between the Lipschitz constant and the performance of $h$ is empirically evaluated in App. C.3.1.

[Figure 3: per-encoder bar plots (RN50, ADV(1) RN50, SimCLR RN50, SwAV RN50, DistilRoBERTa); x-axis: target tasks; bars: Target Loss, Reweighted Reference Loss, Label Mismatch, Distribution Mismatch.]
Figure 3: Task-relatedness (decomposed into its components) produces a small gap to transferability (blue bars). As the task-relatedness between the reference (ImageNet for CV, DBPedia for NLP) and the target tasks (x-axis) improves, the transferability improves. (Note: the label mismatch term is zero in our figures as $B$ is fixed to a sparse matrix, see Sec. 3.4.)

3.3 Bounding transferability using task-relatedness

Here, we combine the results obtained in Theorem 1 and Theorem 2. The final bound, proposed in Theorem 3, is one of our main contributions and explains transferability as a sum of three interpretable and measurable gaps.

Theorem 3. Let $l$ be the cross-entropy loss; then, under the assumptions of Theorems 1 and 2,
$$\mathbb{E}_{P_T(z,y)}[l(h_T(z), y)] \leq \underbrace{\mathbb{E}_{P_R(z,y)}[C(y)\, l(h_R(z), y)]}_{\text{Re-weighted reference loss}} + \underbrace{H(Y_{R''} \mid Y_{R'})}_{\text{Label mismatch}} + \underbrace{\tau\, W_d(P_{R'''}, P_T)}_{\text{Distribution mismatch}}.$$

The theorem shows that transferability can be decomposed into the loss incurred while transforming the class prior distribution, label space, and feature space of the reference distribution (first two terms) and the residual distance between the distribution obtained after the transformations and the actual target distribution (last term). Based on the terms in the upper bound, we define task-relatedness as follows.

Definition 2. (Task-relatedness). The relatedness between a target and a reference task is defined as $\mathbb{E}_{P_R(z,y)}[C(y)\, l(h_R(z), y)] + H(Y_{R''} \mid Y_{R'}) + \tau\, W_d(P_{R'''}, P_T)$.

A smaller value of the task-relatedness measure implies higher relatedness of the reference and the target tasks. In particular, when the target task is a transformation of the reference task, there exist transformations $A$, $B$, and $C$ such that the distribution $R'''$ perfectly matches the distribution of the target task (i.e., $W_d(P_{R'''}, P_T) = 0$). Moreover, when labels are deterministically related (Corollary 3), our bound becomes an equality.

Lastly, while we presented an analysis for linear fine-tuning here (for simplicity of presentation), our bounds hold for non-linear classifiers and non-linear feature transformations as well (see App. A.5).

3.4 Estimating task-relatedness

The optimization problem for learning the transformations $A$, $B$, and $C$ to compute task-relatedness in Theorem 3 is presented below. We use two new variables: the inverse of the transformation $A$, denoted by $\bar{A} := A^{-1}$, and a transformed reference prior distribution, denoted by $D(y) := C(y) P_R(y)$.
$$\min_{A, \bar{A}, B, D}\ \mathbb{E}_{P_R(z,y)}\Big[\tfrac{D(y)}{P_R(y)}\, l(h_R(z), y)\Big] + H(Y_{R''} \mid Y_{R'}; B, D) + \tau\, W_d(P_{R'''}, P_T; A, B)$$
$$\text{s.t.}\quad A\bar{A} = \bar{A}A = I,\quad P_T(y) = BD,\quad \textstyle\sum_i B_{ij} = 1\ \forall j,\quad \sum_{y \in \mathcal{Y}_R} D(y) = 1,\quad B_{ij} \in [0,1]\ \forall i,j,\quad D_i \in [0,1]\ \forall i. \qquad (4)$$

Alg. 1 shows how we solve Eq. 4 (see App. D for additional details of the algorithm). Fig. 8 in App. C.1.2 shows how the upper bound decreases as the optimization proceeds.
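For concreteness, the following is a condensed sketch of one outer iteration of Alg. 1 in the setting used in our main experiments, where only $A$ (and its inverse $\bar{A}$) is learned while $B$ is a fixed permutation and $D$ is the reference prior, so the re-weighted reference loss and label-mismatch terms are constant and omitted. The sketch assumes pre-extracted, labeled feature tensors and uses the exact OT solver of the POT library [23]; all function and variable names are illustrative rather than part of our released implementation.

```python
import numpy as np
import ot                      # POT: Python Optimal Transport
import torch

def alg1_iteration(zR, yR, zT, yT, A, Abar, opt, perm):
    """One outer iteration: recompute the coupling pi*, then take a gradient step on A, Abar."""
    with torch.no_grad():
        zR3 = zR @ A.T                        # features of R''' = A z_R
        yR3 = perm[yR]                        # reference labels mapped through the fixed B
        # Base distance of Eq. 2: Euclidean distance plus a prohibitive cross-class cost.
        M = torch.cdist(zR3, zT) + 1e6 * (yR3[:, None] != yT[None, :]).float()
        a = np.full(len(zR3), 1.0 / len(zR3))
        b = np.full(len(zT), 1.0 / len(zT))
        pi = torch.as_tensor(ot.emd(a, b, M.double().numpy()), dtype=zR.dtype)   # step 11
    opt.zero_grad()
    transport = (pi * torch.cdist(zR @ A.T, zT)).sum()        # distribution-mismatch term
    I = torch.eye(A.shape[0])
    inv_penalty = torch.norm(A @ Abar - I) + torch.norm(Abar @ A - I)
    (transport + inv_penalty).backward()                      # step 12, restricted to A, Abar
    opt.step()
    return transport.item()

# A = torch.eye(d, requires_grad=True); Abar = torch.eye(d, requires_grad=True)
# opt = torch.optim.Adam([A, Abar], lr=1e-3); call alg1_iteration(...) until convergence.
```

In the full Alg. 1, step 12 also updates $B$ and $D$ and adds the re-weighted reference loss, the conditional entropy term, and the $\|P_T(y) - BD\|_2^2$ penalty of Eq. 4; when target labels are unavailable, $y_T$ is replaced by the pseudo-labels of step 7.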
Computationally, a single epoch of Alg. 1 takes a mere 0.17 seconds on our hardware for transfer from ImageNet to Pets for the ResNet-18 model (we ran Alg. 1 for 2000 epochs). In App. C.1, we show the effectiveness of learning the transformations using Alg. 1 on small-scale transfer tasks.

Algorithm 1 Minimization of the bound in Theorem 3
Input: Reference task samples and labels $(Z_R, Y_R)$, target task samples $Z_T$, target task labels $Y_T$ (optional).
Output: Estimate of task-relatedness using the learned transformations $A, \bar{A}, B, D$.
Init: $A := \bar{A} := I$, $D := P_R(y)$, random $B \in \mathbb{R}^{K_T \times K_R}$.
1: Randomly sample $n_R$ points $(z_R^i, y_R^i) \sim (Z_R, Y_R)$ as per the class prior $D$.
2: if $Y_T$ is available then
3: Randomly sample $n_T$ points $(z_T^j, y_T^j) \sim (Z_T, Y_T)$.
4: else
5: Randomly sample $n_T$ points $z_T^j \sim Z_T$.
6: # Compute pseudo-labels for the target samples $z_T$.
7: $y_T^j = \arg\max_{y \in \mathcal{Y}_T} B h_R(A^{-1} z_T^j)$ for $j = 1, \dots, n_T$.
8: end if
9: Compute $(z_{R'''}^i, y_{R'''}^i) = (A z_R^i, \arg\max_y B e(y_R^i))$ for $i = 1, \dots, n_R$.
10: Assign $\mathcal{Y}_{R'} := \mathcal{Y}_R$ and $\mathcal{Y}_{R''} := \mathcal{Y}_T$.
11: Compute the optimal coupling $\pi^*$ between the distributions $R'''$ and $T$ by minimizing $W_d(P_{R'''}, P_T)$, i.e.,
$\min_{\pi \in \Pi(P_{R'''}, P_T)} \sum_{i,j} \pi_{ij}\, \tilde{d}\big((z_{R'''}^i, y_{R'''}^i), (z_T^j, y_T^j)\big)$ s.t. $\sum_j \pi_{ij} = \tfrac{1}{n_R}\ \forall i$ and $\sum_i \pi_{ij} = \tfrac{1}{n_T}\ \forall j$.
12: Using $\pi^*$, solve for $A, \bar{A}, B, D$ using mini-batch SGD:
$\min_{A, \bar{A}, B, D} \sum_{i,j} \pi^*_{ij}\, \tilde{d}\big((z_{R'''}^i, y_{R'''}^i), (z_T^j, y_T^j)\big) + \tfrac{1}{n_R} \sum_i \tfrac{D(y^i)}{P_R(y^i)}\, l(h_R(z_R^i), y^i) + H(Y_{R''} \mid Y_{R'}) + \|P_T(y) - BD\|_2^2 + \big(\|A\bar{A} - I\|_F + \|\bar{A}A - I\|_F\big)$.
13: Repeat steps 1-12 until convergence.

Our results show that when the reference task has classes semantically related to the target task, Alg. 1 learns transformations that achieve the smallest gap to transferability. However, since finding data semantically related to the target task may not always be possible, in our experiments we choose a reference task with the same number of classes as the target, fix the matrix $B$ to a random permutation of the identity (making the label mismatch term zero) and $D$ to the prior of the reference task, and learn only the transformation $A$.

4 Empirical Analysis

Here, we empirically demonstrate the effectiveness of task-relatedness in explaining transferability in various settings. We present additional results in App. C and dataset/experimental details in App. D. Our code can be found at https://github.com/akshaymehra24/TaskTransferAnalysis .

4.1 Task-relatedness achieves a small gap to actual transferability

Task-relatedness tightly upper bounds transferability across various architectures, pretraining methods, and datasets. We demonstrate this by using various pre-trained models with architectures such as Vision Transformers (ViT) [20], ResNet-18/50/101/152 [27], and DistilRoBERTa [34], trained with various pretraining methods including supervised training, adversarial training [48], SimCLR [11], MoCo [26], SwAV [10], and MAE [25]. We also consider a wide range of target datasets including CIFAR10/100, Aircraft, Pets, DTD, AG-News, Yelp-5, and SST-5, whose details are in App. D.

[Figure 4: (a) heatmaps of task-relatedness and transferability over reference tasks (rows) and target tasks (columns) from {MNIST, FMNIST, USPS}; (b) loss decomposition (Target Loss, Reweighted Reference Loss, Label Mismatch, Distribution Mismatch) for the PE and FFE encoders with reference tasks MNIST and SVHN and target tasks USPS and MNIST-M.]
Figure 4: (a) Task-relatedness and transferability are highly correlated across various reference-target pairs. (b) Improving the transferability of an encoder on a reference task (in the plot title) leads to improved transferability of all related target tasks (x-axis).
(e.g., compared to the original pre-trained CLIP encoder (PE), an end-to-end fine-tuned CLIP encoder (FFE) on the reference task achieves higher transferability to all related tasks.)

For this experiment, we fix the reference task to be ImageNet [17] for image classification and DBPedia for sentence classification tasks and use Alg. 1 to estimate task-relatedness. The results in Fig. 3 and Fig. 9 (in the Appendix) show that our bound achieves a small gap to actual transferability. As the task-relatedness between the reference and the target tasks improves, transferability also improves, showing that task-relatedness and transferability are strongly correlated. Task-relatedness is also strongly correlated with the accuracy of the end-to-end fine-tuned classifiers on the target task. In Fig. 11 (in the Appendix), we show high Pearson correlation coefficients ($\geq -0.57$) between task-relatedness and accuracy after fully fine-tuning various pre-trained encoders using data from various target tasks.

4.2 Effect of the reference task on task-relatedness

Highly related reference-target task pairs, based on task-relatedness, achieve higher transferability, coinciding with the semantic relatedness between tasks. To understand how a reference task affects task-relatedness and eventually transferability, we consider two experiments using convolutional and CLIP-trained models with various character recognition tasks such as MNIST, Fashion-MNIST (FMNIST), SVHN, MNIST-M, and USPS. Of these datasets, SVHN and MNIST-M contain colored images while the rest contain gray-scale images. In the first experiment, we train convolutional models on MNIST, FMNIST, and USPS and measure pairwise transferability. Here, we use the reference task to be the same task as that used for training the models. The results in Fig. 4(a) show that transferability is higher for those target tasks for which the task-relatedness value is smaller. Specifically, USPS achieves the best transferability (1.23) and the smallest task-relatedness (2.80) when the reference task is MNIST. This is attributed to both datasets containing gray-scale images of digits. On the other hand, when the reference task is unrelated to the target task, i.e., the task-relatedness value is high, transferability suffers, e.g., when the reference task is MNIST and the target task is FMNIST. Results in App. C.2.2 show similar results for the sentence classification task.

The gap between task-relatedness and transferability is smaller when a reference task performs well with a given encoder. Here, we use MNIST and SVHN as two reference tasks and compute the task-relatedness and transferability with USPS and MNIST-M as target tasks, using the CLIP (ViT B32) model. A linear classifier trained on top of the embeddings from the CLIP model achieves ≈98% accuracy on MNIST but only ≈61% accuracy for SVHN. Due to this, transferability (USPS: 2.02, MNIST-M: 2.20) explained using task-relatedness with MNIST as the reference task (USPS: 2.46, MNIST-M: 2.48) is better than that computed using SVHN (USPS: 2.66, MNIST-M: 2.65) as the reference, even though MNIST-M is intuitively more similar to SVHN (as both contain colored images of digits). This is evident from the results of PE (Pre-trained Encoder) in Fig. 4(b).

Improving the performance of an encoder on a reference task improves transferability to other related (potentially unseen) tasks. To show this, we fully fine-tune the CLIP encoder on the MNIST and SVHN tasks, increasing the accuracy of the classifiers for both MNIST and SVHN to 99% and 95%, respectively.
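For completeness, the two ingredients compared in Fig. 4(b) can be sketched as follows: end-to-end fine-tuning of the encoder on the reference task (the FFE setting) and the transferability of Definition 1, measured as the average cross-entropy of a target classifier on the frozen features. The sketch assumes standard PyTorch data loaders and a separately trained target head (e.g., with the gradient penalty of App. C.3); all names are placeholders and not part of our released code.

```python
import torch
import torch.nn.functional as F

def finetune_encoder(encoder, head, ref_loader, epochs=5, lr=1e-4):
    """End-to-end fine-tuning on the reference task (the FFE setting)."""
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=lr)
    encoder.train()
    for _ in range(epochs):
        for x, y in ref_loader:
            opt.zero_grad()
            F.cross_entropy(head(encoder(x)), y).backward()
            opt.step()
    return encoder

@torch.no_grad()
def transferability(encoder, target_head, target_loader):
    """Definition 1: expected cross-entropy of the target classifier on frozen features."""
    encoder.eval()
    total, n = 0.0, 0
    for x, y in target_loader:
        total += F.cross_entropy(target_head(encoder(x)), y, reduction="sum").item()
        n += len(y)
    return total / n
```

The PE setting corresponds to skipping `finetune_encoder` and probing the pre-trained encoder directly.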
Using the representations from these new encoders, we find that the transferability of both related target tasks improves along with task-relatedness (see FFE results in Fig. 4(b)). Here, we see that task-relatedness for MNIST-M and USPS is the best when the reference task is SVHN and MNIST, respectively, aligning with our intuition of semantic relatedness between these tasks. This also suggests that transferability on other related tasks can be improved by fully fine-tuning the encoder on these reference tasks. Thus, in scenarios where target tasks are private (such as proprietary chest X-rays), an encoder trained to work well on related tasks (such as publicly available chest X-rays) is bound to achieve good transferability.

Table 1: Task-relatedness achieves a high (negative) Pearson correlation to the accuracy after end-to-end fine-tuning for various tasks. For NCE [56], Leep [40], LogMe [61], SFDA [51], OT-NCE, OTCE [55], and H-score [6] a positive correlation is better, whereas for PACTran [19] and task-relatedness (ours) a negative correlation is better.

Target task | LogMe | Leep | NCE   | PACTran | SFDA | H-Score | OT-NCE | OTCE | Ours
Pets        | 0.82  | 0.80 | 0.73  | -0.82   | 0.57 | 0.77    | 0.88   | 0.86 | -0.77
DTD         | 0.88  | 0.96 | -0.19 | -0.85   | 0.90 | 0.89    | 0.84   | 0.82 | -0.97
Aircraft    | -0.60 | 0.92 | 0.97  | 0.11    | 0.72 | -0.80   | 0.56   | 0.60 | -0.72
Average     | 0.37  | 0.90 | 0.50  | -0.52   | 0.73 | 0.29    | 0.76   | 0.76 | -0.82

4.3 Task-relatedness for end-to-end transferability estimation

In this section, we show an efficient way of computing task-relatedness which enables its use for estimating transferability after end-to-end fine-tuning. While Alg. 1 accurately estimates task-relatedness by minimizing the bound in Theorem 3, it can be inefficient due to the requirement of computing and minimizing the Wasserstein distance between distributions at every epoch. Thus, to make the computation efficient, we replace the Wasserstein distance computation in steps 11 and 12 of Alg. 1 with mean and covariance matching terms. Specifically, we define the distance between two distributions $R'''$ and $T$ as
$$\Gamma(R''', T) := \|\mu_{R'''} - \mu_T\|_2^2 + \lambda\, \|\Sigma_{R'''} - \Sigma_T\|_2^2, \qquad (5)$$
where $\mu_{R'''/T} := \frac{1}{n_{R'''/T}} \sum_{z \in P_{R'''/T}} z$, $\Sigma_{R'''/T} := \frac{1}{n_{R'''/T}} \sum_{z \in P_{R'''/T}} (z - \mu_{R'''/T})^T (z - \mu_{R'''/T})$, and $\lambda$ is a regularization coefficient. Using $\Gamma(R''', T)$ in place of $W_d(R''', T)$ makes the computation of task-relatedness by learning the transformations $A$, $B$, and $C$ significantly more efficient.

Task-relatedness is an effective metric for the pre-trained model selection problem. The goal of this problem is to find a pre-trained model from a model zoo that achieves the best accuracy on a given target task after end-to-end fine-tuning of the model using labeled target data. Since end-to-end fine-tuning is costly (it takes almost a day to fully fine-tune a single model on a single target task, as shown by [61]), an effective transferability metric must be significantly more efficient to compute and correlate well with the accuracy after end-to-end fine-tuning. Using 5 different pre-trained models (supervised ResNet-50/101/152, adversarially pre-trained [48] ResNet-50 with $\varepsilon \in \{0.1, 1\}$) and ImageNet as the reference task, we show in Table 1 that task-relatedness achieves a high correlation with the accuracy after end-to-end fine-tuning on the target task. Our results also highlight the instability of various popular SbTE metrics, such as LogMe [61] and NCE [56], which can produce a high negative correlation, and those of PACTran [19], which achieve low correlation values on complex datasets. In comparison, task-relatedness consistently achieves a good correlation for various target tasks.
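A minimal sketch of this moment-matching distance is given below; `zR3` and `zT` stand for the transformed reference and target features, `lam` plays the role of $\lambda$, and the names are illustrative only.

```python
import torch

def gamma_distance(zR3, zT, lam=1.0):
    """Eq. 5: squared distance between the means and covariances of R''' and T features."""
    mu_r, mu_t = zR3.mean(dim=0), zT.mean(dim=0)
    cov_r = (zR3 - mu_r).T @ (zR3 - mu_r) / zR3.shape[0]   # Sigma_{R'''}
    cov_t = (zT - mu_t).T @ (zT - mu_t) / zT.shape[0]      # Sigma_T
    # Squared norm of the mean gap plus the squared (Frobenius) norm of the covariance gap.
    return (mu_r - mu_t).pow(2).sum() + lam * (cov_r - cov_t).pow(2).sum()
```

Because $\Gamma$ depends only on the means and covariances of the two feature sets, it is differentiable with respect to $A$ (through the transformed features) and avoids solving an optimal transport problem in every epoch when it replaces the transport term of step 12.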
Computationally, it takes a mere 3-4 minutes to learn the transformations to computetask-relatedness, providing a significant computation advantage over end-to-end fine-tuning. We alsoshow that task-relatedness remains highly correlated with end-to-endfine-tuningaccuracy even withalimitedamount oflabeled data from thetargettask as shown in Fig. 5 unlike other SbTE metrics.Next, we show that task-relatedness caneven beestimated withoutusinglabelsfrom thetargettask.Table 2: Correlation of task-relatedness andend-to-end fine-tuning accuracy computed us-ing true and pseudo labels of the target task.TargetTruelabelsPseudolabelsPets -0.77 -0.76DTD -0.97 -0.91Aircraft -0.72 -0.16For scenarios, where labeled data from the tar-get task is unavailable, estimating transferabilityis challenging. This is because both fine-tuningand most SbTE methods require labels to com-pute the transferability scores. Here we show thattask-relatedness can still be an effective measure toestimate transferability in this challenging setting.Since we use a transformative model and have ac-cess to a reference task/classifier, we can use thepredictions from the reference task’s classifier trans-9Figure 5: Task-relatedness (Ours) remains highly correlated with accuracy after end-to-end fine-tuningon a target task even when using a small percentage of target data unlike other SbTE methods (LogME,Leep, NCE, PACTran, OT-NCE, OTCE, and H-Score) whose correlation is affected significantly.For LogMe, Leep, NCE, OT-NCE, OTCE, and H-score positive correlation is better whereas forPACTran and task-relatedness (ours) negative correlation is better.formed via B(to obtain labels ∈ YT) and estimate the pseudo -labels of the target data. Concretely,pseudo-label for a target sample xTis obtained as ypseudoT = arg max y∈YTBhR(A−1(zT)). Resultsin Table 2 show that our task-relatedness estimated via pseudo-labeled target data still achieves a highcorrelation to transferability on most datasets. For datasets such as Pets and DTD, where transformingthe reference task classifier produces high accuracy on the target task, the difference between thepseudo and true labels is small. Consequently, the difference in the correlations with pseudo andtrue labels is also small. Thus, when the reference and target tasks are related, transferability can beestimated accurately without requiring labels of the target task, showing that task-relatedness is aneffective metric even for unsupervised transferability estimation.5 ConclusionWe analyzed TL in terms of the relatedness between the target and a reference task. Our analysis worksby transforming the distribution of a reference task to match that of the target. Using this we provedan upper bound on transferability, defined as task-relatedness, consisting of three interpretable terms,namely, the re-weighted reference task loss, label mismatch, and distribution mismatch. We proposedan algorithm to compute task-relatedness and demonstrated its effectiveness at accurately predictingtransferability (even without target labels) with SOTA models. Moreover, the high correlation oftask-relatedness with accuracy after end-to-end fine-tuning and its efficient computability, makes itan effective metric for transferability estimation.Limitations. We studied transferability using the cross-entropy loss and used Wasserstein distance-based distribution shift analysis due to their popularity. 
However, due to accuracy being the primarymetric of interest in classification tasks and the difficulty of computing the Wasserstein distance withlimited samples in a high dimensional representation space, extending the analysis to 0-1 loss andother divergence measures are important directions which are not addressed here and are left forfuture works.6 AcknowledgmentWe thank the anonymous reviewers of this work for their insightful comments and suggestions. Thiswork was supported by the NSF EPSCoR-Louisiana Materials Design Alliance (LAMDA) program#OIA-1946231.10References[1]Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji,Charless C Fowlkes, Stefano Soatto, and Pietro Perona. Task2vec: Task embedding for meta-learning. In Proceedings of the IEEE/CVF international conference on computer vision , pages6430–6439, 2019.[2]Isabela Albuquerque, João Monteiro, Mohammad Darvishi, Tiago H Falk, and IoannisMitliagkas. Generalizing to unseen domains via distribution matching. arXiv preprintarXiv:1911.00804 , 2019.[3]David Alvarez-Melis and Nicolo Fusi. Geometric dataset distances via optimal transport. arXivpreprint arXiv:2002.02923 , 2020.[4]Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarialnetworks. In International conference on machine learning , pages 214–223. PMLR, 2017.[5]Yajie Bao, Yang Li, Shao-Lun Huang, Lin Zhang, Lizhong Zheng, Amir Zamir, and LeonidasGuibas. An information-theoretic approach to transferability in task transfer learning. In 2019IEEE International Conference on Image Processing (ICIP) , pages 2309–2313, 2019.[6]Yajie Bao, Yang Li, Shao-Lun Huang, Lin Zhang, Lizhong Zheng, Amir Zamir, and LeonidasGuibas. An information-theoretic approach to transferability in task transfer learning. In 2019IEEE international conference on image processing (ICIP) , pages 2309–2313. IEEE, 2019.[7]Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jen-nifer Wortman Vaughan. A theory of learning from different domains. Machine learning ,79(1):151–175, 2010.[8]Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira, et al. Analysis of represen-tations for domain adaptation. Advances in neural information processing systems , 19:137,2007.[9]Shai Ben-David and Reba Schuller. Exploiting task relatedness for multiple task learning. InLearning theory and kernel machines , pages 567–580. Springer, 2003.[10] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin.Unsupervised learning of visual features by contrasting cluster assignments. Advances in neuralinformation processing systems , 33:9912–9924, 2020.[11] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple frameworkfor contrastive learning of visual representations. In International conference on machinelearning , pages 1597–1607. PMLR, 2020.[12] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervisedvision transformers, 2021.[13] Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi.Describing textures in the wild. In Proceedings of the IEEE conference on computer vision andpattern recognition , pages 3606–3613, 2014.[14] Quan Cui, Bingchen Zhao, Zhao-Min Chen, Borui Zhao, Renjie Song, Boyan Zhou, JiajunLiang, and Osamu Yoshie. Discriminability-transferability trade-off: An information-theoreticperspective. In European Conference on Computer Vision , pages 20–37. 
Springer, 2022.[15] Jifeng Dai, Yi Li, Kaiming He, and Jian Sun. R-fcn: Object detection via region-based fullyconvolutional networks. Advances in neural information processing systems , 29, 2016.[16] Bharath Bhushan Damodaran, Benjamin Kellenberger, Rémi Flamary, Devis Tuia, and NicolasCourty. Deepjdot: Deep joint distribution optimal transport for unsupervised domain adaptation.InProceedings of the European Conference on Computer Vision (ECCV) , pages 447–463, 2018.[17] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and patternrecognition , pages 248–255. Ieee, 2009.11[18] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training ofdeep bidirectional transformers for language understanding, 2019.[19] Nan Ding, Xi Chen, Tomer Levinboim, Soravit Changpinyo, and Radu Soricut. Pactran: Pac-bayesian metrics for estimating the transferability of pretrained models to classification tasks.InEuropean Conference on Computer Vision , pages 252–268. Springer, 2022.[20] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai,Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly,Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for imagerecognition at scale, 2021.[21] Kshitij Dwivedi, Jiahui Huang, Radoslaw Martin Cichy, and Gemma Roig. Duality diagramsimilarity: a generic framework for initialization selection in task transfer learning. In EuropeanConference on Computer Vision , pages 497–513. Springer, 2020.[22] Kshitij Dwivedi and Gemma Roig. Representation similarity analysis for efficient task taxonomy& transfer learning. In Proceedings of the IEEE/CVF Conference on Computer Vision andPattern Recognition , pages 12387–12396, 2019.[23] Remi Flamary, Nicolas Courty, Alexandre Gramfort, Mokhtar Z Alaya, Aureie Boisbunon,Stanislas Chambon, Laetitia Chapel, Adrien Corenflos, Kilian Fatras, Nemo Fournier, et al. Pot:Python optimal transport. The Journal of Machine Learning Research , 22(1):3571–3578, 2021.[24] Jiechao Guan and Zhiwu Lu. Task relatedness-based generalization bounds for meta learning.InInternational Conference on Learning Representations , 2022.[25] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Maskedautoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition , pages 16000–16009, 2022.[26] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast forunsupervised visual representation learning. In Proceedings of the IEEE/CVF conference oncomputer vision and pattern recognition , pages 9729–9738, 2020.[27] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for imagerecognition, 2015.[28] Minyang Hu, Hong Chang, Zong Guo, Bingpeng Ma, Shiguang Shan, and Xilin Chen. Under-standing few-shot learning: Measuring task relatedness and adaptation difficulty via attributes.Advances in Neural Information Processing Systems , 36, 2024.[29] Long-Kai Huang, Ying Wei, Yu Rong, Qiang Yang, and Junzhou Huang. Frustratingly easytransferability estimation, 2022.[30] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images.., 2009.[31] Aounon Kumar, Alexander Levine, Tom Goldstein, and Soheil Feizi. Certifying model accuracyunder distribution shifts. 
arXiv preprint arXiv:2201.12440 , 2022.[32] Trung Le, Tuan Nguyen, Nhat Ho, Hung Bui, and Dinh Phung. Lamda: Label matchingdeep domain adaptation. In International Conference on Machine Learning , pages 6043–6054.PMLR, 2021.[33] Xinran Liu, Yikun Bai, Yuzhe Lu, Andrea Soltoggio, and Soheil Kolouri. Wasserstein taskembedding for measuring task similarities. arXiv preprint arXiv:2208.11726 , 2022.[34] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, MikeLewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretrainingapproach. arXiv preprint arXiv:1907.11692 , 2019.[35] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151 , 2013.12[36] Yishay Mansour, Mehryar Mohri, and Afshin Rostamizadeh. Domain adaptation: Learningbounds and algorithms. arXiv preprint arXiv:0902.3430 , 2009.[37] Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, and Jihun Hamm. Understanding the limitsof unsupervised domain adaptation via data poisoning. In Thirty-Fifth Conference on NeuralInformation Processing Systems , 2021.[38] Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, and Jihun Hamm. Do domain generalizationmethods generalize well? In NeurIPS ML Safety Workshop , 2022.[39] Akshay Mehra, Yunbei Zhang, and Jihun Hamm. Test-time assessment of a model’s performanceon unseen domains via optimal transport. In Proceedings of the IEEE/CVF Conference onComputer Vision and Pattern Recognition (CVPR) Workshops , pages 173–182, June 2024.[40] Cuong V . Nguyen, Tal Hassner, Matthias Seeger, and Cedric Archambeau. Leep: A newmeasure to evaluate transferability of learned representations, 2020.[41] Vishakh Padmakumar, Leonard Lausen, Miguel Ballesteros, Sheng Zha, He He, and GeorgeKarypis. Exploring the role of task transferability in large-scale multi-task learning. arXivpreprint arXiv:2204.11117 , 2022.[42] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions onKnowledge and Data Engineering , 22(10):1345–1359, 2010.[43] Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012IEEE conference on computer vision and pattern recognition , pages 3498–3505. IEEE, 2012.[44] Gabriel Peyré, Marco Cuturi, et al. Computational optimal transport: With applications to datascience. Foundations and Trends® in Machine Learning , 11(5-6):355–607, 2019.[45] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal,Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visualmodels from natural language supervision. In International conference on machine learning ,pages 8748–8763. PMLR, 2021.[46] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-timeobject detection with region proposal networks. Advances in neural information processingsystems , 28, 2015.[47] Sebastian Ruder. An overview of multi-task learning in deep neural networks. arXiv preprintarXiv:1706.05098 , 2017.[48] Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry. Do ad-versarially robust imagenet models transfer better? Advances in Neural Information ProcessingSystems , 33:3533–3545, 2020.[49] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 
Distilbert, a distilled versionof bert: smaller, faster, cheaper and lighter, 2020.[50] Vikash Sehwag, Saeed Mahloujifar, Tinashe Handina, Sihui Dai, Chong Xiang, Mung Chiang,and Prateek Mittal. Robust learning meets generative models: Can proxy distributions improveadversarial robustness? arXiv preprint arXiv:2104.09425 , 2021.[51] Wenqi Shao, Xun Zhao, Yixiao Ge, Zhaoyang Zhang, Lei Yang, Xiaogang Wang, Ying Shan,and Ping Luo. Not all models are equal: predicting model transferability in a self-challengingfisher space. In European Conference on Computer Vision , pages 286–302. Springer, 2022.[52] Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. Wasserstein distance guided representationlearning for domain adaptation. In Thirty-Second AAAI Conference on Artificial Intelligence ,2018.[53] Aman Sinha, Hongseok Namkoong, Riccardo V olpi, and John Duchi. Certifying some dis-tributional robustness with principled adversarial training. arXiv preprint arXiv:1710.10571 ,2017.13[54] Jie Song, Yixin Chen, Jingwen Ye, Xinchao Wang, Chengchao Shen, Feng Mao, and MingliSong. Depara: Deep attribution graph for deep knowledge transferability. In Proceedings of theIEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 3922–3930, 2020.[55] Yang Tan, Yang Li, and Shao-Lun Huang. Otce: A transferability metric for cross-domaincross-task representations, 2021.[56] Anh T. Tran, Cuong V . Nguyen, and Tal Hassner. Transferability and hardness of supervisedclassification tasks, 2019.[57] Nilesh Tripuraneni, Michael Jordan, and Chi Jin. On the theory of transfer learning: Theimportance of task diversity. Advances in neural information processing systems , 33:7852–7862,2020.[58] Cédric Villani. Optimal transport: old and new , volume 338. Springer, 2009.[59] Karl Weiss, Taghi M Khoshgoftaar, and Dingding Wang. A survey of transfer learning. Journalof Big Data , 3(1):9, May 2016.[60] Han Xiao, Kashif Rasul, and Roland V ollgraf. Fashion-mnist: a novel image dataset forbenchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747 , 2017.[61] Kaichao You, Yong Liu, Jianmin Wang, and Mingsheng Long. Logme: Practical assessment ofpre-trained models for transfer learning, 2021.[62] Amir R Zamir, Alexander Sax, William Shen, Leonidas J Guibas, Jitendra Malik, and SilvioSavarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEEconference on computer vision and pattern recognition , pages 3712–3722.[63] Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for textclassification. Advances in neural information processing systems , 28, 2015.14AppendixWe present the missing proofs of the theoretical results from Sec. 3 along with justifications for theclassifiers ( hR′, hR′′, hR′′′) as Corollaries in Appendix A followed by related work on learning in thepresence of distribution shift with the same feature and label space in Appendix B. This is followedby additional experimental results including NLP classification tasks with large pre-trained models inAppendix C. We conclude in Appendix D with details of the experiments and datasets used.A Proofs for Sec. 
3A.1 NotationTable 3: Table of notationsData relatedR/T Reference/Target task.XR/T Images of reference/target tasks.YR/T Label set of reference/target tasks.KR/T Number of classes in reference/target tasks.PR/T(x, y)Data distribution of reference/target tasks.(ZR/T, YR/T)Samples (features and labels) of the reference/target tasks.Model relatedgEncoder pre-trained on a pre-training dataset.ZR/T Representations extracted for reference/target tasks using g.hR/T Classifier learned for reference/target tasks on the representations of g.lCross-entropy loss.τLipschitz constant.Task-transformation relatedA Parameters for feature transformation.B Parameters for label transformation.C Parameters for class-prior transformation.PR′(z, y)andhR′Distribution and classifier of R′after applying transformation ConR.PR′′(z, y)andhR′′Distribution and classifier of R′′after applying transformation BonR′.PR′′′(z, y)andhR′′′ Distribution and classifier of R′′′after applying transformation AonR′′.H Conditional entropy as defined in Eq. 1.dBase distance between two samples as defined in Eq. 2.WdWasserstein distance between two distributions defined in Eq. 3.ΓDistance between two distributions based on their mean/variance definedin Eq. 5.A.2 Our task transformation model (Sec. 3.1)A.2.1 Class-Prior transformation (R→R′)Lemma 1. LetC:=hPR′(y)PR(y)iKRy=1be a vector of probability ratios and the classifier hR′(z) :=hR(z), thenEPR′(z,y)[l(hR′(z), y)] =EPR(z,y)[C(y)l(hR(z), y)],for any loss function l.15Proof.EPR′(z,y)[l(hR′(z), y)] =EPR′(z,y)[l(hR(z), y)] =Xy∈YRPR′(y)EPR′(z|y)[l(hR(z), y)]=Xy∈YRPR(y)PR(y)PR′(y)EPR′(z|y)[l(hR(z), y)] =Xy∈YRPR(y)EPR′(z|y)[C(y)l(hR(z), y)]=Xy∈YRPR(y)EPR(z|y)[C(y)l(hR(z), y)] (since PR(z|y) =PR′(z|y) by construction)=EPR(z,y)[C(y)l(hR(z), y)].A.2.2 Label transformation (R′→R′′)Lemma 2. LetBbe a KT×KRmatrix with Bij=P(yR′′=i|yR′=j)andhR′′(z) := BhR′(z)andlbe the cross-entropy loss. Then, EPR′′(z,y)[l(hR′′(z), y)]≤EPR′(z,y)[l(hR′(z), y)] + H(YR′′|YR′), where H(YR′′|YR′)is the conditional entropy(−PyR′∈YR′PyR′′∈YR′′PR′(yR′)ByR′′,yR′log(ByR′′,yR′)).Proof. Note that P(z) :=PR′(z) =PR′′(z)by construction.EPR′′(z,y)[l(hR′′(z), y)] =EP(z,y′′)[l(hR′′(z), y′′)]=EP(z)EP(y′′|z)[l(hR′′(z), y′′)] =EP(z)Xy′′Xy′P(y′′, y′|z)(l(hR′′(z), y′′)) (since y′∈ YR′)=EP(z)EP(y′′,y′|z)[l(hR′′(z), y′′)]=EP(z)EP(y′|z)EP(y′′|y′)[l(hR′′(z), y′′)] (since P(y′′|y′, z) =P(y′′|y′))=EP(z)EP(y′|z)[Xy′′∈YR′′l(hR′′(z), y′′)By′′,y′] (since By′′,y′=P(y′′|y′))=EP(z,y′)[Xy′′∈YR′′l(BhR′(z), y′′)By′′,y′].Since the loss lis the cross-entropy loss, we havel(BhR′(z), y′′) = −log(Xj∈YR′By′′,jhjR′(z))≤ −log(By′′,y′hy′R′(z)) =−log(By′′,y′)−log(hy′R′(z)).16Therefore, we haveEPR′′(z,y)[l(hR′′(z), y)]=EP(z,y′)[Xy′′∈YR′′l(BhR′(z), y′′)By′′,y′]≤ −EP(z,y′)[Xy′′∈YR′′By′′,y′log(By′′,y′) + log( hy′R′(z))]=−EP(z,y′)[Xy′′∈YR′′By′′,y′log(By′′,y′)]−EP(z,y′)[Xy′′∈YR′′By′′,y′log(hy′R′(z))]=−EP(z,y′)[Xy′′∈YR′′By′′,y′log(By′′,y′)] +EP(z,y′)[−log(hy′R′(z))Xy′′∈YR′′By′′,y′]=−EP(z,y′)[Xy′′∈YR′′By′′,y′log(By′′,y′)] +EP(z,y′)[−log(hy′R′(z))]=EP(z,y′)[−Xy′′∈YR′′By′′,y′log(By′′,y′)] +EP(z,y′)[l(hR′(z), y′)]=EP(y′)EPR′(z|y′)[−Xy′′∈YR′′By′′,y′log(By′′,y′)] +EP(z,y′)[l(hR′(z), y′)]= [−Xy′∈YR′Xy′′∈YR′′P(y′)By′′,y′log(By′′,y′)] +EP(z,y′)[l(hR′(z), y′)]=H(YR′′|YR′) +EPR′(z,y)[l(hR′(z), y)].Corollary 1 below, shows the conditions under which the optimal softmax classifier for the domainR′remains optimal for the domain R′′, justifying our choice of classifier change from R′toR′′.Corollary 1. 
Letebe one-hot encoding of the labels, |YR′′|=|YR′|,Bbe a KT×KRpermutation matrix and hR′be the optimal softmax classifier for R′andyR′′:=σ(yR′) :=arg max y∈YR′′(Be(yR′))ythen under the assumptions of Lemma 2, hR′′(z) := BhR′(z)is theoptimal softmax classifier for R′′.Proof. Since yR′′:=σ(yR′) := arg max y∈YT(Be(yR′))ywe haveEPR′′[l(hR′′(z), yR′′)] = EP(z,y′′)[l(BhR′(z), y′′)] =Xy′′∈YR′′P(y′′)EP(z|y′′)[l(BhR′(z), y′′)]=Xy′∈YR′P(σ(y′))EP(z|σ(y′))[l(BhR′(z), σ(y′))]=Xy′∈YR′P(y′)EP(z|y′)[l(hR′(z), y′)] =EPR′[l(hR′(z), yR′)].The second last equality follows due to the symmetry of cross-entropy loss, i.e., l(h, y) =−loghy=−logBhσ(y)=l(Bh, σ (y)).Since minhR′′EPR′′[l(hR′′(z), yR′′)] = min hR′EPR′[l(hR′(z), yR′)]andhR′is optimal for R′wehavehR′′(z) :=BhR′(z)is the optimal softmax classifier for R′′.A.2.3 Feature transformation (R′′→R′′′)Lemma 3. LetA:Z → Z be an invertible linear map of features and the classifier hR′′′(z) :=hR′′(A−1(z)). Then EPR′′′(z,y)[l(hR′′′(z), y)] =EPR′′(z,y)[l(hR′′(z), y)]for any loss l.Proof. EPR′′′(z,y)[l(hR′′′(z), y)] = EPR′′′(z,y)[l(hR′′(A−1(z)), y)] = EPR′′(z,y)[l(hR′′(z), y)].17Our Corollary 2 below shows that the optimal softmax classifier for domain R′′remains optimal fordomain R′′′too.Corollary 2. LethR′′be the optimal softmax classifier in domain R′′then under the assumptions ofLemma 3, hR′′′(z) =hR′′(A−1(z))is the optimal softmax classifier in domain R′′′.Proof. When hR′′′(z) = hR′′(A−1(z)),minhR′′EPR′′(z,y)[l(hR′′(z), y)] =minhR′′′EPR′′′(z,y)[l(hR′′′(z), y)]by Lemma 3, hence if hR′′is optimal for R′′then so ishR′′′for the domain R′′′.A.2.4 Three transformations combined (R→R′′′)Theorem 1. LetC:=hPR′(y)PR(y)iKRy=1be a vector of probability ratios , Bbe aKT×KRmatrix withBij=P(yR′′=i|yR′=j),A:Z → Z be an invertible linear map of features. Let the classifiershR′(z) :=hR(z),hR′′(z) :=BhR′(z),hR′′′(z) :=hR′′(A−1(z)). Assuming lis the cross-entropyloss, we haveEPR′′′(z,y)[l(hR′′′(z), y)]≤EPR(z,y)[C(y)l(hR(z), y)]| {z }Re-weighted reference task loss+H(YR′′|YR′)|{z}Label mismatch.Proof.EPR′′′(z,y)[l(hR′′′(z), y)] = EPR′′(z,y)[l(hR′′(z), y)] (Lemma 3)≤EPR′(z,y)[l(hR′(z), y)] +H(YR′′|YR′) (Lemma 2)=EPR(z,y)[C(y)l(hR(z), y)] +H(YR′′|YR′) (Lemma 1) .Corollary 3. Letebe one-hot encoding of the labels, |YR′′′|=|YR|,B: ∆R′→∆R′′be apermutation matrix and yR′′:=σ(yR′) := arg max y∈YR′′(Be(yR′))ythen under the assumptionsof Lemmas 1, 2, and 3 we have EPR′′′(z,y)[l(hR′′′(z), y)] =EPR(z,y)[C(y)l(hR(z), y)].Proof. Since yR′′:=σ(yR′) := arg max y∈YT(Be(yR′))ywe haveEPR′′[l(hR′′(z), yR′′)] = EP(z,y′′)[l(BhR′(z), y′′)] =Xy′′∈YR′′P(y′′)EP(z|y′′)[l(BhR′(z), y′′)]=Xy′∈YR′P(σ(y′))EP(z|σ(y′))[l(BhR′(z), σ(y′))]=Xy′∈YR′P(y′)EP(z|y′)[l(hR′(z), y′)] =EPR′(z,y)[l(hR′(z), y)].The second last equality follows due to the symmetry of cross-entropy loss, i.e., l(h, y) =−loghy=−logBhσ(y)=l(Bh, σ (y)).Therefore, we haveEPR′′′(z,y)[l(hR′′′(z), y)] = EPR′′(z,y)[l(hR′′(z), y)] (Lemma 3)=EPR′(z,y)[l(hR′(z), y)] (from above)=EPR(z,y)[C(y)l(hR(z), y)] (Lemma 1) .A.3 Distribution mismatch between R′′′andT(Sec. 3.2)Lemma 4. LetUandQbe two distributions on Z × Y with the same prior PU(y=i) =PQ(y=i) =P(y=i). With the base distance ddefined as in Eq. 2, we have Wd(PU, PQ) =PyP(y)W∥·∥2(PU(z|y), PQ(z|y)).18Proof. Letω∗ydenote the optimal coupling for the conditional distributions (PU(z|y), PQ(z|y))fory∈ Y andπ∗denote the the optimal coupling for the joint distributions (PU(z, y), PQ(z, y)).Then, under the definition of our base distance d,π∗((z, y),(z′, y′)) = 0 when y̸=y′i.e. 
nomass from the distribution Ubelonging to class ycan be moved to the classes y′̸=yof thedistribution Qwhen the class priors of UandQare the same. Moreover, sincePij(ω∗y)ij= 1andP{i,j:yi=y′j=k}π∗ij=P(y=k)fork∈ Y we have π∗((z, y),(z′, y′)) = ω∗y(z, z′)P(y)1y=y′forevery y, y′∈ Y.Then, we can show that the total Wasserstein distance between the joint distributions can be expressedas the sum of conditional Wasserstein distances, as followsWd(PU(z, y), PQ(z, y))=Xy,y′Zπ∗((z, y),(z′, y′))d((z, y),(z′, y′))dzdz′=Xy,y′Zπ∗((z, y),(z′, y′))(∥z−z′∥2+∞ ·1y̸=y′)dzdz′=Xy,y′Zω∗y(z, z′)P(y)1y=y′(∥z−z′∥2+∞ ·1y̸=y′)dzdz′=Xy,y′Zω∗y(z, z′)P(y)1y=y′∥z−z′∥2dzdz′(since 1 y=y′·1y̸=y′= 0)=Xy,y′P(y)1y=y′Zω∗y(z, z′)∥z−z′∥2dzdz′=XyP(y)Zω∗y(z, z′)∥z−z′∥2dzdz′=XyP(y)W∥·∥2(PU(z|y), PQ(z|y)).Theorem 2. Let the distributions TandR′′′be defined on the same domain Z × Y and assumption 1holds, then EPT(z,y)[l(h(z), y)]−EPR′′′(z,y)[l(h(z), y)]≤τ Wd(PR′′′, PT)|{z }Distribution mismatch,withdas in Eq. 2.Proof.EPT(z,y)[l(h(z), y)]−EPR′′′(z,y)[l(h(z), y)]=EPT(y)EPT(z|y)[l(h(z), y)]−EPR′′′(y)EPR′′′(z|y)[l(h(z), y)]=EPT(y)[EPT(z|y)[l(h(z), y)]−EPR′′′(z|y)[l(h(z), y)]] (since PT(y) =PR′′′(y))≤EPT(y)[ supl′◦h′∈τ−LipschitzEPT(z|y)[l′(h′(z), y)]−EPR′′′(z|y)[l′(h′(z), y)]]=EPT(y)[τ W∥·∥2(PT(z|y), PR′′′(z|y))] (Kantorovich −Rubinstein duality)=τ Wd(PR′′′, PT) (Lemma 4) .A.4 Final bound (Sec. 3.3)Theorem 3. Letlbe the cross entropy loss then under the assumptions of Theorems 1 and 2 we have,EPT(z,y)[l(hT(z), y)]≤EPR(z,y)[C(y)l(hR(z), y)]| {z }Re-weighted reference task loss+H(YR′′|YR′)|{z}Label mismatch+τ Wd(PR′′′, PT)|{z }Distribution mismatch.19Proof. Letl◦hT,l◦hR, and l◦hR′′′beτ−Lipschitz (from Assumption 1).EPT(z,y)[l(hT(z), y)]≤EPT(z,y)[l(hR′′′(z), y)] (Optimality difference)≤EPR′′′(z,y)[l(hR′′′(z), y)] +τ Wd(PR′′′, PT) (Theorem 2)≤EPR(z,y)[C(y)l(hR(z), y)] +H(YR′′|YR′) +τ Wd(PR′′′, PT) (Theorem 1) .In our experiments, we enforce the τ−Lipschitz constraint for l◦hRandl◦hTand verify that theLipschitz constant of l◦hR′′′remains close to τwithin tolerance.A.5 Extension to non-linear classifiers and non-linear transformationsTo extend our analysis to non-linear classifiers/transformations, we allow A:Z → Z to be a non-linear map and the classifiers h∈ H non_linear to be also non-linear (such as multi-layer perceptron).In addition to the Assumption 1 (1), which requires l◦hR′′′to be τ−Lipschitz, we also require thathR′′′belongs to the same class Hnon_linear ashTandhRfor any A. This holds for example when his linear and Ais also linear or when his a multilayer perceptron and Ais linear. With this additionalassumption, all of our proofs work for any linear/non-linear transformation of the feature space,without any change. Thus, Theorem 1, Theorem 2 and Theorem 3 hold for non-linear classifiers aswell. With these extensions, our bounds can be used to explain transferability even with non-linearclassifiers.B Additional related workDistributional divergence-based analyses of learning with distribution shifts (under same featureand label sets): Here we review some of the previous works that analyzed the problem of learningunder distribution shifts in terms of distributional divergences such as the Wasserstein distance. 
Theseanalyses apply when the feature and label spaces remain the same between the original and the shifteddistribution.Early works [ 8,52,36] showed that the performance on a shifted distribution (target domain) canbe estimated in terms of the performance of the source domain and the distance between the twodomains’ marginal distributions and labeling functions. Specifically, [8] showed that thatET(h, fT)≤ ES(h, fS) +d1(PS, PT) + min {EPS[|fS(z)−fT(z)|],EDT[|fS(z)−fT(z)|]},where d1denotes the total variation distance, f:Z → [0,1]denotes the labeling function, h:Z → { 0,1}denotes the hypothesis and EP(h, f) :=Ez∼P[|h(z)−f(z)|]denote the risk of thehypothesis h. A follow up work [ 52], showed a similar result using type-1 Wasserstein distance forallK−Lipschitz continuous hypotheses i.e.,ET(h, fT)≤ ES(h, fS) + 2K·W1(PS, PT) +λ,where λis the combined error of the ideal hypothesis h∗that minimizes the combined errorES(h, fS) +ET(h, fT). Another recent work [ 32] used a target transformation-based approachand Wasserstein distance to quantify learning in the presence of data and label shifts. Other works[31,50] also presented an analysis based on Wasserstein distance to understand how the accuracyof smoothed classifies and robustness change in the presence of distribution shifts. Compared tothese works the bound proposed in Theorem 2 considers cross-entropy loss (which is a popularchoice of the loss function in the classification setting) and uses a joint feature and labels Wassersteindistance rather than only marginal Wasserstein distance. These differences make the bound proposedin Theorem 2 useful in the analysis of transfer learning than those proposed in previous works whenwe have access to labeled target domain data.Comparison with [ 56]:The closest work to ours is that of [ 56], which showed that transferability toa target task can be related to transferability to another task (source task in their work) and the labelmismatch between the two tasks. However, the bound is proposed in a restrictive setting when boththe tasks have the same features but different labels (i.e. same images labeled differently between thetwo tasks). In this setting, [ 56], showed that transferability is upper bounded by loss of the sourceclassifier on the source task and the conditional entropy (CE) of the label sets of the two tasks. We20significantly extend this analysis to general tasks which is the most commonly used setting in practice(e.g., our analysis allows us to study transfer learning from ImageNet with 1000 classes to CIFAR-100with 100 unrelated classes, where both tasks have different images). In this setting, our main result inTheorem 3, shows that transferability involves additional terms such as the distribution mismatchterm (in the form of Wasserstein distance), the prior mismatch term (in the form of re-weightedsource loss) and the conditional entropy between the label sets. Moreover, the bound proposed by[56] is a special case of our bound with Cbeing the vector of all ones (no prior change) and AbeingIdentity (data distributions of two tasks are the same).Comparison with other transferability estimation and model selection methods: The problemof transferability estimation has gained a lot of attention recently, especially due to the availability ofa larger number of pre-trained models. Earlier works used models end-to-end fine-tuned on targettasks to evaluate transferability via task-relatedness estimated using the actual target loss [ 62] and theFisher information matrix [ 1]. 
However, the requirement of models trained on target tasks limits theirpractical utility. Other works [ 22,21,54] used the representation of the data from the tasks fromthese end-to-end fine-tuned models and developed similarity scores that achieved high correlationwith transferability. However, these approaches are not practical since they require models end-to-endfine-tuned on the target task. Our work does not depend on models trained on target tasks as shownin our analysis (Theorem 3) making it both theoretically and practically sound.Another line of work [ 5,40,29,61,55,51,19] focuses on the problem of pre-trained model selectionwhere the goal is to find the best pre-trained classifier from a model zoo that will achieve the highesttransferability to a particular downstream task. Thus, the main challenge of this problem is to be ableto estimate transferability in a way that is more efficient than fine-tuning the pre-trained models onthe downstream tasks. To this end, recent works have proposed several scores that are correlated withthe accuracy of the models after fine-tuning them on the target task, which we refer to as score-basedtransferability estimation (SbTE) methods, in our work.However, unlike our work, the goal of SbTE works is not to propose a universal bound or identifyterms that provably govern transferability. Moreover, while the scores proposed in SbTE workscorrelate well with transferability, they are only meaningful in a relative sense. Concretely, a scoreof 1 (e.g., for LogMe [46]) on a CIFAR-100 task for a particular model does not indicate whethertransferability is good or bad and requires comparison with scores of other pre-trained models onthe same target task. On the other hand, our upper bound directly approximates transferability, e.g.,an upper bound of 1 on the CIFAR-100 task for a model implies that transfer learning will incur anaverage cross-entropy loss of less than 1 implying a highly accurate transfer. Our results in Figs. 3,and 9 attest that our upper bound is indeed a good estimate of the transferability.Another disadvantage of the scores proposed in these works is that they cannot be compared acrosstarget tasks, unlike our upper bound. As observed from Fig. 4 of LogMe [46], scores for CIFAR-10are lower than scores for CIFAR-100 on the same pre-trained model, but, the transferability toCIFAR-10 is better than that to CIFAR-100. On the other hand, from our Fig. 9, the upper bounds onCIFAR-10 are lower than those of CIFAR-100 implying better transferability of classifiers pre-trainedon ImageNet to CIFAR-10. Thus, our work is more suitable for estimating the absolute performanceon various target tasks given a particular pre-trained classifier.Thus, while the goal of our work differs fundamentally from SbTE approaches the problem addressedin this work is of significant practical importance. In Sec. 4.3, we show that task-relatedness estimatedvia Alg. 1 achieves competitive performance compared to popular SbTE methods on this problem.Comparison with task transfer learning approaches: This approach focuses on explaining thetransfer performance based on the relationship between tasks. For instance, works such as [ 28,57,24]study the problem of few-shot learning (FSL) where a model is trained on data from related trainingtasks and is adapted to an unseen task using only a few samples. 
Different from these works, wefocus on the setting where a pre-trained encoder trained on some pre-training dataset is adapted withenough samples/gradient steps to a downstream task. This downstream target task may or may nothave any relation to the pre-training task unlike [28, 57, 24].Concretely, [ 28] proposed a model-agnostic metric called Task Attribute Distance to gauge thesuccess of transfer. Our work, on the other hand, defines task-relatedness based on the similarity ofthe representations of reference and target tasks in the representations of the pre-trained model (and ismodel dependent) rather than relying on the attribute information, which may not be available in TL21setting. [ 57] analyzes the sample complexity for learning a model shared across tasks and adapting itto a new target task and shows task diversity to be a crucial component for the success of transfer inthe FSL setting. Our work on the other hand does not assume access to shared tasks or restrict thenumber of samples required for fine-tuning on the target task. Moreover, their notion of task diversityrequires access to a set of training tasks that may not be available in the TL setting, making our notionof task-relatedness more practical for TL. Lastly, [ 24] proposes a notion of task-relatedness for theFSL setting, allowing to utilize all the data from available training tasks to help learn a model ona new task with a few gradient steps. This notion is model-agnostic and defined over the samplespace ( X × Y ) unlike our measure which is defined in the representation space of the model whosetransferability needs to be evaluated.Thus while task-relatedness is at the core TL, the notions proposed by [ 28,57,24] are suitable fortask transfer setting where as our notion is more suitable for the inductive TL setting.C Additional experimentsC.1 Small-scale experiment to evaluate Alg. 11)Ouralgorithm produces trans formations thatachieve abettervalue forthebound compared tonaive trans formations. 2)Havingthesame numberofclasses inthereference taskasinthetargettaskleads tothebestvalue ofthebound. 3)Itisonly marginally bettertohave semantically relatedclasses inthereference task.We evaluate Alg. 1 for computing task-relatedness and predicting the transferability of a ResNet-18model to CIFAR-10 in two settings. In the first setting, we consider a reference task that comprisesdata from 10 classes chosen at random and 10 classes that are semantically related to the labelsof CIFAR10 from ImageNet [ 17] (see App. D) whereas in the setting we select reference taskwith 20 classes. We compare the cross-entropy loss on CIFAR10 obtained after linear fine-tuning(transferability) with our bound, by using various transformations including those learned via Alg. 1.Using 10 reference classesUsing 20 reference classesFigure 6: Transformations optimizedusing Eq. 4 produce a better task-relatedness than obtained by naively cho-sen transformations, primarily due to de-creased distribution mismatch. The pres-ence of the same number of classes in thereference as those in the target achievesthe smallest task-relatedness value. Se-mantically related classes in the refer-ence task help but only marginally.We test 3 different cases. 
The first is FixedAll: whereall transformations are fixed ( Ais fixed to the Identitymatrix, Bis fixed to a random permutation matrix in thesetting with 10 classes, and to a matrix such that tworeference classes match a single target class in the set-ting with 20 classes, Dis fixed to the reference prior).The second is LearnedA: where we use Alg. 1 to learnonlyAwhile B, D are fixed as in FixedAll, and lastlyLearnedAll: where all the transformations are learned viaAlg. 1. Our results in Fig. 6, show that in both settingsusing FixedAll, our bound is marginally better when thereference task has classes semantically related to the targettask. In all cases, the bound becomes significantly betterin the LearnedA setting due to learning the transformationAthat transforms the reference distribution to better matchthe target and decrease the distribution mismatch term ofthe bound. Lastly, by learning all the transformations, inthe LearnedAll setting, we achieve the best value for thebound, regardless of whether there are randomly chosenor semantically related classes in the reference task. More-over, we see that the value of the bound estimated whenthe reference task has 10 source classes is better in allcases. This is due to smaller re-weighted reference losssince learning a 20-way classifier is more challenging thanlearning a 10-way classifier, especially with the Lipschitzconstraint. In the setting with 20 classes, Alg. 1, prefersto retain data from 10 of the 20 reference task classes, Fig. 7 (right), reducing the re-weightedreference loss, makes Bsparse, reducing the label mismatch term, and aligns the reference and targetdistributions, via A, reducing the distribution mismatch term.22Overall, our results show that 1) learning Dprefers to retain the data from the same number ofreference classes as those present in the target, 2) the Bmatrix eventually becomes sparse, making thelabel mismatch term zero, and 3) there is only a small difference in task-relatedness between LearnedAand LearnedAll settings. Based on these insights, we use Alg. 1 in the LearnedA setting, with thereference task containing the same number of randomly sampled classes from ImageNet as the samenumber of classes in the target task (since it may not always possible to find semantically similarclasses for all target tasks) and fix Bto be a random permutation matrix, for all other experiments inthe paper. We select classes for the reference task from ImageNet due to its diversity and the fact thatit has a large number of classes. However, any dataset can be used to define the reference task (e.g.,we used Digits datasets for the reference task in Sec. 4.2).Next, we evaluate the effect of knowing the exact matching between the labels of the reference andthe target tasks in comparison to a random permutation. We use MNIST as the reference task andUSPS as the target task (and vice-versa). We compare our results in a setting where only Ais learnedandBis set to an identity matrix and when Bis set to a random permutation matrix. Note that theidentity matrix corresponds to the correct mapping between the classes of MNIST and USPS tasks(both contain digits from 0 to 9).We find that the upper bound obtained when Bis fixed to identity is only marginally better than thecase when Bis a random permutation. Specifically, the difference between the bound when Bisfixed to a random permutation and when Bis an identity matrix is 0.10 for the MNIST →USPS taskand 0.17 for the USPS →MNIST task. 
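To make the role of B in this comparison concrete, a small illustrative sketch (ours, not the authors' code, and using a hypothetical orientation convention in which B maps the reference class prior D to a target class prior) is given below; the discussion of why the identity mapping helps only marginally continues after it.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes = 10                                   # MNIST and USPS both have 10 digit classes

D = np.full(num_classes, 1.0 / num_classes)        # reference class prior (uniform here)

B_identity = np.eye(num_classes)                   # correct digit-to-digit label mapping
B_permutation = np.eye(num_classes)[rng.permutation(num_classes)]  # random label permutation

# Both choices satisfy the class-prior matching constraint P_T(y) = B D whenever the
# target prior is also uniform, so they differ only through how well the (permuted)
# reference classes align with the target classes in the representation space.
print(B_identity @ D)       # -> [0.1, 0.1, ..., 0.1]
print(B_permutation @ D)    # -> [0.1, 0.1, ..., 0.1]
```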
The primary reason for the decrease in the upper bound comesfrom the reduced distribution mismatch term. While the upper bound improves slightly when theideal matching between the labels is known, such a mapping may not be known when the labels ofthe tasks are not related such as for FMNIST and MNIST. Thus, fixing Bto a random permutationmatrix yields a reliable estimate of transferability in most cases.C.1.1 Visualization of the transformed data via t-SNE for various settings in Sec. C.1Here we use the setting considered in App. C.1 where we consider 20 randomly selected classesfrom ImageNet as the reference task and consider the transfer to CIFAR-10. We plot the results ofusing different transformations using t-SNE to show how various transformations affect the upperbound in Theorem 3. Our results in Fig. 7(left) show that when no transformations are learned(FixedAll), the 20 random reference task classes do not overlap with the 10 target classes leadingto an increased Wasserstein distance which in turn leads to a larger upper bound. By learning thetransformation A(LearnedA), Fig. 7(center) shows a significantly better alignment between theclasses of the reference and target which leads to a decreased Wasserstein distance and hence a tighterupper bound. Moreover, by learning all the transformations (LearnedAll), Fig. 7(right) shows thatnot only do the distributions align well but also the prior of the reference is changed to only keep 10reference classes to match the prior of the target distribution providing a further improvement in theupper bound. This clearly shows the effectiveness of our proposed optimization algorithm in learningvarious transformations to minimize the upper bound.C.1.2 Effectiveness of minimizing the upper bound in Theorem 3 via solving Eq. 4In Fig. 8, we show how the upper bound changes as the optimization progresses to transform thereference task (ImageNet) into four target tasks with the ResNet-18 model. Similar to experimentsin Sec. 4.1 of the paper we optimize over the transformation Awhile BandDare fixed to arandom permutation matrix and the reference prior. After about 600 epochs the optimization problemconverges to a local minima.C.2 Additional results for the impact of the reference task on task-relatedness (Sec. 4.2)C.2.1 Additional results for image classificationHere we provide details of the experiment presented in Sec. 4.2 about the effect of the reference taskon task-relatedness and transferability. Similar to the results presented in Fig. 4 of the main paper, theresults in Fig. 10 show that when the reference and the target tasks are related then task-relatedness isgood as in the case when the reference task is MNIST and target is USPS or vice versa achievinga smaller gap to transferability. When the target tasks are unrelated to the reference task data thenboth the transferability and task-relatedness are low. E.g., when the reference task is MNIST and the23TargetTransformed ReferenceTargetTransformed ReferenceTargetTransformed ReferenceFigure 7: (Best viewed in color.) t-SNE visualizations of the effect of various transformations onthe bound in Theorem 3 when data from 20 randomly selected classes from ImageNet are used totransfer to CIFAR-10. When all transformations are fixed (FixedAll, left) the distance between thedistribution R′′′(transformed reference) and Tis high explaining the large upper bound. Learningjust the transformation Ausing the algorithm proposed in Sec. 
3.4 significantly reduces the distance between R′′′ and T, leading to a tighter upper bound (center). Learning all the transformations further improves the matching (right). In particular, learning B and D changes the class priors of the reference so that the same number of classes from the reference are used for matching as are present in the target. This is evident from the right plot, where only 10 unique reference-task clusters are visible, compared to 20 in the center plot with fixed D. Moreover, the zoomed-in portion shows that in the center figure two classes from the reference task (green 2 and 3) match with class 1 (blue) of the target, whereas a single class from the reference task (green 18) matches class 1 (blue) of the target in the right figure.
Figure 8: Reduction of the proposed upper bound as the transformations are learned by solving the optimization problem in Eq. 3. After 600 epochs the upper bound stabilizes, showing the convergence of the optimization problem. Each subplot shows the effect of learning the transformation parameters for the transfer learning task with ImageNet as the reference task and ResNet-18 (trained in a supervised way) for a different target task. The solid line is the average over 5 random restarts and the shaded portion shows their standard deviation.
target is FMNIST or vice-versa. Task-relatedness is also correlated with transferability, i.e., a model trained on MNIST achieves better transferability to USPS than to FMNIST.
C.2.2 Results for an NLP sentence classification task
In this section, we use a sentence classification NLP task to further demonstrate the effect of the reference task on task-relatedness. For this experiment, we first fine-tune the entire DistilRoBERTa [34] model, distilled on OpenAI's WebText dataset, using a subsample of 10,000 points from the DBPedia dataset. We then use these fine-tuned models to evaluate the transferability to the AG News, SST-5, and Yelp datasets. The results in Fig. 12 show that transferability (the cross-entropy loss after fine-tuning) on AG News is the smallest among the three datasets. This coincides with the task-relatedness value obtained after learning the transformations, which explains why transfer from DBPedia to AG News is more successful than to the other target tasks. This observation is reasonable, especially considering that both DBPedia and AG News contain structured information. Moreover, since DBPedia is related to Wikipedia, the terms and entities appearing in AG News are more related to those appearing in DBPedia than the terms/entities appearing in SST-5 and Yelp, which consist of movie reviews and reviews collected from Yelp.
For our experiments in this section, we follow a similar setting of fixing B to be a random permutation matrix and C to the prior of the reference task, and only learning the transformation A. We sample 10,000 points from DBPedia belonging to the same number of classes as those present in the target task (e.g., for AG News we sample data from 4 randomly selected classes of DBPedia) and use this data as the reference task data to train hR with a gradient norm penalty (τ = 0.02). All experiments are run with 3 random seeds and the average results are reported in Fig. 12.
Figure 9 (panel (a), image classification tasks, with pre-trained models RN18, RN50, RN101, RN152, CLIP RN50, ADV(0.1) RN18, ADV(0.1) RN50, ADV(1) RN18, ADV(1) RN50, CLIP ViT-B32, SIMCLR RN50, MOCO RN50, SWAV RN50, MAE ViT-B16, and CLIP ViT-B16 and target tasks CIFAR10, CIFAR100-S, Pets, DTD, CIFAR100-M, CIFAR100, and Aircraft; panel (b), sentence classification tasks, with DistilBERT and DistilRoBERTa and target tasks AG_News, YELP-5, and SST-5; bars show Target Loss, Reweighted Reference Loss, Label Mismatch, and Distribution Mismatch): Full results for the comparison of transferability vs. task-relatedness for large pre-trained models on image and sentence classification tasks. Task-relatedness consistently achieves a small gap to transferability. The y-axis denotes the cross-entropy loss, the plot title denotes the pre-trained model, and the x-axis denotes the target tasks.
Figure 10 (panels titled by the reference task — MNIST, FMNIST, USPS, and DBPedia — with the target tasks on the x-axis and bars showing Target Loss, Reweighted Reference Loss, Label Mismatch, and Distribution Mismatch): Decomposition of task-relatedness into its three components illustrates that our task-relatedness (specifically the distribution mismatch term) explains the difference in transferability. Similar tasks such as USPS and MNIST have the highest transferability and also the highest task-relatedness.
Figure 11 (panels: ResNet-50, ρ = −0.63; ResNet-101, ρ = −0.65; ResNet-152, ρ = −0.59; Adv ResNet-50 with ε = 0.1, ρ = −0.62; Adv ResNet-50 with ε = 1, ρ = −0.57; x-axis: accuracy (%); y-axis: task-relatedness): Task-relatedness is highly (negatively) correlated (Pearson correlation coefficient in the subplot title) with the accuracy of the models after end-to-end fine-tuning. Each subplot shows transfer learning with various target tasks for a specific model architecture and training method.
Figure 12 (left, task-relatedness of DBPedia to AG News/SST-5/Yelp-5: 1.43/1.72/1.65; right, transferability: 1.18/1.56/1.55): Task-relatedness (left) and transferability (right) are highly correlated across various reference–target pairs. A target task related to the reference task (DBPedia), such as AG News, achieves better transferability (with the DistilRoBERTa model) and task-relatedness compared to less-related tasks such as SST-5 and Yelp-5.
C.3 Lipschitz-constrained linear fine-tuning
C.3.1 Implementing softmax classification with a τ-Lipschitz loss
To use the bound in Theorem 3, the loss is required to be τ-Lipschitz continuous w.r.t. z on the input domain Z. To enforce this, while learning the weights of the softmax classifier (linear fine-tuning) for the reference task or the target, we add the gradient norm penalty used in previous works [52, 4] and solve the following optimization problem
min_h (1/N) Σ_i [ l(h(z_i), y_i) + ρ · max_y max{0, ‖∇_z l(h(z_i), y)‖_2 − τ}² ],  with ρ ≈ 10^4,
where l(h(z), y) = −w_y^T z + log Σ_j exp(w_j^T z) is the cross-entropy loss.
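As a concrete illustration of this penalized objective, the sketch below shows one way it could be implemented for a linear softmax head in PyTorch. This is our own illustrative code, not the authors' released implementation: the function and variable names are hypothetical, and it exploits the closed-form gradient of the softmax cross-entropy with respect to the features, ∇_z l(h(z), y) = W^T softmax(Wz) − w_y, which is available precisely because h is linear.

```python
import torch
import torch.nn.functional as F

def lipschitz_penalized_loss(W, z, y, tau=0.02, rho=1e4):
    """Cross-entropy of the linear softmax classifier h(z) = W z, plus a squared-hinge
    penalty on max_y ||grad_z l(h(z_i), y)||_2 that pushes the loss towards being
    tau-Lipschitz in z.  W: (K, d) weight matrix, z: (N, d) features, y: (N,) labels."""
    logits = z @ W.T                              # (N, K)
    ce = F.cross_entropy(logits, y)               # (1/N) sum_i l(h(z_i), y_i)

    p = logits.softmax(dim=1)                     # (N, K) predicted class probabilities
    base = p @ W                                  # (N, d): W^T softmax(W z_i)
    grads = base.unsqueeze(1) - W.unsqueeze(0)    # (N, K, d): grad_z l(h(z_i), y) for every y
    worst = grads.norm(dim=2).max(dim=1).values   # (N,): max over candidate labels y
    penalty = F.relu(worst - tau).pow(2).mean()   # squared hinge, averaged over the batch
    return ce + rho * penalty
```

In practice this loss would simply replace the plain cross-entropy when optimizing W with a standard optimizer; the defaults τ = 0.02 and ρ ≈ 10^4 mirror the values reported above.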
C.3.2 Trade-off between empirical and predicted transferability
Constraining the Lipschitz coefficient of the classifier increases both the target and the reference task cross-entropy loss, since the hypothesis set is being restricted: the smaller τ is, the larger the loss becomes. On the other hand, a smaller τ also makes the distribution mismatch term in Theorem 3 smaller. Since the bound is the sum of the reference task loss and the distribution mismatch (and label mismatch), there is a trade-off determined by the value of τ. We illustrate the effect of the value of τ on the empirical and predicted transferability. As mentioned previously, we train both the classifier for the reference task, hR, and for the target, hT, with an additional penalty on the gradient norm to make them τ-Lipschitz. In Fig. 13, we present results of varying the value of τ for the transfer to the Pets dataset with ImageNet as the reference task. For this experiment, we selected 37 random classes from ImageNet and only learned the transform A, keeping B fixed to a random permutation and C fixed to the uniform prior over the reference task classes. We observe that the performance of linear fine-tuning degrades as we decrease the value of τ, but explainability through the bound improves, since the distribution mismatch term (which depends on τ) decreases. However, making τ too small is not preferable, since it leads to an increase in the first term of the bound (the re-weighted reference task loss), increasing the bound overall. Moreover, it also leads to a degradation in the accuracy after linear fine-tuning. For our experiments, we use τ = 0.02 since it does not decrease the accuracy of fine-tuning significantly and leads to a small gap between empirical and predicted transferability.
Figure 13 (panels: RN18 and RN50; x-axis: τ ∈ {0.01, 0.02, 0.05, 0.1, 0.2}; y-axis: loss; bars show Target Loss, Reweighted Reference Loss, Label Mismatch, and Distribution Mismatch): Trade-off between the cross-entropy loss after linear fine-tuning and the upper bound in Theorem 3 as a function of τ, for ResNet-18 and ResNet-50 models pre-trained on ImageNet and linearly fine-tuned on the Pets dataset. Increasing the value of τ leads to a decrease in the cross-entropy loss after fine-tuning but an increase in the proposed bound, mainly due to the τ·Wd term.
D Details of the experiments
All code is written in Python using TensorFlow/PyTorch and was run on an Intel(R) Xeon(R) Platinum 8358 CPU with 200 GB of RAM and an Nvidia A10 GPU. Implementation details and hyperparameters are described below. Our code can be found in the supplementary material. We report an average of three independent runs for the experiments in Sec. 4.2 and 4.3.
D.1 Dataset details
In our work, we used standard image classification benchmark datasets along with standard natural language processing datasets¹.
Aircraft [35]: consists of 10,000 aircraft images belonging to 100 classes.
CIFAR-10/100 [30]: These datasets contain 60,000 images belonging to 10/100 categories.
Addi-tionally, we created two subsets of CIFAR100 with the first 25 (small CIFAR-100-S) and 50 (mediumCIFAR-100-M) classes.DTD[13]: consists of 5,640 textural images belonging to 47 categories.Fashion MNIST [60]: consists of 70,000 grayscale images belonging to 10 categories.Pets [43]: consists of 7049 images of Cats and Dogs spread across 37 categories.ImageNet [17]: consists of 1.1 million images belonging to 1000 categories.Yelp [63]: consists of 650,000 training and 50,000 test examples belonging to 5 classes.Stanford Sentiment Treebank (SST-5) [ 63]:consists of 8,544 training and 2,210 test samplesbelonging to 5 classes.AG News [63]: consists of 120,000 training and 7,600 test examples belonging to 4 classesDBPedia [63]: consists of 560,000 training and 70,000 test examples belonging to 14 classesD.2 Semantically similar classes for CIFAR-10 from ImageNetFor our experiments with CIFAR-10 in Sec. C.1, we selected the following semantically similarclasses from ImageNet, { airliner, minivan, cock, tabby cat, ox, chihuahua, bull-frog, sorrel, submarine, fire engine }.D.3 Additional experimental detailsDetails of the optimization problem in Eq. 4 and Alg. 1. In Step 5 of the algorithm, we use thenetwork simplex flow algorithm from POT [ 23] to compute the optimal coupling. Since computingthe Wasserstein distance over the entire dataset can be slow, we follow [ 16] and compute the coupling1All NLP datasets and models are obtained from https://huggingface.co/ .27over batches. Note that the base distance defined in Eq. 2 is non-differentiable. Thus, we usea differentiable approximation ̃d((z, y),(z′, y′)) := dfeatures (z, z′) +ν· ∥e(y)−e(y′)∥2(withν= 108) where (z, y)and(z′, y′)are samples from the domains R′′′andTande(·)denotes theone-hot embedding of the labels in Step 5/6 . The first three terms in Step 6 of our algorithmcorrespond to the terms in the objective of Eq. 4 while the two additional terms are added to penalizethe constraints of class prior matching PT(y) =BD and invertibility of the matrix A, respectivelyas required by Theorem 3. We use the softmax operation to ensure BandDare a valid probabilitymatrix and vector.In our experiments, in Sec. 4.1, we used pre-trained models available from Pytorch for ResNet18/50,along with publicly available pre-trained models provided in the official repositories of each trainingmethod. For each experiment, we subsample data from the ImageNet dataset belonging to thesame number of classes as those present in the target dataset and use this data to train the linearlayer on top of the representations extracted from the pre-trained model along with a gradient normpenalty (Reference task classifier). To speed up the experiments, we use only 10,000 points from thesubsample of ImageNet for training the linear classifier and computing the transfer. For evaluation,we use a similar subsample of the validation dataset of ImageNet containing all the samples belongingto the subsampled classes. Fine-tuning on this dataset takes about 0.05 seconds per epoch for the taskof transfer from ImageNet to Pets with the ResNet-18 model (we run the fine-tuning for a total of5000 epochs).Along with training the linear classifiers with a gradient norm penalty (with τ= 0.02), we standardizethe features extracted from the pre-trained models to remove their mean (along each axis) and makethem have a unit standard deviation (along each axis). 
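As a rough illustration of this batched coupling step (Step 5 of Alg. 1), the sketch below is our own and not the authors' implementation: the helper names are hypothetical, we assume d_features is the Euclidean distance between the standardized features, and ν is simply taken to be a large constant as in the text, so that the coupling strongly prefers matching samples whose (transformed) labels agree.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def batched_coupling(z_ref, y_ref, z_tgt, y_tgt, num_classes, nu=1e8):
    """Optimal coupling between one batch of transformed reference samples and one
    batch of target samples under the approximate ground cost
    d((z, y), (z', y')) = d_features(z, z') + nu * ||e(y) - e(y')||_2,
    where e(.) is the one-hot embedding of the label and y_ref, y_tgt are integer labels."""
    feat_cost = ot.dist(z_ref, z_tgt, metric="euclidean")      # pairwise feature distances
    e_ref = np.eye(num_classes)[y_ref]                         # one-hot labels, reference batch
    e_tgt = np.eye(num_classes)[y_tgt]                         # one-hot labels, target batch
    label_cost = ot.dist(e_ref, e_tgt, metric="euclidean")     # ||e(y) - e(y')||_2
    cost = feat_cost + nu * label_cost
    a = np.full(len(z_ref), 1.0 / len(z_ref))                  # uniform batch weights
    b = np.full(len(z_tgt), 1.0 / len(z_tgt))
    coupling = ot.emd(a, b, cost)                              # exact network-simplex solver
    return coupling, float((coupling * cost).sum())            # transport plan and its cost
```

Because the plan is computed batch by batch on standardized features, the batch size has to be large enough relative to the feature dimension for the matching to be meaningful, a point the text returns to below.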
While standardizing the features do not have asignificant impact on the loss of the classifiers, including it makes it easier to match the distributionsof the reference task and target data after transformations. Since our optimization problem transformsthe reference task distribution to match the distribution of the target by solving the optimizationproblem in Eq. 4 by working on mini-batches, the size of the batch must be greater than the dimensionof the representation space of the pre-trained encoder. E.g., for ResNet18 models which have arepresentation dimension of 512, we use a batch size of 1000 and for ResNet50 models which have arepresentation dimension of 2048, we use a batch size of 2500. Having a smaller batch size than thedimension could lead to a noisy gradient since for that batch the transformation can achieve a perfectmatching, which may not generalize to data from other batches or unseen test data.While computing the transformations, we apply the same augmentation (re-sizing and center crop-ping)/normalization to the training data as those applied to the test data. Along with this, we extractthe features of the training and test data from the pre-trained model once and use these to trainthe linear layer. We note that this is done to save the computation time and better results could beobtained by allowing for extracting features after data augmentation for every batch.Finally, for our experiments in Sec. 4.2, the encoders are trained end-to-end on the reference task.This is in contrast to our other experiments where the encoders are pre-trained and data from thereference task is only used for linear fine-tuning. Using these models, task-relatedness is evaluated byfine-tuning a linear layer using the data from the target task as well as the transformations computedby solving Eq. 4. We used τ= 0.2here. We run the experiments with 3 random seeds and report theaverage results.For our experiments in Sec. 4.3, we used the official code from [ 61] to compute the scores for NCE,Leep, and LogMe along with the official code of [ 51] for SFDA. For PACTran [ 19], we also use theirofficial code with the PACTran-Gaussian method with N/K = 100 , β= 10N, σ2=D/100whereNdenotes the number of samples and Kdenotes the number of classes. This setting is similar tothe PACTran-Gauss fixsetting used in their work with the difference that we use N/K = 100 touse a sufficiently large number of samples, especially considering that all our other results for SbTEmethods are computed on the full training set. For OTCE, we follow the official code and computethe recommended OT-based NCE score and OTCE score ( λ1=−0.0001 andλ2=−1) using 4000randomly selected training samples from the two tasks. For the source task, we subsample the samenumber of classes as the target task and use. For the H-score, we use the official code to compute theH-score [6].For estimating transferability by solving Eq. 5, we set λ=0.01 for all the experiments in Sec. 4.3.28NeurIPS Paper Checklist1.ClaimsQuestion: Do the main claims made in the abstract and introduction accurately reflect the paper’scontributions and scope?Answer: [Yes]Justification: The abstract summarizes the paper’s contributions.Guidelines:•The answer NA means that the abstract and introduction do not include the claims made inthe paper.• The abstract and/or introduction should clearly state the claims made, including the contri-butions made in the paper and important assumptions and limitations. 
A No or NA answerto this question will not be perceived well by the reviewers.•The claims made should match theoretical and experimental results, and reflect how muchthe results can be expected to generalize to other settings.•It is fine to include aspirational goals as motivation as long as it is clear that these goals arenot attained by the paper.2.LimitationsQuestion: Does the paper discuss the limitations of the work performed by the authors?Answer: [Yes]Justification: Discussed in Sec. 5 and elaborated here.Here we present some of the potential limitations of our work and discuss the avenues for futurework. The analysis presented in the paper studies transferability using the popular cross-entropyloss similar to that analyzed by [ 56]. While this is important and practically useful, extendingthe analysis to other losses, primarily to the 0-1 loss, is an interesting future direction since inmost classification tasks, accuracy is the metric of primary interest.Another limitation of our work is the dependence of our bound on the Wasserstein distance.While it is a popular choice for analyzing performance in various distribution shift scenarios[52,16,50], it is difficult to estimate in practice due to its sensitivity to the number of samplesand the dimension of the representation space. An analysis based on a different and easy-to-compute divergence measure might be more amenable for making transferability estimation viatask-relatedness easier and faster.We emphasize that even though we present some of the limitations of our current work above,these are by no means weaknesses of our work, which non-trivially extends the current under-standing of the transfer learning setting through a rigorous analysis and achieves impressiveperformance on practical applications.Guidelines:•The answer NA means that the paper has no limitation while the answer No means that thepaper has limitations, but those are not discussed in the paper.• The authors are encouraged to create a separate "Limitations" section in their paper.•The paper should point out any strong assumptions and how robust the results are toviolations of these assumptions (e.g., independence assumptions, noiseless settings, modelwell-specification, asymptotic approximations only holding locally). The authors shouldreflect on how these assumptions might be violated in practice and what the implicationswould be.•The authors should reflect on the scope of the claims made, e.g., if the approach was onlytested on a few datasets or with a few runs. In general, empirical results often depend onimplicit assumptions, which should be articulated.•The authors should reflect on the factors that influence the performance of the approach. Forexample, a facial recognition algorithm may perform poorly when image resolution is lowor images are taken in low lighting. Or a speech-to-text system might not be used reliablyto provide closed captions for online lectures because it fails to handle technical jargon.•The authors should discuss the computational efficiency of the proposed algorithms andhow they scale with dataset size.29•If applicable, the authors should discuss possible limitations of their approach to addressproblems of privacy and fairness.•While the authors might fear that complete honesty about limitations might be used byreviewers as grounds for rejection, a worse outcome might be that reviewers discover limita-tions that aren’t acknowledged in the paper. 
The authors should use their best judgment andrecognize that individual actions in favor of transparency play an important role in devel-oping norms that preserve the integrity of the community. Reviewers will be specificallyinstructed to not penalize honesty concerning limitations.3.Theory Assumptions and ProofsQuestion: For each theoretical result, does the paper provide the full set of assumptions and acomplete (and correct) proof?Answer: [Yes]Justification: In App. A.Guidelines:• The answer NA means that the paper does not include theoretical results.•All the theorems, formulas, and proofs in the paper should be numbered and cross-referenced.• All assumptions should be clearly stated or referenced in the statement of any theorems.•The proofs can either appear in the main paper or the supplemental material, but if theyappear in the supplemental material, the authors are encouraged to provide a short proofsketch to provide intuition.•Inversely, any informal proof provided in the core of the paper should be complemented byformal proofs provided in appendix or supplemental material.• Theorems and Lemmas that the proof relies upon should be properly referenced.4.Experimental Result ReproducibilityQuestion: Does the paper fully disclose all the information needed to reproduce the mainexperimental results of the paper to the extent that it affects the main claims and/or conclusionsof the paper (regardless of whether the code and data are provided or not)?Answer: [Yes]Justification: In App. D.Guidelines:• The answer NA means that the paper does not include experiments.•If the paper includes experiments, a No answer to this question will not be perceived wellby the reviewers: Making the paper reproducible is important, regardless of whether thecode and data are provided or not.•If the contribution is a dataset and/or model, the authors should describe the steps taken tomake their results reproducible or verifiable.•Depending on the contribution, reproducibility can be accomplished in various ways. Forexample, if the contribution is a novel architecture, describing the architecture fully mightsuffice, or if the contribution is a specific model and empirical evaluation, it may be necessaryto either make it possible for others to replicate the model with the same dataset, or provideaccess to the model. In general. releasing code and data is often one good way to accomplishthis, but reproducibility can also be provided via detailed instructions for how to replicatethe results, access to a hosted model (e.g., in the case of a large language model), releasingof a model checkpoint, or other means that are appropriate to the research performed.•While NeurIPS does not require releasing code, the conference does require all submissionsto provide some reasonable avenue for reproducibility, which may depend on the nature ofthe contribution. 
For example(a)If the contribution is primarily a new algorithm, the paper should make it clear how toreproduce that algorithm.(b)If the contribution is primarily a new model architecture, the paper should describe thearchitecture clearly and fully.30(c)If the contribution is a new model (e.g., a large language model), then there shouldeither be a way to access this model for reproducing the results or a way to reproducethe model (e.g., with an open-source dataset or instructions for how to construct thedataset).(d)We recognize that reproducibility may be tricky in some cases, in which case authorsare welcome to describe the particular way they provide for reproducibility. In the caseof closed-source models, it may be that access to the model is limited in some way(e.g., to registered users), but it should be possible for other researchers to have somepath to reproducing or verifying the results.5.Open access to data and codeQuestion: Does the paper provide open access to the data and code, with sufficient instructionsto faithfully reproduce the main experimental results, as described in supplemental material?Answer: [Yes]Justification: Our codes can be found at https://github.com/akshaymehra24/TaskTransferAnalysis .Guidelines:• The answer NA means that paper does not include experiments requiring code.•Please see the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•While we encourage the release of code and data, we understand that this might not bepossible, so “No” is an acceptable answer. Papers cannot be rejected simply for not includingcode, unless this is central to the contribution (e.g., for a new open-source benchmark).•The instructions should contain the exact command and environment needed to run toreproduce the results. See the NeurIPS code and data submission guidelines ( https://nips.cc/public/guides/CodeSubmissionPolicy ) for more details.•The authors should provide instructions on data access and preparation, including how toaccess the raw data, preprocessed data, intermediate data, and generated data, etc.•The authors should provide scripts to reproduce all experimental results for the new proposedmethod and baselines. If only a subset of experiments are reproducible, they should statewhich ones are omitted from the script and why.•At submission time, to preserve anonymity, the authors should release anonymized versions(if applicable).•Providing as much information as possible in supplemental material (appended to the paper)is recommended, but including URLs to data and code is permitted.6.Experimental Setting/DetailsQuestion: Does the paper specify all the training and test details (e.g., data splits, hyperparame-ters, how they were chosen, type of optimizer, etc.) necessary to understand the results?Answer: [Yes]Justification: In App. 
D.Guidelines:• The answer NA means that the paper does not include experiments.•The experimental setting should be presented in the core of the paper to a level of detail thatis necessary to appreciate the results and make sense of them.•The full details can be provided either with the code, in appendix, or as supplementalmaterial.7.Experiment Statistical SignificanceQuestion: Does the paper report error bars suitably and correctly defined or other appropriateinformation about the statistical significance of the experiments?Answer: [Yes]Justification: An average of three independent runs are reported.Guidelines:31• The answer NA means that the paper does not include experiments.•The authors should answer "Yes" if the results are accompanied by error bars, confidenceintervals, or statistical significance tests, at least for the experiments that support the mainclaims of the paper.•The factors of variability that the error bars are capturing should be clearly stated (forexample, train/test split, initialization, random drawing of some parameter, or overall runwith given experimental conditions).•The method for calculating the error bars should be explained (closed form formula, call toa library function, bootstrap, etc.)• The assumptions made should be given (e.g., Normally distributed errors).•It should be clear whether the error bar is the standard deviation or the standard error of themean.•It is OK to report 1-sigma error bars, but one should state it. The authors should preferablyreport a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normalityof errors is not verified.•For asymmetric distributions, the authors should be careful not to show in tables or figuressymmetric error bars that would yield results that are out of range (e.g. negative error rates).•If error bars are reported in tables or plots, The authors should explain in the text how theywere calculated and reference the corresponding figures or tables in the text.8.Experiments Compute ResourcesQuestion: For each experiment, does the paper provide sufficient information on the computerresources (type of compute workers, memory, time of execution) needed to reproduce theexperiments?Answer: [Yes]Justification: In App. 
D.Guidelines:• The answer NA means that the paper does not include experiments.•The paper should indicate the type of compute workers CPU or GPU, internal cluster, orcloud provider, including relevant memory and storage.•The paper should provide the amount of compute required for each of the individualexperimental runs as well as estimate the total compute.•The paper should disclose whether the full research project required more compute than theexperiments reported in the paper (e.g., preliminary or failed experiments that didn’t makeit into the paper).9.Code Of EthicsQuestion: Does the research conducted in the paper conform, in every respect, with the NeurIPSCode of Ethics https://neurips.cc/public/EthicsGuidelines ?Answer: [Yes]Justification: There are no ethics concerns.Guidelines:• The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics.•If the authors answer No, they should explain the special circumstances that require adeviation from the Code of Ethics.•The authors should make sure to preserve anonymity (e.g., if there is a special considerationdue to laws or regulations in their jurisdiction).10.Broader ImpactsQuestion: Does the paper discuss both potential positive societal impacts and negative societalimpacts of the work performed?Answer: [NA]Justification: NAGuidelines:32• The answer NA means that there is no societal impact of the work performed.•If the authors answer NA or No, they should explain why their work has no societal impactor why the paper does not address societal impact.•Examples of negative societal impacts include potential malicious or unintended uses(e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g.,deployment of technologies that could make decisions that unfairly impact specific groups),privacy considerations, and security considerations.•The conference expects that many papers will be foundational research and not tied toparticular applications, let alone deployments. However, if there is a direct path to anynegative applications, the authors should point it out. For example, it is legitimate to pointout that an improvement in the quality of generative models could be used to generatedeepfakes for disinformation. 
On the other hand, it is not needed to point out that a genericalgorithm for optimizing neural networks could enable people to train models that generateDeepfakes faster.•The authors should consider possible harms that could arise when the technology is beingused as intended and functioning correctly, harms that could arise when the technology isbeing used as intended but gives incorrect results, and harms following from (intentional orunintentional) misuse of the technology.•If there are negative societal impacts, the authors could also discuss possible mitigationstrategies (e.g., gated release of models, providing defenses in addition to attacks, mecha-nisms for monitoring misuse, mechanisms to monitor how a system learns from feedbackover time, improving the efficiency and accessibility of ML).11.SafeguardsQuestion: Does the paper describe safeguards that have been put in place for responsible releaseof data or models that have a high risk for misuse (e.g., pretrained language models, imagegenerators, or scraped datasets)?Answer: [NA]Justification: NAGuidelines:• The answer NA means that the paper poses no such risks.•Released models that have a high risk for misuse or dual-use should be released withnecessary safeguards to allow for controlled use of the model, for example by requiring thatusers adhere to usage guidelines or restrictions to access the model or implementing safetyfilters.•Datasets that have been scraped from the Internet could pose safety risks. The authorsshould describe how they avoided releasing unsafe images.•We recognize that providing effective safeguards is challenging, and many papers do notrequire this, but we encourage authors to take this into account and make a best faith effort.12.Licenses for existing assetsQuestion: Are the creators or original owners of assets (e.g., code, data, models), used in thepaper, properly credited and are the license and terms of use explicitly mentioned and properlyrespected?Answer: [NA]Justification: NAGuidelines:• The answer NA means that the paper does not use existing assets.• The authors should cite the original paper that produced the code package or dataset.•The authors should state which version of the asset is used and, if possible, include a URL.• The name of the license (e.g., CC-BY 4.0) should be included for each asset.•For scraped data from a particular source (e.g., website), the copyright and terms of serviceof that source should be provided.•If assets are released, the license, copyright information, and terms of use in the packageshould be provided. For popular datasets, paperswithcode.com/datasets has curatedlicenses for some datasets. Their licensing guide can help determine the license of a dataset.33•For existing datasets that are re-packaged, both the original license and the license of thederived asset (if it has changed) should be provided.•If this information is not available online, the authors are encouraged to reach out to theasset’s creators.13.New AssetsQuestion: Are new assets introduced in the paper well documented and is the documentationprovided alongside the assets?Answer: [NA]Justification: NAGuidelines:• The answer NA means that the paper does not release new assets.•Researchers should communicate the details of the dataset/code/model as part of their sub-missions via structured templates. 
This includes details about training, license, limitations,etc.• The paper should discuss whether and how consent was obtained from people whose assetis used.•At submission time, remember to anonymize your assets (if applicable). You can eithercreate an anonymized URL or include an anonymized zip file.14.Crowdsourcing and Research with Human SubjectsQuestion: For crowdsourcing experiments and research with human subjects, does the paperinclude the full text of instructions given to participants and screenshots, if applicable, as well asdetails about compensation (if any)?Answer: [NA]Justification: NAGuidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Including this information in the supplemental material is fine, but if the main contributionof the paper involves human subjects, then as much detail as possible should be included inthe main paper.•According to the NeurIPS Code of Ethics, workers involved in data collection, curation, orother labor should be paid at least the minimum wage in the country of the data collector.15.Institutional Review Board (IRB) Approvals or Equivalent for Research with HumanSubjectsQuestion: Does the paper describe potential risks incurred by study participants, whether suchrisks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (oran equivalent approval/review based on the requirements of your country or institution) wereobtained?Answer: [NA]Justification: NAGuidelines:•The answer NA means that the paper does not involve crowdsourcing nor research withhuman subjects.•Depending on the country in which research is conducted, IRB approval (or equivalent)may be required for any human subjects research. If you obtained IRB approval, you shouldclearly state this in the paper.•We recognize that the procedures for this may vary significantly between institutions andlocations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelinesfor their institution.•For initial submissions, do not include any information that would break anonymity (ifapplicable), such as the institution conducting the review.34 |
0NMzBwqaAJ | "Not All Tokens Are What You Need for PretrainingZhenghao Lin⋆χφZhibin Gou⋆πφYeyun Gong⋄φ(...TRUNCATED) |
turWYO1w2Q | "Information Directed Tree SearchReasoning and Planning with Language AgentsYash Chandak, Hyunji Ale(...TRUNCATED) |
oRW8i4EF0Z | "A Bayesian ApproachTowards Crowdsourcing the Truths from LLMsPeiran Yao1, Jerin George Mathew2, She(...TRUNCATED) |
3iDxHRQfVy | "Had Enough of Experts? Elicitation and Evaluationof Bayesian Priors from Large Language ModelsDavid(...TRUNCATED) |
xC2xtBLmri | "CAFA: Coding as Auto-Formulation Can Boost LargeLanguage Models in Solving Linear ProgrammingProble(...TRUNCATED) |
x2yiUEH0f9 | "Probabilistic Proof State Compression: OptimizingLLM-Guided Formal VerificationAli Rahim∗Departme(...TRUNCATED) |
wzaMGXiOEy | "Intermediate Fine-Tuning ImprovesMathematical Reasoning in Smaller ModelsNeeraj Gangwar Suma P Bhat(...TRUNCATED) |
vPfm789BK0 | "LLM and Simulation as Bilevel Optimizers:A New Paradigm to Advance Physical ScientificDiscoveryPing(...TRUNCATED) |
- Downloads last month: 17