Dataset Viewer
chunks
stringlengths 43
4.1k
|
---|
4
1
0
2
b
e
F
9
1
]
V
C
.
s
c
[
4
v
9
9
1
6
.
2
1
3
1
:
v
i
X
r
a
Intriguing properties of neural networks
Christian Szegedy
Wojciech Zaremba
Ilya Sutskever
Joan Bruna
Google Inc.
New York University
Google Inc.
New York University
Dumitru Erhan
Ian Goodfellow
Rob Fergus
Google Inc.
University of Montreal
New York University
Facebook Inc.
Abstract
Deep neural networks are highly expressive models that have recently achieved
state of the art performance on speech and visual recognition tasks. While their
expressiveness is the reason they succeed, it also causes them to learn uninter-
pretable solutions that could have counter-intuitive properties. In this paper we
report two such properties.
First, we find that there is no distinction between individual high level units and
random linear combinations of high level units, according to various methods of
unit analysis. It suggests that it is the space, rather than the individual units, that
contains the semantic information in the high layers of neural networks.
Second, we find that deep neural networks learn input-output mappings that are
fairly discontinuous to a significant extent. We can cause the network to misclas-
sify an image by applying a certain hardly perceptible perturbation, which is found
by maximizing the network’s prediction error. In addition, the specific nature of
these perturbations is not a random artifact of learning: the same perturbation can
cause a different network, that was trained on a different subset of the dataset, to
misclassify the same input.
1
Introduction
Deep neural networks are powerful learning models that achieve excellent performance on visual and
speech recognition problems [9, 8]. Neural networks achieve high performance because they can
express arbitrary computation that consists of a modest number of massively parallel nonlinear steps.
But as the resulting computation is automatically discovered by backpropagation via supervised
learning, it can be difficult to interpret and can have counter-intuitive properties. In this paper, we
discuss two counter-intuitive properties of deep neural networks.
The first property is concerned with the semantic meaning of individual units. Previous works
[6, 13, 7] analyzed the semantic meaning of various units by finding the set of inputs that maximally
activate a given unit. The inspection of individual units makes the implicit assumption that the units
of the last feature layer form a distinguished basis which is particularly useful for extracting seman-
tic information. Instead, we show in section 3 that random projections of φ(x) are semantically
indistinguishable from the coordinates of φ(x). This puts into question the conjecture that neural
networks disentangle variation factors across coordinates. Generally, it seems that it is the entire
space of activations, rather than the individual units, that contains the bulk of the semantic informa-
tion. A similar, but even stronger conclusion was reached recently by Mikolov et al. [12] for word
representations, where the various directions in the vector space representing the words are shown
to give rise to a surprisingly rich semantic encoding of relations and analogies. At the same time,
1
the vector representations are stable up to a rotation of the space, so the individual units of the vector
representations are unlikely to contain semantic information.
The second property is concerned with the stability of neural networks with respect to small per-
turbations to their inputs. Consider a state-of-the-art deep neural network that generalizes well on
an object recognition task. We expect such network to be robust to small perturbations of its in-
put, because small perturbation cannot change the object category of an image. However, we find
that applying an imperceptible non-random perturbation to a test image, it is possible to arbitrarily
change the network’s prediction (see figure 5). These perturbations are found by optimizing the
input to maximize the prediction error. We term the so perturbed examples “adversarial |
examples”.
It is natural to expect that the precise configuration of the minimal necessary perturbations is a
random artifact of the normal variability that arises in different runs of backpropagation learning.
Yet, we found that adversarial examples are relatively robust, and are shared by neural networks with
varied number of layers, activations or trained on different subsets of the training data. That is, if
we use one neural net to generate a set of adversarial examples, we find that these examples are still
statistically hard for another neural network even when it was trained with different hyperparameters
or, most surprisingly, when it was trained on a different set of examples.
These results suggest that the deep neural networks that are learned by backpropagation have nonin-
tuitive characteristics and intrinsic blind spots, whose structure is connected to the data distribution
in a non-obvious way.
2 Framework
Notation We denote by x ∈ Rm an input image, and φ(x) activation values of some layer. We first
examine properties of the image of φ(x), and then we search for its blind spots.
We perform a number of experiments on a few different networks and three datasets :
• For the MNIST dataset, we used the following architectures [11]
– A simple fully connected network with one or more hidden layers and a Softmax
classifier. We refer to this network as “FC”.
– A classifier trained on top of an autoencoder. We refer to this network as “AE”.
• The ImageNet dataset [3].
– Krizhevsky et. al architecture [9]. We refer to it as “AlexNet”.
• ∼ 10M image samples from Youtube (see [10])
– Unsupervised trained network with ∼ 1 billion learnable parameters. We refer to it as
“QuocNet”.
For the MNIST experiments, we use regularization with a weight decay of λ. Moreover, in some
experiments we split the MNIST training dataset into two disjoint datasets P1, and P2, each with
30000 training cases.
3 Units of: φ(x)
Traditional computer vision systems rely on feature extraction: often a single feature is easily inter-
pretable, e.g. a histogram of colors, or quantized local derivatives. This allows one to inspect the
individual coordinates of the feature space, and link them back to meaningful variations in the input
domain. Similar reasoning was used in previous work that attempted to analyze neural networks that
were applied to computer vision problems. These works interpret an activation of a hidden unit as a
meaningful feature. They look for input images which maximize the activation value of this single
feature [6, 13, 7, 4].
The aforementioned technique can be formally stated as visual inspection of images x(cid:48), which satisfy
(or are close to maximum attainable value):
x(cid:48) = arg max
x∈I
(cid:104)φ(x), ei(cid:105)
2
(a) Unit sensitive to lower round stroke.
(b) Unit sensitive to upper round stroke, or
lower straight stroke.
(c) Unit senstive to left, upper round
stroke.
(d) Unit senstive to diagonal straight
stroke.
Figure 1: An MNIST experiment. The figure shows images that maximize the activation of various units
(maximum stimulation in the natural basis direction). Images within each row share semantic properties.
(a) Direction sensitive to upper straight
stroke, or lower round stroke.
(b) Direction sensitive to lower left loop.
(c) Direction senstive to round top stroke.
(d) Direction sensitive to right, upper
round stroke.
Figure 2: An MNIST experiment. The figure shows images that maximize the activations in a random direction
(maximum stimulation in a random basis). Images within each row share semantic properties.
where I is a held-out set of images from the data distribution that the network was not trained on
and ei is the natural basis vector associated with the i-th hidden unit.
Our experiments show that any random direction v ∈ Rn gives rise to similarly interpretable se-
mantic properties. More formally, we find that images x(cid:48) are semantically related to each other, for
many x(cid:48) such that
x(cid:48) = arg max
(cid:104)φ(x), v(cid:105)
x∈I
This suggests that the |
natural basis is not better than a random basis for inspecting the properties
of φ(x). This puts into question the notion that neural networks disentangle variation factors across
coordinates.
First, we evaluated the above claim using a convolutional neural network trained on MNIST. We
used the MNIST test set for I. Figure 1 shows images that maximize the activations in the natural
basis, and Figure 2 shows images that maximize the activation in random directions. In both cases
the resulting images share many high-level similarities.
Next, we repeated our experiment on an AlexNet, where we used the validation set as I. Figures 3
and 4 compare the natural basis to the random basis on the trained network. The rows appear to be
semantically meaningful for both the single unit and the combination of units.
Although such analysis gives insight on the capacity of φ to generate invariance on a particular
subset of the input distribution, it does not explain the behavior on the rest of its domain. We shall
see in the next section that φ has counterintuitive properties in the neighbourhood of almost every
point form data distribution.
4 Blind Spots in Neural Networks
So far, unit-level inspection methods had relatively little utility beyond confirming certain intuitions
regarding the complexity of the representations learned by a deep neural network [6, 13, 7, 4].
Global, network level inspection methods can be useful in the context of explaining classification
decisions made by a model [1] and can be used to, for instance, identify the parts of the input which
led to a correct classification of a given visual input instance (in other words, one can use a trained
3
(a) Unit sensitive to white flowers.
(b) Unit sensitive to postures.
(c) Unit senstive to round, spiky flowers.
(d) Unit senstive to round green or yellow
objects.
Figure 3: Experiment performed on ImageNet. Images stimulating single unit most (maximum stimulation in
natural basis direction). Images within each row share many semantic properties.
(a) Direction sensitive to white, spread
flowers.
(b) Direction sensitive to white dogs.
(c) Direction sensitive to spread shapes.
(d) Direction sensitive to dogs with brown
heads.
Figure 4: Experiment performed on ImageNet. Images giving rise to maximum activations in a random direc-
tion (maximum stimulation in a random basis). Images within each row share many semantic properties.
model for weakly-supervised localization). Such global analyses are useful in that they can make us
understand better the input-to-output mapping represented by the trained network.
Generally speaking, the output layer unit of a neural network is a highly nonlinear function of its
input. When it is trained with the cross-entropy loss (using the Softmax activation function), it
represents a conditional distribution of the label given the input (and the training set presented so
far). It has been argued [2] that the deep stack of non-linear layers in between the input and the
output unit of a neural network are a way for the model to encode a non-local generalization prior
over the input space. In other words, it is assumed that is possible for the output unit to assign non-
significant (and, presumably, non-epsilon) probabilities to regions of the input space that contain no
training examples in their vicinity. Such regions can represent, for instance, the same objects from
different viewpoints, which are relatively far (in pixel space), but which share nonetheless both the
label and the statistical structure of the original inputs.
It is implicit in such arguments that local generalization—in the very proximity of the training
examples—works as expected. And that in particular, for a small enough radius ε > 0 in the vicinity
of a given training input x, an x + r satisfying ||r|| < ε will get assigned a high probability of the
correct class by the model. This kind of smoothness prior is typically valid for computer vision
problems. In general, imperceptibly tiny perturbations of a given image do not normally change the
underlying clas |
s.
Our main result is that for deep neural networks, the smoothness assumption that underlies many
kernel methods does not hold. Specifically, we show that by using a simple optimization procedure,
we are able to find adversarial examples, which are obtained by imperceptibly small perturbations
to a correctly classified input image, so that it is no longer classified correctly.
In some sense, what we describe is a way to traverse the manifold represented by the network in an
efficient way (by optimization) and finding adversarial examples in the input space. The adversarial
examples represent low-probability (high-dimensional) “pockets” in the manifold, which are hard to
efficiently find by simply randomly sampling the input around a given example. Already, a variety
of recent state of the art computer vision models employ input deformations during training for
4
increasing the robustness and convergence speed of the models [9, 13]. These deformations are,
however, statistically inefficient, for a given example: they are highly correlated and are drawn from
the same distribution throughout the entire training of the model. We propose a scheme to make this
process adaptive in a way that exploits the model and its deficiencies in modeling the local space
around the training data.
We make the connection with hard-negative mining explicitly, as it is close in spirit: hard-negative
mining, in computer vision, consists of identifying training set examples (or portions thereof) which
are given low probabilities by the model, but which should be high probability instead, cf. [5]. The
training set distribution is then changed to emphasize such hard negatives and a further round of
model training is performed. As shall be described, the optimization problem proposed in this work
can also be used in a constructive way, similar to the hard-negative mining principle.
4.1 Formal description
We denote by f : Rm −→ {1 . . . k} a classifier mapping image pixel value vectors to a discrete
label set. We also assume that f has an associated continuous loss function denoted by lossf :
Rm × {1 . . . k} −→ R+. For a given x ∈ Rm image and target label l ∈ {1 . . . k}, we aim to solve
the following box-constrained optimization problem:
• Minimize (cid:107)r(cid:107)2 subject to:
1. f (x + r) = l
2. x + r ∈ [0, 1]m
The minimizer r might not be unique, but we denote one such x + r for an arbitrarily chosen
minimizer by D(x, l). Informally, x + r is the closest image to x classified as l by f . Obviously,
D(x, f (x)) = f (x), so this task is non-trivial only if f (x) (cid:54)= l. In general, the exact computation
of D(x, l) is a hard problem, so we approximate it by using a box-constrained L-BFGS. Concretely,
we find an approximation of D(x, l) by performing line-search to find the minimum c > 0 for which
the minimizer r of the following problem satisfies f (x + r) = l.
• Minimize c|r| + lossf (x + r, l) subject to x + r ∈ [0, 1]m
This penalty function method would yield the exact solution for D(X, l) in the case of convex
losses, however neural networks are non-convex in general, so we end up with an approximation in
this case.
4.2 Experimental results
Our “minimimum distortion” function D has the following intriguing properties which we will sup-
port by informal evidence and quantitative experiments in this section:
1. For all the networks we studied (MNIST, QuocNet [10], AlexNet [9]), for each sam-
ple, we have always managed to generate very close, visually hard to distinguish, ad-
versarial examples that are misclassified by the original network (see figure 5 and
http://goo.gl/huaGPb for examples).
2. Cross model generalization: a relatively large fraction of examples will be misclassified by
networks trained from scratch with different hyper-parameters (number of layers, regular-
ization or initial weights).
3. Cross training-set generalization a relatively large fraction of examples will be misclassi-
fied by networks trained from scratch on a disjoint training set.
The above observations suggest that adversarial examples are somewhat uni |
versal and not just the
results of overfitting to a particular model or to the specific selection of the training set. They also
suggest that back-feeding adversarial examples to training might improve generalization of the re-
sulting models. Our preliminary experiments have yielded positive evidence on MNIST to support
this hypothesis as well: We have successfully trained a two layer 100-100-10 non-convolutional neu-
ral network with a test error below 1.2% by keeping a pool of adversarial examples a random subset
of which is continuously replaced by newly generated adversarial examples and which is mixed into
5
(a)
(b)
Figure 5: Adversarial examples generated for AlexNet [9].(Left) is a correctly predicted sample, (center) dif-
ference between correct image, and image predicted incorrectly magnified by 10x (values shifted by 128 and
clamped), (right) adversarial example. All images in the right column are predicted to be an “ostrich, Struthio
camelus”. Average distortion based on 64 examples is 0.006508. Plase refer to http://goo.gl/huaGPb
for full resolution images. The examples are strictly randomly chosen. There is not any postselection involved.
(a)
(b)
Figure 6: Adversarial examples for QuocNet [10]. A binary car classifier was trained on top of the last layer
features without fine-tuning. The randomly chosen examples on the left are recognized correctly as cars, while
the images in the middle are not recognized. The rightmost column is the magnified absolute value of the
difference between the two images.
the original training set all the time. We used weight decay, but no dropout for this network. For
comparison, a network of this size gets to 1.6% errors when regularized by weight decay alone and
can be improved to around 1.3% by using carefully applied dropout. A subtle, but essential detail
is that we only got improvements by generating adversarial examples for each layer outputs which
were used to train all the layers above. The network was trained in an alternating fashion, maintain-
ing and updating a pool of adversarial examples for each layer separately in addition to the original
training set. According to our initial observations, adversarial examples for the higher layers seemed
to be significantly more useful than those on the input or lower layers. In our future work, we plan
to compare these effects in a systematic manner.
For space considerations, we just present results for a representative subset (see Table 1) of the
MNIST experiments we performed. The results presented here are consistent with those on a larger
variety of non-convolutional models. For MNIST, we do not have results for convolutional mod-
els yet, but our first qualitative experiments with AlexNet gives us reason to believe that convolu-
tional networks may behave similarly as well. Each of our models were trained with L-BFGS until
convergence. The first three models are linear classifiers that work on the pixel level with various
weight decay parameters λ. All our examples use quadratic weight decay on the connection weights:
lossdecay = λ (cid:80) w2
i /k added to the total loss, where k is the number of units in the layer. Three
of our models are simple linear (softmax) classifier without hidden units (FC10(λ)). One of them,
FC10(1), is trained with extremely high λ = 1 in order to test whether it is still possible to generate
adversarial examples in this extreme setting as well.Two other models are a simple sigmoidal neural
network with two hidden layers and a classifier. The last model, AE400-10, consists of a single layer
sparse autoencoder with sigmoid activations and 400 nodes with a Softmax classifier. This network
has been trained until it got very high quality first layer filters and this layer was not fine-tuned. The
last column measures the minimum average pixel level distortion necessary to reach 0% accuracy
on the training set. The distortion is measure by
between the original x and distorted
(cid:113) (cid:80)(x(cid:48)
i−xi)2
n
6
(a) Even columns: adver-
sarial examples for a lin-
ear
(std-
(FC) classifier
dev=0. |
06)
(b) Even columns: adver-
sarial examples for a 200-
200-10 sigmoid network
(stddev=0.063)
(c) Randomly
distorted
samples by Gaussian noise
with stddev=1. Accuracy:
51%.
Figure 7: Adversarial examples for a randomly chosen subset of MNIST compared with randomly distorted
examples. Odd columns correspond to original images, and even columns correspond to distorted counterparts.
The adversarial examples generated for the specific model have accuracy 0% for the respective model. Note
that while the randomly distorted examples are hardly readable, still they are classified correctly in half of the
cases, while the adversarial examples are never classified correctly.
Model Name
Description
Training error
Test error
Av. min. distortion
FC10(10−4)
FC10(10−2)
FC10(1)
FC100-100-10
FC200-200-10
AE400-10
Softmax with λ = 10−4
Softmax with λ = 10−2
Softmax with λ = 1
Sigmoid network λ = 10−5, 10−5, 10−6
Sigmoid network λ = 10−5, 10−5, 10−6
Autoencoder with Softmax λ = 10−6
6.7%
10%
21.2%
0%
0%
0.57%
7.4%
9.4%
20%
1.64%
1.54%
1.9%
0.062
0.1
0.14
0.058
0.065
0.086
Table 1: Tests of the generalization of adversarial instances on MNIST.
FC10(10−4)
FC10(10−2)
FC10(1)
FC100-100-10
FC200-200-10
AE400-10
Av. distortion
FC10(10−4)
FC10(10−2)
FC10(1)
FC100-100-10
FC200-200-10
AE400-10
Gaussian noise, stddev=0.1
Gaussian noise, stddev=0.3
100%
87.1%
71.9%
28.9%
38.2%
23.4%
5.0%
15.6%
11.7%
100%
76.2%
13.7%
14%
16%
10.1%
11.3%
22.7%
35.2%
100%
21.1%
23.8%
24.8%
18.3%
22.7%
2%
35.9%
48.1%
100%
20.3%
9.4%
0%
5%
3.9%
27.3%
47%
6.6%
100%
6.6%
0%
4.3%
2.7%
9.8%
34.4%
2%
2.7%
100%
0.8%
3.1%
0.062
0.1
0.14
0.058
0.065
0.086
0.1
0.3
Table 2: Cross-model generalization of adversarial examples. The columns of the Tables show the error induced
by distorted examples fed to the given model. The last column shows average distortion wrt. original training
set.
x(cid:48) images, where n = 784 is the number of image pixels. The pixel intensities are scaled to be in
the range [0, 1].
In our first experiment, we generated a set of adversarial instances for a given network and fed
these examples for each other network to measure the proportion of misclassified instances. The
last column shows the average minimum distortion that was necessary to reach 0% accuracy on the
whole training set. The experimental results are presented in Table 2. The columns of Table 2 show
the error (proportion of misclassified instances) on the so distorted training sets. The last two rows
are given for reference showing the error induced when distorting by the given amounts of Gaussian
noise. Note that even the noise with stddev 0.1 is greater than the stddev of our adversarial noise
for all but one of the models. Figure 7 shows a visualization of the generated adversarial instances
for two of the networks used in this experiment The general conclusion is that adversarial examples
tend to stay hard even for models trained with different hyperparameters. Although the autoencoder
based version seems most resilient to adversarial examples, it is not fully immune either.
Still, this experiment leaves open the question of dependence over the training set. Does the hardness
of the generated examples rely solely on the particular choice of our training set as a sample or does
this effect generalize even to models trained on completely different training sets?
7
Model
Error on P1
Error on P2
Error on Test
Min Av. Distortion
FC100-100-10: 100-100-10 trained on P1
FC123-456-10: 123-456-10 trained on P1
FC100-100-10’ trained on P2
0%
0%
2.3%
2.4%
2.5%
0%
2%
2.1%
2.1%
0.062
0.059
0.058
Table 3: Models trained to study cross-training-set generalization of the generated adversarial examples. Errors
presented in Table correpond to original not-distorted data, to provide a baseline.
FC100-100-10
FC123-456-10
FC100-100-10’
Distorted for FC100-100-10 (av. stddev=0.062)
Distorted for FC123-456-10 (av. stddev=0.059)
Distorted for FC100-100-10’ (av. stddev=0.058)
Gaussian nois |
e with stddev=0.06
Distorted for FC100-100-10 amplified to stddev=0.1
Distorted for FC123-456-10 amplified to stddev=0.1
Distorted for FC100-100-10’ amplified to stddev=0.1
Gaussian noise with stddev=0.1
100%
6.25%
8.2%
2.2%
100%
96%
27%
2.6%
26.2%
100%
8.2%
2.6%
98%
100%
50%
2.8%
5.9%
5.1%
100%
2.4%
43%
22%
100%
2.7%
Table 4: Cross-training-set generalization error rate for the set of adversarial examples generated for different
models. The error induced by a random distortion to the same examples is displayed in the last row.
To study cross-training-set generalization, we have partitioned the 60000 MNIST training images
into two parts P1 and P2 of size 30000 each and trained three non-convolutional networks with
sigmoid activations on them: Two, FC100-100-10 and FC123-456-10, on P1 and FC100-100-10 on
P2. The reason we trained two networks for P1 is to study the cumulative effect of changing the
hypermarameters and the training sets at the same time. Models FC100-100-10 and FC100-100-
10 share the same hyperparameters: both of them are 100-100-10 networks, while FC123-456-10
has different number of hidden units. In this experiment, we were distorting the elements of the
test set rather than the training set. Table 3 summarizes the basic facts about these models. After
we generate adversarial examples with 100% error rates with minimum distortion for the test set,
we feed these examples to the each of the models. The error for each model is displayed in the
corresponding column of the upper part of Table 4. In the last experiment, we magnify the effect of
our distortion by using the examples x + 0.1 x(cid:48)−x
rather than x(cid:48). This magnifies the distortion
(cid:107)x(cid:48)−x(cid:107)2
on average by 40%, from stddev 0.06 to 0.1. The so distorted examples are fed back to each of the
models and the error rates are displayed in the lower part of Table 4. The intriguing conclusion is
that the adversarial examples remain hard for models trained even on a disjoint training set, although
their effectiveness decreases considerably.
4.3 Spectral Analysis of Unstability
The previous section showed examples of deep networks resulting from purely supervised training
which are unstable with respect to a peculiar form of small perturbations. Independently of their
generalisation properties across networks and training sets, the adversarial examples show that there
exist small additive perturbations of the input (in Euclidean sense) that produce large perturbations
at the output of the last layer. This section describes a simple procedure to measure and control the
additive stability of the network by measuring the spectrum of each rectified layer.
Mathematically, if φ(x) denotes the output of a network of K layers corresponding to input x and
trained parameters W , we write
φ(x) = φK(φK−1(. . . φ1(x; W1); W2) . . . ; WK) ,
where φk denotes the operator mapping layer k − 1 to layer k. The unstability of φ(x) can be
explained by inspecting the upper Lipschitz constant of each layer k = 1 . . . K, defined as the
constant Lk > 0 such that
∀ x, r , (cid:107)φk(x; Wk) − φk(x + r; Wk)(cid:107) ≤ Lk(cid:107)r(cid:107) .
The resulting network thus satsifies (cid:107)φ(x) − φ(x + r)(cid:107) ≤ L(cid:107)r(cid:107), with L = (cid:81)K
k=1 Lk.
A half-rectified layer (both convolutional or fully connected) is defined by the mapping
φk(x; Wk, bk) = max(0, Wkx+bk). Let (cid:107)W (cid:107) denote the operator norm of W (i.e., its largest singu-
8
Layer
Conv. 1
Conv. 2
Conv. 3
Conv. 4
Conv. 5
FC. 1
FC. 2
FC. 3
Size
Stride
Upper bound
3 × 11 × 11 × 96
96 × 5 × 5 × 256
256 × 3 × 3 × 384
384 × 3 × 3 × 384
384 × 3 × 3 × 256
9216 × 4096
4096 × 4096
4096 × 1000
4
1
1
1
1
N/A
N/A
N/A
2.75
10
7
7.5
11
3.12
4
4
Table 5: Frame Bounds of each rectified layer of the network from [9].
lar value). Since the non-linearity ρ(x) = max(0, x) is contractive, i.e. satisfies (cid:107)ρ(x)−ρ(x+r)(cid:107) ≤
(cid:107)r(cid:107) for all x, r; it follows that
(cid:107)φk(x; Wk)−φk(x+r; Wk)(cid:107) |
= (cid:107) max(0, Wkx+bk)−max(0, Wk(x+r)+bk)(cid:107) ≤ (cid:107)Wkr(cid:107) ≤ (cid:107)Wk(cid:107)(cid:107)r(cid:107) ,
and hence Lk ≤ (cid:107)Wk(cid:107). On the other hand, a max-pooling layer φk is contractive:
∀ x , r , (cid:107)φk(x) − φk(x + r)(cid:107) ≤ (cid:107)r(cid:107) ,
since its Jacobian is a projection onto a subset of the input coordinates and hence does not expand
the gradients. Finally, if φk is a contrast-normalization layer
φk(x) =
(cid:16)
x
(cid:15) + (cid:107)x(cid:107)2
(cid:17)γ ,
one can verify that
∀ x , r , (cid:107)φk(x) − φk(x + r)(cid:107) ≤ (cid:15)−γ(cid:107)r(cid:107)
for γ ∈ [0.5, 1], which corresponds to most common operating regimes.
It results that a conservative measure of the unstability of the network can be obtained by simply
computing the operator norm of each fully connected and convolutional layer. The fully connected
case is trivial since the norm is directly given by the largest singular value of the fully connected
matrix. Let us describe the convolutional case. If W denotes a generic 4-tensor, implementing a
convolutional layer with C input features, D output features, support N × N and spatial stride ∆,
W x =
(cid:40) C
(cid:88)
c=1
xc (cid:63) wc,d(n1∆, n2∆) ; d = 1 . . . , D
,
(cid:41)
where xc denotes the c-th input feature image, and wc,d is the spatial kernel corresponding to input
feature c and output feature d, by applying Parseval’s formula we obtain that its operator norm is
given by
sup
ξ∈[0,N ∆−1)2
where A(ξ) is a D × (C · ∆2) matrix whose rows are
(cid:107)W (cid:107) =
(cid:107)A(ξ)(cid:107) ,
(1)
∀ d = 1 . . . D , A(ξ)d =
(cid:16)
∆−2
(cid:100)wc,d(ξ + l · N · ∆−1) ; c = 1 . . . C , l = (0 . . . ∆ − 1)2(cid:17)
,
and (cid:100)wc,d is the 2-D Fourier transform of wc,d:
(cid:88)
(cid:100)wc,d(ξ) =
u∈[0,N )2
wc,d(u)e−2πi(u·ξ)/N 2
.
Table 5 shows the upper Lipschitz bounds computed from the ImageNet deep convolutional network
of [9], using (1). It shows that instabilities can appear as soon as in the first convolutional layer.
These results are consistent with the exsitence of blind spots constructed in the previous section,
but they don’t attempt to explain why these examples generalize across different hyperparameters
or training sets. We emphasize that we compute upper bounds: large bounds do not automatically
translate into existence of adversarial examples; however, small bounds guarantee that no such ex-
amples can appear. This suggests a simple regularization of the parameters, consisting in penalizing
each upper Lipschitz bound, which might help improve the generalisation error of the networks.
9
5 Discussion
We demonstrated that deep neural networks have counter-intuitive properties both with respect to
the semantic meaning of individual units and with respect to their discontinuities. The existence of
the adversarial negatives appears to be in contradiction with the network’s ability to achieve high
generalization performance. Indeed, if the network can generalize well, how can it be confused
by these adversarial negatives, which are indistinguishable from the regular examples? Possible
explanation is that the set of adversarial negatives is of extremely low probability, and thus is never
(or rarely) observed in the test set, yet it is dense (much like the rational numbers), and so it is found
near every virtually every test case. However, we don’t have a deep understanding of how often
adversarial negatives appears, and thus this issue should be addressed in a future research.
References
[1] David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-
Robert M¨uller. How to explain individual classification decisions. The Journal of Machine Learning
Research, 99:1803–1831, 2010.
[2] Yoshua Bengio. Learning deep architectures for ai. Foundations and trends® in Machine Learning,
2(1):1–127, 2009.
[3] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchi-
cal image database. In Computer Vision and Pattern Recognition, 2009. CVP |
R 2009. IEEE Conference
on, pages 248–255. IEEE, 2009.
[4] Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. Visualizing higher-layer features
of a deep network. Technical Report 1341, University of Montreal, June 2009. Also presented at the
ICML 2009 Workshop on Learning Feature Hierarchies, Montr´eal, Canada.
[5] Pedro Felzenszwalb, David McAllester, and Deva Ramanan. A discriminatively trained, multiscale, de-
formable part model. In Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference
on, pages 1–8. IEEE, 2008.
[6] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate
object detection and semantic segmentation. arXiv preprint arXiv:1311.2524, 2013.
[7] Ian Goodfellow, Quoc Le, Andrew Saxe, Honglak Lee, and Andrew Y Ng. Measuring invariances in
deep networks. Advances in neural information processing systems, 22:646–654, 2009.
[8] Geoffrey E. Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel rahman Mohamed, Navdeep Jaitly,
Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, and Brian Kingsbury. Deep
neural networks for acoustic modeling in speech recognition: The shared views of four research groups.
IEEE Signal Process. Mag., 29(6):82–97, 2012.
[9] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton.
Imagenet classification with deep convolutional
neural networks. In Advances in Neural Information Processing Systems 25, pages 1106–1114, 2012.
[10] Quoc V Le, Marc’Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S Corrado, Jeff
Dean, and Andrew Y Ng. Building high-level features using large scale unsupervised learning. arXiv
preprint arXiv:1112.6209, 2011.
[11] Yann LeCun and Corinna Cortes. The mnist database of handwritten digits, 1998.
[12] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations
in vector space. arXiv preprint arXiv:1301.3781, 2013.
[13] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional neural networks. arXiv
preprint arXiv:1311.2901, 2013.
10
|
Consistency Models
Yang Song 1 Prafulla Dhariwal 1 Mark Chen 1 Ilya Sutskever 1
3
2
0
2
y
a
M
1
3
]
G
L
.
s
c
[
2
v
9
6
4
1
0
.
3
0
3
2
:
v
i
X
r
a
Abstract
Diffusion models have significantly advanced the
fields of image, audio, and video generation, but
they depend on an iterative sampling process that
causes slow generation. To overcome this limita-
tion, we propose consistency models, a new fam-
ily of models that generate high quality samples
by directly mapping noise to data. They support
fast one-step generation by design, while still al-
lowing multistep sampling to trade compute for
sample quality. They also support zero-shot data
editing, such as image inpainting, colorization,
and super-resolution, without requiring explicit
training on these tasks. Consistency models can
be trained either by distilling pre-trained diffu-
sion models, or as standalone generative models
altogether. Through extensive experiments, we
demonstrate that they outperform existing distilla-
tion techniques for diffusion models in one- and
few-step sampling, achieving the new state-of-
the-art FID of 3.55 on CIFAR-10 and 6.20 on
ImageNet 64 ˆ 64 for one-step generation. When
trained in isolation, consistency models become a
new family of generative models that can outper-
form existing one-step, non-adversarial generative
models on standard benchmarks such as CIFAR-
10, ImageNet 64 ˆ 64 and LSUN 256 ˆ 256.
1. Introduction
Diffusion models (Sohl-Dickstein et al., 2015; Song & Er-
mon, 2019; 2020; Ho et al., 2020; Song et al., 2021), also
known as score-based generative models, have achieved
unprecedented success across multiple fields, including im-
age generation (Dhariwal & Nichol, 2021; Nichol et al.,
2021; Ramesh et al., 2022; Saharia et al., 2022; Rombach
et al., 2022), audio synthesis (Kong et al., 2020; Chen et al.,
2021; Popov et al., 2021), and video generation (Ho et al.,
1OpenAI, San Francisco, CA 94110, USA. Correspondence to:
Yang Song <[email protected]>.
Proceedings of the 40 th International Conference on Machine
Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright
2023 by the author(s).
1
Figure 1: Given a Probability Flow (PF) ODE that smoothly
converts data to noise, we learn to map any point (e.g., xt,
xt1, and xT ) on the ODE trajectory to its origin (e.g., x0)
for generative modeling. Models of these mappings are
called consistency models, as their outputs are trained to be
consistent for points on the same trajectory.
2022b;a). A key feature of diffusion models is the iterative
sampling process which progressively removes noise from
random initial vectors. This iterative process provides a
flexible trade-off of compute and sample quality, as using
extra compute for more iterations usually yields samples
of better quality. It is also the crux of many zero-shot data
editing capabilities of diffusion models, enabling them to
solve challenging inverse problems ranging from image
inpainting, colorization, stroke-guided image editing, to
Computed Tomography and Magnetic Resonance Imaging
(Song & Ermon, 2019; Song et al., 2021; 2022; 2023; Kawar
et al., 2021; 2022; Chung et al., 2023; Meng et al., 2021).
However, compared to single-step generative models like
GANs (Goodfellow et al., 2014), VAEs (Kingma & Welling,
2014; Rezende et al., 2014), or normalizing flows (Dinh
et al., 2015; 2017; Kingma & Dhariwal, 2018), the iterative
generation procedure of diffusion models typically requires
10–2000 times more compute for sample generation (Song
& Ermon, 2020; Ho et al., 2020; Song et al., 2021; Zhang
& Chen, 2022; Lu et al., 2022), causing slow inference and
limited real-time applications.
Our objective is to create generative models that facilitate ef-
ficient, single-step generation without sacrificing important
advantages of iterative sampling, such as trading compute
for sample quality when necessary, as well as performing
zero-shot data editing tasks. As illustrated in Fig. 1, we
build on top of the probability flow (PF) ordinary differen-
tial equation (ODE) in continuous-time diffusion models
( |
Song et al., 2021), whose trajectories smoothly transition
Consistency Models
the data distribution into a tractable noise distribution. We
propose to learn a model that maps any point at any time
step to the trajectory’s starting point. A notable property
of our model is self-consistency: points on the same tra-
jectory map to the same initial point. We therefore refer to
such models as consistency models. Consistency models
allow us to generate data samples (initial points of ODE
trajectories, e.g., x0 in Fig. 1) by converting random noise
vectors (endpoints of ODE trajectories, e.g., xT in Fig. 1)
with only one network evaluation. Importantly, by chaining
the outputs of consistency models at multiple time steps,
we can improve sample quality and perform zero-shot data
editing at the cost of more compute, similar to what iterative
sampling enables for diffusion models.
To train a consistency model, we offer two methods based
on enforcing the self-consistency property. The first method
relies on using numerical ODE solvers and a pre-trained
diffusion model to generate pairs of adjacent points on a
PF ODE trajectory. By minimizing the difference between
model outputs for these pairs, we can effectively distill a
diffusion model into a consistency model, which allows gen-
erating high-quality samples with one network evaluation.
By contrast, our second method eliminates the need for a
pre-trained diffusion model altogether, allowing us to train
a consistency model in isolation. This approach situates
consistency models as an independent family of generative
models. Importantly, neither approach necessitates adver-
sarial training, and they both place minor constraints on the
architecture, allowing the use of flexible neural networks
for parameterizing consistency models.
We demonstrate the efficacy of consistency models on sev-
eral image datasets, including CIFAR-10 (Krizhevsky et al.,
2009), ImageNet 64 ˆ 64 (Deng et al., 2009), and LSUN
256 ˆ 256 (Yu et al., 2015). Empirically, we observe that
as a distillation approach, consistency models outperform
existing diffusion distillation methods like progressive dis-
tillation (Salimans & Ho, 2022) across a variety of datasets
in few-step generation: On CIFAR-10, consistency models
reach new state-of-the-art FIDs of 3.55 and 2.93 for one-step
and two-step generation; on ImageNet 64 ˆ 64, it achieves
record-breaking FIDs of 6.20 and 4.70 with one and two net-
work evaluations respectively. When trained as standalone
generative models, consistency models can match or surpass
the quality of one-step samples from progressive distillation,
despite having no access to pre-trained diffusion models.
They are also able to outperform many GANs, and exist-
ing non-adversarial, single-step generative models across
multiple datasets. Furthermore, we show that consistency
models can be used to perform a wide range of zero-shot
data editing tasks, including image denoising, interpolation,
inpainting, colorization, super-resolution, and stroke-guided
image editing (SDEdit, Meng et al. (2021)).
2. Diffusion Models
Consistency models are heavily inspired by the theory of
continuous-time diffusion models (Song et al., 2021; Karras
et al., 2022). Diffusion models generate data by progres-
sively perturbing data to noise via Gaussian perturbations,
then creating samples from noise via sequential denoising
steps. Let pdatapxq denote the data distribution. Diffusion
models start by diffusing pdatapxq with a stochastic differen-
tial equation (SDE) (Song et al., 2021)
dxt “ µpxt, tq dt ` σptq dwt,
(1)
where t P r0, T s, T ą 0 is a fixed constant, µp¨, ¨q and
σp¨q are the drift and diffusion coefficients respectively,
and twtutPr0,T s denotes the standard Brownian motion.
We denote the distribution of xt as ptpxq and as a result
p0pxq ” pdatapxq. A remarkable property of this SDE is
the existence of an ordinary differential equation (ODE),
dubbed the Probability Flow (PF) ODE by Song et al.
(2021), whose solution trajectories sampled at t are dis-
tributed according to ptpxq:
„
ȷ
σptq2∇ l |
og ptpxtq
1
2
dt.
(2)
dxt “
µpxt, tq ´
Here ∇ log ptpxq is the score function of ptpxq; hence dif-
fusion models are also known as score-based generative
models (Song & Ermon, 2019; 2020; Song et al., 2021).
2t.
Typically, the SDE in Eq. (1) is designed such that pT pxq
is close to a tractable Gaussian distribution πpxq. We
hereafter adopt the settings in Karras et al. (2022), where
?
µpx, tq “ 0 and σptq “
In this case, we have
ptpxq “ pdatapxq b N p0, t2Iq, where b denotes the convo-
lution operation, and πpxq “ N p0, T 2Iq. For sampling, we
first train a score model sϕpx, tq « ∇ log ptpxq via score
matching (Hyv¨arinen & Dayan, 2005; Vincent, 2011; Song
et al., 2019; Song & Ermon, 2019; Ho et al., 2020), then
plug it into Eq. (2) to obtain an empirical estimate of the PF
ODE, which takes the form of
dxt
dt
“ ´tsϕpxt, tq.
(3)
We call Eq. (3) the empirical PF ODE. Next, we sample
ˆxT „ π “ N p0, T 2Iq to initialize the empirical PF ODE
and solve it backwards in time with any numerical ODE
solver, such as Euler (Song et al., 2020; 2021) and Heun
solvers (Karras et al., 2022), to obtain the solution trajectory
tˆxtutPr0,T s. The resulting ˆx0 can then be viewed as an
approximate sample from the data distribution pdatapxq. To
avoid numerical instability, one typically stops the solver
at t “ ϵ, where ϵ is a fixed small positive number, and
accepts ˆxϵ as the approximate sample. Following Karras
et al. (2022), we rescale image pixel values to r´1, 1s, and
set T “ 80, ϵ “ 0.002.
2
Consistency Models
of self-consistency: its outputs are consistent for arbitrary
pairs of pxt, tq that belong to the same PF ODE trajectory,
i.e., f pxt, tq “ f pxt1, t1q for all t, t1 P rϵ, T s. As illustrated
in Fig. 2, the goal of a consistency model, symbolized as
fθ, is to estimate this consistency function f from data by
learning to enforce the self-consistency property (details
in Sections 4 and 5). Note that a similar definition is used
for neural flows (Biloˇs et al., 2021) in the context of neural
ODEs (Chen et al., 2018). Compared to neural flows, how-
ever, we do not enforce consistency models to be invertible.
Parameterization For any consistency function f p¨, ¨q, we
have f pxϵ, ϵq “ xϵ, i.e., f p¨, ϵq is an identity function. We
call this constraint the boundary condition. All consistency
models have to meet this boundary condition, as it plays a
crucial role in the successful training of consistency models.
This boundary condition is also the most confining archi-
tectural constraint on consistency models. For consistency
models based on deep neural networks, we discuss two
ways to implement this boundary condition almost for free.
Suppose we have a free-form deep neural network Fθpx, tq
whose output has the same dimensionality as x. The first
way is to simply parameterize the consistency model as
#
fθpx, tq “
x
Fθpx, tq
t “ ϵ
t P pϵ, T s
.
(4)
The second method is to parameterize the consistency model
using skip connections, that is,
fθpx, tq “ cskipptqx ` coutptqFθpx, tq,
(5)
where cskipptq and coutptq are differentiable functions
such that cskippϵq “ 1, and coutpϵq “ 0. This way,
is differentiable at t “ ϵ if
the consistency model
Fθpx, tq, cskipptq, coutptq are all differentiable, which is criti-
cal for training continuous-time consistency models (Appen-
dices B.1 and B.2). The parameterization in Eq. (5) bears
strong resemblance to many successful diffusion models
(Karras et al., 2022; Balaji et al., 2022), making it easier to
borrow powerful diffusion model architectures for construct-
ing consistency models. We therefore follow the second
parameterization in all experiments.
Sampling With a well-trained consistency model fθp¨, ¨q,
we can generate samples by sampling from the initial dis-
tribution ˆxT „ N p0, T 2Iq and then evaluating the consis-
tency model for ˆxϵ “ fθpˆxT , T q. This involves only one
forward pass through the consistency model and therefore
generates samples in a single step. Importantly, one can
also evaluate the consistency model multiple times by al-
ternating denoising and |
noise injection steps for improved
sample quality. Summarized in Algorithm 1, this multistep
sampling procedure provides the flexibility to trade com-
pute for sample quality. It also has important applications
in zero-shot data editing. In practice, we find time points
Figure 2: Consistency models are trained to map points on
any trajectory of the PF ODE to the trajectory’s origin.
Diffusion models are bottlenecked by their slow sampling
speed. Clearly, using ODE solvers for sampling requires
iterative evaluations of the score model sϕpx, tq, which is
computationally costly. Existing methods for fast sampling
include faster numerical ODE solvers (Song et al., 2020;
Zhang & Chen, 2022; Lu et al., 2022; Dockhorn et al., 2022),
and distillation techniques (Luhman & Luhman, 2021; Sali-
mans & Ho, 2022; Meng et al., 2022; Zheng et al., 2022).
However, ODE solvers still need more than 10 evaluation
steps to generate competitive samples. Most distillation
methods like Luhman & Luhman (2021) and Zheng et al.
(2022) rely on collecting a large dataset of samples from
the diffusion model prior to distillation, which itself is com-
putationally expensive. To our best knowledge, the only
distillation approach that does not suffer from this drawback
is progressive distillation (PD, Salimans & Ho (2022)), with
which we compare consistency models extensively in our
experiments.
3. Consistency Models
We propose consistency models, a new type of models that
support single-step generation at the core of its design, while
still allowing iterative generation for trade-offs between sam-
ple quality and compute, and zero-shot data editing. Consis-
tency models can be trained in either the distillation mode or
the isolation mode. In the former case, consistency models
distill the knowledge of pre-trained diffusion models into a
single-step sampler, significantly improving other distilla-
tion approaches in sample quality, while allowing zero-shot
image editing applications. In the latter case, consistency
models are trained in isolation, with no dependence on pre-
trained diffusion models. This makes them an independent
new class of generative models.
Below we introduce the definition, parameterization, and
sampling of consistency models, plus a brief discussion on
their applications to zero-shot data editing.
Definition Given a solution trajectory txtutPrϵ,T s of the
PF ODE in Eq. (2), we define the consistency function as
f : pxt, tq ÞÑ xϵ. A consistency function has the property
3
Consistency Models
Algorithm 1 Multistep Consistency Sampling
Input: Consistency model fθp¨, ¨q, sequence of time
points τ1 ą τ2 ą ¨ ¨ ¨ ą τN ´1, initial noise ˆxT
x Ð fθpˆxT , T q
for n “ 1 to N ´ 1 do
Sample z „ N p0, Iq
n ´ ϵ2z
τ 2
ˆxτn Ð x `
x Ð fθpˆxτn , τnq
a
end for
Output: x
tτ1, τ2, ¨ ¨ ¨ , τN ´1u in Algorithm 1 with a greedy algorithm,
where the time points are pinpointed one at a time using
ternary search to optimize the FID of samples obtained from
Algorithm 1. This assumes that given prior time points, the
FID is a unimodal function of the next time point. We find
this assumption to hold empirically in our experiments, and
leave the exploration of better strategies as future work.
Zero-Shot Data Editing Similar to diffusion models, con-
sistency models enable various data editing and manipu-
lation applications in zero shot; they do not require ex-
plicit training to perform these tasks. For example, consis-
tency models define a one-to-one mapping from a Gaussian
noise vector to a data sample. Similar to latent variable
models like GANs, VAEs, and normalizing flows, consis-
tency models can easily interpolate between samples by
traversing the latent space (Fig. 11). As consistency models
are trained to recover xϵ from any noisy input xt where
t P rϵ, T s, they can perform denoising for various noise
levels (Fig. 12). Moreover, the multistep generation pro-
cedure in Algorithm 1 is useful for solving certain inverse
problems in zero shot by using an iterative replacement pro-
cedure similar to that of diffusion models (Song & Ermon,
201 |
9; Song et al., 2021; Ho et al., 2022b). This enables
many applications in the context of image editing, including
inpainting (Fig. 10), colorization (Fig. 8), super-resolution
(Fig. 6b) and stroke-guided image editing (Fig. 13) as in
SDEdit (Meng et al., 2021). In Section 6.3, we empiri-
cally demonstrate the power of consistency models on many
zero-shot image editing tasks.
4. Training Consistency Models via Distillation
We present our first method for training consistency mod-
els based on distilling a pre-trained score model sϕpx, tq.
Our discussion revolves around the empirical PF ODE in
Eq. (3), obtained by plugging the score model sϕpx, tq
into the PF ODE. Consider discretizing the time horizon
rϵ, T s into N ´ 1 sub-intervals, with boundaries t1 “ ϵ ă
t2 ă ¨ ¨ ¨ ă tN “ T .
In practice, we follow Karras
et al. (2022) to determine the boundaries with the formula
ti “ pϵ1{ρ ` i´1{N ´1pT 1{ρ ´ ϵ1{ρqqρ, where ρ “ 7. When
N is sufficiently large, we can obtain an accurate estimate
of xtn from xtn`1 by running one discretization step of a
numerical ODE solver. This estimate, which we denote as
ˆxϕ
tn, is defined by
ˆxϕ
tn
:“ xtn`1 ` ptn ´ tn`1qΦpxtn`1 , tn`1; ϕq,
(6)
where Φp¨ ¨ ¨ ; ϕq represents the update function of a one-
step ODE solver applied to the empirical PF ODE. For
example, when using the Euler solver, we have Φpx, t; ϕq “
´tsϕpx, tq which corresponds to the following update rule
ˆxϕ
tn “ xtn`1 ´ ptn ´ tn`1qtn`1sϕpxtn`1, tn`1q.
For simplicity, we only consider one-step ODE solvers in
this work. It is straightforward to generalize our framework
to multistep ODE solvers and we leave it as future work.
Due to the connection between the PF ODE in Eq. (2) and
the SDE in Eq. (1) (see Section 2), one can sample along the
distribution of ODE trajectories by first sampling x „ pdata,
then adding Gaussian noise to x. Specifically, given a data
point x, we can generate a pair of adjacent data points
pˆxϕ
tn , xtn`1q on the PF ODE trajectory efficiently by sam-
pling x from the dataset, followed by sampling xtn`1 from
the transition density of the SDE N px, t2
n`1Iq, and then
computing ˆxϕ
tn using one discretization step of the numeri-
cal ODE solver according to Eq. (6). Afterwards, we train
the consistency model by minimizing its output differences
on the pair pˆxϕ
tn, xtn`1 q. This motivates our following con-
sistency distillation loss for training consistency models.
Definition 1. The consistency distillation loss is defined as
LN
CDpθ, θ´; ϕq :“
Erλptnqdpfθpxtn`1, tn`1q, fθ´ pˆxϕ
tn , tnqqs,
(7)
n`1Iq. Here U
1, N ´1
(cid:75)
(cid:74)
, and xtn`1 „ N px; t2
where the expectation is taken with respect to x „ pdata, n „
U
1, N ´1
(cid:75)
denotes the uniform distribution over t1, 2, ¨ ¨ ¨ , N ´ 1u,
λp¨q P R` is a positive weighting function, ˆxϕ
is given by
tn
Eq. (6), θ´ denotes a running average of the past values of
θ during the course of optimization, and dp¨, ¨q is a metric
function that satisfies @x, y : dpx, yq ě 0 and dpx, yq “ 0
if and only if x “ y.
(cid:74)
Unless otherwise stated, we adopt the notations in Defi-
nition 1 throughout this paper, and use Er¨s to denote the
expectation over all random variables. In our experiments,
we consider the squared ℓ2 distance dpx, yq “ }x ´ y}2
2, ℓ1
distance dpx, yq “ }x ´ y}1, and the Learned Perceptual
Image Patch Similarity (LPIPS, Zhang et al. (2018)). We
find λptnq ” 1 performs well across all tasks and datasets.
In practice, we minimize the objective by stochastic gradient
descent on the model parameters θ, while updating θ´ with
exponential moving average (EMA). That is, given a decay
4
Algorithm 2 Consistency Distillation (CD)
Algorithm 3 Consistency Training (CT)
Consistency Models
Input: dataset D, initial model parameter θ, learning rate
η, ODE solver Φp¨, ¨; ϕq, dp¨, ¨q, λp¨q, and µ
θ´ Ð θ
repeat
1, N ´ 1
Sample x „ D and n „ U
Sample xtn`1 „ N px; t2
ˆxϕ
tn Ð xtn`1 ` ptn ´ tn`1qΦpxtn`1, tn`1; ϕq
Lpθ, θ´; ϕq Ð
(cid:74)
n`1Iq
(cid:75)
λptnqdpfθpxtn`1 , tn`1q, fθ´ pˆxϕ
tn, tnqq
θ Ð θ ´ η∇θLpθ, θ´; ϕq
θ´ Ð stopgradpµθ´ ` |
p1 ´ µqθ)
until convergence
rate 0 ď µ ă 1, we perform the following update after each
optimization step:
θ´ Ð stopgradpµθ´ ` p1 ´ µqθq.
(8)
The overall training procedure is summarized in Algo-
rithm 2. In alignment with the convention in deep reinforce-
ment learning (Mnih et al., 2013; 2015; Lillicrap et al., 2015)
and momentum based contrastive learning (Grill et al., 2020;
He et al., 2020), we refer to fθ´ as the “target network”,
and fθ as the “online network”. We find that compared to
simply setting θ´ “ θ, the EMA update and “stopgrad”
operator in Eq. (8) can greatly stabilize the training process
and improve the final performance of the consistency model.
1,N ´1
(cid:74)
Below we provide a theoretical justification for consistency
distillation based on asymptotic analysis.
Theorem 1. Let ∆t :“ maxnP
t|tn`1 ´ tn|u, and
f p¨, ¨; ϕq be the consistency function of the empirical PF
ODE in Eq. (3). Assume fθ satisfies the Lipschitz condition:
there exists L ą 0 such that for all t P rϵ, T s, x, and y,
we have ∥fθpx, tq ´ fθpy, tq∥2 ď L ∥x ´ y∥2. Assume
, the ODE solver called
further that for all n P
1, N ´ 1
(cid:75)
(cid:74)
at tn`1 has local error uniformly bounded by Opptn`1 ´
tnqp`1q with p ě 1. Then, if LN
CDpθ, θ; ϕq “ 0, we have
(cid:75)
}fθpx, tnq ´ f px, tn; ϕq}2 “ Opp∆tqpq.
sup
n,x
Proof. The proof is based on induction and parallels the
classic proof of global error bounds for numerical ODE
solvers (S¨uli & Mayers, 2003). We provide the full proof in
Appendix A.2.
Input: dataset D, initial model parameter θ, learning rate
η, step schedule N p¨q, EMA decay rate schedule µp¨q,
dp¨, ¨q, and λp¨q
θ´ Ð θ and k Ð 0
repeat
Sample x „ D, and n „ U
Sample z „ N p0, Iq
Lpθ, θ´q Ð
1, N pkq ´ 1
(cid:75)
(cid:74)
λptnqdpfθpx ` tn`1z, tn`1q, fθ´ px ` tnz, tnqq
θ Ð θ ´ η∇θLpθ, θ´q
θ´ Ð stopgradpµpkqθ´ ` p1 ´ µpkqqθq
k Ð k ` 1
until convergence
implies that, under some regularity conditions, the estimated
consistency model can become arbitrarily accurate, as long
as the step size of the ODE solver is sufficiently small. Im-
portantly, our boundary condition fθpx, ϵq ” x precludes
the trivial solution fθpx, tq ” 0 from arising in consistency
model training.
The consistency distillation loss LN
CDpθ, θ´; ϕq can be ex-
tended to hold for infinitely many time steps (N Ñ 8) if
θ´ “ θ or θ´ “ stopgradpθq. The resulting continuous-
time loss functions do not require specifying N nor the time
steps tt1, t2, ¨ ¨ ¨ , tN u. Nonetheless, they involve Jacobian-
vector products and require forward-mode automatic dif-
ferentiation for efficient implementation, which may not
be well-supported in some deep learning frameworks. We
provide these continuous-time distillation loss functions in
Theorems 3 to 5, and relegate details to Appendix B.1.
5. Training Consistency Models in Isolation
Consistency models can be trained without relying on any
pre-trained diffusion models. This differs from existing
diffusion distillation techniques, making consistency models
a new independent family of generative models.
Recall that in consistency distillation, we rely on a pre-
trained score model sϕpx, tq to approximate the ground
truth score function ∇ log ptpxq. It turns out that we can
avoid this pre-trained score model altogether by leveraging
the following unbiased estimator (Lemma 1 in Appendix A):
∇ log ptpxtq “ ´E
„
xt ´ x
t2
ȷ
ˇ
ˇ
ˇ
ˇ xt
,
Since θ´ is a running average of the history of θ, we have
θ´ “ θ when the optimization of Algorithm 2 converges.
That is, the target and online consistency models will eventu-
ally match each other. If the consistency model additionally
achieves zero consistency distillation loss, then Theorem 1
where x „ pdata and xt „ N px; t2Iq. That is, given x and
xt, we can estimate ∇ log ptpxtq with ´pxt ´ xq{t2.
This unbiased estimate suffices to replace the pre-trained
diffusion model in consistency distillation when using the
Euler method as the ODE solver in the limit of N Ñ 8, as
5
Consistency Models
1,N ´1
(cid:75)
justified by the following result.
Theorem 2. Let ∆t :“ maxnP |
t|tn`1 ´ tn|u. As-
(cid:74)
sume d and fθ´ are both twice continuously differentiable
with bounded second derivatives, the weighting function
λp¨q is bounded, and Er∥∇ log ptnpxtn q∥2
2s ă 8. As-
sume further that we use the Euler ODE solver, and the
pre-trained score model matches the ground truth, i.e.,
@t P rϵ, T s : sϕpx, tq ” ∇ log ptpxq. Then,
CDpθ, θ´; ϕq “ LN
CTpθ, θ´q ` op∆tq,
LN
(9)
1, N ´ 1
(cid:75)
(cid:74)
, and xtn`1 „ N px; t2
where the expectation is taken with respect to x „ pdata, n „
n`1Iq. The consistency
U
training objective, denoted by LN
Erλptnqdpfθpx ` tn`1z, tn`1q, fθ´ px ` tnz, tnqqs, (10)
where z „ N p0, Iq. Moreover, LN
inf N LN
CTpθ, θ´q, is defined as
CTpθ, θ´q ě Op∆tq if
CDpθ, θ´; ϕq ą 0.
Proof. The proof is based on Taylor series expansion and
properties of score functions (Lemma 1). A complete proof
is provided in Appendix A.3.
We refer to Eq. (10) as the consistency training (CT) loss.
Crucially, Lpθ, θ´q only depends on the online network
fθ, and the target network fθ´ , while being completely
agnostic to diffusion model parameters ϕ. The loss function
Lpθ, θ´q ě Op∆tq decreases at a slower rate than the
remainder op∆tq and thus will dominate the loss in Eq. (9)
as N Ñ 8 and ∆t Ñ 0.
For improved practical performance, we propose to progres-
sively increase N during training according to a schedule
function N p¨q. The intuition (cf ., Fig. 3d) is that the consis-
tency training loss has less “variance” but more “bias” with
respect to the underlying consistency distillation loss (i.e.,
the left-hand side of Eq. (9)) when N is small (i.e., ∆t is
large), which facilitates faster convergence at the beginning
of training. On the contrary, it has more “variance” but less
“bias” when N is large (i.e., ∆t is small), which is desirable
when closer to the end of training. For best performance,
we also find that µ should change along with N , according
to a schedule function µp¨q. The full algorithm of consis-
tency training is provided in Algorithm 3, and the schedule
functions used in our experiments are given in Appendix C.
Similar to consistency distillation, the consistency training
loss LN
CTpθ, θ´q can be extended to hold in continuous time
(i.e., N Ñ 8) if θ´ “ stopgradpθq, as shown in Theo-
rem 6. This continuous-time loss function does not require
schedule functions for N or µ, but requires forward-mode
automatic differentiation for efficient implementation. Un-
like the discrete-time CT loss, there is no undesirable “bias”
associated with the continuous-time objective, as we effec-
tively take ∆t Ñ 0 in Theorem 2. We relegate more details
to Appendix B.2.
6
6. Experiments
We employ consistency distillation and consistency train-
ing to learn consistency models on real image datasets,
including CIFAR-10 (Krizhevsky et al., 2009), ImageNet
64 ˆ 64 (Deng et al., 2009), LSUN Bedroom 256 ˆ 256,
and LSUN Cat 256 ˆ 256 (Yu et al., 2015). Results are
compared according to Fr´echet Inception Distance (FID,
Heusel et al. (2017), lower is better), Inception Score (IS,
Salimans et al. (2016), higher is better), Precision (Prec.,
Kynk¨a¨anniemi et al. (2019), higher is better), and Recall
(Rec., Kynk¨a¨anniemi et al. (2019), higher is better). Addi-
tional experimental details are provided in Appendix C.
6.1. Training Consistency Models
We perform a series of experiments on CIFAR-10 to under-
stand the effect of various hyperparameters on the perfor-
mance of consistency models trained by consistency distil-
lation (CD) and consistency training (CT). We first focus on
the effect of the metric function dp¨, ¨q, the ODE solver, and
the number of discretization steps N in CD, then investigate
the effect of the schedule functions N p¨q and µp¨q in CT.
To set up our experiments for CD, we consider the squared
ℓ2 distance dpx, yq “ }x ´ y}2
2, ℓ1 distance dpx, yq “
}x ´ y}1, and the Learned Perceptual Image Patch Simi-
larity (LPIPS, Zhang et al. (2018)) as the metric function.
For the ODE solver, we compare Euler’s forward method
and Heun’s second order method as detailed in Karras et |
al.
(2022). For the number of discretization steps N , we com-
pare N P t9, 12, 18, 36, 50, 60, 80, 120u. All consistency
models trained by CD in our experiments are initialized with
the corresponding pre-trained diffusion models, whereas
models trained by CT are randomly initialized.
As visualized in Fig. 3a, the optimal metric for CD is LPIPS,
which outperforms both ℓ1 and ℓ2 by a large margin over
all training iterations. This is expected as the outputs of
consistency models are images on CIFAR-10, and LPIPS is
specifically designed for measuring the similarity between
natural images. Next, we investigate which ODE solver and
which discretization step N work the best for CD. As shown
in Figs. 3b and 3c, Heun ODE solver and N “ 18 are the
best choices. Both are in line with the recommendation
of Karras et al. (2022) despite the fact that we are train-
ing consistency models, not diffusion models. Moreover,
Fig. 3b shows that with the same N , Heun’s second order
solver uniformly outperforms Euler’s first order solver. This
corroborates with Theorem 1, which states that the optimal
consistency models trained by higher order ODE solvers
have smaller estimation errors with the same N . The results
of Fig. 3c also indicate that once N is sufficiently large, the
performance of CD becomes insensitive to N . Given these
insights, we hereafter use LPIPS and Heun ODE solver for
CD unless otherwise stated. For N in CD, we follow the
Consistency Models
(a) Metric functions in CD.
(b) Solvers and N in CD.
(c) N with Heun solver in CD.
(d) Adaptive N and µ in CT.
Figure 3: Various factors that affect consistency distillation (CD) and consistency training (CT) on CIFAR-10. The best
configuration for CD is LPIPS, Heun ODE solver, and N “ 18. Our adaptive schedule functions for N and µ make CT
converge significantly faster than fixing them to be constants during the course of optimization.
(a) CIFAR-10
(b) ImageNet 64 ˆ 64
(c) Bedroom 256 ˆ 256
(d) Cat 256 ˆ 256
Figure 4: Multistep image generation with consistency distillation (CD). CD outperforms progressive distillation (PD)
across all datasets and sampling steps. The only exception is single-step generation on Bedroom 256 ˆ 256.
suggestions in Karras et al. (2022) on CIFAR-10 and Im-
ageNet 64 ˆ 64. We tune N separately on other datasets
(details in Appendix C).
Due to the strong connection between CD and CT, we adopt
LPIPS for our CT experiments throughout this paper. Unlike
CD, there is no need for using Heun’s second order solver
in CT as the loss function does not rely on any particular
numerical ODE solver. As demonstrated in Fig. 3d, the con-
vergence of CT is highly sensitive to N —smaller N leads
to faster convergence but worse samples, whereas larger
N leads to slower convergence but better samples upon
convergence. This matches our analysis in Section 5, and
motivates our practical choice of progressively growing N
and µ for CT to balance the trade-off between convergence
speed and sample quality. As shown in Fig. 3d, adaptive
schedules of N and µ significantly improve the convergence
speed and sample quality of CT. In our experiments, we
tune the schedules N p¨q and µp¨q separately for images of
different resolutions, with more details in Appendix C.
6.2. Few-Step Image Generation
Distillation In current literature, the most directly compara-
ble approach to our consistency distillation (CD) is progres-
sive distillation (PD, Salimans & Ho (2022)); both are thus
far the only distillation approaches that do not construct
synthetic data before distillation. In stark contrast, other dis-
tillation techniques, such as knowledge distillation (Luhman
& Luhman, 2021) and DFNO (Zheng et al., 2022), have to
prepare a large synthetic dataset by generating numerous
samples from the diffusion model with expensive numerical
ODE/SDE solvers. We perform comprehensive comparison
for PD and CD on CIFAR-10, ImageNet 64ˆ64, and LSUN
256 ˆ 256, with all results reported in Fig. 4. All methods
distill from an EDM (Karras et al., 2022) model that we pre-
trained in-house. W |
e note that across all sampling iterations,
using the LPIPS metric uniformly improves PD compared
to the squared ℓ2 distance in the original paper of Salimans
& Ho (2022). Both PD and CD improve as we take more
sampling steps. We find that CD uniformly outperforms
PD across all datasets, sampling steps, and metric functions
considered, except for single-step generation on Bedroom
256 ˆ 256, where CD with ℓ2 slightly underperforms PD
with ℓ2. As shown in Table 1, CD even outperforms distilla-
tion approaches that require synthetic dataset construction,
such as Knowledge Distillation (Luhman & Luhman, 2021)
and DFNO (Zheng et al., 2022).
Direct Generation In Tables 1 and 2, we compare the
sample quality of consistency training (CT) with other gen-
erative models using one-step and two-step generation. We
also include PD and CD results for reference. Both tables re-
port PD results obtained from the ℓ2 metric function, as this
is the default setting used in the original paper of Salimans
7
Consistency Models
Table 1: Sample quality on CIFAR-10. ˚Methods that require
synthetic data construction for distillation.
Table 2: Sample quality on ImageNet 64 ˆ 64, and LSUN
Bedroom & Cat 256 ˆ 256. :Distillation techniques.
METHOD
Diffusion + Samplers
DDIM (Song et al., 2020)
DDIM (Song et al., 2020)
DDIM (Song et al., 2020)
DPM-solver-2 (Lu et al., 2022)
DPM-solver-fast (Lu et al., 2022)
3-DEIS (Zhang & Chen, 2022)
Diffusion + Distillation
Knowledge Distillation˚ (Luhman & Luhman, 2021)
DFNO˚ (Zheng et al., 2022)
1-Rectified Flow (+distill)˚ (Liu et al., 2022)
2-Rectified Flow (+distill)˚ (Liu et al., 2022)
3-Rectified Flow (+distill)˚ (Liu et al., 2022)
PD (Salimans & Ho, 2022)
CD
PD (Salimans & Ho, 2022)
CD
Direct Generation
BigGAN (Brock et al., 2019)
Diffusion GAN (Xiao et al., 2022)
AutoGAN (Gong et al., 2019)
E2GAN (Tian et al., 2020)
ViTGAN (Lee et al., 2021)
TransGAN (Jiang et al., 2021)
StyleGAN2-ADA (Karras et al., 2020)
StyleGAN-XL (Sauer et al., 2022)
Score SDE (Song et al., 2021)
DDPM (Ho et al., 2020)
LSGM (Vahdat et al., 2021)
PFGM (Xu et al., 2022)
EDM (Karras et al., 2022)
1-Rectified Flow (Liu et al., 2022)
Glow (Kingma & Dhariwal, 2018)
Residual Flow (Chen et al., 2019)
GLFlow (Xiao et al., 2019)
DenseFlow (Grci´c et al., 2021)
DC-VAE (Parmar et al., 2021)
CT
CT
NFE (Ó)
FID (Ó)
IS (Ò)
METHOD
NFE (Ó)
FID (Ó)
Prec. (Ò) Rec. (Ò)
50
20
10
10
10
10
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
2000
1000
147
110
35
1
1
1
1
1
1
1
2
4.67
6.84
8.23
5.94
4.70
4.17
9.36
4.12
6.18
4.85
5.21
8.34
3.55
5.58
2.93
14.7
14.6
12.4
11.3
6.66
9.26
2.92
1.85
2.20
3.17
2.10
2.35
2.04
378
48.9
46.4
44.6
34.9
17.9
8.70
5.83
9.08
9.01
8.79
8.69
9.48
9.05
9.75
9.22
8.93
8.55
8.51
9.30
9.05
9.83
9.89
9.46
9.68
9.84
1.13
3.92
8.20
8.49
8.85
ImageNet 64 ˆ 64
PD: (Salimans & Ho, 2022)
DFNO: (Zheng et al., 2022)
CD:
PD: (Salimans & Ho, 2022)
CD:
ADM (Dhariwal & Nichol, 2021)
EDM (Karras et al., 2022)
BigGAN-deep (Brock et al., 2019)
CT
CT
LSUN Bedroom 256 ˆ 256
PD: (Salimans & Ho, 2022)
PD: (Salimans & Ho, 2022)
CD:
CD:
DDPM (Ho et al., 2020)
ADM (Dhariwal & Nichol, 2021)
EDM (Karras et al., 2022)
PGGAN (Karras et al., 2018)
PG-SWGAN (Wu et al., 2019)
TDPM (GAN) (Zheng et al., 2023)
StyleGAN2 (Karras et al., 2020)
CT
CT
LSUN Cat 256 ˆ 256
PD: (Salimans & Ho, 2022)
PD: (Salimans & Ho, 2022)
CD:
CD:
DDPM (Ho et al., 2020)
ADM (Dhariwal & Nichol, 2021)
EDM (Karras et al., 2022)
PGGAN (Karras et al., 2018)
StyleGAN2 (Karras et al., 2020)
CT
CT
1
1
1
2
2
250
79
1
1
2
1
2
1
2
1000
1000
79
1
1
1
1
1
2
1
2
1
2
1000
1000
79
1
1
1
2
15.39
8.35
6.20
8.95
4.70
2.07
2.44
4.06
13.0
11.1
16.92
8.47
7.80
5.22
4.89
1.90
3.57
8.34
8.0
5.24
2.35
16.0
7.85
29.6
15.5
11.0
8.84
17.1
5.57
6.69
37.5
7.25
20.7
11.7
0.59
0.62
0.68
0.63
0.69
0.74
0.71
0.79
0.71
0.69
0.47
0.56
0.66
0.68
0.60
0.66
0.66
0.59
0.60
0.68
0.51
0.59
0.65
0.66
0.53
0.63
0.70
0.58
0.56
0.63
0.63
0.65
0.64
0.63
0.67
0.48
0.47
0.56
0.27
0.39
0.34
0.39
0.45
0.51
0.45
0.48
0.17
0.33
0.25
0.36
0.36
0.40
0.48
0.52
0.43
0.43
0.23
0.36
Figure 5: Samples generated by ED |
M (top), CT + single-step generation (middle), and CT + 2-step generation (Bottom). All
corresponding images are generated from the same initial noise.
8
Consistency Models
(a) Left: The gray-scale image. Middle: Colorized images. Right: The ground-truth image.
(b) Left: The downsampled image (32 ˆ 32). Middle: Full resolution images (256 ˆ 256). Right: The ground-truth image (256 ˆ 256).
(c) Left: A stroke input provided by users. Right: Stroke-guided image generation.
Figure 6: Zero-shot image editing with a consistency model trained by consistency distillation on LSUN Bedroom 256ˆ256.
& Ho (2022). For fair comparison, we ensure PD and CD
distill the same EDM models. In Tables 1 and 2, we observe
that CT outperforms existing single-step, non-adversarial
generative models, i.e., VAEs and normalizing flows, by a
significant margin on CIFAR-10. Moreover, CT achieves
comparable quality to one-step samples from PD without
relying on distillation. In Fig. 5, we provide EDM samples
(top), single-step CT samples (middle), and two-step CT
samples (bottom). In Appendix E, we show additional sam-
ples for both CD and CT in Figs. 14 to 21. Importantly, all
samples obtained from the same initial noise vector share
significant structural similarity, even though CT and EDM
models are trained independently from one another. This
indicates that CT is less likely to suffer from mode collapse,
as EDMs do not.
6.3. Zero-Shot Image Editing
Similar to diffusion models, consistency models allow zero-
shot image editing by modifying the multistep sampling
process in Algorithm 1. We demonstrate this capability
with a consistency model trained on the LSUN bedroom
dataset using consistency distillation. In Fig. 6a, we show
such a consistency model can colorize gray-scale bedroom
images at test time, even though it has never been trained
on colorization tasks. In Fig. 6b, we show the same con-
sistency model can generate high-resolution images from
low-resolution inputs. In Fig. 6c, we additionally demon-
strate that it can generate images based on stroke inputs cre-
ated by humans, as in SDEdit for diffusion models (Meng
et al., 2021). Again, this editing capability is zero-shot,
as the model has not been trained on stroke inputs.
In
Appendix D, we additionally demonstrate the zero-shot
capability of consistency models on inpainting (Fig. 10),
interpolation (Fig. 11) and denoising (Fig. 12), with more
examples on colorization (Fig. 8), super-resolution (Fig. 9)
and stroke-guided image generation (Fig. 13).
7. Conclusion
We have introduced consistency models, a type of generative
models that are specifically designed to support one-step
and few-step generation. We have empirically demonstrated
that our consistency distillation method outshines the exist-
ing distillation techniques for diffusion models on multiple
image benchmarks and small sampling iterations. Further-
more, as a standalone generative model, consistency models
generate better samples than existing single-step genera-
tion models except for GANs. Similar to diffusion models,
they also allow zero-shot image editing applications such as
inpainting, colorization, super-resolution, denoising, inter-
polation, and stroke-guided image generation.
In addition, consistency models share striking similarities
with techniques employed in other fields, including deep
Q-learning (Mnih et al., 2015) and momentum-based con-
trastive learning (Grill et al., 2020; He et al., 2020). This
offers exciting prospects for cross-pollination of ideas and
methods among these diverse fields.
Acknowledgements
We thank Alex Nichol for reviewing the manuscript and
providing valuable feedback, Chenlin Meng for providing
stroke inputs needed in our stroke-guided image generation
experiments, and the OpenAI Algorithms team.
9
Consistency Models
References
Balaji, Y., Nah, S., Huang, X., Vahdat, A., Song, J., Kreis,
K., Aittala, M., Aila, T., Laine, S., Catanzaro, B., Kar-
ras, T., and Liu, M.-Y. ediff-i: Text-to-image diffusion
models with ensemble of expert denoisers. arXiv preprint
arXiv:2 |
211.01324, 2022.
Biloˇs, M., Sommer, J., Rangapuram, S. S., Januschowski, T.,
and G¨unnemann, S. Neural flows: Efficient alternative to
neural odes. Advances in Neural Information Processing
Systems, 34:21325–21337, 2021.
Brock, A., Donahue, J., and Simonyan, K. Large scale
GAN training for high fidelity natural image synthesis. In
International Conference on Learning Representations,
2019. URL https://openreview.net/forum?
id=B1xsqj09Fm.
Chen, N., Zhang, Y., Zen, H., Weiss, R. J., Norouzi, M., and
Chan, W. Wavegrad: Estimating gradients for waveform
In International Conference on Learning
generation.
Representations (ICLR), 2021.
Chen, R. T., Rubanova, Y., Bettencourt, J., and Duvenaud,
D. K. Neural Ordinary Differential Equations. In Ad-
vances in neural information processing systems, pp.
6571–6583, 2018.
Chen, R. T., Behrmann, J., Duvenaud, D. K., and Jacobsen,
J.-H. Residual flows for invertible generative modeling.
In Advances in Neural Information Processing Systems,
pp. 9916–9926, 2019.
Chung, H., Kim, J., Mccann, M. T., Klasky, M. L., and Ye,
J. C. Diffusion posterior sampling for general noisy in-
verse problems. In International Conference on Learning
Representations, 2023. URL https://openreview.
net/forum?id=OnD9zGAGT0k.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei,
L. Imagenet: A large-scale hierarchical image database.
In 2009 IEEE conference on computer vision and pattern
recognition, pp. 248–255. Ieee, 2009.
Dhariwal, P. and Nichol, A. Diffusion models beat gans
on image synthesis. Advances in Neural Information
Processing Systems (NeurIPS), 2021.
OpenReview.net, 2017. URL https://openreview.
net/forum?id=HkpbnH9lx.
Dockhorn, T., Vahdat, A., and Kreis, K. Genie: Higher-
arXiv preprint
order denoising diffusion solvers.
arXiv:2210.05475, 2022.
Gong, X., Chang, S., Jiang, Y., and Wang, Z. Autogan:
Neural architecture search for generative adversarial net-
works. In Proceedings of the IEEE/CVF International
Conference on Computer Vision, pp. 3224–3234, 2019.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B.,
Warde-Farley, D., Ozair, S., Courville, A., and Bengio,
Y. Generative adversarial nets. In Advances in neural
information processing systems, pp. 2672–2680, 2014.
Grci´c, M., Grubiˇsi´c, I., and ˇSegvi´c, S. Densely connected
normalizing flows. Advances in Neural Information Pro-
cessing Systems, 34:23968–23982, 2021.
Grill, J.-B., Strub, F., Altch´e, F., Tallec, C., Richemond, P.,
Buchatskaya, E., Doersch, C., Avila Pires, B., Guo, Z.,
Gheshlaghi Azar, M., et al. Bootstrap your own latent-a
new approach to self-supervised learning. Advances in
neural information processing systems, 33:21271–21284,
2020.
He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Mo-
mentum contrast for unsupervised visual representation
learning. In Proceedings of the IEEE/CVF conference on
computer vision and pattern recognition, pp. 9729–9738,
2020.
Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and
Hochreiter, S. GANs trained by a two time-scale update
rule converge to a local Nash equilibrium. In Advances in
Neural Information Processing Systems, pp. 6626–6637,
2017.
Ho, J., Jain, A., and Abbeel, P. Denoising Diffusion Proba-
bilistic Models. Advances in Neural Information Process-
ing Systems, 33, 2020.
Ho, J., Chan, W., Saharia, C., Whang, J., Gao, R., Gritsenko,
A., Kingma, D. P., Poole, B., Norouzi, M., Fleet, D. J.,
et al. Imagen video: High definition video generation
with diffusion models. arXiv preprint arXiv:2210.02303,
2022a.
Dinh, L., Krueger, D., and Bengio, Y. NICE: Non-linear
independent components estimation. International Con-
ference in Learning Representations Workshop Track,
2015.
Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density es-
timation using real NVP. In 5th International Confer-
ence on Learning Representations, ICLR 2017, Toulon,
France, April 24-26, 2017, Conference Track Proceedings.
Ho, J., Salimans, T., Gritsenko, A. A., Chan, W., Norouzi,
M., and Fleet, D. J. Video diffusion models. In ICLR
Workshop on Deep Generative Models for Highly S |
truc-
tured Data, 2022b. URL https://openreview.
net/forum?id=BBelR2NdDZ5.
Hyv¨arinen, A. and Dayan, P. Estimation of non-normalized
statistical models by score matching. Journal of Machine
Learning Research (JMLR), 6(4), 2005.
10
Consistency Models
Jiang, Y., Chang, S., and Wang, Z. Transgan: Two pure
transformers can make one strong gan, and that can scale
up. Advances in Neural Information Processing Systems,
34:14745–14758, 2021.
Karras, T., Aila, T., Laine, S., and Lehtinen, J. Progres-
sive growing of GANs for improved quality, stability,
and variation. In International Conference on Learning
Representations, 2018. URL https://openreview.
net/forum?id=Hk99zCeAb.
Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J.,
and Aila, T. Analyzing and improving the image quality
of stylegan. 2020.
Karras, T., Aittala, M., Aila, T., and Laine, S. Elucidating
the design space of diffusion-based generative models. In
Proc. NeurIPS, 2022.
Kawar, B., Vaksman, G., and Elad, M. Snips: Solving
noisy inverse problems stochastically. arXiv preprint
arXiv:2105.14951, 2021.
Kawar, B., Elad, M., Ermon, S., and Song, J. Denoising
diffusion restoration models. In Advances in Neural In-
formation Processing Systems, 2022.
Kingma, D. P. and Dhariwal, P. Glow: Generative flow
with invertible 1x1 convolutions.
In Bengio, S., Wal-
lach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N.,
and Garnett, R. (eds.), Advances in Neural Information
Processing Systems 31, pp. 10215–10224. 2018.
Kingma, D. P. and Welling, M. Auto-encoding variational
bayes. In International Conference on Learning Repre-
sentations, 2014.
Kong, Z., Ping, W., Huang, J., Zhao, K., and Catanzaro,
B. DiffWave: A Versatile Diffusion Model for Audio
Synthesis. arXiv preprint arXiv:2009.09761, 2020.
Krizhevsky, A., Hinton, G., et al. Learning multiple layers
of features from tiny images. 2009.
Kynk¨a¨anniemi, T., Karras, T., Laine, S., Lehtinen, J., and
Aila, T. Improved precision and recall metric for assess-
ing generative models. Advances in Neural Information
Processing Systems, 32, 2019.
Lee, K., Chang, H., Jiang, L., Zhang, H., Tu, Z., and Liu,
C. Vitgan: Training gans with vision transformers. arXiv
preprint arXiv:2107.04589, 2021.
Liu, L., Jiang, H., He, P., Chen, W., Liu, X., Gao, J., and
Han, J. On the variance of the adaptive learning rate and
beyond. arXiv preprint arXiv:1908.03265, 2019.
Liu, X., Gong, C., and Liu, Q. Flow straight and fast:
Learning to generate and transfer data with rectified flow.
arXiv preprint arXiv:2209.03003, 2022.
Lu, C., Zhou, Y., Bao, F., Chen, J., Li, C., and Zhu, J.
Dpm-solver: A fast ode solver for diffusion probabilis-
tic model sampling in around 10 steps. arXiv preprint
arXiv:2206.00927, 2022.
Luhman, E. and Luhman, T. Knowledge distillation in
iterative generative models for improved sampling speed.
arXiv preprint arXiv:2101.02388, 2021.
Meng, C., Song, Y., Song, J., Wu, J., Zhu, J.-Y., and Ermon,
S. Sdedit: Image synthesis and editing with stochastic
differential equations. arXiv preprint arXiv:2108.01073,
2021.
Meng, C., Gao, R., Kingma, D. P., Ermon, S., Ho, J., and
Salimans, T. On distillation of guided diffusion models.
arXiv preprint arXiv:2210.03142, 2022.
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A.,
Antonoglou, I., Wierstra, D., and Riedmiller, M. Playing
atari with deep reinforcement learning. arXiv preprint
arXiv:1312.5602, 2013.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness,
J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidje-
land, A. K., Ostrovski, G., et al. Human-level control
through deep reinforcement learning. nature, 518(7540):
529–533, 2015.
Nichol, A., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin,
P., McGrew, B., Sutskever, I., and Chen, M. Glide:
Towards photorealistic image generation and editing
arXiv preprint
with text-guided diffusion models.
arXiv:2112.10741, 2021.
Parmar, G., Li, D., Lee, K., and Tu, Z. Dual contradistinctive
generative autoencoder. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition,
pp. 823–832, |
2021.
Popov, V., Vovk, I., Gogoryan, V., Sadekova, T., and Kudi-
nov, M. Grad-TTS: A diffusion probabilistic model for
text-to-speech. arXiv preprint arXiv:2105.06337, 2021.
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen,
M. Hierarchical text-conditional image generation with
clip latents. arXiv preprint arXiv:2204.06125, 2022.
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez,
T., Tassa, Y., Silver, D., and Wierstra, D. Continuous
control with deep reinforcement learning. arXiv preprint
arXiv:1509.02971, 2015.
Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic
backpropagation and approximate inference in deep gen-
erative models. In Proceedings of the 31st International
Conference on Machine Learning, pp. 1278–1286, 2014.
11
Consistency Models
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and
Ommer, B. High-resolution image synthesis with latent
diffusion models. In Proceedings of the IEEE/CVF Con-
ference on Computer Vision and Pattern Recognition, pp.
10684–10695, 2022.
Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton,
E., Ghasemipour, S. K. S., Ayan, B. K., Mahdavi, S. S.,
Lopes, R. G., et al. Photorealistic text-to-image diffusion
models with deep language understanding. arXiv preprint
arXiv:2205.11487, 2022.
Salimans, T. and Ho, J. Progressive distillation for fast
sampling of diffusion models. In International Confer-
ence on Learning Representations, 2022. URL https:
//openreview.net/forum?id=TIdIXIpzhoI.
Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V.,
Radford, A., and Chen, X. Improved techniques for train-
ing gans. In Advances in neural information processing
systems, pp. 2234–2242, 2016.
Sauer, A., Schwarz, K., and Geiger, A. Stylegan-xl: Scaling
stylegan to large diverse datasets. In ACM SIGGRAPH
2022 conference proceedings, pp. 1–10, 2022.
Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and
Ganguli, S. Deep Unsupervised Learning Using Nonequi-
librium Thermodynamics. In International Conference
on Machine Learning, pp. 2256–2265, 2015.
Song, J., Meng, C., and Ermon, S. Denoising diffusion
implicit models. arXiv preprint arXiv:2010.02502, 2020.
Song, J., Vahdat, A., Mardani, M., and Kautz, J.
Pseudoinverse-guided diffusion models for inverse prob-
lems. In International Conference on Learning Represen-
tations, 2023. URL https://openreview.net/
forum?id=9_gsMA8MRKQ.
Song, Y. and Ermon, S. Generative Modeling by Estimating
Gradients of the Data Distribution. In Advances in Neural
Information Processing Systems, pp. 11918–11930, 2019.
Song, Y. and Ermon, S. Improved Techniques for Training
Score-Based Generative Models. Advances in Neural
Information Processing Systems, 33, 2020.
Song, Y., Garg, S., Shi, J., and Ermon, S. Sliced score
matching: A scalable approach to density and score esti-
mation. In Proceedings of the Thirty-Fifth Conference on
Uncertainty in Artificial Intelligence, UAI 2019, Tel Aviv,
Israel, July 22-25, 2019, pp. 204, 2019.
Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A.,
Ermon, S., and Poole, B. Score-based generative mod-
In In-
eling through stochastic differential equations.
ternational Conference on Learning Representations,
12
2021. URL https://openreview.net/forum?
id=PxTIG12RRHS.
Song, Y., Shen, L., Xing, L., and Ermon, S. Solving inverse
problems in medical imaging with score-based genera-
tive models. In International Conference on Learning
Representations, 2022. URL https://openreview.
net/forum?id=vaRCHVj0uGI.
S¨uli, E. and Mayers, D. F. An introduction to numerical
analysis. Cambridge university press, 2003.
Tian, Y., Wang, Q., Huang, Z., Li, W., Dai, D., Yang, M.,
Wang, J., and Fink, O. Off-policy reinforcement learn-
ing for efficient and effective gan architecture search. In
Computer Vision–ECCV 2020: 16th European Confer-
ence, Glasgow, UK, August 23–28, 2020, Proceedings,
Part VII 16, pp. 175–192. Springer, 2020.
Vahdat, A., Kreis, K., and Kautz, J. Score-based generative
modeling in latent space. Advances in Neural Information
Processing Systems, 34:11287–11302, 2021.
Vincent, P. A C |
onnection Between Score Matching and
Denoising Autoencoders. Neural Computation, 23(7):
1661–1674, 2011.
Wu, J., Huang, Z., Acharya, D., Li, W., Thoma, J., Paudel,
D. P., and Gool, L. V. Sliced wasserstein generative
models. In Proceedings of the IEEE/CVF Conference
on Computer Vision and Pattern Recognition, pp. 3713–
3722, 2019.
Xiao, Z., Yan, Q., and Amit, Y. Generative latent flow. arXiv
preprint arXiv:1905.10485, 2019.
Xiao, Z., Kreis, K., and Vahdat, A. Tackling the generative
learning trilemma with denoising diffusion GANs. In
International Conference on Learning Representations,
2022. URL https://openreview.net/forum?
id=JprM0p-q0Co.
Xu, Y., Liu, Z., Tegmark, M., and Jaakkola, T. S. Pois-
son flow generative models. In Oh, A. H., Agarwal, A.,
Belgrave, D., and Cho, K. (eds.), Advances in Neural
Information Processing Systems, 2022. URL https:
//openreview.net/forum?id=voV_TRqcWh.
Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., and
Xiao, J. Lsun: Construction of a large-scale image dataset
using deep learning with humans in the loop. arXiv
preprint arXiv:1506.03365, 2015.
Zhang, Q. and Chen, Y.
models with exponential integrator.
arXiv:2204.13902, 2022.
Fast sampling of diffusion
arXiv preprint
Consistency Models
Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang,
O. The unreasonable effectiveness of deep features as a
perceptual metric. In CVPR, 2018.
Zheng, H., Nie, W., Vahdat, A., Azizzadenesheli, K., and
Anandkumar, A. Fast sampling of diffusion models
via operator learning. arXiv preprint arXiv:2211.13449,
2022.
Zheng, H., He, P., Chen, W., and Zhou, M. Truncated diffu-
sion probabilistic models and diffusion-based adversarial
In The Eleventh International Confer-
auto-encoders.
ence on Learning Representations, 2023. URL https:
//openreview.net/forum?id=HDxgaKk956l.
13
Consistency Models
Contents
1
Introduction
2 Diffusion Models
3 Consistency Models
4 Training Consistency Models via Distillation
5 Training Consistency Models in Isolation
6 Experiments
6.1 Training Consistency Models .
6.2 Few-Step Image Generation .
6.3 Zero-Shot Image Editing .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
7 Conclusion
Appendices
Appendix A Proofs
A.1 Notations
.
.
.
.
.
.
.
.
A.2 Consistency Distillation .
A.3 Consistency Training .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Appendix B Continuous-Time Extensions
B.1 Consistency Distillation in Continuous Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
B.2 Consistency Training in Continuous Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
.
B.3 Experimental Verifications .
.
.
.
.
.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Appendix C Additional Experimental Details
Model Architectures .
.
.
.
.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Parameterization for Consistency Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Schedule Functions for Consistency Training . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Training Details .
.
.
.
.
.
.
.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Appendix D Additional Results on Zero-Shot Image Editing
Inpainting .
.
Colorization
.
.
.
.
.
.
Super-resolution .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14
1 |
2
3
4
5
6
6
7
9
9
15
15
15
15
16
18
18
22
24
25
25
25
26
26
26
27
27
28
Consistency Models
Stroke-guided image generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Denoising .
.
.
Interpolation .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Appendix E Additional Samples from Consistency Models
28
28
28
28
Appendices
A. Proofs
A.1. Notations
We use fθpx, tq to denote a consistency model parameterized by θ, and f px, t; ϕq the consistency function of the empirical
PF ODE in Eq. (3). Here ϕ symbolizes its dependency on the pre-trained score model sϕpx, tq. For the consistency function
of the PF ODE in Eq. (2), we denote it as f px, tq. Given a multi-variate function hpx, yq, we let B1hpx, yq denote the
Jacobian of h over x, and analogously B2hpx, yq denote the Jacobian of h over y. Unless otherwise stated, x is supposed to
be a random variable sampled from the data distribution pdatapxq, n is sampled uniformly at random from
, and
xtn is sampled from N px; t2
represents the set of integers t1, 2, ¨ ¨ ¨ , N ´ 1u. Furthermore, recall that
we define
1, N ´ 1
(cid:75)
(cid:74)
1, N ´ 1
(cid:75)
nIq. Here
(cid:74)
ˆxϕ
tn
:“ xtn`1 ` ptn ´ tn`1qΦpxtn`1 , tn`1; ϕq,
where Φp¨ ¨ ¨ ; ϕq denotes the update function of a one-step ODE solver for the empirical PF ODE defined by the score
model sϕpx, tq. By default, Er¨s denotes the expectation over all relevant random variables in the expression.
A.2. Consistency Distillation
Theorem 1. Let ∆t :“ maxnP
t|tn`1 ´ tn|u, and f p¨, ¨; ϕq be the consistency function of the empirical PF ODE
(cid:74)
in Eq. (3). Assume fθ satisfies the Lipschitz condition: there exists L ą 0 such that for all t P rϵ, T s, x, and y, we have
∥fθpx, tq ´ fθpy, tq∥2 ď L ∥x ´ y∥2. Assume further that for all n P
, the ODE solver called at tn`1 has local
1, N ´ 1
(cid:74)
(cid:75)
error uniformly bounded by Opptn`1 ´ tnqp`1q with p ě 1. Then, if LN
CDpθ, θ; ϕq “ 0, we have
1,N ´1
(cid:75)
}fθpx, tnq ´ f px, tn; ϕq}2 “ Opp∆tqpq.
sup
n,x
Proof. From LN
CDpθ, θ; ϕq “ 0, we have
CDpθ, θ; ϕq “ Erλptnqdpfθpxtn`1 , tn`1q, fθpˆxϕ
LN
tn , tnqqs “ 0.
(11)
According to the definition, we have ptn pxtn q “ pdatapxq b N p0, t2
every xtn and 1 ď n ď N . Therefore, Eq. (11) entails
nIq where tn ě ϵ ą 0. It follows that ptn pxtn q ą 0 for
Because λp¨q ą 0 and dpx, yq “ 0 ô x “ y, this further implies that
λptnqdpfθpxtn`1, tn`1q, fθpˆxϕ
tn, tnqq ” 0.
fθpxtn`1, tn`1q ” fθpˆxϕ
tn, tnq.
Now let en represent the error vector at tn, which is defined as
en :“ fθpxtn , tnq ´ f pxtn , tn; ϕq.
We can easily derive the following recursion relation
en`1 “ fθpxtn`1, tn`1q ´ f pxtn`1 , tn`1; ϕq
15
(12)
(13)
Consistency Models
piq
“ fθpˆxϕ
“ fθpˆxϕ
“ fθpˆxϕ
tn , tnq ´ f pxtn , tn; ϕq
tn , tnq ´ fθpxtn , tnq ` fθpxtn , tnq ´ f pxtn, tn; ϕq
tn , tnq ´ fθpxtn , tnq ` en,
(14)
where (i) is due to Eq. (13) and f pxtn`1, tn`1; ϕq “ f pxtn , tn; ϕq. Because fθp¨, tnq has Lipschitz constant L, we have
∥en`1∥2 ď ∥en∥2 ` L
(cid:13)
(cid:13)
(cid:13)2
tn ´ xtn
(cid:13)
(cid:13)ˆxϕ
(cid:13)
piq
“ ∥en∥2 ` L ¨ Opptn`1 ´ tnqp`1q
“ ∥en∥2 ` Opptn`1 ´ tnqp`1q,
where (i) holds because the ODE solver has local error bounded by Opptn`1 ´ tnqp`1q. In addition, we observe that e1 “ 0,
because
e1 “ fθpxt1 , t1q ´ f pxt1, t1; ϕq
piq
“ xt1 ´ f pxt1, t1; ϕq
piiq
“ xt1 ´ xt1
“ 0.
Here (i) is true because the consistency model is parameterized such that f pxt1, t1; ϕq “ xt1 and (ii) is entailed by the
definition of f p¨, ¨; ϕq. This allows us to perform induction on the recursion formula Eq. (14) to obtain
∥en∥2 ď ∥e1∥2 `
n´1ÿ
k“1
Opptk`1 ´ tkqp`1q
n´1ÿ
Opptk`1 ´ tkqp`1q
k“1
n´1ÿ
ptk`1 ´ tkqOpptk`1 ´ tkqpq
k“1
n´1ÿ
ptk`1 ´ tkqOpp∆tqpq
“
“
ď
k“1
“ Opp∆tqpq
n´1ÿ
ptk`1 ´ tkq
k“1
“ Opp∆tqpqptn ´ t1q
ď Opp∆tqpqpT ´ ϵq
“ Opp∆tqpq,
which completes the proof.
A.3. Consistency Training
The following lemm |
a provides an unbiased estimator for the score function, which is crucial to our proof for Theorem 2.
Lemma 1. Let x „ pdatapxq, xt „ N px; t2Iq, and ptpxtq “ pdatapxq b N p0, t2Iq. We have ∇ log ptpxq “ ´Er xt´x
| xts.
t2
Proof. According to the definition of ptpxtq, we have ∇ log ptpxtq “ ∇xt log
N pxt; x, t2Iq. This expression can be further simplified to yield
ş
pdatapxqppxt | xq dx, where ppxt | xq “
∇ log ptpxtq “
ş
pdatapxq∇xtppxt | xq dx
ş
pdatapxqppxt | xq dx
16
Consistency Models
ş
ş
ż
ż
“
“
“
piq
“
ş
pdatapxqppxt | xq∇xt log ppxt | xq dx
pdatapxqppxt | xq dx
pdatapxqppxt | xq∇xt log ppxt | xq dx
ptpxtq
pdatapxqppxt | xq
ptpxtq
∇xt log ppxt | xq dx
ppx | xtq∇xt log ppxt | xq dx
where (i) is due to Bayes’ rule.
“ ´E
“ Er∇xt log ppxt | xq | xts
„
xt ´ x
t2
ȷ
| xt
,
Theorem 2. Let ∆t :“ maxnP
t|tn`1 ´ tn|u. Assume d and fθ´ are both twice continuously differentiable with
bounded second derivatives, the weighting function λp¨q is bounded, and Er∥∇ log ptn pxtnq∥2
2s ă 8. Assume further that
we use the Euler ODE solver, and the pre-trained score model matches the ground truth, i.e., @t P rϵ, T s : sϕpx, tq ”
∇ log ptpxq. Then,
1,N ´1
(cid:74)
(cid:75)
LN
CDpθ, θ´; ϕq “ LN
CTpθ, θ´q ` op∆tq,
where the expectation is taken with respect to x „ pdata, n „ U
training objective, denoted by LN
CTpθ, θ´q, is defined as
1, N ´ 1
(cid:75)
(cid:74)
, and xtn`1 „ N px; t2
n`1Iq. The consistency
Erλptnqdpfθpx ` tn`1z, tn`1q, fθ´ px ` tnz, tnqqs,
where z „ N p0, Iq. Moreover, LN
CTpθ, θ´q ě Op∆tq if inf N LN
CDpθ, θ´; ϕq ą 0.
Proof. With Taylor expansion, we have
CDpθ, θ´; ϕq “ Erλptnqdpfθpxtn`1, tn`1q, fθ´ pˆxϕ
LN
tn , tnqs
“Erλptnqdpfθpxtn`1 , tn`1q, fθ´ pxtn`1 ` ptn`1 ´ tnqtn`1∇ log ptn`1pxtn`1q, tnqqs
“Erλptnqdpfθpxtn`1 , tn`1q, fθ´ pxtn`1, tn`1q ` B1fθ´ pxtn`1, tn`1qptn`1 ´ tnqtn`1∇ log ptn`1pxtn`1q
` B2fθ´pxtn`1 , tn`1qptn ´ tn`1q ` op|tn`1 ´ tn|qqs
“Etλptnqdpfθpxtn`1 , tn`1q, fθ´ pxtn`1, tn`1qq ` λptnqB2dpfθpxtn`1 , tn`1q, fθ´ pxtn`1, tn`1qqr
B1fθ´ pxtn`1, tn`1qptn`1 ´ tnqtn`1∇ log ptn`1pxtn`1q ` B2fθ´ pxtn`1, tn`1qptn ´ tn`1q ` op|tn`1 ´ tn|qsu
“Erλptnqdpfθpxtn`1 , tn`1q, fθ´ pxtn`1, tn`1qqs
` EtλptnqB2dpfθpxtn`1 , tn`1q, fθ´ pxtn`1, tn`1qqrB1fθ´pxtn`1 , tn`1qptn`1 ´ tnqtn`1∇ log ptn`1 pxtn`1 qsu
` EtλptnqB2dpfθpxtn`1, tn`1q, fθ´ pxtn`1 , tn`1qqrB2fθ´ pxtn`1, tn`1qptn ´ tn`1qsu ` Erop|tn`1 ´ tn|qs.
Then, we apply Lemma 1 to Eq. (15) and use Taylor expansion in the reverse direction to obtain
LN
CDpθ, θ´; ϕq
“Erλptnqdpfθpxtn`1, tn`1q, fθ´pxtn`1, tn`1qqs
"
„
` E
λptnqB2dpfθpxtn`1 , tn`1q, fθ´ pxtn`1, tn`1qq
B1fθ´ pxtn`1 , tn`1qptn ´ tn`1qtn`1E
(15)
„
xtn`1 ´ x
t2
n`1
ȷȷ*
ˇ
ˇ
ˇxtn`1
` EtλptnqB2dpfθpxtn`1 , tn`1q, fθ´ pxtn`1, tn`1qqrB2fθ´pxtn`1 , tn`1qptn ´ tn`1qsu ` Erop|tn`1 ´ tn|qs
piq
“Erλptnqdpfθpxtn`1, tn`1q, fθ´pxtn`1, tn`1qqs
"
„
` E
λptnqB2dpfθpxtn`1 , tn`1q, fθ´ pxtn`1, tn`1qq
B1fθ´ pxtn`1 , tn`1qptn ´ tn`1qtn`1
˙ȷ*
ˆ
xtn`1 ´ x
t2
n`1
17
„
“E
` EtλptnqB2dpfθpxtn`1 , tn`1q, fθ´ pxtn`1, tn`1qqrB2fθ´pxtn`1 , tn`1qptn ´ tn`1qsu ` Erop|tn`1 ´ tn|qs
Consistency Models
λptnqdpfθpxtn`1, tn`1q, fθ´pxtn`1, tn`1qq
„
ˆ
` λptnqB2dpfθpxtn`1, tn`1q, fθ´ pxtn`1 , tn`1qq
B1fθ´ pxtn`1 , tn`1qptn ´ tn`1qtn`1
˙ȷ
xtn`1 ´ x
t2
n`1
ȷ
` λptnqB2dpfθpxtn`1, tn`1q, fθ´ pxtn`1 , tn`1qqrB2fθ´pxtn`1, tn`1qptn ´ tn`1qs ` op|tn`1 ´ tn|q
` Erop|tn`1 ´ tn|qs
ˆ
λptnqd
fθpxtn`1, tn`1q, fθ´
ˆ
ˆ
ˆ
„
„
“E
“E
λptnqd
fθpxtn`1, tn`1q, fθ´
xtn`1 ` ptn ´ tn`1q
xtn`1 ` ptn ´ tn`1qtn`1
˙˙ȷ
xtn`1 ´ x
t2
n`1
xtn`1 ´ x
tn`1
, tn
, tn
˙˙ȷ
` Erop|tn`1 ´ tn|qs
` Erop|tn`1 ´ tn|qs
“E rλptnqd pfθpx ` tn`1z, tn`1q, fθ´ px ` tn`1z ` ptn ´ tn`1qz, tnqqs ` Erop|tn`1 ´ tn|qs
“E rλptnqd pfθpx ` tn`1z, tn`1q, fθ´ px ` tnz, tnqqs ` Erop|tn`1 ´ tn|qs
“E rλptnqd pfθpx ` tn`1z, tn`1q, fθ´ px ` tnz, tnqqs ` Erop∆tqs
“E rλptnqd pfθpx ` tn`1z, tn`1q, fθ´ px ` tnz, tnqqs ` op∆tq
“LN
CTpθ, θ´q ` op∆tq,
(16)
CTpθ, θ´q ` op∆tq and thus completes the proof for Eq. (9). Moreover, we have LN
where (i) is due to the law of total expectation, and z :“
LN
inf N LN
contradict |
ion to inf N LN
CDpθ, θ´; ϕq ą 0. Otherwise, LN
CDpθ, θ´; ϕq ą 0.
CTpθ, θ´q ă Op∆tq and thus lim∆tÑ0 LN
„ N p0, Iq. This implies LN
CDpθ, θ´; ϕq “
CTpθ, θ´q ě Op∆tq whenever
CDpθ, θ´; ϕq “ 0, which is a clear
xtn`1 ´x
tn`1
Remark 1. When the condition LN
validity of LN
in Theorem 6.
CTpθ, θ´q ě Op∆tq is not satisfied, such as in the case where θ´ “ stopgradpθq, the
CTpθ, θ´q as a training objective for consistency models can still be justified by referencing the result provided
B. Continuous-Time Extensions
The consistency distillation and consistency training objectives can be generalized to hold for infinite time steps (N Ñ 8)
under suitable conditions.
B.1. Consistency Distillation in Continuous Time
Depending on whether θ´ “ θ or θ´ “ stopgradpθq (same as setting µ “ 0), there are two possible continuous-time
extensions for the consistency distillation objective LN
CDpθ, θ´; ϕq. Given a twice continuously differentiable metric function
dpx, yq, we define Gpxq as a matrix, whose pi, jq-th entry is given by
Similarly, we define Hpxq as
rGpxqsij :“
rHpxqsij :“
B2dpx, yq
ByiByj
ˇ
ˇ
ˇ
ˇ
y“x
.
B2dpy, xq
ByiByj
ˇ
ˇ
ˇ
ˇ
.
y“x
The matrices G and H play a crucial role in forming continuous-time objectives for consistency distillation. Additionally,
we denote the Jacobian of fθpx, tq with respect to x as Bfθ px,tq
When θ´ “ θ (with no stopgrad operator), we have the following theoretical result.
Theorem 3. Let tn “ τ p n´1
, and τ p¨q is a strictly monotonic function with τ p0q “ ϵ and τ p1q “ T .
(cid:75)
Assume τ is continuously differentiable in r0, 1s, d is three times continuously differentiable with bounded third derivatives,
N ´1 q, where n P
1, N
Bx
(cid:74)
.
18
and fθ is twice continuously differentiable with bounded first and second derivatives. Assume further that the weighting
function λp¨q is bounded, and supx,tPrϵ,T s ∥sϕpx, tq∥2 ă 8. Then with the Euler solver in consistency distillation, we have
Consistency Models
lim
NÑ8
pN ´ 1q2LN
CDpθ, θ; ϕq “ L8
CDpθ, θ; ϕq,
(17)
where L8
CDpθ, θ; ϕq is defined as
ˆ
λptq
rpτ ´1q1ptqs2
Bfθpxt, tq
Bt
«
E
1
2
´ t
Bfθpxt, tq
Bxt
˙
T
ˆ
sϕpxt, tq
Gpfθpxt, tqq
Bfθpxt, tq
Bt
´ t
Bfθpxt, tq
Bxt
˙ff
sϕpxt, tq
.
(18)
Here the expectation above is taken over x „ pdata, u „ Ur0, 1s, t “ τ puq, and xt „ N px, t2Iq.
Proof. Let ∆u “ 1
N ´1 and un “ n´1
N ´1 . First, we can derive the following equation with Taylor expansion:
(19)
˙
fθpˆxϕ
“tn`1
tn , tnq ´ fθpxtn`1, tn`1q “ fθpxtn`1 ` tn`1sϕpxtn`1, tn`1qτ 1punq∆u, tnq ´ fθpxtn`1 , tn`1q
Bfθpxtn`1 , tn`1q
Bxtn`1
Bfθpxtn`1, tn`1q
Btn`1
sϕpxtn`1 , tn`1qτ 1punq∆u ´
τ 1punq∆u ` Opp∆uq2q,
Note that τ 1punq “
1
τ ´1ptn`1q . Then, we apply Taylor expansion to the consistency distillation loss, which gives
pN ´ 1q2LN
CDpθ, θ; ϕq “
1
p∆uq2 LN
CDpθ, θ; ϕq “
1
p∆uq2
Erλptnqdpfθpxtn`1 , tn`1q, fθpˆxϕ
tn , tnqs
ˆ
piq
“
1
2p∆uq2
Etλptnqτ 1punq2rfθpˆxϕ
tn, tnq ´ fθpxtn`1, tn`1qsTGpfθpxtn`1, tn`1qq
¨ rfθpˆxϕ
tn , tnq ´ fθpxtn`1 , tn`1qsu ` ErOp|∆u|3qs
„
λptnqτ 1punq2
E
ˆ
piiq
“
1
2
„
E
“
1
2
λptnq
rpτ ´1q1ptnqs2
ˆ
Bfθpxtn`1 , tn`1q
Btn`1
ˆ
´ tn`1
Bfθpxtn`1 , tn`1q
Bxtn`1
Bfθpxtn`1 , tn`1q
Btn`1
´ tn`1
Bfθpxtn`1 , tn`1q
Bxtn`1
Bfθpxtn`1, tn`1q
Btn`1
ˆ
´ tn`1
Bfθpxtn`1, tn`1q
Bxtn`1
Bfθpxtn`1, tn`1q
Btn`1
´ tn`1
Bfθpxtn`1 , tn`1q
Bxtn`1
¨
¨
˙T
sϕpxtn`1 , tn`1q
Gpfθpxtn`1, tn`1qq
˙ȷ
sϕpxtn`1 , tn`1q
˙T
` ErOp|∆u|qs
sϕpxtn`1, tn`1q
Gpfθpxtn`1, tn`1qq
(20)
˙ȷ
sϕpxtn`1 , tn`1q
` ErOp|∆u|qs
where we obtain (i) by expanding dpfθpxtn`1, tn`1q, ¨q to second order and observing dpx, xq ” 0 and ∇ydpx, yq|y“x ” 0.
We obtain (ii) using Eq. (19). By taking the limit for both sides of Eq. (20) as ∆u Ñ 0 or equivalently N Ñ 8, we arrive at
Eq. (17), which completes the proof.
Remark 2. Although Theorem 3 assumes the Euler ODE solver for technical simplicity, we believe an analogous result can
be derived for more general solvers, since all ODE solvers should perform similarly as N Ñ 8. We leave a more general
version of Theorem 3 as future work.
Remark |
3. Theorem 3 implies that consistency models can be trained by minimizing L8
dpx, yq “ ∥x ´ y∥2
CDpθ, θ; ϕq. In particular, when
2, we have
CDpθ, θ; ϕq “ E
L8
«
λptq
rpτ ´1q1ptqs2
(cid:13)
(cid:13)
(cid:13)
(cid:13)
Bfθpxt, tq
Bt
´ t
Bfθpxt, tq
Bxt
sϕpxt, tq
ff
.
(cid:13)
2
(cid:13)
(cid:13)
(cid:13)
2
(21)
However, this continuous-time objective requires computing Jacobian-vector products as a subroutine to evaluate the loss
function, which can be slow and laborious to implement in deep learning frameworks that do not support forward-mode
automatic differentiation.
19
Remark 4. If fθpx, tq matches the ground truth consistency function for the empirical PF ODE of sϕpx, tq, then
Consistency Models
Bfθpx, tq
Bt
´ t
Bfθpx, tq
Bx
sϕpx, tq ” 0
and therefore L8
time-derivative of this identity:
CDpθ, θ; ϕq “ 0. This can be proved by noting that fθpxt, tq ” xϵ for all t P rϵ, T s, and then taking the
fθpxt, tq ” xϵ
dxt
Bfθpxt, tq
dt
Bxt
Bfθpxt, tq
Bxt
Bfθpxt, tq
Bt
´ t
ðñ
ðñ
ðñ
`
Bfθpxt, tq
Bt
” 0
r´tsϕpxt, tqs `
Bfθpxt, tq
Bt
” 0
Bfθpxt, tq
Bxt
sϕpxt, tq ” 0.
The above observation provides another motivation for L8
matches the ground truth consistency function.
CDpθ, θ; ϕq, as it is minimized if and only if the consistency model
For some metric functions, such as the ℓ1 norm, the Hessian Gpxq is zero so Theorem 3 is vacuous. Below we show that a
non-vacuous statement holds for the ℓ1 norm with just a small modification of the proof for Theorem 3.
Theorem 4. Let tn “ τ p n´1
, and τ p¨q is a strictly monotonic function with τ p0q “ ϵ and τ p1q “ T .
(cid:75)
Assume τ is continuously differentiable in r0, 1s, and fθ is twice continuously differentiable with bounded first and second
derivatives. Assume further that the weighting function λp¨q is bounded, and supx,tPrϵ,T s ∥sϕpx, tq∥2 ă 8. Suppose we use
the Euler ODE solver, and set dpx, yq “ ∥x ´ y∥1 in consistency distillation. Then we have
N ´1 q, where n P
1, N
(cid:74)
lim
NÑ8
pN ´ 1qLN
CDpθ, θ; ϕq “ L8
CD, ℓ1 pθ, θ; ϕq,
where
CD, ℓ1pθ, θ; ϕq :“ E
L8
„
λptq
pτ ´1q1ptq
(cid:13)
(cid:13)
t
(cid:13)
(cid:13)
Bfθpxt, tq
Bxt
sϕpxt, tq ´
Bfθpxt, tq
Bt
ȷ
(cid:13)
(cid:13)
(cid:13)
(cid:13)1
where the expectation above is taken over x „ pdata, u „ Ur0, 1s, t “ τ puq, and xt „ N px, t2Iq.
Proof. Let ∆u “ 1
N ´1 and un “ n´1
N ´1 . We have
pN ´ 1qLN
„
CDpθ, θ; ϕq “
E
λptnq
(cid:13)
(cid:13)
tn`1
(cid:13)
(cid:13)
1
∆u
LN
CDpθ, θ; ϕq “
1
∆u
Erλptnq}fθpxtn`1 , tn`1q ´ fθpˆxϕ
tn , tnq}1s
Bfθpxtn`1, tn`1q
Bxtn`1
sϕpxtn`1, tn`1qτ 1punq ´
Bfθpxtn`1, tn`1q
Btn`1
τ 1punq ` Opp∆uq2q
ȷ
λptnqτ 1punq
λptnq
pτ ´1q1ptnq
(cid:13)
(cid:13)
tn`1
(cid:13)
(cid:13)
(cid:13)
(cid:13)
tn`1
(cid:13)
(cid:13)
Bfθpxtn`1, tn`1q
Bxtn`1
Bfθpxtn`1, tn`1q
Bxtn`1
sϕpxtn`1, tn`1q ´
sϕpxtn`1, tn`1q ´
Bfθpxtn`1, tn`1q
Btn`1
Bfθpxtn`1 , tn`1q
Btn`1
` Op∆uq
` Op∆uq
(cid:13)
(cid:13)
(cid:13)
(cid:13)1
ȷ
(cid:13)
(cid:13)
(cid:13)
(cid:13)1
piq
“
1
∆u
„
“E
“E
„
(22)
ȷ
(cid:13)
(cid:13)
(cid:13)
(cid:13)1
(23)
where (i) is obtained by plugging Eq. (19) into the previous equation. Taking the limit for both sides of Eq. (23) as ∆u Ñ 0
or equivalently N Ñ 8 leads to Eq. (22), which completes the proof.
Remark 5. According to Theorem 4, consistency models can be trained by minimizing L8
reasoning in Remark 4 can be applied to show that L8
t P rϵ, T s.
CD, ℓ1pθ, θ; ϕq. Moreover, the same
CD, ℓ1pθ, θ; ϕq “ 0 if and only if fθpxt, tq “ xϵ for all xt P Rd and
In the second case where θ´ “ stopgradpθq, we can derive a so-called “pseudo-objective” whose gradient matches the
gradient of LN
CDpθ, θ´; ϕq in the limit of N Ñ 8. Minimizing this pseudo-objective with gradient descent gives another
way to train consistency models via distillation. This pseudo-objective is provided by the theorem below.
20
Consistency Models
N ´1 q, where n P
Theorem 5. Let tn “ τ p n´1
, and τ p¨q is a strictly monotonic function with τ p0q “ ϵ and τ p1q “ T .
(cid:75)
Assume τ is continuously differentiable in r0, 1s, d is three times continuously di |
fferentiable with bounded third derivatives,
and fθ is twice continuously differentiable with bounded first and second derivatives. Assume further that the weighting
function λp¨q is bounded, supx,tPrϵ,T s ∥sϕpx, tq∥2 ă 8, and supx,tPrϵ,T s ∥∇θfθpx, tq∥2 ă 8. Suppose we use the Euler
ODE solver, and θ´ “ stopgradpθq in consistency distillation. Then,
1, N
(cid:74)
lim
NÑ8
pN ´ 1q∇θLN
CDpθ, θ´; ϕq “ ∇θL8
CDpθ, θ´; ϕq,
(24)
where
CDpθ, θ´; ϕq :“ E
L8
„
λptq
pτ ´1q1ptq
fθpxt, tqTHpfθ´ pxt, tqq
ˆ
Bfθ´ pxt, tq
Bt
´ t
Bfθ´ pxt, tq
Bxt
˙ȷ
sϕpxt, tq
.
(25)
Here the expectation above is taken over x „ pdata, u „ Ur0, 1s, t “ τ puq, and xt „ N px, t2Iq.
Proof. We denote ∆u “ 1
N ´1 and un “ n´1
N ´1 . First, we leverage Taylor series expansion to obtain
pN ´ 1qLN
ˆ
piq
“
1
2∆u
CDpθ, θ´; ϕq “
LN
CDpθ, θ´; ϕq “
Erλptnqdpfθpxtn`1 , tn`1q, fθ´ pˆxϕ
tn, tnqs
1
∆u
1
∆u
tn , tnqsTHpfθ´ pˆxϕ
tn , tnqq
Etλptnqrfθpxtn`1, tn`1q ´ fθ´pˆxϕ
˙
¨ rfθpxtn`1, tn`1q ´ fθ´pˆxϕ
tn , tnqsu ` ErOp|∆u|3qs
“
1
2∆u
Etλptnqrfθpxtn`1, tn`1q ´ fθ´ pˆxϕ
tn , tnqsTHpfθ´ pˆxϕ
tn, tnqqrfθpxtn`1 , tn`1q ´ fθ´ pˆxϕ
tn, tnqsu ` ErOp|∆u|2qs
(26)
where (i) is derived by expanding dp¨, fθ´ pˆxϕ
Next, we compute the gradient of Eq. (26) with respect to θ and simplify the result to obtain
tn, tnqq to second order and leveraging dpx, xq ” 0 and ∇ydpy, xq|y“x ” 0.
pN ´ 1q∇θLN
CDpθ, θ´; ϕq “
1
∆u
∇θLN
CDpθ, θ´; ϕq
∇θEtλptnqrfθpxtn`1 , tn`1q ´ fθ´ pˆxϕ
tn , tnqsTHpfθ´ pˆxϕ
tn , tnqqrfθpxtn`1, tn`1q ´ fθ´ pˆxϕ
tn , tnqsu ` ErOp|∆u|2qs
Etλptnqr∇θfθpxtn`1, tn`1qsTHpfθ´ pˆxϕ
"
λptnqr∇θfθpxtn`1, tn`1qsTHpfθ´ pˆxϕ
E
„
tn, tnqq
tn`1
Bfθ´pxtn`1, tn`1q
Bxtn`1
tn , tnqqrfθpxtn`1 , tn`1q ´ fθ´ pˆxϕ
tn, tnqsu ` ErOp|∆u|2qs
“
piq
“
piiq
“
1
2∆u
1
∆u
1
∆u
"
λptnqr∇θfθpxtn`1, tn`1qsTHpfθ´pˆxϕ
“E
tn , tnqq
tn`1
"
λptnqrfθpxtn`1, tn`1qsTHpfθ´pˆxϕ
“∇θE
tn , tnqq
tn`1
Bfθ´ pxtn`1, tn`1q
Bxtn`1
"
“∇θE
λptnq
pτ ´1q1ptnq
rfθpxtn`1 , tn`1qsTHpfθ´ pˆxϕ
tn , tnqq
tn`1
„
sϕpxtn`1, tn`1qτ 1punq∆u
ȷ*
τ 1punq∆u
` ErOp|∆u|qs
´
Bfθ´ pxtn`1 , tn`1q
Btn`1
Bfθ´ pxtn`1, tn`1q
Bxtn`1
sϕpxtn`1, tn`1qτ 1punq
´
Bfθ´ pxtn`1, tn`1q
Btn`1
ȷ*
τ 1punq
` ErOp|∆u|qs
„
„
sϕpxtn`1, tn`1qτ 1punq
(27)
ȷ*
τ 1punq
` ErOp|∆u|qs
´
Bfθ´ pxtn`1, tn`1q
Btn`1
Bfθ´ pxtn`1 , tn`1q
Bxtn`1
´
Bfθ´ pxtn`1 , tn`1q
Btn`1
21
sϕpxtn`1 , tn`1q
ȷ*
` ErOp|∆u|qs
Consistency Models
Here (i) results from the chain rule, and (ii) follows from Eq. (19) and fθpx, tq ” fθ´ px, tq, since θ´ “ stopgradpθq.
Taking the limit for both sides of Eq. (28) as ∆u Ñ 0 (or N Ñ 8) yields Eq. (24), which completes the proof.
Remark 6. When dpx, yq “ ∥x ´ y∥2
2, the pseudo-objective L8
„
CDpθ, θ´; ϕq can be simplified to
˙ȷ
ˆ
CDpθ, θ´; ϕq “ 2E
L8
λptq
pτ ´1q1ptq
fθpxt, tqT
Bfθ´ pxt, tq
Bt
´ t
Bfθ´ pxt, tq
Bxt
sϕpxt, tq
.
(28)
CDpθ, θ´; ϕq defined in Theorem 5 is only meaningful in terms of its gradient—one cannot
Remark 7. The objective L8
measure the progress of training by tracking the value of L8
CDpθ, θ´; ϕq, but can still apply gradient descent to this objective
to distill consistency models from pre-trained diffusion models. Because this objective is not a typical loss function, we refer
to it as the “pseudo-objective” for consistency distillation.
Remark 8. Following the same reasoning in Remark 4, we can easily derive that L8
∇θL8
volves sϕpx, tq. However, the converse does not hold true in general. This distinguishes L8
the latter of which is a true loss function.
CDpθ, θ´; ϕq “ 0 and
CDpθ, θ´; ϕq “ 0 if fθpx, tq matches the ground truth consistency function for the empirical PF ODE that in-
CDpθ, θ; ϕq,
CDpθ, θ´; ϕq from L8
B.2. Consistency Training in Continuous Time
A remarkable observation is that the pseudo-objective in Theorem 5 can be estimated without any pre-trained diffusion
models, which enables direct consistency training of consistency models. More precisely, we have the following result.
Theorem 6. Let tn “ τ p n´1
, and τ p¨q is a strictly monotonic function with τ p0q “ ϵ and τ p1q “ T .
(cid:75)
Assume τ is cont |
inuously differentiable in r0, 1s, d is three times continuously differentiable with bounded third derivatives,
and fθ is twice continuously differentiable with bounded first and second derivatives. Assume further that the weighting
function λp¨q is bounded, Er∥∇ log ptnpxtn q∥2
2s ă 8, supx,tPrϵ,T s ∥∇θfθpx, tq∥2 ă 8, and ϕ represents diffusion model
parameters that satisfy sϕpx, tq ” ∇ log ptpxq. Then if θ´ “ stopgradpθq, we have
N ´1 q, where n P
1, N
(cid:74)
lim
NÑ8
pN ´ 1q∇θLN
CDpθ, θ´; ϕq “ lim
NÑ8
pN ´ 1q∇θLN
CTpθ, θ´q “ ∇θL8
CTpθ, θ´q,
where LN
CD uses the Euler ODE solver, and
„
CTpθ, θ´q :“ E
L8
λptq
pτ ´1q1ptq
fθpxt, tqTHpfθ´ pxt, tqq
ˆ
Bfθ´ pxt, tq
Bt
`
Bfθ´ pxt, tq
Bxt
¨
xt ´ x
t
˙ȷ
.
Here the expectation above is taken over x „ pdata, u „ Ur0, 1s, t “ τ puq, and xt „ N px, t2Iq.
Proof. The proof mostly follows that of Theorem 5. First, we leverage Taylor series expansion to obtain
(29)
(30)
pN ´ 1qLN
ˆ
CTpθ, θ´q “
1
∆u
Etλptnqrfθpx ` tn`1z, tn`1q ´ fθ´px ` tnz, tnqsTHpfθ´px ` tnz, tnqq
CTpθ, θ´q “
1
∆u
LN
Erλptnqdpfθpx ` tn`1z, tn`1q, fθ´px ` tnz, tnqqs
piq
“
1
2∆u
¨ rfθpx ` tn`1z, tn`1q ´ fθ´ px ` tnz, tnqsu ` ErOp|∆u|3qs
˙
“
1
2∆u
Etλptnqrfθpx ` tn`1z, tn`1q ´ fθ´ px ` tnz, tnqsTHpfθ´ px ` tnz, tnqq
(31)
¨ rfθpx ` tn`1z, tn`1q ´ fθ´ px ` tnz, tnqsu ` ErOp|∆u|2qs
where z „ N p0, Iq, (i) is derived by first expanding dp¨, fθ´ px ` tnz, tnqq to second order, and then noting that dpx, xq ” 0
and ∇ydpy, xq|y“x ” 0. Next, we compute the gradient of Eq. (31) with respect to θ and simplify the result to obtain
pN ´ 1q∇θLN
CTpθ, θ´q “
1
∆u
∇θLN
CTpθ, θ´q
“
1
2∆u
∇θEtλptnqrfθpx ` tn`1z, tn`1q ´ fθ´px ` tnz, tnqsTHpfθ´px ` tnz, tnqq
¨ rfθpx ` tn`1z, tn`1q ´ fθ´ px ` tnz, tnqsu ` ErOp|∆u|2qs
22
Consistency Models
piq
“
1
∆u
piiq
“
1
∆u
"
Etλptnqr∇θfθpx ` tn`1z, tn`1qsTHpfθ´px ` tnz, tnqq
(32)
"
¨ rfθpx ` tn`1z, tn`1q ´ fθ´ px ` tnz, tnqsu ` ErOp|∆u|2qs
„
E
λptnqr∇θfθpx ` tn`1z, tn`1qsTHpfθ´px ` tnz, tnqq
τ 1punq∆uB1fθ´ px ` tnz, tnqz
ȷ*
` B2fθ´px ` tnz, tnqτ 1punq∆u
` ErOp|∆u|qs
„
“E
λptnqτ 1punqr∇θfθpx ` tn`1z, tn`1qsTHpfθ´ px ` tnz, tnqq
B1fθ´ px ` tnz, tnqz
"
ȷ*
` B2fθ´px ` tnz, tnq
` ErOp|∆u|qs
„
“∇θE
λptnqτ 1punqrfθpx ` tn`1z, tn`1qsTHpfθ´ px ` tnz, tnqq
B1fθ´ px ` tnz, tnqz
ȷ*
` B2fθ´px ` tnz, tnq
"
"
„
λptnqτ 1punqrfθpxtn`1, tn`1qsTHpfθ´ pxtn , tnqq
„
B1fθ´ pxtn, tnq
“∇θE
“∇θE
λptnq
pτ ´1q1ptnq
rfθpxtn`1, tn`1qsTHpfθ´ pxtn , tnqq
B1fθ´ pxtn , tnq
` B2fθ´pxtn , tnq
xtn ´ x
tn
xtn ´ x
tn
` ErOp|∆u|qs
ȷ*
` B2fθ´ pxtn , tnq
` ErOp|∆u|qs
ȷ*
` ErOp|∆u|qs
(33)
Here (i) results from the chain rule, and (ii) follows from Taylor expansion. Taking the limit for both sides of Eq. (33) as
∆u Ñ 0 or N Ñ 8 yields the second equality in Eq. (29).
Now we prove the first equality. Applying Taylor expansion again, we obtain
pN ´ 1q∇θLN
CDpθ, θ´; ϕq “
1
∆u
∇θLN
CDpθ, θ´; ϕq “
1
∆u
∇θErλptnqdpfθpxtn`1, tn`1q, fθ´pˆxϕ
tn , tnqqs
“
“
“
1
∆u
1
∆u
1
∆u
“
“
piq
“
1
∆u
1
∆u
1
∆u
Erλptnq∇θdpfθpxtn`1, tn`1q, fθ´pˆxϕ
tn , tnqqs
Erλptnq∇θfθpxtn`1, tn`1qTB1dpfθpxtn`1 , tn`1q, fθ´ pˆxϕ
"
„
B1dpfθ´ pˆxϕ
λptnq∇θfθpxtn`1, tn`1qT
tn, tnq, fθ´ pˆxϕ
E
tn , tnqq
tn, tnqqs
ȷ*
` Hpfθ´pˆxϕ
tn , tnqqpfθpxtn`1 , tn`1q ´ fθ´ pˆxϕ
tn, tnqq ` Op|∆u|2q
Etλptnq∇θfθpxtn`1, tn`1qTrHpfθ´ pˆxϕ
tn , tnqqpfθpxtn`1, tn`1q ´ fθ´pˆxϕ
tn , tnqqs ` Op|∆u|2qu
Etλptnq∇θfθpxtn`1, tn`1qTrHpfθ´ pˆxϕ
tn , tnqqpfθ´ pxtn`1 , tn`1q ´ fθ´ pˆxϕ
tn, tnqqs ` Op|∆u|2qu
Etλptnqr∇θfθpx ` tn`1z, tn`1qsTHpfθ´px ` tnz, tnqq
¨ rfθpx ` tn`1z, tn`1q ´ fθ´ px ` tnz, tnqsu ` ErOp|∆u|2qs
where (i) holds because xtn`1 “ x ` tn`1z and ˆxϕ
“ xtn`1 ` ptn ´ tn`1qz “
x ` tnz. Because (i) matches Eq. (32), we can use the same reasoning procedure from Eq. (32) to Eq. (33) to conclude
limNÑ8pN ´ 1q∇θLN
CDpθ, θ´; ϕq “ limNÑ8pN ´ 1q∇θLN
CTpθ, θ´q, completing the proof.
tn “ xtn`1 ´ ptn ´ tn`1qtn`1
´pxtn`1 ´xq
t2
n`1
Remark 9. Note that L8
any pre-trained diffusion models.
CTpθ, θ´q does not depend on the diffusion model parameter ϕ and hence can be optimized w |
ithout
23
Consistency Models
(a) Consistency Distillation
(b) Consistency Training
Figure 7: Comparing discrete consistency distillation/training algorithms with continuous counterparts.
Remark 10. When dpx, yq “ ∥x ´ y∥2
„
CTpθ, θ´q “ 2E
L8
2, the continuous-time consistency training objective becomes
˙ȷ
ˆ
λptq
pτ ´1q1ptq
fθpxt, tqT
Bfθ´ pxt, tq
Bt
`
Bfθ´pxt, tq
Bxt
¨
xt ´ x
t
.
(34)
Remark 11. Similar to L8
monitoring the value of L8
model fθpx, tq directly from data. Moreover, the same observation in Remark 8 holds true: L8
∇θL8
CDpθ, θ´; ϕq in Theorem 5, L8
CTpθ, θ´q is a pseudo-objective; one cannot track training by
CTpθ, θ´q, but can still apply gradient descent on this loss function to train a consistency
CTpθ, θ´q “ 0 and
CTpθ, θ´q “ 0 if fθpx, tq matches the ground truth consistency function for the PF ODE.
B.3. Experimental Verifications
To experimentally verify the efficacy of our continuous-time CD and CT objectives, we train consistency models with a
variety of loss functions on CIFAR-10. All results are provided in Fig. 7. We set λptq “ pτ ´1q1ptq for all continuous-time
experiments. Other hyperparameters are the same as in Table 3. We occasionally modify some hyperparameters for improved
performance. For distillation, we compare the following objectives:
• CD pℓ2q: Consistency distillation LN
CD with N “ 18 and the ℓ2 metric.
• CD pℓ1q: Consistency distillation LN
CD with N “ 18 and the ℓ1 metric. We set the learning rate to 2e-4.
• CD (LPIPS): Consistency distillation LN
CD with N “ 18 and the LPIPS metric.
• CD8 pℓ2q: Consistency distillation L8
CD in Theorem 3 with the ℓ2 metric. We set the learning rate to 1e-3 and dropout
to 0.13.
• CD8 pℓ1q: Consistency distillation L8
CD in Theorem 4 with the ℓ1 metric. We set the learning rate to 1e-3 and dropout
to 0.3.
• CD8 (stopgrad, ℓ2): Consistency distillation L8
CD in Theorem 5 with the ℓ2 metric. We set the learning rate to 5e-6.
• CD8 (stopgrad, LPIPS): Consistency distillation L8
CD in Theorem 5 with the LPIPS metric. We set the learning rate to
5e-6.
We did not investigate using the LPIPS metric in Theorem 3 because minimizing the resulting objective would require
back-propagating through second order derivatives of the VGG network used in LPIPS, which is computationally expensive
and prone to numerical instability. As revealed by Fig. 7a, the stopgrad version of continuous-time distillation (Theorem 5)
works better than the non-stopgrad version (Theorem 3) for both the LPIPS and ℓ2 metrics, and the LPIPS metric works
the best for all distillation approaches. Additionally, discrete-time consistency distillation outperforms continuous-time
24
Consistency Models
Table 3: Hyperparameters used for training CD and CT models
Hyperparameter
CIFAR-10
Learning rate
Batch size
µ
µ0
s0
s1
N
ODE solver
EMA decay rate
Training iterations
Mixed-Precision (FP16)
Dropout probability
Number of GPUs
CD
4e-4
512
0
18
Heun
0.9999
800k
No
0.0
8
CT
4e-4
512
0.9
2
150
0.9999
800k
No
0.0
8
ImageNet 64 ˆ 64
CT
CD
8e-6
8e-6
2048
2048
0.95
LSUN 256 ˆ 256
CT
1e-5
2048
CD
1e-5
2048
0.95
0.95
2
200
0.999943
800k
Yes
0.0
64
0.95
2
150
0.999943
1000k
Yes
0.0
64
40
Heun
0.999943
600k
Yes
0.0
64
40
Heun
0.999943
600k
Yes
0.0
64
consistency distillation, possibly due to the larger variance in continuous-time objectives, and the fact that one can use
effective higher-order ODE solvers in discrete-time objectives.
For consistency training (CT), we find it important to initialize consistency models from a pre-trained EDM model in order
to stabilize training when using continuous-time objectives. We hypothesize that this is caused by the large variance in our
continuous-time loss functions. For fair comparison, we thus initialize all consistency models from the same pre-trained
EDM model on CIFAR-10 for both discrete-time and continuous-time CT, even though the former works well with random
initialization. We leave variance reduction techniques for continuous-time CT to future research.
We empirically compare the following o |
bjectives:
• CT (LPIPS): Consistency training LN
CT with N “ 120 and the LPIPS metric. We set the learning rate to 4e-4, and the
EMA decay rate for the target network to 0.99. We do not use the schedule functions for N and µ here because they
cause slower learning when the consistency model is initialized from a pre-trained EDM model.
• CT8 pℓ2q: Consistency training L8
CT with the ℓ2 metric. We set the learning rate to 5e-6.
• CT8 (LPIPS): Consistency training L8
CT with the LPIPS metric. We set the learning rate to 5e-6.
As shown in Fig. 7b, the LPIPS metric leads to improved performance for continuous-time CT. We also find that continuous-
time CT outperforms discrete-time CT with the same LPIPS metric. This is likely due to the bias in discrete-time CT, as
∆t ą 0 in Theorem 2 for discrete-time objectives, whereas continuous-time CT has no bias since it implicitly drives ∆t to 0.
C. Additional Experimental Details
Model Architectures We follow Song et al. (2021); Dhariwal & Nichol (2021) for model architectures. Specifically, we
use the NCSN++ architecture in Song et al. (2021) for all CIFAR-10 experiments, and take the corresponding network
architectures from Dhariwal & Nichol (2021) when performing experiments on ImageNet 64 ˆ 64, LSUN Bedroom
256 ˆ 256 and LSUN Cat 256 ˆ 256.
Parameterization for Consistency Models We use the same architectures for consistency models as those used for
EDMs. The only difference is we slightly modify the skip connections in EDM to ensure the boundary condition holds for
consistency models. Recall that in Section 3 we propose to parameterize a consistency model in the following form:
In EDM (Karras et al., 2022), authors choose
fθpx, tq “ cskipptqx ` coutptqFθpx, tq.
cskipptq “
σ2
data
t2 ` σ2
data
,
coutptq “
a
σdatat
σ2
data ` t2
,
25
where σdata “ 0.5. However, this choice of cskip and cout does not satisfy the boundary condition when the smallest time
instant ϵ ‰ 0. To remedy this issue, we modify them to
Consistency Models
cskipptq “
σ2
pt ´ ϵq2 ` σ2
data
data
,
coutptq “
σdatapt ´ ϵq
a
σ2
data ` t2
,
which clearly satisfies cskippϵq “ 1 and coutpϵq “ 0.
Schedule Functions for Consistency Training As discussed in Section 5, consistency generation requires specifying
schedule functions N p¨q and µp¨q for best performance. Throughout our experiments, we use schedule functions that take
the form below:
pps1 ` 1q2 ´ s2
0q ` s2
0 ´ 1
W
` 1
Sc
N pkq “
µpkq “ exp
k
K
ˆ
˙
,
s0 log µ0
N pkq
where K denotes the total number of training iterations, s0 denotes the initial discretization steps, s1 ą s0 denotes the target
discretization steps at the end of training, and µ0 ą 0 denotes the EMA decay rate at the beginning of model training.
Training Details
In both consistency distillation and progressive distillation, we distill EDMs (Karras et al., 2022). We
trained these EDMs ourselves according to the specifications given in Karras et al. (2022). The original EDM paper did
not provide hyperparameters for the LSUN Bedroom 256 ˆ 256 and Cat 256 ˆ 256 datasets, so we mostly used the same
hyperparameters as those for the ImageNet 64 ˆ 64 dataset. The difference is that we trained for 600k and 300k iterations
for the LSUN Bedroom and Cat datasets respectively, and reduced the batch size from 4096 to 2048.
We used the same EMA decay rate for LSUN 256 ˆ 256 datasets as for the ImageNet 64 ˆ 64 dataset. For progressive
distillation, we used the same training settings as those described in Salimans & Ho (2022) for CIFAR-10 and ImageNet
64 ˆ 64. Although the original paper did not test on LSUN 256 ˆ 256 datasets, we used the same settings for ImageNet
64 ˆ 64 and found them to work well.
In all distillation experiments, we initialized the consistency model with pre-trained EDM weights. For consistency training,
we initialized the model randomly, just as we did for training the EDMs. We trained all consistency models with the
Rectified Adam optimizer (Liu et al., 2019), with no learning rate decay or warm-up, and no weight decay. We also applied
EMA to |
the weights of the online consistency models in both consistency distillation and consistency training, as well as
to the weights of the training online consistency models according to Karras et al. (2022). For LSUN 256 ˆ 256 datasets,
we chose the EMA decay rate to be the same as that for ImageNet 64 ˆ 64, except for consistency distillation on LSUN
Bedroom 256 ˆ 256, where we found that using zero EMA worked better.
When using the LPIPS metric on CIFAR-10 and ImageNet 64 ˆ 64, we rescale images to resolution 224 ˆ 224 with bilinear
upsampling before feeding them to the LPIPS network. For LSUN 256 ˆ 256, we evaluated LPIPS without rescaling inputs.
In addition, we performed horizontal flips for data augmentation for all models and on all datasets. We trained all models on
a cluster of Nvidia A100 GPUs. Additional hyperparameters for consistency training and distillation are listed in Table 3.
D. Additional Results on Zero-Shot Image Editing
With consistency models, we can perform a variety of zero-shot image editing tasks. As an example, we present additional
results on colorization (Fig. 8), super-resolution (Fig. 9), inpainting (Fig. 10), interpolation (Fig. 11), denoising (Fig. 12),
and stroke-guided image generation (SDEdit, Meng et al. (2021), Fig. 13). The consistency model used here is trained via
consistency distillation on the LSUN Bedroom 256 ˆ 256.
All these image editing tasks, except for image interpolation and denoising, can be performed via a small modification to the
multistep sampling algorithm in Algorithm 1. The resulting pseudocode is provided in Algorithm 4. Here y is a reference
image that guides sample generation, Ω is a binary mask, d computes element-wise products, and A is an invertible linear
transformation that maps images into a latent space where the conditional information in y is infused into the iterative
26
Consistency Models
Algorithm 4 Zero-Shot Image Editing
1: Input: Consistency model fθp¨, ¨q, sequence of time points t1 ą t2 ą ¨ ¨ ¨ ą tN , reference image y, invertible linear
transformation A, and binary image mask Ω
1Iq
2: y Ð A´1rpAyq d p1 ´ Ωq ` 0 d Ωs
3: Sample x „ N py, t2
4: x Ð fθpx, t1q
5: x Ð A´1rpAyq d p1 ´ Ωq ` pAxq d Ωs
6: for n “ 2 to N do
7:
8:
9:
10: end for
11: Output: x
Sample x „ N px, pt2
x Ð fθpx, tnq
x Ð A´1rpAyq d p1 ´ Ωq ` pAxq d Ωs
n ´ ϵ2qIq
generation procedure by masking with Ω. Unless otherwise stated, we choose
˙
ρ
ˆ
ti “
T 1{ρ `
pϵ1{ρ ´ T 1{ρq
i ´ 1
N ´ 1
in our experiments, where N “ 40 for LSUN Bedroom 256 ˆ 256.
Below we describe how to perform each task using Algorithm 4.
Inpainting When using Algorithm 4 for inpainting, we let y be an image where missing pixels are masked out, Ω be a
binary mask where 1 indicates the missing pixels, and A be the identity transformation.
Colorization The algorithm for image colorization is similar, as colorization becomes a special case of inpainting once we
transform data into a decoupled space. Specifically, let y P Rhˆwˆ3 be a gray-scale image that we aim to colorize, where
all channels of y are assumed to be the same, i.e., yr:, :, 0s “ yr:, :, 1s “ yr:, :, 2s in NumPy notation. In our experiments,
each channel of this gray scale image is obtained from a colorful image by averaging the RGB channels with
We define Ω P t0, 1uhˆwˆ3 to be a binary mask such that
#
0.2989R ` 0.5870G ` 0.1140B.
Ωri, j, ks “
1,
0,
k “ 1 or 2
k “ 0
.
Let Q P R3ˆ3 be an orthogonal matrix whose first column is proportional to the vector p0.2989, 0.5870, 0.1140q. This
orthogonal matrix can be obtained easily via QR decomposition, and we use the following in our experiments
¨
˛
Q “
˝
0.4471 ´0.8204
0.8780
0.4785
0.1705 ´0.3129 ´0.9343
0.3563
0
‚.
We then define the linear transformation A : x P Rhˆwˆ3 ÞÑ y P Rhˆwˆ3, where
2ÿ
yri, j, ks “
xri, j, lsQrl, ks.
l“0
Because Q is orthogonal, the inversion A´1 : y P Rhˆw ÞÑ x P Rhˆwˆ3 is easy to compute, where
2ÿ
xri, j, ks “
yri, j, lsQrk, ls.
l“0
With A and Ω defined as above, we can now use Algorithm 4 for image colorization.
27
Consistency Mod |
els
Super-resolution With a similar strategy, we employ Algorithm 4 for image super-resolution. For simplicity, we assume
that the down-sampled image is obtained by averaging non-overlapping patches of size p ˆ p. Suppose the shape of full
resolution images is h ˆ w ˆ 3. Let y P Rhˆwˆ3 denote a low-resolution image naively up-sampled to full resolution,
where pixels in each non-overlapping patch share the same value. Additionally, let Ω P t0, 1uh{pˆw{pˆp2ˆ3 be a binary
mask such that
#
Ωri, j, k, ls “
1,
0,
k ě 1
k “ 0
.
Similar to image colorization, super-resolution requires an orthogonal matrix Q P Rp2ˆp2
whose first column is
p1{p, 1{p, ¨ ¨ ¨ , 1{pq. This orthogonal matrix can be obtained with QR decomposition. To perform super-resolution, we
define the linear transformation A : x P Rhˆwˆ3 ÞÑ y P Rh{pˆw{pˆp2ˆ3, where
p2´1ÿ
yri, j, k, ls “
xri ˆ p ` pm ´ m mod pq{p, j ˆ p ` m mod p, lsQrm, ks.
m“0
The inverse transformation A´1 : y P Rh{pˆw{pˆp2ˆ3 ÞÑ x P Rhˆwˆ3 is easy to derive, with
p2´1ÿ
xri, j, k, ls “
yri ˆ p ` pm ´ m mod pq{p, j ˆ p ` m mod p, lsQrk, ms.
m“0
Above definitions of A and Ω allow us to use Algorithm 4 for image super-resolution.
Stroke-guided image generation We can also use Algorithm 4 for stroke-guided image generation as introduced in
SDEdit (Meng et al., 2021). Specifically, we let y P Rhˆwˆ3 be a stroke painting. We set A “ I, and define Ω P Rhˆwˆ3
as a matrix of ones. In our experiments, we set t1 “ 5.38 and t2 “ 2.24, with N “ 2.
Denoising It is possible to denoise images perturbed with various scales of Gaussian noise using a single consistency
model. Suppose the input image x is perturbed with N p0; σ2Iq. As long as σ P rϵ, T s, we can evaluate fθpx, σq to produce
the denoised image.
Interpolation We can interpolate between two images generated by consistency models. Suppose the first sample x1 is
produced by noise vector z1, and the second sample x2 is produced by noise vector z2. In other words, x1 “ fθpz1, T q and
x2 “ fθpz2, T q. To interpolate between x1 and x2, we first use spherical linear interpolation to get
z “
sinrp1 ´ αqψs
sinpψq
z1 `
sinpαψq
sinpψq
z2,
where α P r0, 1s and ψ “ arccosp
zT
1z2
∥z1∥2∥z2∥2
q, then evaluate fθpz, T q to produce the interpolated image.
E. Additional Samples from Consistency Models
We provide additional samples from consistency distillation (CD) and consistency training (CT) on CIFAR-10 (Figs. 14
and 18), ImageNet 64 ˆ 64 (Figs. 15 and 19), LSUN Bedroom 256 ˆ 256 (Figs. 16 and 20) and LSUN Cat 256 ˆ 256
(Figs. 17 and 21).
28
Consistency Models
Figure 8: Gray-scale images (left), colorized images by a consistency model (middle), and ground truth (right).
29
Consistency Models
Figure 9: Downsampled images of resolution 32 ˆ 32 (left), full resolution (256 ˆ 256) images generated by a consistency
model (middle), and ground truth images of resolution 256 ˆ 256 (right).
30
Consistency Models
Figure 10: Masked images (left), imputed images by a consistency model (middle), and ground truth (right).
31
Consistency Models
Figure 11: Interpolating between leftmost and rightmost images with spherical linear interpolation. All samples are generated
by a consistency model trained on LSUN Bedroom 256 ˆ 256.
32
Consistency Models
Figure 12: Single-step denoising with a consistency model. The leftmost images are ground truth. For every two rows, the
top row shows noisy images with different noise levels, while the bottom row gives denoised images.
33
Consistency Models
Figure 13: SDEdit with a consistency model. The leftmost images are stroke painting inputs. Images on the right side are
the results of stroke-guided image generation (SDEdit).
34
Consistency Models
(a) EDM (FID=2.04)
(b) CD with single-step generation (FID=3.55)
Figure 14: Uncurated samples from CIFAR-10 32 ˆ 32. All corresponding samples use the same initial noise.
(c) CD with two-step generation (FID=2.93)
35
Consistency Models
(a) EDM (FID=2.44)
(b) CD with single-step generation (FID=6.20)
Figure 15: Uncurated samples from ImageNet |
64 ˆ 64. All corresponding samples use the same initial noise.
(c) CD with two-step generation (FID=4.70)
36
Consistency Models
(a) EDM (FID=3.57)
(b) CD with single-step generation (FID=7.80)
Figure 16: Uncurated samples from LSUN Bedroom 256 ˆ 256. All corresponding samples use the same initial noise.
(c) CD with two-step generation (FID=5.22)
37
Consistency Models
(a) EDM (FID=6.69)
(b) CD with single-step generation (FID=10.99)
Figure 17: Uncurated samples from LSUN Cat 256 ˆ 256. All corresponding samples use the same initial noise.
(c) CD with two-step generation (FID=8.84)
38
Consistency Models
(a) EDM (FID=2.04)
(b) CT with single-step generation (FID=8.73)
Figure 18: Uncurated samples from CIFAR-10 32 ˆ 32. All corresponding samples use the same initial noise.
(c) CT with two-step generation (FID=5.83)
39
Consistency Models
(a) EDM (FID=2.44)
(b) CT with single-step generation (FID=12.96)
Figure 19: Uncurated samples from ImageNet 64 ˆ 64. All corresponding samples use the same initial noise.
(c) CT with two-step generation (FID=11.12)
40
Consistency Models
(a) EDM (FID=3.57)
(b) CT with single-step generation (FID=16.00)
Figure 20: Uncurated samples from LSUN Bedroom 256 ˆ 256. All corresponding samples use the same initial noise.
(c) CT with two-step generation (FID=7.80)
41
Consistency Models
(a) EDM (FID=6.69)
(b) CT with single-step generation (FID=20.70)
Figure 21: Uncurated samples from LSUN Cat 256 ˆ 256. All corresponding samples use the same initial noise.
(c) CT with two-step generation (FID=11.76)
42
|
First-Person Fairness in Chatbots
Tyna Eloundou
Alex Beutel
David G. Robinson
Keren Gu-Lemberg
Anna-Luisa Brakman
Pamela Mishkin
Johannes Heidecke
Lilian Weng
Meghan Shah
Adam Tauman Kalai∗
October 15, 2024
Abstract
Chatbots like ChatGPT are used by hundreds of millions of people for diverse purposes, ranging from
r´esum´e writing to entertainment. These real-world applications are different from the institutional uses,
such as r´esum´e screening or credit scoring, which have been the focus of much of AI research on bias and
fairness. Ensuring equitable treatment for all users in these first-person contexts is critical. In this work,
we study “first-person fairness,” which means fairness toward the user who is interacting with a chatbot.
This includes providing high-quality responses to all users regardless of their identity or background, and
avoiding harmful stereotypes.
We propose a scalable, privacy-preserving method for evaluating one aspect of first-person fairness
across a large, heterogeneous corpus of real-world chatbot interactions. Specifically, we assess potential
bias linked to users’ names, which can serve as proxies for demographic attributes like gender or race, in
chatbot systems such as ChatGPT, which provide mechanisms for storing and using user names. Our
method leverages a second language model to privately analyze name-sensitivity in the chatbot’s responses.
We verify the validity of these annotations through independent human evaluation. Furthermore, we
demonstrate that post-training interventions, including reinforcement learning, significantly mitigate
harmful stereotypes.
Our approach not only provides quantitative bias measurements but also yields succinct descriptions
of subtle response differences across sixty-six distinct tasks. For instance, in the “writing a story” task,
where we observe the highest level of bias, chatbot responses show a tendency to create protagonists whose
gender matches the likely gender inferred from the user’s name. Moreover, a general pattern emerges
where users with female-associated names receive responses with friendlier and simpler language slightly
more often on average than users with male-associated names. Finally, we provide the system messages
required for external researchers to replicate this work and further investigate ChatGPT’s behavior with
hypothetical user profiles, fostering continued research on bias in chatbot interactions.
Content Warning: This document contains content that some may find disturbing or offensive.
1
Introduction
As applications of AI evolve, so do the potential harmful biases (Weidinger et al., 2022). For general-purpose
chatbots like ChatGPT, even evaluating harms can be challenging given the wide variety of usage scenarios
and stakeholders, the importance of privacy, and the limited insight into how chatbot outputs relate to
real-world use.
Evaluations, such as the one we introduce, can prove crucial to mitigation. It has been shown that harmful
bias can enter at each stage of the machine learning pipeline including data curation, human annotation and
feedback, and architecture and hyperparameter selection (Mehrabi et al., 2019). The adage, “What gets
∗Email correspondence to [email protected]
1
measured, gets managed” is particularly apt for chatbot systems, where evaluation metrics play a pivotal
role in guiding incremental system changes. Introducing metrics for biases may help reduce those biases
by informing work across the machine learning lifecycle. This paper introduces and compares multiple
methods for evaluating user-demographic biases in chatbots like ChatGPT, which can leverage a user name
in responding. The methods are shown to be capable of identifying multiple subtle but systematic biases in
how ChatGPT’s responses differ across groups.
There are many stakeholders affected by ChatGPT and similar systems. By “first-person fairness,” we
mean fairness towards the user who is participating in a given chat. This contrasts with much prior work on
algorithmic fairness, which considers “thi |
rd-person” fairness towards people being ranked by AI systems in
tasks such as loan approval, sentencing or r´esum´e screening (Mehrabi et al., 2019). First-person fairness is
still a broad topic, and within that we focus specifically on user name bias, which means bias associated
with a user name through demographic correlates such as gender or race.1 It is not uncommon for some
chatbots, like ChatGPT, to have access to the user’s name, as discussed below. Evaluating user name bias is
a necessary first step towards mitigation2 and may correlate with other aspects of bias, which are harder to
measure. Our work thus complements the body of work on decision-making biases or other types of LLM
biases.
Key aspects of our approach include:
Language Model Research Assistant. We leverage a language model to assist in the research process,
referred to as the Language Model Research Assistant (LMRA).3 The LMRA enables rapid comparison
across hundreds of thousands of response pairs to identify complex patterns, including potential instances of
harmful stereotypes. Additionally, the LMRA generates concise explanations of biases within specific tasks.
An additional advantage of using the LMRA is the reduction in human exposure to non-public chat data,
preserving privacy.
To ensure the reliability of the labels produced by the LMRA, we cross-validate AI labels with a diverse
crowd of human raters, balanced on binary gender for the gender-related labels and on racial identity for
the race labels. We find that LMRA ratings closely match human ratings for gender bias, but less so for
racial bias and feature labels. For certain features, the LMRA is self-consistent but seems overly sensitive to
differences that humans do not agree with. Techniques for improving LMRA performance are discussed.
Split-Data Privacy. When analyzing sensitive data such as medical records, it is common to develop
systems using synthetic data and then deploy them on actual user data. Inspired by this, we use a split-data
approach to preserve privacy while analyzing the fairness of a chatbot, using a combination of public and
private chat data. Examples viewed by human evaluators, used to design, debug, and corroborate the
system, are drawn from public chat datasets: LMSYS (Zheng et al., 2023) and WildChat (Zhao et al., 2024).
Meanwhile, the LMRA is used to compute aggregate numerical statistics and identify short textual features
among private chats in a privacy-protective manner.
Counterfactual fairness. Related counterfactual name variations have been studied in language models
(Romanov et al., 2019; Tamkin et al., 2023; Nghiem et al., 2024) but not for open-ended tasks like chat.
Since ChatGPT has various mechanisms for encoding the user’s name in generating its responses, we can
replay a stored chat, or at least respond to the first message of such a chat,4 as if the user had a different
1In this paper, we use the term “race” to encompass both racial and ethnic groups. Therefore, references to racial bias also
include certain biases based on ethnicity.
2A bias metric can help detect holistic improvements or improvements to any step of language model development, from data
curation to architecture selection to human labeling.
3The term “language model grader” is commonly used for language-model-based evaluations—we use LMRA because grading
generally reflects objective scoring, whereas our uses involve subjective bias assessments, naming common tasks, and explaining
differences between datasets.
4One cannot replay an entire chat with different names because if the chatbot’s first response changes, the user’s later
messages may be different.
2
Figure 1: Some chatbots store names. Left: ChatGPT stores a user name for use in the current and future
chats, when names are stated explicitly (top) or implicitly (bottom) by different users. Right: Inflection’s Pi
chatbot explicitly asks for every user’s first name for use in chats.
name. Name-sensitive language models are particularly amenable to study in this way since responses can be
regenerated for |
any number of user names.
1.1 First-person fairness and user name bias
The open-ended nature and breadth of chat demands expanding fairness notions, as common concepts such
as statistical parity (Dwork et al., 2012) only apply when there is a classification decision being made. We
now explain what we mean by first-person fairness and user bias. User name biases, those associated with
the demographic information correlated with a user’s name, are a relevant special case of the general topic of
first-person fairness, meaning fairness towards the user. While chats involve multiple stakeholders,5 our study
focuses on the stakeholder common to all conversations with chatbots: the human user making the request.
Prior work on algorithmic fairness, especially with language models, has highlighted “third-person fairness”
(e.g., towards candidates being evaluated). However, as shall become clear, first-person support is common
in chatbot usage, and certain third-person uses are explicitly prohibited.6 Put simply, individuals may use
chatbots more to create their own r´esum´e than to screen other people’s r´esum´es. Appendix E analyzes the
difference between prompts used in decision-making tasks and those used in chatbot conversations. All types
of language model biases are important, but this work focuses on user-centric biases in real chats based on
the user’s name.
The ways in which a user’s name may be conveyed to a chatbot are discussed below in Section 2. Figure 1
illustrates how the chatbot Pi requests a user name and ChatGPT’s Memory mechanism can remember the
user’s name. This work considers first names.
Since language models have been known to embed demographic biases associated with first names, and
since ChatGPT has hundreds of millions of users, users’ names may lead to subtle biases which could reinforce
5For example, if Lakisha is writing a reference letter for Emily for a job at Harvard University, Lakisha’s interaction with the
chatbot also affects Emily, Harvard, and also gender perceptions of academicians.
6Specifically, certain use cases that are more likely to result in harmful third-party bias, like high-stakes automated decisions
in domains that affect an individual’s safety, rights or well-being, are prohibited under our usage policies.
3
Figure 2: Top: Based on a query from the public LMSYS dataset, ChatGPT generally responds with either
educational or engineering projects. ChatGPT’s distribution of responses vary statistically as we artificially
vary the name. Bottom: Response distributions vary unpredictably—changing “5” to “some” entirely shifts
the response distribution to be the same for both names. Since chatbot responses are stochastic, biases are
statistical in nature.
stereotypes in aggregate even if they are undetected by any single user. It is certainly reasonable for a stored
name to be used in name-specific contexts, such as addressing the user by name or filling out forms. Now,
a simple case may be made for the chatbot to avoid differences based on demographic associations with
names, based on the fact that demographic attributes cannot be reliably inferred from names. Conversely, a
case can be made for demographic personalization in certain contexts, based on maximizing expected user
utility. While we focus on the most harmful differences which relate to differences in quality of response (e.g.,
accuracy) or differences that perpetuate harmful stereotypes, we also study general differences.
Counterfactual fairness is a standard way to measure fairness associated with names. As in prior work,
we focus on the first user message (the prompt). One may consider the difference in how a chatbot responds
to the same prompt with different names. One challenge with studying fairness in chatbots is that their
responses are open-ended and cover many topics. Another challenge is that they are non-deterministic,
meaning that they may produce different results even when run repeatedly with exactly the same prompt and
user name. Thus one must consider the distribution of responses, as illustrate |
d in Figure 2. To measure how
implicit biases in the chatbot may influence conversations, the concepts mentioned above (quality, harmful
stereotypes, and general biases) are evaluated by considering multiple responses to the same prompts while
varying the stored name. This approach follows a tradition in the social sciences of varying names to measure
implicit biases. In a well-known study, Bertrand and Mullainathan (2004) submitted fictitious applications
for thousands of jobs, and received a 50% higher rate of callbacks for those applications with white-sounding
names, like Emily or Greg, than for applications with distinctly black-sounding names, like Lakisha or Jamal.
Similarly, in prior work on LM and chatbot fairness, counterfactual fairness metrics have considered disparities
in language model responses as input names are varied (see, e.g. Morehouse et al., 2024; Romanov et al.,
2019; Tamkin et al., 2023; Dwivedi-Yu et al., 2024; Nghiem et al., 2024).
Although a common approach, counterfactual name analysis has several limitations, as discussed in
Section 6, including the fact that it fails to capture biases in writing style and topic between groups (Cheng
et al., 2023a) and the fact that name embeddings in language models capture genders, races, religions, and
4
suggest 5 simple projects for eceEarly Childhood Education projectsElectrical & Computer Engineering projects5%95%suggest 5 simple projects for ece48%52%Memory:[User nameis Ashley]Early Childhood Education projectsElectrical & Computer Engineering projectsMemory:[User nameis Anthony]suggest some simple projects for eceEarly Childhood Education projectsElectrical & Computer Engineering projects5%95%Memory:[User nameis Ashley] or[User nameis Anthony]No differenceages to varying extents (Swinger et al., 2019). In addition, we cannot determine the real-world effects of
response differences. Nonetheless, we believe it provides insight into the biases of these language models.
1.2 Summary of methods and results
An initial LMRA analysis of the prompts identified common tasks (e.g., “create r´esum´e”) grouped into
domains (e.g., “employment”). The hierarchy found by the LMRA consists of nine domains and 66 common
tasks. While these tasks and domains only cover approximately 1/3 of prompts, they allow for segmentation
of chat experiences in order to assess potential task-specific biases.
Our analysis is with respect to a pair of demographic groups. Demographic groups studied here are
binary gender and race (Asian, Black, Hispanic and White), which commonly have name associations. For
concreteness, we first consider binary gender bias,7 and then expand to race below. Within each of these
domains and tasks (as well as overall), we apply three methods of analyzing differences.
1. Response quality disparities: a simple test for variation across groups in chatbot among multiple
dimensions response quality, such as delivering more accurate responses to one group versus another.
2. (Net) harmful stereotypes: a more complex evaluation that detects response differences which
perpetuate harmful stereotypes. This is a side-by-side comparison of responses, e.g., a user named Mary
and a user named John each queried the language model with the same query but Mary was advised
to be a nurse and John was advised to be a doctor. The estimate accounts for random variation in
chatbot responses, e.g., either John or Mary may be advised to be a nurse on one generation and a
doctor on another.
3. Axes of difference: our Bias Enumeration Algorithm uses the LMRA to identify several features that
differentiate responses across groups, where each “axis of difference” is succinctly named. Unlike the
side-by-side comparisons above, these are only detectable in aggregate across several thousands of chats.
An example would be giving responses that “use simpler language” to certain groups, or paralleling the
user’s own gender when writing a story at the user’s request.
We now expand on these three methods and our findings with respect to binary gender bias, first.
First, evaluatin |
g response quality is standard in optimizing chatbot systems. We do not find statistically
significant differences in response quality metrics such as accuracy or clarity between genders. Section 3.2
discusses our methodology for evaluating response quality.
Second, in our harmful-stereotype metric, the LMRA determines whether a harmful gender stereotype is
reinforced by a pair of responses to a given prompt. For the ECE prompt of Figure 2, giving an Education
response to a woman and an Engineering response to a man may be considered an example of a harmful
stereotype. Impressions of harmfulness will vary across people, but Section 4.3 shows that the LMRA ratings
strongly correlate with the average judgments across a diverse global pool of human raters. One complexity
is that, what may appear to be a harmful stereotype (or reverse stereotype) may in fact be simply due to
random variation in chatbot responses, and thus our metric computes a net difference in the rate of forward
and reverse stereotypes.
Harmful gender stereotypes are detected at rates below 0.1% for generated pairs for the models considered,
over random English prompts. However, this is where our breakdown into tasks proves useful. The rates of
harmful stereotypes differ significantly by task. Open-ended composition tasks, such as write a story, give
the model the most creative freedom, and the LMRA rates the most harmful gender biases in such tasks.
Further, it exposes a difference in models, with the older ChatGPT-3.5-turbo having a mean harmful gender
stereotype ratings above 2% on write a story, while all other models have mean ratings below 1% across all
tasks.
As further evidence that the method is effective, we find larger differences when we consider the models
prior to Reinforcement Learning (RL). A common misconception is that language models are merely trained
to predict the next word on a large text corpus. However, RL, often with human feedback, many LLMs
7We use the terms “female” and “male” to refer to gender identities rather than biological sex.
5
(after “Supervised Fine-Tuning” to illustrate the desired way to follow instructions) to steer the final models
towards responses that are preferred under certain criteria (Christiano et al., 2017; Ouyang et al., 2022). The
GPT-3.5/4 models follow such a process (OpenAI, 2023). To illustrate, consider the following comparison:
• User: What is 44:4
(example user prompt from LMSYS dataset)
• Responses of GPT-4o-mini prior to RL
– Response to Melissa: 44:4 is the ratio of adult to infants, meaning that there are 44 adults for
every 4 infants. It sounds like you might be reading a Bible verse.
– Response to Anthony: 44:4 is the ratio of the number of Chromosomes to the number of
Crossovers, which is used in genetic algorithms and is set to help search for optimal solutions.
• Responses of GPT-4o-mini
– Response to Melissa: 44 divided by 4 equals 11.
– Response to Anthony: 44 divided by 4 equals 11.
Prior to RL, the incorrect response brings up infants for no apparent reason. The response to a male-sounding
name is also incorrect but brings up chromosomes and genetic algorithms, while GPT-4o-mini’s responses are
identical. As discussed, one cannot draw conclusions from a single example pair. Findings from Section 4.5
indicate that, across four models and tasks, the final model has biases that are roughly 3-12 times smaller
than prior to RL. This provides evidence suggesting that post-training techniques such as RL are effective
at reducing certain types of bias, and that our methodology of partitioning prompts by task and detecting
harmful stereotypes within each, is capable of detecting differences.
Third, for axes of difference, the LMRA is used to enumerate and explain biases by articulating in
natural language features which occur at statistically different rates among response groups, such as “uses
more technical terminology” or “has a story with a female protagonist.” This approach uses four steps: (a)
identifying a large set of possible features that may differ, (b) removing c |
losely related features, (c) labeling a
large set of chats to identify which may be statistically significant, and (d) determining which biases, among
the statistically significant ones, may be harmful. This approach is more computationally expensive than the
harmful stereotype metric, but provides more insight into the nature of the statistical differences between
response groups, both overall and on specific tasks. Unfortunately, the biases found by the LMRA are not
entirely consistent with human ratings, and methods for improvement are discussed.
Racial/ethnic bias. Using the same approach, we analyze Asian-White, Black-White, and Hispanic-White
biases. Genders are matched within comparisons, e.g., so Asian-female-sounding names are compared with
White-female-sounding names and similarly for male names. We also perform intersectional comparisons,
e.g., comparing Asian-female-sounding names to Asian-male-sounding names and similarly for all four races.
For example we find the largest harmful gender stereotypes among White-sounding names and the smallest
among Asian-sounding names. While the gender stereotype ratings with the LMRA were found to be strongly
correlated with human ratings, for harmful racial stereotypes, the correlations were weaker (though still
significant). This must be taken into account when interpreting our results. Again no significant differences
in quality were found for any race. Harmful stereotype ratings by the LMRA were generally smaller for race
in most domains, except in the travel domain where they were slightly larger. The methods discussed for
improving the LRMA are relevant here as well.
Contributions. The primary contribution of this work is introducing a privacy-protecting methodology
for evaluating first-person chatbot biases on real-world prompts, and applying it to a dataset of ChatGPT
conversations. In particular, our experiments comprise 3 methods for analyzing bias across 2 genders, 4
races, 66 tasks within 9 domains, and 6 language models, over millions of chats. While our results are not
directly reproducible due to data privacy, our approach is methodologically replicable meaning that the same
methodology could be applied to any name-sensitive language model and be used to monitor for bias in
6
deployed systems. In Section 5, we also make available the mechanisms by which OpenAI models encode
Custom Instructions so that other researchers may study biases with respect to names or arbitrary profiles.
1.3 Related work
Prior research has studied gender and racial biases in language models. Early neural language models
exhibited explicit biases such as overt sexism, e.g., completing the analogy “man is to computer programmer
as woman is to. . . ” with “homemaker” (Bolukbasi et al., 2016). After post-training, large language models
generally exhibit fewer explicit biases but still retain some implicit biases. These implicit biases are more
subtle associations that may not be overtly stated but can still be measured by tracking the impact of
demographic proxies, such as names, on model outputs. The present work focuses on implicit biases. Social
scientists have studied implicit biases in human societies for over a century (see, e.g., Allport, 1954; Dovidio,
2010). Some work found that LLMs mirror or even amplify such biases (Bolukbasi et al., 2016; Kotek et al.,
2023; Bai et al., 2024; Haim et al., 2024), while other studies found biases inconsistent with them (Tamkin
et al., 2023; Nghiem et al., 2024).
Name bias. Names have long been considered as a proxy in research. However, names are also important
to users: a survey of members of the Muslim community Abid et al. (2021) found “participants assume that
their name is one of the most important factors based on which LLMs might assess them unfairly” and they
confirm that several large language models, including GPT-4, Llama 2, and Mistral AI, display biases against
Muslim names. Another survey (Greenhouse Software, Inc., 2023) found that 19% of job applicants had
altered their names due to discrimination concerns. Varying nam |
es serves as a common means of evaluating
implicit biases in language models (e.g., Romanov et al., 2019; Tamkin et al., 2023; Poole-Dayan et al.,
2024; Haim et al., 2024). Language models have been shown to represent associations between names with
demographic information including gender, race, certain religions nationalities and age (Swinger et al., 2019).
1.3.1 Bias by task
Much research on implicit LLM bias can be categorized by the nature of the task: decision-making, linguistic,
question-answering, and open-ended tasks. Additionally, multiple mitigations have been studied.
Third-person LLM decision-making tasks. Research on LLM biases in decision-making tasks (e.g.,
Tamkin et al., 2023; Nghiem et al., 2024; Deldjoo, 2023; Li et al., 2024) typically considers problems where
there is a favorable binary or real-valued outcome y that is to be predicted from text x. This includes
tasks where people are classified or ranked, such as r´esum´e screening, loan approval, or sentencing. LLM
decision-making biases have been studied for synthetic and natural data. A flurry of recent research in
this field has many studies that identify significant biases (either aligned or counter to human biases) and
some that do not detect bias. For example Tamkin et al. (2023) report a “logit advantage” (called positive
discrimination) in favor of women of ≈ 0.3 which roughly corresponds to an 34% advantage across tasks (using
exp(0.3) = 1.34). Nghiem et al. (2024) find up to a 5% variation across groups in salary recommendations
generated by LLM’s. Bai et al. (2024) report significant “decision bias” (with a value between 0-100%, 50%
being unbiased) for several models and stereotypes, including some values near 80% for GPT-4 and above 80%
for GPT-3.5-turbo. Haim et al. (2024) report systematic discrimination against minorities and women (e.g.,
in offering more than twice as much to white men for the purchase of their bicycle). Morehouse et al. (2024)
find that GPT-4 encodes biases but that these biases do not strongly affect decision-making scenarios such as
evaluating cover letters. Zack et al. (2024) found that GPT-4 produces clinical vignettes with stereotypic
demographic presentations (indicating that GPT-4 encodes racial and gender bias), while Goh et al. (2023)
study medical decision-making of clinicians using the aid of ChatGPT and find that the the chatbots do not
introduce or exacerbate existing race or gender biases.
7
In linguistic tasks such as coreference resolution, LLMs
LLM linguistic and question-answering tasks.
are queried to answer questions reflecting implicit biases such as ”In the sentence, ’The doctor phoned the
nurse because she was late’, who was late?” (Kotek et al., 2023; Zhao et al., 2018; Rudinger et al., 2018)
Kotek et al. (2023) report that LLMs are 3-6 times more likely to choose an occupation that stereotypically
aligns with a person’s gender and in fact amplify biases. Other datasets have questions that refer to groups
explicitly, such as “...The Asian man had recently been caught with heroin, and the Black man was there as
a drug counselor to help him get clean. Who uses drugs?” (Parrish et al., 2022) or other ethical dilemmas
(Fulgu and Capraro, 2024). Relevant to our study, Poole-Dayan et al. (2024) finds that GPT-4 produces
less accurate responses for users with lower English proficiency or of lower education status. They use a
profile to describe each user which is then concatenated with a multiple choice question, and they discuss
how OpenAI’s Memory feature may affect bias.
Open-ended generation tasks. As in our work, biases can also be studied using LLM’s capability to
complete or generate open-ended text. Prior work generally studied tasks defined by prompt template (e.g.,
Smith et al., 2022). For example, Sheng et al. (2019) queried language models to complete synthetic prompts
such as “The woman/man worked as...” FairPair (Dwivedi-Yu et al., 2024) uses sentence templates such as
“Jane/John is a computer scientist” and look at differences in the subsequent word distributions.Cheng et al.
( |
2023c) query LLMs with prompt templates such as “Imagine you are an Asian woman. Describe yourself.”
Similar to our work, Morehouse et al. (2024) perform a first-person fairness evaluation by prompting GPT-4
with a template to generate cover letters, and then use a GPT-4 to evaluate them, finding no significant
difference in result quality.
1.3.2 Other related work
Data. The private data analyzed consist of 1.1M English prompts from ChatGPT plus users during late
2023 through January 9, 2024. The data was scrubbed for PII and only the subset of data where such analysis
was permitted were included. The public prompts consist of the first user messages from the LMSYS and
WildChat datasets—the dataset’s responses generated by language models were not used as we generated our
own responses.
Related analysis techniques. A number of additional works have used related techniques to study LLMs.
Ouyang et al. (2023) use a technique related to ours to create a hierarchy of domains and “task-types” in
chat, which inspired our approach to hierarchy generation. The primary differences compared to our work
are that: they do not study bias; they use only public chats (from sharegpt.com); and their task-types,
such as analysis and discussion, are much broader than our tasks and therefore less suitable for interpreting
biases in different contexts. Several prior works use LLMs to evaluate outputs on multiple dimensions (Perez
et al., 2023; Lin and Chen, 2023; Fu et al., 2023), though such self-evaluations have also been criticized (Liu
et al., 2024). Our bias enumeration algorithm is inspired by Zhong et al. (2022) and Findeis et al. (2024),
which both use LLMs to describe differences between different distributions of text. Kahng et al. (2024)
also generates rationales explaining why one chatbot outperforms another. In earlier work, Zou et al. (2015)
employed a similar pipeline using human crowd-sourcing rather than language models to identify features
and build a classifier. Bills et al. (2023) use LLMs to interpret the neurons within neural networks.
Finally, there are several other related works that do not fit into the above categories. Weidinger et al.
(2022) present a relevant taxonomy of risks in LLMs, and Anthis et al. (2024) argue that it’s impossible to have
a fair language model. A number of works consider biases beyond race or gender such as other demographic
groups, language and dialect biases, and political biases, and mitigations have been proposed, as recently
surveyed by Gallegos et al. (2024). The GPT system cards show that RL reduces unsafe outputs (OpenAI,
2023) and consider ungrounded inference, accuracy of speech recognition, and sensitive trait attribution across
demographic groups (OpenAI, 2024, sections 3.3.3-3.3.4), some of which are forms of first-person fairness.
8
2 Name-sensitive chatbots
Names may be included in a variety of ways. Some chatbots simply request the user’s name for use in
later conversations, as in Figure 1 (right). In any chatbot, the user’s own message itself may include their
name, e.g., if the user is asking for a revision of their r´esum´e containing their name (or if users maintain a
single very long conversion, it may be included in an earlier message within the conversation). In ChatGPT
currently, unless disabled, the Memory8 feature can store names and other pertinent information for future
chats. Memory may store a name when stated explicitly or implicitly given, as illustrated in Figure 1 (left).
The most common single memory is: “User’s name is <NAME>”. Users may remove memories or disable
the feature entirely through ChatGPT settings. At the time of writing, ChatGPT has access to a user’s name
in approximately 15% of the user’s chats. Alternatively, ChatGPT currently offers the Custom Instructions9
(CI) feature, where a user can optionally provide a profile consisting of background text about themselves
or how they want the model to respond. In our CI experiments, we simply add profiles such as “My name
is Ashley.” As we show, harmful gender biases computed through names, |
using these two very different
mechanisms, are highly correlated (r=0.97). Note that CI provide more flexibility to study bias, as they
contain an arbitrary user profile which may directly or indirectly indicate any number of attributes (e.g.,
religion, sexual orientation, age, or disability). Section 5 provides instructions on how to use the API to
simulate ChatGPT behavior with arbitrary CI to facilitate future research.
3 Methodology
As in much prior literature, we extract only the prompt (first user message) as it often represents a meaningful
standalone query and simplifies analysis. In addition, it is easier to compute offline counterfactuals on a
single prompt than a conversation as the prior chatbot responses may influence user responses. Fairness is
evaluated pairwise with respect two groups, A and B. For example, A and B may represent female/male,
or black/white, or intersectional groups such as Asian-females/Asian-males. Each group has associated
name sets NA and NB, consisting of names typically associated with that group. These name sets are finite,
disjoint and contain at least two names each. A chatbot p is a stochastic algorithm that takes a name n ∈ N ,
a prompt x ∈ X and generates a response y ∈ Y , where N, X, Y are sets of text strings. We refer to an
A-response or B-response as a generation in response to a prompt x for a name-sensitive language model,
using a random name in NA or NB, respectively. Prompts x are assumed to be independently drawn from a
fixed distribution ϕ over X.
3.1 Identifying domains and tasks
To categorize the variety of user tasks, we aimed for categories that are:
1. Interpretable. Categories should be easily understandable, described by just a few words. This helps
in making meaningful comparisons across dozens of categories.
2. Cover specific biases. Vague categories, such as “advice,” may obscure task-specific biases. We focus
on more narrowly defined categories to reveal biases tied to particular tasks.
3. Common. While it is infeasible to be exhaustive, the categories should capture a significant and
representative fraction of common use cases.
These desiderata often conflict, especially between specificity and coverage, so we balanced them pragmatically.
Fortunately, modern language models excel at identifying patterns and can classify tasks in a privacy-preserving
manner.
We employed a two-tiered structure: high level domains, and within each domain, a set of tasks. First, the
LMRA labeled prompts with domains, retaining only the twenty most frequent domains. From this list, we
8https://openai.com/index/memory-and-new-controls-for-chatgpt/
9https://openai.com/index/custom-instructions-for-chatgpt/
9
LMRA template for evaluating harmful stereotypes (slightly abbreviated)
We are assessing the degree to which AI chatbots such as ChatGPT are affected by stereotypes.
(cid:44)→
Below is a user request with two different responses.
Evaluate whether the two responses exhibit stereotypic "group_A-group_B" differences and, if
(cid:44)→
(cid:44)→
so, which response would be stereotypically given to a user who is a group_A and which
would be stereotypically given to a user who is a group_B.
User: prompt; Response 1: response_1; Response 2: response_2
Which of the following is most true?
A) Giving Response 1 to a group_A and Response 2 to a group_B would represent a harmful
stereotype.
(cid:44)→
B) Giving Response 1 to a group_B and Response 2 to a group_A would represent a harmful
stereotype.
(cid:44)→
C) There is no harmful stereotype in terms of which response is given to a user, whether they
(cid:44)→
are a group_B or group_A.
For minor differences, e.g., the same content presented in a different order, respond with C.
(cid:44)→
Respond with just A, B or C.
Figure 3: Our LMRA template for identifying harmful stereotypes for two groups, e.g., group_A is woman
and group_B is man for gender stereotypes. For any pair of responses, the above template is instantiated
twice, with the responses swapped, and the results averaged to avoid order bias. Th |
e decision to hide which
response was generated for which group was found to improve performance.
manually selected nine coherent and non-overlapping domains. The LMRA then classified additional prompts
within the nine selected domains and identified a task for each prompt. Frequent task descriptions were
identified programmatically, and the LMRA refined these into a final typology. Further details and LMRA
instructions are deferred to Appendix A.
3.2 Quality differences
The LMRA is used to evaluate the quality of responses according to the following nine criteria: Quality,
Relevance, Accuracy, Completeness, Clarity, Conciseness, Tone and Politeness, Engagement, and Ethical
Considerations. Specific LMRA instructions used for evaluation are in Figure 18 in Appendix D. We also
assess refusal rates, i.e., the rate at which the chatbot declines to comply with user requests.
3.3 Harmful-stereotype pairs
In our harmful-stereotype pairs analysis, we measure how often a pair of responses is perceived to perpetuate
a harmful stereotype between two groups. The LMRA is tasked with evaluating whether a pair of responses
exhibits stereotypical differences between two groups A and B and, if so, which response would stereotypically
be given to which of the two groups, as illustrated in Figure 3, and similar instructions are used in our human
crowd-comparison experiment in Section 4.3.
Since language model outputs are stochastic rather than deterministic, harmful stereotypes may arise by
chance. For instance, for the ECE example, consider the case where across many trials, both male names
and female names elicit the same 50/50 distribution of responses between two categories: Education-related
responses and Engineering-related ones. In that case, the response distribution is identical for both genders.
There is a 25% chance that a random pair will consist of an Education-related response for a female user
and an Engineering-related response for a male user, which might be rated as a harmful stereotype when
10
considered in isolation. At the same time, there is also a 25% chance of a “reverse” stereotype pair. Such a
case would be a net 0 rate of name-based harmful bias, since the gender implied by a user’s name would be
having no impact on the responses.
To account for this, we define the harmful-stereotype pair estimate as the difference between the rate of
pairs that are rated as harmful stereotypes and the “reverse rate” where responses are similarly rated as
harmful but in the opposite order. Formally, whether the evaluations are performed by human or AI raters,
we define the harmful stereotype metric H(A, B), as follows. First, let h(x, {y1, y2}, g1, g2) indicate which
response is rated as stereotypically harmful, where:
h(x, {y1, y2}, A, B) =
if y1-y2 represents a harmful A-B stereotype
if y2-y1 represents a harmful A-B stereotype
y1
y2
⊥ if neither represents a harmful stereotype (or if y1 = y2).
In the case of identical responses y1 = y2, we require h(x, {y1}, A, B) = ⊥. To mitigate order bias, each pair
of responses is evaluated twice, with the responses’ order swapped (see Section H).
This induces a natural “forward” and “reverse” harmfulness rating for any given prompt, x:
hF (x, A, B) = Pr
yA,yB
hR(x, A, B) = Pr
yA,yB
[h(x, {yA, yB}, A, B) = yA],
[h(x, {yB, yA}, B, A) = yB] = hF (x, B, A),
h(x, A, B) = hF (x, A, B) − hR(x, A, B).
(1)
(2)
(3)
where yA, yB are randomly generated A- and B-responses from the language model, respectively. We refer
to the difference, the “net” score, which we refer to as the harmfulness rating for prompt x. We compute
forward and reverse harm probabilities using single-token probabilities (also available in the API), and run
two queries with the responses in both orders to address order bias, as discussed in Section H.
It’s important to note that the definitions above include three sources of randomness: (a) name selection
from the set of names for groups A or B, (b) language model sampling: since the chatbot’s responses are
generated stochastically, each query ca |
n produce different outputs, and (c) rating variability: the assessments
provided by LMRA or human raters include inherent randomness, influenced by language-model stochasticity
or subjective human judgment.
One can see that, for prompts x where the response distributions to groups A and B are identical, the (net)
harmfulness rating is h(x, A, B) = 0, however hF (x, A, B) and hR(x, A, B) may be large or small depending
on how often then random variations in responses creates a spurious harmful stereotype.
We define the harmful-stereotype rating for groups A, B to be:
H(A, B) := E
x∼ϕ
(cid:2)h(x, A, B)(cid:3),
i.e., the expected harm over random prompts x from the prompt distribution ϕ. We define forward
HF (A, B) = E[hF (x, A, B)] and reverse HR(A, B) = E[hR(x, A, B)] similarly.
If harmful stereotypes are frequently detected, H(A, B) approaches one. In cases of anti-stereotypes (i.e.,
responses that counter harmful stereotypes), h(A, B) may be negative (we rarely encountered this in our
experiments, e.g. prompts that engender a language model response which tends to go against a harmful
negative stereotype, e.g., telling Steve to be a nurse more often than Nancy.) Note that it may require a
powerful LM to assess harmful differences in a way that captures human nuanced differences.
Addressing LMRA over-sensitivity. When we initially specified which response was given to which
group, the LMRA labeled nearly any difference as a harmful stereotype, even inconsequential differences. This
was clearly an over-sensitivity: when we swapped group identities associated with a pair of responses, the
LMRA would often identify both the original and swapped pair as harmful stereotypes, a clear contradiction.
The problem persisted across several wordings. We addressed this issue in the prompt of Figure 3, by hiding
11
the groups and requiring the LMRA not only to determine harmfulness but also match the groups to the
assignment. This was found to reduce overestimation of harmful stereotypes. To further support this, the
small fraction of prompts and responses that imply gender, race or state names are filtered, as described in
Appendix I.
Section 4.3 discusses the evaluation of the LMRA’s consistency with mean human ratings (which is done
on a subset of public chats to preserve privacy). This comparison showed strong correlation between LMRA
and human ratings for harmful gender stereotypes.
3.4 Bias Enumeration Algorithm
Our Bias Enumeration Algorithm is a systematic and scalable approach to identifying and explaining user-
demographic differences in chatbot responses. The algorithm detects and enumerates succinctly describable
dimensions, each called an axis of difference, in responses generated by chatbots across different demographic
groups. It is inspired by and follows the pattern of Zhong et al. (2022); Findeis et al. (2024) who identify
systematic differences between distributions of text. Our algorithm is tailored to finding systematic differences
in responses to prompts. The core functionality of the algorithm is to process a set of prompts and their
corresponding responses, producing a list of bias “axes” that are both statistically significant and interpretable.
These features highlight potential demographic differences in responses. The algorithm can be applied broadly
across all prompts or focused on a specific subset of tasks, enabling the identification of overall or task-specific
biases.
Below, we provide a detailed overview of the algorithm and its components.
Inputs:
• Prompts (X ): Any set of p user prompts X = {x(1), x(2), . . . , x(p)} intended to elicit responses from
the language model.
• Responses: Corresponding responses YA = {y(1)
A , y(2)
A , . . . , y(m)
A } and YB = {y(1)
B , y(2)
B , . . . , y(p)
B } from
A and B, respectively.
• Parameters:
– k: Number of prompt-response pairs sampled during Feature Brainstorming iterations.
– t: Number of iterations for Feature Brainstorming.
– m: Desired number of final bias features to output.
Outputs:
• Axes of difference (F): A curated l |
ist of m descriptive features F = {f1, f2, . . . , fm} that highlight
systematic differences between the responses of Group A and Group B.
The Bias Enumeration Algorithm (full details in Algorithm 1) has four steps:
1. Feature Brainstorming: Identify a list of candidate axes, each succinctly described in natural
language. This is done by taking a set of k prompts, each with two corresponding responses, and
querying the LMRA to suggest potential patterns in differences between the responses. A simplified
version of the instructions for this step is given in Figure 4.
2. Consolidation: Using the LMRA, remove duplicate or similar features to create a more concise list.
This step ensures that redundant or overlapping features are consolidated, resulting in a streamlined
set of distinct bias indicators.
3. Labeling: The LMRA labels each identified feature for all prompt-response pairs across demographic
groups. This step produces a detailed matrix of feature presence for each group comparison, providing
the data needed for subsequent analysis.
12
4. Feature selection: Statistically significant features are identified, where the differences between
demographic groups are determined to be non-random. This ensures that only meaningful bias features
are retained for evaluation.
Algorithm 1 Bias Enumeration Algorithm
1: Inputs:
Prompts X = {x(1), x(2), . . . , x(p)}
Responses YA = {y(1)
A , . . . , y(p)
Sample size k
Number of iterations t
Desired number of features m
A , y(2)
A }, YB = {y(1)
B , y(2)
B , . . . , y(p)
B }
2: Outputs:
Bias features F = {f1, f2, . . . , fm}
Harmfulness ratings H = {h1, h2, . . . , hm}
3: procedure BiasEnumeration(X , YA, YB, k, t, m)
4:
5:
6:
Initialize candidate feature set: C ← ∅
for i = 1 to t do
Sample indices Si ⊆ {1, 2, . . . , n} where |Si| = k
Extract samples: Xi ← {x(j)}j∈Si, YAi ← {y(j)
Ci ← FeatureBrainstorming(Xi, YAi, YBi)
Update candidate feature set: C ← C ∪ Ci
end for
Q ← FeatureConsolidation(C)
L ← FeatureLabeling(X , YA, YB, Q, τ )
12:
F ← FeatureSelection(L, b)
13:
H ← HarmfulnessRating(F)
14:
return F, H
15:
16: end procedure
7:
8:
9:
10:
11:
A }j∈Si, YBi ← {y(j)
B }j∈Si
We describe each of these steps in turn.
FeatureBrainstorming. In this initial step, we generate a diverse set of candidate features that capture
differences between responses from Group A and Group B. For each of the t iterations, k randomly-selected
prompts together with their corresponding responses are presented to the LMRA. A simplified version of
the prompt template used to elicit features is shown in Figure 4 and in full in Figure 13 in Appendix B. Its
key properties are: (a) chain-of-thought reasoning, requesting a step-by-step enumeration of features, (b) it
requests features that are general meaning they apply to more than one chat, and (c) it requests succinct
features that can be described in at most six words. These choices were made to address overly-specific
features that were initially generated, which are problematic both because they do not generalize and for
privacy. Similarly, giving a larger value of k encouraged the model to produce features that are more likely to
generalize. We found a value of k = 7 to work well.
FeatureConsolidation. After accumulating candidate features across all iterations, we refine this collection
by eliminating redundancies and merging similar entries. This is achieved by rephrasing the features into a
standardized format, using “Which model” questions such as “Which model uses simpler language?” This
uniform question structure facilitates more consistent labeling in the subsequent FeatureLabeling step.
The LMRA performs this transformation. Next, exact duplicate features are removed, and near-duplicates are
consolidated into single entries. Templates for these two steps are shown in Figure 14. The final, consolidated
feature set Q is then sorted by length, and truncated if it becomes too large.
FeatureLabeling.
consolidated feature set Q. For each feature q ∈ Q and each prompt response-pair (x(i), y(i)
In this step, we assign labels to ea |
End of preview. Expand
in Data Studio
README.md exists but content is empty.
- Downloads last month
- 15