|
{ |
|
"ID": "1-MBdJssZ-S", |
|
"Title": "Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation", |
|
"Keywords": "Contrastive Diffusion, Conditioned Generations, Music Generation, Image Synthesis", |
|
"URL": "https://openreview.net/forum?id=1-MBdJssZ-S", |
|
"paper_draft_url": "/references/pdf?id=Eor4porKS", |
|
"Conferece": "ICLR_2023", |
|
"track": "Applications (eg, speech processing, computer vision, NLP)", |
|
"acceptance": "Accept: poster", |
|
"review_scores": "[['3', '6', '2'], ['3', '6', '4'], ['3', '8', '4'], ['3', '6', '3']]", |
|
"input": { |
|
"source": "CRF", |
|
"title": "Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation", |
|
"authors": [], |
|
"emails": [], |
|
"sections": [ |
|
{ |
|
"heading": "1 INTRODUCTION", |
|
"text": "Generative tasks that seek to synthesize data in different modalities, such as audio and images, have attracted much attention. The recently explored diffusion probabilistic models (DPMs) Sohl-Dickstein et al. (2015b) have served as a powerful generative backbone that achieves promising results in both unconditional and conditional generation Kong et al. (2020); Mittal et al. (2021); Lee & Han (2021); Ho et al. (2020); Nichol & Dhariwal (2021); Dhariwal & Nichol (2021); Ho et al. (2022); Hu et al. (2021). Compared to the unconditional case, conditional generation is usually applied in more concrete and practical cross-modality scenarios, e.g., video-based music generation Di et al. (2021); Zhu et al. (2022); Gan et al. (2020a) and text-based image generation Gu et al. (2022); Ramesh et al. (2021); Li et al. (2019); Ruan et al. (2021). Most existing DPM-based conditional synthesis works Gu et al. (2022); Dhariwal & Nichol (2021) learn the connection between the conditioning and the generated data implicitly by adding a prior to the variational lower bound Sohl-Dickstein et al. (2015b). While such approaches still feature high generation fidelity, the correspondence between the conditioning and the synthesized data can sometimes get lost, as illustrated in the right column in Fig. 1. To this end, we aim to explicitly enhance the input-output faithfulness via their maximized mutual information under the diffusion generative framework for conditional settings in this paper. Examples of our synthesized music audio and image results are given in Fig. 1.\nContrastive methods Oord et al. (2018); Bachman et al. (2019); Song & Ermon (2020a) have been proven to be very powerful for data representation learning. Their high-level idea aims to learn the representation z of raw data x based on the assumption that a properly encoded z benefits the ability of a generative model p to reconstruct the raw data given z as prior. This idea can be achieved via optimization of the density ratio p(x|z)p(x) Oord et al. (2018) as an entirety, without explicitly modeling the actual generative model p. While the direct optimization of mutual information via generative models p is a challenging problem to implement and train Song & Ermon (2020b); Belghazi et al. (2018) in the conventional contrastive representation learning field, we show that this can be effectively done within our proposed contrastive diffusion framework. Specifically, we reformulate\nthe optimization problem for the desired conditional generative tasks via DPMs by analogy to the above embedding z and raw data x with our conditioning input and synthesized output. We introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss, and design two contrastive diffusion mechanisms - step-wise parallel diffusion that invokes multiple parallel diffusion processes during contrastive learning, and sample-wise auxiliary diffusion, which maintains one principal diffusion process, to effectively incorporate the CDCD loss into the denoising process. We demonstrate that with the proposed contrastive diffusion method, we can not only effectively train so as to maximize the desired mutual information by connecting the CDCD loss with the conventional variational objective function, but also to directly optimize the generative network p. The optimized CDCD loss further encourages faster convergence of a DPM model with fewer diffusion steps. 
We additionally present our intra- and inter-negative sampling methods, which provide internally disordered and instance-level negative samples, respectively.\nTo better illustrate the input-output connections, we conduct our main experiments on the novel cross-modal dance-to-music generation task Zhu et al. (2022), which aims to generate music audio based on silent dance videos. Compared to other tasks such as text-to-image synthesis, dance-to-music generation explicitly evaluates the input-output correspondence in terms of various cross-modal alignment features such as dance-music beats, genre, and general quality. However, various generative settings, frameworks, and applications can also benefit from our contrastive diffusion approach, e.g., joint or separate training of conditioning encoders, continuous or discrete conditioning inputs, and diverse input-output modalities, as detailed in Sec. 4. Overall, we achieve results superior or comparable to the state of the art on three conditional synthesis tasks: dance-to-music (datasets: AIST++ Tsuchida et al. (2019); Li et al. (2021), TikTok Dance-Music Zhu et al. (2022)), text-to-image (datasets: CUB200 Wah et al. (2011), MSCOCO Lin et al. (2014)), and class-conditioned image synthesis (dataset: ImageNet Russakovsky et al. (2015)). Our experimental findings suggest three key take-aways: (1) Improving the input-output connections via maximized mutual information is indeed beneficial for their correspondence and for the general fidelity of the results (see Fig. 1 and the supplement). (2) Both our proposed step-wise parallel diffusion with intra-negative samples and sample-wise auxiliary diffusion with inter-negative samples show state-of-the-art scores in our evaluations. The former is more beneficial for capturing intra-sample correlations, e.g., musical rhythms, while the latter improves instance-level performance, e.g., music genre and image class. (3) With maximized mutual information, our conditional contrastive diffusion converges in substantially fewer diffusion steps compared to vanilla DPMs, while maintaining the same or even superior performance (approximately 35% fewer steps for dance-to-music generation and 40% fewer for text-to-image synthesis), thus significantly increasing inference speed." |
|
}, |
|
{ |
|
"heading": "2 BACKGROUND", |
|
"text": "Diffusion Probabilistic Models. DPMs Sohl-Dickstein et al. (2015b) are a class of generative models that learn to convert a simple Gaussian distribution into a data distribution. This process consists of a forward diffusion process and a reverse denoising process, each consisting of a sequence of T steps that act as a Markov chain. During forward diffusion, an input data sample x0 is gradually \u201ccorrupted\u201d at each step t by adding Gaussian noise to the output of step t\u2212 1. The reverse denoising process, seeks to convert the noisy latent variable xT into the original data sample x0 by removing the noise added during diffusion. The stationary distribution for the final latent variable xT is typically assumed to be a normal distribution, p(xT ) = N (xT |0, I). An extension of this approach replaces the continuous state with a discrete one Sohl-Dickstein et al. (2015a); Hoogeboom et al. (2021); Austin et al. (2021), in which the latent variables x1:T typically take the form of one-hot vectors with K categories. The diffusion process can then be parameterized using a multinomial categorical transition matrix defined as q(xt|xt\u22121) = Cat(xt; p = xt\u22121Qt), where [Qt]ij = q(xt = j|xt\u22121 = i). The reverse process p\u03b8(xt|xt\u22121) can also be factorized as conditionally independent over the discrete sequences Austin et al. (2021).\nIn both the continuous and discrete state formulations of DPMs Song & Ermon (2020c); Song et al. (2020b); Kingma et al. (2021); Song et al. (2021); Huang et al. (2021); Vahdat et al. (2021), the denoising process p\u03b8 can be optimized by the KL divergence between q and p\u03b8 in closed forms Song et al. (2020a); Nichol & Dhariwal (2021); Ho et al. (2020); Hoogeboom et al. (2021); Austin et al. (2021) via the variational bound on the negative log-likelihood:\nLvb = Eq[DKL(q(xT |x0)||p(xT ))\ufe38 \ufe37\ufe37 \ufe38 LT + \u2211 t>1 DKL(q(xt\u22121|xt, x0)||p\u03b8(xt\u22121|xt))\ufe38 \ufe37\ufe37 \ufe38 Lt\u22121 \u2212 log p\u03b8(x0|x1)\ufe38 \ufe37\ufe37 \ufe38 L0 ]. (1)\nExisting conditional generation works via DPMs Gu et al. (2022); Dhariwal & Nichol (2021) usually learn the implicit relationship between the conditioning c and the synthesized data x0 by directly adding the c as the prior in equation 1. DPMs with discrete state space provide more controls on the data corruption and denoising compared to its continuous counterpart Austin et al. (2021); Gu et al. (2022) by the flexible designs of transition matrix, which benefits for practical downstream operations such as editing and interactive synthesis Tseng et al. (2020); Cui et al. (2021); Xu et al. (2021). We hence employ contrastive diffusion using a discrete state space in this work.\nContrastive Representation Learning. Contrastive learning uses loss functions designed to make neural networks learn to understand and represent the specific similarities and differences between elements in the training data without labels explicitly defining such features, with positive and negative pairs of data points, respectively. This approach has been successfully applied in learning representations of high-dimensional data Oord et al. (2018); Bachman et al. (2019); He et al. (2020); Song & Ermon (2020a); Chen et al. (2020). Many such works seek to maximize the mutual information between the original data x and its learned representation z under the framework of likelihood-free inference Oord et al. (2018); Song & Ermon (2020a); Durkan et al. (2020). 
Contrastive Representation Learning. Contrastive learning uses loss functions designed to make neural networks learn the specific similarities and differences between elements of the training data from positive and negative pairs of data points, without labels explicitly defining such features. This approach has been successfully applied in learning representations of high-dimensional data Oord et al. (2018); Bachman et al. (2019); He et al. (2020); Song & Ermon (2020a); Chen et al. (2020). Many such works seek to maximize the mutual information between the original data x and its learned representation z under the framework of likelihood-free inference Oord et al. (2018); Song & Ermon (2020a); Durkan et al. (2020). This problem can be formulated as maximizing a density ratio p(x|z)/p(x) that preserves the mutual information between the raw data x and the learned representation z.\nTo achieve this, existing contrastive methods Oord et al. (2018); Durkan et al. (2020); He et al. (2020); Zhang et al. (2021) typically adopt a neural network to directly model the ratio as an entirety and avoid explicitly considering the actual generative model p(x|z), which has proven to be a more challenging problem Song & Ermon (2020b); Belghazi et al. (2018). In contrast, we show that by formulating the conventional contrastive representation learning problem under the generative setting, the properties of DPMs enable us to directly optimize the model p, which can be interpreted as the optimal version of the density ratio Oord et al. (2018).\nVector-Quantized Representations for Conditional Generation. Vector quantization is a classical technique in which a high-dimensional space is represented using a discrete number of vectors, and it has proven effective in tasks ranging from data compression to density estimation Gray & Olshen (1997); Gray (1984). More recently, Vector-Quantized (VQ) deep learning models employ this technique to allow for compact and discrete representations of music and image data Oord et al. (2017); Razavi et al. (2019); Esser et al. (2021b); Dhariwal et al. (2020). Typically, VQ-based models use an encoder-codebook-decoder framework, where the “codebook” contains a fixed number of vectors (entries) to represent the original high-dimensional raw data. The encoder transforms the input x into feature embeddings that are each mapped to the closest corresponding vector in the codebook, while the decoder uses the set of quantized vectors z to reconstruct the input data, producing x′, as illustrated in the upper part of Fig. 2.\nIn this work, we perform the conditional diffusion process in the VQ space (i.e., on discrete token sequences), as shown in the bottom part of Fig. 2, which largely reduces the dimensionality of the raw data and thus avoids expensive raw-data decoding and synthesis. As our approach is flexible enough to be employed with various input and output modalities, the exact underlying VQ model we use depends on the target data domain. For music synthesis, we employ a fine-tuned Jukebox Dhariwal et al. (2020) model, while for image generation, we employ VQ-GAN Esser et al. (2021b). See Sec. 4 for further details. We refer to z, the latent quantized representation of x, as z0 below to distinguish it from the latent representations at prior stages in the denoising process." |
|
}, |
|
{ |
|
"heading": "3 METHOD", |
|
"text": "Here we outline our approach to cross-modal and conditional generation using our proposed discrete contrastive diffusion approach, which is depicted in Fig. 2. In Sec. 3.1, we formulate our Conditional Discrete Contrastive Diffusion loss in detail, and demonstrate how it helps to maximize the mutual information between the conditioning and generated discrete data representations. Sec. 3.2 defines two specific mechanisms for applying this loss within a diffusion model training framework, samplewise and step-wise. In Sec. 3.3, we detail techniques for constructing negative samples designed to improve the overall quality and coherence of the generated sequences.\nGiven the data pair (c, x), where c is the conditioning information from a given input modality (e.g., videos, text, or a class label), our objective is to generate a data sample x in the target modality (e.g., music audio or images) corresponding to c. In the training stage, we first employ and train a VQ-based model to obtain discrete representation z0 of the data x from the target modality. Next, our diffusion process operates on the encoded latent representation z0 of x. The denoising process recovers the latent representation z0 given the conditioning c that can be decoded to obtain the reconstruction x\u2032. In inference, we generate z0 based on the conditioning c, and decode the latent VQ representation z0 back to raw data domain using the decoder from the pre-trained and fixed VQ decoder." |
|
}, |
|
{ |
|
"heading": "3.1 CONDITIONAL DISCRETE CONTRASTIVE DIFFUSION LOSS", |
|
"text": "We seek to enhance the connection between c and the generated data z0 by maximizing their mutual information, defined as I(z0; c) = \u2211 z0 p\u03b8(z0, c) log p\u03b8(z0|c) p\u03b8(z0)\n. We introduce a set of negative VQ sequences Z \u2032 = {z1, z2, ..., zN}, encoded from N negative samples X \u2032 = {x1, x2, ..., xN}, and define f(z0, c) =\np\u03b8(z0|c) p\u03b8(z0) . Our proposed Conditional Discrete Contrastive Diffusion (CDCD) loss is:\nLCDCD := \u2212E [ log\nf(z0, c)\nf(z0, c) + \u03a3zj\u2208Z\u2032f(z j 0, c)\n] . (2)\nThe proposed CDCD loss is similar to the categorical cross-entropy loss for classifying the positive sample as in Oord et al. (2018), where our conditioning c and the generated data z0 corresponds to the original learned representation and raw data, and optimization of this loss leads to maximization of I(z0; c). However, the loss in Oord et al. (2018) models the density ratio f(z0, c) as an entirety. In\nour case, we demonstrate that the DPMs properties Sohl-Dickstein et al. (2015b); Ho et al. (2020); Austin et al. (2021) enable us to directly optimize the actual distribution p\u03b8 within the diffusion process for the desired conditional generation tasks. Specifically, we show the connections between the proposed CDCD loss and the conventional variational loss Lvb (see equation 1) in Sec. 3.2, and thus how it contributes to efficient DPM learning. Additionally, we can derive the lower bound for the mutual information as I(z0; c) \u2265 log(N)\u2212LCDCD (see supplement for details), which indicates that a larger number of negative samples increases the lower bound. These two factors allow for faster convergence of a DPM with fewer diffusion steps." |
|
}, |
|
{ |
|
"heading": "3.2 PARALLEL AND AUXILIARY DIFFUSION PROCESS", |
|
"text": "The CDCD loss in equation 2 considers the mutual information between c and z0 in a general way, without specifying the intermediate diffusion steps. We propose and analyze two contrastive diffusion mechanisms to efficiently incorporate this loss into DPM learning, and demonstrate that we can directly optimize the generative model p\u03b8 in the diffusion process. We present our step-wise parallel diffusion and the sample-wise auxiliary diffusion mechanisms, which are distinguished by the specific operations applied for the intermediate negative latent variables zj1:T for each negative sample x\nj . The high-level intuition for the parallel and auxiliary designs is to emphasize different attributes of the synthesized data given specific applications. Particularly, we propose the parallel variant to learn the internal coherence of the audio sequential data by emphasizing the gradual change at each time step, while the auxiliary mechanism focuses more on the sample-level connections to the conditioning.\nStep-Wise Parallel Diffusion. This mechanism not only focuses on the mutual information between c and z0, but also takes the intermediate negative latent variables z j 1:T into account by explicitly invoking the complete diffusion process for each negative sample zj \u2208 Z \u2032. As illustrated in Fig. 2 (bottom left), we initiate N + 1 parallel diffusion processes, among which N are invoked by negative samples. For each negative sample xj \u2208 X \u2032, we explicitly compute its negative latent discrete variables zj0:T . In this case, equation 2 is as follows (see supplement for the detailed derivation):\nLCDCD\u2212Step := EZ log [ 1+ p\u03b8(z0:T )\np\u03b8(z0:T |c) NEZ\u2032 [p\u03b8(zj0:T |c) p\u03b8(z j 0:T ) ]] \u2261 Lvb(z, c)\u2212 \u2211 zj\u2208Z\u2032 Lvb(zj , c). (3)\nThe equation above factorizes the proposed CDCD loss using the step-wise parallel diffusion mechanism into two terms, where the first term corresponds to the original variational bound Lvb, and the second term can be interpreted as the negative sum of variational bounds induced by the negative samples and the provided conditioning c.\nSample-Wise Auxiliary Diffusion. Alternatively, our sample-wise auxiliary diffusion mechanism maintains one principal diffusion process, as in traditional diffusion training, shown in Fig. 2 (bottom right). It contrasts the intermediate positive latent variables z1:T with the negative sample z j 0 \u2208 Z. In this case, we can write the CDCD loss from. equation 2 as (see supplement for details):\nLCDCD\u2212Sample := Eq[\u2212log p\u03b8(z0|zt, c)]\u2212 \u03a3zj\u2208Z\u2032Eq[\u2212log p\u03b8(zj0|zt, c)]. (4)\nAs with the step-wise loss, the CDCD-Sample loss includes two terms. The first refers to sampling directly from the positive z0 at an arbitrary timestep t. The second sums the same auxiliary loss from negative samples zj0. This marginalization operation is based on the property of Markov chain as in previous discrete DPMs Austin et al. (2021); Gu et al. (2022), which imposes direct supervision from the sample data. The first term is similar to the auxiliary denoising objective in Austin et al. (2021); Gu et al. (2022).\nBoth contrastive diffusion mechanisms enable us to effectively incorporate the CDCD loss into our DPM learning process by directly optimizing the actual denoising generative network p\u03b8.\nFinal Loss Function. 
Final Loss Function. The final loss function for our contrastive diffusion training process is:
L = L_{vb}(z, c) + \lambda L_{CDCD}, \quad (5)
where Lvb is conditioned on c and takes the form L_{t-1} = D_{KL}(q(z_{t-1}|z_t, z_0)\,\|\,p_\theta(z_{t-1}|z_t, c)) as in Gu et al. (2022), with c included as the prior for all the intermediate steps, and LCDCD refers to either the step-wise parallel diffusion or the sample-wise auxiliary diffusion loss. Empirically, we can either omit the first term in equation 3 or directly optimize LCDCD−Step, in which the standard Lvb is already included. The detailed training algorithm is explained in the supplement." |
|
}, |
|
{ |
|
"heading": "3.3 INTRA- AND INTER-NEGATIVE SAMPLING", |
|
"text": "Previous contrastive works construct negative samples using techniques such as image augmentation Chen et al. (2020); He et al. (2020) or spatially adjacent image patches Oord et al. (2018). In this work, we categorize our sampling methods into intra- and inter-negative samplings as in Fig. 3. For the intra-sample negative sampling, we construct X \u2032 based on the given original x. This bears resemblance to the patch-based technique in the image domain Oord et al. (2018). As for the audio data, we first divide the original audio waveform into multiple chunks, and randomly shuffle their ordering. For the inter-sample negative sampling, X \u2032 consists of instance-level negative samples x\u2032 that differ from the given data pair (c, x). In practice, we define negative samples x\u2032 to be music sequences with different musical genres from x in the music generation task, while x\u2032 denotes images other than x in the image synthesis task.\nBased on our proposed contrastive diffusion modes and negative sampling methods, there are four possible contrastive settings: step-wise parallel diffusion with either intra- or inter-negative sampling (denoted as Step-Intra and Step-Inter), or sample-wise auxiliary diffusion with either intra- or internegative sampling (denoted as Sample-Intra and Sample-Inter). Intuitively, we argue that Step-Intra\nand Sample-Inter settings are more reasonable compared to Step-Inter and Sample-Intra because of the consistency between the diffusion data corruption process and the way to construct negative samples. Specifically, the data corruption process in the discrete DPMs includes sampling and replacing certain tokens with some random or mask tokens at each diffusion step Austin et al. (2021); Gu et al. (2022), which is a chunk-level operation within a given data sequence similar to the ways we construct intra-negative samples by shuffling the chunk-level orders. In contrast, the sample-wise auxiliary diffusion seeks to provide sample-level supervision, which is consistent with our inter-negative sampling method.\nIn the interest of clarity and concision, we only present the experimental results for Step-Intra and Sample-Inter settings in Sec. 4 of our main paper. The complete results obtained with other contrastive settings and more detailed analysis are included in the supplement." |
|
}, |
|
{ |
|
"heading": "4 EXPERIMENTS", |
|
"text": "We conduct experiments on three conditional generation tasks: dance-to-music generation, text-toimage synthesis, and class-conditioned image synthesis. For the dance-to-music task, we seek to generate audio waveforms for complex music from human motion and dance video frames. For the text-to-image task, the objective is to generate images from given textual descriptions. Given our emphasis on the input-output faithfulness for cross-modal generations, the main analysis are based on the dance-to-music generation task since the evaluation protocol from Zhu et al. (2022) explicitly measures such connections in terms of beats, genre and general correspondence for generated music." |
|
}, |
|
{ |
|
"heading": "4.1 DANCE-TO-MUSIC GENERATION", |
|
"text": "Dataset. We use the AIST++ Li et al. (2021) dataset and the TikTok Dance-Music dataset Zhu et al. (2022) for the dance-to-music experiments. AIST++ is a subset of the AIST dataset Tsuchida et al. (2019), which contains 1020 dance videos and 60 songs performed by professional dancers and filmed in clean studio environment settings without occlusions. AIST++ provide human motion data in the form of SMPL Loper et al. (2015) parameters and body keypoints, and includes the annotations for different genres and choreography styles. The TikTok Dance-Music dataset includes 445 dance videos collected from the social media platform. The 2D skeleton data extracted with OpenPose Cao et al. (2017); Cao et al. (2019) is used as the motion representation. We adopt the official cross-modality splits without overlapping music songs for both datasets.\nImplementations. The sampling rate for all audio signals is 22.5 kHz in our experiments. We use 2-second music samples as in Zhu et al. (2022) for the main experiments. We fine-tuned the pre-trained Jukebox Dhariwal et al. (2020) for our Music VQ-VAE model. For the motion encoder, we deploy a backbone stacked with convolutional layers and residual blocks. For the visual encoder, we extract I3D features Carreira & Zisserman (2017) using a model pre-trained on Kinectics Kay et al. (2017) as the visual conditioning. The motion and visual encoder outputs are concatenated to form the final continuous conditioning input to our contrastive diffusion model. For the contrastive diffusion model, we adopt a transformer-based backbone to learn the denoising network p\u03b8. It includes 19 transformer blocks, with each block consisting of full attention, cross attention and feed forward modules, and a channel size of 1024 for each block. We set the initial weight for the contrastive loss as \u03bb = 5e\u2212 5. The number N of intra- and inter-negative samples for each GT music sample is 10. The visual encoder, motion encoder, and the contrastive diffusion model are jointly optimized. More implementation details are provided in the supplement.\nEvaluations. The evaluation of synthesized music measures both the conditioning-output correspondence and the general synthesis quality using the metrics introduced in Zhu et al. (2022). Specifically, the metrics include the beats coverage score, the beats hit scores, the genre accuracy score, and two subjective evaluation tests with Mean Opinion Scores (MOS) for the musical coherence and general quality. Among these metrics, the beats scores emphasize the intra-sample properties, since they calculate the second-level audio onset strength within musical chunks Ellis (2007), while the genre accuracy focuses on the instance-level musical attributes of music styles. Detailed explanations of the above metrics can be found in Zhu et al. (2022). We compare against multiple dance-to-music generation works: Foley Gan et al. (2020a), Dance2Music Aggarwal & Parikh (2021), CMT Di et al. (2021), and D2M-GAN Zhu et al. (2022). The first three models rely on symbolic discrete MIDI musical representations, while the last one also uses a VQ musical representation. The major difference between the symbolic MIDI and discrete VQ musical representations lies within the fact\nTable 1: Quantitative evaluation results for the dance-to-music task on the AIST++ dataset. This table shows the best performance scores we obtain for different contrastive diffusion steps. 
Table 1: Quantitative evaluation results for the dance-to-music task on the AIST++ dataset. This table shows the best performance scores we obtain for different contrastive diffusion steps. We report the mean and standard deviation of our contrastive diffusion over three inference tests.
Method | Beats Coverage ↑ | Beats Hit ↑ | Genre Accuracy ↑ | Coherence MOS ↑ | Quality MOS ↑
GT Music | 100 | 100 | 88.5 | 4.7 | 4.8
Foley | 74.1 | 69.4 | 8.1 | 2.9 | -
Dance2Music | 83.5 | 82.4 | 7.0 | 3.0 | -
CMT | 85.5 | 83.5 | 11.6 | 3.0 | -
D2M-GAN | 88.2 | 84.7 | 24.4 | 3.3 | 3.4
Ours Vanilla | 89.0±1.1 | 83.8±1.5 | 25.3±0.8 | 3.3 | 3.6
Ours Step-Intra | 93.9±1.2 | 90.7±1.5 | 25.8±0.6 | 3.6 | 3.5
Ours Sample-Inter | 91.8±1.6 | 86.9±1.4 | 27.2±0.5 | 3.6 | 3.6
Table 2: Quantitative evaluation results for the dance-to-music task on the TikTok dataset. We set the default number of diffusion steps to 80.
Methods | Beats Coverage / Hit ↑
D2M-GAN | 88.4 / 82.3
Ours Vanilla | 88.7 / 81.4
Ours Step-Intra | 91.8 / 86.3
Ours Sample-Inter | 90.1 / 85.5
[Figure 4: the left panel plots beat coverage scores against the number of diffusion steps T (30, 60, 80, 100, 120) for the Vanilla, Step-Intra, and Sample-Inter models on AIST++; the right panel plots FID scores and sampling throughput (samples/s) against T for VQ-D*, Step-Intra, and Sample-Inter on CUB200.]
Figure 4: Convergence analysis in terms of diffusion steps for the dance-to-music task on the AIST++ dataset (left) and the text-to-image task on the CUB200 dataset (right). We observe that our contrastive diffusion models converge at around 80 and 60 steps, i.e., roughly 35% and 40% fewer steps than the vanilla models, which converge at 120 and 100 steps, respectively, while maintaining superior performance. We use the same number of steps for training and inference.
Results and Discussion. The quantitative experimental results are shown in Tab. 1 and Tab. 2. Our proposed methods achieve better performance than the competing methods, even in the vanilla version without contrastive mechanisms. Furthermore, we find that the Step-Intra setting is more helpful in increasing the beats scores, while the Sample-Inter setting yields larger improvements in the genre accuracy scores. We believe this is due to the evaluation methods of the different metrics. The beats scores measure the chunk-level (i.e., audio onset strength Ellis (2007)) consistency between the GT and synthesized music samples Zhu et al. (2022), while the genre scores consider the overall musical attributes of each sample sequence at the instance level. This finding is consistent with our assumptions in Sec. 3.3.
Convergence Analysis. We also analyze the impact of the proposed contrastive diffusion on model convergence in terms of diffusion steps. The number of diffusion steps is a significant hyper-parameter for DPMs Sohl-Dickstein et al. (2015b); Nichol & Dhariwal (2021); Austin et al. (2021); Gu et al. (2022); Kingma et al. (2021) that directly influences the inference time and synthesis quality. Previous works have shown that a larger number of diffusion steps usually leads to better model performance but longer inference times Kingma et al. (2021); Gu et al. (2022). 
We demonstrate that, with the mutual information improved via the proposed contrastive diffusion method, we can greatly reduce the number of steps needed. As shown in Fig. 4 (left), we observe that the beats scores reach a stable level at approximately 80 steps, ∼35% fewer than the vanilla DPM, which converges in ∼120 steps. More ablation studies and analysis on this task can be found in the supplement." |
|
}, |
|
{ |
|
"heading": "4.2 CONDITIONAL IMAGE SYNTHESIS", |
|
"text": "Dataset. We conduct text-to-image synthesis on CUB200 Wah et al. (2011) and MSCOCO datasets Lin et al. (2014). The CUB200 dataset contains images of 200 bird species. Each image has 10 corresponding text descriptions. The MSCOCO dataset contains 82k images for training and 40k images for testing. Each image has 5 text descriptions. We also perform the class-conditioned\nimage generation on ImageNet Deng et al. (2009); Russakovsky et al. (2015). Implementation details for both tasks are provided in the supplement.\nEvaluations. We adopt two evaluation metrics for text-to-image synthesis: the classic FID score Heusel et al. (2017) as the general measurement for image quality, and the CLIPScore Hessel et al. (2021) to evaluate the correspondence between the given textual caption and the synthesized image. For the class-conditioned image synthesis, we use the FID score and a classifier-based accuracy for general and input-output correspondence measurement. We compare against text-to-image generation methods including StackGAN Zhang et al. (2017), StackGAN++ Zhang et al. (2018), SEGAN Tan et al. (2019), AttnGAN Xu et al. (2018), DM-GAN Zhu et al. (2019), DF-GAN Tao et al. (2020), DAE-GAN Ruan et al. (2021), DALLE Ramesh et al. (2021), and VQ-Diffusion Gu et al. (2022). For experiments on ImageNet, we list the result comparisons with ImageBART Esser et al. (2021a), VQGAN Esser et al. (2021b), IDDPM Nichol & Dhariwal (2021), and VQ-D Gu et al. (2022). Specifically, VQ-Diffusion Gu et al. (2022) also adopts the discrete diffusion generative backbone, which can be considered as the vanilla version without contrastive mechanisms. Additionally, we provide more comparisons with other methods in terms of dataset, model scale and training time in the supplement for a more comprehensive and fair understanding of our proposed method.\nResults and Discussion. The quantitative results are represented in Tab. 3 and Tab. 4. We observe that our contrastive diffusion achieves state-of-the-art performance for both general synthesis fidelity and input-output correspondence, and the Sample-Inter contrastive setting is more beneficial compared to Step-Intra for the image synthesis. This empirical finding again validates our assumption regarding the contrastive settings in Sec. 3.3, where the Sample-Inter setting helps more with the instance-level synthesis quality. Notably, as shown in Fig. 4 (right), our contrastive diffusion method shows model convergence at about 60 diffusion steps, while the vanilla version converges at approximately 100 steps on CUB200 Wah et al. (2011), which greatly increases the inference speed by 40%." |
|
}, |
|
{ |
|
"heading": "5 CONCLUSION", |
|
"text": "While DPMs have demonstrated remarkable potential, improving their training and inference efficiency while maintaining flexible and accurate results for conditional generation is an ongoing challenge, particularly for cross-modal tasks. Our Conditional Discrete Contrastive Diffusion (CDCD) loss addresses this by maximizing the mutual information between the conditioning input and the generated output. Our contrastive diffusion mechanisms and negative sampling methods effectively incorporate this loss into DPM training. Extensive experiments on various cross-modal conditional generation tasks demonstrate the efficacy of our approach in bridging drastically differing domains. Exciting directions for future work include incorporating additional guidance into our contrastive learning process, and extending this work to DPMs operating in a continuous space.\nEthics Statement. As in other media generation works, there are possible malicious uses of such media to be addressed by oversight organizations and regulatory agencies.\nReproducibility Statement. We provide implementation details in the supplement and will release our code and pre-trained models to ensure the reproducibility of this work." |
|
}, |
|
{ |
|
"heading": "A MORE QUALITATIVE RESULTS", |
|
"text": "" |
|
}, |
|
{ |
|
"heading": "A.1 GENERATED MUSIC SAMPLES", |
|
"text": "For qualitative samples of synthesized dance music sequences, please refer to our anonymous page in the supplement with music samples. In addition to the generated music samples on AIST++ Tsuchida et al. (2019); Li et al. (2021) and TikTok Dance-Music Dataset Zhu et al. (2022), we also include some qualitative samples obtained with the music editing operations based on the dance-music genre annotations from AIST++. Specifically, we edit the original paired motion conditioning input with a different dance-music genre using a different dance choreographer.\nDiscussion on Musical Representations and Audio Quality. It is worth noting that we only compare the overall audio quality with that of D2M-GAN Zhu et al. (2022). This is due to the nature of the different musical representations in the literature of deep-learning based music generation Gan et al. (2020a); Dong et al. (2018); Huang et al. (2019); Gan et al. (2020b); Aggarwal & Parikh (2021). There are mainly two categories for adopted musical representations in previous works: pre-defined symbolic and learning-based representations Ji et al. (2020); Briot et al. (2020). For the former symbolic music representation, typical options include 1D piano-roll and 2D MIDI-based representations. While these works benefit from the pre-defined music synthesizers and produce music that does not include raw audio noise, the main limitation is that such representations are usually limited to a single specific instrument, which hinders their flexibility to be applied in wider and more complex scenarios such as dance videos. In contrast, the learning-based music representations (i.e., musical VQ in our case) rely on well-trained music synthesizers as decoders, but can be used as a unified representation for various musical sounds, e.g., instruments or voices. However, the training of such music encoders and decoders for high-quality audio signals itself remains a challenging problem. Specifically, high-quality audio is a form of high-dimensional data with an extremely large sampling rate, even compared to high-resolution images. For example, the sampling rate for CD-quality audio signals is 44.1 kHz, resulting in 2,646,000 data points for a one-minute musical piece. To this end, existing deep learning based works Dhariwal et al. (2020); Kumar et al. (2019) for music generation employ methods to reduce the number of dimensions, e.g., by introducing hop lengths and a smaller sampling rate. These operations help to make music learning and generation more computationally tractable, but also introduce additional noise in the synthesized audio signals.\nIn this work, we adopt the pre-trained JukeBox model Dhariwal et al. (2020) as our music encoder and decoder for the musical VQ representation. The adopted model has a hop length of 128, which corresponds to the top-level model from their original work Dhariwal et al. (2020). Jukebox employs 3 models: top-, middle-, and bottom-level, with both audio quality and required computation increasing from the first to the last model. As an example, in the supplemental HTML page, we provide music samples directly reconstructed from JukeBox using the top-level model we employ in our work, compared to the ground-truth audio. While they allow for high-quality audio reconstruction (from the bottom-level model, with a hop length of 8), it requires much more time and computation not only for training but also for the final inference, e.g., 3 hours to generate a 20-second musical sequence. 
In this work, we adopt the pre-trained Jukebox model Dhariwal et al. (2020) as our music encoder and decoder for the musical VQ representation. The adopted model has a hop length of 128, which corresponds to the top-level model from their original work Dhariwal et al. (2020). Jukebox employs 3 models: top-, middle-, and bottom-level, with both audio quality and required computation increasing from the first to the last model. As an example, in the supplemental HTML page, we provide music samples directly reconstructed from Jukebox using the top-level model we employ in our work, compared to the ground-truth audio. While the bottom-level model (with a hop length of 8) allows for higher-quality audio reconstruction, it requires much more time and computation not only for training but also for the final inference, e.g., 3 hours to generate a 20-second musical sequence. As the synthesized music from the top-level model includes some audible noise, we apply a noise reduction operation Sainburg et al. (2020). However, the overall audio quality is not a primary factor that we specifically address in this work on cross-modal conditioning and generation, as it largely depends on the specific music encoder and decoder that are employed. This explains why we report similar MOS scores in terms of the general audio quality." |
|
}, |
|
{ |
|
"heading": "A.2 SYNTHESIZED IMAGES", |
|
"text": "We present more qualitative examples for text-to-image synthesis and class-conditioned image synthesis in Fig. 5, Fig. 6, and Fig. 7." |
|
}, |
|
{ |
|
"heading": "B DETAILED PROOF AND TRAINING", |
|
"text": "" |
|
}, |
|
{ |
|
"heading": "B.1 LOWER BOUND OF CDCD LOSS", |
|
"text": "We show that the proposed CDCD loss has a lower bound related to the mutual information and the number of negative samples N . The derivations below are similar to those from Oord et al. (2018):\nLCDCD := EZ [\u2212log p\u03b8(z0|c) p\u03b8(z0)\np\u03b8(z0|c) p\u03b8(z0)\n+ \u2211\nzj\u2208Z\u2032 p\u03b8(z\nj 0|c)\np\u03b8(z j 0)\n] (6a)\n= EZ log [1 + p\u03b8(z0) p\u03b8(z0|c) \u2211\nzj\u2208Z\u2032\np\u03b8(z j 0|c)\np\u03b8(z j 0)\n] (6b)\n\u2248 EZ log [1 +N p\u03b8(z0)\np\u03b8(z0|c) EZ\u2032 [\np\u03b8(z j 0|c)\np\u03b8(z j 0)\n]] (6c)\n= EZ log[1 +N p\u03b8(z0)\np\u03b8(z0|c) ] (6d)\n\u2265 EZ log[N p\u03b8(z0)\np\u03b8(z0|c) ] (6e)\n= log(N)\u2212 I(z0, c). (6f)" |
|
}, |
|
{ |
|
"heading": "B.2 CONVENTIONAL VARIATIONAL LOSS", |
|
"text": "The conventional variational loss Lvb is derived as follows Sohl-Dickstein et al. (2015b):\nLvb(x) := Eq[\u2212log p\u03b8(x0:T )\nq(x1:T |x0) ]\n= Eq[\u2212log p(xT )\u2212 \u2211 t>1 log p\u03b8(xt\u22121|xt) q(xt|xt\u22121) \u2212 log p\u03b8(x0|x1) q(x1|x0) ]\n= Eq[\u2212log p(xT )\u2212 \u2211 t>1 log p\u03b8(xt\u22121|xt) q(xt\u22121|xt, x0) \u00b7 q(xt\u22121|x0) q(xt|x0) \u2212 log p\u03b8(x0|x1) q(x1|x0) ]\n= Eq[\u2212log p(xT ) q(xT |x0) \u2212 \u2211 t>1 log p\u03b8(xt\u22121|xt) q(xt\u22121|xt, x0) \u2212 log p\u03b8(x0|x1)]\n= Eq[DKL(q(xT |x0)||p(xT )) + \u2211 t>1 DKL(q(xt\u22121|xt, x0)||p\u03b8(xt\u22121|xt))\u2212 log p\u03b8(x0|x1)].\n(7)\nB.3 Lvb WITH CONDITIONING PRIOR\nFollowing the unconditional conventional variational loss, we then show its conditional variant with the conditioning c as prior, which has also been adopted in Gu et al. (2022).\nLvb(x, c) = L0 + L1 + ...+ LT\u22121 + LT L0 = \u2212log p\u03b8(x0|x1, c) Lt\u22121 = DKL(q(xt\u22121|xt, x0)||p\u03b8(xt\u22121|xt, c)) LT = DKL(q(xT |x0)||p(xT ))\n(8)" |
|
}, |
|
{ |
|
"heading": "B.4 STEP-WISE AND SAMPLE-WISE CONTRASTIVE DIFFUSION", |
|
"text": "Below, we show the full derivation for the step-wise parallel contrastive diffusion loss. Given that the intermediate variables from z1:T are also taken into account in this step-wise contrastive diffusion, we slightly modify the initial notation f(z0, c) =\np\u03b8(z0|c) p\u03b8(z0) from Eq.(2) in the main paper to\nf(z, c) = p\u03b8(z0:T |c)p\u03b8(z0:T ) .\nLCDCD\u2212Step := \u2212EZ [log f(z, c) f(z, c) + \u2211\nzj\u2208Z\u2032 f(z j , c)\n] (9a)\n= EZ log [1 + \u2211 zj\u2208Z\u2032 f(z j , c)\nf(z, c) ] (9b)\n= EZ log [1 + p\u03b8(z0:T ) p\u03b8(z0:T |c) \u2211\nzj\u2208Z\u2032\np\u03b8(z j 0:T |c)\np\u03b8(z j 0:T )\n] (9c)\n\u2248 EZ log [1 + p\u03b8(z0:T )\np\u03b8(z0:T |c) NEZ\u2032\np\u03b8(z j 0:T |c)\np\u03b8(z j 0:T )\n] (same as Eq.(1c)) (9d)\n\u2248 EZEq log[ q(z1:T |z0) p\u03b8(z0:T |c) N p\u03b8(z0:T |c) q(z1:T |z0) ] (conditional p\u03b8) (9e)\n\u2248 Eq[\u2212log p\u03b8(z0:T |c) q(z1:T |z0) ]\u2212N EZ\u2032Eq[\u2212log p\u03b8(z0:T |c) q(z1:T |z0) ] (9f)\n= Lvb(z, c)\u2212 \u2211\nzj\u2208Z\u2032 Lvb(zj , c). (9g)\nSimilarly for the sample-wise auxiliary contrastive diffusion, the loss can be derived as follows:\nLCDCD\u2212Sample := \u2212EZ [log f(z0, c) f(z0, c) + \u2211 zj\u2208Z\u2032 f(z j 0, c) ] (10a)\n= EZ log [1 + p\u03b8(z0)\np\u03b8(z0|c) NEZ\u2032 [\np\u03b8(z j 0|c)\np\u03b8(z j 0)\n]] (10b)\n\u2248 EZEq log[ q(z1:T |z0) p\u03b8(z0|c) N p\u03b8(z0|c) q(z1:T |z0) ] (10c)\n\u2248 Eq[\u2212log p\u03b8(z0|c) q(z1:T |z0) ]\u2212N EZ\u2032Eq[\u2212log p\u03b8(z0|c) q(z1:T |z0) ] (10d)\n= Eq[\u2212log p\u03b8(z0|zt, c)]\u2212 \u2211\nzj\u2208Z\u2032 Eq[\u2212log p\u03b8(zj0|zt, c)]. (10e)" |
|
}, |
|
{ |
|
"heading": "B.5 CONDITIONAL DISCRETE CONTRASTIVE DIFFUSION TRAINING", |
|
"text": "The training process for the proposed contrastive diffusion is explained in Algo. 1." |
|
}, |
|
{ |
|
"heading": "C ADDITIONAL EXPERIMENTAL DETAILS AND ANALYSIS", |
|
"text": "" |
|
}, |
|
{ |
|
"heading": "C.1 DANCE-TO-MUSIC TASK", |
|
"text": "Implementation. The sampling rate for all audio signals is 22.5 kHz in our experiments. We use 2-second music samples as in Zhu et al. (2022) for our main experiments, resulting in 44,100 audio data points for each raw music sequence. For the Music VQ-VAE, we fine-tuned Jukebox Dhariwal et al. (2020) on our data to leverage its pre-learned codebook from a large-scale music dataset (approximately 1.2 million songs). The codebook size K is 2048, with a token dimension dz = 128, and the hop-length L is 128 in our default experimental setting. For the motion module, we deploy a backbone stacked with convolutional layers and residual blocks. The dimension size of the embedding we use for music conditioning is 1024. For the visual module, we extract I3D features Carreira & Zisserman (2017) using a model pre-trained on Kinectics Kay et al. (2017) as the visual conditioning information, with a dimension size of 2048. In the implementation of our contrastive diffusion model, we adopt a transformer-based backbone to learn the denoising network p\u03b8. It includes 19 transformer blocks, in which each block is consists of full-attention, cross-attention and a feed-forward network, and the channel size for each block is 1024. We set the initial weight for the contrastive loss as\nAlgorithm 1 Conditional Discrete Contrastive Diffusion Training. The referenced equations can be found in the main paper.\nInput: Initial network parameters \u03b8, contrastive loss weight \u03bb, learning rate \u03b7, number of negative samples N , total diffusion steps T , conditioning information c, contrastive mode m \u2208 {Step, Sample}. 1: for each training iteration do 2: t \u223c Uniform({1, 2, ..., T}) 3: zt \u2190 Sample from q(zt|zt\u22121) 4: Lvb \u2190 \u2211 i=1,...,t Li \u25b7 Eq. 1\n5: if m == Step then 6: for j = 1, ..., N do 7: zjt \u2190 Sample from q(z j t |z j t\u22121, c) \u25b7 from negative variables in previous steps 8: end for 9: LCDCD = \u2212 1N \u2211 Ljvb \u25b7 Eq. 3\n10: else if m == Sample then 11: for j = 1, ..., N do 12: zt \u2190 Sample from q(zt|zj0, c) \u25b7 from negative variables in step 0 13: end for 14: LCDCD = \u2212 1N \u2211 Ljz0 \u25b7 Eq. 4 15: end if 16: L \u2190 Lvb + \u03bbLCDCD \u25b7 Eq. 5 17: \u03b8 \u2190 \u03b8 \u2212 \u03b7\u2207\u03b8L 18: end for\n\u03bb = 5e\u2212 5. The numbers of intra- and inter-negative samples for each GT music sample are both 10. The AdamW Loshchilov & Hutter (2017) optimizer with \u03b21 = 0.9 and \u03b22 = 0.96 is deployed in our training, with a learning rate of 4.5e\u2212 4. We also employ an adaptive weight for the denoising loss weight by gradually decreasing the weight as the diffusion step increases and approaches the end of the chain. The visual module, motion module, and the contrastive diffusion model are jointly optimized.\nOther than the aforementioned implementation details, we also include the mask token technique that bears resemblance to those used in language modelling Devlin et al. (2018) and text-to-image synthesis Gu et al. (2022) for our dance-to-music generation task. We adopt a truncation rate of 0.86 in our inference.\nMOS Evaluation Test. We asked a total of 32 participants to participate in our subjective Mean Opinion Scores (MOS) music evaluations Zhu et al. (2022); Kumar et al. (2019), among which 11 of them are female, while the rest are male. For the dance-music coherence test, we fuse the generated music samples with the GT videos as post-processing. 
MOS Evaluation Test. We asked a total of 32 participants to take part in our subjective Mean Opinion Score (MOS) music evaluations Zhu et al. (2022); Kumar et al. (2019); 11 of them are female and the rest are male. For the dance-music coherence test, we fuse the generated music samples with the GT videos as post-processing. We then asked each evaluator to rate 20 generated videos with a score of 1 (least coherent) to 5 (most coherent) after watching the processed video clip. Specifically, the participants are asked to pay more attention to the dance-music coherence, in terms of the dance moves corresponding to the music genre and rhythm, rather than to the overall music quality, with reference to the GT video clips with the original music. For the overall quality evaluations, we only play the audio tracks, without the video frames, to each evaluator. As before, they are asked to rate the overall music quality with a score of 1 (worst audio quality) to 5 (best audio quality).\nTraining Cost. For the dance-to-music experiments on the AIST++ dataset, we use 4 NVIDIA RTX A5000 GPUs and train the model for approximately 2 days. For the same task on the TikTok Dance-Music dataset, the training takes approximately 1.5 days on the same hardware.\nComplete Results for Contrastive Settings. As discussed in our main paper, there are four possible combinations of contrastive settings given the different contrastive diffusion mechanisms and negative sampling methods. Here, we include the complete quantitative scores for the different contrastive settings in Tab. 5. We observe that all four contrastive settings, including the Step-Inter and Sample-Intra settings that are not reported in our main paper, help to improve the performance. As noted, amongst all the settings, Step-Intra and Sample-Inter are more reasonable and yield larger improvements for intra-sample data attributes (i.e., beats scores) and instance-level features (i.e., genre accuracy scores), respectively.\nAblation on Music Length. Although we use 2-second musical sequences in the main experiments for consistent and fair comparisons with Zhu et al. (2022), our framework can also synthesize longer musical sequences. In the supplementary, we show generated music sequences of 6 seconds. The quantitative evaluations for different musical sequence lengths are presented in Tab. 6, where we show better performance when synthesizing longer musical sequences." |
|
}, |
|
{ |
|
"heading": "C.2 TEXT-TO-IMAGE TASK", |
|
"text": "Implementation. For the text-to-image generation task, we adopt VQ-GAN Esser et al. (2021b) as the discrete encoder and decoder. The codebook size K is 2886, with a token dimension dz = 256. VQGAN converts a 256\u00d7 256 resolution image to 32\u00d7 32 discrete tokens. For the textual conditioning, we employ the pre-trained CLIP Radford et al. (2021) model to encode the given textual descriptions. The denoising diffusion model p\u03b8 has 18 transformer blocks and a channel size of 192, which is a similar model scale to the small version of VQ-Diffusion Gu et al. (2022). We use \u03bb = 5e \u2212 5 as the contrastive loss weight. Similar to the dance-to-music task, we also use the adaptive weight that changes within the diffusion stages. We keep the same truncation rate of 0.86 as in our dance-to-music experiment and in Gu et al. (2022). Unlike in the dance-to-music experiments, where we jointly learn the conditioning encoders, both the VQ-GAN and CLIP models are fixed during the contrastive diffusion training.\nTraining Cost. For the text2image task experiments on the CUB200 dataste, the training takes approximately 5 days using 4 NVIDIA RTX A5000 GPUs. For the same experiments on the MSCOCO dataset, we run the experiments on Amazon Web Services (AWS) using 8 NVIDIA Tesla V100 GPUs. This task required 10 days of training." |
|
}, |
|
{ |
|
"heading": "C.3 CLASS-CONDITIONED IMAGE SYNTHESIS TASK", |
|
"text": "Implementation. For the class-conditioned image synthesis, we also adopt the pre-trained VQGAN Esser et al. (2021b) as the discrete encoder and decoder. We replace the conditioning encoder with class embedding optimized during the contrastive diffusion training. The size of the conditional embedding is 512. Other parameters and techniques remain the same, as in the text-to-image task.\nTraining Cost. For the class-conditioned experiments on the ImageNet, we use 8 NVIDIA Tesla V100 GPUs running on AWS. This task required 20 days of training." |
|
} |
|
], |
|
"year": 2022, |
|
"abstractText": "Diffusion probabilistic models (DPMs) have become a popular approach to conditional generation, due to their promising results and support for cross-modal synthesis. A key desideratum in conditional synthesis is to achieve high correspondence between the conditioning input and generated output. Most existing methods learn such relationships implicitly, by incorporating the prior into the variational lower bound. In this work, we take a different route\u2014we explicitly enhance input-output connections by maximizing their mutual information. To this end, we introduce a Conditional Discrete Contrastive Diffusion (CDCD) loss and design two contrastive diffusion mechanisms to effectively incorporate it into the denoising process, combining the diffusion training and contrastive learning for the first time by connecting it with the conventional variational objectives. We demonstrate the efficacy of our approach in evaluations with diverse multimodal conditional synthesis tasks: dance-to-music generation, text-to-image synthesis, as well as class-conditioned image synthesis. On each, we enhance the inputoutput correspondence and achieve higher or competitive general synthesis quality. Furthermore, the proposed approach improves the convergence of diffusion models, reducing the number of required diffusion steps by more than 35% on two benchmarks, significantly increasing the inference speed.", |
|
"creator": "LaTeX with hyperref" |
|
}, |
|
"output": [ |
|
[ |
|
"1. \"While I find the idea novel, I think the method is quite elaborate and can imply more computational resources.\"", |
|
"2. \"First, including negative samples as part of the loss will increase computation making the computational cost even more expensive for a denoising diffusion process.\"", |
|
"3. \"Second, while the proposed stepwise diffusion allows parallelization, it still requires more resources that can increase the cost of an already expensive denoising diffusion process.\"", |
|
"4. \"The paper lacks an ablation study about the parameter \u03bb which controls the contribution to the total loss of the proposed regularizer. According to Section 4.1, \u03bb was set to 5e-5, which I find the value too low. It is unclear how to set this parameter from the experiments. More importantly, what the impact of increasing the value of \u03bb and thus enforcing the regularizer stronger on performance is not clear.\"", |
|
"5. \"The paper also misses an ablation study about the latent encoder. What is the effect of not even using one? Wouldn\u2019t a latent encoder likely reduce the information (in the information theoretical sense) from the original input. Can the proposed methods work on raw signals, i.e., latents are the input signals directly.\"", |
|
"6. \"The experiments in paragraph \u201cResults and Discussion\u201d and Fig. 4 state that because the proposed method requires fewer steps to converge the proposed method is faster to converge. While the experiments show a reduction in steps, it is unclear about the cost of each step in terms of time in the proposed method. I think having a paper demonstrating that the proposed method indeed reduces the time of convergence is more important than the number of steps.\"", |
|
"7. \"The experiment in Table 3 is missing a more appropriate baseline: Stable Diffusion. I think using Stable Diffusion instead of DALLE makes more sense because Stable Diffusion also uses a latent representation while DALLE does not.\"", |
|
"8. \"From the theoretical perspective, is there a proof showing that the proposed regularizer combined w/ the variational-bound-based loss still preserves the Langevin dynamics in some way? I think discussing the theoretical guarantees can be informative.\"", |
|
"9. \"After engaging with the authors in the discussion, I still think the paper can benefit from reporting wall-clock time of the training phase, add more extensive ablation studies, and add Stable Diffusion as a baseline. For the most part, most of my concerns about clarity were addressed. Nevertheless, because I think there are missing experiments, I cannot champion the paper fully as I think the paper can benefit from another revision.\"" |
|
], |
|
[ |
|
"1. The authors should provide more evidence to support the claim that incorporating prior into the variational lower bound can lead to the loss of the cross-modal correspondence.", |
|
"2. Would the objective of enhancing cross-modal relationships contradict to increase the sample quality? How would the authors balance the variational loss and contrastive loss?", |
|
"3. In the construction of inter-negative samples, the authors take all the images x\u2019 other than x as negative samples. In this way, similar images may also be considered negative samples. How would the authors address this?", |
|
"4. In the text-to-image generation task, the authors use the VQ-diffusion-S as the baseline. The results of the proposed approach slightly outperform the VQ-diffusion-S while falling behind the VQ-diffusion-B greatly. The authors should verify the effectiveness of the proposed approach on larger models.", |
|
"5. In Table3, the performance of the proposed approach falls behind the DF-GAN." |
|
], |
|
[ |
|
"1. By zooming in the synthesized images shown in Fig. 6 and Fig. 7, it seems to me that they are not as good as some SOTA results obtained by DALLE-V2, Imagen, etc.", |
|
"2. The authors need to explain the reasons and show/compare more qualitative results on the tasks of text-to-image synthesis and class-conditioned image synthesis in their supplemental materials." |
|
], |
|
[ |
|
"1. I did't see any weaknesses" |
|
] |
|
], |
|
"review_num": 4, |
|
"item_num": [ |
|
9, |
|
5, |
|
2, |
|
1 |
|
] |
|
} |