Unconditional Priors Matter! Improving Conditional Generation of Fine-Tuned Diffusion Models
Abstract
Classifier-Free Guidance (CFG) is a fundamental technique in training conditional diffusion models. The common practice for CFG-based training is to use a single network to learn both conditional and unconditional noise prediction, with a small dropout rate on the conditioning. However, we observe that jointly learning unconditional noise with limited bandwidth during training results in poor unconditional priors. More importantly, these poor unconditional noise predictions significantly degrade the quality of conditional generation. Inspired by the fact that most CFG-based conditional models are trained by fine-tuning a base model with better unconditional generation, we first show that simply replacing the unconditional noise in CFG with that predicted by the base model can significantly improve conditional generation. Furthermore, we show that a diffusion model other than the one the fine-tuned model was trained from can be used for unconditional noise replacement. We experimentally verify our claim with a range of CFG-based conditional models for both image and video generation, including Zero-1-to-3, Versatile Diffusion, DiT, DynamiCrafter, and InstructPix2Pix.
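The idea in the abstract can be sketched as follows: standard CFG extrapolates from the fine-tuned model's own unconditional prediction, while the proposed variant swaps in the base model's unconditional prediction. This is a minimal toy sketch with NumPy arrays standing in for network outputs; the function and variable names are illustrative, not the paper's actual API.

```python
import numpy as np

def cfg(eps_cond, eps_uncond, scale):
    # Classifier-Free Guidance: extrapolate the conditional prediction
    # away from the unconditional one by the guidance scale.
    return eps_uncond + scale * (eps_cond - eps_uncond)

# Toy noise predictions (stand-ins for real model outputs).
eps_cond_ft = np.array([1.0, 2.0])      # fine-tuned model, conditional
eps_uncond_ft = np.array([0.5, 0.5])    # fine-tuned model, unconditional (often a poor prior)
eps_uncond_base = np.array([0.2, 0.3])  # base model, unconditional (assumed better prior)

# Standard CFG uses the fine-tuned model for both terms.
standard = cfg(eps_cond_ft, eps_uncond_ft, scale=3.0)

# The proposed variant replaces only the unconditional term
# with the base model's prediction; the conditional term is unchanged.
proposed = cfg(eps_cond_ft, eps_uncond_base, scale=3.0)
```

The guidance rule itself is untouched; only the source of the unconditional noise changes, which is why the replacement applies at inference time without retraining.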
Community
Thank you for showing interest in our work! We have published our project page at https://unconditional-priors-matter.github.io/
The following papers were recommended by the Semantic Scholar API
- Efficient Distillation of Classifier-Free Guidance using Adapters (2025)
- Guidance Free Image Editing via Explicit Conditioning (2025)
- TCFG: Tangential Damping Classifier-free Guidance (2025)
- PLADIS: Pushing the Limits of Attention in Diffusion Models at Inference Time by Leveraging Sparsity (2025)
- History-Guided Video Diffusion (2025)
- InPO: Inversion Preference Optimization with Reparametrized DDIM for Efficient Diffusion Model Alignment (2025)
- Diffusion Models without Classifier-free Guidance (2025)