Update README.md
README.md CHANGED
@@ -8,7 +8,7 @@ pipeline_tag: text-to-image
 ---
 # Diffusion Model Alignment Using Direct Preference Optimization

-![row01](01.
+![row01](01.gif)

 Direct Preference Optimization (DPO) for text-to-image diffusion models is a method to align diffusion models to human preferences by directly optimizing on human comparison data. Please check our paper at [Diffusion Model Alignment Using Direct Preference Optimization](https://arxiv.org/abs/2311.12908).

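As a usage sketch to accompany the card: the snippet below shows how a DPO-aligned text-to-image checkpoint like this one is typically loaded for inference with diffusers. The repository id `your-org/dpo-aligned-sd` is a placeholder, since the commit does not name the model repo; DPO fine-tuning only changes the weights, so the standard diffusers text-to-image API applies unchanged.

```python
# Minimal inference sketch, assuming a DPO-tuned Stable Diffusion
# checkpoint hosted on the Hugging Face Hub. "your-org/dpo-aligned-sd"
# is a hypothetical repository id, not taken from this commit.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "your-org/dpo-aligned-sd",  # hypothetical id; substitute the real repo
    torch_dtype=torch.float16,
).to("cuda")

# DPO alignment changes only the model weights, so generation is the
# standard diffusers text-to-image call.
image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("sample.png")
```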