mhdang committed b4fd05a (1 parent: c78eb46)

Update README.md

Files changed (1): README.md (+5 −0)
README.md CHANGED

```diff
@@ -1,9 +1,14 @@
 ---
 datasets:
 - yuvalkirstain/pickapic_v2
+language:
+- en
+library_name: diffusers
+pipeline_tag: text-to-image
 ---
 # Diffusion Model Alignment Using Direct Preference Optimization
 
+![row01](01.png)
 
 Direct Preference Optimization (DPO) for text-to-image diffusion models is a method to align diffusion models to text human preferences by directly optimizing on human comparison data. Please check our paper at [Diffusion Model Alignment Using Direct Preference Optimization](https://arxiv.org/abs/2311.12908).
 
```
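For context, the README's one-line description of DPO can be made concrete with a toy sketch of the underlying preference objective. This is a minimal illustration of the generic DPO loss on a single comparison pair, not the paper's diffusion-specific training code: the function name `dpo_loss`, the toy log-likelihood inputs, and the `beta=0.1` default are all illustrative assumptions (Diffusion-DPO replaces exact log-likelihoods with per-pair denoising-loss surrogates).

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Toy scalar DPO preference loss for one (preferred, rejected) pair.

    logp_w, logp_l         -- model log-likelihoods of the preferred (w)
                              and rejected (l) samples (illustrative inputs)
    ref_logp_w, ref_logp_l -- the same quantities under a frozen reference
                              model, which anchors the policy
    beta                   -- strength of the implicit KL regularizer
                              (0.1 here is an arbitrary example value)
    """
    # Margin: how much more the model favors the winner over the loser,
    # relative to the reference model.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)): shrinks as the model learns to prefer the
    # human-chosen sample more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# When model and reference agree exactly, the margin is 0 and the loss
# is log(2); favoring the preferred sample pushes the loss below log(2).
print(dpo_loss(-1.0, -1.0, -1.0, -1.0))  # log(2) ~= 0.6931
print(dpo_loss(-0.5, -2.0, -1.0, -1.0))  # < log(2)
```

Minimizing this quantity over many human comparison pairs is what "directly optimizing on human comparison data" refers to in the README text above.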