---
license: cc-by-nc-nd-4.0
---

# Benchmarking Spatial Relationships in Text-to-Image Generation
*Tejas Gokhale, Hamid Palangi, Besmira Nushi, Vibhav Vineet, Eric Horvitz, Ece Kamar, Chitta Baral, Yezhou Yang*

<!-- ![](assets/motivating_example_4.png "") -->
<p align=center>
<img src="assets/visor_example_detailed_new.png" height=400px/>
</p>

- We introduce a large-scale challenge dataset SR<sub>2D</sub> that contains sentences describing two objects and the spatial relationship between them.
- We introduce a metric called VISOR (short for **V**erify**I**ng **S**patial **O**bject **R**elationships) to quantify spatial reasoning performance.
- VISOR and SR<sub>2D</sub> can be used off-the-shelf with any text-to-image model.

## SR<sub>2D</sub> Dataset
Our dataset is hosted [here](https://huggingface.co/datasets/tgokhale/sr2d_visor) on the Hugging Face Hub. It contains:
1. The text prompt dataset in `.json` format (`text_spatial_rel_phrases.json`)
2. Images generated using 7 models: GLIDE, CogView2, DALLE-mini, Stable Diffusion, GLIDE + CDM, Stable Diffusion + CDM, and Stable Diffusion v2.1

Alternatively, the text prompt dataset can also be accessed from [`text_spatial_rel_phrases.json`](https://github.com/microsoft/VISOR/blob/main/text_spatial_rel_phrases.json). It contains all examples from the current version of the dataset (31,680 text prompts), along with the corresponding metadata.
The prompt dataset can also be regenerated by running `python create_spatial_phrases.py`.
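To get started with the prompts, the `.json` file can be fetched directly from the Hub and inspected in Python. The snippet below is a minimal sketch rather than official tooling: the `hf_hub_download` call is standard `huggingface_hub` usage, but the assumption that the file holds a flat list of prompt records (and whatever field names those records carry) should be checked against the released data.

```python
import json

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Fetch the prompt file from the SR2D dataset repo on the Hugging Face Hub.
prompt_path = hf_hub_download(
    repo_id="tgokhale/sr2d_visor",
    filename="text_spatial_rel_phrases.json",
    repo_type="dataset",
)

with open(prompt_path, "r", encoding="utf-8") as f:
    data = json.load(f)

# Assumption: the file is a list of prompt records; adjust if the schema differs.
records = data if isinstance(data, list) else list(data.values())
print(f"Loaded {len(records)} prompt records")
print(records[0])  # inspect one prompt and its metadata fields
```
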
## GitHub repository
The GitHub repository for [VISOR](https://github.com/microsoft/VISOR/) contains code for generating images with prompts from the SR<sub>2D</sub> dataset and evaluating the generated images using VISOR.
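As a rough guide to what the evaluation does, the sketch below shows a centroid-based check in the spirit of VISOR: an object detector's boxes for the two named objects are compared to decide whether the stated relationship holds. This is an illustrative simplification only; the relation names, the `detections` format, and the function itself are assumptions made for this example, and the authoritative implementation (including the detector it uses) is in the GitHub repository above.

```python
from typing import Dict, Optional, Tuple

# A detection maps an object name to its bounding box (x_min, y_min, x_max, y_max)
# in image coordinates (origin at the top-left corner).
Box = Tuple[float, float, float, float]


def centroid(box: Box) -> Tuple[float, float]:
    """Return the (x, y) center of a bounding box."""
    x_min, y_min, x_max, y_max = box
    return (x_min + x_max) / 2.0, (y_min + y_max) / 2.0


def spatial_relation_holds(
    detections: Dict[str, Optional[Box]], obj_a: str, obj_b: str, relation: str
) -> bool:
    """Simplified VISOR-style check: both objects must be detected and their
    centroids must satisfy the named relation of obj_a with respect to obj_b."""
    box_a, box_b = detections.get(obj_a), detections.get(obj_b)
    if box_a is None or box_b is None:
        return False  # a missing object means the relationship cannot be verified
    (xa, ya), (xb, yb) = centroid(box_a), centroid(box_b)
    if relation == "to the left of":
        return xa < xb
    if relation == "to the right of":
        return xa > xb
    if relation == "above":
        return ya < yb  # smaller y means higher up in image coordinates
    if relation == "below":
        return ya > yb
    raise ValueError(f"Unknown relation: {relation}")


# Example: a detector found a dog on the left and a cat on the right.
detections = {"dog": (10, 40, 60, 90), "cat": (70, 35, 120, 95)}
print(spatial_relation_holds(detections, "dog", "cat", "to the left of"))  # True
```
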
## References
Code for text-to-image generation:
1. GLIDE: https://github.com/openai/glide-text2im
2. DALLE-mini: https://github.com/borisdayma/dalle-mini
3. CogView2: https://github.com/THUDM/CogView2
4. Stable Diffusion: https://github.com/CompVis/stable-diffusion
5. Composable Diffusion Models: https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
6. OpenAI API for DALLE-2: https://openai.com/api/

## Citation
If you find SR<sub>2D</sub> or VISOR useful in your research, please use the following citation:
```
@article{gokhale2022benchmarking,
  title={Benchmarking Spatial Relationships in Text-to-Image Generation},
  author={Gokhale, Tejas and Palangi, Hamid and Nushi, Besmira and Vineet, Vibhav and Horvitz, Eric and Kamar, Ece and Baral, Chitta and Yang, Yezhou},
  journal={arXiv preprint arXiv:2212.10015},
  year={2022}
}
```