dome272 committed
Commit af81976
1 Parent(s): ac22e3d

Update README.md

Files changed (1)
  1. README.md +86 -13
README.md CHANGED
@@ -7,25 +7,98 @@ tags:
  - wuerstchen
  ---

- # How-to-use

- By using this pipeline with `AutoPipelineForText2Image`, the required prior pipeline: https://huggingface.co/warp-diffusion/wuerstchen-prior will automatically be downloaded
- and run.

  ```py
  import torch
- from diffusers import AutoPipelineForText2Image
  from diffusers.pipelines.wuerstchen import default_stage_c_timesteps

- pipe = AutoPipelineForText2Image.from_pretrained("warp-ai/wuerstchen", torch_dtype=torch.float16).to("cuda")

  caption = "Anthropomorphic cat dressed as a fire fighter"
- images = pipe(
-     caption,
-     width=1024,
-     height=1536,
-     prior_timesteps=default_stage_c_timesteps,
-     prior_guidance_scale=4.0,
-     num_images_per_prompt=2,
  ).images
- ```
  - wuerstchen
  ---

+ <img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/i-DYpDHw8Pwiy7QBKZVR5.jpeg" width=1500>
+
+ ## Würstchen - Overview
+ Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce
+ computational costs for both training and inference by orders of magnitude. Training on 1024x1024 images is far more expensive than training on 32x32 images. Usually, other
+ works use a relatively small compression, in the range of 4x - 8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial
+ compression. This was previously unseen, because common methods already fail to faithfully reconstruct detailed images after a 16x spatial compression. Würstchen employs a
+ two-stage compression, which we call Stage A and Stage B. Stage A is a VQGAN and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://arxiv.org/abs/2306.00637)).
+ A third model, Stage C, is learned in that highly compressed latent space. This training requires a fraction of the compute used for current top-performing models, which also
+ makes inference cheaper and faster.
+
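+ To make the 42x figure concrete, here is a minimal sketch of the latent-size arithmetic; the exact latent resolution (roughly 24x24 for a 1024x1024 image) is an assumption taken from the diffusers Würstchen pipelines, not something stated in this card:
+
+ ```py
+ import math
+
+ # Sketch of the spatial-compression arithmetic (latent resolution assumed from
+ # the diffusers Würstchen implementation, not an official figure).
+ image_size = 1024
+ spatial_compression = 42.67   # approximate per-axis compression factor
+ latent_size = math.ceil(image_size / spatial_compression)
+ print(latent_size)            # 24 -> Stage C diffuses on ~24x24 latents instead of 1024x1024 pixels
+ ```
+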
+ ## Würstchen - Decoder
+ The Decoder is what we refer to as "Stage A" and "Stage B". The decoder takes in image embeddings, either generated by the Prior (Stage C) or extracted from a real image,
+ and decodes them back into pixel space. Specifically, Stage B first decodes the image embeddings into the VQGAN latent space, and Stage A (which is a VQGAN)
+ decodes those latents into pixel space. Together, they achieve a spatial compression of 42.
+
+ **Note:** The reconstruction is lossy and loses information about the image. The current Stage B often lacks details in the reconstructions, which are especially noticeable to
+ us humans when looking at faces, hands, etc. We are working on making these reconstructions even better in the future!
+
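+ For orientation, here is a minimal sketch of how the two stages map onto the diffusers decoder pipeline; the attribute names (`decoder`, `vqgan`) are assumptions based on the current `WuerstchenDecoderPipeline` implementation rather than a documented API:
+
+ ```py
+ import torch
+ from diffusers import WuerstchenDecoderPipeline
+
+ # The decoder pipeline bundles both stages described above.
+ decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
+     "warp-ai/wuerstchen", torch_dtype=torch.float16
+ )
+
+ stage_b = decoder_pipeline.decoder  # assumed attribute: diffusion model, image embeddings -> VQGAN latents
+ stage_a = decoder_pipeline.vqgan    # assumed attribute: VQGAN, latents -> pixels
+ ```
+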
+ ### Image Sizes
+ Würstchen was trained on image resolutions between 1024x1024 and 1536x1536. We sometimes also observe good outputs at resolutions like 1024x2048 (see the sketch below). Feel free to try it out.
+ We also observed that the Prior (Stage C) adapts extremely quickly to new resolutions, so fine-tuning it at 2048x2048 should be computationally cheap.
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/IfVsUDcP15OY-5wyLYKnQ.jpeg" width=1000>
+
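+ A minimal sketch of requesting one of the taller/wider resolutions mentioned above; it reuses `prior_pipeline` and `default_stage_c_timesteps` from the full example in the next section, so the values here are only an illustration:
+
+ ```py
+ # Assumes `prior_pipeline` and `default_stage_c_timesteps` are set up as in the "How to run" example below.
+ prior_output = prior_pipeline(
+     prompt="Anthropomorphic cat dressed as a fire fighter",
+     height=1024,
+     width=2048,   # e.g. the 1024x2048 outputs mentioned above
+     timesteps=default_stage_c_timesteps,
+     guidance_scale=4.0,
+ )
+ ```
+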
+ ## How to run
+ This pipeline should be run together with the prior pipeline https://huggingface.co/warp-diffusion/wuerstchen-prior:

  ```py
  import torch
+ from diffusers import WuerstchenDecoderPipeline, WuerstchenPriorPipeline
  from diffusers.pipelines.wuerstchen import default_stage_c_timesteps

+ device = "cuda"
+ dtype = torch.float16
+ num_images_per_prompt = 2
+
+ prior_pipeline = WuerstchenPriorPipeline.from_pretrained(
+     "warp-ai/wuerstchen-prior", torch_dtype=dtype
+ ).to(device)
+ decoder_pipeline = WuerstchenDecoderPipeline.from_pretrained(
+     "warp-ai/wuerstchen", torch_dtype=dtype
+ ).to(device)

  caption = "Anthropomorphic cat dressed as a fire fighter"
+ negative_prompt = ""
+
+ prior_output = prior_pipeline(
+     prompt=caption,
+     height=1024,
+     width=1536,
+     timesteps=default_stage_c_timesteps,
+     negative_prompt=negative_prompt,
+     guidance_scale=4.0,
+     num_images_per_prompt=num_images_per_prompt,
+ )
+ decoder_output = decoder_pipeline(
+     image_embeddings=prior_output.image_embeddings,
+     prompt=caption,
+     negative_prompt=negative_prompt,
+     num_images_per_prompt=num_images_per_prompt,
+     guidance_scale=0.0,
+     output_type="pil",
  ).images
+ ```
+
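+ Because `output_type="pil"` is used above, `decoder_output` is a list of `PIL.Image` objects; a short optional follow-up for saving them (the file names are just an example):
+
+ ```py
+ for i, image in enumerate(decoder_output):
+     image.save(f"wuerstchen_{i}.png")  # example file names
+ ```
+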
+ ## Model Details
+ - **Developed by:** Pablo Pernias, Dominic Rampas
+ - **Model type:** Diffusion-based text-to-image generation model
+ - **Language(s):** English
+ - **License:** MIT
+ - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a Diffusion model in the style of Stage C from the [Würstchen paper](https://arxiv.org/abs/2306.00637) that uses a fixed, pretrained text encoder ([CLIP ViT-bigG/14](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
+ - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2306.00637).
+ - **Cite as:**
+
+       @misc{pernias2023wuerstchen,
+             title={Wuerstchen: Efficient Pretraining of Text-to-Image Models},
+             author={Pablo Pernias and Dominic Rampas and Marc Aubreville},
+             year={2023},
+             eprint={2306.00637},
+             archivePrefix={arXiv},
+             primaryClass={cs.CV}
+       }
+
+ ## Environmental Impact
+
+ **Würstchen v2 - Estimated Emissions**
+ Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.
+
+ - **Hardware Type:** A100 PCIe 40GB
+ - **Hours used:** 24602
+ - **Cloud Provider:** AWS
+ - **Compute Region:** US-east
+ - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 2275.68 kg CO2 eq.
+
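+ As a rough back-of-the-envelope check of the formula above (this assumes the A100 PCIe 40GB's 250 W TDP and a US-east grid intensity of roughly 0.37 kg CO2 eq./kWh, the kind of value used by the calculator; neither number is stated in this card):
+
+ ```py
+ # Hypothetical sanity check of the reported estimate, not an official calculation.
+ power_kw = 0.250         # assumed A100 PCIe 40GB TDP
+ hours = 24602            # reported GPU hours
+ kg_co2_per_kwh = 0.37    # assumed US-east grid carbon intensity
+ co2_kg = power_kw * hours * kg_co2_per_kwh
+ print(co2_kg)            # ~2276 kg CO2 eq., consistent with the reported 2275.68
+ ```
+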