Update README.md
README.md (changed)
@@ -29,7 +29,7 @@ The authors call this procedure "consistency distillation (CD)".
 Consistency models can also be trained from scratch to generate clean images from a noisy image and timestep, which the authors call "consistency training (CT)".
 
 This model is a `diffusers`-compatible version of the [cd_imagenet64_lpips.pt](https://github.com/openai/consistency_models#pre-trained-models) checkpoint from the [original code and model release](https://github.com/openai/consistency_models).
-This model was distilled from an [EDM model](https://arxiv.org/pdf/2206.00364.pdf) trained on the ImageNet 64x64 dataset, using [LPIPS](https://richzhang.github.io/PerceptualSimilarity/) as the measure of closeness.
+This model was distilled (via consistency distillation (CD)) from an [EDM model](https://arxiv.org/pdf/2206.00364.pdf) trained on the ImageNet 64x64 dataset, using [LPIPS](https://richzhang.github.io/PerceptualSimilarity/) as the measure of closeness.
 See the [original model card](https://github.com/openai/consistency_models/blob/main/model-card.md) for more information.
 
 ## Download
@@ -48,7 +48,7 @@ pipe = ConsistencyModelPipeline.from_pretrained("dg845/diffusers-cd_imagenet64_l
 
 The original model checkpoint can be used with the [original consistency models codebase](https://github.com/openai/consistency_models).
 
-Here is an example of using the `
+Here is an example of using the `cd_imagenet64_lpips` checkpoint with `diffusers`:
 
 ```python
 import torch
@@ -63,7 +63,7 @@ pipe.to(device)
 
 # Onestep Sampling
 image = pipe(num_inference_steps=1).images[0]
-image.save("
+image.save("cd_imagenet64_lpips_onestep_sample.png")
 
 # Onestep sampling, class-conditional image generation
 # ImageNet-64 class label 145 corresponds to king penguins
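For background on the CD/CT procedures mentioned above, the consistency models paper parameterizes the model so that the boundary condition f(x, ε) = x holds exactly at the smallest timestep. A minimal sketch of those skip scalings, assuming the paper's published defaults (σ_data = 0.5, ε = 0.002) rather than anything read from this checkpoint's config:

```python
import math

# Assumed defaults from the consistency models paper, not this checkpoint:
SIGMA_DATA = 0.5  # sigma_data
EPS = 0.002       # smallest timestep epsilon

# The consistency function is combined as
#   f(x, t) = c_skip(t) * x + c_out(t) * F_theta(x, t)

def c_skip(t: float) -> float:
    # Weight on the input x; equals 1 at t = EPS.
    return SIGMA_DATA**2 / ((t - EPS) ** 2 + SIGMA_DATA**2)

def c_out(t: float) -> float:
    # Weight on the network output; equals 0 at t = EPS.
    return SIGMA_DATA * (t - EPS) / math.sqrt(SIGMA_DATA**2 + t**2)

# At t = EPS the boundary condition holds: f(x, EPS) = 1 * x + 0 * F_theta(x, EPS)
print(c_skip(EPS), c_out(EPS))  # 1.0 0.0
```

This is why a single forward pass at the highest noise level can produce a clean sample: the network output dominates at large t, while at t = ε the function is forced to be the identity.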