---
library_name: diffusers
license: apache-2.0
datasets:
- Drozdik/tattoo_v0
language:
- en
tags:
- art
---
## Model Details
**Abstract**:
*Trained an unconditional diffusion model on a tattoo dataset with a DDIM noise scheduler.*
## Inference
**DDPM** models can use *discrete noise schedulers* for inference, such as:
- [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py)
- [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py)
- [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py)

Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest.
For a good trade-off between quality and inference speed, consider the *ddim* or *pndm* schedulers instead.
See the following code:
```python
# !pip install diffusers
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
model_id = "google/DDIM-tattoo-32"
# Load the model and scheduler.
# DDPMPipeline can be replaced with DDIMPipeline or PNDMPipeline for faster inference.
ddpm = DDPMPipeline.from_pretrained(model_id)

# Run the pipeline (sample random noise and denoise it).
image = ddpm().images[0]

# Save the generated image.
image.save("ddpm_generated_image.png")
```