---
tags:
- pytorch
- diffusers
- unconditional-image-generation
- image-generation
- denoising-diffusion
license: apache-2.0
library_name: diffusers
model_name: ddpm-celebahq-256
---
# DDPM CelebAHQ 256 with Safetensors
This repository contains a **denoising diffusion probabilistic model (DDPM)** trained on the CelebA-HQ dataset at a resolution of 256x256. It is based on the original `google/ddpm-celebahq-256` checkpoint and has been updated to store the model weights in the **safetensors** format.
## Model Information
- **Model Type**: `UNet2DModel`
- **Diffusion Process**: DDPM (Denoising Diffusion Probabilistic Models)
- **Training Data**: CelebA HQ dataset
- **Resolution**: 256x256
- **Format**: The model weights are available in both `safetensors` and standard PyTorch (`.pth`) formats.
## Features
- **Safetensors Support**: The weights are stored in the `safetensors` format, a safer alternative to pickle-based PyTorch checkpoints: loading a safetensors file cannot execute arbitrary code, and tensors can be memory-mapped for fast loading (see the loading sketch after this list).
- **Pretrained Model**: The model is pretrained on the CelebA-HQ dataset for unconditional generation of 256x256 face images.
- **Model Formats**: Available in both standard PyTorch and safetensors formats for easy integration into your workflow.
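Beyond loading through `diffusers` (shown under "How to Use" below), the safetensors file can also be inspected directly as a plain state dict. A minimal sketch, assuming the weights live at the repository root under the filename listed in "Model Storage" below:
```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Download the safetensors weights and load them as a plain state dict;
# no pickle is involved, so nothing is executed on load.
weights_path = hf_hub_download(
    "Mou11209203/ddpm-celebahq-256", "diffusion_pytorch_model.safetensors"
)
state_dict = load_file(weights_path)
print(f"Loaded {len(state_dict)} tensors")
for name, tensor in list(state_dict.items())[:3]:
    print(name, tuple(tensor.shape), tensor.dtype)
```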
## Example Images
Here are intermediate samples from the reverse diffusion process, captured every 50 steps from step 50 to step 1000 (a helper for saving such images is sketched after the gallery):
![Step 50](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_50.png)
![Step 100](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_100.png)
![Step 150](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_150.png)
![Step 200](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_200.png)
![Step 250](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_250.png)
![Step 300](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_300.png)
![Step 350](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_350.png)
![Step 400](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_400.png)
![Step 450](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_450.png)
![Step 500](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_500.png)
![Step 550](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_550.png)
![Step 600](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_600.png)
![Step 650](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_650.png)
![Step 700](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_700.png)
![Step 750](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_750.png)
![Step 800](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_800.png)
![Step 850](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_850.png)
![Step 900](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_900.png)
![Step 950](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_950.png)
![Step 1000](https://huggingface.co/Mou11209203/ddpm-celebahq-256/resolve/main/images/image_step_1000.png)
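These images appear to correspond to the intermediate sample every 50 steps of the 1000-step reverse process. If you want to write such images to disk rather than display them interactively, a save helper along these lines can stand in for the `display_sample` function in the script below (a hedged sketch; the exact script used to produce the gallery is not part of this repo):
```python
import os
import numpy as np
import PIL.Image

def save_sample(sample, step, out_dir="images"):
    """Save a model sample in [-1, 1] as a PNG named like the files above."""
    os.makedirs(out_dir, exist_ok=True)
    image = sample.cpu().permute(0, 2, 3, 1)       # NCHW -> NHWC
    image = ((image + 1.0) * 127.5).clamp(0, 255)  # [-1, 1] -> [0, 255]
    image = image.numpy().astype(np.uint8)
    PIL.Image.fromarray(image[0]).save(os.path.join(out_dir, f"image_step_{step}.png"))
```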
## How to Use
Load the model with the Hugging Face `diffusers` library, in either the `safetensors` format or the traditional PyTorch format.
### Requirements
- Install the required dependencies:
```bash
pip install torch diffusers safetensors numpy pillow tqdm
```
### Loading the Model
To load the model and run inference, you can use the following code:
```python
import torch
import numpy as np
import PIL.Image
import tqdm
from diffusers import UNet2DModel, DDPMScheduler

# 1. Initialize the model.
# Use "google/ddpm-celebahq-256" with use_safetensors=False, or this
# repository with use_safetensors=True.
repo_id = "Mou11209203/ddpm-celebahq-256"
model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True)
model.to("cuda")  # Move the model to the GPU
print("model.config:", model.config)

# 2. Initialize the scheduler.
scheduler = DDPMScheduler.from_pretrained(repo_id)
print("scheduler.config:", scheduler.config)

# 3. Create an image of pure Gaussian noise.
torch.manual_seed(1733782420)  # Fix the random seed for reproducibility
noisy_sample = torch.randn(
    1, model.config.in_channels, model.config.sample_size, model.config.sample_size
).to("cuda")
print(f"Noisy sample shape: {noisy_sample.shape}")

# 4. Define a helper to display an intermediate sample.
def display_sample(sample, i):
    image_processed = sample.cpu().permute(0, 2, 3, 1)  # NCHW -> NHWC
    image_processed = (image_processed + 1.0) * 127.5   # [-1, 1] -> [0, 255]
    image_processed = image_processed.numpy().astype(np.uint8)
    image_pil = PIL.Image.fromarray(image_processed[0])
    print(f"Image at step {i}")
    image_pil.show()

# 5. Run the reverse diffusion process.
sample = noisy_sample
for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)):
    # Predict the noise residual.
    with torch.no_grad():
        residual = model(sample, t).sample
    # Compute the less noisy image: move from x_t to x_{t-1}.
    sample = scheduler.step(residual, t, sample).prev_sample
    # Optionally display the image every 50 steps.
    if (i + 1) % 50 == 0:
        display_sample(sample, i + 1)

print("Denoising complete.")
```
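If you only need the final image, `diffusers` also provides `DDPMPipeline`, which wraps the reverse-diffusion loop above into a single call. A minimal sketch reusing the `model` and `scheduler` objects already loaded:
```python
from diffusers import DDPMPipeline

# Wrap the loaded UNet and scheduler in a pipeline; this avoids relying
# on a pipeline config being present in the repository.
pipeline = DDPMPipeline(unet=model, scheduler=scheduler).to("cuda")

# Run the full reverse process and save the resulting PIL image.
image = pipeline(generator=torch.manual_seed(1733782420)).images[0]
image.save("generated_face.png")
```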
## Training
If you want to train your own model, take a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb).
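For orientation, the core of DDPM training is a noise-prediction loss: forward-diffuse a clean image to a random timestep, then regress the UNet output against the injected noise. Below is a minimal, hedged sketch of one optimization step; the model here is deliberately shrunk so the snippet runs anywhere (the released model uses `sample_size=256`), and the notebook above covers the full recipe (data loading, EMA, LR scheduling):
```python
import torch
import torch.nn.functional as F
from diffusers import UNet2DModel, DDPMScheduler

# Small UNet for the sketch only; the released model was trained at 256x256.
model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3,
                    block_out_channels=(64, 128, 256, 256))
scheduler = DDPMScheduler(num_train_timesteps=1000)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Dummy batch; swap in real CelebA-HQ images scaled to [-1, 1].
clean_images = torch.randn(4, 3, 64, 64)
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (clean_images.shape[0],))

# Forward-diffuse, predict the noise, and take the MSE against the true noise.
noisy_images = scheduler.add_noise(clean_images, noise, timesteps)
noise_pred = model(noisy_images, timesteps).sample
loss = F.mse_loss(noise_pred, noise)

loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"loss: {loss.item():.4f}")
```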
## Model Storage
The following files are available for download:
- **Model Weights (PyTorch format)**: `diffusion_pytorch_model.pth`
- **Model Weights (Safetensors format)**: `diffusion_pytorch_model.safetensors` (a conversion sketch follows this list)
- **Generated Images**: `images/image_step_50.png` through `images/image_step_1000.png`, one every 50 steps
- **README.md**: this document, with usage and setup instructions
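If you want to regenerate the safetensors file from the PyTorch weights yourself, the conversion is a direct state-dict dump. A hedged sketch, assuming the `.pth` file stores a plain state dict:
```python
import torch
from safetensors.torch import save_file

# Load the pickle-based checkpoint once, then re-serialize it as safetensors.
state_dict = torch.load("diffusion_pytorch_model.pth", map_location="cpu")
# save_file requires contiguous tensors.
state_dict = {k: v.contiguous() for k, v in state_dict.items()}
save_file(state_dict, "diffusion_pytorch_model.safetensors")
```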
## Citation
If you use this model in your research or project, please cite the original `google/ddpm-celebahq-256` repository:
```bibtex
@misc{ddpm-celebahq-256,
  author = {Google Research},
  title  = {DDPM CelebAHQ 256},
  year   = {2022},
  url    = {https://huggingface.co/google/ddpm-celebahq-256}
}
```
## License
This model is provided under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).