# DDPM CelebAHQ 256 with Safetensors

This repository contains a **denoising diffusion probabilistic model (DDPM)** trained on the CelebA HQ dataset at a resolution of 256x256. The model is based on the original `google/ddpm-celebahq-256` implementation and has been updated to support **safetensors** for model storage.

## Model Information

- **Model Type**: `UNet2DModel`
- **Diffusion Process**: DDPM (Denoising Diffusion Probabilistic Models)
- **Training Data**: CelebA HQ dataset
- **Resolution**: 256x256
- **Format**: The model weights are available in both `safetensors` and standard PyTorch (`.pth`) formats.

## Features

- **Safetensors Support**: The model weights are stored in the `safetensors` format, a safer and more efficient alternative to PyTorch's pickle-based `.pth` files: weights are saved as raw tensors, so loading them cannot execute arbitrary code and supports fast, zero-copy reads.
- **Pretrained Model**: This model is pretrained on the CelebA HQ dataset and is designed for high-quality image generation.
- **Model Formats**: Available in both standard PyTorch and safetensors formats for easy integration into your workflow.

## Example Images

Here are some sample images generated by the model at different diffusion steps:

![Step 50](https://huggingface.co/{repo_name}/resolve/main/images/image_step_50.png)
![Step 100](https://huggingface.co/{repo_name}/resolve/main/images/image_step_100.png)
![Step 150](https://huggingface.co/{repo_name}/resolve/main/images/image_step_150.png)
![Step 200](https://huggingface.co/{repo_name}/resolve/main/images/image_step_200.png)
![Step 250](https://huggingface.co/{repo_name}/resolve/main/images/image_step_250.png)
![Step 300](https://huggingface.co/{repo_name}/resolve/main/images/image_step_300.png)
![Step 400](https://huggingface.co/{repo_name}/resolve/main/images/image_step_400.png)
![Step 500](https://huggingface.co/{repo_name}/resolve/main/images/image_step_500.png)
![Step 600](https://huggingface.co/{repo_name}/resolve/main/images/image_step_600.png)
![Step 700](https://huggingface.co/{repo_name}/resolve/main/images/image_step_700.png)
![Step 800](https://huggingface.co/{repo_name}/resolve/main/images/image_step_800.png)
![Step 900](https://huggingface.co/{repo_name}/resolve/main/images/image_step_900.png)
![Step 1000](https://huggingface.co/{repo_name}/resolve/main/images/image_step_1000.png)

## How to Use

To use this model, you can load it with the `diffusers` library from Hugging Face, in either the `safetensors` format or the traditional `.pth` format.

### Requirements

Install the required dependencies:

```bash
pip install torch diffusers safetensors
```

### Loading the Model

To load the model and run inference, you can use the following code:

```python
import torch
import numpy as np
import PIL.Image
from diffusers import UNet2DModel, DDPMScheduler
import tqdm

# 1. Initialize the model
repo_id = "google/ddpm-celebahq-256"
model = UNet2DModel.from_pretrained(repo_id, use_safetensors=True)
model.to("cuda")  # Move the model to the GPU
print(model.config)

# 2. Initialize the scheduler
scheduler = DDPMScheduler.from_pretrained(repo_id)

# 3. Create an image of pure Gaussian noise
torch.manual_seed(0)  # Set the random seed for reproducibility
noisy_sample = torch.randn(
    1, model.config.in_channels, model.config.sample_size, model.config.sample_size
).to("cuda")
print(f"Noisy sample shape: {noisy_sample.shape}")

# 4. Define a function to save intermediate images
def display_sample(sample, i):
    # Map the model output from [-1, 1] to [0, 255] and convert to uint8
    image_processed = sample.cpu().permute(0, 2, 3, 1)
    image_processed = (image_processed + 1.0) * 127.5
    image_processed = image_processed.numpy().astype(np.uint8)

    image_pil = PIL.Image.fromarray(image_processed[0])
    print(f"Image at step {i}")
    image_pil.save(f"image_step_{i}.png")

# 5. Reverse diffusion process
sample = noisy_sample
for i, t in enumerate(tqdm.tqdm(scheduler.timesteps)):
    # 5a. Predict the noise residual
    with torch.no_grad():
        residual = model(sample, t).sample

    # 5b. Compute the less noisy image: move x_t -> x_{t-1}
    sample = scheduler.step(residual, t, sample).prev_sample

    # 5c. Save an intermediate image every 50 steps
    if (i + 1) % 50 == 0:
        display_sample(sample, i + 1)

print("Denoising complete.")
```
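The `(sample + 1.0) * 127.5` normalization in `display_sample` assumes the denoised sample lies in [-1, 1] and maps it linearly onto 8-bit pixel values. A standalone check of that conversion with NumPy alone, no model required:

```python
import numpy as np

# A fake 1x2x3 channels-last "sample" with values in the model's output range [-1, 1]
sample = np.array([[[-1.0, 0.0, 1.0],
                    [0.5, -0.5, 0.0]]])

# Same linear map as display_sample: [-1, 1] -> [0, 255]
pixels = ((sample + 1.0) * 127.5).astype(np.uint8)
print(pixels[0, 0])  # -1 -> 0, 0 -> 127, 1 -> 255
```

Note that `astype(np.uint8)` truncates toward zero (e.g. 127.5 becomes 127); values already in [-1, 1] cannot overflow the uint8 range, which is why no explicit clamp is needed here.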

## Training

If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb).

## Model Storage

The following files are available for download:

- **Model Weights (PyTorch format)**: `diffusion_pytorch_model.pth`
- **Model Weights (Safetensors format)**: `diffusion_pytorch_model.safetensors`
- **Generated Images**: intermediate samples at steps 50 through 1000
- **README.md**: this document, with usage and setup instructions

## Citation

If you use this model in your research or project, please cite the original `google/ddpm-celebahq-256` repository:

```bibtex
@misc{ddpm-celebahq-256,
  author = {Google Research},
  title  = {DDPM CelebAHQ 256},
  year   = {2022},
  url    = {https://huggingface.co/google/ddpm-celebahq-256}
}
```

## License

This model is provided under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).