mjbuehler committed on
Commit
3f95002
1 Parent(s): 3ac43e5

Update README.md

Files changed (1):
  1. README.md (+79 -26)
README.md CHANGED
@@ -4,58 +4,111 @@ library_name: diffusers
  license: openrail++
  tags:
  - text-to-image
- - text-to-image
  - diffusers-training
  - diffusers
  - lora
  - template:sd-lora
- - stable-diffusion-xl
- - stable-diffusion-xl-diffusers
  instance_prompt: <leaf microstructure>
  widget: []
  ---

- <!-- This model card has been generated automatically according to the information the training script had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
-
- # SDXL LoRA DreamBooth - lamm-mit/leaf_LoRA_SD3_V12

  <Gallery />

  ## Model description

- These are lamm-mit/leaf_LoRA_SD3_V12 LoRA adaption weights for stabilityai/stable-diffusion-3-medium-diffusers.
-
- The weights were trained using [DreamBooth](https://dreambooth.github.io/).

- LoRA for the text encoder was enabled: False.

- Special VAE used for training: None.

- ## Trigger words

  You should use <leaf microstructure> to trigger the image generation.

- ## Download model

- Weights for this model are available in Safetensors format.

- [Download](lamm-mit/leaf_LoRA_SD3_V12/tree/main) them in the Files & versions tab.

- ## Intended uses & limitations

- #### How to use

- ```python
- # TODO: add an example code snippet for running this diffusion pipeline
- ```

- #### Limitations and bias

- [TODO: provide examples of latent issues and potential remediations]

- ## Training details

- [TODO: describe the data used to train the model]
 
  license: openrail++
  tags:
  - text-to-image
  - diffusers-training
  - diffusers
  - lora
  - template:sd-lora
+ - stable-diffusion-3
+ - stable-diffusion-3-diffusers
  instance_prompt: <leaf microstructure>
  widget: []
  ---

+ # Stable Diffusion 3 Medium Fine-tuned with Leaf Images

  <Gallery />

  ## Model description

+ These are LoRA adaptation weights for stabilityai/stable-diffusion-3-medium-diffusers.

+ ## Trigger words

+ The following image was used during fine-tuning, with the keyword <leaf microstructure>:

+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/sI_exTnLy6AtOFDX1-7eq.png)

  You should use <leaf microstructure> to trigger the image generation.

+ #### How to use
+
+ Defining some helper functions:

+ ```python
+ from diffusers import DiffusionPipeline
+ import torch
+ import os
+ from datetime import datetime
+ from PIL import Image
+
+ def generate_filename(base_name, extension=".png"):
+     # Timestamp the filename so repeated runs do not overwrite earlier files
+     timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+     return f"{base_name}_{timestamp}{extension}"
+
+ def save_image(image, directory, base_name="image_grid"):
+     filename = generate_filename(base_name)
+     file_path = os.path.join(directory, filename)
+     image.save(file_path)
+     print(f"Image saved as {file_path}")
+
+ def image_grid(imgs, rows, cols, save=True, save_dir='generated_images', base_name="image_grid",
+                save_individual_files=False):
+     # Paste the images into a single rows x cols grid, optionally saving each tile
+     if not os.path.exists(save_dir):
+         os.makedirs(save_dir)
+
+     assert len(imgs) == rows * cols
+
+     w, h = imgs[0].size
+     grid = Image.new('RGB', size=(cols * w, rows * h))
+
+     for i, img in enumerate(imgs):
+         grid.paste(img, box=(i % cols * w, i // cols * h))
+         if save_individual_files:
+             save_image(img, save_dir, base_name=base_name + f'_{i}-of-{len(imgs)}_')
+
+     if save and save_dir:
+         save_image(grid, save_dir, base_name)
+
+     return grid
+ ```
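
+ The helpers timestamp each output filename, so repeated runs add new files to `generated_images/` rather than overwriting earlier ones.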

+ Model loading and generation pipeline:

+ ```python
+ repo_id_load = 'lamm-mit/stable-diffusion-3-medium-leaf-inspired'
+
+ # Load the base model, then attach the fine-tuned LoRA weights
+ pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers",
+                                              torch_dtype=torch.float16,
+                                              )
+ pipeline.load_lora_weights(repo_id_load)
+ pipeline = pipeline.to('cuda')
+
+ prompt = "a cube in the shape of a <leaf microstructure>"
+ negative_prompt = ""
+
+ num_samples = 3
+ num_rows = 3
+ n_steps = 75
+ guidance_scale = 15
+ all_images = []
+
+ # Generate num_rows batches of num_samples images each
+ for _ in range(num_rows):
+     image = pipeline(prompt, num_inference_steps=n_steps, num_images_per_prompt=num_samples,
+                      guidance_scale=guidance_scale, negative_prompt=negative_prompt).images
+     all_images.extend(image)
+
+ grid = image_grid(all_images, num_rows, num_samples,
+                   save_individual_files=True,
+                   save_dir='generated_images',
+                   base_name="image_grid",
+                   )
+ grid  # in a notebook, this displays the grid; it is also saved to generated_images/
+ ```

+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623ce1c6b66fedf374859fe7/qk5kRJJmetvhZ0ctltc3z.png)
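
+ If the full pipeline does not fit in GPU memory, one option is model CPU offloading (a minimal sketch, reusing `pipeline`, `prompt`, `n_steps`, and `guidance_scale` from above; requires the `accelerate` package):

+ ```python
+ # Call this instead of pipeline.to('cuda'): only the sub-model currently
+ # running is kept on the GPU, trading some speed for a smaller VRAM footprint.
+ pipeline.enable_model_cpu_offload()
+
+ image = pipeline(prompt, num_inference_steps=n_steps,
+                  guidance_scale=guidance_scale).images[0]
+ image.save("leaf_sample.png")  # example output path
+ ```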