mmaluchnick committed
Commit 5bed0a9 · verified · 1 Parent(s): ce359d5

Scheduled Commit

Files changed (1)
  1. ai-toolkit.log +173 -0
ai-toolkit.log CHANGED
@@ -1,3 +1,176 @@
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Running 1 job

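(If the migration above is interrupted, it can be resumed by hand; a minimal sketch using the function the message itself names:)

    # Resume the one-time Transformers cache migration described above.
    from transformers.utils import move_cache

    move_cache()
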
+ /usr/local/lib/python3.10/dist-packages/albumentations/__init__.py:13: UserWarning: A new version of Albumentations is available: 1.4.23 (you have 1.4.15). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.
+   check_for_updates()
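
(The Albumentations warning offers two fixes; the non-upgrade route is the environment variable it names, which must be set before the library is imported. A sketch:)

    import os

    # Disable Albumentations' startup update check, per the warning above.
    os.environ["NO_ALBUMENTATIONS_UPDATE"] = "1"

    import albumentations  # no update check runs at import time now
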
+ /usr/local/lib/python3.10/dist-packages/controlnet_aux/mediapipe_face/mediapipe_face_common.py:7: UserWarning: The module 'mediapipe' is not installed. The package will have limited functionality. Please install it using the command: pip install 'mediapipe'
+   warnings.warn(
+ /usr/local/lib/python3.10/dist-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
+   warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
+ /usr/local/lib/python3.10/dist-packages/timm/models/registry.py:4: FutureWarning: Importing from timm.models.registry is deprecated, please import via timm.models
+   warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.models", FutureWarning)
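
(Both timm FutureWarnings are advisory: the old paths still resolve, and the warnings name the replacements. A sketch of the migration they ask for; the imported names are illustrative examples:)

    # Deprecated locations:
    #   from timm.models.layers import DropPath
    #   from timm.models.registry import register_model
    # Current locations, as the warnings suggest:
    from timm.layers import DropPath
    from timm.models import register_model
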
+ /usr/local/lib/python3.10/dist-packages/controlnet_aux/segment_anything/modeling/tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_5m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_5m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
+   return register_model(fn_wrapper)
+ /usr/local/lib/python3.10/dist-packages/controlnet_aux/segment_anything/modeling/tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_11m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_11m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
+   return register_model(fn_wrapper)
+ /usr/local/lib/python3.10/dist-packages/controlnet_aux/segment_anything/modeling/tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_224 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_224. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
+   return register_model(fn_wrapper)
+ /usr/local/lib/python3.10/dist-packages/controlnet_aux/segment_anything/modeling/tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_384 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_384. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
+   return register_model(fn_wrapper)
+ /usr/local/lib/python3.10/dist-packages/controlnet_aux/segment_anything/modeling/tiny_vit_sam.py:654: UserWarning: Overwriting tiny_vit_21m_512 in registry with controlnet_aux.segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_512. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
+   return register_model(fn_wrapper)
+ /workspace/ai-toolkit/extensions_built_in/sd_trainer/SDTrainer.py:61: FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.
+   self.scaler = torch.cuda.amp.GradScaler()
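
(The GradScaler deprecation is likewise mechanical; the warning gives the replacement API verbatim:)

    import torch

    # Deprecated: torch.cuda.amp.GradScaler()
    scaler = torch.amp.GradScaler('cuda')
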
+ You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
+ {
+   "type": "sd_trainer",
+   "training_folder": "output",
+   "device": "cuda:0",
+   "network": {
+     "type": "lora",
+     "linear": 16,
+     "linear_alpha": 16
+   },
+   "save": {
+     "dtype": "float16",
+     "save_every": 500,
+     "max_step_saves_to_keep": 4,
+     "push_to_hub": false
+   },
+   "datasets": [
+     {
+       "folder_path": "/workspace/ai-toolkit/images",
+       "caption_ext": "txt",
+       "caption_dropout_rate": 0.05,
+       "shuffle_tokens": false,
+       "cache_latents_to_disk": true,
+       "resolution": [
+         512,
+         768,
+         1024
+       ]
+     }
+   ],
+   "train": {
+     "batch_size": 1,
+     "steps": 2000,
+     "gradient_accumulation_steps": 1,
+     "train_unet": true,
+     "train_text_encoder": false,
+     "gradient_checkpointing": true,
+     "noise_scheduler": "flowmatch",
+     "optimizer": "adamw8bit",
+     "lr": 0.0004,
+     "ema_config": {
+       "use_ema": true,
+       "ema_decay": 0.99
+     },
+     "dtype": "bf16"
+   },
+   "model": {
+     "name_or_path": "black-forest-labs/FLUX.1-dev",
+     "is_flux": true,
+     "quantize": true
+   },
+   "sample": {
+     "sampler": "flowmatch",
+     "sample_every": 500,
+     "width": 1024,
+     "height": 1024,
+     "prompts": [
+       "Photo of xtina holding a sign that says 'I LOVE PROMPTS!'",
+       "Professional headshot of xtina in a business suit.",
+       "A happy pilot xtina of a Boeing 747.",
+       "A doctor xtina talking to a patient.",
+       "A chef xtina in the middle of a bustling kitchen, plating a beautifully arranged dish.",
+       "A young xtina with a big grin, holding a large ice cream cone in front of an old-fashioned ice cream parlor.",
+       "A person xtina in a tuxedo, looking directly into the camera with a confident smile, standing on a red carpet at a gala event.",
+       "Person xtina with a bitchin' 80's mullet hairstyle leaning out the window of a pontiac firebird"
+     ],
+     "neg": "",
+     "seed": 42,
+     "walk_seed": true,
+     "guidance_scale": 4,
+     "sample_steps": 20
+   },
+   "trigger_word": "xtina"
+ }
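
(The block above is the resolved job config that ai-toolkit echoes at startup; in the repo it normally lives as a config file passed to run.py. A quick sanity check on a local copy, as a sketch; the filename here is hypothetical:)

    import json

    with open("my_first_flux_lora_v1.json") as f:  # hypothetical local copy
        cfg = json.load(f)

    assert cfg["network"]["type"] == "lora" and cfg["model"]["is_flux"]
    print(cfg["train"]["steps"], "steps at batch size", cfg["train"]["batch_size"])
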
+ Using EMA
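
(With use_ema enabled, a shadow copy of the trainable weights is kept as a decayed running average, decay 0.99 per the config above. A per-parameter sketch of the update rule:)

    import torch

    decay = 0.99                 # "ema_decay" from the config above
    w = torch.randn(16)          # stand-in for one trainable parameter
    ema_w = w.clone()            # the EMA shadow starts as a copy

    # After each optimizer step, the shadow drifts toward the live weights:
    ema_w.mul_(decay).add_(w, alpha=1.0 - decay)
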
+
+ #############################################
+ # Running job: my_first_flux_lora_v1
+ #############################################
+
+
+ Running 1 process
+ Loading Flux model
+ Loading transformer
+ Quantizing transformer
+ Loading vae
+ Loading t5
+
+
+ Quantizing T5
+ Loading clip
+ making pipe
+ preparing
+ create LoRA network. base dim (rank): 16, alpha: 16
+ neuron dropout: p=None, rank dropout: p=None, module dropout: p=None
+ create LoRA for Text Encoder: 0 modules.
+ create LoRA for U-Net: 494 modules.
+ enable LoRA for U-Net
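
(The rank/alpha pair printed above sets the adapter's strength: the standard LoRA update is W' = W + (alpha/rank)·B·A, so rank 16 with alpha 16 gives a scale of 1.0. A minimal sketch of that update; the shapes are illustrative, not ai-toolkit's internals:)

    import torch

    rank, alpha = 16, 16          # matches "base dim (rank): 16, alpha: 16"
    d_out, d_in = 3072, 3072      # illustrative layer shape

    W = torch.randn(d_out, d_in)  # frozen base weight
    A = torch.randn(rank, d_in)   # trainable down-projection
    B = torch.zeros(d_out, rank)  # trainable up-projection; zero init => no-op at start

    W_adapted = W + (alpha / rank) * (B @ A)   # scale = 16/16 = 1.0 here
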
+ Dataset: /workspace/ai-toolkit/images
+ - Preprocessing image dimensions
+
  0%|          | 0/40 [00:00<?, ?it/s]
 10%|█         | 4/40 [00:00<00:00, 37.15it/s]
 20%|██        | 8/40 [00:00<00:01, 24.03it/s]
 32%|███▎      | 13/40 [00:00<00:00, 28.82it/s]
 57%|█████▊    | 23/40 [00:00<00:00, 49.26it/s]
 75%|███████▌  | 30/40 [00:00<00:00, 53.40it/s]
 90%|█████████ | 36/40 [00:00<00:00, 46.91it/s]
+ - Found 40 images
+ Bucket sizes for /workspace/ai-toolkit/images:
+ 384x576: 18 files
+ 448x512: 1 files
+ 448x576: 13 files
+ 576x448: 5 files
+ 384x640: 1 files
+ 512x512: 2 files
+ 6 buckets made
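
(The bucket sizes above are standard aspect-ratio bucketing: each image is scaled toward the pass's target resolution, 512 here, and its sides snapped to multiples of 64, so differently shaped images can be batched without square-cropping. One common bucketing rule, as a sketch; ai-toolkit's exact rounding may differ:)

    import math

    def bucket_for(width: int, height: int, target: int = 512, step: int = 64):
        """Snap an image to a nearby bucket at roughly target^2 pixels, keeping aspect."""
        scale = target / math.sqrt(width * height)         # match target pixel area
        bw = max(step, round(width * scale / step) * step)
        bh = max(step, round(height * scale / step) * step)
        return bw, bh

    print(bucket_for(853, 1280))   # portrait photo -> (448, 640)
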
+ Caching latents for /workspace/ai-toolkit/images
+ - Saving latents to disk
+
+ Dataset: /workspace/ai-toolkit/images
+ - Preprocessing image dimensions
+
  0%|          | 0/40 [00:00<?, ?it/s]
+ - Found 40 images
+ Bucket sizes for /workspace/ai-toolkit/images:
+ 576x832: 12 files
+ 640x768: 6 files
+ 640x832: 8 files
+ 576x896: 7 files
+ 832x640: 3 files
+ 768x640: 2 files
+ 704x768: 1 files
+ 768x768: 1 files
+ 8 buckets made
+ Caching latents for /workspace/ai-toolkit/images
+ - Saving latents to disk
+
+ Dataset: /workspace/ai-toolkit/images
+ - Preprocessing image dimensions
+
  0%|          | 0/40 [00:00<?, ?it/s]
+ - Found 40 images
+ Bucket sizes for /workspace/ai-toolkit/images:
+ 832x1216: 12 files
+ 896x1088: 6 files
+ 896x1152: 5 files
+ 832x1152: 6 files
+ 768x1280: 2 files
+ 1152x832: 2 files
+ 768x1152: 1 files
+ 704x1024: 1 files
+ 1088x896: 2 files
+ 1152x896: 1 files
+ 960x1024: 1 files
+ 1024x1024: 1 files
+ 12 buckets made
+ Caching latents for /workspace/ai-toolkit/images
+ - Saving latents to disk
+
+ Generating baseline samples before training
+