armhebb committed on
Commit
22b8701
1 Parent(s): dd661ae

End of training

README.md ADDED
@@ -0,0 +1,77 @@
+ ---
+ tags:
+ - stable-diffusion-xl
+ - stable-diffusion-xl-diffusers
+ - text-to-image
+ - diffusers
+ - lora
+ - template:sd-lora
+ widget:
+
+ - text: 'in the style of <s0><s1>'
+
+ base_model: stabilityai/stable-diffusion-xl-base-1.0
+ instance_prompt: in the style of <s0><s1>
+ license: openrail++
+ ---
+
+ # SDXL LoRA DreamBooth - Resleeve/65995e622d50edfb3ead9255-test-sep
+
+ <Gallery />
+
+ ## Model description
+
+ ### These are Resleeve/65995e622d50edfb3ead9255-test-sep LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
+
+ ## Download model
+
+ ### Use it with UIs such as AUTOMATIC1111, ComfyUI, SD.Next, Invoke
+
+ - **LoRA**: download **[`./output.safetensors` here 💾](/Resleeve/65995e622d50edfb3ead9255-test-sep/blob/main/./output.safetensors)**.
+     - Place it in your `models/Lora` folder.
+     - On AUTOMATIC1111, load the LoRA by adding `<lora:./output:1>` to your prompt (see the combined example below). On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
+ - *Embeddings*: download **[`pytorch_lora_weights_emb.safetensors` here 💾](/Resleeve/65995e622d50edfb3ead9255-test-sep/blob/main/pytorch_lora_weights_emb.safetensors)**.
+     - Place it in your `embeddings` folder.
+     - Use it by adding `pytorch_lora_weights_emb` to your prompt. For example, `in the style of pytorch_lora_weights_emb`.
+ (you need both the LoRA and the embeddings, as they were trained together for this LoRA)
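+
+ Putting the two together, a combined AUTOMATIC1111 prompt would look like `in the style of pytorch_lora_weights_emb <lora:./output:1>` (assuming the files keep the names used above).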
+
+
+ ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
+
+ ```py
+ from diffusers import AutoPipelineForText2Image
+ import torch
+ from huggingface_hub import hf_hub_download
+ from safetensors.torch import load_file
+
+ # load the SDXL base pipeline and apply the LoRA weights from this repo
+ pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
+ pipeline.load_lora_weights('Resleeve/65995e622d50edfb3ead9255-test-sep', weight_name='pytorch_lora_weights.safetensors')
+ # download the pivotal-tuning embeddings and register the <s0>/<s1> tokens in both text encoders
+ embedding_path = hf_hub_download(repo_id='Resleeve/65995e622d50edfb3ead9255-test-sep', filename='pytorch_lora_weights_emb.safetensors', repo_type="model")
+ state_dict = load_file(embedding_path)
+ pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
+ pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
+
+ image = pipeline('in the style of <s0><s1>').images[0]
+ ```
+
+ For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
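+
+ As a quick illustration of the weighting mentioned above, here is a minimal sketch (assuming a recent diffusers release where the pipeline exposes `fuse_lora`/`unfuse_lora`; the scale value is an arbitrary example):
+
+ ```py
+ # fuse the loaded LoRA into the base weights at reduced strength
+ pipeline.fuse_lora(lora_scale=0.7)
+ image = pipeline('in the style of <s0><s1>').images[0]
+ pipeline.unfuse_lora()  # restore the original base weights before changing scales or adapters
+ ```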
+
+ ## Trigger words
+
+ To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
+
+ to trigger concept `TOK` → use `<s0><s1>` in your prompt (for example, the training prompt `in the style of TOK` becomes `in the style of <s0><s1>`)
+
+
+
+ ## Details
+ All [Files & versions](/Resleeve/65995e622d50edfb3ead9255-test-sep/tree/main).
+
+ The weights were trained using the [🧨 diffusers Advanced DreamBooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
+
+ LoRA for the text encoder was enabled: False.
+
+ Pivotal tuning was enabled: True.
+
+ Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
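+
+ You can optionally use the same VAE at inference time. A minimal sketch, mirroring the pipeline setup in the snippet above:
+
+ ```py
+ from diffusers import AutoPipelineForText2Image, AutoencoderKL
+ import torch
+
+ # load the numerically stable fp16 VAE that was also used during training
+ vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
+ pipeline = AutoPipelineForText2Image.from_pretrained(
+     'stabilityai/stable-diffusion-xl-base-1.0', vae=vae, torch_dtype=torch.float16
+ ).to('cuda')
+ ```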
+
logs/dreambooth-lora-sd-xl/1725535357.8915758/events.out.tfevents.1725535357.adamwest-PC.138744.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:131dc3b1c9dd0b664b82c8aac6e1d650e38c510e09fbf73419b525b50ba0872f
+ size 3647
logs/dreambooth-lora-sd-xl/1725535357.8923788/hparams.yml ADDED
@@ -0,0 +1,75 @@
+ adam_beta1: 0.9
+ adam_beta2: 0.999
+ adam_epsilon: 1.0e-08
+ adam_weight_decay: 0.0001
+ adam_weight_decay_text_encoder: null
+ allow_tf32: false
+ cache_dir: null
+ cache_latents: true
+ caption_column: prompt
+ center_crop: false
+ checkpointing_steps: 100000
+ checkpoints_total_limit: null
+ class_data_dir: null
+ class_prompt: null
+ crops_coords_top_left_h: 0
+ crops_coords_top_left_w: 0
+ dataloader_num_workers: 0
+ dataset_config_name: null
+ dataset_name: 72987166-f222-40bb-9eda-01de17f188a6
+ enable_xformers_memory_efficient_attention: false
+ gradient_accumulation_steps: 1
+ gradient_checkpointing: true
+ hub_model_id: Resleeve/65995e622d50edfb3ead9255-test-sep
+ hub_token: null
+ image_column: image
+ instance_data_dir: null
+ instance_prompt: in the style of <s0><s1>
+ learning_rate: 0.0005
+ local_rank: -1
+ logging_dir: logs
+ lr_num_cycles: 1
+ lr_power: 1.0
+ lr_scheduler: constant
+ lr_warmup_steps: 0
+ max_grad_norm: 1.0
+ max_train_steps: 10
+ mixed_precision: bf16
+ noise_offset: 0.0
+ num_class_images: 100
+ num_new_tokens_per_abstraction: 2
+ num_train_epochs: 1
+ num_validation_images: 1
+ optimizer: adamW
+ output_dir: ./output
+ pretrained_model_name_or_path: stabilityai/stable-diffusion-xl-base-1.0
+ pretrained_vae_model_name_or_path: madebyollin/sdxl-vae-fp16-fix
+ prior_generation_precision: null
+ prior_loss_weight: 1.0
+ prodigy_beta3: null
+ prodigy_decouple: true
+ prodigy_safeguard_warmup: true
+ prodigy_use_bias_correction: true
+ push_to_hub: true
+ rank: 32
+ repeats: 2
+ report_to: tensorboard
+ resolution: 1024
+ resume_from_checkpoint: null
+ revision: null
+ sample_batch_size: 4
+ scale_lr: false
+ seed: 42
+ snr_gamma: null
+ text_encoder_lr: 1.0
+ token_abstraction: TOK
+ train_batch_size: 2
+ train_text_encoder: false
+ train_text_encoder_frac: 1.0
+ train_text_encoder_ti: true
+ train_text_encoder_ti_frac: 0.1
+ use_8bit_adam: false
+ validation_epochs: 1000
+ validation_prompt: null
+ variant: null
+ with_prior_preservation: false
logs/dreambooth-lora-sd-xl/events.out.tfevents.1725535357.adamwest-PC.138744.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a510b635a7c7a4a8afd8aa2ea223278460ac867094a567e7ce927aae8fa71629
+ size 908
output_emb.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c9e679ae0a4627b790bf3575bb6881804c51550b02237e3781545c253c2d7605
+ size 16536
pytorch_lora_weights.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10f2eef7018fee2bcc14123440290ab6d5d03646356deda6cfbd7b5add6df996
+ size 185963768
pytorch_lora_weights_fooocus.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b42e81433f61bb654f9328ac12ce5956acf40afa94d4590129f3b499f3f864ce
+ size 186046568