AlekseyCalvin committed
Commit ab19e91
1 Parent(s): e91cb87

Upload folder using huggingface_hub

Files changed (3)
  1. README.md +46 -3
  2. config.yaml +70 -0
  3. lora.safetensors +3 -0
README.md CHANGED
@@ -1,3 +1,46 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: other
+ license_name: flux-1-dev-non-commercial-license
+ license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
+ language:
+ - en
+ tags:
+ - flux
+ - diffusers
+ - lora
+ - replicate
+ base_model: "black-forest-labs/FLUX.1-dev"
+ pipeline_tag: text-to-image
+ # widget:
+ # - text: >-
+ #     prompt
+ #   output:
+ #     url: https://...
+ instance_prompt: INF
+ ---
+
+ # Influx
+
+ <!-- <Gallery /> -->
+
+ Trained on Replicate using:
+
+ https://replicate.com/ostris/flux-dev-lora-trainer/train
+
+
+ ## Trigger words
+ You should use `INF` to trigger the image generation.
+
+
+ ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
+
+ ```py
+ from diffusers import AutoPipelineForText2Image
+ import torch
+
+ pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
+ pipeline.load_lora_weights('AlekseyCalvin/Influx', weight_name='lora.safetensors')
+ image = pipeline('your prompt').images[0]
+ ```
+
+ For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
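The card links out to the diffusers docs for weighting and fusing; below is a minimal sketch of combining the `INF` trigger word with an adapter scale at inference. The prompt text, the `0.9` scale, and the sampling settings are illustrative assumptions, not values from this commit.

```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the base FLUX.1-dev pipeline and attach this LoRA, as in the card above.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipeline.load_lora_weights("AlekseyCalvin/Influx", weight_name="lora.safetensors")

# Optionally bake the adapter into the base weights at a reduced strength;
# 0.9 is an arbitrary example value, not a recommendation from the card.
pipeline.fuse_lora(lora_scale=0.9)

# The trigger word from the card is INF; the rest of the prompt is made up.
prompt = "INF, a portrait photograph of a lighthouse keeper at dawn"
image = pipeline(prompt, guidance_scale=3.5, num_inference_steps=28).images[0]
image.save("influx_sample.png")
```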
config.yaml ADDED
@@ -0,0 +1,70 @@
+ job: custom_job
+ config:
+   name: flux_train_replicate
+   process:
+   - type: custom_sd_trainer
+     training_folder: output
+     device: cuda:0
+     trigger_word: INF
+     network:
+       type: lora
+       linear: 128
+       linear_alpha: 128
+       network_kwargs:
+         only_if_contains:
+         - transformer.transformer_blocks.2.norm1.linear
+         - transformer.transformer_blocks.2.attn.to_q
+         - transformer.transformer_blocks.2.attn.to_k
+         - transformer.transformer_blocks.2.attn.to_v
+         - transformer.transformer_blocks.18.norm1.linear
+         - transformer.transformer_blocks.18.attn.to_q
+         - transformer.transformer_blocks.18.attn.to_k
+         - transformer.transformer_blocks.18.attn.to_v
+     save:
+       dtype: float16
+       save_every: 1001
+       max_step_saves_to_keep: 1
+     datasets:
+     - folder_path: input_images
+       caption_ext: txt
+       caption_dropout_rate: 0.05
+       shuffle_tokens: false
+       cache_latents_to_disk: true
+       cache_latents: true
+       resolution:
+       - 512
+       - 768
+       - 1024
+     train:
+       batch_size: 2
+       steps: 1000
+       gradient_accumulation_steps: 1
+       train_unet: true
+       train_text_encoder: false
+       content_or_style: balanced
+       gradient_checkpointing: true
+       noise_scheduler: flowmatch
+       optimizer: lion8bit
+       lr: 0.0008
+       ema_config:
+         use_ema: true
+         ema_decay: 0.99
+       dtype: bf16
+     model:
+       name_or_path: FLUX.1-dev
+       is_flux: true
+       quantize: true
+     sample:
+       sampler: flowmatch
+       sample_every: 1001
+       width: 1024
+       height: 1024
+       prompts: []
+       neg: ''
+       seed: 42
+       walk_seed: true
+       guidance_scale: 3.5
+       sample_steps: 28
+ meta:
+   name: flux_train_replicate
+   version: '1.0'
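The `network_kwargs.only_if_contains` list above narrows the rank-128 LoRA to the norm and attention projections of transformer blocks 2 and 18. A minimal sketch of reading those settings back with PyYAML, assuming the file is saved locally as `config.yaml`:

```py
import yaml

# Parse the training config shipped in this commit (local path assumed).
with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

process = cfg["config"]["process"][0]
network = process["network"]

print(f"Trigger word: {process['trigger_word']}")
print(f"LoRA rank / alpha: {network['linear']} / {network['linear_alpha']}")
print("Module-name substrings that receive LoRA weights:")
for name in network["network_kwargs"]["only_if_contains"]:
    print("  -", name)
```

In ai-toolkit this list acts as a name filter: only modules whose names contain one of the listed substrings get LoRA weights, so the rest of the transformer stays untouched by training.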
lora.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff0c0081119291ce19b76afc21be9648a7d391557859a38989009f8b5297631b
+ size 20449784
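The three lines above are a Git LFS pointer rather than the weights themselves: `oid` carries the SHA-256 of the actual file and `size` its byte count. A minimal sketch of verifying a downloaded `lora.safetensors` against the pointer, with the local path assumed:

```py
import hashlib
import os

# Values copied from the LFS pointer in this commit.
EXPECTED_SHA256 = "ff0c0081119291ce19b76afc21be9648a7d391557859a38989009f8b5297631b"
EXPECTED_SIZE = 20449784  # bytes

path = "lora.safetensors"  # assumed local download location

# Cheap size check first, then hash the file in 1 MiB chunks.
assert os.path.getsize(path) == EXPECTED_SIZE, "size does not match the LFS pointer"

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert digest.hexdigest() == EXPECTED_SHA256, "sha256 does not match the LFS pointer"
print("lora.safetensors matches the LFS pointer")
```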