Commit acf33d4 by naumnaum (parent: fd02d55)

Model card auto-generated by SimpleTuner

Files changed: README.md (new file, +108 lines)
---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- not-for-all-audiences
- lora
- template:sd-lora
- standard
inference: true
---

# rita-v2

This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).

The main validation prompt used during training was:

```
casual profile headshot photo of TOK woman for instagram. hasselblad photography.
```

## Validation settings
- CFG: `3.5`
- CFG Rescale: `0.0`
- Steps: `35`
- Sampler: `euler`
- Seed: `42`
- Resolution: `576x1024`

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).

<Gallery />

The text encoder **was not** trained. You may reuse the base model text encoder for inference.

## Training settings

- Training epochs: 2
- Training steps: 250
- Learning rate: 0.0005
- Effective batch size: 2
- Micro-batch size: 2
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: ao-adamw8bit
- Precision: Pure BF16
- Quantised: Yes: int8-quanto
- Xformers: Not used
- LoRA Rank: 16
- LoRA Alpha: 16.0
- LoRA Dropout: 0.1
- LoRA initialisation style: default
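
For readers who want to reproduce the adapter shape outside of SimpleTuner, the LoRA hyperparameters above map onto a PEFT `LoraConfig` roughly as sketched below. This is a minimal sketch, not the exact training invocation: the `target_modules` list is an assumption for illustration, since SimpleTuner selects the concrete FLUX transformer projection layers internally.

```python
from peft import LoraConfig

# Minimal sketch mirroring the LoRA settings listed above.
# target_modules is an assumed set of attention projections for illustration;
# SimpleTuner chooses the actual FLUX transformer layers itself.
lora_config = LoraConfig(
    r=16,                    # LoRA Rank: 16
    lora_alpha=16.0,         # LoRA Alpha: 16.0
    lora_dropout=0.1,        # LoRA Dropout: 0.1
    init_lora_weights=True,  # "default" initialisation style
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
```

Because the alpha equals the rank, the effective LoRA scaling factor (alpha / rank) is 1.0, so the adapter contributes at full strength unless it is scaled down at load time.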

## Datasets

### rita-simpletuner-09-10-51-v2
- Repeats: 10
- Total number of images: 16
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
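
The block above is the metadata SimpleTuner records for the training dataloader. As a rough sketch only, a comparable dataloader entry could be written as follows; the key names loosely follow SimpleTuner's `multidatabackend.json` layout, but the exact schema depends on the SimpleTuner version, and the data path and caption strategy here are placeholders. Note that 0.262144 megapixels corresponds to roughly a 512x512-pixel area.

```python
import json

# Rough sketch of a dataloader entry matching the dataset metadata above.
# Key names loosely follow SimpleTuner's multidatabackend.json format;
# the data path and caption strategy are placeholders, not values from this card.
dataset_entry = {
    "id": "rita-simpletuner-09-10-51-v2",
    "type": "local",
    "instance_data_dir": "/path/to/training/images",  # placeholder
    "repeats": 10,
    "resolution": 0.262144,        # megapixels (about a 512x512 pixel area)
    "resolution_type": "area",
    "crop": False,
    "caption_strategy": "textfile",  # placeholder assumption
}

with open("multidatabackend.json", "w") as f:
    json.dump([dataset_entry], f, indent=2)
```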

## Inference

```python
import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'naumnaum/rita-v2'
pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)

prompt = "casual profile headshot photo of TOK woman for instagram. hasselblad photography."

pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
    prompt=prompt,
    num_inference_steps=35,
    generator=torch.Generator(
        device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
    ).manual_seed(1641421826),
    width=576,
    height=1024,
    guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```
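
The example above loads the pipeline at default precision, which requires a large amount of memory for FLUX.1-dev. If VRAM is tight, a common alternative (sketched below, not part of the auto-generated card) is to load the pipeline in bfloat16, matching the Pure BF16 training precision, and let `accelerate` offload idle sub-models to the CPU.

```python
import torch
from diffusers import DiffusionPipeline

# Sketch: bfloat16 weights plus model CPU offload to reduce peak VRAM.
# Requires the accelerate package; generation is slower but fits smaller GPUs.
pipeline = DiffusionPipeline.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
)
pipeline.load_lora_weights('naumnaum/rita-v2')
pipeline.enable_model_cpu_offload()  # moves sub-models to the GPU only while they run

image = pipeline(
    prompt="casual profile headshot photo of TOK woman for instagram. hasselblad photography.",
    num_inference_steps=35,
    width=576,
    height=1024,
    guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```

During training the base transformer was additionally quantised with int8-quanto; applying a similar quantisation at inference is also possible, but is outside the scope of this sketch.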