Committed by AiAF · commit 6d6db53 · verified · 1 parent: d1dd97f

Update README.md / Model Card Update

Files changed (1)
  1. README.md +78 -80
README.md CHANGED
@@ -6,13 +6,7 @@ pipeline_tag: text-to-image
  tags:
  - art
  ---
- # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->
-
- This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
-
- ## Model Details

  ### Model Description

@@ -79,82 +73,86 @@ If on a local A1111 set up, use the standard <Lora:[name-of-LoRA-Goes-here]:[1]>

  ### Interrogation Data

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-

- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
+ {
+ "ss_epoch": "10",
+ "ss_bucket_no_upscale": "False",
+ "ss_total_batch_size": "25",
+ "ss_num_batches_per_epoch": "69",
+ "ss_new_vae_hash": "63aeecb90ff7bc1c115395962d3e803571385b61938377bc7089b36e81e92e2e",
+ "ss_tag_frequency": "{\"meta_lat.json\": {}}",
+ "ss_min_snr_gamma": "3",
+ "ss_caption_dropout_every_n_epochs": "0",
+ "ss_sd_model_hash": "e577480d",
+ "ss_max_token_length": "225",
+ "ss_shuffle_caption": "False",
+ "ss_seed": "4479",
+ "ss_sd_model_name": "v6.safetensors",
+ "ss_reg_dataset_dirs": "{}",
+ "ss_flip_aug": "False",
+ "ss_lr_warmup_steps": "0",
+ "ss_resolution": "(1024, 1024)",
+ "ss_caption_dropout_rate": "0.0",
+ "ss_gradient_checkpointing": "True",
+ "ss_bucket_info": "{\"buckets\": {\"0\": {\"resolution\": [320, 1024], \"count\": 4}, \"1\": {\"resolution\": [384, 1024], \"count\": 2}, \"2\": {\"resolution\": [448, 1024], \"count\": 16}, \"3\": {\"resolution\": [512, 1024], \"count\": 32}, \"4\": {\"resolution\": [576, 1024], \"count\": 72}, \"5\": {\"resolution\": [640, 1024], \"count\": 126}, \"6\": {\"resolution\": [704, 1024], \"count\": 240}, \"7\": {\"resolution\": [768, 1024], \"count\": 182}, \"8\": {\"resolution\": [832, 1024], \"count\": 164}, \"9\": {\"resolution\": [896, 1024], \"count\": 76}, \"10\": {\"resolution\": [960, 1024], \"count\": 38}, \"11\": {\"resolution\": [1024, 320], \"count\": 2}, \"12\": {\"resolution\": [1024, 384], \"count\": 4}, \"13\": {\"resolution\": [1024, 448], \"count\": 2}, \"14\": {\"resolution\": [1024, 512], \"count\": 2}, \"15\": {\"resolution\": [1024, 576], \"count\": 26}, \"16\": {\"resolution\": [1024, 640], \"count\": 42}, \"17\": {\"resolution\": [1024, 704], \"count\": 84}, \"18\": {\"resolution\": [1024, 768], \"count\": 72}, \"19\": {\"resolution\": [1024, 832], \"count\": 52}, \"20\": {\"resolution\": [1024, 896], \"count\": 42}, \"21\": {\"resolution\": [1024, 960], \"count\": 34}, \"22\": {\"resolution\": [1024, 1024], \"count\": 32}}, \"mean_img_ar_error\": 0.0}",
+ "ss_full_fp16": "False",
+ "ss_scale_weight_norms": "None",
+ "ss_mixed_precision": "fp16",
+ "ss_max_grad_norm": "0",
+ "ss_enable_bucket": "True",
+ "ss_network_dropout": "None",
+ "ss_training_comment": "None",
+ "ss_training_finished_at": "1720056618.6869428",
+ "ss_multires_noise_iterations": "6",
+ "ss_random_crop": "False",
+ "ss_num_epochs": "10",
+ "ss_num_reg_images": "0",
+ "ss_network_dim": "32",
+ "ss_network_args": "{\"conv_dim\": \"8\", \"conv_alpha\": \"1\"}",
+ "ss_num_train_images": "1346",
+ "ss_gradient_accumulation_steps": "1",
+ "ss_face_crop_aug_range": "None",
+ "ss_lowram": "False",
+ "ss_vae_name": "sdxl_vae.safetensors",
+ "ss_clip_skip": "None",
+ "ss_max_bucket_reso": "None",
+ "sshs_model_hash": "86886e99d8a83793fe63cc21287344330858888423015fd998cc133c69a18862",
+ "ss_batch_size_per_device": "25",
+ "ss_v2": "False",
+ "ss_unet_lr": "None",
+ "ss_keep_tokens": "0",
+ "ss_color_aug": "False",
+ "ss_noise_offset": "None",
+ "ss_optimizer": "transformers.optimization.Adafactor(scale_parameter=False,relative_step=False,warmup_init=False)",
+ "ss_caption_tag_dropout_rate": "0.0",
+ "ss_base_model_version": "sdxl_base_v0-9",
+ "ss_zero_terminal_snr": "False",
+ "ss_max_train_steps": "690",
+ "ss_multires_noise_discount": "0.3",
+ "ss_learning_rate": "0.001",
+ "ss_adaptive_noise_scale": "None",
+ "ss_network_module": "networks.lora",
+ "ss_steps": "690",
+ "ss_vae_hash": "d636e597",
+ "ss_training_started_at": "1720053253.286365",
+ "ss_sd_scripts_commit_hash": "05811296f6dc987f67f194689e106a326017b9d4",
+ "ss_min_bucket_reso": "None",
+ "ss_output_name": "Lightsource | @0Lightsource | OLS[PonyXL]",
+ "ss_network_alpha": "32",
+ "ss_prior_loss_weight": "1.0",
+ "ss_lr_scheduler": "constant",
+ "ss_new_sd_model_hash": "67ab2fd8ec439a89b3fedb15cc65f54336af163c7eb5e4f2acc98f090a29b0b3",
+ "ss_text_encoder_lr": "None",
+ "ss_cache_latents": "False",
+ "sshs_legacy_hash": "76a48373",
+ "ss_dataset_dirs": "{\"meta_lat.json\": {\"n_repeats\": 2, \"img_count\": 673}}",
+ "ss_session_id": "2268693806"
+ }
+
+ [SDXL / PonyXL]
 
  #### Hardware
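
The `ss_*` block added in this commit is the training metadata that kohya-ss sd-scripts embeds in the LoRA's safetensors header. A minimal sketch of reading it back with the `safetensors` library; the file name is a placeholder for the downloaded LoRA file:

```python
# Sketch: dump the kohya-ss training metadata stored in the LoRA's
# safetensors header. Replace the placeholder path with the actual file.
from safetensors import safe_open

with safe_open("OLS_PonyXL.safetensors", framework="pt") as f:
    meta = f.metadata() or {}  # dict of str -> str

# Values mirror the block added above, e.g.:
print(meta.get("ss_network_dim"), meta.get("ss_network_alpha"))  # 32, 32
print(meta.get("ss_learning_rate"))                              # 0.001
print(meta.get("ss_num_train_images"))                           # 1346
```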
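
Outside A1111 (where the card's `<Lora:[name-of-LoRA-Goes-here]:[1]>` syntax applies), a hedged sketch of the diffusers-side equivalent, assuming an SDXL-compatible base checkpoint; the base repo id and weight name below are placeholders:

```python
# Sketch: load the LoRA into an SDXL pipeline with diffusers.
# The base checkpoint and weight_name are assumptions; substitute your own.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # swap in a PonyXL checkpoint
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights(".", weight_name="OLS_PonyXL.safetensors")

image = pipe("your prompt here", num_inference_steps=30).images[0]
image.save("sample.png")
```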