PseudoTerminal X committed on
Commit
cda6ffc
Parent: 63fdcfb

Trained for 0 epochs and 6000 steps.


Trained with datasets ['text-embeds-pixart-filter', 'photo-concept-bucket', 'ideogram', 'midjourney-v6-520k-raw', 'sfwbooru', 'nijijourney-v6-520k-raw', 'dalle3']
Learning rate 1e-06, batch size 24, and 1 gradient accumulation step.
Trained with the DDPM noise scheduler using the epsilon prediction type, rescaled_betas_zero_snr=False, and 'trailing' timestep spacing.
Base model: terminusresearch/pixart-900m-1024-ft-v0.6
VAE: madebyollin/sdxl-vae-fp16-fix
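The settings above can be sketched as a plain config dict. Note that the commit reports an effective batch size of 192 (in the README diff below) alongside a micro-batch size of 24 and 1 gradient accumulation step; the data-parallel world size of 8 is an inference from that arithmetic, not something stated in the commit.

```python
# Sketch of the training configuration described in this commit.
# "data_parallel_world_size" is inferred (192 = 24 * 1 * 8), not stated.
train_config = {
    "base_model": "terminusresearch/pixart-900m-1024-ft-v0.6",
    "vae": "madebyollin/sdxl-vae-fp16-fix",
    "learning_rate": 1e-06,
    "micro_batch_size": 24,
    "gradient_accumulation_steps": 1,
    "data_parallel_world_size": 8,  # inferred assumption
    "noise_scheduler": {
        "type": "DDPM",
        "prediction_type": "epsilon",
        "rescale_betas_zero_snr": False,
        "timestep_spacing": "trailing",
    },
}

def effective_batch_size(cfg: dict) -> int:
    """Effective batch = micro-batch x grad-accum steps x world size."""
    return (cfg["micro_batch_size"]
            * cfg["gradient_accumulation_steps"]
            * cfg["data_parallel_world_size"])

print(effective_batch_size(train_config))  # 192, matching the README
```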

.gitattributes CHANGED
@@ -324,3 +324,4 @@ assets/image_97_1.png filter=lfs diff=lfs merge=lfs -text
  assets/image_98_2.png filter=lfs diff=lfs merge=lfs -text
  assets/image_99_0.png filter=lfs diff=lfs merge=lfs -text
  assets/image_9_0.png filter=lfs diff=lfs merge=lfs -text
+ training_state-dalle3.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1562,7 +1562,7 @@ You may reuse the base model text encoder for inference.
  ## Training settings
 
  - Training epochs: 0
- - Training steps: 5000
+ - Training steps: 6000
  - Learning rate: 1e-06
  - Effective batch size: 192
  - Micro-batch size: 24
optimizer.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1c189c64ecb26612d52b33c52e59d34c489c6568623b94afc0be757f4e58f8dc
+ oid sha256:ed70998267d02449989bcf35702542feac26451b366a6c6c364a65ebc2b1e8a5
  size 5451415117
random_states_0.pkl CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7bfbcba7049d59c14811d4950b2a078d97a1b0cde3a7fd38840a1c206b26ecc8
- size 16100
+ oid sha256:2a8db8db1fea598b10c0473dce4b6c76bd0b66998be5ac5bb3c216454c3d6200
+ size 16036
scheduler.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6db68071d6a3fca0758715a9a86721a7db2e381ba247175f7f1d75233038ba6d
+ oid sha256:fa90d1596151fc982c9df01c65a1b9accaa596352fb73dd2d76f53176687cc0d
  size 1000
training_state-dalle3.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_state-ideogram.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_state-midjourney-v6-520k-raw.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_state-nijijourney-v6-520k-raw.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_state-photo-concept-bucket.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_state-sfwbooru.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_state.json CHANGED
@@ -1 +1 @@
- {"global_step": 5000, "epoch_step": 5000, "epoch": 1, "exhausted_backends": [], "repeats": {"ideogram": 3}}
+ {"global_step": 6000, "epoch_step": 6000, "epoch": 1, "exhausted_backends": [], "repeats": {"ideogram": 4}}
transformer/diffusion_pytorch_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1ae41b9372b99b3274eacc8c3c39bdb1ec46a86364153c3fb48d6e7020ba5fa0
+ oid sha256:98a02c95706b6a63f6c64718fc8954d964a836cbec72a233ec1ed426a5d9f440
  size 1816969728