MHGanainy committed
Commit 9a8bd71 · verified · 1 Parent(s): ee4ebb0

Model save

README.md CHANGED
@@ -15,8 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
 # mgpt-lora-multi-belgium-balanced-1024
 
 This model is a fine-tuned version of [ai-forever/mGPT](https://huggingface.co/ai-forever/mGPT) on an unknown dataset.
-It achieves the following results on the evaluation set:
-- Loss: 1.4204
 
 ## Model description
 
@@ -36,16 +34,16 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 1
-- eval_batch_size: 1
+- train_batch_size: 4
+- eval_batch_size: 4
 - seed: 42
 - distributed_type: multi-GPU
 - num_devices: 8
-- total_train_batch_size: 8
-- total_eval_batch_size: 8
+- total_train_batch_size: 32
+- total_eval_batch_size: 32
 - optimizer: Use adamw_torch_fused with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_steps: 1356
+- lr_scheduler_warmup_steps: 339
 - num_epochs: 1
 
 ### Training results
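The new hyperparameters are self-consistent: a per-device batch of 4 across 8 GPUs gives an effective batch of 4 × 8 = 32, and quadrupling the batch cuts the optimizer steps per epoch by four, which matches the warmup drop from 1356 to 339 steps. Below is a minimal sketch of how these values map onto transformers' `TrainingArguments`, assuming the card was produced by the `Trainer`; `output_dir` is a placeholder and anything not shown in the diff is left at its default.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the updated run configuration; the
# authoritative values live in training_args.bin.
args = TrainingArguments(
    output_dir="mgpt-lora-multi-belgium-balanced-1024",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,  # was 1; total train batch 8 -> 32
    per_device_eval_batch_size=4,   # was 1; total eval batch 8 -> 32
    seed=42,
    optim="adamw_torch_fused",      # betas=(0.9, 0.999), eps=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_steps=339,               # was 1356; 4x batch -> 4x fewer steps per epoch
    num_train_epochs=1,
)
```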
adapter_config.json CHANGED
@@ -20,8 +20,8 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "c_proj",
-    "c_attn"
+    "c_attn",
+    "c_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:aa7270ff8c43a83f1579543374def1a8b497e34100d8e7b3c5f4e1b4c8ec7b15
+oid sha256:b21ba1cad039f38de5e8e2d1240cd6f19fb80b01acb6dc5bfe81e8b66b442280
 size 34621712
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e8ee014ed39394825f838989fd2c8c1db60e79db6e53e4d3d5c0d00c1f079aa0
+oid sha256:3ead0c237617dec1883d24b78d9c6a79e83527bb9a22e40a2c2c0b36e160f85c
 size 5368
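Both binary files are stored as Git LFS pointers: a version line, the sha256 oid of the payload, and its byte size. Only the oids change in this commit; the sizes (34621712 and 5368 bytes) are identical, so the artifacts were regenerated at the same size. A small sketch, assuming the files have been downloaded locally, that checks a fetched artifact against its pointer's oid using only the standard library:

```python
import hashlib

def sha256_of(path: str) -> str:
    # Hash in 1 MiB chunks so large checkpoints don't load into memory at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# oid from this commit's new adapter_model.safetensors pointer
expected = "b21ba1cad039f38de5e8e2d1240cd6f19fb80b01acb6dc5bfe81e8b66b442280"
assert sha256_of("adapter_model.safetensors") == expected
```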