datajose committed on
Commit 6d16203 · verified · 1 Parent(s): 58de123

datajose/pruebas-ft

README.md CHANGED
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5016
+- Loss: 0.7598
 
 ## Model description
 
@@ -36,31 +36,25 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 4
-- eval_batch_size: 4
+- train_batch_size: 20
+- eval_batch_size: 20
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 16
+- total_train_batch_size: 80
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 2
-- num_epochs: 10
+- num_epochs: 4
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 2.2715        | 0.96  | 20   | 1.7064          |
-| 0.71          | 1.98  | 41   | 1.1687          |
-| 0.5515        | 2.99  | 62   | 1.0146          |
-| 0.5052        | 4.0   | 83   | 0.8605          |
-| 0.4887        | 4.96  | 103  | 0.7023          |
-| 0.4311        | 5.98  | 124  | 0.6066          |
-| 0.418         | 6.99  | 145  | 0.5606          |
-| 0.4088        | 8.0   | 166  | 0.5206          |
-| 0.4243        | 8.96  | 186  | 0.5048          |
-| 0.3898        | 9.64  | 200  | 0.5016          |
+| 1.334         | 1.0   | 192  | 0.8773          |
+| 0.8235        | 2.0   | 385  | 0.8020          |
+| 0.7665        | 3.0   | 578  | 0.7718          |
+| 0.7357        | 3.98  | 768  | 0.7598          |
 
 
 ### Framework versions
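For reference, a minimal sketch of how the updated hyperparameters would map onto `transformers.TrainingArguments`; the `output_dir` name is a hypothetical, and everything else mirrors the README values in this diff:

```python
from transformers import TrainingArguments

# Sketch matching the updated model card; not taken from the repo's code.
training_args = TrainingArguments(
    output_dir="pruebas-ft",         # assumed name, not stated in the diff
    learning_rate=2e-4,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    gradient_accumulation_steps=4,   # effective train batch: 20 * 4 = 80
    num_train_epochs=4,
    lr_scheduler_type="linear",
    warmup_steps=2,
    fp16=True,                       # "Native AMP" mixed precision
    seed=42,
    # Optimizer left at the default AdamW, which already uses
    # betas=(0.9, 0.999) and epsilon=1e-08 as listed in the card.
)
```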
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4b8f41a103ccfef48a0c7101fd706957a4b02d57796fae22b701eec88a4af294
+oid sha256:5e379e2f988aad18294bce115ce2a5a4b71dc9615cfc746c1ad847741e9af26b
 size 8397056
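The file name and ~8 MB size suggest a PEFT (LoRA-style) adapter over the GPTQ base model. A hedged sketch of loading it — the repo id comes from this page, the rest is standard `peft`/`transformers` usage and an assumption about how the adapter was produced:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the quantized base model (needs a GPTQ-capable install, e.g. optimum),
# then attach the fine-tuned adapter committed here on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", device_map="auto"
)
model = PeftModel.from_pretrained(base, "datajose/pruebas-ft")
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Mistral-7B-Instruct-v0.2-GPTQ")
```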
runs/Mar12_09-59-19_datajose/events.out.tfevents.1710248364.datajose.6585.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07f962b31d666c90b8c9dfc435cfee8927ed0d62f5973279e965730bec976558
+size 5222
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:30e010fd8cbc265a352f57d16ac7c99310ada8ea19b2d05a7f4aa4a081d8dd63
+oid sha256:1eba88998a55b1eeeac22e77eb413c9ac68fa2fbf92e95166cdfa22737d5757d
 size 4856
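`training_args.bin` is the pickled `TrainingArguments` object that `Trainer` writes alongside checkpoints; a sketch of one way to confirm the values changed in this commit, assuming the file has been downloaded locally:

```python
import torch

# TrainingArguments is a pickled Python object, so weights_only must be
# disabled on newer torch versions; only do this for files you trust.
args = torch.load("training_args.bin", weights_only=False)
print(args.per_device_train_batch_size, args.num_train_epochs)  # expect 20, 4
```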