sylyas committed on
Commit fccf47b · verified · 1 parent: f25be64

End of training

Files changed (2):
  1. README.md +8 -8
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -30,8 +30,8 @@ datasets:
   field_input: input
   field_instruction: instruction
   field_output: output
-  format: '{input}'
-  no_input_format: '{field_instruction}'
+  format: '{instruction} {input}'
+  no_input_format: '{instruction}'
   system_format: '{system}'
   system_prompt: ''
   debug: null
@@ -102,7 +102,7 @@ xformers_attention: null
 
 This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9130
+- Loss: 0.8958
 
 ## Model description
 
@@ -130,16 +130,16 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 10
-- training_steps: 481
+- training_steps: 475
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.7333 | 0.0021 | 1 | 1.9654 |
-| 0.8852 | 0.2516 | 121 | 0.9382 |
-| 0.8645 | 0.5031 | 242 | 0.9225 |
-| 1.0676 | 0.7547 | 363 | 0.9130 |
+| 1.3737 | 0.0021 | 1 | 1.6453 |
+| 0.9206 | 0.2505 | 119 | 0.9274 |
+| 1.0707 | 0.5011 | 238 | 0.9007 |
+| 1.0721 | 0.7516 | 357 | 0.8958 |
 
 
 ### Framework versions
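The substantive README change is the axolotl prompt-template pair: `format` goes from `'{input}'` (which dropped the instruction entirely) to `'{instruction} {input}'`, and `no_input_format` from `'{field_instruction}'` (a config-key name, not a template field) to `'{instruction}'`. A minimal sketch of how such templates render an example — the helper name is hypothetical, and the real training pipeline builds prompts internally:

```python
# Hypothetical helper illustrating the updated axolotl-style templates.
# format: '{instruction} {input}'  /  no_input_format: '{instruction}'

def build_prompt(instruction: str, input_text: str = "") -> str:
    """Render one training example using the commit's new templates."""
    if input_text:
        # Example has an input field: use format '{instruction} {input}'.
        return f"{instruction} {input_text}"
    # No input field: fall back to no_input_format '{instruction}'.
    return instruction

print(build_prompt("Summarize:", "The quick brown fox."))
# Summarize: The quick brown fox.
```

Under the old `format: '{input}'`, the model would have seen only the input text with no task description, which plausibly explains the improved validation loss after the fix.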
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cf1138e4d78d9a6d81cdfa06f98986b150f914bab624559c952d7489be29b535
+oid sha256:e0aa059fd1c09734db9ab6011a808407433369ec1f56ce9ddde4dcc475dc9ad1
 size 167934026
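The README's scheduler settings (cosine decay, 10 warmup steps, 475 training steps) can be sketched as a simple step-to-learning-rate function. This is a generic cosine-with-warmup shape, not the exact schedule implementation used in training, and the peak learning rate here is an assumed placeholder since it is not shown in this diff:

```python
import math

def lr_at_step(step: int, peak_lr: float = 2e-4,
               warmup_steps: int = 10, total_steps: int = 475) -> float:
    """Cosine learning-rate schedule with linear warmup.

    peak_lr is a hypothetical value; only the scheduler type,
    warmup_steps, and total_steps come from the model card.
    """
    if step < warmup_steps:
        # Linear warmup from 0 to peak_lr over the first 10 steps.
        return peak_lr * step / warmup_steps
    # Cosine decay from peak_lr down to 0 over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

The shape matters for reading the results table: by step 357 (the last logged evaluation) the learning rate has already decayed well below its peak, which is typical for the flattening validation loss seen above.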