# CodeLlama-7b-Instruct-hf_En__CMP_TR_size_304_epochs_10_2024-06-23_10-41-40_3558636
This model is a fine-tuned version of codellama/CodeLlama-7b-Instruct-hf on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how metrics of this kind can be computed is given after the list):
- Loss: 2.0825
- Accuracy: 0.491
- Chrf: 0.24
- Bleu: 0.16
- Sacrebleu: 0.2
- Rouge1: 0.385
- Rouge2: 0.191
- Rougel: 0.349
- Rougelsum: 0.385
- Meteor: 0.428
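
The evaluation pipeline is not documented in this card. As a point of reference only, below is a minimal sketch of how metrics of this kind can be computed with the Hugging Face `evaluate` library, assuming predictions and references are plain strings; the example data and metric configuration are illustrative, not the author's actual setup.

```python
# Illustrative only: computing chrF, (sacre)BLEU, ROUGE, and METEOR scores
# with the `evaluate` library. The card does not document how its scores
# were actually produced.
import evaluate

predictions = ["def add(a, b):\n    return a + b"]   # example model outputs
references = ["def add(x, y):\n    return x + y"]    # example ground-truth targets

chrf = evaluate.load("chrf")
sacrebleu = evaluate.load("sacrebleu")
bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")   # downloads NLTK data on first use

# chrF, sacreBLEU, and BLEU expect one list of reference strings per prediction.
wrapped_refs = [[r] for r in references]

results = {
    "chrf": chrf.compute(predictions=predictions, references=wrapped_refs)["score"],
    "sacrebleu": sacrebleu.compute(predictions=predictions, references=wrapped_refs)["score"],
    "bleu": bleu.compute(predictions=predictions, references=wrapped_refs)["bleu"],
    # rouge1, rouge2, rougeL, rougeLsum
    **rouge.compute(predictions=predictions, references=references),
    "meteor": meteor.compute(predictions=predictions, references=references)["meteor"],
}
print(results)
```
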
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 3407
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 304
- training_steps: 3040
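
As a rough illustration, the listed values map onto `transformers.TrainingArguments` as sketched below. This is a hedged sketch: the output path, evaluation and logging strategies, and the surrounding PEFT/LoRA setup are assumptions, not documented choices.

```python
# Hedged sketch: expressing the hyperparameters above as TrainingArguments.
# Only the values listed in the card are taken from it; everything else is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",            # placeholder path
    learning_rate=1e-3,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=3407,
    lr_scheduler_type="linear",
    warmup_steps=304,
    max_steps=3040,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,
    evaluation_strategy="epoch",       # assumption: per-epoch evaluation, matching the results table
    logging_strategy="epoch",          # assumption
)
```
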
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Chrf | Bleu | Sacrebleu | Rouge1 | Rouge2 | Rougel | Rougelsum | Meteor |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 5.7836 | 1.0 | 304 | 4.9962 | 0.476 | 0.018 | 0.0 | 0.0 | 0.058 | 0.019 | 0.057 | 0.058 | 0.144 |
| 0.0456 | 2.0 | 608 | 3.2985 | 0.492 | 0.058 | 0.043 | 0.0 | 0.128 | 0.066 | 0.128 | 0.128 | 0.177 |
| 0.0602 | 3.0 | 912 | 3.0838 | 0.503 | 0.044 | 0.0 | 0.0 | 0.079 | 0.0 | 0.078 | 0.069 | 0.144 |
| 0.0176 | 4.0 | 1216 | 3.1386 | 0.504 | 0.082 | 0.018 | 0.0 | 0.105 | 0.0 | 0.087 | 0.105 | 0.231 |
| 0.7809 | 5.0 | 1520 | 3.0542 | 0.515 | 0.069 | 0.041 | 0.0 | 0.189 | 0.058 | 0.186 | 0.189 | 0.249 |
| 0.0458 | 6.0 | 1824 | 2.5971 | 0.496 | 0.074 | 0.0 | 0.0 | 0.075 | 0.003 | 0.067 | 0.075 | 0.219 |
| 0.034 | 7.0 | 2128 | 2.6303 | 0.481 | 0.145 | 0.053 | 0.1 | 0.379 | 0.131 | 0.341 | 0.36 | 0.292 |
| 0.0579 | 8.0 | 2432 | 2.2991 | 0.497 | 0.184 | 0.084 | 0.1 | 0.291 | 0.093 | 0.278 | 0.281 | 0.282 |
| 0.0113 | 9.0 | 2736 | 2.0278 | 0.485 | 0.234 | 0.159 | 0.2 | 0.364 | 0.17 | 0.327 | 0.357 | 0.424 |
| 0.0329 | 10.0 | 3040 | 2.0825 | 0.491 | 0.24 | 0.16 | 0.2 | 0.385 | 0.191 | 0.349 | 0.385 | 0.428 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0
- Pytorch 2.2.1+cu121
- Datasets 2.20.0
- Tokenizers 0.15.2
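
With these library versions, the adapter can presumably be loaded on top of the base model with PEFT. The sketch below is illustrative: the adapter id is taken from the card title, while the dtype, device placement, prompt, and generation settings are assumptions rather than documented usage.

```python
# Hedged sketch: loading this PEFT adapter onto the CodeLlama base model for inference.
# Requires `accelerate` for device_map="auto"; settings below are illustrative.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-Instruct-hf"
adapter_id = "vdavidr/CodeLlama-7b-Instruct-hf_En__CMP_TR_size_304_epochs_10_2024-06-23_10-41-40_3558636"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Assumption: the usual CodeLlama-Instruct [INST] ... [/INST] prompt format.
prompt = "[INST] Write a Python function that reverses a string. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The `[INST] ... [/INST]` wrapping follows the standard CodeLlama-Instruct prompt format; adjust it to match however the fine-tuning data was actually formatted.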