# Output_llama3_80-20
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.6274
- Balanced Accuracy: 0.6563
- Accuracy: 0.6615
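
The checkpoint was trained with PEFT (see the framework versions below), so it loads as an adapter on top of the base model. Below is a minimal loading sketch; the sequence-classification head and the two-label setup are assumptions, since the card reports classification-style metrics but declares no pipeline tag or label set.

```python
# Minimal loading sketch. The sequence-classification head and the
# two-label setup are assumptions: the card reports accuracy-style
# metrics but does not state the task or the label set.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"       # gated; requires an accepted license
adapter_id = "Ahatsham/Output_llama3_80-20"

tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token    # Llama-3 ships without a pad token

base = AutoModelForSequenceClassification.from_pretrained(
    base_id,
    num_labels=2,  # assumption; adjust to the actual label set
)
base.config.pad_token_id = tokenizer.pad_token_id

model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```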
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20
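
For reference, these settings map onto `transformers.TrainingArguments` roughly as follows. This is a sketch, not the author's original script; the output directory is a placeholder and anything not listed above is left at its default.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Output_llama3_80-20",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
)
```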
### Training results

| Training Loss | Epoch | Step | Validation Loss | Balanced Accuracy | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 96 | 0.6857 | 0.5462 | 0.5625 |
| No log | 2.0 | 192 | 0.6684 | 0.5758 | 0.5833 |
| No log | 3.0 | 288 | 0.7167 | 0.6384 | 0.6198 |
| No log | 4.0 | 384 | 0.6335 | 0.6179 | 0.6250 |
| No log | 5.0 | 480 | 0.6574 | 0.6297 | 0.5990 |
| 0.6776 | 6.0 | 576 | 0.6322 | 0.6168 | 0.6250 |
| 0.6776 | 7.0 | 672 | 0.6374 | 0.6114 | 0.6094 |
| 0.6776 | 8.0 | 768 | 0.6261 | 0.6278 | 0.6354 |
| 0.6776 | 9.0 | 864 | 0.6289 | 0.6651 | 0.6406 |
| 0.6776 | 10.0 | 960 | 0.6082 | 0.6368 | 0.6406 |
| 0.5732 | 11.0 | 1056 | 0.6036 | 0.6553 | 0.6615 |
| 0.5732 | 12.0 | 1152 | 0.6445 | 0.6870 | 0.6510 |
| 0.5732 | 13.0 | 1248 | 0.6094 | 0.6833 | 0.6875 |
| 0.5732 | 14.0 | 1344 | 0.6104 | 0.6607 | 0.6667 |
| 0.5732 | 15.0 | 1440 | 0.6553 | 0.6960 | 0.6927 |
| 0.5144 | 16.0 | 1536 | 0.6262 | 0.6603 | 0.6510 |
| 0.5144 | 17.0 | 1632 | 0.6154 | 0.6619 | 0.6667 |
| 0.5144 | 18.0 | 1728 | 0.6210 | 0.6619 | 0.6667 |
| 0.5144 | 19.0 | 1824 | 0.6293 | 0.6716 | 0.6771 |
| 0.5144 | 20.0 | 1920 | 0.6274 | 0.6563 | 0.6615 |
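
The card does not show the metric code, but balanced accuracy and accuracy as tabulated above can be computed with scikit-learn in a standard `Trainer` `compute_metrics` callback; a minimal sketch:

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

def compute_metrics(eval_pred):
    # The Trainer passes a (logits, labels) pair at evaluation time.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "balanced_accuracy": balanced_accuracy_score(labels, preds),
    }
```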
## Framework versions

- PEFT 0.10.0
- Transformers 4.46.3
- PyTorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.20.3