---
base_model: unsloth/llama-3-8b
library_name: peft
license: llama3
tags:
- unsloth
- generated_from_trainer
model-index:
- name: Meta-Llama-3-8B_metamath_default
  results: []
---

# Meta-Llama-3-8B_metamath_default

This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5068

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8666 | 0.0211 | 13 | 0.7509 |
| 0.6952 | 0.0421 | 26 | 0.7368 |
| 0.7079 | 0.0632 | 39 | 0.7196 |
| 0.6922 | 0.0842 | 52 | 0.7066 |
| 0.6565 | 0.1053 | 65 | 0.7074 |
| 0.6791 | 0.1264 | 78 | 0.7263 |
| 0.6858 | 0.1474 | 91 | 0.7019 |
| 0.6693 | 0.1685 | 104 | 0.6926 |
| 0.6503 | 0.1896 | 117 | 0.6922 |
| 0.6488 | 0.2106 | 130 | 0.6925 |
| 0.6505 | 0.2317 | 143 | 0.6844 |
| 0.6533 | 0.2527 | 156 | 0.6842 |
| 0.6505 | 0.2738 | 169 | 0.6709 |
| 0.6456 | 0.2949 | 182 | 0.6661 |
| 0.6307 | 0.3159 | 195 | 0.6699 |
| 0.6144 | 0.3370 | 208 | 0.6629 |
| 0.6286 | 0.3580 | 221 | 0.6547 |
| 0.6261 | 0.3791 | 234 | 0.6469 |
| 0.6365 | 0.4002 | 247 | 0.6482 |
| 0.6108 | 0.4212 | 260 | 0.6428 |
| 0.6207 | 0.4423 | 273 | 0.6322 |
| 0.6219 | 0.4633 | 286 | 0.6265 |
| 0.6133 | 0.4844 | 299 | 0.6213 |
| 0.5944 | 0.5055 | 312 | 0.6138 |
| 0.5871 | 0.5265 | 325 | 0.6034 |
| 0.5827 | 0.5476 | 338 | 0.6013 |
| 0.5714 | 0.5687 | 351 | 0.5923 |
| 0.5512 | 0.5897 | 364 | 0.5849 |
| 0.5636 | 0.6108 | 377 | 0.5755 |
| 0.5564 | 0.6318 | 390 | 0.5684 |
| 0.5444 | 0.6529 | 403 | 0.5647 |
| 0.5431 | 0.6740 | 416 | 0.5582 |
| 0.5311 | 0.6950 | 429 | 0.5533 |
| 0.5323 | 0.7161 | 442 | 0.5458 |
| 0.5172 | 0.7371 | 455 | 0.5386 |
| 0.5113 | 0.7582 | 468 | 0.5341 |
| 0.4989 | 0.7793 | 481 | 0.5296 |
| 0.4929 | 0.8003 | 494 | 0.5264 |
| 0.5266 | 0.8214 | 507 | 0.5214 |
| 0.5075 | 0.8424 | 520 | 0.5184 |
| 0.4917 | 0.8635 | 533 | 0.5150 |
| 0.5078 | 0.8846 | 546 | 0.5124 |
| 0.4897 | 0.9056 | 559 | 0.5099 |
| 0.4879 | 0.9267 | 572 | 0.5081 |
| 0.5007 | 0.9478 | 585 | 0.5073 |
| 0.4979 | 0.9688 | 598 | 0.5071 |
| 0.4991 | 0.9899 | 611 | 0.5068 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
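
## Reproducing the training configuration

The hyperparameters listed under "Training procedure" map one-to-one onto 🤗 `TrainingArguments`. The sketch below shows that mapping; it assumes the standard `Trainer` loop, which this auto-generated card does not confirm, and the `output_dir` is a placeholder. The Adam betas and epsilon reported above are the optimizer defaults, so they need no explicit arguments.

```python
from transformers import TrainingArguments

# Minimal sketch of the reported hyperparameters; assumes a single device,
# so 8 (per-device batch) * 8 (accumulation) = total train batch size of 64.
training_args = TrainingArguments(
    output_dir="Meta-Llama-3-8B_metamath_default",  # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=1,
)
```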
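
## How to use

The auto-generated sections above leave usage unspecified. Since the card declares `library_name: peft` with `unsloth/llama-3-8b` as the base model, a minimal loading sketch with 🤗 Transformers and PEFT might look like the following; the adapter repo id is a placeholder, so substitute this adapter's actual Hub path or a local directory.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model this adapter was fine-tuned from.
base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b")

# Attach the PEFT adapter; "Meta-Llama-3-8B_metamath_default" is a placeholder
# for the actual Hub repo id or local path of this adapter.
model = PeftModel.from_pretrained(base, "Meta-Llama-3-8B_metamath_default")

# The "metamath" name suggests math word problems, so a math-style prompt:
inputs = tokenizer("What is 15% of 240?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```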