---
base_model: google/gemma-2b
library_name: peft
license: gemma
metrics:
- accuracy
tags:
- trl
- reward-trainer
- generated_from_trainer
model-index:
- name: 0809_031041-google-gemma-2b
  results: []
---

# 0809_031041-google-gemma-2b

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3773
- Accuracy: 0.8239

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4824        | 0.3546 | 50   | 0.5036          | 0.7367   |
| 0.3486        | 0.7092 | 100  | 0.4458          | 0.7746   |
| 0.3555        | 1.0638 | 150  | 0.4337          | 0.8030   |
| 0.3447        | 1.4184 | 200  | 0.4066          | 0.8239   |
| 0.3008        | 1.7730 | 250  | 0.3979          | 0.8258   |
| 0.3857        | 2.1277 | 300  | 0.3888          | 0.8390   |
| 0.2754        | 2.4823 | 350  | 0.3760          | 0.8314   |
| 0.4746        | 2.8369 | 400  | 0.3798          | 0.8258   |
| 0.3281        | 3.1915 | 450  | 0.3734          | 0.8258   |
| 0.3149        | 3.5461 | 500  | 0.3827          | 0.8277   |
| 0.2695        | 3.9007 | 550  | 0.3720          | 0.8277   |
| 0.2524        | 4.2553 | 600  | 0.3758          | 0.8239   |
| 0.2197        | 4.6099 | 650  | 0.3768          | 0.8220   |
| 0.251         | 4.9645 | 700  | 0.3773          | 0.8239   |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
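
### Training script sketch

The card lists hyperparameters but not the training script. The sketch below shows one way those settings could map onto TRL's `RewardTrainer`, under several assumptions: the TRL version is not listed under framework versions, the preference dataset is unknown (`Anthropic/hh-rlhf` is a stand-in), and the LoRA configuration and 512-token `max_length` are placeholders. Only the values from "Training hyperparameters" and the 50-step evaluation cadence visible in "Training results" come from this card.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

base_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
# Reward models score sequences, so a single-label classification head is used.
model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=1)

# Placeholder preference dataset with plain-text "chosen"/"rejected" columns;
# the dataset actually used for this card is unknown.
raw = load_dataset("Anthropic/hh-rlhf")

def tokenize_pair(example):
    # RewardTrainer (TRL 0.9.x era) expects pre-tokenized chosen/rejected pairs.
    chosen = tokenizer(example["chosen"], truncation=True, max_length=512)
    rejected = tokenizer(example["rejected"], truncation=True, max_length=512)
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

train_dataset = raw["train"].map(tokenize_pair)
eval_dataset = raw["test"].map(tokenize_pair)

# LoRA settings are assumed; the card only indicates that PEFT was used.
peft_config = LoraConfig(task_type="SEQ_CLS", r=16, lora_alpha=32, lora_dropout=0.05)

# The values below mirror the "Training hyperparameters" section; the optimizer
# (Adam, betas=(0.9, 0.999), eps=1e-8) and linear scheduler are the defaults.
args = RewardConfig(
    output_dir="0809_031041-google-gemma-2b",
    learning_rate=1e-5,
    per_device_train_batch_size=32,  # total train batch: 32 * 2 accumulation = 64
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=5.0,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="steps",
    eval_steps=50,                   # matches the 50-step cadence in the results table
    max_length=512,                  # assumed; not stated on the card
)

trainer = RewardTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,  # RewardTrainer applies the LoRA adapter to the model
)
trainer.train()
```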
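
## How to use

A minimal loading sketch. It assumes the adapter was trained as a reward model with a single-score sequence-classification head (suggested by the `trl` and `reward-trainer` tags) and that the adapter weights live in this repository; `adapter_id` is a placeholder to replace with the full Hub repo id or a local path.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "google/gemma-2b"
adapter_id = "0809_031041-google-gemma-2b"  # placeholder: full Hub repo id or local path

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=1 gives a scalar reward head, matching how TRL's RewardTrainer
# configures the base model for reward modeling.
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=1, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Score a prompt/response pair: a higher score means the reward model
# prefers this text.
text = "How do I sort a list in Python?\nUse sorted(my_list)."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits[0].item()
print(f"reward score: {score:.4f}")
```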