---
library_name: peft
license: gemma
base_model: google/gemma-2-2b-jpn-it
tags:
  - generated_from_trainer
metrics:
  - spearmanr
  - pearsonr
model-index:
  - name: estimation-reward-gemma-2-2b
    results: []
---

# estimation-reward-gemma-2-2b

This model is a PEFT fine-tuned version of [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 93.8438
- Spearmanr: 0.6688
- Kendalltau: 0.4828
- Pearsonr: 0.0
- Rmse: 9.6873
- Mae: 7.4340
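
For a quick sanity check, the adapter can be loaded on top of the base model with PEFT. The sketch below is not the authors' code: the repo id `naive-puzzle/estimation-reward-gemma-2-2b` and the single-output regression head (suggested by the RMSE/MAE metrics above) are assumptions.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "google/gemma-2-2b-jpn-it"
adapter_id = "naive-puzzle/estimation-reward-gemma-2-2b"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
# num_labels=1 assumes a scalar regression head, consistent with the
# RMSE/MAE metrics reported on this card.
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=1, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("見積もり対象のテキスト", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted score: {score:.4f}")
```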

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine_with_min_lr
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
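
These settings map onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction, not the original training script; in particular, `cosine_with_min_lr` takes a `min_lr` (or `min_lr_rate`) via `lr_scheduler_kwargs`, and that value is not reported on this card, so the one shown is a placeholder.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="estimation-reward-gemma-2-2b",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_min_lr",
    lr_scheduler_kwargs={"min_lr": 1e-6},  # placeholder: actual min_lr not reported
    warmup_ratio=0.1,
    num_train_epochs=2.0,
)
```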

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Spearmanr | Kendalltau | Pearsonr | Rmse    | Mae     |
|:-------------:|:------:|:----:|:---------------:|:---------:|:----------:|:--------:|:-------:|:-------:|
| 142.0495      | 0.2094 | 500  | 172.4941        | 0.1755    | 0.1193     | 0.0      | 13.1337 | 10.1489 |
| 114.1114      | 0.4188 | 1000 | 145.3719        | 0.4083    | 0.2799     | 0.0      | 12.0570 | 9.3769  |
| 126.2303      | 0.6281 | 1500 | 123.7460        | 0.5220    | 0.3660     | 0.0      | 11.1241 | 8.5576  |
| 104.0802      | 0.8375 | 2000 | 110.0878        | 0.5964    | 0.4219     | 0.0      | 10.4923 | 8.0832  |
| 92.9514       | 1.0469 | 2500 | 101.4019        | 0.6340    | 0.4530     | 0.0      | 10.0698 | 7.7131  |
| 89.5989       | 1.2563 | 3000 | 98.1783         | 0.6485    | 0.4649     | 0.0      | 9.9085  | 7.6020  |
| 76.1914       | 1.4657 | 3500 | 96.0021         | 0.6582    | 0.4736     | 0.0      | 9.7981  | 7.5160  |
| 81.2849       | 1.6750 | 4000 | 95.4644         | 0.6645    | 0.4783     | 0.0      | 9.7706  | 7.5254  |
| 75.9316       | 1.8844 | 4500 | 93.8438         | 0.6688    | 0.4828     | 0.0      | 9.6873  | 7.4340  |
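
The evaluation columns above can be produced by a `compute_metrics` function along these lines, a sketch assuming scalar regression predictions rather than the authors' actual implementation:

```python
import numpy as np
from scipy.stats import kendalltau, pearsonr, spearmanr

def compute_metrics(eval_pred):
    # The HF Trainer passes a (predictions, labels) pair; flatten the
    # trailing dimension of a single-output regression head.
    preds, labels = eval_pred
    preds = np.asarray(preds, dtype=np.float64).reshape(-1)
    labels = np.asarray(labels, dtype=np.float64).reshape(-1)
    return {
        "spearmanr": spearmanr(preds, labels)[0],
        "kendalltau": kendalltau(preds, labels)[0],
        "pearsonr": pearsonr(preds, labels)[0],
        "rmse": float(np.sqrt(np.mean((preds - labels) ** 2))),
        "mae": float(np.mean(np.abs(preds - labels))),
    }
```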

### Framework versions

- PEFT 0.15.1
- Transformers 4.50.2
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1