lmind_hotpot_train8000_eval7405_v1_qa_3e-5_lora2

This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the tyzhu/lmind_hotpot_train8000_eval7405_v1_qa dataset. It achieves the following results on the evaluation set:

  • Loss: 3.7015
  • Accuracy: 0.5822
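
The "_lora2" suffix in the model name suggests these are LoRA adapter weights rather than a full checkpoint. Below is a minimal loading sketch under that assumption; it presumes the repo hosts PEFT-format adapters and that you have access to the gated Llama-2 base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the (gated) base model, then attach the LoRA adapter on top.
# Assumption: this repo contains PEFT-format adapter weights.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(
    base, "tyzhu/lmind_hotpot_train8000_eval7405_v1_qa_3e-5_lora2"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```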

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
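
Although no details are documented here, the dataset referenced above is hosted on the Hub and can be inspected directly; a minimal sketch:

```python
from datasets import load_dataset

# Pull the QA dataset named in the card and print its splits and columns.
ds = load_dataset("tyzhu/lmind_hotpot_train8000_eval7405_v1_qa")
print(ds)
```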

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto TrainingArguments follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • total_eval_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 50.0
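
As a rough reconstruction, these settings correspond to Hugging Face TrainingArguments as sketched below. Flags not listed on the card (e.g. output_dir) are placeholders, and the effective batch sizes follow from 2 per device × 4 GPUs × 4 accumulation steps = 32 for training and 2 × 4 = 8 for evaluation:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",               # placeholder; not recorded on the card
    learning_rate=3e-5,
    per_device_train_batch_size=2,  # 2 x 4 GPUs x 4 accumulation = 32 total
    per_device_eval_batch_size=2,   # 2 x 4 GPUs = 8 total
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.05,
    num_train_epochs=50.0,
    adam_beta1=0.9,                 # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```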

Training results

Training Loss  Epoch  Step   Accuracy  Validation Loss
1.8255         1.0    250    0.6054    1.8392
1.7368         2.0    500    0.6078    1.8111
1.6689         3.0    750    0.6075    1.8103
1.5555         4.0    1000   0.6067    1.8414
1.4559         5.0    1250   0.6038    1.8992
1.3514         6.0    1500   0.6018    1.9584
1.2491         7.0    1750   0.6000    2.0300
1.1749         8.0    2000   0.5982    2.1051
1.0769         9.0    2250   0.5954    2.1948
1.0134         10.0   2500   0.5943    2.2515
0.9209         11.0   2750   0.5921    2.3421
0.8636         12.0   3000   0.5905    2.4443
0.7866         13.0   3250   0.5880    2.5574
0.7448         14.0   3500   0.5867    2.5800
0.6709         15.0   3750   0.5846    2.6912
0.6439         16.0   4000   0.5853    2.7546
0.5869         17.0   4250   0.5831    2.7997
0.5596         18.0   4500   0.5833    2.8435
0.5205         19.0   4750   0.5833    2.9510
0.5045         20.0   5000   0.5824    2.9797
0.4700         21.0   5250   0.5832    3.0530
0.4550         22.0   5500   0.5821    3.0804
0.4332         23.0   5750   0.5813    3.1938
0.4171         24.0   6000   0.5816    3.1836
0.4049         25.0   6250   0.5817    3.1950
0.3975         26.0   6500   0.5801    3.2749
0.3798         27.0   6750   0.5808    3.3141
0.3774         28.0   7000   0.5815    3.3085
0.3636         29.0   7250   0.5813    3.3525
0.3620         30.0   7500   0.5809    3.4330
0.3486         31.0   7750   0.5805    3.4240
0.3471         32.0   8000   0.5806    3.4737
0.3350         33.0   8250   0.5825    3.4706
0.3367         34.0   8500   0.5829    3.4640
0.3276         35.0   8750   0.5806    3.5442
0.3298         36.0   9000   0.5800    3.6080
0.3226         37.0   9250   0.5818    3.5853
0.3229         38.0   9500   0.5826    3.5513
0.3163         39.0   9750   0.5812    3.5633
0.3181         40.0   10000  0.5816    3.6170
0.3105         41.0   10250  0.5821    3.5726
0.3113         42.0   10500  0.5811    3.6571
0.3083         43.0   10750  0.5824    3.6066
0.3082         44.0   11000  0.5820    3.6072
0.3032         45.0   11250  0.5822    3.6758
0.3041         46.0   11500  0.5827    3.7283
0.3016         47.0   11750  0.5813    3.7187
0.3017         48.0   12000  0.5803    3.6693
0.2940         49.0   12250  0.5812    3.7501
0.2981         50.0   12500  0.5822    3.7015

Framework versions

  • Transformers 4.34.0
  • PyTorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.14.1