squad_qa_title_v5_full_qaonly_meta-llama_Llama-2-7b-hf_3e-5_lora

This model is a LoRA fine-tuned version of meta-llama/Llama-2-7b-hf. The training dataset is not documented in this card, though the model name suggests SQuAD-style question-answering data. It achieves the following results on the evaluation set:

  • Loss: 2.7140
  • Accuracy: 0.6713
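
Since this is a LoRA adapter, it is loaded on top of the base model rather than as a standalone checkpoint. A minimal sketch using the PEFT library, assuming the repository contains a standard PEFT adapter checkpoint and that you have been granted access to the gated Llama-2 base weights (the prompt format shown is an assumption, not documented in this card):

```python
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

# Downloads the adapter and applies it to meta-llama/Llama-2-7b-hf.
model = AutoPeftModelForCausalLM.from_pretrained(
    "tyzhu/squad_qa_title_v5_full_qaonly_meta-llama_Llama-2-7b-hf_3e-5_lora"
)
# The adapter repo may not ship tokenizer files, so load from the base model.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "Question: Who wrote Hamlet?\nAnswer:"  # hypothetical prompt format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```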

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 3e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 50.0
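
These settings map directly onto Transformers' TrainingArguments. A minimal sketch of the corresponding configuration; the hyperparameters are taken from the list above, while the LoRA rank, alpha, and target modules are placeholder assumptions, since the card does not record them:

```python
from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="squad_qa_title_v5_full_qaonly_lora",  # hypothetical path
    learning_rate=3e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # effective batch size 8 * 4 = 32
    num_train_epochs=50,
    lr_scheduler_type="constant",
    warmup_ratio=0.05,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",
    logging_strategy="epoch",
)

# Assumed LoRA settings -- not recorded in this card.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```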

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.99  | 74   | 1.4559          | 0.6859   |
| 1.8449        | 1.99  | 149  | 1.2466          | 0.6966   |
| 1.2465        | 3.0   | 224  | 1.2806          | 0.6943   |
| 1.2465        | 4.0   | 299  | 1.3376          | 0.6928   |
| 1.1239        | 4.99  | 373  | 1.3705          | 0.6927   |
| 1.0214        | 5.99  | 448  | 1.4187          | 0.6898   |
| 0.8757        | 7.0   | 523  | 1.5146          | 0.6850   |
| 0.8757        | 8.0   | 598  | 1.6043          | 0.6820   |
| 0.7243        | 8.99  | 672  | 1.7030          | 0.6797   |
| 0.5566        | 9.99  | 747  | 1.8440          | 0.6742   |
| 0.4482        | 11.0  | 822  | 1.8708          | 0.6738   |
| 0.4482        | 12.0  | 897  | 2.0210          | 0.6723   |
| 0.3721        | 12.99 | 971  | 2.0927          | 0.6706   |
| 0.3134        | 13.99 | 1046 | 2.1836          | 0.6717   |
| 0.2923        | 15.0  | 1121 | 2.2067          | 0.6722   |
| 0.2923        | 16.0  | 1196 | 2.2767          | 0.6720   |
| 0.2770        | 16.99 | 1270 | 2.3396          | 0.6715   |
| 0.2604        | 17.99 | 1345 | 2.3341          | 0.6727   |
| 0.2588        | 19.0  | 1420 | 2.2934          | 0.6719   |
| 0.2588        | 20.0  | 1495 | 2.3288          | 0.6720   |
| 0.2545        | 20.99 | 1569 | 2.3674          | 0.6733   |
| 0.2460        | 21.99 | 1644 | 2.3575          | 0.6712   |
| 0.2475        | 23.0  | 1719 | 2.4415          | 0.6717   |
| 0.2475        | 24.0  | 1794 | 2.3931          | 0.6724   |
| 0.2441        | 24.99 | 1868 | 2.4622          | 0.6716   |
| 0.2393        | 25.99 | 1943 | 2.4699          | 0.6727   |
| 0.2419        | 27.0  | 2018 | 2.5011          | 0.6721   |
| 0.2419        | 28.0  | 2093 | 2.4473          | 0.6713   |
| 0.2384        | 28.99 | 2167 | 2.5251          | 0.6712   |
| 0.2349        | 29.99 | 2242 | 2.5332          | 0.6706   |
| 0.2362        | 31.0  | 2317 | 2.4678          | 0.6713   |
| 0.2362        | 32.0  | 2392 | 2.4959          | 0.6699   |
| 0.2335        | 32.99 | 2466 | 2.5345          | 0.6692   |
| 0.2310        | 33.99 | 2541 | 2.4998          | 0.6716   |
| 0.2323        | 35.0  | 2616 | 2.5296          | 0.6703   |
| 0.2323        | 36.0  | 2691 | 2.6055          | 0.6723   |
| 0.2309        | 36.99 | 2765 | 2.5830          | 0.6727   |
| 0.2290        | 37.99 | 2840 | 2.5591          | 0.6710   |
| 0.2293        | 39.0  | 2915 | 2.5690          | 0.6729   |
| 0.2293        | 40.0  | 2990 | 2.5830          | 0.6732   |
| 0.2283        | 40.99 | 3064 | 2.6750          | 0.6712   |
| 0.2248        | 41.99 | 3139 | 2.6572          | 0.6715   |
| 0.2267        | 43.0  | 3214 | 2.6151          | 0.6722   |
| 0.2267        | 44.0  | 3289 | 2.6482          | 0.6722   |
| 0.2252        | 44.99 | 3363 | 2.6898          | 0.6708   |
| 0.2240        | 45.99 | 3438 | 2.6339          | 0.6716   |
| 0.2258        | 47.0  | 3513 | 2.6734          | 0.6717   |
| 0.2258        | 48.0  | 3588 | 2.7264          | 0.6713   |
| 0.2249        | 48.99 | 3662 | 2.7045          | 0.6701   |
| 0.2253        | 49.5  | 3700 | 2.7140          | 0.6713   |
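
Validation loss bottoms out around epoch 2 (1.2466, accuracy 0.6966) and climbs steadily afterwards while training loss plateaus near 0.22, a typical overfitting pattern; the reported final checkpoint is not the best one. If retraining, Trainer's built-in best-checkpoint tracking can recover the lowest-loss model automatically. A minimal sketch, extending the arguments shown above (argument names are from Transformers 4.34, assumed applicable here):

```python
from transformers import TrainingArguments

# Keep the checkpoint with the lowest validation loss instead of the
# final (overfit) one.
training_args = TrainingArguments(
    output_dir="squad_qa_title_v5_full_qaonly_lora",  # hypothetical path
    evaluation_strategy="epoch",
    save_strategy="epoch",            # must match evaluation_strategy
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,          # lower loss is better
)
```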

Framework versions

  • Transformers 4.34.0
  • PyTorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.14.1