mistral-7b-nli_cot_qkv

This model is a fine-tuned version of TheBloke/Mistral-7B-v0.1-GPTQ on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7749
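
Since this repository ships a PEFT adapter trained on top of the GPTQ-quantized base model, inference means loading the base and attaching the adapter. Below is a minimal sketch, assuming the adapter is published as jd0g/mistral-7b-nli_cot_qkv and that accelerate, optimum, and auto-gptq are installed so the GPTQ base weights can be loaded; the NLI prompt format is a placeholder, not necessarily the one used in training.

```python
# Minimal inference sketch (assumptions noted in the surrounding text).
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-v0.1-GPTQ"
adapter_id = "jd0g/mistral-7b-nli_cot_qkv"  # assumed adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter
model.eval()

# Placeholder NLI prompt; the actual training prompt format is not documented.
prompt = (
    "Premise: A man is playing a guitar.\n"
    "Hypothesis: A person is making music.\n"
    "Answer with reasoning:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```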

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2
  • num_epochs: 12
  • mixed_precision_training: Native AMP
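
Assuming the run used the Hugging Face Trainer (the usual producer of this card format), the hyperparameters above map onto transformers.TrainingArguments roughly as sketched below; output_dir is illustrative, and the Adam betas/epsilon listed above are the library defaults, so they need no explicit arguments.

```python
# Hedged sketch of the training configuration; not the author's actual script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-7b-nli_cot_qkv",  # illustrative, not from the run
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 4 x 4 = total train batch size of 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=12,
    fp16=True,  # "Native AMP" mixed precision
)
```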

Training results

| Training Loss | Epoch   | Step  | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.426         | 0.9998  | 1196  | 0.4255          |
| 0.3664        | 1.9996  | 2392  | 0.4365          |
| 0.3221        | 2.9994  | 3588  | 0.4455          |
| 0.2804        | 4.0     | 4785  | 0.4577          |
| 0.2403        | 4.9998  | 5981  | 0.4719          |
| 0.2001        | 5.9996  | 7177  | 0.4948          |
| 0.1643        | 6.9994  | 8373  | 0.5278          |
| 0.1305        | 8.0     | 9570  | 0.5634          |
| 0.1011        | 8.9998  | 10766 | 0.6095          |
| 0.0768        | 9.9996  | 11962 | 0.6621          |
| 0.0577        | 10.9994 | 13158 | 0.7225          |
| 0.0445        | 11.9975 | 14352 | 0.7749          |

Note that validation loss reaches its minimum (0.4255) after the first epoch and rises steadily thereafter while training loss keeps falling, the usual signature of overfitting; the reported final loss of 0.7749 corresponds to the epoch-12 checkpoint.

Framework versions

  • PEFT 0.10.0
  • Transformers 4.40.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.19.0
  • Tokenizers 0.19.1
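
To reproduce the environment, a quick version check can confirm it matches the list above; a sketch assuming the standard PyPI distributions of each package:

```python
# Sanity-check sketch: compare installed versions against the listed ones.
import datasets
import peft
import tokenizers
import torch
import transformers

expected = {
    "peft": "0.10.0",
    "transformers": "4.40.1",
    "torch": "2.0.1+cu118",
    "datasets": "2.19.0",
    "tokenizers": "0.19.1",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    status = "OK" if installed[name] == want else f"got {installed[name]}"
    print(f"{name}=={want}: {status}")
```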