mistral-finetuned-toxicity3

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7477
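
Assuming this is the standard token-level cross-entropy, the loss corresponds to an evaluation perplexity of exp(0.7477) ≈ 2.11.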

Model description

More information needed

Intended uses & limitations

More information needed
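
The framework versions listed below (PEFT 0.13.2) indicate that this repository holds a PEFT adapter rather than full model weights, so the adapter must be attached to the base model at load time. A minimal loading and inference sketch, assuming a LoRA-style causal-LM adapter on mistralai/Mistral-7B-Instruct-v0.3; the prompt and generation settings are illustrative, not documented on this card:

```python
# Minimal sketch: load the base model, attach this repo's PEFT adapter, generate.
# The prompt below is an assumption about intended use (toxicity-related), not
# something documented on the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "mistralai/Mistral-7B-Instruct-v0.3"
ADAPTER_ID = "juliadollis/mistral-finetuned-toxicity3"

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the fine-tuned adapter
model.eval()

messages = [{"role": "user", "content": "Is the following comment toxic? 'Have a great day!'"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids=input_ids, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```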

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 2
  • mixed_precision_training: Native AMP
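
These settings map directly onto transformers.TrainingArguments. A minimal sketch, assuming the standard Trainer API; output_dir and the 200-step evaluation/logging cadence (inferred from the results table below) are assumptions rather than values documented on this card:

```python
# Sketch of TrainingArguments matching the listed hyperparameters.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-finetuned-toxicity3",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,   # total_train_batch_size = 2 * 4 = 8
    lr_scheduler_type="linear",
    num_train_epochs=2,
    optim="adamw_torch",             # AdamW; betas=(0.9, 0.999), eps=1e-8 are the defaults
    fp16=True,                       # "Native AMP" mixed precision
    eval_strategy="steps",           # assumed: the card logs evaluation every 200 steps
    eval_steps=200,
    logging_steps=200,
)
```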

Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.9782 | 0.0185 | 200   | 0.9489 |
| 0.9499 | 0.0370 | 400   | 0.9159 |
| 0.8738 | 0.0555 | 600   | 0.9025 |
| 0.9153 | 0.0740 | 800   | 0.8785 |
| 0.8624 | 0.0925 | 1000  | 0.8715 |
| 0.8629 | 0.1110 | 1200  | 0.8655 |
| 0.8632 | 0.1294 | 1400  | 0.8521 |
| 0.838  | 0.1479 | 1600  | 0.8503 |
| 0.8168 | 0.1664 | 1800  | 0.8456 |
| 0.8198 | 0.1849 | 2000  | 0.8389 |
| 0.8243 | 0.2034 | 2200  | 0.8277 |
| 0.814  | 0.2219 | 2400  | 0.8315 |
| 0.8027 | 0.2404 | 2600  | 0.8229 |
| 0.8192 | 0.2589 | 2800  | 0.8173 |
| 0.8178 | 0.2774 | 3000  | 0.8161 |
| 0.7955 | 0.2959 | 3200  | 0.8132 |
| 0.786  | 0.3144 | 3400  | 0.8081 |
| 0.8196 | 0.3329 | 3600  | 0.8046 |
| 0.7996 | 0.3514 | 3800  | 0.8034 |
| 0.8236 | 0.3699 | 4000  | 0.7995 |
| 0.8192 | 0.3883 | 4200  | 0.7965 |
| 0.7898 | 0.4068 | 4400  | 0.7920 |
| 0.8018 | 0.4253 | 4600  | 0.7896 |
| 0.7837 | 0.4438 | 4800  | 0.7881 |
| 0.7802 | 0.4623 | 5000  | 0.7885 |
| 0.7856 | 0.4808 | 5200  | 0.7847 |
| 0.7873 | 0.4993 | 5400  | 0.7813 |
| 0.787  | 0.5178 | 5600  | 0.7806 |
| 0.7871 | 0.5363 | 5800  | 0.7781 |
| 0.7955 | 0.5548 | 6000  | 0.7787 |
| 0.7857 | 0.5733 | 6200  | 0.7745 |
| 0.7817 | 0.5918 | 6400  | 0.7729 |
| 0.7841 | 0.6103 | 6600  | 0.7735 |
| 0.7474 | 0.6288 | 6800  | 0.7683 |
| 0.7597 | 0.6472 | 7000  | 0.7707 |
| 0.7591 | 0.6657 | 7200  | 0.7666 |
| 0.7615 | 0.6842 | 7400  | 0.7646 |
| 0.7366 | 0.7027 | 7600  | 0.7647 |
| 0.7697 | 0.7212 | 7800  | 0.7611 |
| 0.7387 | 0.7397 | 8000  | 0.7599 |
| 0.7503 | 0.7582 | 8200  | 0.7577 |
| 0.7545 | 0.7767 | 8400  | 0.7566 |
| 0.7734 | 0.7952 | 8600  | 0.7540 |
| 0.7512 | 0.8137 | 8800  | 0.7532 |
| 0.7627 | 0.8322 | 9000  | 0.7512 |
| 0.7519 | 0.8507 | 9200  | 0.7520 |
| 0.7556 | 0.8692 | 9400  | 0.7489 |
| 0.7667 | 0.8877 | 9600  | 0.7472 |
| 0.7458 | 0.9061 | 9800  | 0.7465 |
| 0.7191 | 0.9246 | 10000 | 0.7457 |
| 0.7396 | 0.9431 | 10200 | 0.7423 |
| 0.7281 | 0.9616 | 10400 | 0.7426 |
| 0.7219 | 0.9801 | 10600 | 0.7416 |
| 0.7237 | 0.9986 | 10800 | 0.7389 |
| 0.589  | 1.0171 | 11000 | 0.7538 |
| 0.6071 | 1.0356 | 11200 | 0.7503 |
| 0.5696 | 1.0541 | 11400 | 0.7547 |
| 0.6019 | 1.0726 | 11600 | 0.7498 |
| 0.5741 | 1.0911 | 11800 | 0.7551 |
| 0.5922 | 1.1096 | 12000 | 0.7527 |
| 0.5721 | 1.1281 | 12200 | 0.7534 |
| 0.5856 | 1.1466 | 12400 | 0.7526 |
| 0.5775 | 1.1650 | 12600 | 0.7549 |
| 0.5911 | 1.1835 | 12800 | 0.7511 |
| 0.5983 | 1.2020 | 13000 | 0.7494 |
| 0.6213 | 1.2205 | 13200 | 0.7460 |
| 0.6006 | 1.2390 | 13400 | 0.7468 |
| 0.5658 | 1.2575 | 13600 | 0.7477 |
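
The validation loss bottoms out at 0.7389 at step 10800, roughly the end of the first epoch, and drifts upward through the second epoch while the training loss drops sharply, a pattern consistent with mild overfitting. Note also that the log ends at epoch ≈ 1.26 rather than the configured 2 epochs, so training appears to have stopped before the full schedule; the headline loss of 0.7477 is the value at the final logged step (13600).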

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.3
  • PyTorch 2.1.0
  • Datasets 3.1.0
  • Tokenizers 0.20.3
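
A matching environment can be recreated with, for example: pip install peft==0.13.2 transformers==4.46.3 torch==2.1.0 datasets==3.1.0 tokenizers==0.20.3 (torch is the PyPI package name for PyTorch).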