# zephyr-dpo-qlora-uf-ours-5e-6-epoch1
This model is a fine-tuned version of alignment-handbook/zephyr-7b-sft-full on the generation/UF dataset. It achieves the following results on the evaluation set:
- Loss: 0.9016
- Rewards/chosen: -5.0415
- Rewards/rejected: -5.9633
- Rewards/accuracies: 0.6560
- Rewards/margins: 0.9219
- Rewards/margins Max: 4.5250
- Rewards/margins Min: -2.8411
- Rewards/margins Std: 2.5079
- Logps/rejected: -854.9131
- Logps/chosen: -788.7391
- Logits/rejected: -1.3435
- Logits/chosen: -1.4046
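The Rewards/* metrics above are the implicit DPO rewards: beta times the gap between policy and reference log-probabilities for each completion. A minimal sketch of how the margin and accuracy metrics are derived, assuming the standard DPO formulation (the beta value and the log-probabilities below are illustrative, not taken from this run):

```python
import torch

beta = 0.1  # assumption: the DPO temperature (beta) is not reported in this card

def dpo_reward(policy_logps: torch.Tensor, ref_logps: torch.Tensor) -> torch.Tensor:
    # Implicit DPO reward: beta * (log pi(y|x) - log pi_ref(y|x))
    return beta * (policy_logps - ref_logps)

# Toy sequence log-probabilities for one chosen/rejected pair (illustrative values).
policy_chosen, ref_chosen = torch.tensor([-788.7]), torch.tensor([-738.3])
policy_rejected, ref_rejected = torch.tensor([-854.9]), torch.tensor([-795.3])

margins = dpo_reward(policy_chosen, ref_chosen) - dpo_reward(policy_rejected, ref_rejected)
accuracy = (margins > 0).float().mean()  # Rewards/accuracies: fraction of pairs with positive margin
```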
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
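For reference, the list above maps onto `transformers` `TrainingArguments` roughly as follows (a sketch, not the exact alignment-handbook recipe; the output path is hypothetical, and DPO-specific settings such as beta live in the trainer, not here):

```python
from transformers import TrainingArguments

# Effective train batch size of 16 = 4 per device x 2 GPUs x 2 accumulation steps.
training_args = TrainingArguments(
    output_dir="zephyr-dpo-qlora-uf-ours-5e-6-epoch1",  # hypothetical output path
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```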
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.4419 | 0.28 | 100 | 0.6650 | -0.4529 | -0.5977 | 0.6240 | 0.1449 | 0.8619 | -0.5418 | 0.4692 | -318.3518 | -329.8811 | -2.5449 | -2.5719 |
| 0.1872 | 0.56 | 200 | 0.7859 | -2.8392 | -3.5400 | 0.6630 | 0.7008 | 3.4374 | -2.1827 | 1.9158 | -612.5828 | -568.5178 | -1.4159 | -1.4771 |
| 0.1102 | 0.85 | 300 | 0.8935 | -4.8886 | -5.8224 | 0.6470 | 0.9339 | 4.5089 | -2.8033 | 2.4986 | -840.8231 | -773.4503 | -1.3417 | -1.4023 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.39.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
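A minimal inference sketch, assuming the adapter is hosted as just1nseo/zephyr-dpo-qlora-uf-ours-5e-6-epoch1 and using the SFT base's tokenizer and chat template; adjust the dtype and device settings to your hardware:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "just1nseo/zephyr-dpo-qlora-uf-ours-5e-6-epoch1"

# Loads the base model recorded in the adapter config and attaches the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("alignment-handbook/zephyr-7b-sft-full")

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```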