---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full-gpt_consistent-reward-scale-1
  results: []
---
# zephyr-7b-dpo-full-gpt_consistent-reward-scale-1
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified preference dataset. It achieves the following results on the evaluation set:
- Loss: 0.4815
- Rewards/chosen: -1.5849
- Rewards/rejected: -2.8045
- Rewards/accuracies: 0.7328
- Rewards/margins: 1.2196
- Logps/rejected: -526.9686
- Logps/chosen: -443.5758
- Logits/rejected: 3.4838
- Logits/chosen: 2.3333
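
For reference, here is a minimal loading sketch with `transformers`, assuming the checkpoint is published on the Hugging Face Hub under this card's model name (the repo id below is that assumption) and that it inherits the Zephyr chat template from the SFT base:

```python
# Minimal inference sketch; the repo id is assumed from this card's title.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zephyr-7b-dpo-full-gpt_consistent-reward-scale-1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# The Zephyr SFT base ships a chat template, so apply_chat_template should work.
messages = [{"role": "user", "content": "What is direct preference optimization?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```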
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
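
For orientation, here is a sketch of how these hyperparameters map onto trl's `DPOTrainer`. The actual training script is not part of this card, so the preference dataset and the `beta` value below are assumptions, not statements about this run:

```python
# Sketch only: maps the hyperparameters listed above onto trl's DPOConfig.
# The dataset and beta are assumed; the card does not state either.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

args = DPOConfig(
    output_dir="zephyr-7b-dpo-full",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # train_batch_size above
    per_device_eval_batch_size=8,    # eval_batch_size above
    gradient_accumulation_steps=2,   # 8 GPUs x 8 per device x 2 = 128 total
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    beta=0.1,                        # assumed; not stated in this card
)

# Placeholder dataset: the card does not name the preference data used.
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

trainer = DPOTrainer(
    model=model,                     # reference model is cloned internally if omitted
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

With no explicit `ref_model`, `DPOTrainer` snapshots a frozen copy of the policy as the reference, which matches the usual zephyr-style recipe of anchoring DPO to the SFT checkpoint.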
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6607        | 0.1147 | 50   | 0.6447          | -0.0044        | -0.1452          | 0.6897             | 0.1408          | -261.0398      | -285.5275    | -2.4923         | -2.5735       |
| 0.5616        | 0.2294 | 100  | 0.5464          | -0.8527        | -1.5772          | 0.6853             | 0.7245          | -404.2408      | -370.3625    | 0.2075          | -0.2410       |
| 0.5333        | 0.3440 | 150  | 0.5195          | -1.0024        | -1.8820          | 0.7112             | 0.8797          | -434.7274      | -385.3255    | 1.4808          | 0.5890        |
| 0.5219        | 0.4587 | 200  | 0.5010          | -1.0719        | -2.0541          | 0.7328             | 0.9822          | -451.9354      | -392.2838    | 2.4260          | 1.4256        |
| 0.5007        | 0.5734 | 250  | 0.4917          | -1.2321        | -2.3291          | 0.7241             | 1.0970          | -479.4298      | -408.2994    | 2.6738          | 1.4527        |
| 0.5109        | 0.6881 | 300  | 0.4878          | -1.3356        | -2.5048          | 0.7284             | 1.1691          | -496.9991      | -418.6534    | 2.8884          | 1.5762        |
| 0.5063        | 0.8028 | 350  | 0.4814          | -1.4870        | -2.6833          | 0.7371             | 1.1963          | -514.8549      | -433.7904    | 3.3469          | 2.1699        |
| 0.4936        | 0.9174 | 400  | 0.4815          | -1.5849        | -2.8045          | 0.7328             | 1.2196          | -526.9686      | -443.5758    | 3.4838          | 2.3333        |
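
For reading the reward columns: under DPO, a response's implicit reward is the scaled log-probability ratio between the policy and the frozen SFT reference, and the loss is a logistic loss on the chosen-vs-rejected margin. The standard objective (not specific to this run) is:

```latex
% Implicit DPO reward for a response y given prompt x
r(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right)

% DPO loss: logistic loss on the chosen-vs-rejected reward margin
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\!\left( r(x, y_{\mathrm{chosen}}) - r(x, y_{\mathrm{rejected}}) \right)
```

Consistent with this, Rewards/margins in the table is exactly Rewards/chosen minus Rewards/rejected (e.g. in the final row, -1.5849 - (-2.8045) = 1.2196), and Rewards/accuracies is the fraction of evaluation pairs where the chosen response receives the higher reward.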
### Framework versions
- Transformers 4.44.0.dev0
- PyTorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1