---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full-prometheus_consistent-reward-scale-1-rpo
  results: []
---
# zephyr-7b-dpo-full-prometheus_consistent-reward-scale-1-rpo

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified dataset (recorded as "None" in the training config).
It achieves the following results on the evaluation set:
- Loss: 0.0340
- Rewards/chosen: -0.0965
- Rewards/rejected: -0.3938
- Rewards/accuracies: 0.7414
- Rewards/margins: 0.2973
- Logps/rejected: -258.4546
- Logps/chosen: -285.2520
- Logits/rejected: -2.1563
- Logits/chosen: -2.3140
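
As a quick sanity check, Rewards/margins is the gap between the chosen and rejected rewards: -0.0965 - (-0.3938) ≈ 0.2973. The snippet below is a minimal loading sketch, not part of the original card: it assumes the checkpoint is published under the model name above and that the tokenizer carries Zephyr's chat template.

```python
# Minimal inference sketch (an assumption, not from the original card).
# The repo id below is inferred from the model name; adjust it to the
# actual Hub namespace where the checkpoint is hosted.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="zephyr-7b-dpo-full-prometheus_consistent-reward-scale-1-rpo",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
out = pipe(messages, max_new_tokens=64)
# The pipeline returns the full chat; the last message is the reply.
print(out[0]["generated_text"][-1]["content"])
```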
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
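
The effective batch size follows from the values above: 8 per device × 8 GPUs × 2 accumulation steps = 128. Below is a minimal sketch, not the authors' actual script, of how these hyperparameters might map onto trl's `DPOTrainer`; the preference dataset is a placeholder (the card does not name the real one), and exact argument names depend on the trl version.

```python
# Sketch only: maps the listed hyperparameters onto trl's DPOTrainer.
# The dataset contents are placeholders so the example is self-contained.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Tiny stand-in preference dataset (the real data is not named in this card).
train_dataset = Dataset.from_dict({
    "prompt": ["What does DPO optimize?"],
    "chosen": ["A preference objective over chosen/rejected response pairs."],
    "rejected": ["Nothing in particular."],
})

args = DPOConfig(
    output_dir="zephyr-7b-dpo-full-prometheus_consistent-reward-scale-1-rpo",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 8 GPUs x 2 accumulation = 128 total
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    # The "-rpo" suffix in the model name suggests DPOConfig's rpo_alpha
    # was set during training; its value is not recorded in this card.
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```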
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:--------------:|
| 0.048         | 0.1143 | 50   | 0.0428          | 0.0615         | -0.0563          | 0.7026             | 0.1178          | -224.7078      | -269.4535    | -2.4545         | -2.5520        |
| 0.0399        | 0.2286 | 100  | 0.0385          | -0.0747        | -0.3118          | 0.75               | 0.2371          | -250.2514      | -283.0706    | -1.9893         | -2.1601        |
| 0.0367        | 0.3429 | 150  | 0.0371          | -0.1934        | -0.4431          | 0.7672             | 0.2497          | -263.3893      | -294.9446    | -2.3057         | -2.4218        |
| 0.0375        | 0.4571 | 200  | 0.0353          | -0.0541        | -0.3320          | 0.7672             | 0.2779          | -252.2786      | -281.0130    | -2.1436         | -2.2907        |
| 0.0371        | 0.5714 | 250  | 0.0344          | -0.0812        | -0.3496          | 0.7629             | 0.2684          | -254.0325      | -283.7219    | -2.2615         | -2.3785        |
| 0.0345        | 0.6857 | 300  | 0.0341          | -0.0682        | -0.3495          | 0.7457             | 0.2813          | -254.0265      | -282.4234    | -2.2130         | -2.3475        |
| 0.0373        | 0.8    | 350  | 0.0341          | -0.0908        | -0.3849          | 0.7414             | 0.2941          | -257.5619      | -284.6819    | -2.1788         | -2.3321        |
| 0.0367        | 0.9143 | 400  | 0.0340          | -0.0965        | -0.3938          | 0.7414             | 0.2973          | -258.4546      | -285.2520    | -2.1563         | -2.3140        |
### Framework versions
- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
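
Note that Transformers 4.44.0.dev0 is a development build, so it would need to be installed from source at the matching commit rather than from PyPI. The snippet below is a small sketch, assuming the packages import under their usual names, for checking a local environment against these versions.

```python
# Quick environment check against the versions listed above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.44.0.dev0",  # dev build: requires a source install
    "torch": "2.1.2",
    "datasets": "2.20.0",
    "tokenizers": "0.19.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,  # may carry a local suffix, e.g. "+cu121"
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    status = "OK" if have.startswith(want) else f"mismatch (have {have})"
    print(f"{name}: expected {want} -> {status}")
```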