---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- alignment-handbook
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full-prometheus_consistent-reward-scale-1-rpo
  results: []
---
# zephyr-7b-dpo-full-prometheus_consistent-reward-scale-1-rpo
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified preference dataset. It achieves the following results on the evaluation set:
- Loss: 0.0341
- Rewards/chosen: -0.0738
- Rewards/rejected: -0.3717
- Rewards/accuracies: 0.7414
- Rewards/margins: 0.2979
- Logps/rejected: -256.2462
- Logps/chosen: -282.9861
- Logits/rejected: -2.4523
- Logits/chosen: -2.5611
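
These are the metrics logged by `trl`'s `DPOTrainer`: the reward columns are the implicit DPO rewards (β-scaled log-probability ratios between the policy and the frozen reference model, averaged over the evaluation set), and `Rewards/margins` is the mean gap between the chosen and rejected rewards. As a quick start, here is a minimal loading sketch; the Hub repo id is assumed from this card's model name (prepend the owning namespace if the model lives under one), and the generation settings are illustrative rather than the ones used in evaluation:

```python
# Minimal inference sketch; the repo id below is assumed from this card's name.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="zephyr-7b-dpo-full-prometheus_consistent-reward-scale-1-rpo",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Zephyr models are chat models; the pipeline applies the chat template.
messages = [{"role": "user", "content": "Explain DPO in one paragraph."}]
out = pipe(messages, max_new_tokens=256, do_sample=False)
print(out[0]["generated_text"][-1]["content"])  # last message = assistant reply
```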
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
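
Per the tags and model name, training used DPO via `trl` (the `-rpo` suffix suggests an RPO-style regularized variant, though the exact loss is not documented in this card). For reference, the standard DPO objective (Rafailov et al., 2023) is

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\log\sigma\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}\right)\right]
$$

where \\(y_w\\) and \\(y_l\\) are the chosen and rejected completions for prompt \\(x\\), \\(\pi_{\mathrm{ref}}\\) is the frozen SFT model, and \\(\beta\\) scales the implicit rewards reported above.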
### Training hyperparameters
The following hyperparameters were used during training (the sketch after this list shows how they map onto a `trl` DPO setup):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
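
A minimal sketch of this configuration using `trl`'s `DPOTrainer`; the dataset id and output path are placeholders, and the actual training script is not part of this card:

```python
# Hypothetical reconstruction of the hyperparameters above using trl.
# Launch across 8 GPUs (e.g. with `accelerate launch`) to match
# total_train_batch_size = 8 (per device) * 8 (GPUs) * 2 (accumulation) = 128.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder: a preference dataset with "prompt"/"chosen"/"rejected" columns.
dataset = load_dataset("org/preference-dataset", split="train")

args = DPOConfig(
    output_dir="zephyr-7b-dpo-full-prometheus_consistent-reward-scale-1-rpo",
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,          # ref_model defaults to a frozen copy of the policy
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,  # `processing_class=` in newer trl releases
)
trainer.train()
```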
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.048         | 0.1143 | 50   | 0.0428          | 0.0621         | -0.0550          | 0.7026             | 0.1170          | -224.5715      | -269.3974    | -2.4545         | -2.5523       |
| 0.04          | 0.2286 | 100  | 0.0385          | -0.0777        | -0.3186          | 0.75               | 0.2409          | -250.9367      | -283.3715    | -1.9565         | -2.1306       |
| 0.0363        | 0.3429 | 150  | 0.0371          | -0.2052        | -0.4595          | 0.7543             | 0.2543          | -265.0228      | -296.1211    | -2.1955         | -2.3441       |
| 0.0373        | 0.4571 | 200  | 0.0353          | -0.0452        | -0.3269          | 0.7716             | 0.2817          | -251.7630      | -280.1239    | -2.3848         | -2.4903       |
| 0.0374        | 0.5714 | 250  | 0.0344          | -0.0802        | -0.3463          | 0.75               | 0.2662          | -253.7082      | -283.6198    | -2.4307         | -2.5245       |
| 0.0346        | 0.6857 | 300  | 0.0342          | -0.0372        | -0.3195          | 0.7457             | 0.2823          | -251.0285      | -279.3270    | -2.4797         | -2.5812       |
| 0.0375        | 0.8    | 350  | 0.0342          | -0.0783        | -0.3746          | 0.7414             | 0.2963          | -256.5389      | -283.4324    | -2.4474         | -2.5561       |
| 0.0367        | 0.9143 | 400  | 0.0341          | -0.0738        | -0.3717          | 0.7414             | 0.2979          | -256.2462      | -282.9861    | -2.4523         | -2.5611       |
### Framework versions
- Transformers 4.44.0.dev0
- PyTorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1