---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full-magpi-reward-scale-1
  results: []
---

# zephyr-7b-dpo-full-magpi-reward-scale-1

This model is a version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) fine-tuned with DPO (Direct Preference Optimization) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Rewards/chosen: -2.4218
- Rewards/rejected: -70.3140
- Rewards/accuracies: 1.0
- Rewards/margins: 67.8922
- Logps/rejected: -7672.1890
- Logps/chosen: -609.1660
- Logits/rejected: 2.5753
- Logits/chosen: -0.2188

## Model description

More information needed

## Intended uses & limitations

More information needed. A hedged inference sketch appears at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction of how they map onto TRL's trainer API appears at the end of this card):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.0055        | 0.1420 | 50   | 0.0031          | -0.8850        | -42.4340         | 0.9960             | 41.5490         | -4884.1895     | -455.4866    | -2.5280         | -2.9176       |
| 0.0014        | 0.2841 | 100  | 0.0003          | -1.7467        | -57.2050         | 1.0                | 55.4583         | -6361.2881     | -541.6496    | -0.1436         | -2.5818       |
| 0.001         | 0.4261 | 150  | 0.0002          | -2.4059        | -66.6788         | 1.0                | 64.2728         | -7308.6660     | -607.5745    | 1.9422          | -1.9097       |
| 0.0018        | 0.5682 | 200  | 0.0002          | -2.1797        | -67.6111         | 1.0                | 65.4314         | -7401.8965     | -584.9550    | 2.3189          | -1.0395       |
| 0.0009        | 0.7102 | 250  | 0.0001          | -2.4169        | -67.5787         | 1.0                | 65.1618         | -7398.6553     | -608.6732    | 2.5585          | -0.3354       |
| 0.0009        | 0.8523 | 300  | 0.0001          | -2.4125        | -70.2443         | 1.0                | 67.8319         | -7665.2217     | -608.2272    | 2.5751          | -0.2549       |
| 0.0024        | 0.9943 | 350  | 0.0001          | -2.4218        | -70.3140         | 1.0                | 67.8922         | -7672.1890     | -609.1660    | 2.5753          | -0.2188       |

### Framework versions

- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
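
## Interpreting the reward metrics

The reward columns above follow TRL's DPO metric conventions (stated here as general background; the card itself does not define them): `Rewards/chosen` and `Rewards/rejected` are the mean implicit rewards of the chosen and rejected responses, `Rewards/margins` is their difference, and `Rewards/accuracies` is the fraction of preference pairs where the chosen reward exceeds the rejected one. The implicit reward is the scaled log-probability ratio between the trained policy and the frozen reference policy (here the SFT base model); the DPO temperature β used for this run is not recorded in the card:

```latex
% Implicit DPO reward of response y to prompt x under the trained policy
% \pi_\theta, measured against the frozen reference policy \pi_{\mathrm{ref}}:
r_\theta(x, y) = \beta \left[ \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right]
```

Read this way, the final checkpoint's accuracy of 1.0 and margin of roughly 68 mean the policy assigns every chosen response a far higher log-probability ratio than its rejected counterpart on this evaluation set.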
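
## How to use (sketch)

The card does not document usage, so the following is a minimal inference sketch. It assumes the checkpoint is available under a Hub ID or local path of your choosing (the model string below is a hypothetical placeholder) and that it inherits the Zephyr chat template from its SFT base model:

```python
# Minimal inference sketch; "zephyr-7b-dpo-full-magpi-reward-scale-1" is a
# hypothetical model path/ID, not a confirmed Hub location.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="zephyr-7b-dpo-full-magpi-reward-scale-1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain Direct Preference Optimization in one sentence."},
]
# Render the conversation with the tokenizer's chat template before generating.
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```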
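
## Reproducing the training setup (sketch)

The hyperparameters above map naturally onto TRL's `DPOTrainer`. The sketch below is an assumption-laden reconstruction, not the authors' script: it presumes a recent TRL release that provides `DPOConfig`, a preference dataset with `prompt`/`chosen`/`rejected` columns (the actual dataset is not documented, so the dataset ID below is a placeholder), and launching via a distributed runner to match the original 8-GPU run:

```python
# Hedged reconstruction of the card's hyperparameters with TRL's DPOTrainer.
# "org/preference-dataset" is a hypothetical placeholder dataset ID.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "alignment-handbook/zephyr-7b-sft-full"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

train_dataset = load_dataset("org/preference-dataset", split="train")

args = DPOConfig(
    output_dir="zephyr-7b-dpo-full-magpi-reward-scale-1",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # x 8 devices x 2 accumulation steps = 128 effective
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,          # the reference policy defaults to a frozen copy of this model
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # renamed to processing_class in newer TRL releases
)
trainer.train()
```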