---
license: apache-2.0
base_model: hZzy/qwen2.5-0.5b-sft-news-IFT
tags:
- alignment-handbook
- ndcg
- trl
- expo
- generated_from_trainer
datasets:
- hZzy/train_pairwise
model-index:
- name: qwen2.5-0.5b-expo-L2EXPO-ES-0.01
  results: []
---

[Visualize in Weights & Biases](https://wandb.ai/zhiyuzha-university-of-florida/huggingface/runs/8q1vl3fv)

# qwen2.5-0.5b-expo-L2EXPO-ES-0.01

This model is a fine-tuned version of [hZzy/qwen2.5-0.5b-sft-news-IFT](https://huggingface.co/hZzy/qwen2.5-0.5b-sft-news-IFT) on the hZzy/train_pairwise dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4116
- Logps: -92.3444
- Logits: -1.6667
- Objective: 0.4097
- Dpo Loss: 0.6906
- Regularize: 0.4097
- Ranking Simple: 0.5290
- Ranking Idealized: 0.8732
- Ranking Idealized Expo: 0.5321
- Wo Beta: 17.6306

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 12
- total_train_batch_size: 144
- total_eval_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Logps     | Logits  | Objective | Dpo Loss | Regularize | Ranking Simple | Ranking Idealized | Ranking Idealized Expo | Wo Beta |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-------:|:---------:|:--------:|:----------:|:--------------:|:-----------------:|:----------------------:|:-------:|
| 0.4144        | 0.1417 | 50   | 0.4116          | -92.3444  | -1.6667 | 0.4097    | 0.6906   | 0.4097     | 0.5290         | 0.8732            | 0.5321                 | 17.6306 |
| 0.394         | 0.2834 | 100  | 0.4078          | -115.8757 | -2.0312 | 0.4074    | 0.6863   | 0.4074     | 0.5399         | 0.8732            | 0.5321                 | 22.1058 |
| 0.3504        | 0.4251 | 150  | 0.4044          | -123.1670 | -2.0505 | 0.4018    | 0.6803   | 0.4018     | 0.5719         | 0.8732            | 0.5321                 | 23.5660 |
| 0.3135        | 0.5668 | 200  | 0.4006          | -121.6031 | -2.1409 | 0.3974    | 0.6781   | 0.3974     | 0.5621         | 0.8732            | 0.5321                 | 23.0977 |
| 0.2807        | 0.7085 | 250  | 0.4043          | -122.1639 | -2.3711 | 0.4010    | 0.6790   | 0.4010     | 0.5600         | 0.8732            | 0.5321                 | 23.6688 |
| 0.2532        | 0.8503 | 300  | 0.3975          | -123.3866 | -2.2621 | 0.3953    | 0.6769   | 0.3953     | 0.5740         | 0.8732            | 0.5321                 | 23.6726 |

### Framework versions

- Transformers 4.42.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
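
## How to use

The card does not ship a usage snippet, so here is a minimal sketch of loading the checkpoint as a standard causal LM with `transformers`. The repo id below is an assumption inferred from the card title and the `hZzy` namespace of the base model; adjust it, along with device and dtype, for your setup.

```python
# Minimal sketch: load the checkpoint as a plain causal LM via transformers.
# The repo id is assumed from the card title; verify it before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hZzy/qwen2.5-0.5b-expo-L2EXPO-ES-0.01"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Simple greedy generation as a smoke test.
inputs = tokenizer("Write a one-sentence news summary:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```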