qwen2.5-0.5b-expo-L2EXPO-EXPERIMENT

This model is a fine-tuned version of hZzy/qwen2.5-0.5b-sft-news-IFT on the hZzy/train_pairwise dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the metrics):

  • Loss: 0.4048
  • Logps: -92.9808
  • Logits: -1.5197
  • Objective: 0.4085
  • DPO Loss: 0.6871
  • Regularize: 0.4085
  • Ranking Simple: 0.5196
  • Ranking Idealized: 0.5888
  • Ranking Idealized Expo: 0.5103
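
The checkpoint loads through the standard transformers text-generation API. A minimal sketch, assuming only the model id above; the prompt is an illustrative placeholder, since the card does not specify a prompt format:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hZzy/qwen2.5-0.5b-expo-L2EXPO-EXPERIMENT"

# Weights are published as F32 safetensors (~494M parameters).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative prompt only; adapt to the intended use case.
inputs = tokenizer("The latest news today:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```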

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments configuration is sketched after the list):

  • learning_rate: 1e-07
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 6
  • gradient_accumulation_steps: 12
  • total_train_batch_size: 288
  • total_eval_batch_size: 24
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 5
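
The effective batch sizes follow directly from the per-device settings: 4 × 6 devices × 12 accumulation steps = 288 for training, and 4 × 6 devices = 24 for evaluation. Below is a minimal sketch of an equivalent transformers TrainingArguments configuration; the output_dir is an illustrative assumption, everything else mirrors the list above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen2.5-0.5b-expo-L2EXPO-EXPERIMENT",  # hypothetical path
    learning_rate=1e-07,
    per_device_train_batch_size=4,   # 4 × 6 devices × 12 steps = 288 effective
    per_device_eval_batch_size=4,    # 4 × 6 devices = 24 effective
    gradient_accumulation_steps=12,
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```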

Training results

| Training Loss | Epoch  | Step | Validation Loss | Logps    | Logits  | Objective | DPO Loss | Regularize | Ranking Simple | Ranking Idealized | Ranking Idealized Expo |
|---------------|--------|------|-----------------|----------|---------|-----------|----------|------------|----------------|-------------------|------------------------|
| 0.4193        | 0.2834 | 50   | 0.4127          | -91.2123 | -1.4175 | 0.4100    | 0.6922   | 0.4100     | 0.5114         | 0.5888            | 0.5103                 |
| 0.4027        | 0.5668 | 100  | 0.4078          | -91.1627 | -1.4426 | 0.4063    | 0.6899   | 0.4063     | 0.5114         | 0.5888            | 0.5103                 |
| 0.3645        | 0.8503 | 150  | 0.4059          | -91.6651 | -1.4648 | 0.4065    | 0.6895   | 0.4065     | 0.5093         | 0.5888            | 0.5103                 |
| 0.3416        | 1.1337 | 200  | 0.4050          | -91.3921 | -1.4831 | 0.4071    | 0.6885   | 0.4071     | 0.5124         | 0.5888            | 0.5103                 |
| 0.33          | 1.4171 | 250  | 0.4063          | -92.3039 | -1.4859 | 0.4085    | 0.6888   | 0.4085     | 0.5103         | 0.5888            | 0.5103                 |
| 0.3193        | 1.7005 | 300  | 0.4041          | -92.1735 | -1.4928 | 0.4069    | 0.6880   | 0.4069     | 0.5134         | 0.5888            | 0.5103                 |
| 0.3108        | 1.9839 | 350  | 0.4044          | -92.4054 | -1.4988 | 0.4070    | 0.6875   | 0.4070     | 0.5134         | 0.5888            | 0.5103                 |
| 0.3048        | 2.2674 | 400  | 0.4050          | -92.6592 | -1.5060 | 0.4083    | 0.6881   | 0.4083     | 0.5134         | 0.5888            | 0.5103                 |
| 0.2719        | 2.5508 | 450  | 0.4046          | -92.6320 | -1.5051 | 0.4084    | 0.6875   | 0.4084     | 0.5186         | 0.5888            | 0.5103                 |
| 0.2722        | 2.8342 | 500  | 0.4044          | -92.6225 | -1.5137 | 0.4081    | 0.6873   | 0.4081     | 0.5176         | 0.5888            | 0.5103                 |
| 0.2796        | 3.1176 | 550  | 0.4041          | -92.7195 | -1.5148 | 0.4077    | 0.6873   | 0.4077     | 0.5196         | 0.5888            | 0.5103                 |
| 0.2553        | 3.4010 | 600  | 0.4045          | -92.8725 | -1.5165 | 0.4083    | 0.6872   | 0.4083     | 0.5186         | 0.5888            | 0.5103                 |
| 0.252         | 3.6845 | 650  | 0.4046          | -92.9877 | -1.5188 | 0.4083    | 0.6871   | 0.4083     | 0.5196         | 0.5888            | 0.5103                 |
| 0.2455        | 3.9679 | 700  | 0.4052          | -93.0832 | -1.5187 | 0.4088    | 0.6873   | 0.4088     | 0.5186         | 0.5888            | 0.5103                 |
| 0.2417        | 4.2513 | 750  | 0.4047          | -92.9650 | -1.5192 | 0.4086    | 0.6872   | 0.4086     | 0.5196         | 0.5888            | 0.5103                 |
| 0.2513        | 4.5347 | 800  | 0.4047          | -92.9578 | -1.5198 | 0.4085    | 0.6871   | 0.4085     | 0.5196         | 0.5888            | 0.5103                 |
| 0.2539        | 4.8181 | 850  | 0.4048          | -92.9807 | -1.5197 | 0.4085    | 0.6871   | 0.4085     | 0.5196         | 0.5888            | 0.5103                 |
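
The DPO Loss column tracks the preference-optimization term of the objective. As a point of reference only, here is a minimal sketch of the standard sigmoid DPO loss; the function name, argument names, and beta value are assumptions for illustration, not taken from the training code:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard sigmoid DPO loss; beta=0.1 is an assumed value."""
    policy_margin = policy_chosen_logps - policy_rejected_logps
    ref_margin = ref_chosen_logps - ref_rejected_logps
    logits = beta * (policy_margin - ref_margin)
    # -log(sigmoid(x)) == softplus(-x), computed this way for numerical stability
    return F.softplus(-logits).mean()
```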

Framework versions

  • Transformers 4.42.0
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1