Self-Exploring Language Models: Active Preference Elicitation for Online Alignment

DPO-Zephyr-7B

This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta using synthetic data based on the HuggingFaceH4/ultrafeedback_binarized dataset.

Model description

  • Model type: A 7B parameter (7.24B, BF16 safetensors) Zephyr-based Self-Exploring Language Model (SELM).
  • License: MIT
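
The published weights load with the standard transformers text-generation pipeline. Below is a minimal usage sketch, assuming the Zephyr chat template shipped with the tokenizer; the prompt and sampling settings are illustrative, not from the model card.

```python
# Illustrative generation sketch: load the BF16 weights and apply the
# tokenizer's chat template before generating.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="ZhangShenao/DPO-Zephyr-7B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain DPO in one sentence."},
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
out = pipe(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(out[0]["generated_text"])
```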

Results

Model                    AlpacaEval 2.0 (LC WR, %)    MT-Bench (average)
SELM-Zephyr-7B-iter-3    24.00                        7.48
SELM-Zephyr-7B-iter-2    23.40                        7.72
SELM-Zephyr-7B-iter-1    20.28                        7.42
DPO-Zephyr-7B            14.45                        7.28

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • alpha: 0.001
  • beta: 0.01
  • train_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 256
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • num_epochs: 1
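
These settings map onto a TRL-style DPO run roughly as sketched below. This is a minimal sketch under assumptions, not the authors' training script: the dataset flattening and the `DPOTrainer` call are guesses at a typical setup, and `alpha` (the SELM optimism coefficient from the paper) has no counterpart in plain DPO, so only `beta` appears. The default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-8.

```python
# Illustrative mapping of the listed hyperparameters onto a TRL DPO run.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "HuggingFaceH4/mistral-7b-sft-beta"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# ultrafeedback_binarized stores chosen/rejected as message lists;
# DPOTrainer expects plain prompt/chosen/rejected strings.
raw = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
train_dataset = raw.map(
    lambda ex: {
        "prompt": ex["prompt"],
        "chosen": ex["chosen"][-1]["content"],     # final assistant turn
        "rejected": ex["rejected"][-1]["content"],
    },
    remove_columns=raw.column_names,
)

args = TrainingArguments(
    output_dir="DPO-Zephyr-7B",
    per_device_train_batch_size=8,   # train_batch_size above
    gradient_accumulation_steps=4,   # 8 devices x 8 x 4 = 256 total batch
    num_train_epochs=1,
    seed=42,
    bf16=True,                       # published weights are BF16
    remove_unused_columns=False,     # DPOTrainer builds its own batches
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                  # TRL builds a frozen reference copy
    args=args,
    beta=0.01,                       # DPO KL-regularization strength
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```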

Framework versions

  • Transformers 4.40.2
  • PyTorch 2.1.2+cu121
  • Datasets 2.14.6
  • Tokenizers 0.19.1
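
A quick way to check a local environment against these pins (illustrative; nearby versions may also work):

```python
# Compare installed versions against the ones listed on the model card.
import transformers, torch, datasets, tokenizers

assert transformers.__version__ == "4.40.2", transformers.__version__
assert torch.__version__.startswith("2.1.2"), torch.__version__
assert datasets.__version__ == "2.14.6", datasets.__version__
assert tokenizers.__version__ == "0.19.1", tokenizers.__version__
print("environment matches the model card")
```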