chujiezheng committed
Commit 333202c
1 Parent(s): 330fac2

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -6,6 +6,6 @@ license: llama3
 
 # Llama-3-Instruct-8B-SimPO-ExPO
 
-The extrapolated (ExPO) model based on [`princeton-nlp/Mistral-7B-Instruct-SimPO`](https://huggingface.co/princeton-nlp/Mistral-7B-Instruct-SimPO) and [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper.
+The extrapolated (ExPO) model based on [`princeton-nlp/Llama-3-Instruct-8B-SimPO`](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO) and [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper.
 
 Specifically, we obtain this model by extrapolating **(alpha = 0.3)** from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preference.
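The extrapolation step described in the README can be sketched as follows. This is a minimal illustration, assuming the ExPO update has the form theta_expo = theta_aligned + alpha * (theta_aligned - theta_sft) from the cited paper; the toy scalar weights and the function name `expo_extrapolate` are hypothetical, not part of any released code.

```python
def expo_extrapolate(sft_weights, aligned_weights, alpha=0.3):
    """Extrapolate each parameter beyond the aligned (DPO/RLHF/SimPO)
    checkpoint, moving further away from the SFT checkpoint.

    Assumed update rule (hedged, per the ExPO paper):
        theta_expo = theta_aligned + alpha * (theta_aligned - theta_sft)
    """
    return {
        name: aligned + alpha * (aligned - sft_weights[name])
        for name, aligned in aligned_weights.items()
    }

# Toy example with scalar "weights" standing in for real tensors.
sft = {"w": 1.0}
aligned = {"w": 2.0}
print(expo_extrapolate(sft, aligned, alpha=0.3))  # {'w': 2.3}
```

In practice the same per-parameter arithmetic would be applied to the full state dicts of the SFT and aligned checkpoints, with alpha = 0.3 as stated above.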