---
license: mit
datasets:
- tatsu-lab/alpaca
base_model:
- meta-llama/Llama-2-7b
---

This model was aligned on the AlpacaFarm dataset using the Contrastive Preference Optimization (CPO) loss, starting from the Supervised Fine-Tuned (SFT) version of LLaMA 2 7B. Training ran for a single epoch. For more information on the dataset, see the AlpacaFarm repository (https://github.com/tatsu-lab/alpaca_farm).
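
To make the alignment objective concrete, below is a minimal, illustrative sketch of a per-example CPO loss as described in the CPO paper (Xu et al., 2024): a reference-free preference term (unlike DPO, no reference-model log-probs are needed) plus an NLL regularizer on the preferred response. The function name, `beta` value, and scalar log-probability inputs are illustrative assumptions, not this model's training code.

```python
import math


def cpo_loss(logp_chosen: float, logp_rejected: float, beta: float = 0.1) -> float:
    """Illustrative per-example CPO loss.

    logp_chosen / logp_rejected: policy log-probabilities of the
    preferred and dispreferred responses (hyperparameters assumed).
    """
    margin = beta * (logp_chosen - logp_rejected)
    # Preference term: -log(sigmoid(margin)), written as log1p(exp(-x))
    # for numerical stability. Reference-free, unlike DPO.
    pref_term = math.log1p(math.exp(-margin))
    # Behavior-cloning (NLL) term on the preferred response.
    nll_term = -logp_chosen
    return pref_term + nll_term
```

As the policy assigns a larger margin to the preferred response, the preference term shrinks toward zero, while the NLL term keeps the policy close to the preferred completions.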