sfulay/zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo
Tags: Safetensors · mistral · trl · dpo · alignment-handbook · Generated from Trainer
License: apache-2.0
Commit History for README.md
Model save · 9b16a50 (verified) · sfulay committed on Sep 2, 2024
Model save · 4d79451 (verified) · sfulay committed on Sep 2, 2024
Model save · 0a3e251 (verified) · sfulay committed on Sep 2, 2024
Model save · 44bba64 (verified) · sfulay committed on Aug 28, 2024