rAIfle/Nous-Hermes-2-Mixtral-8x7B-DPO-exl2-rpcal
Tags: English, Mixtral, instruct, finetune, chatml, DPO, RLHF, gpt4, synthetic data, distillation
License: apache-2.0
Branch: main
Commit History
Upload measurement.json with huggingface_hub
fc338da (verified), committed by rAIfle on Jan 16

Update README.md
065374e (verified), committed by rAIfle on Jan 16

Create README.md
57a1e80 (verified), committed by rAIfle on Jan 16

initial commit
86ef480 (verified), committed by rAIfle on Jan 16