xwz-xmu's Collections
preferences
Updated about 1 month ago
HumanLLMs/Human-Like-DPO-Dataset — Viewer • Updated Jan 12 • 10.9k • 3.28k • 193