plaguss / zephyr-7b-lora-adapter-dpo-dibt-v0
Tags: PEFT · Safetensors · mistral · choo-choo · trl · dpo · Generated from Trainer · 4-bit precision · bitsandbytes
License: apache-2.0
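
The tags above describe a PEFT (LoRA) adapter for a Mistral-based Zephyr model, trained with TRL's DPO setup and quantized to 4-bit via bitsandbytes. Below is a minimal loading sketch, assuming the adapter's adapter_config.json records the base checkpoint and that this repo also hosts tokenizer files (neither is confirmed by this page):

import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

# Adapter repo id taken from this page.
adapter_id = "plaguss/zephyr-7b-lora-adapter-dpo-dibt-v0"

# 4-bit quantization config, matching the "4-bit precision" / bitsandbytes tags.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# The base model referenced in adapter_config.json is fetched automatically,
# then the LoRA adapter weights are applied on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Assumption: the adapter repo includes tokenizer files; if not, load the
# tokenizer from the base model id instead.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)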
zephyr-7b-lora-adapter-dpo-dibt-v0 (revision 72788aa) / train_results.json
Latest commit: "Model save" by plaguss (HF staff), 3e6253c (verified), 8 months ago
{
    "epoch": 2.0,
    "train_loss": 0.1882365908726905,
    "train_runtime": 5068.0756,
    "train_samples_per_second": 0.65,
    "train_steps_per_second": 0.041
}
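
This is the train_results.json that the Trainer writes when training finishes (see the "Generated from Trainer" tag). A minimal sketch for inspecting it, assuming a local copy of the file (hypothetical path "train_results.json"):

import json

# Assumption: the file has been downloaded from this repo to the working directory.
with open("train_results.json") as f:
    results = json.load(f)

# Back-of-the-envelope totals implied by the logged rates:
# runtime (s) * samples/s and runtime (s) * steps/s.
approx_samples = results["train_runtime"] * results["train_samples_per_second"]
approx_steps = results["train_runtime"] * results["train_steps_per_second"]

print(f"final train loss   : {results['train_loss']:.4f}")
print(f"epochs             : {results['epoch']}")
print(f"runtime (s)        : {results['train_runtime']:.1f}")
print(f"~samples processed : {approx_samples:.0f}")
print(f"~optimizer steps   : {approx_steps:.0f}")

With the values shown above, this works out to roughly 3,300 samples and about 210 optimizer steps over the 2 epochs.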