mimicheng/zephyr-7b-sft-qlora-1ep-25jan

Tags: PEFT · Safetensors · trl · sft · mixtral · dpo-experiment · 4-bit precision · bitsandbytes · Generated from Trainer
Dataset: HuggingFaceH4/ultrachat_200k
License: apache-2.0
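The tags describe a QLoRA adapter: PEFT weights trained with TRL's SFT pipeline on a 4-bit (bitsandbytes) quantized base model. A minimal loading sketch follows; note that the page does not name the base model, so the identifier mistralai/Mistral-7B-v0.1 below is only a placeholder assumption, as is the NF4/bfloat16 quantization setup.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import PeftModel

    # Placeholder base model: the card does not state which checkpoint the adapter was trained on.
    base_id = "mistralai/Mistral-7B-v0.1"
    adapter_id = "mimicheng/zephyr-7b-sft-qlora-1ep-25jan"

    # 4-bit quantization matching the "4-bit precision" / "bitsandbytes" tags
    # (NF4 and bfloat16 compute dtype are assumptions, not stated on the card).
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    # Assumes the tokenizer files are bundled with the adapter repo.
    tokenizer = AutoTokenizer.from_pretrained(adapter_id)
    base_model = AutoModelForCausalLM.from_pretrained(
        base_id, quantization_config=bnb_config, device_map="auto"
    )
    model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the QLoRA adapter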
zephyr-7b-sft-qlora-1ep-25jan/train_results.json
Commit 2af5144 ("Model save" by mimicheng, verified, 10 months ago) · 197 Bytes
{
  "epoch": 1.0,
  "train_loss": 0.29661884888175544,
  "train_runtime": 74749.5276,
  "train_samples": 207865,
  "train_samples_per_second": 1.865,
  "train_steps_per_second": 0.233
}
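Per the "Generated from Trainer" tag, this file is the standard training summary that the Transformers Trainer writes via save_metrics at the end of a run. A small, self-contained sketch of reading and interpreting it (file name and path assumed to match the repo layout above):

    import json

    # Load the training summary produced by the Trainer.
    with open("train_results.json") as f:
        metrics = json.load(f)

    print(metrics["train_loss"])            # final average training loss, ~0.297
    hours = metrics["train_runtime"] / 3600  # 74749.5 s is roughly 20.8 hours
    print(f"Training ran for about {hours:.1f} hours over {metrics['epoch']:.0f} epoch")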