Warning: Training went fine until about 2.4k steps in, then quality declined and the model started glitching. Either overcooked or still in its experimental phase.
Uploaded model
- Developed by: Lambent
- License: apache-2.0
- Finetuned from model: Lambent/danube3.1-4b-Reasoning-Light
This Llama-style model was trained 2x faster with Unsloth and Hugging Face's TRL library.
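
If you want to try the checkpoint despite the warning above, here is a minimal loading sketch using the Hugging Face transformers library. It assumes the tokenizer ships a chat template inherited from the h2oai/h2o-danube3.1-4b-chat base model, and `device_map="auto"` additionally requires the accelerate package.

```python
# Minimal usage sketch (assumes a chat template is present and accelerate is installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Lambent/danube3.1-4b-Reasoning-1Epoch"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place weights on GPU if available (needs accelerate)
)

# Example prompt; the content is arbitrary and only illustrates the chat format.
messages = [{"role": "user", "content": "Briefly explain why the sky is blue."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```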
Model tree for Lambent/danube3.1-4b-Reasoning-1Epoch
- Base model: h2oai/h2o-danube3.1-4b-chat
- Finetuned from: Lambent/danube3.1-4b-Reasoning-Light