This model is a fine-tuned version of anthracite-org/magnum-v2-4b on the combined_new_22k.json dataset. Evaluation-set results over the course of training are reported in the training results table below.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
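Since the usage sections above are still unfilled, here is a minimal loading-and-generation sketch with the transformers library. It assumes the repository ships standard transformers weights plus a tokenizer chat template; the repository id below is a placeholder, not this model's actual id.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id -- substitute this model's actual Hugging Face repository id.
model_id = "your-username/your-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; drop for pure-CPU loading
)

# apply_chat_template uses whatever chat format ships with the tokenizer.
messages = [{"role": "user", "content": "Write a short greeting."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```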
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Logps/chosen | Rewards/rejected | Logps/rejected | Rewards/margins | KL     |
|---------------|--------|------|-----------------|----------------|--------------|------------------|----------------|-----------------|--------|
| 0.5042        | 0.2788 | 16   | 0.5038          | 0.0004         | -11.2884     | -0.0004          | -10.6529       | 0.0008          | 0.0022 |
| 0.5037        | 0.5575 | 32   | 0.5033          | 0.0006         | -11.2865     | -0.0008          | -10.6565       | 0.0014          | 0.0013 |
| 0.5035        | 0.8363 | 48   | 0.5041          | 0.0003         | -11.2899     | -0.0006          | -10.6546       | 0.0008          | 0.0016 |
| 0.5037        | 1.1151 | 64   | 0.5035          | 0.0005         | -11.2872     | -0.0005          | -10.6540       | 0.0011          | 0.0017 |
| 0.5036        | 1.3938 | 80   | 0.5036          | 0.0005         | -11.2874     | -0.0005          | -10.6535       | 0.0010          | 0.0010 |
| 0.5032        | 1.6726 | 96   | 0.5035          | 0.0006         | -11.2867     | -0.0005          | -10.6541       | 0.0011          | 0.0012 |
| 0.5036        | 1.9514 | 112  | 0.5037          | 0.0006         | -11.2869     | -0.0006          | -10.6546       | 0.0011          | 0.0009 |
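The reward, log-probability, and KL columns above match the metrics that trl's KTOTrainer logs, which suggests an unpaired preference-tuning (KTO) run. The sketch below shows what such a setup could look like; the trainer choice, dataset path, and every hyperparameter value are assumptions, since the card's hyperparameter list is missing.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base_id = "anthracite-org/magnum-v2-4b"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# KTO expects unpaired preference data: a "prompt", a "completion", and a
# boolean "label" marking the completion as desirable or undesirable.
dataset = load_dataset("json", data_files="combined_new_22k.json", split="train")

# Illustrative values only; the original run's hyperparameters are unknown.
args = KTOConfig(
    output_dir="magnum-v2-4b-kto",
    num_train_epochs=2,   # the results table spans roughly two epochs
    logging_steps=16,     # the table reports metrics every 16 steps
)

trainer = KTOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
)
trainer.train()
```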
Base model: nvidia/Llama-3.1-Minitron-4B-Width-Base