---
base_model: unsloth/Qwen2-7B
library_name: peft
license: apache-2.0
tags:
- unsloth
- generated_from_trainer
model-index:
- name: Qwen2-7B_metamath_default
  results: []
---
# Qwen2-7B_metamath_default
This model is a fine-tuned version of [unsloth/Qwen2-7B](https://huggingface.co/unsloth/Qwen2-7B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1
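
For context, here is a minimal sketch of how these values map onto `transformers.TrainingArguments`. It is an illustration only: the `output_dir` and the `adamw_torch` optimizer name are assumptions, since the card does not record how the trainer was invoked; all other values are copied from the list above.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments configuration matching the hyperparameters above.
# output_dir and optim are assumptions; the remaining values come from the card.
training_args = TrainingArguments(
    output_dir="Qwen2-7B_metamath_default",
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 8 x 8 = total train batch size of 64
    seed=42,
    optim="adamw_torch",             # Adam with betas=(0.9, 0.999), epsilon=1e-08
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    num_train_epochs=1,
)
```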
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1753 | 0.0211 | 13 | 0.1875 |
| 0.2034 | 0.0421 | 26 | 0.2488 |
| 0.2604 | 0.0632 | 39 | 0.2850 |
| 0.2823 | 0.0842 | 52 | 0.3067 |
| 0.2953 | 0.1053 | 65 | 0.3213 |
| 0.3151 | 0.1264 | 78 | 0.3321 |
| 0.3224 | 0.1474 | 91 | 0.3354 |
| 0.3144 | 0.1685 | 104 | 0.3366 |
| 0.3291 | 0.1896 | 117 | 0.3428 |
| 0.3233 | 0.2106 | 130 | 0.3447 |
| 0.3342 | 0.2317 | 143 | 0.3455 |
| 0.3292 | 0.2527 | 156 | 0.3393 |
| 0.3322 | 0.2738 | 169 | 0.3418 |
| 0.3225 | 0.2949 | 182 | 0.3334 |
| 0.3306 | 0.3159 | 195 | 0.3326 |
| 0.315  | 0.3370 | 208 | 0.3303 |
| 0.3135 | 0.3580 | 221 | 0.3307 |
| 0.3131 | 0.3791 | 234 | 0.3264 |
| 0.3114 | 0.4002 | 247 | 0.3211 |
| 0.3065 | 0.4212 | 260 | 0.3160 |
| 0.315  | 0.4423 | 273 | 0.3122 |
| 0.296  | 0.4633 | 286 | 0.3074 |
| 0.306  | 0.4844 | 299 | 0.2980 |
| 0.2821 | 0.5055 | 312 | 0.2968 |
| 0.2861 | 0.5265 | 325 | 0.2924 |
| 0.2717 | 0.5476 | 338 | 0.2850 |
| 0.2704 | 0.5687 | 351 | 0.2784 |
| 0.2693 | 0.5897 | 364 | 0.2716 |
| 0.2612 | 0.6108 | 377 | 0.2652 |
| 0.2512 | 0.6318 | 390 | 0.2601 |
| 0.2537 | 0.6529 | 403 | 0.2543 |
| 0.2543 | 0.6740 | 416 | 0.2498 |
| 0.2443 | 0.6950 | 429 | 0.2463 |
| 0.2441 | 0.7161 | 442 | 0.2407 |
| 0.2286 | 0.7371 | 455 | 0.2362 |
| 0.2298 | 0.7582 | 468 | 0.2327 |
| 0.2304 | 0.7793 | 481 | 0.2300 |
| 0.2266 | 0.8003 | 494 | 0.2270 |
| 0.2127 | 0.8214 | 507 | 0.2231 |
| 0.2193 | 0.8424 | 520 | 0.2205 |
| 0.2136 | 0.8635 | 533 | 0.2190 |
| 0.2099 | 0.8846 | 546 | 0.2176 |
| 0.2042 | 0.9056 | 559 | 0.2165 |
| 0.2077 | 0.9267 | 572 | 0.2158 |
| 0.2136 | 0.9478 | 585 | 0.2152 |
| 0.2131 | 0.9688 | 598 | 0.2150 |
| 0.2124 | 0.9899 | 611 | 0.2149 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
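
Since this repository contains a PEFT adapter rather than full model weights, a minimal usage sketch is shown below: it attaches the adapter to the base model named in the card metadata. The adapter path is a placeholder; substitute the actual repository id or local directory where the adapter is stored.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model referenced in the card metadata.
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-7B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B")

# Attach the fine-tuned PEFT adapter; the path below is a placeholder.
model = PeftModel.from_pretrained(base, "path/to/Qwen2-7B_metamath_default")

inputs = tokenizer("What is 12 * 13?", return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```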