# llama-7b_oasst1_l0.0002_32-16-8-4-2
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset (per the model name). It achieves the following results on the evaluation set:

- Loss: 2.4365
## Model description

More information needed
## Intended uses & limitations

More information needed
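The card ships no usage snippet, so the following is a minimal sketch of loading the adapter for inference with PEFT. The repository id is taken from this card's title; the prompt template is an assumption (a format common among oasst1 fine-tunes), not something the card specifies.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "alexander-hm/llama-7b_oasst1_l0.0002_32-16-8-4-2"

# Downloads huggyllama/llama-7b (recorded in the adapter config)
# and applies the adapter weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# Assumed prompt template; adjust to whatever format was used during fine-tuning.
prompt = "### Human: Explain parameter-efficient fine-tuning.### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```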
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for how they map onto the Trainer API):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 10000
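The training script itself is not included in this card, so as an illustrative sketch, here is how these settings would map onto `transformers.TrainingArguments`; the `output_dir` name is an assumption.

```python
from transformers import TrainingArguments

# Sketch reconstructing the hyperparameters listed above; treat as
# illustrative only, since the actual training script is not in this card.
args = TrainingArguments(
    output_dir="llama-7b_oasst1_l0.0002_32-16-8-4-2",  # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=0,
    gradient_accumulation_steps=16,  # 1 device x 1 x 16 = total train batch size of 16
    lr_scheduler_type="constant",
    warmup_ratio=0.03,               # as listed, though a plain constant schedule applies no warmup
    max_steps=10000,
    optim="adamw_torch",             # betas=(0.9, 0.999) and eps=1e-8 are the Adam defaults
)
```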
### Training results
| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.5015        | 0.0018  | 1    | 1.6467          |
| 1.5031        | 0.3392  | 187  | 1.3133          |
| 1.1305        | 0.6783  | 374  | 1.3024          |
| 1.2813        | 1.0175  | 561  | 1.3088          |
| 1.1541        | 1.3566  | 748  | 1.3333          |
| 1.0665        | 1.6958  | 935  | 1.3310          |
| 0.6245        | 2.0349  | 1122 | 1.4157          |
| 0.808         | 2.3741  | 1309 | 1.4323          |
| 0.7045        | 2.7132  | 1496 | 1.5097          |
| 0.4423        | 3.0524  | 1683 | 1.6009          |
| 0.4282        | 3.3915  | 1870 | 1.5678          |
| 0.6079        | 3.7307  | 2057 | 1.6937          |
| 0.2882        | 4.0698  | 2244 | 1.8043          |
| 0.2995        | 4.4090  | 2431 | 1.7991          |
| 0.4094        | 4.7481  | 2618 | 1.7384          |
| 0.2117        | 5.0873  | 2805 | 1.9782          |
| 0.2571        | 5.4264  | 2992 | 1.9985          |
| 0.3058        | 5.7656  | 3179 | 1.8565          |
| 0.154         | 6.1047  | 3366 | 2.0782          |
| 0.2155        | 6.4439  | 3553 | 2.1139          |
| 0.1508        | 6.7830  | 3740 | 2.0258          |
| 0.0964        | 7.1222  | 3927 | 2.1266          |
| 0.1315        | 7.4613  | 4114 | 2.1538          |
| 0.1598        | 7.8005  | 4301 | 2.1381          |
| 0.1106        | 8.1397  | 4488 | 2.1491          |
| 0.0935        | 8.4788  | 4675 | 2.2179          |
| 0.1957        | 8.8180  | 4862 | 2.1993          |
| 0.1017        | 9.1571  | 5049 | 2.1894          |
| 0.1121        | 9.4963  | 5236 | 2.2152          |
| 0.1047        | 9.8354  | 5423 | 2.2572          |
| 0.1359        | 10.1746 | 5610 | 2.2288          |
| 0.0988        | 10.5137 | 5797 | 2.2110          |
| 0.0964        | 10.8529 | 5984 | 2.2737          |
| 0.0723        | 11.1920 | 6171 | 2.3120          |
| 0.1423        | 11.5312 | 6358 | 2.2633          |
| 0.0945        | 11.8703 | 6545 | 2.2472          |
| 0.0719        | 12.2095 | 6732 | 2.3867          |
| 0.0735        | 12.5486 | 6919 | 2.3091          |
| 0.1106        | 12.8878 | 7106 | 2.2937          |
| 0.0695        | 13.2269 | 7293 | 2.3760          |
| 0.078         | 13.5661 | 7480 | 2.3706          |
| 0.2388        | 13.9052 | 7667 | 2.2813          |
| 0.0911        | 14.2444 | 7854 | 2.3442          |
| 0.0833        | 14.5835 | 8041 | 2.3908          |
| 0.0894        | 14.9227 | 8228 | 2.3312          |
| 0.1405        | 15.2618 | 8415 | 2.2996          |
| 0.0967        | 15.6010 | 8602 | 2.3877          |
| 0.0763        | 15.9401 | 8789 | 2.3984          |
| 0.063         | 16.2793 | 8976 | 2.3779          |
| 0.0998        | 16.6185 | 9163 | 2.3717          |
| 0.0938        | 16.9576 | 9350 | 2.3968          |
| 0.063         | 17.2968 | 9537 | 2.4133          |
| 0.2187        | 17.6359 | 9724 | 2.3855          |
| 0.1012        | 17.9751 | 9911 | 2.4056          |
### Framework versions
- PEFT 0.12.1.dev0
- Transformers 4.45.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1