# llama-7b_alpaca_l0.0002_64
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on an unknown dataset (the model name suggests Alpaca). It achieves the following results on the evaluation set:
- Loss: 1.8634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 0 (as auto-reported by the trainer; the results table below logs steps up to 9,537 over roughly three epochs)
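
As a rough guide, here is a minimal sketch (not the author's actual training script) of how these settings map onto `transformers.TrainingArguments`. The `output_dir` and the `optim` string are assumptions, and the dataset loading, model/tokenizer setup, and PEFT/LoRA config are omitted:

```python
from transformers import TrainingArguments

# Sketch only: dataset, model/tokenizer, and LoRA adapter config are
# assumptions left out here; this just mirrors the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="llama-7b_alpaca_l0.0002_64",  # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=0,
    gradient_accumulation_steps=16,  # effective train batch size: 1 * 16 = 16
    optim="adamw_torch",             # trainer reports this as Adam
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
)
```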
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6644 | 0.0003 | 1 | 2.9705 |
| 2.6972 | 0.0587 | 187 | 1.9034 |
| 1.8541 | 0.1174 | 374 | 1.8795 |
| 1.2478 | 0.1761 | 561 | 1.8640 |
| 1.8178 | 0.2348 | 748 | 1.8499 |
| 2.5147 | 0.2935 | 935 | 1.8168 |
| 1.4811 | 0.3522 | 1122 | 1.8192 |
| 1.5617 | 0.4108 | 1309 | 1.8117 |
| 2.6811 | 0.4695 | 1496 | 1.8032 |
| 1.9665 | 0.5282 | 1683 | 1.7964 |
| 1.5309 | 0.5869 | 1870 | 1.7978 |
| 1.5708 | 0.6456 | 2057 | 1.8058 |
| 2.2185 | 0.7043 | 2244 | 1.7870 |
| 2.6575 | 0.7630 | 2431 | 1.7815 |
| 1.6179 | 0.8217 | 2618 | 1.7880 |
| 1.4937 | 0.8804 | 2805 | 1.7864 |
| 2.2623 | 0.9391 | 2992 | 1.7782 |
| 2.4709 | 0.9978 | 3179 | 1.7754 |
| 1.9605 | 1.0565 | 3366 | 1.7927 |
| 1.5008 | 1.1151 | 3553 | 1.7958 |
| 1.4912 | 1.1738 | 3740 | 1.8099 |
| 1.9176 | 1.2325 | 3927 | 1.8007 |
| 1.5569 | 1.2912 | 4114 | 1.7962 |
| 1.3717 | 1.3499 | 4301 | 1.8071 |
| 1.5241 | 1.4086 | 4488 | 1.8020 |
| 2.1042 | 1.4673 | 4675 | 1.7964 |
| 1.6643 | 1.5260 | 4862 | 1.7947 |
| 1.3815 | 1.5847 | 5049 | 1.7994 |
| 2.5619 | 1.6434 | 5236 | 1.7989 |
| 1.7651 | 1.7021 | 5423 | 1.7948 |
| 1.4931 | 1.7608 | 5610 | 1.7908 |
| 1.5089 | 1.8195 | 5797 | 1.7957 |
| 1.768 | 1.8781 | 5984 | 1.7989 |
| 1.769 | 1.9368 | 6171 | 1.7915 |
| 1.5345 | 1.9955 | 6358 | 1.7887 |
| 1.2575 | 2.0542 | 6545 | 1.8514 |
| 1.1761 | 2.1129 | 6732 | 1.8809 |
| 1.4524 | 2.1716 | 6919 | 1.8932 |
| 1.5745 | 2.2303 | 7106 | 1.8655 |
| 1.1251 | 2.2890 | 7293 | 1.8609 |
| 1.2381 | 2.3477 | 7480 | 1.8901 |
| 1.7963 | 2.4064 | 7667 | 1.8743 |
| 1.4293 | 2.4651 | 7854 | 1.8580 |
| 1.3278 | 2.5238 | 8041 | 1.8687 |
| 1.2364 | 2.5824 | 8228 | 1.9165 |
| 1.5239 | 2.6411 | 8415 | 1.8834 |
| 1.3108 | 2.6998 | 8602 | 1.8617 |
| 1.2084 | 2.7585 | 8789 | 1.8702 |
| 1.3279 | 2.8172 | 8976 | 1.8786 |
| 1.7506 | 2.8759 | 9163 | 1.8734 |
| 1.4208 | 2.9346 | 9350 | 1.8601 |
| 1.2449 | 2.9933 | 9537 | 1.8668 |
### Framework versions
- PEFT 0.12.1.dev0
- Transformers 4.45.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
## Model tree for alexander-hm/llama-7b_alpaca_l0.0002_64

- Base model: [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b)
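
Since the framework versions indicate this checkpoint is a PEFT adapter on top of huggyllama/llama-7b, a minimal loading sketch might look like the following. The Alpaca-style prompt template and generation settings are assumptions, not documented behavior of this model:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model, then attach the fine-tuned adapter weights.
base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "alexander-hm/llama-7b_alpaca_l0.0002_64")
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# Assumed Alpaca-style prompt; the card does not document a prompt template.
prompt = "### Instruction:\nExplain what a LoRA adapter does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```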