# AdaDecode
This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct on the meng-lab/Llama-3.1-8B-Instruct-humaneval dataset. It achieves the following results on the evaluation set (final checkpoint, step 2000; see the training results table below):

- Loss: 5.3754
- Loss Layer 4 Head: 1.6774
- Loss Layer 8 Head: 1.3806
- Loss Layer 12 Head: 1.2795
- Loss Layer 16 Head: 0.6378
- Loss Layer 20 Head: 0.3110
- Loss Layer 24 Head: 0.1844
- Loss Layer 28 Head: 0.0864
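For quick inspection, here is a minimal generation sketch using transformers. It assumes the checkpoint loads as a standard Llama causal LM; the repository ID below is a placeholder inferred from the dataset name, not confirmed, and AdaDecode's auxiliary layer heads (see the per-layer loss columns below) may require the authors' own inference code to realize any decoding speedup.

```python
# Minimal usage sketch, assuming a standard Llama-architecture checkpoint.
# The repo ID is a hypothetical placeholder; substitute this model's actual ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meng-lab/AdaDecode-Llama-3.1-8B-Instruct-humaneval"  # hypothetical ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# HumanEval-style coding prompt, matching the fine-tuning dataset's domain.
messages = [{"role": "user", "content": "Write a Python function that checks whether a number is prime."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```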
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch | Step | Validation Loss | Loss Layer 4 Head | Loss Layer 8 Head | Loss Layer 12 Head | Loss Layer 16 Head | Loss Layer 20 Head | Loss Layer 24 Head | Loss Layer 28 Head |
|---|---|---|---|---|---|---|---|---|---|---|
| 7.7477 | 9.6823 | 200 | 7.6952 | 1.9941 | 1.7442 | 1.9609 | 1.0923 | 0.4414 | 0.2459 | 0.4381 |
| 5.8078 | 19.3646 | 400 | 6.4289 | 1.9090 | 1.5288 | 1.4099 | 0.9812 | 0.3976 | 0.2383 | 0.1448 |
| 4.8435 | 29.0469 | 600 | 5.9964 | 1.8480 | 1.5236 | 1.3836 | 0.6737 | 0.3976 | 0.2537 | 0.1092 |
| 4.6084 | 38.7292 | 800 | 6.0069 | 1.8460 | 1.7121 | 1.3111 | 0.6743 | 0.3436 | 0.2146 | 0.0977 |
| 4.0625 | 48.4115 | 1000 | 5.7159 | 1.8920 | 1.4329 | 1.3107 | 0.6548 | 0.3220 | 0.1980 | 0.0920 |
| 3.7565 | 58.0938 | 1200 | 5.4530 | 1.7095 | 1.3997 | 1.2900 | 0.6451 | 0.3159 | 0.1877 | 0.0897 |
| 3.5758 | 67.7761 | 1400 | 5.4088 | 1.6897 | 1.3862 | 1.2843 | 0.6413 | 0.3125 | 0.1860 | 0.0880 |
| 3.5369 | 77.4584 | 1600 | 5.3933 | 1.6839 | 1.3837 | 1.2815 | 0.6409 | 0.3124 | 0.1856 | 0.0870 |
| 3.5100 | 87.1407 | 1800 | 5.3780 | 1.6781 | 1.3809 | 1.2799 | 0.6378 | 0.3111 | 0.1843 | 0.0865 |
| 3.4762 | 96.8230 | 2000 | 5.3754 | 1.6774 | 1.3806 | 1.2795 | 0.6378 | 0.3110 | 0.1844 | 0.0864 |
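The "Loss Layer k Head" columns indicate auxiliary language-model heads trained at intermediate decoder layers (every fourth layer from 4 through 28), in line with AdaDecode-style early-exit training. The sketch below is illustrative only, not the authors' training code: it shows how per-layer losses of this shape could be computed, with equal-weight summation over heads as an assumption.

```python
# Illustrative sketch (not the authors' code) of per-layer early-exit losses:
# an auxiliary LM head at each selected decoder layer is trained with
# cross-entropy against the same next-token targets as the final head.
import torch
import torch.nn.functional as F

def early_exit_losses(hidden_states, lm_heads, labels):
    """hidden_states: dict layer_idx -> (batch, seq, hidden) activations
    lm_heads: dict layer_idx -> nn.Linear(hidden, vocab) auxiliary head
    labels: (batch, seq) target token ids, -100 at ignored positions."""
    losses = {}
    for layer_idx, head in lm_heads.items():
        logits = head(hidden_states[layer_idx])        # (batch, seq, vocab)
        # Standard causal-LM shift: position t predicts token t+1.
        shift_logits = logits[:, :-1, :].contiguous()
        shift_labels = labels[:, 1:].contiguous()
        losses[layer_idx] = F.cross_entropy(
            shift_logits.view(-1, shift_logits.size(-1)),
            shift_labels.view(-1),
            ignore_index=-100,
        )
    # Total auxiliary loss: unweighted sum over heads (weighting is an assumption).
    total = torch.stack(list(losses.values())).sum()
    return losses, total
```

Consistent with the table, deeper heads converge to much lower loss (0.0864 at layer 28 versus 1.6774 at layer 4), since later layers produce representations closer to the model's final output distribution.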
Base model: meta-llama/Llama-3.1-8B