---
license: other
license_name: nvidia-open-model-license
license_link: >-
https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
tags:
- fp8
- vllm
base_model: nvidia/Minitron-8B-Base
---
# Minitron-8B-Base-FP8
FP8 quantized checkpoint of [nvidia/Minitron-8B-Base](https://huggingface.co/nvidia/Minitron-8B-Base) for use with vLLM.
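For background on what the FP8 conversion does, here is a minimal pure-Python sketch of per-tensor E4M3 scaling and rounding. This is an illustration only, not how this checkpoint was produced; the helper names are made up, and subnormals and other edge cases of the E4M3 format are simplified away.

```python
import math

# Largest finite value representable in FP8 E4M3.
FP8_E4M3_MAX = 448.0

def round_to_e4m3(x):
    """Round x to the nearest E4M3-representable value.

    Simplified: ignores subnormals and assumes |x| <= FP8_E4M3_MAX.
    """
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    step = 2.0 ** (e - 3)  # E4M3 has 3 mantissa bits
    return round(x / step) * step

def quant_dequant(weights):
    """Simulate the FP8 round trip: scale so the largest magnitude maps
    to FP8_E4M3_MAX, clamp, round each value to E4M3, then rescale."""
    scale = max(abs(w) for w in weights) / FP8_E4M3_MAX
    return [round_to_e4m3(max(-FP8_E4M3_MAX, min(FP8_E4M3_MAX, w / scale))) * scale
            for w in weights]

print(quant_dequant([0.03, -1.2, 0.75, 2.4]))
```

Because E4M3 keeps only 3 mantissa bits, each round-tripped weight lands within a few percent of the original, which is why the GSM8K scores below stay close to the BF16 baseline.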
## Evaluations
GSM8K results for this quantized model:
```
lm_eval --model vllm --model_args pretrained=Minitron-8B-Base-FP8 --tasks gsm8k --num_fewshot 5 --batch_size auto
vllm (pretrained=Minitron-8B-Base-FP8), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.5019|± |0.0138|
| | |strict-match | 5|exact_match|↑ |0.4989|± |0.0138|
```
Baseline:
```
lm_eval --model vllm --model_args pretrained=nvidia/Minitron-8B-Base --tasks gsm8k --num_fewshot 5 --batch_size auto
vllm (pretrained=nvidia/Minitron-8B-Base), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.5080|± |0.0138|
| | |strict-match | 5|exact_match|↑ |0.5064|± |0.0138|
```
Evaluation results reported in the [original paper](https://arxiv.org/pdf/2407.14679):
![image/png](https://cdn-uploads.huggingface.co/production/uploads/60466e4b4f40b01b66151416/YFmlifuYBVtdfsdPVgV4u.png)