Update README.md
README.md CHANGED
@@ -70,7 +70,7 @@ The 4 bit GPTQ quant has small quality degradation from the original `bfloat16`
 
 | Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description |
 | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
-| [main](https://huggingface.co/astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | 4-bit, with Act Order and group size 128g. Smallest
+| [main](https://huggingface.co/astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | 4-bit, with Act Order and group size 128g. Smallest model possible with small accuracy loss |
 | More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | May upload additional variants of GPTQ 4 bit models in the future using different parameters such as 128g group size and etc. |
 
 ## Serving this GPTQ model using vLLM
@@ -82,9 +82,9 @@ python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-In
 ```
 For the non-stop token generation bug, make sure to send requests with `stop_token_ids":[128001, 128009]` to vLLM endpoint
 Example:
-```
+```json
 {
-"model": "Llama-3-8B-Instruct-GPTQ-4-Bit",
+"model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
 "messages": [
 {"role": "system", "content": "You are a helpful assistant."},
 {"role": "user", "content": "Who created Llama 3?"}
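And for the second hunk: a minimal sketch of sending the README's example request, including the recommended `stop_token_ids`, to a vLLM OpenAI-compatible server started with the `python -m vllm.entrypoints.openai.api_server` command above. The host and port are assumed vLLM defaults, not taken from this diff.

```python
# Sketch: chat completion request against a locally running vLLM OpenAI-compatible
# endpoint, passing the stop_token_ids the README recommends to work around the
# non-stop token generation bug. localhost:8000 is an assumed default.
import requests

payload = {
    "model": "astronomer-io/Llama-3-8B-Instruct-GPTQ-4-Bit",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who created Llama 3?"},
    ],
    "stop_token_ids": [128001, 128009],  # Llama 3 end-of-text and end-of-turn tokens
}

resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```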