Update README.md
README.md
CHANGED
@@ -43,10 +43,10 @@ datasets:
- Quantized by [David Xue](https://www.linkedin.com/in/david-xue-uva/) from [Astronomer](https://astronomer.io)

## MUST READ: Very Important!! Note About Untrained Special Tokens in Llama 3 Base (Non-instruct) Models & Fine-tuning Llama 3 Base
+ - **If you intend to fine-tune this model with any added tokens, or fine-tune for instruction following, please use the `untrained-special-tokens-fixed` branch/revision.**
- Special tokens such as the ones used for instruct are undertrained in Llama 3 base models.
- Credits: discovered by Daniel Han https://twitter.com/danielhanchen/status/1781395882925343058
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/655ad0f8727df37c77a09cb9/1U2rRrx60p1pNeeAZw8Rd.png)
- - A patch function is under way; fine-tuning this model for instruction following may cause `NaN` gradients unless this problem is addressed.

## Important Note About Serving with vLLM & oobabooga/text-generation-webui
- For loading this model onto vLLM, make sure all requests have `"stop_token_ids":[128001, 128009]` to temporarily address the non-stop generation issue.
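As a rough illustration (not part of the README itself), a completion request against a vLLM OpenAI-compatible server could carry the stop token IDs like this; the endpoint, model name, and prompt are assumptions:

```python
import requests

# Hypothetical local vLLM OpenAI-compatible endpoint; adjust host, port, and model name.
payload = {
    "model": "astronomer-io/Llama-3-8B-GPTQ-4-Bit",
    "prompt": "The capital of France is",
    "max_tokens": 64,
    # Workaround for the non-stop generation issue:
    # 128001 = <|end_of_text|>, 128009 = <|eot_id|>
    "stop_token_ids": [128001, 128009],
}

resp = requests.post("http://localhost:8000/v1/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```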
@@ -66,15 +66,18 @@ This model can be loaded with less than 6 GB of VRAM (huge reduction from the or
The 4-bit GPTQ quant has small quality degradation from the original `bfloat16` model but can be served on much smaller GPUs with maximum improvement in latency and throughput.

+ The `untrained-special-tokens-fixed` branch is the same model as the main branch, but with the untrained tokens (including the untrained special tokens) fixed: untrained tokens are identified by finding the tokens whose max embedding value in both `input_embeddings` and `output_embeddings` is 0, and their embeddings are then set to the average of all trained tokens for each feature. Using this branch is recommended if you plan to do any fine-tuning, whether with your own added tokens or for instruction following.
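For illustration only, the token-patching procedure described above could look roughly like the sketch below, assuming it is applied to the unquantized base model with `transformers` and `torch`; this is not the exact script used to produce the `untrained-special-tokens-fixed` branch:

```python
import torch
from transformers import AutoModelForCausalLM

# Load the unquantized base model (the fix happens before GPTQ quantization).
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
)

with torch.no_grad():
    input_emb = model.get_input_embeddings().weight    # [vocab_size, hidden_size]
    output_emb = model.get_output_embeddings().weight  # [vocab_size, hidden_size]

    # A token counts as untrained if its max embedding value is 0
    # in both the input and the output embedding matrices.
    untrained = (input_emb.max(dim=-1).values == 0) & (output_emb.max(dim=-1).values == 0)

    # Replace each untrained row with the per-feature mean of the trained rows.
    input_emb[untrained] = input_emb[~untrained].mean(dim=0)
    output_emb[untrained] = output_emb[~untrained].mean(dim=0)

print(f"patched {int(untrained.sum())} untrained token embeddings")
```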
<!-- description end -->

## GPTQ Quantization Method
- This model is quantized using the AutoGPTQ library, following best practices noted in the [GPTQ paper](https://arxiv.org/abs/2210.17323)
- Quantization is calibrated with random samples from the specified dataset (wikitext for now) to minimize accuracy loss.

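As an assumption-laden sketch (not this repo's actual quantization script), producing a quant with the parameters listed in the table below (4 bits, group size 128, act order, 0.1 damp, wikitext calibration) using AutoGPTQ could look roughly like:

```python
from datasets import load_dataset
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Calibration data: random samples from wikitext-2 (the dataset linked in the table below).
wikitext = load_dataset("wikitext", "wikitext-2-v1", split="test")
samples = [t for t in wikitext["text"] if t.strip()][:128]
examples = [
    {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]}
    for enc in (tokenizer(t, return_tensors="pt") for t in samples)
]

quantize_config = BaseQuantizeConfig(
    bits=4,            # 4-bit weights
    group_size=128,    # 128g group size
    desc_act=True,     # act order
    damp_percent=0.1,  # damp %
)

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(examples)
model.save_quantized("Llama-3-8B-GPTQ-4-Bit", use_safetensors=True)
```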
- | Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description |
- | ------ | ---- | ---------- | --------- | ------ | ------------ | --------------- | --------- | ------- | ----------- |
- | [main](https://huggingface.co/astronomer-io/Llama-3-8B-GPTQ-4-Bit/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | 4-bit, with Act Order and group size 128g. Smallest model possible with small accuracy loss |
+ | Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Special Tokens Fixed | Description |
+ | ------ | ---- | ---------- | --------- | ------ | ------------ | --------------- | --------- | ------- | -------------------- | ----------- |
+ | [main](https://huggingface.co/astronomer-io/Llama-3-8B-GPTQ-4-Bit/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | No | 4-bit, with Act Order and group size 128g. Smallest model possible with small accuracy loss |
+ | [untrained-special-tokens-fixed](https://huggingface.co/astronomer-io/Llama-3-8B-GPTQ-4-Bit/tree/untrained-special-tokens-fixed) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | Yes | 4-bit, with Act Order and group size 128g. Smallest model possible with small accuracy loss |
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | May upload additional variants of GPTQ 4-bit models in the future using different parameters such as different group sizes |

## Serving this GPTQ model using vLLM
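The body of this section sits outside the diff shown here, but as a hedged sketch under assumed settings, loading the quant for offline inference with vLLM (reusing the stop-token workaround noted above) could look like:

```python
from vllm import LLM, SamplingParams

# Model name and limits here are assumptions; adjust to your deployment.
llm = LLM(
    model="astronomer-io/Llama-3-8B-GPTQ-4-Bit",
    quantization="gptq",
    max_model_len=8192,
)

params = SamplingParams(
    temperature=0.7,
    max_tokens=128,
    stop_token_ids=[128001, 128009],  # <|end_of_text|>, <|eot_id|>
)

outputs = llm.generate(["Once upon a time"], params)
print(outputs[0].outputs[0].text)
```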