---
base_model: meta-llama/Meta-Llama-3-8B
inference: false
model_creator: astronomer-io
model_name: Meta-Llama-3-8B
model_type: llama
pipeline_tag: text-generation
quantized_by: davidxmle
license: other
license_name: llama-3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
tags:
- llama
- llama-3
- facebook
- meta
- astronomer
- gptq
- pretrained
- quantized
- finetuned
- autotrain_compatible
- endpoints_compatible
datasets:
- wikitext
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://www.astronomer.io/logo/astronomer-logo-RGB-standard-1200px.png" alt="Astronomer" style="width: 60%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="margin-top: 1.0em; margin-bottom: 1.0em;"></div>

<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">This model is generously created and made open source by <a href="https://astronomer.io">Astronomer</a>.</p></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Astronomer is the de facto company for <a href="https://airflow.apache.org/">Apache Airflow</a>, the most trusted open-source framework for data orchestration and MLOps.</p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama-3-8B-GPTQ-4-Bit
- Original model creator: [Meta](https://huggingface.co/meta-llama)
- Original model: [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- Built with Meta Llama 3
- Quantized by [David Xue](https://www.linkedin.com/in/david-xue-uva/) from [Astronomer](https://astronomer.io)

## MUST READ: Important Note About Untrained Special Tokens in Llama 3 Base (Non-Instruct) Models & Fine-Tuning Llama 3 Base
- Special tokens such as those used by the instruct chat template are undertrained in Llama 3 base models.
- Credits: discovered by Daniel Han (https://twitter.com/danielhanchen/status/1781395882925343058)
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/655ad0f8727df37c77a09cb9/1U2rRrx60p1pNeeAZw8Rd.png)
- A patch function is under way; until this issue is addressed, fine-tuning this model for instruction following may produce `NaN` gradients. One commonly used mitigation is sketched after this list.

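A minimal sketch of one commonly cited mitigation (not necessarily the patch referred to above), assuming you fine-tune from the unquantized base model: re-initialize the untrained special-token rows of the embedding and LM-head matrices to the mean of the trained rows before training.

```python
# Sketch only: re-initialize untrained special-token embeddings before fine-tuning.
# Assumes the unquantized base model; the zero-norm threshold is illustrative.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
)

embeddings = model.get_input_embeddings().weight.data
lm_head = model.get_output_embeddings().weight.data

# Rows with (near-)zero norm were never updated during pre-training,
# e.g. the reserved instruct/chat special tokens.
untrained = embeddings.norm(dim=-1) < 1e-6

# Replace each untrained row with the mean of the trained rows.
embeddings[untrained] = embeddings[~untrained].mean(dim=0)
lm_head[untrained] = lm_head[~untrained].mean(dim=0)
```
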
## Important Note About Serving with vLLM & oobabooga/text-generation-webui
- When serving this model with vLLM, make sure all requests include `"stop_token_ids": [128001, 128009]` to temporarily work around the non-stop generation issue (see the example request at the end of this card).
  - vLLM does not yet respect `generation_config.json`.
  - The vLLM team is working on a fix: https://github.com/vllm-project/vllm/issues/4180
- For oobabooga/text-generation-webui:
  - Load the model via AutoGPTQ with `no_inject_fused_attention` enabled. This works around a bug in the AutoGPTQ library.
  - Under `Parameters` -> `Generation` -> `Skip special tokens`: turn this off (deselect).
  - Under `Parameters` -> `Generation` -> `Custom stopping strings`: add `"<|end_of_text|>","<|eot_id|>"` to the field.

<!-- description start -->
## Description

This repo contains 4-bit quantized GPTQ model files for [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B).

This model can be loaded with less than 6 GB of VRAM (a large reduction from the original 16.07 GB model) and can be served quickly on the cheapest Nvidia GPUs available (Nvidia T4, Nvidia K80, RTX 4070, etc.).

The 4-bit GPTQ quant shows a small quality degradation relative to the original `bfloat16` model, but can be served on much smaller GPUs with significantly lower latency and higher throughput.

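For illustration, a minimal sketch of loading the quantized weights with the Transformers library; it assumes `optimum`, `auto-gptq` (or a compatible GPTQ backend), and `accelerate` are installed, and the prompt is arbitrary:

```python
# Sketch: load the 4-bit GPTQ weights with Transformers.
# `revision` selects a branch from the table in the next section.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer-io/Llama-3-8B-GPTQ-4-Bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision="main",    # branch name from the table below
    device_map="auto",  # fits in < 6 GB of VRAM
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
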
<!-- description end -->

## GPTQ Quantization Method
- This model was quantized with the AutoGPTQ library, following the best practices noted in the [GPTQ paper](https://arxiv.org/abs/2210.17323).
- Quantization is calibrated on random samples from the specified dataset (currently wikitext) to minimize accuracy loss; a sketch of the procedure follows this list.

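A rough sketch of what that calibration step can look like with AutoGPTQ; the hyperparameters mirror the `main` branch in the table below, but the exact script used for this repo may differ:

```python
# Sketch of 4-bit GPTQ quantization with AutoGPTQ, calibrated on wikitext samples.
# Hyperparameters mirror the `main` branch below; sample count is illustrative.
import random
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from datasets import load_dataset
from transformers import AutoTokenizer

base_model = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base_model)

quantize_config = BaseQuantizeConfig(
    bits=4,           # 4-bit weights
    group_size=128,   # 128g
    desc_act=True,    # act-order
    damp_percent=0.1,
)

# Build calibration examples from random non-empty wikitext rows.
data = load_dataset("wikitext", "wikitext-2-v1", split="test")
texts = [t for t in data["text"] if t.strip()]
examples = [
    tokenizer(t, truncation=True, max_length=8192, return_tensors="pt")
    for t in random.sample(texts, 128)
]

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(examples)
model.save_quantized("Llama-3-8B-GPTQ-4-Bit")
```
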
| Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description |
| ------ | ---- | ---------- | --------- | ------ | ------------ | --------------- | --------- | ------- | ----------- |
| [main](https://huggingface.co/astronomer-io/Llama-3-8B-GPTQ-4-Bit/tree/main) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 5.74 GB | Yes | 4-bit, with act order and group size 128g. Smallest model possible, with small accuracy loss. |
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Additional GPTQ 4-bit variants with different parameters (e.g., other group sizes) may be uploaded in the future. |

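To pin a specific branch from the table when downloading the files, something like the following works (only `main` exists today; the local directory name is arbitrary):

```python
# Sketch: download one branch of the quantized repo with huggingface_hub.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="astronomer-io/Llama-3-8B-GPTQ-4-Bit",
    revision="main",                    # branch name from the table above
    local_dir="Llama-3-8B-GPTQ-4-Bit",  # arbitrary output directory
)
print(local_path)
```
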
## Serving this GPTQ model using vLLM
Tested serving this model via vLLM using an Nvidia T4 (16 GB VRAM).

Tested with the command below:
```bash
python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-GPTQ-4-Bit --max-model-len 8192 --dtype float16
```

For the non-stop token generation bug, make sure to send requests with `"stop_token_ids": [128001, 128009]` to the vLLM endpoint.
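For example, a request against the OpenAI-compatible endpoint started above (host, port, and prompt are illustrative; vLLM accepts `stop_token_ids` as an extra field in the request body):

```python
# Example request to the vLLM OpenAI-compatible server started above.
# stop_token_ids works around the non-stop generation issue described earlier.
import requests

response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "astronomer-io/Llama-3-8B-GPTQ-4-Bit",
        "prompt": "The quick brown fox",
        "max_tokens": 128,
        "temperature": 0.7,
        "stop_token_ids": [128001, 128009],
    },
    timeout=60,
)
print(response.json()["choices"][0]["text"])
```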

### Contributors
- Quantized by [David Xue, Machine Learning Engineer from Astronomer](https://www.linkedin.com/in/david-xue-uva/)