---
base_model: meta-llama/Meta-Llama-3-8B
inference: false
model_creator: astronomer-io
model_name: Meta-Llama-3-8B
model_type: llama
pipeline_tag: text-generation
quantized_by: davidxmle
license: other
license_name: llama-3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE
tags:
- llama
- llama-3
- facebook
- meta
- astronomer
- gptq
- pretrained
- quantized
- finetuned
- autotrain_compatible
- endpoints_compatible
datasets:
- wikitext
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://www.astronomer.io/logo/astronomer-logo-RGB-standard-1200px.png" alt="Astronomer" style="width: 60%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="margin-top: 1.0em; margin-bottom: 1.0em;"></div>

<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">This model is generously created and made open source by <a href="https://astronomer.io">Astronomer</a>.</p></div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">Astronomer is the de facto company for <a href="https://airflow.apache.org/">Apache Airflow</a>, the most trusted open-source framework for data orchestration and MLOps.</p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama-3-8B-GPTQ-8-Bit
- Original model creator: [Meta Llama](https://huggingface.co/meta-llama)
- Original model: [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- Built with Meta Llama 3
- Quantized by [Astronomer](https://astronomer.io)

## Important Note About Serving with vLLM & oobabooga/text-generation-webui
- For serving this model with vLLM, make sure all requests include `"stop_token_ids": [128001, 128009]` to temporarily address the non-stop generation issue (see the example request in the vLLM serving section below).
  - vLLM does not yet respect `generation_config.json`.
  - The vLLM team is working on a fix: https://github.com/vllm-project/vllm/issues/4180
- For oobabooga/text-generation-webui:
  - Load the model via AutoGPTQ with `no_inject_fused_attention` enabled. This works around a bug in the AutoGPTQ library.
  - Under `Parameters` -> `Generation` -> `Skip special tokens`: turn this off (deselect).
  - Under `Parameters` -> `Generation` -> `Custom stopping strings`: add `"<|end_of_text|>","<|eot_id|>"` to the field.

<!-- description start -->
## Description

This repo contains 8-bit quantized GPTQ model files for [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B).

This model can be loaded with just over 10GB of VRAM (compared to the original 16.07GB model) and can be served lightning fast on some of the cheapest Nvidia GPUs available (Nvidia T4, Nvidia K80, RTX 4070, etc.).

Thanks to its higher bit rate, the 8-bit GPTQ quant has minimal quality degradation relative to the original `bfloat16` model.

<!-- description end -->
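
For a quick local check of the footprint, the quantized weights can be loaded through the Transformers GPTQ integration. This is a minimal sketch rather than an official usage script: it assumes `transformers`, `optimum`, `auto-gptq`, and `accelerate` are installed, and the prompt and generation settings are placeholders.

```python
# Minimal sketch: load the 8-bit GPTQ weights via the Transformers GPTQ integration.
# Assumes `transformers`, `optimum`, `auto-gptq`, and `accelerate` are installed;
# the prompt and generation settings are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer-io/Llama-3-8B-GPTQ-8-Bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # places the ~10GB of quantized weights on the available GPU
)

inputs = tokenizer("Apache Airflow is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```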

## GPTQ Quantization Method
- This model was quantized using the AutoGPTQ library, following best practices noted in the [GPTQ paper](https://arxiv.org/abs/2210.17323); a sketch of the procedure is shown after the table below.
- Quantization is calibrated with random samples from the specified dataset (wikitext for now) to minimize accuracy loss.

| Branch | Bits | Group Size | Act Order | Damp % | GPTQ Dataset | Sequence Length | VRAM Size | ExLlama | Description |
| ------ | ---- | ---------- | --------- | ------ | ------------ | --------------- | --------- | ------- | ----------- |
| [main](https://huggingface.co/astronomer-io/Llama-3-8B-GPTQ-8-Bit/tree/main) | 8 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 8192 | 9.74 GB | No | 8-bit, with Act Order and group size 32g. Minimum accuracy loss with decent VRAM usage reduction. |
| More variants to come | TBD | TBD | TBD | TBD | TBD | TBD | TBD | TBD | Additional GPTQ 8-bit variants with different parameters (e.g. 128g group size) may be uploaded in the future. |

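For reference, a quantization run matching the `main` branch settings above could look roughly like the following AutoGPTQ sketch. This is not the exact script used to produce this repo; the calibration sample count and sequence handling are assumptions.

```python
# Minimal sketch of an 8-bit GPTQ quantization with AutoGPTQ, mirroring the
# settings in the table above (bits=8, group_size=32, act order, damp 0.1).
# Calibration details (128 samples, truncation) are assumptions, not the exact recipe.
import random

from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from datasets import load_dataset
from transformers import AutoTokenizer

base_model = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base_model)

quantize_config = BaseQuantizeConfig(
    bits=8,            # 8-bit quantization
    group_size=32,     # 32g group size
    desc_act=True,     # act order
    damp_percent=0.1,  # damp %
)

# Build a small calibration set from random wikitext samples.
wikitext = load_dataset("wikitext", "wikitext-2-v1", split="test")
texts = [t for t in wikitext["text"] if t.strip()]
examples = []
for text in random.sample(texts, 128):
    enc = tokenizer(text, truncation=True, max_length=8192, return_tensors="pt")
    examples.append({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(examples)
model.save_quantized("Llama-3-8B-GPTQ-8-Bit", use_safetensors=True)
```
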
## Serving this GPTQ model using vLLM
Serving this model via vLLM was tested using an Nvidia T4 (16GB VRAM) with the command below:
```bash
python -m vllm.entrypoints.openai.api_server --model astronomer-io/Llama-3-8B-GPTQ-8-Bit --max-model-len 8192 --dtype float16
```
To work around the non-stop token generation bug, make sure to send requests with `"stop_token_ids": [128001, 128009]` to the vLLM endpoint.
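
For example, a completion request with the stop token IDs set could look like the sketch below. It assumes the server started above is reachable on vLLM's default port 8000; the prompt and sampling parameters are placeholders.

```python
# Minimal sketch: query the OpenAI-compatible vLLM server started above,
# passing stop_token_ids to work around the non-stop generation issue.
# Assumes the server is reachable at localhost:8000 (vLLM's default port).
import requests

payload = {
    "model": "astronomer-io/Llama-3-8B-GPTQ-8-Bit",
    "prompt": "Apache Airflow is",
    "max_tokens": 128,
    "temperature": 0.7,
    "stop_token_ids": [128001, 128009],  # <|end_of_text|> and <|eot_id|>
}

response = requests.post("http://localhost:8000/v1/completions", json=payload, timeout=60)
print(response.json()["choices"][0]["text"])
```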

### Contributors
- Quantized by [David Xue, Machine Learning Engineer from Astronomer](https://www.linkedin.com/in/david-xue-uva/)