Text Generation
Transformers
Safetensors
English
llama
causal-lm
text-generation-inference
4-bit precision
gptq

8bit version of the model

#8
by varun500 - opened
README.md CHANGED
@@ -11,34 +11,18 @@ datasets:
 - tatsu-lab/alpaca
 inference: false
 ---
-<!-- header start -->
-<!-- 200823 -->
-<div style="width: auto; margin-left: auto; margin-right: auto">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
-</div>
-<div style="display: flex; justify-content: space-between; width: 100%;">
-<div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
-</div>
-<div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
-</div>
-</div>
-<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
-<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
-<!-- header end -->
 
 # StableVicuna-13B-GPTQ
 
-This repo contains 4bit GPTQ format quantised models of [CarperAI's StableVicuna 13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta).
+This repo contains 4bit GPTQ format quantised models of [CarperAI's StableVicuna 13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta).
 
 It is the result of first merging the deltas from the above repository with the original Llama 13B weights, then quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
 
 ## Repositories available
 
 * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ).
-* [4-bit, 5-bit and 8-bit GGML models for CPU (+CUDA) inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML).
-* [Unquantised float16 model in HF format](https://huggingface.co/TheBloke/stable-vicuna-13B-HF).
+* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML).
+* [Unquantised 16bit model in HF format](https://huggingface.co/TheBloke/stable-vicuna-13B-HF).
 
 ## PROMPT TEMPLATE
 
@@ -59,7 +43,11 @@ Open the text-generation-webui UI as normal.
 4. Wait until it says it's finished downloading.
 5. Click the **Refresh** icon next to **Model** in the top left.
 6. In the **Model drop-down**: choose the model you just downloaded, `stable-vicuna-13B-GPTQ`.
-7. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
+7. If you see an error in the bottom right, ignore it - it's temporary.
+8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama`.
+9. Click **Save settings for this model** in the top right.
+10. Click **Reload the Model** in the top right.
+11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
 
 ## Provided files
 
@@ -97,7 +85,7 @@ To access this file, please switch to the `latest` branch of this repo and downl
 ```
 CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.act-order.safetensors
 ```
-
+
 ## Manual instructions for `text-generation-webui`
 
 File `stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
@@ -127,37 +115,6 @@ The above commands assume you have installed all dependencies for GPTQ-for-LLaMa
 
 If you can't update GPTQ-for-LLaMa or don't want to, you can use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
 
-<!-- footer start -->
-<!-- 200823 -->
-## Discord
-
-For further support, and discussions on these models and AI in general, join us at:
-
-[TheBloke AI's Discord server](https://discord.gg/theblokeai)
-
-## Thanks, and how to contribute.
-
-Thanks to the [chirper.ai](https://chirper.ai) team!
-
-I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
-
-If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
-
-Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
-
-* Patreon: https://patreon.com/TheBlokeAI
-* Ko-Fi: https://ko-fi.com/TheBlokeAI
-
-**Special thanks to**: Aemon Algiz.
-
-**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
-
-
-Thank you to all my generous patrons and donaters!
-
-And thank you again to a16z for their generous grant.
-
-<!-- footer end -->
 # Original StableVicuna-13B model card
 
 ## Model Description
@@ -305,7 +262,7 @@ This work would not have been possible without the support of [Stability AI](htt
                   Zack Witten and
                   alexandremuzio and
                   crumb},
-  title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark
+  title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark
           Util, T5 ILQL, Tests}},
   month = mar,
   year = 2023,
 
 
52
  ## Provided files
53
 
 
85
  ```
86
  CUDA_VISIBLE_DEVICES=0 python3 llama.py stable-vicuna-13B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors stable-vicuna-13B-GPTQ-4bit.act-order.safetensors
87
  ```
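The `llama.py` command in the README encodes the key quantisation choices: `--wbits 4` for 4-bit weights, `--groupsize 128`, `--act-order` for activation-order quantisation, and `c4` as the calibration dataset. As a rough sketch only, here are the same settings expressed through the `GPTQConfig` API in recent transformers releases; this illustrates the parameters, it is not how this repo was actually produced.

```python
# Sketch: the GPTQ-for-LLaMa settings above, restated via transformers'
# GPTQConfig. Requires optimum and auto-gptq; parameters mirror the CLI flags.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

base = "TheBloke/stable-vicuna-13B-HF"
tokenizer = AutoTokenizer.from_pretrained(base, use_fast=False)

gptq_config = GPTQConfig(
    bits=4,           # --wbits 4
    group_size=128,   # --groupsize 128
    desc_act=True,    # --act-order
    dataset="c4",     # calibration dataset
    tokenizer=tokenizer,
)

model = AutoModelForCausalLM.from_pretrained(
    base, device_map="auto", quantization_config=gptq_config
)
model.save_pretrained("stable-vicuna-13B-GPTQ-4bit-act-order")
```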
config.json CHANGED
@@ -20,13 +20,5 @@
   "torch_dtype": "float16",
   "transformers_version": "4.28.1",
   "use_cache": true,
-  "vocab_size": 32001,
-  "quantization_config": {
-    "bits": 4,
-    "damp_percent": 0.01,
-    "desc_act": false,
-    "group_size": 128,
-    "model_file_base_name": "model",
-    "quant_method": "gptq"
-  }
-}
+  "vocab_size": 32001
+}
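The removed `quantization_config` block is what lets recent transformers releases (roughly 4.32 onwards, with optimum and auto-gptq installed) recognise the checkpoint as GPTQ straight from `config.json`; without it, loaders fall back to the separate `quantize_config.json`. A sketch of that direct load path, assuming a suitably recent stack:

```python
# Sketch: with quantization_config embedded in config.json, transformers can
# load the GPTQ weights directly (needs optimum + auto-gptq and a GPU).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TheBloke/stable-vicuna-13B-GPTQ"
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=False)
```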
quantize_config.json CHANGED
@@ -2,6 +2,5 @@
   "bits": 4,
   "damp_percent": 0.01,
   "desc_act": false,
-  "group_size": 128,
-  "model_file_base_name": "model"
+  "group_size": 128
 }
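`quantize_config.json` is the file AutoGPTQ reads to learn how the weights were quantised, and its fields map directly onto `BaseQuantizeConfig`. A sketch with the values from the file above (`desc_act: false` corresponds to the "compat", no-act-order weights):

```python
# Sketch: quantize_config.json expressed as AutoGPTQ's BaseQuantizeConfig,
# copying the values from the file above.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=4,            # 4-bit weights
    group_size=128,    # quantisation groupsize
    desc_act=False,    # no act-order, i.e. the "compat" file
    damp_percent=0.01,
)
```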
special_tokens_map.json CHANGED
@@ -1,23 +1,6 @@
 {
-  "bos_token": {
-    "content": "<s>",
-    "lstrip": false,
-    "normalized": true,
-    "rstrip": false,
-    "single_word": false
-  },
-  "eos_token": {
-    "content": "</s>",
-    "lstrip": false,
-    "normalized": true,
-    "rstrip": false,
-    "single_word": false
-  },
-  "unk_token": {
-    "content": "<unk>",
-    "lstrip": false,
-    "normalized": true,
-    "rstrip": false,
-    "single_word": false
-  }
-}
+  "bos_token": "</s>",
+  "eos_token": "</s>",
+  "pad_token": "[PAD]",
+  "unk_token": "</s>"
+}
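The new `special_tokens_map.json` collapses the `AddedToken` objects into plain strings, maps `bos_token`, `eos_token` and `unk_token` all to `</s>`, and adds a `[PAD]` token. A quick sanity check of what a loaded tokenizer reports (a sketch; the printed values should follow the map above):

```python
# Sketch: inspect the special tokens the tokenizer resolves to after loading.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("TheBloke/stable-vicuna-13B-GPTQ", use_fast=False)
print("bos:", tok.bos_token)  # "</s>" per the new special_tokens_map.json
print("eos:", tok.eos_token)  # "</s>"
print("unk:", tok.unk_token)  # "</s>"
print("pad:", tok.pad_token)  # "[PAD]"
```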
model.safetensors → stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7fca2a7f47b506df5a623e40c5d7f2d0efde349f27277b56a3e5a4a23e70401a
-size 7255179752
+oid sha256:442d71b56bc16721d28aeb2d5e0ba07cf04bfb61cc7af47993d5f0a15133b520
+size 7255179696
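The rename changes the Git LFS pointer, so the expected SHA-256 digest and byte size change with it. A download can be verified against the new pointer with only the standard library:

```python
# Sketch: verify the renamed weight file against the LFS pointer above.
import hashlib
import os

fname = "stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors"
expected_sha = "442d71b56bc16721d28aeb2d5e0ba07cf04bfb61cc7af47993d5f0a15133b520"
expected_size = 7255179696

assert os.path.getsize(fname) == expected_size, "size mismatch - incomplete download?"

h = hashlib.sha256()
with open(fname, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == expected_sha, "sha256 mismatch - corrupted download?"
print("LFS pointer check passed")
```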
tokenizer_config.json CHANGED
@@ -3,7 +3,7 @@
   "add_eos_token": false,
   "bos_token": {
     "__type": "AddedToken",
-    "content": "<s>",
+    "content": "",
     "lstrip": false,
     "normalized": true,
     "rstrip": false,
@@ -12,7 +12,7 @@
   "clean_up_tokenization_spaces": false,
   "eos_token": {
     "__type": "AddedToken",
-    "content": "</s>",
+    "content": "",
     "lstrip": false,
     "normalized": true,
     "rstrip": false,
@@ -20,14 +20,15 @@
   },
   "model_max_length": 2048,
   "pad_token": null,
+  "padding_side": "right",
   "sp_model_kwargs": {},
   "tokenizer_class": "LlamaTokenizer",
   "unk_token": {
     "__type": "AddedToken",
-    "content": "<unk>",
+    "content": "",
     "lstrip": false,
     "normalized": true,
     "rstrip": false,
     "single_word": false
   }
-}
+}
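The added `"padding_side": "right"` only takes effect once a pad token exists (the `[PAD]` entry from `special_tokens_map.json`): batches of unequal-length prompts are then padded at the end of each sequence. A sketch of the effect:

```python
# Sketch: with a pad token defined and padding_side="right", shorter prompts
# in a batch are padded at the end of the sequence.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("TheBloke/stable-vicuna-13B-GPTQ", use_fast=False)
batch = tok(["Hi", "A much longer prompt"], padding=True, return_tensors="pt")
print(batch["input_ids"].shape)    # (2, max_len); the short row is padded on the right
print(batch["attention_mask"][0])  # trailing zeros mark the padding
```

Note that for batched generation, left-padding is usually preferable; right-padding mainly matters for training.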