---
base_model: https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct
datasets:
- togethercomputer/llama-instruct
inference: false
language:
- en
library_name: transformers
license: llama2
model_creator: Together
model_name: Llama2 7B 32K Instruct
model_type: llama
prompt_template: '[INST]

  {prompt}

  [/INST]

  '
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama2 7B 32K Instruct - AWQ
- Model creator: [Together](https://huggingface.co/togethercomputer)
- Original model: [Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)

<!-- description start -->
## Description

This repo contains AWQ model files for [Together's Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.

It is also now supported by the continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing the use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, using AWQ enables the use of much smaller GPUs, which can mean easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF)
* [Together's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Llama2-Instruct-Only

```
[INST]
{prompt}
[/INST]

```

<!-- prompt-template end -->
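
For programmatic use, the template is easy to apply with a small helper. This is a minimal sketch; `format_prompt` is an illustrative name, not part of any library:

```python
def format_prompt(prompt: str) -> str:
    """Wrap a plain instruction in the Llama2-Instruct-Only template above."""
    return f"[INST]\n{prompt}\n[/INST]\n\n"

print(format_prompt("Tell me about AI"))
```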

<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters

For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons; at this time, 32g models are still not fully tested with AutoAWQ and vLLM.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-AWQ/tree/main) | 4 | 128 | [c4](https://huggingface.co/datasets/allenai/c4) | 4096 | 3.89 GB |

<!-- README_AWQ.md-provided-files end -->
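
Each branch can be downloaded like any Hugging Face git repository. As a sketch, to fetch the `main` branch (requires `git-lfs` to be installed):

```shell
git lfs install
git clone --single-branch --branch main https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-AWQ
```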

<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

When using vLLM as a server, pass the `--quantization awq` parameter, for example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Llama-2-7B-32K-Instruct-AWQ --quantization awq
```
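
Once the server is running you can send it requests over HTTP. This is a minimal sketch, assuming the default port 8000 and the `/generate` endpoint exposed by vLLM's demo API server at the time of writing:

```shell
curl http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{
        "prompt": "[INST]\nTell me about AI\n[/INST]\n\n",
        "max_tokens": 256,
        "temperature": 0.7
    }'
```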

When using vLLM from Python code, pass the `quantization="awq"` parameter, for example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Llama-2-7B-32K-Instruct-AWQ", quantization="awq")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code

### Install the necessary packages

Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later

```shell
pip3 install autoawq
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
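
Either way, a quick import check confirms the installation (optional):

```shell
python3 -c "import awq; print('AutoAWQ is installed')"
```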

### You can then try the following example code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/Llama-2-7B-32K-Instruct-AWQ"

# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=True, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)

prompt = "Tell me about AI"
prompt_template = f'''[INST]
{prompt}
[/INST]

'''

print("\n\n*** Generate:")

tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)

print("Output: ", tokenizer.decode(generation_output[0]))

# Inference can also be done using transformers' pipeline
from transformers import pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) and [vLLM](https://github.com/vllm-project/vllm).

[Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donors!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Together's Llama2 7B 32K Instruct


# Llama-2-7B-32K-Instruct

## Model Description

Llama-2-7B-32K-Instruct is an open-source, long-context chat model finetuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) on high-quality instruction and chat data.
We built Llama-2-7B-32K-Instruct with fewer than 200 lines of Python using the [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).
We hope that this can enable everyone to finetune their own version of [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K): play with the [Together API](https://together.ai/blog/api-announcement) and give us feedback!

## Data Collection Details

Llama-2-7B-32K-Instruct is fine-tuned on a combination of two parts:
1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**.
   We collected the dataset following the distillation paradigm used by Alpaca, Vicuna, WizardLM and Orca: producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)).
   The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
   We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).

2. **Long-context Summarization and Long-context QA**.
   We follow the recipe of [Llama-2-7B-32K](https://together.ai/blog/Llama-2-7B-32K), and train our model with the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172).

The final data mixture used for model finetuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%).

## Model Usage

We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.

To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance:
```
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
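
You can confirm that Flash Attention built correctly before loading the model (an optional sanity check; assumes the package exposes `__version__`, as recent releases do):
```
python -c "import flash_attn; print(flash_attn.__version__)"
```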
You can load the model directly from the Hugging Face model hub using:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct",
                                             trust_remote_code=True, torch_dtype=torch.float16)
input_ids = tokenizer.encode("[INST]\nWrite a poem about cats\n[/INST]\n\n", return_tensors="pt")
output = model.generate(input_ids, max_length=128,
                        temperature=0.7, repetition_penalty=1.1, top_p=0.7, top_k=50)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```

The model is also hosted on the [Together Playground](https://api.together.xyz/playground). You can play with the model using prompts formatted as:

```
[INST]\n<your instruction here>\n[/INST]\n\n
```

For example, if we query the model with

```
[INST]\nWrite a poem about cats\n[/INST]\n\n
```

the model will return
```
[INST]
Write a poem about cats
[/INST]

Cats are mysterious creatures, with their aloof and independent nature. They're also incredibly beautiful, with their sleek fur and piercing eyes. Here's a poem that captures the essence of cats:

Cats, oh cats, how can I describe you?
Your beauty is beyond compare, it seems.
You're graceful and elegant, like a ballerina's dance,
But don't let your charm fool you, for you're not easily tamed.

With your soft purring and playful meows,
You draw us in with your enchanting powers.
We love to watch you play, your tail twirling 'round,
As if you're dancing on air, with no sound.

But don't be fooled by your sweetness, my friend,
For beneath that gentle exterior, lies a fierce defender.
When danger lurks, you'll spring into action,
Protecting those you hold dear, without question.

So let us admire you, from afar,
For in your own way, you're truly unique, a star.
And though we may never fully understand,
The depths of your soul, we'll always stand, hand in paw, as one.

This poem captures the essence of cats, highlighting their beauty, independence, and protective nature. It also celebrates the special bond between humans and cats, recognizing their unique qualities and the joy they bring to our lives.
```

## Model Evaluation

We evaluate the model from three aspects: 1) [Alpaca Eval](https://tatsu-lab.github.io/alpaca_eval/);
2) [Rouge score over BookSum](https://together.ai/blog/Llama-2-7B-32K); and
3) [Accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/Llama-2-7B-32K).
We compare with models including
[GPT-3.5-Turbo-16K](https://platform.openai.com/docs/models/gpt-3-5),
[Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf),
[Longchat-7b-16k](https://huggingface.co/lmsys/longchat-7b-16k)
and [Longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k).
We summarize the results below:

* Alpaca Eval

| Model | win_rate | standard_error | n_total | avg_length |
| -------- | ------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 71.37 | 1.59 | 805 | 1479 |
| Llama-2-7B-32K-Instruct | 70.36 | 1.61 | 803 | 1885 |
| oasst-rlhf-llama-33b | 66.52 | 1.66 | 805 | 1079 |
| text_davinci_003 | 50.00 | 0.00 | 805 | 307 |
| falcon-40b-instruct | 45.71 | 1.75 | 805 | 662 |
| alpaca-farm-ppo-human | 41.24 | 1.73 | 805 | 803 |
| alpaca-7b | 26.46 | 1.54 | 805 | 396 |
| text_davinci_001 | 15.17 | 1.24 | 804 | 296 |

* Rouge Score over BookSum

| Model | R1 | R2 | RL |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.055 | 0.008 | 0.046 |
| Longchat-7b-16k | 0.303 | 0.055 | 0.160 |
| Longchat-7b-v1.5-32k | 0.308 | 0.057 | 0.163 |
| GPT-3.5-Turbo-16K | 0.324 | 0.066 | 0.178 |
| Llama-2-7B-32K-Instruct (ours) | 0.336 | 0.076 | 0.184 |

* Accuracy over MQA

| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.448 | 0.421 | 0.354 |
| Longchat-7b-16k | 0.510 | 0.473 | 0.428 |
| Longchat-7b-v1.5-32k | 0.534 | 0.516 | 0.479 |
| GPT-3.5-Turbo-16K | 0.622 | 0.609 | 0.577 |
| Llama-2-7B-32K-Instruct (ours) | 0.622 | 0.604 | 0.589 |

## Limitations and Bias

As with all language models, Llama-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.

## Community

Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4).