---
language:
- en
license: llama2
library_name: transformers
datasets:
- togethercomputer/llama-instruct
model_name: Llama2 7B 32K Instruct
base_model: togethercomputer/Llama-2-7B-32K-Instruct
inference: false
model_creator: Together
model_type: llama
prompt_template: '[INST] {prompt} [/INST]'
quantized_by: TheBloke
---

**Chat & support:** [TheBloke's Discord server](https://discord.gg/theblokeai)

**Want to contribute?** [TheBloke's Patreon page](https://patreon.com/TheBlokeAI)

TheBloke's LLM work is generously supported by a grant from andreessen horowitz (a16z)


# Llama2 7B 32K Instruct - AWQ
- Model creator: [Together](https://huggingface.co/togethercomputer)
- Original model: [Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)

## Description

This repo contains AWQ model files for [Together's Llama2 7B 32K Instruct](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.

It is also now supported by the continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing the use of AWQ models for high-throughput concurrent inference in multi-user server scenarios.

Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, using AWQ enables much smaller GPUs, which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.

## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-GGUF)
* [Together's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct)

## Prompt template: Llama2-Instruct-Only

```
[INST] {prompt} [/INST]
```

## Provided files and AWQ parameters

For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Llama-2-7B-32K-Instruct-AWQ/tree/main) | 4 | 128 | [c4](https://huggingface.co/datasets/allenai/c4) | 4096 | 3.89 GB |

## Serving this model from vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

When using vLLM as a server, pass the `--quantization awq` parameter, for example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Llama-2-7B-32K-Instruct-AWQ --quantization awq
```

When using vLLM from Python code, pass the `quantization="awq"` parameter, for example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Llama-2-7B-32K-Instruct-AWQ", quantization="awq")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
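Once the server above is running, you can query it over HTTP. The following is a minimal sketch, assuming the default `/generate` endpoint and port 8000 of `vllm.entrypoints.api_server`; the exact request and response fields follow vLLM's example API server and may change between versions:

```python
import requests

# Assumes the api_server started above is listening on localhost:8000.
url = "http://localhost:8000/generate"

payload = {
    "prompt": "[INST] Tell me about AI [/INST]\n",
    "max_tokens": 256,
    "temperature": 0.7,
    "top_p": 0.95,
}

response = requests.post(url, json=payload)
response.raise_for_status()

# The example api_server returns JSON with a "text" field holding the
# completion(s); note the returned text typically includes the prompt.
print(response.json()["text"][0])
```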
## How to use this AWQ model from Python code

### Install the necessary packages

Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later

```shell
pip3 install autoawq
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### You can then try the following example code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/Llama-2-7B-32K-Instruct-AWQ"

# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=True, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)

prompt = "Tell me about AI"
prompt_template=f'''[INST] {prompt} [/INST]
'''

print("\n\n*** Generate:")

tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)

print("Output: ", tokenizer.decode(generation_output[0]))

# Inference can also be done using transformers' pipeline
from transformers import pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
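If you want tokens printed to the terminal as they are generated, transformers' `TextStreamer` can be passed to `generate()`. This is a minimal sketch reusing `model`, `tokenizer` and `tokens` from the example above; treat it as illustrative rather than part of the tested workflow:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated, instead of
# waiting for the full completion. Reuses `model`, `tokenizer` and
# `tokens` from the example above.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    tokens,
    streamer=streamer,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)
```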
## Compatibility

The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

# Original model card: Together's Llama2 7B 32K Instruct

# Llama-2-7B-32K-Instruct

## Model Description

Llama-2-7B-32K-Instruct is an open-source, long-context chat model finetuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K), over high-quality instruction and chat data. We built Llama-2-7B-32K-Instruct with less than 200 lines of Python script using [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct). We hope that this can enable everyone to finetune their own version of [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) — play with [Together API](https://together.ai/blog/api-announcement) and give us feedback!

## Data Collection Details

Llama-2-7B-32K-Instruct is fine-tuned over a combination of two parts:

1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**. We collected the dataset following the distillation paradigm used by Alpaca, Vicuna, WizardLM and Orca — producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)). The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct), and a quick way to inspect it is sketched below. We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/Llama-2-7B-32K-Instruct).

2. **Long-context Summarization and Long-context QA**. We follow the recipe of [Llama-2-7B-32K](https://together.ai/blog/Llama-2-7B-32K), and train our model with the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172).

The final data mixture used for model finetuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%).
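For reference, here is a minimal sketch of pulling the released distillation dataset with the `datasets` library; the `train` split name is an assumption, so check the dataset card and the printed schema if it differs:

```python
from datasets import load_dataset

# Load the released instruction dataset linked above.
# The "train" split name is an assumption; adjust if the dataset card differs.
ds = load_dataset("togethercomputer/llama-instruct", split="train")

print(ds)      # number of rows and column names
print(ds[0])   # inspect the first conversation record
```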
## Model Usage

We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.

To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance:

```
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```

You can load the model directly from the Hugging Face model hub using

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct",
                                             trust_remote_code=True, torch_dtype=torch.float16)

input_ids = tokenizer.encode("[INST]\nWrite a poem about cats\n[/INST]\n\n", return_tensors="pt")
output = model.generate(input_ids, max_length=128,
                        temperature=0.7, repetition_penalty=1.1, top_p=0.7, top_k=50)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```

The model is also hosted on [Together Playground](https://api.together.xyz/playground). You can simply play with the model by using a prompt formatted as:

```
[INST]\n{prompt}\n[/INST]\n\n
```

For example, if we query the model with

```
[INST]\nWrite a poem about cats\n[/INST]\n\n
```

the model will return

```
[INST] Write a poem about cats [/INST]

Cats are mysterious creatures, with their aloof and independent nature. They're also incredibly beautiful, with their sleek fur and piercing eyes. Here's a poem that captures the essence of cats:

Cats, oh cats, how can I describe you?
Your beauty is beyond compare, it seems.
You're graceful and elegant, like a ballerina's dance,
But don't let your charm fool you, for you're not easily tamed.

With your soft purring and playful meows,
You draw us in with your enchanting powers.
We love to watch you play, your tail twirling 'round,
As if you're dancing on air, with no sound.

But don't be fooled by your sweetness, my friend,
For beneath that gentle exterior, lies a fierce defender.
When danger lurks, you'll spring into action,
Protecting those you hold dear, without question.

So let us admire you, from afar,
For in your own way, you're truly unique, a star.
And though we may never fully understand,
The depths of your soul, we'll always stand, hand in paw, as one.

This poem captures the essence of cats, highlighting their beauty, independence, and protective nature. It also celebrates the special bond between humans and cats, recognizing their unique qualities and the joy they bring to our lives.
```
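Since the context window extends to 32K tokens, entire documents can be summarized in a single prompt. The following is a minimal sketch building on the loading snippet above; `book_chapter.txt` is a hypothetical input file, and the sampling settings simply mirror the earlier example:

```python
# Reuses `model` and `tokenizer` from the loading snippet above.
# `book_chapter.txt` is a hypothetical long document.
with open("book_chapter.txt") as f:
    long_document = f.read()

prompt = f"[INST]\nSummarize the following text:\n{long_document}\n[/INST]\n\n"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
print(f"Prompt length: {input_ids.shape[1]} tokens")  # can approach 32K

output = model.generate(input_ids, max_new_tokens=512,
                        temperature=0.7, repetition_penalty=1.1, top_p=0.7, top_k=50)

# Decode only the newly generated tokens, skipping the prompt.
summary = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
print(summary)
```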
## Model Evaluation

We evaluate the model from three aspects: 1) [Alpaca Eval](https://tatsu-lab.github.io/alpaca_eval/); 2) [Rouge score over BookSum](https://together.ai/blog/Llama-2-7B-32K); and 3) [Accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/Llama-2-7B-32K). We compare with models including [GPT-3.5-Turbo-16K](https://platform.openai.com/docs/models/gpt-3-5), [Llama-2-7b-chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), [Longchat-7b-16k](https://huggingface.co/lmsys/longchat-7b-16k) and [Longchat-7b-v1.5-32k](https://huggingface.co/lmsys/longchat-7b-v1.5-32k). We summarize the results below:

* Alpaca Eval

| Model | win_rate | standard_error | n_total | avg_length |
| -------- | ------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 71.37 | 1.59 | 805 | 1479 |
| Llama-2-7B-32K-Instruct | 70.36 | 1.61 | 803 | 1885 |
| oasst-rlhf-llama-33b | 66.52 | 1.66 | 805 | 1079 |
| text_davinci_003 | 50.00 | 0.00 | 805 | 307 |
| falcon-40b-instruct | 45.71 | 1.75 | 805 | 662 |
| alpaca-farm-ppo-human | 41.24 | 1.73 | 805 | 803 |
| alpaca-7b | 26.46 | 1.54 | 805 | 396 |
| text_davinci_001 | 15.17 | 1.24 | 804 | 296 |

* Rouge Score over BookSum

| Model | R1 | R2 | RL |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.055 | 0.008 | 0.046 |
| Longchat-7b-16k | 0.303 | 0.055 | 0.160 |
| Longchat-7b-v1.5-32k | 0.308 | 0.057 | 0.163 |
| GPT-3.5-Turbo-16K | 0.324 | 0.066 | 0.178 |
| Llama-2-7B-32K-Instruct (ours) | 0.336 | 0.076 | 0.184 |

* Accuracy over MQA

| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| Llama-2-7B-Chat-hf | 0.448 | 0.421 | 0.354 |
| Longchat-7b-16k | 0.510 | 0.473 | 0.428 |
| Longchat-7b-v1.5-32k | 0.534 | 0.516 | 0.479 |
| GPT-3.5-Turbo-16K | 0.622 | 0.609 | 0.577 |
| Llama-2-7B-32K-Instruct (ours) | 0.622 | 0.604 | 0.589 |

## Limitations and Bias

As with all language models, Llama-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.

## Community

Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4).