---
license: llama2
tags:
  - llama
  - llama-2
  - facebook
  - meta
  - text-generation-inference
  - quantized
  - gguf
  - 32k-context
  - togethercomputer
language:
  - en
pipeline_tag: text-generation
---

# LLaMA-2-7B-32K-Instruct_GGUF

Together Computer, Inc. has released Llama-2-7B-32K-Instruct, a model based on Meta AI's LLaMA-2-7B, but fine-tuned for context lengths up to 32K using "Position Interpolation" and "Rotary Position Embeddings" (RoPE).

While the current version of llama.cpp already supports such large context lengths, it requires quantized model files in the new GGUF format - and that's where this repo comes in: it contains the following quantizations of the original weights from Together's fine-tuned model

(strikethrough files are currently being uploaded)

Nota bene: while RoPE makes inference with large contexts possible, you still need an awful lot of RAM when doing so. And since "32K" does not mean that you always have to use a context size of 32768 tokens (only that the model was fine-tuned for that size), it is recommended that you keep your context as small as possible
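For illustration only (the file name and options below are examples, and exact option names may differ between llama.cpp versions): assuming you already have a llama.cpp build and one of the GGUF files from this repo, the context size is simply passed at inference time, so a smaller window such as 8192 tokens can be chosen to save RAM:

```bash
# minimal sketch - model file name and option names are illustrative only
./main \
  -m ./LLaMA-2-7B-32K-Instruct-Q4_0.gguf \
  -c 8192 \
  -n 256 \
  -p "Write a short summary of the GGUF file format."
```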

If you need quantizations for Together Computer's Llama-2-7B-32K model, then look for LLaMA-2-7B-32K_GGUF

## How Quantization was done

Since the author does not want arbitrary Python stuff to loiter on his computer, the quantization was done using Docker.

Assuming that you have Docker Desktop installed on your system and a basic knowledge of how to use it, you may just follow the instructions below to generate your own quantizations:

Nota bene: you will need at least 30 GB of free disk space, plus additional room depending on which quantizations you generate

  1. create a new folder called llama.cpp_in_Docker
    this folder will later be mounted into the Docker container and will store the quantization results
  2. download the weights for the fine-tuned LLaMA-2 model from Hugging Face into a subfolder of llama.cpp_in_Docker (let's call the new folder LLaMA-2-7B-32K-Instruct) - a command-line sketch covering steps 1, 2 and 4 follows after the docker command below
  3. within the Docker Desktop, search for and download a basic-python image - just use one of the most popular ones
  4. from a terminal session on your host computer (i.e., not a Docker container!), start a new container for the downloaded image which mounts the folder we created before:
```bash
docker run --rm \
  -v ./llama.cpp_in_Docker:/llama.cpp \
  -t basic-python /bin/bash
```

(you may have to adjust the path to your local folder)
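For illustration only (the repository URL, image tag, and paths below are assumptions, not part of the original instructions): on a Linux or macOS host, steps 1, 2 and 4 might look as follows when done from the command line, using git to fetch the weights and the official python image as the "basic-python" image. Adding -i (i.e., -it) also lets you use the container's shell directly from your host terminal instead of the "Terminal" tab in Docker Desktop:

```bash
# sketch only - repository URL, image tag, and paths are assumptions
mkdir llama.cpp_in_Docker            # step 1
cd llama.cpp_in_Docker

# step 2: fetch the fine-tuned weights (git-lfs is required for the large files)
git lfs install
git clone https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct \
  LLaMA-2-7B-32K-Instruct
cd ..

# step 4: start a container with the folder mounted at /llama.cpp
docker run --rm -it \
  -v "$(pwd)/llama.cpp_in_Docker":/llama.cpp \
  python:3.11 /bin/bash
```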

  5. back in Docker Desktop, open the "Terminal" tab of the started container and enter the following commands (one after the other - copying the complete list and pasting it into the terminal as a whole does not always seem to work properly):
```bash
apt update
apt-get install software-properties-common -y
apt-get update
apt-get install g++ git make -y
cd /llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
```
  6. now open the "Files" tab and navigate to the file /llama.cpp/llama.cpp/Makefile, right-click on it and choose "Edit file"
  7. search for aarch64 and - in the line found (which looks like `ifneq ($(filter aarch64%,$(UNAME_M)),)`) - change `ifneq` to `ifeq`
  8. save your change using the disk icon in the upper right corner of the editor pane and open the "Terminal" tab again
  9. now enter the following commands:
```bash
make
python3 -m pip install -r requirements.txt
python3 convert.py ../LLaMA-2-7B-32K-Instruct
```
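Optionally (this check is not part of the original steps), you can confirm that the conversion produced the 16-bit GGUF file which the quantization step below expects:

```bash
# convert.py writes the f16 GGUF next to the original weights
ls -lh ../LLaMA-2-7B-32K-Instruct/ggml-model-f16.gguf
```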
  10. you are now ready to run the actual quantization, e.g., using
```bash
./quantize ../LLaMA-2-7B-32K-Instruct/ggml-model-f16.gguf \
   ../LLaMA-2-7B-32K-Instruct/LLaMA-2-7B-32K-Instruct-Q4_0.gguf Q4_0
```
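Other quantizations follow the same pattern, with the quantization type as the last argument; running ./quantize without arguments prints the types supported by your llama.cpp build. As a sketch (Q5_K_M is merely an example):

```bash
# same input file, different quantization type - Q5_K_M is merely an example
./quantize ../LLaMA-2-7B-32K-Instruct/ggml-model-f16.gguf \
   ../LLaMA-2-7B-32K-Instruct/LLaMA-2-7B-32K-Instruct-Q5_K_M.gguf Q5_K_M
```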
  11. run any quantizations you need and stop the container when finished (the container will automatically be deleted, but the generated files will remain available on your host computer)
  12. the basic-python image may also be deleted (manually) unless you plan to use it again in the near future

You are now free to move the quantization results to wherever you need them and run inferences with context lengths of up to 32K (depending on the amount of memory you have available - long contexts need a lot of RAM)

## License

Concerning the license(s):