Update README.md

## Model Description

LLaMA-2-7B-32K-Chat is an open-source, long-context chat model finetuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) over high-quality instructions and chat data.
We built Llama-2-7B-32K-Chat with fewer than 200 lines of Python using the [Together API](https://together.ai/blog/api-announcement), and we also make the recipe fully available.
We hope this enables everyone to finetune their own version of [Llama-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) — play with the [Together API](https://together.ai/blog/api-announcement) and give us feedback!
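
As a rough illustration of what such a recipe can look like, the sketch below submits a fine-tuning job through the Together Python client. The `together.Finetune.create` call, its parameters, and the file id are assumptions based on the 2023-era client, not the released recipe itself.

```python
# Hypothetical sketch of launching a fine-tuning job via the Together API.
# The client calls, parameter names, and values are assumptions, not the
# actual <200-line recipe referenced above.
import together

together.api_key = "YOUR_TOGETHER_API_KEY"  # assumed authentication mechanism

job = together.Finetune.create(
    training_file="file-id-of-your-uploaded-chat-data",  # placeholder file id
    model="togethercomputer/LLaMA-2-7B-32K",             # the 32K base model
    n_epochs=3,
)
print(job)
```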

Llama-2-7B-32K-Chat is fine-tuned over 19K single- and multi-round conversations generated by human instructions and Llama-2-70B-Chat outputs.
The dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
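
To inspect or reuse that data, it can be loaded straight from the Hub with the `datasets` library; this is a small sketch, and the split name is an assumption rather than something stated in this card.

```python
from datasets import load_dataset

# Load the released fine-tuning conversations ("train" split assumed)
ds = load_dataset("togethercomputer/llama-instruct", split="train")
print(ds)    # dataset size and columns
print(ds[0]) # one conversation example
```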

## Inference

You can try out LLaMA-2-7B-32K-Chat for inference through the [Together API](https://together.ai/blog/api-announcement); the updated inference stack allows for efficient inference.
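
A minimal API call might look like the sketch below; the `together.Complete.create` call, its parameters, and the model identifier string are assumptions about the Together client rather than something documented in this card.

```python
# Hypothetical Together API inference sketch; the client call, model id string,
# and parameters are assumptions.
import together

together.api_key = "YOUR_TOGETHER_API_KEY"

response = together.Complete.create(
    prompt="[INST] Write a poem about open-source AI [\\INST]",
    model="togethercomputer/LLaMA-2-7B-32K-Chat",
    max_tokens=256,
    temperature=0.7,
)
print(response)
```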

To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance:

```
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
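
If the build succeeds, a quick sanity check (not part of the original instructions) confirms that Flash Attention is importable:

```python
# Verify that flash-attn installed correctly; expect a 2.x version string
import flash_attn
print(flash_attn.__version__)
```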

You can use this model directly from the Hugging Face Model Hub or fine-tune it on your own data using OpenChatKit.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the model; trust_remote_code=True enables the
# repository's custom flash-attention code path
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K", trust_remote_code=True, torch_dtype=torch.float16)

input_context = "Your text here"
input_ids = tokenizer.encode(input_context, return_tensors="pt")
# do_sample=True is required for temperature to have an effect
output = model.generate(input_ids, max_length=128, do_sample=True, temperature=0.7)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```

Alternatively, you can set `trust_remote_code=False` if you prefer not to use flash attention.
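
For example, assuming the same imports as in the snippet above:

```python
# Load without the custom flash-attention code path
model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/LLaMA-2-7B-32K",
    trust_remote_code=False,
    torch_dtype=torch.float16,
)
```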

To chat with the model, the prompt follows this format:

```
[INST] Write a song about elephants [\INST]
```
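
Putting the pieces together, a minimal chat call with the tokenizer and model loaded above might look like this sketch (the instruction text is only an example):

```python
# Wrap the user message in the chat prompt format and generate a reply
prompt = "[INST] Write a song about elephants [\\INST]"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```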

## Limitations and Bias