---
license: llama2
language:
- en
library_name: transformers
datasets:
- togethercomputer/llama-instruct
---

# LLaMA-2-7B-32K-Instruct

## Model Description

LLaMA-2-7B-32K-Instruct is an open-source, long-context chat model fine-tuned from [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) over high-quality instruction and chat data.
We built LLaMA-2-7B-32K-Instruct with fewer than 200 lines of Python using the [Together API](https://together.ai/blog/api-announcement), and we make the [recipe fully available](https://github.com/togethercomputer/LLaMA-2-32K-Instruct).
We hope this enables everyone to fine-tune their own version of [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K). Play with the [Together API](https://together.ai/blog/api-announcement) and give us feedback!

## Data Collection Details

LLaMA-2-7B-32K-Instruct is fine-tuned on a combination of two parts:
1. **19K single- and multi-round conversations generated from human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**.
   We collected the dataset following the distillation paradigm used by Alpaca, Vicuna, WizardLM, and Orca: the data is produced by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)); a sketch of this querying step appears after this list.
   The complete dataset is released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
   We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/LLaMA-2-32K-Chat).
   
2. **Long-context Summarization and Long-context QA**.
   We follow the recipe of [LLaMA-2-7B-32K](https://together.ai/blog/llama-2-7b-32k), and train our model with the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172).
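To illustrate the distillation step in part 1, here is a minimal sketch of collecting one response by querying Llama-2-70B-Chat through Together's OpenAI-compatible chat completions endpoint. The endpoint path, model identifier, and sampling parameters are our assumptions, not the exact pipeline used for this dataset; see the released recipe for the real one.

```python
# Hypothetical sketch of the distillation step: send a human-written
# instruction to a strong chat model and record its response.
# Endpoint, model string, and parameters are assumptions.
import os
import requests

resp = requests.post(
    "https://api.together.xyz/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json={
        "model": "meta-llama/Llama-2-70b-chat-hf",  # assumed model identifier
        "messages": [{"role": "user", "content": "Explain attention in one paragraph."}],
        "max_tokens": 512,
        "temperature": 0.7,
    },
    timeout=60,
)
answer = resp.json()["choices"][0]["message"]["content"]
```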

The final data mixture used for model fine-tuning is 19K instruction (50%) + BookSum (25%) + MQA (25%).
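As a rough illustration of this 50/25/25 mixture, the components can be interleaved with the Hugging Face `datasets` library. Only `togethercomputer/llama-instruct` below is the released dataset; the BookSum and MQA handles are hypothetical placeholders for however you have those splits loaded.

```python
# Illustrative sketch of the 50/25/25 fine-tuning mixture.
from datasets import load_dataset, interleave_datasets

instruct = load_dataset("togethercomputer/llama-instruct", split="train")
booksum = load_dataset("your-org/booksum-long", split="train")  # hypothetical handle
mqa = load_dataset("your-org/multi-doc-qa", split="train")      # hypothetical handle

mixture = interleave_datasets(
    [instruct, booksum, mqa],
    probabilities=[0.5, 0.25, 0.25],  # instruction 50%, BookSum 25%, MQA 25%
    seed=42,
)
```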

## Model Usage

We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.
Alternatively, you can load the model directly from the Hugging Face model hub:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Instruct")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Instruct", trust_remote_code=True, torch_dtype=torch.float16)
# Wrap the instruction in the [INST] ... [/INST] tags described below.
input_ids = tokenizer.encode("[INST] Write a poem about cats [/INST]", return_tensors="pt")
# max_length and temperature are examples; tune them to your task.
output = model.generate(input_ids, max_length=512, do_sample=True, temperature=0.7)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```

The model is also hosted on [Together Playground](https://api.together.xyz/playground). You can play with the model there using prompts formatted as:

```
[INST] <your instruction here> [/INST]
```
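For programmatic use, a small helper (our own illustration, not part of the released recipe) keeps the tags consistent for single-turn prompts:

```python
def format_prompt(instruction: str) -> str:
    """Wrap a single-turn instruction in the [INST] ... [/INST] tags the model expects."""
    return f"[INST] {instruction} [/INST]"

prompt = format_prompt("Write a poem about cats")
```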

For example, if we query the model with 

```
[INST] Write a poem about cats [/INST]
```

the model will return:

```
Sure, here's a short poem about cats:

Cats, cats, oh how they enthrall,
With their soft fur and gentle grace,
They purr with joy, their eyes aglow,
And steal our hearts with their charm and grace.

In their own time, they move about,
With graceful strides, they tiptoe out,
Their whiskers twitching, as they go,
Their curiosity piqued, they want to know.

They climb, they jump, they play, they chase,
Their agility, a sight to behold,
Their naps, a treat, a break from the chase,
A time to rest, a time to be told.

Cats, cats, they're unique and special,
A part of our lives, a treasure to hold,
Their personalities, so diverse,
Their charm, a gift, that's forever told.
```

## Model Evaluation

We evaluate the model on three aspects: 1) [Normalized perplexity](https://together.ai/blog/llama-2-7b-32k) over the [PG19 dataset](https://huggingface.co/datasets/pg19);
2) [Rouge score over BookSum](https://together.ai/blog/llama-2-7b-32k); and
3) [Accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/llama-2-7b-32k). We summarize the results below:

* Normalized Perplexity over PG19

| Model | 2K Seq | 4K Seq | 8K Seq | 16K Seq | 32K Seq |
| -------- | ------- | ------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 1.844 | 1.833 | N/A | N/A | N/A |
| LLaMA-2-7B-32K-Instruct (ours) | 1.813 | 1.798 | 1.781 | 1.778 | 1.772|

* Rouge Score over BookSum

| Model | Rouge-1 | Rouge-2 | Rouge-L |
| -------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 0.055 | 0.008 | 0.046 |
| LLaMA-2-7B-32K-Instruct (ours) | 0.365 | 0.086 | 0.192 |

* Accuracy over MQA

| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 0.384 | 0.375 | 0.313 |
| LLaMA-2-7B-32K-Instruct (ours) | 0.451 | 0.434 | 0.373 |

We observe that LLaMA-2-7B-32K-Instruct achieves comparable, and often better, perplexity, Rouge scores, and accuracy than the original LLaMA-2-7B-Chat model.
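For reference, Rouge scores like those above can be computed with the `evaluate` library; the prediction and reference strings below are placeholders for model outputs and gold BookSum summaries.

```python
# Minimal sketch of the Rouge computation; the strings are placeholders.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["<model-generated summary>"],  # placeholder
    references=["<gold BookSum summary>"],      # placeholder
)
print(scores["rouge1"], scores["rouge2"], scores["rougeL"])
```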

## Limitations and Bias

As with all language models, LLaMA-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.

## Community

Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4).