Update README.md

## Model Description

LLaMA-2-7B-32K-Chat is an open-source, long-context chat model finetuned from [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) over high-quality instruction and chat data.

We built Llama-2-7B-32K-Chat with fewer than 200 lines of Python using the [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/LLaMA-2-32K-Chat).
We hope that this can enable everyone to finetune their own version of [LLaMA-2-7B-32K](https://huggingface.co/togethercomputer/LLaMA-2-7B-32K) — play with the [Together API](https://together.ai/blog/api-announcement) and give us feedback!

## Data Collection Details

LLaMA-2-7B-32K-Chat is fine-tuned over a combination of two parts:

1. **19K high-quality instruction and chat examples**.
The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/LLaMA-2-32K-Chat).

2. **Long-context Summarization and Long-context QA**.
We follow the recipe of [LLaMA-2-7B-32K](https://together.ai/blog/llama-2-7b-32k), and train our model with the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172).
BookSum features source documents from the literature domain, including novels, plays, and stories, with human-written, highly abstractive summaries; we focus on chapter-level data, which requires the model to read through each chapter in full.

The final data mixture used for model finetuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%).

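As a rough sketch of how a 50/25/25 mixture like this can be assembled (this is not the actual training script, and the file names below are placeholders), the three sources can be interleaved with the Hugging Face `datasets` library:

```
from datasets import load_dataset, interleave_datasets

# Hypothetical local JSONL files standing in for the three data sources.
instruct = load_dataset("json", data_files="llama_instruct_19k.jsonl", split="train")
booksum = load_dataset("json", data_files="booksum_chapters.jsonl", split="train")
mqa = load_dataset("json", data_files="multi_doc_qa.jsonl", split="train")

# Sample examples according to the stated 50% / 25% / 25% ratio.
mixture = interleave_datasets(
    [instruct, booksum, mqa],
    probabilities=[0.5, 0.25, 0.25],
    seed=42,
)
```
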
## Model Usage

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Chat")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/LLaMA-2-7B-32K-Chat", trust_remote_code=True, torch_dtype=torch.float16)

# Encode an instruction, generate a completion, and decode it.
input_ids = tokenizer.encode(<your instruction>, return_tensors="pt")
output = model.generate(input_ids, max_length=..., temperature=...)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```

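Purely as an illustration of how the placeholders above might be filled in (the instruction string and generation settings are arbitrary examples, and a CUDA GPU is assumed since the weights are loaded in float16):

```
# Arbitrary example values for the placeholders above, not recommended settings.
model = model.to("cuda")  # float16 inference assumes a GPU

instruction = "Summarize the plot of Romeo and Juliet in three sentences."
input_ids = tokenizer.encode(instruction, return_tensors="pt").to("cuda")

output = model.generate(
    input_ids,
    max_new_tokens=256,  # bound the generated continuation instead of the total length
    temperature=0.7,
    do_sample=True,      # temperature only takes effect when sampling is enabled
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
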
The model is also hosted on [Together Playground](https://api.together.xyz/playground), where you can simply play with the model using the chat prompt format.

## Model Evaluation

We evaluate the model from three aspects: 1) [normalized perplexity](https://together.ai/blog/llama-2-7b-32k) over the [PG19 dataset](https://huggingface.co/datasets/pg19); 2) [Rouge score over BookSum](https://together.ai/blog/llama-2-7b-32k); and 3) [accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/llama-2-7b-32k). We summarize the results below:

* Normalized Perplexity over PG19

| Model | 2K Seq | 4K Seq | 8K Seq | 16K Seq | 32K Seq |
| -------- | ------- | ------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 1.844 | 1.833 | N/A | N/A | N/A |
| LLaMA-2-7B-32K-Chat (ours) | 1.813 | 1.798 | 1.781 | 1.778 | 1.772 |

* Rouge Score over BookSum

| Model | R1 | R2 | RL |
| -------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 0.055 | 0.008 | 0.046 |
| LLaMA-2-7B-32K-Chat (ours) | 0.365 | 0.086 | 0.192 |

* Accuracy over MQA

| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 0.384 | 0.375 | 0.313 |
| LLaMA-2-7B-32K-Chat (ours) | 0.451 | 0.434 | 0.373 |

We observe that LLaMA-2-7B-32K-Chat obtains comparable, and in several cases better, perplexity, Rouge scores, and accuracy than the original LLaMA-2-7B-Chat model.

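For reference, Rouge scores of the kind reported above can be computed with the `evaluate` library; the strings below are dummy stand-ins for model outputs and BookSum reference summaries, and the exact evaluation protocol is the one described in the linked blog post.

```
# Minimal Rouge-scoring sketch (pip install evaluate rouge_score); dummy strings, not BookSum data.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["The chapter follows the heroine as she returns to her childhood home."],
    references=["In this chapter, the protagonist comes back to the house where she grew up."],
)
print(scores["rouge1"], scores["rouge2"], scores["rougeL"])  # R1 / R2 / RL as in the table above
```
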
## Limitations and Bias