add measurement.json
- README.md +118 -0
- measurement.json +0 -0
README.md
ADDED
@@ -0,0 +1,118 @@
---
library_name: transformers
license: llama3
datasets:
- aqua_rat
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
---

# Smaug-Llama-3-70B-Instruct-32K

### Built with Meta Llama 3

This is a 32K-context version of Smaug-Llama-3-70B-Instruct. It uses PoSE (https://arxiv.org/abs/2309.10400) and LoRA (https://arxiv.org/abs/2106.09685) adapter transfer. More details are coming soon.
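
As a minimal sketch of what the adapter-transfer step can look like (the actual training recipe is not yet published, and the adapter id below is hypothetical, not a real repo), a PEFT-format long-context LoRA could be attached to the base model and merged in:

```python
# Hypothetical sketch of LoRA adapter transfer with PEFT.
# "abacusai/smaug-llama-3-70b-32k-lora" is an illustrative adapter name only.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Attach the long-context LoRA adapter to the base weights...
model = PeftModel.from_pretrained(base, "abacusai/smaug-llama-3-70b-32k-lora")
# ...then fold it in so the merged model can be saved and served without PEFT.
model = model.merge_and_unload()
```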

Needle-In-A-Haystack (https://github.com/jzhang38/EasyContext) heatmap:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/8Z5XgqrZXKcb2hmeTKTT6.png)

### Model Description

- **Developed by:** [Abacus.AI](https://abacus.ai)
- **License:** https://llama.meta.com/llama3/license/
- **Finetuned from model:** [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct).

## How to use

The prompt format is unchanged from Llama 3 70B Instruct.
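
For reference, the rendered Llama 3 Instruct template looks like the following (the `apply_chat_template` call in the snippet below produces it automatically; the blank line after each header is part of the format):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```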

### Use with transformers

See the snippet below for usage with Transformers:

```python
import transformers
import torch

model_id = "abacusai/Smaug-Llama-3-70B-Instruct-32K"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the chat into the Llama 3 Instruct template shown above.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the regular EOS token or Llama 3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
# The pipeline output includes the prompt; print only the completion.
print(outputs[0]["generated_text"][len(prompt):])
```
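
To exercise the 32K window, a toy needle-in-a-haystack probe (a much simplified stand-in for the EasyContext test above, not its harness) can reuse `pipeline` and `terminators` from the snippet:

```python
# Toy needle-in-a-haystack probe: bury one fact in roughly 27K tokens of
# filler text and check the model retrieves it. The passphrase is made up.
filler = "The grass is green. The sky is blue. The sun is warm. " * 2000
needle = "The secret passphrase is 'amethyst-42'. "
haystack = filler[: len(filler) // 2] + needle + filler[len(filler) // 2:]

messages = [{"role": "user", "content": haystack + "\n\nWhat is the secret passphrase?"}]
prompt = pipeline.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipeline(prompt, max_new_tokens=32, eos_token_id=terminators, do_sample=False)
print(outputs[0]["generated_text"][len(prompt):])  # expect: amethyst-42
```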

## Evaluation

### Arena-Hard

Scores vs. selected others (sourced from https://lmsys.org/blog/2024-04-19-arena-hard/#full-leaderboard-with-gpt-4-turbo-as-judge). GPT-4o and Gemini-1.5-pro-latest were missing from the original blog post, and we produced those numbers from a local run using the same methodology.

| Model | Score | 95% Confidence Interval | Average Tokens |
| :---- | ---------: | ----------: | ------: |
| GPT-4-Turbo-2024-04-09 | 82.6 | (-1.8, 1.6) | 662 |
| GPT-4o | 78.3 | (-2.4, 2.1) | 685 |
| Gemini-1.5-pro-latest | 72.1 | (-2.3, 2.2) | 630 |
| Claude-3-Opus-20240229 | 60.4 | (-3.3, 2.4) | 541 |
| **Smaug-Llama-3-70B-Instruct-32K** | 60.0 | (-2.6, 2.1) | 844 |
| Smaug-Llama-3-70B-Instruct | 56.7 | (-2.2, 2.6) | 661 |
| GPT-4-0314 | 50.0 | (-0.0, 0.0) | 423 |
| Claude-3-Sonnet-20240229 | 46.8 | (-2.1, 2.2) | 552 |
| Llama-3-70B-Instruct | 41.1 | (-2.5, 2.4) | 583 |
| GPT-4-0613 | 37.9 | (-2.2, 2.0) | 354 |
| Mistral-Large-2402 | 37.7 | (-1.9, 2.6) | 400 |
| Mixtral-8x22B-Instruct-v0.1 | 36.4 | (-2.7, 2.9) | 430 |
| Qwen1.5-72B-Chat | 36.1 | (-2.5, 2.2) | 474 |
| Command-R-Plus | 33.1 | (-2.1, 2.2) | 541 |
| Mistral-Medium | 31.9 | (-2.3, 2.4) | 485 |
| GPT-3.5-Turbo-0613 | 24.8 | (-1.6, 2.0) | 401 |

Note that we believe response length/verbosity strongly influences the GPT-4 judge in this case, which at least partially explains the improvement in Arena-Hard score for the 32K model.

### OpenLLM Leaderboard Manual Evaluation

| Model | ARC | Hellaswag | MMLU | TruthfulQA | Winogrande | GSM8K* | Average |
| :---- | ---: | ------: | ---: | ---: | ---: | ---: | ---: |
| Smaug-Llama-3-70B-Instruct-32K | 70.1 | TBA | TBA | 61.9 | 82.2 | TBA | TBA |
| Llama-3-70B-Instruct | 71.4 | 85.7 | 80.0 | 61.8 | 82.9 | 91.1 | 78.8 |

\***GSM8K:** The GSM8K numbers quoted here are computed using a recent release of the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness/). The commit used by the leaderboard has a significant issue that impacts models that tend to use `:` in their responses, due to a bug in the stop-word configuration for GSM8K. The issue is covered in more detail in this [GSM8K evaluation discussion](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard/discussions/770). The scores for both Llama-3 and this model are significantly different when evaluated with the updated harness, as the issue with stop words has been addressed.
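
To reproduce GSM8K with a fixed harness, an invocation along these lines should work (flags follow the v0.4-era `lm_eval` CLI; this is a sketch, so check them against your installed release):

```bash
lm_eval --model hf \
  --model_args pretrained=abacusai/Smaug-Llama-3-70B-Instruct-32K,dtype=bfloat16 \
  --tasks gsm8k --num_fewshot 5 --batch_size 8
```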
measurement.json
ADDED
The diff for this file is too large to render.