---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- llama-3.1
- conversational
- instruction following
- reasoning
- function calling
- mergekit
- finetuning
- axolotl
- autoquant
- gguf
---
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/tmOlbERGKP7JSODa6T06J.jpeg)
Authors: [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Pawan Kumar Rajpoot](https://www.linkedin.com/in/pawanrajpoot/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/), [Akshita Sukhlecha](https://www.linkedin.com/in/akshita-sukhlecha/)
**Hugging Face Announcement Blog**: https://huggingface.co/blog/akjindal53244/llama31-storm8b
**Ollama:** `ollama run ajindal/llama3.1-storm:8b`
## TL;DR
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/mDtDeiHwnBupw1k_n99Lf.png)
We present the [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model that outperforms Meta AI's [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) models significantly across diverse benchmarks as shown in the performance comparison plot in the next section. Our approach consists of three key steps:
1. **Self-Curation**: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. **Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g. 70B, 405B).**
2. **Targeted fine-tuning**: We performed [Spectrum](https://arxiv.org/abs/2406.06623)-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR), and freezing the remaining modules. In our work, 50% of layers are frozen.
3. **Model Merging**: We merged our fine-tuned model with the [Llama-Spark](https://huggingface.co/arcee-ai/Llama-Spark) model using [SLERP](https://huggingface.co/blog/mlabonne/merge-models#1-slerp) method. The merging method produces a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents. [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves Llama-3.1-8B-Instruct across 10 diverse benchmarks. These benchmarks cover areas such as instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.
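
To make the merging step more concrete, below is a minimal, illustrative sketch of spherical linear interpolation (SLERP) between two weight tensors. This is *not* the actual merge configuration used for Llama-3.1-Storm-8B (the real merge was performed over full parent checkpoints); the tensor shapes, the `slerp` helper, and the interpolation factor `t=0.5` are assumptions for demonstration only.

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Illustrative spherical linear interpolation between two weight tensors."""
    v0 = w0 / (w0.norm() + eps)                                # unit direction of parent 0
    v1 = w1 / (w1.norm() + eps)                                # unit direction of parent 1
    dot = torch.clamp((v0 * v1).sum(), -1.0, 1.0)
    omega = torch.arccos(dot)                                  # angle between the two parents
    so = torch.sin(omega)
    if so.abs() < eps:                                         # nearly parallel -> plain linear blend
        return (1.0 - t) * w0 + t * w1
    return (torch.sin((1.0 - t) * omega) / so) * w0 + (torch.sin(t * omega) / so) * w1

# Hypothetical example: blend one parameter tensor from each parent with an equal mix (t=0.5)
merged_weight = slerp(0.5, torch.randn(4096, 4096), torch.randn(4096, 4096))
```

Interpolating along the arc between the two parents (rather than linearly) tends to preserve the scale and geometry of the weights, which is why SLERP-based merges produce the smooth blending described above.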
## Introducing Llama-3.1-Storm-8B
[**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class.
As shown in the left subplot of the above figure, [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) model improves Meta-Llama-3.1-8B-Instruct across various benchmarks - Instruction-following ([IFEval](https://arxiv.org/abs/2311.07911)), Knowledge-driven QA benchmarks ([GPQA](https://arxiv.org/abs/2311.12022), [MMLU-Pro](https://arxiv.org/pdf/2406.01574)), Reasoning ([ARC-C](https://arxiv.org/abs/1803.05457), [MuSR](https://arxiv.org/abs/2310.16049), [BBH](https://arxiv.org/pdf/2210.09261)), Reduced Hallucinations ([TruthfulQA](https://arxiv.org/abs/2109.07958)), and Function-Calling ([BFCL](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard)). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources.
We also benchmarked our model with the recently published model [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) built on top of the Llama-3.1-8B-Instruct model. As shown in the right subplot of the above figure, **Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks**, with Hermes-3-Llama-3.1-8B surpassing Llama-3.1-Storm-8B on the MuSR benchmark and both models showing comparable performance on the BBH benchmark.
## Llama-3.1-Storm-8B Model Strengths
Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore [Llama-3.1-Storm-8B](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) and look forward to seeing how it will be utilized in various projects and applications.
| Model Strength | Relevant Benchmarks |
| --- | --- |
| Improved Instruction Following | IFEval Strict (+3.93%) |
| Enhanced Knowledge-Driven Question Answering | GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%) |
| Better Reasoning | ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%) |
| Superior Agentic Capabilities | BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%) |
| Reduced Hallucinations | TruthfulQA (+9%) |
**Note**: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.
## Llama-3.1-Storm-8B Models
1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
2. `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
3. `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF)
4. Ollama: `ollama run ajindal/llama3.1-storm:8b`
## How to Use the Model
The Hugging Face `transformers` library loads the model in `bfloat16` by default, which is the dtype of the [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) checkpoint, so running in `bfloat16` is recommended for the best results.
### Installation
```bash
pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
```
Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM. The following sections illustrate the usage with simple hands-on examples:
### Conversational Use-case
#### Use with [Transformers](https://github.com/huggingface/transformers)
##### Using `transformers.pipeline()` API
```python
import transformers
import torch

model_id = "akjindal53244/Llama-3.1-Storm-8B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]
outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])  # Expected Output: {'role': 'assistant', 'content': '2 + 2 = 4'}
```
##### Using `model.generate()` API
```bash
pip install flash_attn==2.6.3
```
```python
import torch
from transformers import AutoTokenizer, LlamaForCausalLM

# Apply Llama-3.1 chat-template
def format_prompt(user_query):
    template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"""
    return template.format(user_query)

model_id = 'akjindal53244/Llama-3.1-Storm-8B'
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=False,
    use_flash_attention_2=True
)

# Build final input prompt after applying chat-template
prompt = format_prompt("What is 2+2?")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0.01, do_sample=True, eos_token_id=tokenizer.eos_token_id)
response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)  # Expected Output: '2 + 2 = 4'
```
#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is 2+2?"}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: 2 + 2 = 4
```
#### Use with [LitGPT](https://github.com/Lightning-AI/litgpt)
```bash
pip install 'litgpt[all]'
litgpt download akjindal53244/Llama-3.1-Storm-8B --model_name meta-llama/Meta-Llama-3.1-8B
```
```python
from litgpt import LLM
llm = LLM.load(model="akjindal53244/Llama-3.1-Storm-8B")
llm.generate("What do Llamas eat?")
```
### Function Calling Use-case
[**Llama-3.1-Storm-8B**](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) delivers markedly stronger function calling capabilities than Meta-Llama-3.1-8B-Instruct, as demonstrated by the BFCL benchmark.
#### Prompt Format for Function Calling
Llama-3.1-Storm-8B was trained with the following system prompt for function calling:
```
You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
Here are the available functions:
<tools>LIST_OF_TOOLS</tools>
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>
```
The system prompt above should be used with `LIST_OF_TOOLS` replaced by the actual list of available tools, as shown in the example below.
#### Use with [vLLM](https://github.com/vllm-project/vllm)
```python
import json
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
num_gpus = 1

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)

def create_system_prompt(tools_list):
    # Literal braces in the tool-call example are escaped ({{ }}) so str.format only fills the <tools> placeholder
    system_prompt_format = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
Here are the available functions:
<tools>{}</tools>
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
<tool_call>{{"tool_name": <function-name>, "tool_arguments": <args-dict>}}</tool_call>"""

    # Convert the tools list to a string representation
    tools_str = json.dumps(tools_list, ensure_ascii=False)

    # Format the system prompt with the tools list
    system_prompt = system_prompt_format.format(tools_str)
    return system_prompt

# Example tools list
tools_list = [
    {
        "name": "peers",
        "description": "Retrieves a list of company peers given a stock symbol.",
        "parameters": {
            "symbol": {
                "description": "The stock symbol for the company.",
                "type": "str",
                "default": ""
            }
        }
    },
    {
        "name": "web_chain_details",
        "description": "python",
        "parameters": {
            "chain_slug": {
                "description": "The slug identifier for the blockchain (e.g., 'ethereum' for Ethereum mainnet).",
                "type": "str",
                "default": "ethereum"
            }
        }
    }
]

# Create the system prompt with the tools list
system_prompt = create_system_prompt(tools_list)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "I need to understand the details of the Ethereum blockchain for my cryptocurrency project. Can you fetch the details for 'ethereum'?"}
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
```
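The model returns tool calls wrapped in `<tool_call>` tags. Below is a minimal, illustrative sketch of extracting them from the generated text; the helper name `parse_tool_calls` is an assumption, not part of the model's API, and `ast.literal_eval` is used because the expected output above uses Python-style single quotes rather than strict JSON.

```python
import ast
import re

def parse_tool_calls(response_text):
    """Extract tool calls emitted between <tool_call>...</tool_call> tags (illustrative only)."""
    calls = []
    for match in re.findall(r"<tool_call>(.*?)</tool_call>", response_text, re.DOTALL):
        try:
            calls.append(ast.literal_eval(match.strip()))  # tolerate single-quoted dict literals
        except (ValueError, SyntaxError):
            pass  # skip malformed tool calls
    return calls

output = "<tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>"
print(parse_tool_calls(output))
# [{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}]
```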
#### Use with [Ollama](https://ollama.com/)
```python
import ollama

tools = [
    {
        'type': 'function',
        'function': {
            'name': 'get_current_weather',
            'description': 'Get the current weather for a city',
            'parameters': {
                'type': 'object',
                'properties': {
                    'city': {
                        'type': 'string',
                        'description': 'The name of the city',
                    },
                },
                'required': ['city'],
            },
        },
    },
    {
        'type': 'function',
        'function': {
            'name': 'get_places_to_visit',
            'description': 'Get places to visit in a city',
            'parameters': {
                'type': 'object',
                'properties': {
                    'city': {
                        'type': 'string',
                        'description': 'The name of the city',
                    },
                },
                'required': ['city'],
            },
        },
    },
]

response = ollama.chat(
    model='ajindal/llama3.1-storm:8b',
    messages=[
        {'role': 'system', 'content': 'Do not answer any vulgar questions.'},
        {'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
    ],
    tools=tools
)
print(response['message'])  # Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
```
## Alignment Note
While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.
## Cite Our Work
```
@misc {ashvini_kumar_jindal_2024,
author = { {Ashvini Kumar Jindal, Pawan Kumar Rajpoot, Ankur Parikh, Akshita Sukhlecha} },
title = { Llama-3.1-Storm-8B },
year = 2024,
url = { https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B },
doi = { 10.57967/hf/2902 },
publisher = { Hugging Face }
}
```
## Support Our Work
With three team members spread across three time zones, we have won the [NeurIPS LLM Efficiency Challenge 2023](https://llm-efficiency-challenge.github.io/) and four other competitions in the finance and Arabic LLM space. We have also published a [SOTA mathematical reasoning model](https://huggingface.co/akjindal53244/Arithmo-Mistral-7B).
**Llama-3.1-Storm-8B** is our most valuable contribution so far to the open-source community. We are committed to developing efficient generalist LLMs. **We're seeking both computational resources and innovative collaborators to drive this initiative forward.**