|
--- |
|
tags: |
|
- int8 |
|
- w8a8 |
|
language: |
|
- en |
|
- fr |
|
- de |
|
- es |
|
- it |
|
- pt |
|
- zh |
|
- ja |
|
- ru |
|
- ko |
|
license: other |
|
license_name: mrl |
|
inference: false |
|
license_link: https://mistral.ai/licenses/MRL-0.1.md |
|
library_name: vllm |
|
base_model: |
|
- mistralai/Ministral-8B-Instruct-2410 |
|
--- |
|
|
|
# W8A8 Quant of Ministral-8B-Instruct-2410 |
|
Quantization script: <https://github.com/NeoChen1024/scripts/blob/master/llm-compressor-quantize.py> |
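The linked script drives [llm-compressor](https://github.com/vllm-project/llm-compressor). As a rough illustration of what a W8A8 (INT8 weights, INT8 activations) one-shot run looks like, here is a minimal sketch; the recipe, calibration dataset, and hyperparameters below are assumptions and may differ from the actual script:

```py
# Minimal W8A8 sketch with llm-compressor; recipe and calibration
# settings are illustrative assumptions, not the exact script used.
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier

recipe = [
    # Migrate activation outliers into the weights before quantizing.
    SmoothQuantModifier(smoothing_strength=0.8),
    # INT8 weights + INT8 activations on all Linear layers except the head.
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
]

oneshot(
    model="mistralai/Ministral-8B-Instruct-2410",
    dataset="open_platypus",            # calibration data (assumption)
    recipe=recipe,
    output_dir="Ministral-8B-Instruct-2410-W8A8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```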
|
|
|
We introduce two new state-of-the-art models for local intelligence, on-device computing, and at-the-edge use cases. We call them les Ministraux: Ministral 3B and Ministral 8B. |
|
|
|
The Ministral-8B-Instruct-2410 Language Model is an instruct fine-tuned model significantly outperforming existing models of similar size, released under the Mistral Research License. |
|
|
|
If you are interested in using Ministral-3B or Ministral-8B commercially, both of which outperform Mistral-7B, [reach out to us](https://mistral.ai/contact/).
|
|
|
For more details about les Ministraux, please refer to our release [blog post](https://mistral.ai/news/ministraux).
|
|
|
## Ministral 8B Key features |
|
- Released under the **Mistral Research License**; reach out to us for a commercial license
|
- Trained with a **128k context window** with **interleaved sliding-window attention** (see the sketch after this list)
|
- Trained on a large proportion of **multilingual and code data** |
|
- Supports **function calling** |
|
- Vocabulary size of **131k**, using the **V3-Tekken** tokenizer |
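Interleaved sliding-window attention mixes a few full-context layers with many layers that only attend to a recent window of tokens. The sketch below builds such a mask; the exact layer ordering is an assumption inferred from the attention pattern listed in the architecture table further down:

```py
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Causal mask where query i attends only to keys in (i - window, i]."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

# Hypothetical layer schedule matching the "Ragged (128k, 32k, 32k, 32k)"
# pattern: one full-context layer followed by three 32k-window layers,
# repeated across all 36 layers. The true ordering may differ.
windows = [131072 if layer % 4 == 0 else 32768 for layer in range(36)]
masks = {w: sliding_window_mask(seq_len=16, window=w) for w in set(windows)}
```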
|
|
|
### Basic Instruct Template (V3-Tekken) |
|
|
|
``` |
|
<s>[INST]user message[/INST]assistant response</s>[INST]new user message[/INST] |
|
``` |
|
|
|
*For more information about the tokenizer please refer to [mistral-common](https://github.com/mistralai/mistral-common)* |
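As a short sketch of rendering this template programmatically with mistral-common (the exact textual rendering may differ slightly between versions):

```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# V3 tokenizer with the Tekken vocabulary, as used by this model.
tokenizer = MistralTokenizer.v3(is_tekken=True)

tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content="user message")])
)
print(tokenized.text)  # roughly: <s>[INST]user message[/INST]
```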
|
|
|
## Ministral 8B Architecture |
|
|
|
| Feature | Value | |
|
|:---------------------:|:--------------------:| |
|
| **Architecture** | Dense Transformer | |
|
| **Parameters** | 8,019,808,256 | |
|
| **Layers** | 36 | |
|
| **Heads** | 32 | |
|
| **Dim** | 4096 | |
|
| **KV Heads (GQA)** | 8 | |
|
| **Hidden Dim** | 12288 | |
|
| **Head Dim** | 128 | |
|
| **Vocab Size** | 131,072 | |
|
| **Context Length** | 128k | |
|
| **Attention Pattern** | Ragged (128k,32k,32k,32k) | |
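These numbers are internally consistent: assuming untied input/output embeddings, a SwiGLU MLP, and two RMSNorm weights per layer (standard for Mistral-style dense transformers), the dimensions above reproduce the parameter count exactly:

```py
dim, hidden, layers, vocab = 4096, 12288, 36, 131072
heads, kv_heads, head_dim = 32, 8, 128

attn = dim * heads * head_dim           # q_proj
attn += 2 * dim * kv_heads * head_dim   # k_proj + v_proj (GQA: 8 KV heads)
attn += heads * head_dim * dim          # o_proj
mlp = 3 * dim * hidden                  # gate, up, down projections (SwiGLU)
norms = 2 * dim                         # attention + MLP RMSNorm weights

total = layers * (attn + mlp + norms)
total += 2 * vocab * dim                # input embedding + untied LM head
total += dim                            # final RMSNorm
print(f"{total:,}")                     # 8,019,808,256
```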
|
|
|
## Benchmarks |
|
|
|
### Base Models
|
|
|
<u>Knowledge & Commonsense</u> |
|
|
|
| Model | MMLU | AGIEval | Winogrande | Arc-c | TriviaQA | |
|
|:-------------:|:------:|:---------:|:------------:|:-------:|:----------:| |
|
| Mistral 7B Base | 62.5 | 42.5 | 74.2 | 67.9 | 62.5 | |
|
| Llama 3.1 8B Base | 64.7 | 44.4 | 74.6 | 46.0 | 60.2 | |
|
| ***Ministral 8B Base*** | ***<u>65.0</u>*** | ***<u>48.3</u>*** | ***<u>75.3</u>*** | ***<u>71.9</u>*** | ***<u>65.5</u>*** | |
|
| | | | | | | |
|
| Gemma 2 2B Base | 52.4 | 33.8 | 68.7 | 42.6 | 47.8 | |
|
| Llama 3.2 3B Base | 56.2 | 37.4 | 59.6 | 43.1 | 50.7 | |
|
| ***Ministral 3B Base*** | ***<u>60.9</u>*** | ***<u>42.1</u>*** | ***<u>72.7</u>*** | ***<u>64.2</u>*** | ***<u>56.7</u>*** | |
|
|
|
<u>Code & Math</u> |
|
|
|
| Model | HumanEval pass@1 | GSM8K maj@8 |
|
|:-------------:|:-------------------:|:---------------:| |
|
| Mistral 7B Base | 26.8 | 32.0 | |
|
| Llama 3.1 8B Base | ***<u>37.8</u>*** | 42.2 | |
|
| ***Ministral 8B Base*** | 34.8 | ***<u>64.5</u>*** | |
|
| | | | |
|
| Gemma 2 2B Base | 20.1 | 35.5 |
| Llama 3.2 3B Base | 14.6 | 33.5 |
| ***Ministral 3B Base*** | ***<u>34.2</u>*** | ***<u>50.9</u>*** |
|
|
|
<u>Multilingual</u> |
|
|
|
| Model | French MMLU | German MMLU | Spanish MMLU | |
|
|:-------------:|:-------------:|:-------------:|:-------------:| |
|
| Mistral 7B Base | 50.6 | 49.6 | 51.4 | |
|
| Llama 3.1 8B Base | 50.8 | 52.8 | 54.6 | |
|
| ***Ministral 8B Base*** | ***<u>57.5</u>*** | ***<u>57.4</u>*** | ***<u>59.6</u>*** | |
|
| | | | | |
|
| Gemma 2 2B Base | 41.0 | 40.1 | 41.7 | |
|
| Llama 3.2 3B Base | 42.3 | 42.2 | 43.1 | |
|
| ***Ministral 3B Base*** | ***<u>49.1</u>*** | ***<u>48.3</u>*** | ***<u>49.5</u>*** | |
|
|
|
### Instruct Models |
|
|
|
<u>Chat/Arena (gpt-4o judge)</u> |
|
|
|
| Model | MTBench | Arena Hard | Wild bench | |
|
|:-------------:|:---------:|:------------:|:------------:| |
|
| Mistral 7B Instruct v0.3 | 6.7 | 44.3 | 33.1 | |
|
| Llama 3.1 8B Instruct | 7.5 | 62.4 | 37.0 | |
|
| Gemma 2 9B Instruct | 7.6 | 68.7 | ***<u>43.8</u>*** | |
|
| ***Ministral 8B Instruct*** | ***<u>8.3</u>*** | ***<u>70.9</u>*** | 41.3 | |
|
| | | | | |
|
| Gemma 2 2B Instruct | 7.5 | 51.7 | 32.5 | |
|
| Llama 3.2 3B Instruct | 7.2 | 46.0 | 27.2 | |
|
| ***Ministral 3B Instruct*** | ***<u>8.1</u>*** | ***<u>64.3</u>*** | ***<u>36.3</u>*** | |
|
|
|
<u>Code & Math</u> |
|
|
|
| Model | MBPP pass@1 | HumanEval pass@1 | Math maj@1 | |
|
|:-------------:|:-------------:|:------------------:|:-------------:| |
|
| Mistral 7B Instruct v0.3 | 50.2 | 38.4 | 13.2 | |
|
| Gemma 2 9B Instruct | 68.5 | 67.7 | 47.4 | |
|
| Llama 3.1 8B Instruct | 69.7 | 67.1 | 49.3 |
|
| ***Ministral 8B Instruct*** | ***<u>70.0</u>*** | ***<u>76.8</u>*** | ***<u>54.5</u>*** | |
|
| | | | | |
|
| Gemma 2 2B Instruct | 54.5 | 42.7 | 22.8 | |
|
| Llama 3.2 3B Instruct | 64.6 | 61.0 | 38.4 | |
|
| ***Ministral 3B Instruct*** | ***<u>67.7</u>*** | ***<u>77.4</u>*** | ***<u>51.7</u>*** |
|
|
|
<u>Function calling</u> |
|
|
|
| Model | Internal bench | |
|
|:-------------:|:-----------------:| |
|
| Mistral 7B Instruct v0.3 | 6.9 | |
|
| Llama 3.1 8B Instruct | N/A | |
|
| Gemma 2 9B Instruct | N/A | |
|
| ***Ministral 8B Instruct*** | ***<u>31.6</u>*** | |
|
| | | |
|
| Gemma 2 2B Instruct | N/A | |
|
| Llama 3.2 3B Instruct | N/A | |
|
| ***Ministral 3B Instruct*** | ***<u>28.4</u>*** | |
|
|
|
## Usage Examples |
|
|
|
### vLLM (recommended) |
|
|
|
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) |
|
to implement production-ready inference pipelines. |
|
|
|
> [!IMPORTANT] |
|
> Currently, vLLM caps this model at a 32k context size because paged-attention kernels for interleaved sliding-window attention are not yet implemented in vLLM.
> This model card will be updated as soon as vLLM fully supports the interleaved pattern.
> To take advantage of the full 128k context size, we recommend [Mistral Inference](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410#mistral-inference).
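In the meantime, you can make the 32k cap explicit when serving; `--max-model-len` is a standard vLLM flag, shown here with the same launch command used later in this card:

```
vllm serve mistralai/Ministral-8B-Instruct-2410 --tokenizer_mode mistral --config_format mistral --load_format mistral --max-model-len 32768
```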
|
|
|
**_Installation_** |
|
|
|
|
|
Make sure you install `vLLM >= v0.6.4`: |
|
|
|
``` |
|
pip install --upgrade vllm |
|
``` |
|
|
|
Also make sure you have `mistral_common >= 1.4.4` installed: |
|
|
|
``` |
|
pip install --upgrade mistral_common |
|
``` |
|
|
|
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile). |
|
|
|
**_Offline_** |
|
|
|
```py |
|
from vllm import LLM |
|
from vllm.sampling_params import SamplingParams |
|
|
|
model_name = "mistralai/Ministral-8B-Instruct-2410" |
|
|
|
sampling_params = SamplingParams(max_tokens=8192) |
|
|
|
# Note: running Ministral-8B on a single GPU requires 24 GB of GPU RAM.
# To split the requirement over multiple devices, add e.g. tensor_parallel_size=2.
|
llm = LLM(model=model_name, tokenizer_mode="mistral", config_format="mistral", load_format="mistral") |
|
|
|
prompt = "Do we need to think for 10 seconds to find the answer of 1 + 1?" |
|
|
|
messages = [ |
|
{ |
|
"role": "user", |
|
"content": prompt |
|
}, |
|
] |
|
|
|
outputs = llm.chat(messages, sampling_params=sampling_params) |
|
|
|
print(outputs[0].outputs[0].text) |
|
# You don't need to think for 10 seconds to find the answer to 1 + 1. The answer is 2, |
|
# and you can easily add these two numbers in your mind very quickly without any delay. |
|
``` |
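The snippet above loads the original BF16 checkpoint in Mistral format. To run this W8A8 quant itself, point vLLM at this repository instead and drop the Mistral-format flags: llm-compressor saves weights in Hugging Face/compressed-tensors format, which vLLM detects automatically (the repository id below is a placeholder):

```py
from vllm import LLM

# Placeholder id; replace with this repository's actual id.
llm = LLM(model="<this-repo>/Ministral-8B-Instruct-2410-W8A8")
```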
|
|
|
**_Server_** |
|
|
|
You can also use Ministral-8B in a server/client setting. |
|
|
|
1. Spin up a server: |
|
|
|
|
|
``` |
|
vllm serve mistralai/Ministral-8B-Instruct-2410 --tokenizer_mode mistral --config_format mistral --load_format mistral |
|
``` |
|
|
|
**Note:** Running Ministral-8B on a single GPU requires 24 GB of GPU RAM. |
|
|
|
If you want to divide the GPU requirement over multiple devices, add *e.g.* `--tensor-parallel-size 2`
|
|
|
2. Then query the server from a client:
|
|
|
``` |
|
curl --location 'http://<your-node-url>:8000/v1/chat/completions' \ |
|
--header 'Content-Type: application/json' \ |
|
--header 'Authorization: Bearer token' \ |
|
--data '{ |
|
"model": "mistralai/Ministral-8B-Instruct-2410", |
|
"messages": [ |
|
{ |
|
"role": "user", |
|
"content": "Do we need to think for 10 seconds to find the answer of 1 + 1?" |
|
} |
|
] |
|
}' |
|
|
|
``` |
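Equivalently, you can query the server with the `openai` Python client, since vLLM exposes an OpenAI-compatible API (base URL and token match the curl call above):

```py
from openai import OpenAI

client = OpenAI(base_url="http://<your-node-url>:8000/v1", api_key="token")

response = client.chat.completions.create(
    model="mistralai/Ministral-8B-Instruct-2410",
    messages=[
        {"role": "user", "content": "Do we need to think for 10 seconds to find the answer to 1 + 1?"},
    ],
)
print(response.choices[0].message.content)
```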
|
|
|
### Mistral-inference |
|
|
|
We recommend using [mistral-inference](https://github.com/mistralai/mistral-inference) to quickly try out / "vibe-check" the model. |
|
|
|
|
|
**_Install_** |
|
|
|
Make sure to have `mistral_inference >= 1.5.0` installed. |
|
|
|
``` |
|
pip install mistral_inference --upgrade |
|
``` |
|
|
|
**_Download_** |
|
|
|
```py |
|
from huggingface_hub import snapshot_download |
|
from pathlib import Path |
|
|
|
mistral_models_path = Path.home().joinpath('mistral_models', '8B-Instruct') |
|
mistral_models_path.mkdir(parents=True, exist_ok=True) |
|
|
|
snapshot_download(repo_id="mistralai/Ministral-8B-Instruct-2410", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path) |
|
``` |
|
|
|
### Chat |
|
|
|
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using:
|
|
|
``` |
|
mistral-chat $HOME/mistral_models/8B-Instruct --instruct --max_tokens 256 |
|
``` |
|
|
|
### Passkey detection |
|
|
|
> [!IMPORTANT] |
|
> In this example the passkey message contains more than 100k tokens and mistral-inference
> does not have a chunked pre-fill mechanism. You will therefore need a lot of
> GPU memory to run the example below (80 GB). For a more memory-efficient
> solution we recommend using vLLM.
|
|
|
```py |
|
from mistral_inference.transformer import Transformer |
|
from pathlib import Path |
|
import json |
|
from mistral_inference.generate import generate |
|
from huggingface_hub import hf_hub_download |
|
|
|
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer |
|
from mistral_common.protocol.instruct.messages import UserMessage |
|
from mistral_common.protocol.instruct.request import ChatCompletionRequest |
|
|
|
def load_passkey_request() -> ChatCompletionRequest: |
|
passkey_file = hf_hub_download(repo_id="mistralai/Ministral-8B-Instruct-2410", filename="passkey_example.json") |
|
|
|
with open(passkey_file, "r") as f: |
|
data = json.load(f) |
|
|
|
message_content = data["messages"][0]["content"] |
|
return ChatCompletionRequest(messages=[UserMessage(content=message_content)]) |
|
|
|
mistral_models_path = Path.home().joinpath('mistral_models', '8B-Instruct')  # as downloaded above

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
|
model = Transformer.from_folder(mistral_models_path, softmax_fp32=False) |
|
|
|
completion_request = load_passkey_request() |
|
|
|
tokens = tokenizer.encode_chat_completion(completion_request).tokens |
|
|
|
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) |
|
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]) |
|
|
|
print(result) # The pass key is 13005. |
|
``` |
|
|
|
|
|
### Instruct following |
|
|
|
```py |
|
from mistral_inference.transformer import Transformer |
|
from mistral_inference.generate import generate |
|
|
|
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer |
|
from mistral_common.protocol.instruct.messages import UserMessage |
|
from mistral_common.protocol.instruct.request import ChatCompletionRequest |
|
|
|
|
|
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', '8B-Instruct')  # as downloaded above

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
|
model = Transformer.from_folder(mistral_models_path) |
|
|
|
completion_request = ChatCompletionRequest(messages=[UserMessage(content="How often does the letter r occur in Mistral?")]) |
|
|
|
tokens = tokenizer.encode_chat_completion(completion_request).tokens |
|
|
|
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) |
|
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]) |
|
|
|
print(result) |
|
``` |
|
|
|
### Function calling |
|
|
|
```py |
|
from mistral_common.protocol.instruct.tool_calls import Function, Tool |
|
from mistral_inference.transformer import Transformer |
|
from mistral_inference.generate import generate |
|
|
|
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer |
|
from mistral_common.protocol.instruct.messages import UserMessage |
|
from mistral_common.protocol.instruct.request import ChatCompletionRequest |
|
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy |
|
|
|
|
|
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', '8B-Instruct')  # as downloaded above

tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
|
tekken = tokenizer.instruct_tokenizer.tokenizer |
|
tekken.special_token_policy = SpecialTokenPolicy.IGNORE |
|
|
|
model = Transformer.from_folder(mistral_models_path) |
|
|
|
completion_request = ChatCompletionRequest( |
|
tools=[ |
|
Tool( |
|
function=Function( |
|
name="get_current_weather", |
|
description="Get the current weather", |
|
parameters={ |
|
"type": "object", |
|
"properties": { |
|
"location": { |
|
"type": "string", |
|
"description": "The city and state, e.g. San Francisco, CA", |
|
}, |
|
"format": { |
|
"type": "string", |
|
"enum": ["celsius", "fahrenheit"], |
|
"description": "The temperature unit to use. Infer this from the users location.", |
|
}, |
|
}, |
|
"required": ["location", "format"], |
|
}, |
|
) |
|
) |
|
], |
|
messages=[ |
|
UserMessage(content="What's the weather like today in Paris?"), |
|
], |
|
) |
|
|
|
tokens = tokenizer.encode_chat_completion(completion_request).tokens |
|
|
|
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) |
|
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]) |
|
|
|
print(result) |
|
``` |
|
|
|
## The Mistral AI Team |
|
|
|
Albert Jiang, Alexandre Abou Chahine, Alexandre Sablayrolles, Alexis Tacnet, Alodie Boissonnet, Alok Kothari, Amélie Héliou, Andy Lo, Anna Peronnin, Antoine Meunier, Antoine Roux, Antonin Faure, Aritra Paul, Arthur Darcet, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Avinash Sooriyarachchi, Baptiste Rozière, Barry Conklin, Bastien Bouillon, Blanche Savary de Beauregard, Carole Rambaud, Caroline Feldman, Charles de Freminville, Charline Mauro, Chih-Kuan Yeh, Chris Bamford, Clement Auguy, Corentin Heintz, Cyriaque Dubois, Devendra Singh Chaplot, Diego Las Casas, Diogo Costa, Eléonore Arcelin, Emma Bou Hanna, Etienne Metzger, Fanny Olivier Autran, Francois Lesage, Garance Gourdel, Gaspard Blanchet, Gaspard Donada Vidal, Gianna Maria Lengyel, Guillaume Bour, Guillaume Lample, Gustave Denis, Harizo Rajaona, Himanshu Jaju, Ian Mack, Ian Mathew, Jean-Malo Delignon, Jeremy Facchetti, Jessica Chudnovsky, Joachim Studnia, Justus Murke, Kartik Khandelwal, Kenneth Chiu, Kevin Riera, Leonard Blier, Leonard Suslian, Leonardo Deschaseaux, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Sophia Yang, Margaret Jennings, Marie Pellat, Marie Torelli, Marjorie Janiewicz, Mathis Felardos, Maxime Darrin, Michael Hoff, Mickaël Seznec, Misha Jessel Kenyon, Nayef Derwiche, Nicolas Carmont Zaragoza, Nicolas Faurie, Nicolas Moreau, Nicolas Schuhl, Nikhil Raghuraman, Niklas Muhs, Olivier de Garrigues, Patricia Rozé, Patricia Wang, Patrick von Platen, Paul Jacob, Pauline Buche, Pavankumar Reddy Muddireddy, Perry Savas, Pierre Stock, Pravesh Agrawal, Renaud de Peretti, Romain Sauvestre, Romain Sinthe, Roman Soletskyi, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Soham Ghosh, Sylvain Regnier, Szymon Antoniak, Teven Le Scao, Theophile Gervet, Thibault Schueller, Thibaut Lavril, Thomas Wang, Timothée Lacroix, Valeriia Nemychnikova, Wendy Shang, William El Sayed, William Marshall |
|
|