Triangle104 committed
Commit 894bfdc
Parent(s): b9fa503
Upload README.md with huggingface_hub

README.md CHANGED
@@ -1,260 +1,19 @@
---
base_model: mistralai/Mistral-Small-Instruct-2409
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
library_name: transformers
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
tags:
- llama-cpp
- gguf-my-repo
inference: false
extra_gated_description: If you want to learn more about how we process your personal
  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# Triangle104/Mistral-Small-Instruct-2409-Q6_K-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-Small-Instruct-2409`](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-Small-Instruct-2409) for more details on the model.

---
Model details:

Mistral-Small-Instruct-2409 is an instruct fine-tuned version with the following characteristics:

- 22B parameters
- Vocabulary size of 32768
- Supports function calling
- 32k sequence length
Usage Examples

vLLM (recommended)

We recommend using this model with the vLLM library to implement production-ready inference pipelines.

Installation

Make sure you install vLLM >= v0.6.1.post1:

```bash
pip install --upgrade vllm
```
Also make sure you have mistral_common >= 1.4.1 installed:

```bash
pip install --upgrade mistral_common
```

You can also make use of a ready-to-go docker image.
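A minimal sketch of the container route, assuming vLLM's public `vllm/vllm-openai` image (the tag, port mapping, and token handling here are illustrative, not taken from this card):

```bash
# Illustrative only: serve the model via vLLM's OpenAI-compatible server in Docker.
# HUGGING_FACE_HUB_TOKEN is needed because the model repo is gated.
docker run --gpus all -p 8000:8000 \
  -e "HUGGING_FACE_HUB_TOKEN=<your-hf-token>" \
  vllm/vllm-openai:latest \
  --model mistralai/Mistral-Small-Instruct-2409 \
  --tokenizer_mode mistral --config_format mistral --load_format mistral
```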
Offline

```python
from vllm import LLM
from vllm.sampling_params import SamplingParams

model_name = "mistralai/Mistral-Small-Instruct-2409"

sampling_params = SamplingParams(max_tokens=8192)

# Note that running Mistral-Small on a single GPU requires at least 44 GB of GPU RAM.
# To divide the GPU requirement over multiple devices, add e.g. `tensor_parallel_size=2`.
llm = LLM(model=model_name, tokenizer_mode="mistral", config_format="mistral", load_format="mistral")

prompt = "How often does the letter r occur in Mistral?"

messages = [
    {
        "role": "user",
        "content": prompt,
    },
]

outputs = llm.chat(messages, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)
```
Server

You can also use Mistral Small in a server/client setting.

Spin up a server:

```bash
vllm serve mistralai/Mistral-Small-Instruct-2409 --tokenizer_mode mistral --config_format mistral --load_format mistral
```

Note: Running Mistral-Small on a single GPU requires at least 44 GB of GPU RAM.

If you want to divide the GPU requirement over multiple devices, add e.g. `--tensor-parallel-size 2`.

Then query the server from a client:

```bash
curl --location 'http://<your-node-url>:8000/v1/chat/completions' \
    --header 'Content-Type: application/json' \
    --header 'Authorization: Bearer token' \
    --data '{
        "model": "mistralai/Mistral-Small-Instruct-2409",
        "messages": [
            {
                "role": "user",
                "content": "How often does the letter r occur in Mistral?"
            }
        ]
    }'
```
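Since vLLM's server speaks the OpenAI API, the same request can be made from Python; a minimal sketch assuming the `openai` client package (the URL and `token` placeholder mirror the curl call above):

```python
from openai import OpenAI

# Placeholder base URL and API key, matching the curl example above.
client = OpenAI(base_url="http://<your-node-url>:8000/v1", api_key="token")

response = client.chat.completions.create(
    model="mistralai/Mistral-Small-Instruct-2409",
    messages=[{"role": "user", "content": "How often does the letter r occur in Mistral?"}],
)
print(response.choices[0].message.content)
```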
Mistral-inference

We recommend using mistral-inference to quickly try out / "vibe-check" the model.

Install

Make sure to have mistral_inference >= 1.4.1 installed.

```bash
pip install mistral_inference --upgrade
```

Download

```python
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', '22B-Instruct-Small')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Mistral-Small-Instruct-2409", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```

Chat

After installing mistral_inference, a mistral-chat CLI command should be available in your environment. You can chat with the model using:

```bash
mistral-chat $HOME/mistral_models/22B-Instruct-Small --instruct --max_tokens 256
```
Instruct following

```python
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest


tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(messages=[UserMessage(content="How often does the letter r occur in Mistral?")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
```
Function calling

```python
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest


tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the user's location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
    ],
)

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
```
Usage in Hugging Face Transformers

You can also use the Hugging Face transformers library to run inference using various chat templates, or to fine-tune the model. Example for inference:

```python
from transformers import LlamaTokenizerFast, MistralForCausalLM
import torch

device = "cuda"
tokenizer = LlamaTokenizerFast.from_pretrained('mistralai/Mistral-Small-Instruct-2409')
tokenizer.pad_token = tokenizer.eos_token

model = MistralForCausalLM.from_pretrained('mistralai/Mistral-Small-Instruct-2409', torch_dtype=torch.bfloat16)
model = model.to(device)

prompt = "How often does the letter r occur in Mistral?"

messages = [
    {"role": "user", "content": prompt},
]

model_input = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(device)
gen = model.generate(model_input, max_new_tokens=150)
dec = tokenizer.batch_decode(gen)
print(dec)
```
And you should obtain:

```
<s>
[INST]
How often does the letter r occur in Mistral?
[/INST]
To determine how often the letter "r" occurs in the word "Mistral,"
we can simply count the instances of "r" in the word.
The word "Mistral" is broken down as follows:
- M
- i
- s
- t
- r
- a
- l
Counting the "r"s, we find that there is only one "r" in "Mistral."
Therefore, the letter "r" occurs once in the word "Mistral."
</s>
```
The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Diogo Costa, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
---
base_model: unsloth/Mistral-Small-Instruct-2409
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
tags:
- unsloth
- mistral
- llama-cpp
- gguf-my-repo
---

# Triangle104/Mistral-Small-Instruct-2409-Q6_K-GGUF
This model was converted to GGUF format from [`unsloth/Mistral-Small-Instruct-2409`](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) for more details on the model.
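To grab the quantized file directly, one option is `huggingface-cli`; a minimal sketch (the `--include` glob avoids guessing the exact .gguf filename):

```bash
# Download the Q6_K GGUF from this repo into the current directory.
huggingface-cli download Triangle104/Mistral-Small-Instruct-2409-Q6_K-GGUF \
  --include "*.gguf" --local-dir .
```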
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
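The steps below follow the standard GGUF-my-repo template; the exact `.gguf` filename is an assumption derived from the repo name:

```bash
brew install llama.cpp

# CLI: fetch and run the quant straight from the Hub (filename assumed).
llama-cli --hf-repo Triangle104/Mistral-Small-Instruct-2409-Q6_K-GGUF \
  --hf-file mistral-small-instruct-2409-q6_k.gguf \
  -p "The meaning to life and the universe is"

# Server: expose a local OpenAI-compatible endpoint (filename assumed).
llama-server --hf-repo Triangle104/Mistral-Small-Instruct-2409-Q6_K-GGUF \
  --hf-file mistral-small-instruct-2409-q6_k.gguf \
  -c 2048
```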