Commit 6f3ecec
Parent(s): 21a09ed

Suggestions for model card (#1)

- Suggestions for model card (ac597345ad2cdc043a87d6b60c27fae43b41cfa5)

Co-authored-by: Omar Sanseviero <[email protected]>
README.md CHANGED
@@ -10,15 +10,18 @@ license: apache-2.0
 > Sign up for the Apple Beta Software Program [here](https://beta.apple.com/en/) to get access.
 > Check out the companion blog post to learn more about what's new in iOS 18 & macOS 15 [here](https://hf.co/blog/wwdc24).
 
-This repo contains Mistral 7B Instruct v0.3 converted to CoreML in both FP16 & Int4 precision.
+This repo contains [Mistral 7B Instruct v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) converted to CoreML in both FP16 & Int4 precision.
 
 Mistral-7B-Instruct-v0.3 is an instruct fine-tuned version of the Mistral-7B-v0.3 by Mistral AI.
 
-Mistral-7B-v0.3 has the following changes compared to
+Mistral-7B-v0.3 has the following changes compared to the v0.2 model:
 - Extended vocabulary to 32768
 - Supports v3 Tokenizer
 - Supports function calling
 
+To learn more about the model, we recommend looking at its documentation and original model card.
+
+
 ## Download
 
 Install `huggingface-cli`
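For context on that install step, a minimal sketch assuming a Python environment where `pip` is available (the card's own install command falls outside the hunks shown here):

```bash
# Install the Hugging Face CLI (it ships with the huggingface_hub package).
pip install -U "huggingface_hub[cli]"

# Sanity check: prints the available huggingface-cli commands.
huggingface-cli --help
```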
@@ -31,12 +34,13 @@ To download one of the `.mlpackage` folders to the `models` directory:
 
 ```bash
 huggingface-cli download \
---local-dir models
-
+  --local-dir models \
+  --local-dir-use-symlinks False \
+  --repo_id apple/coreml-mistral-7b-instruct-v0.3 \
   --include "StatefulMistral7BInstructInt4.mlpackage/*"
 ```
 
-To download everything,
+To download everything, remove the `--include` argument.
 
 ## Integrate in Swift apps
 
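To illustrate the "download everything" case added above, a minimal sketch with the `--include` filter dropped; here the repository id is passed as a positional argument, which is how current `huggingface-cli` releases accept it:

```bash
# Without --include, both the FP16 and Int4 .mlpackage folders
# (and any other files in the repo) are downloaded into ./models.
huggingface-cli download apple/coreml-mistral-7b-instruct-v0.3 \
  --local-dir models
```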
@@ -49,7 +53,3 @@ You can integrate the model right into your Swift apps using the `preview` branch
 The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
 It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
 make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
-
-## The Mistral AI Team
-
-Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall