results: []
---

![Alt text](https://cdn.discordapp.com/attachments/989904887330521099/1204964869565317120/Alchemist_Hermes_Illustration.jpeg?ex=65d6a5fc&is=65c430fc&hm=9939eb11dd4b7872a67019a328ad2832315a1e2ad273e2d0dc7134a5d45a58ee&)
# mlx-community/NousHermes-Mixtral-8x7B-Reddit-mlx
This model was converted to MLX format from [`mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit`](https://huggingface.co/mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit).
Refer to the [original model card](https://huggingface.co/mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit) for more details on the model.
For the dataset, see the [original dataset](https://huggingface.co/datasets/euclaise/reddit-instruct-curated).
## Use with mlx
When prompting the model, use the following format:
Question: [your question]
Assistant:

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/NousHermes-Mixtral-8x7B-Reddit-mlx")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
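
As a small illustration, the Question/Assistant format described above can be built programmatically before being passed to `generate`. Note that `format_prompt` is a hypothetical helper sketched here for this card, not part of mlx-lm:

```python
def format_prompt(question: str) -> str:
    # Wrap a user question in the card's Question/Assistant template.
    return f"Question: {question}\n\nAssistant:"

# The returned string is what you would pass as `prompt=` to generate().
print(format_prompt("What is MLX?"))
```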