chore: update README.md content
README.md CHANGED
@@ -301,6 +301,30 @@ For direct use with `unsloth`, you can easily get started with the following steps.
print(results)
```

#### Use with MLX

For direct use with `mlx`, you can easily get started with the following steps.

- Firstly, you need to install `mlx-lm` via the command below with `pip`.

```bash
pip install mlx-lm
```

- Now, you can start using the model directly.

```python
from mlx_lm import load, generate

# Load the MLX build of the model and its tokenizer from the Hub
model, tokenizer = load("ghost-x/ghost-8b-beta-1608-mlx")
messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "Why is the sky blue?"},
]
# Render the chat template into a single prompt string, then generate
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
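
For a quick check from the terminal, `mlx-lm` also ships a small generation CLI. The snippet below is a minimal sketch, assuming the `mlx_lm.generate` entry point installed by the package and reusing the same model id and prompt as above; flag names and defaults may differ across `mlx-lm` versions.

```bash
# Minimal sketch: one-off generation from the shell via mlx-lm's bundled CLI
# (assumes the mlx_lm.generate entry point; flag names may vary by version)
mlx_lm.generate \
  --model ghost-x/ghost-8b-beta-1608-mlx \
  --prompt "Why is the sky blue?" \
  --max-tokens 256
```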

### Instructions

Here are specific instructions and explanations for each use case.