---
license: apache-2.0
tags:
- mlx
---

# GreenBitAI/Qwen-1.5-14B-layer-mix-bpw-2.2-mlx
This low-bit quantized model was converted to MLX format from [`GreenBitAI/Qwen-1.5-14B-layer-mix-bpw-2.2`](https://huggingface.co/GreenBitAI/Qwen-1.5-14B-layer-mix-bpw-2.2).
Refer to the [original model card](https://huggingface.co/GreenBitAI/Qwen-1.5-14B-layer-mix-bpw-2.2) for more details on the model.
## Use with mlx

```bash
pip install gbx-lm
```

```python
from gbx_lm import load, generate

# Load the quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("GreenBitAI/Qwen-1.5-14B-layer-mix-bpw-2.2-mlx")

# Generate a completion; verbose=True prints the output as it is produced.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
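Since Qwen-1.5 is a chat model, prompts are typically formatted with its chat template before generation. A minimal sketch, assuming `gbx_lm` mirrors the `mlx-lm` API and the returned tokenizer exposes the Hugging Face `apply_chat_template` method:

```python
from gbx_lm import load, generate

model, tokenizer = load("GreenBitAI/Qwen-1.5-14B-layer-mix-bpw-2.2-mlx")

# Wrap the user message in the model's chat template (assumes the tokenizer
# behaves like a Hugging Face tokenizer, as in mlx-lm).
messages = [{"role": "user", "content": "Explain quantization in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```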