---
language:
- en
tags:
- mlx
datasets:
- HuggingFaceH4/ultrafeedback_binarized
- allenai/tulu-v2-sft-mixture
base_model: meta-llama/Llama-2-13b-hf
model-index:
- name: tulu-2-dpo-13b
  results: []
---

# mlx-community/tulu-2-dpo-13b-4bit-mlx
This model was converted to MLX format from [`allenai/tulu-2-dpo-13b`](https://huggingface.co/allenai/tulu-2-dpo-13b).
Refer to the [original model card](https://huggingface.co/allenai/tulu-2-dpo-13b) for more details on the model.
## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

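# Download the 4-bit quantized weights and tokenizer from the Hugging Face Hub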
model, tokenizer = load("mlx-community/tulu-2-dpo-13b-4bit-mlx")
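# Generate a completion; verbose=True prints the output as it is produced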
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
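
Tulu 2 is a chat-tuned model, so prompts generally work better when wrapped in its chat format. The `<|user|>` / `<|assistant|>` turn markers below follow the original allenai/tulu-2-dpo-13b model card; this is a minimal sketch, and the exact template should be verified against that card.

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/tulu-2-dpo-13b-4bit-mlx")

# Tulu 2 chat format as described on the original model card (verify upstream):
# <|user|>
# {message}
# <|assistant|>
prompt = "<|user|>\nWrite a haiku about model quantization.\n<|assistant|>\n"
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```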