Felladrin committed (verified)
Commit 26c8508 · 1 parent: dfb0f3f

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED

@@ -14,9 +14,9 @@ tags:
 library_name: transformers
 ---

-# Felladrin/QwQ-32B-Coder-Fusion-9010-Q4-mlx
+# mlx-community/QwQ-32B-Coder-Fusion-9010-4bit

-The Model [Felladrin/QwQ-32B-Coder-Fusion-9010-Q4-mlx](https://huggingface.co/Felladrin/QwQ-32B-Coder-Fusion-9010-Q4-mlx) was converted to MLX format from [huihui-ai/QwQ-32B-Coder-Fusion-9010](https://huggingface.co/huihui-ai/QwQ-32B-Coder-Fusion-9010) using mlx-lm version **0.19.2**.
+The Model [mlx-community/QwQ-32B-Coder-Fusion-9010-4bit](https://huggingface.co/mlx-community/QwQ-32B-Coder-Fusion-9010-4bit) was converted to MLX format from [huihui-ai/QwQ-32B-Coder-Fusion-9010](https://huggingface.co/huihui-ai/QwQ-32B-Coder-Fusion-9010) using mlx-lm version **0.19.2**.

 ## Use with mlx

@@ -27,7 +27,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate

-model, tokenizer = load("Felladrin/QwQ-32B-Coder-Fusion-9010-Q4-mlx")
+model, tokenizer = load("mlx-community/QwQ-32B-Coder-Fusion-9010-4bit")

 prompt="hello"

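The README snippet in the diff stops at `prompt="hello"`. For context, a typical mlx-lm generation flow continues along these lines (a sketch based on mlx-lm's documented `load`/`generate` API; running it requires Apple silicon and downloads the ~18 GB 4-bit model, so treat it as illustrative rather than a verified run):

```python
from mlx_lm import load, generate

# Download the 4-bit model and its tokenizer from the Hub and load them.
model, tokenizer = load("mlx-community/QwQ-32B-Coder-Fusion-9010-4bit")

prompt = "hello"

# Wrap the prompt with the model's chat template when the tokenizer has one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Generate a completion; verbose=True streams tokens to stdout as they decode.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

The chat-template step matters for this model: QwQ is an instruction-tuned model, and passing a raw string instead of the templated conversation usually degrades its output.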