Update README.md
README.md CHANGED
@@ -7,6 +7,13 @@ base_model: FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview
 
 # bobig/FuseO1-R1-QwQ-SkyT1-Flash-32B-Q8
 
+Quant made with the latest mlx-lm
+
+This model is very good, my 2nd favorite for unplugged coding on Macs.
+SpecDec works with this draft model [DeepScaleR-1.5B-Preview-Q8](https://huggingface.co/mlx-community/DeepScaleR-1.5B-Preview-Q8)
+but the acceptance rate is only 61%.
+
+
 The Model [bobig/FuseO1-R1-QwQ-SkyT1-Flash-32B-Q8](https://huggingface.co/bobig/FuseO1-R1-QwQ-SkyT1-Flash-32B-Q8) was
 converted to MLX format from [FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview](https://huggingface.co/FuseAI/FuseO1-DeepSeekR1-QwQ-SkyT1-Flash-32B-Preview)
 using mlx-lm version **0.21.4**.
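
For reference, this is the usual way an MLX quant like this one is loaded with the mlx-lm Python API (following the standard mlx-lm model-card snippet); the prompt and chat-template handling below are illustrative, not taken from this repo:

```python
# Minimal sketch: loading and prompting the quant with mlx-lm's Python API.
# Requires `pip install mlx-lm`; the prompt is illustrative.
from mlx_lm import load, generate

model, tokenizer = load("bobig/FuseO1-R1-QwQ-SkyT1-Flash-32B-Q8")

prompt = "Write a quicksort in Python."  # illustrative prompt

# Apply the chat template if the tokenizer provides one.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```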
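
The SpecDec note above pairs this quant with DeepScaleR-1.5B-Preview-Q8 as the draft model. Below is a rough sketch of how that pairing is typically driven from Python with recent mlx-lm releases; the `draft_model` and `num_draft_tokens` keyword names are assumptions and may differ by mlx-lm version (the CLI equivalent is the `--draft-model` flag on `mlx_lm.generate`):

```python
# Rough sketch of speculative decoding per the SpecDec note above.
# Assumption: recent mlx-lm versions accept draft_model / num_draft_tokens here;
# check your installed version, as the keyword names may differ.
from mlx_lm import load, generate

model, tokenizer = load("bobig/FuseO1-R1-QwQ-SkyT1-Flash-32B-Q8")
draft_model, _ = load("mlx-community/DeepScaleR-1.5B-Preview-Q8")

prompt = "Explain speculative decoding in two sentences."  # illustrative prompt
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False,
        add_generation_prompt=True,
    )

# The draft model proposes tokens that the 32B model verifies; with the
# ~61% acceptance rate noted above, expect only a modest speedup.
response = generate(
    model,
    tokenizer,
    prompt=prompt,
    draft_model=draft_model,
    num_draft_tokens=4,
    verbose=True,
)
```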