# umgbhalla/mlx-DeepSeek-R1-Distill-Qwen-1.5B-v0.1-2bit

The model [umgbhalla/mlx-DeepSeek-R1-Distill-Qwen-1.5B-v0.1-2bit](https://huggingface.co/umgbhalla/mlx-DeepSeek-R1-Distill-Qwen-1.5B-v0.1-2bit) was converted to MLX format from [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using mlx-lm version 0.21.1.
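
A comparable conversion can be reproduced with mlx-lm's `convert` API. The following is a minimal sketch that assumes defaults for everything except the bit width; the group size and other settings actually used for this repo are not documented here, and the output directory name is illustrative:

```python
from mlx_lm import convert

# Quantize the upstream weights to 2 bits and write them out in MLX format.
# q_bits=2 matches the "2bit" in this repo's name; other settings are assumed defaults.
convert(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    mlx_path="mlx-DeepSeek-R1-Distill-Qwen-1.5B-2bit",
    quantize=True,
    q_bits=2,
)
```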

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model and its tokenizer.
model, tokenizer = load("umgbhalla/mlx-DeepSeek-R1-Distill-Qwen-1.5B-v0.1-2bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
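
Equivalently, the model can be exercised from the shell via mlx-lm's generation CLI. A minimal sketch; `--max-tokens 256` is an arbitrary illustrative value:

```bash
python -m mlx_lm.generate \
    --model umgbhalla/mlx-DeepSeek-R1-Distill-Qwen-1.5B-v0.1-2bit \
    --prompt "hello" \
    --max-tokens 256
```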