---
license: mit
train: false
inference: false
pipeline_tag: text-generation
base_model: mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.0
tags:
- mlx
---

# rudrankriyam/deepseek-r1-redistill-qwen-1.5b

The model [rudrankriyam/deepseek-r1-redistill-qwen-1.5b](https://huggingface.co/rudrankriyam/deepseek-r1-redistill-qwen-1.5b) was converted to MLX format from [mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.0](https://huggingface.co/mobiuslabsgmbh/DeepSeek-R1-ReDistill-Qwen-1.5B-v1.0) using mlx-lm version **0.21.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the converted MLX weights and tokenizer from the Hub
model, tokenizer = load("rudrankriyam/deepseek-r1-redistill-qwen-1.5b")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
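mlx-lm also ships a command-line generator, so you can try the model without writing any Python. A minimal sketch, assuming the `mlx_lm.generate` console script installed by `pip install mlx-lm` is on your PATH:

```bash
# Generate a completion directly from the terminal.
# Assumes the mlx_lm.generate entry point provided by the mlx-lm package.
mlx_lm.generate --model rudrankriyam/deepseek-r1-redistill-qwen-1.5b \
  --prompt "hello"
```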