DeepSeek-R1-DRAFT-Qwen2.5-0.5B

Updated to v1

This model was trained on outputs of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B and is meant to be used only as a draft model for speculative decoding.

It is specifically intended for users of RTX 3090/4090 GPUs, allowing you to run the DeepSeek-R1-Distill-Qwen-32B Q4_K_M GGUF version with 16k context and speed up generation without sacrificing context length or model quality.
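As a minimal sketch of how a draft model like this can be used, the example below runs speculative (assisted) decoding with Hugging Face transformers, pairing this model as the assistant with the 32B target. The model IDs match the ones mentioned above, but the prompt, generation settings, and dtype/device handling are illustrative assumptions, not taken from the model card; the GGUF use case described above would instead go through your llama.cpp setup.

```python
# Sketch only: speculative (assisted) decoding with transformers.
# The draft model proposes tokens; the 32B target verifies them in parallel.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
draft_id = "alamios/DeepSeek-R1-DRAFT-Qwen2.5-0.5B"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype="auto", device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype="auto", device_map="auto")

prompt = "Explain why the sky is blue."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(target.device)

# Passing assistant_model enables assisted generation (speculative decoding).
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```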

Data info

The data consists of code, math, reasoning, and general-knowledge tasks collected from various datasets. The model was trained for 2 epochs on 7k unique examples, for a total of 26 million tokens per epoch.

Since data generation was done using spare GPU time, I may publish a further-trained version later.
