---
license: apache-2.0
language:
- en
base_model:
- alamios/DeepSeek-R1-DRAFT-Qwen2.5-0.5B
pipeline_tag: text-generation
library_name: transformers
tags:
- qwen
- qwen2.5
- deepseek
---

# DeepSeek-R1-DRAFT-Qwen2.5-0.5B-GGUF

This model is trained on outputs of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B and is meant to be used only as a draft model for speculative decoding. It is specifically intended for RTX 3090/4090 users, allowing them to run the Q4_K_M GGUF version of DeepSeek-R1-Distill-Qwen-32B with 16k context while speeding up generation, without sacrificing context length or model quality.

# Data info

The data consists of code, math, reasoning, and general-knowledge tasks collected from various datasets. The model was trained for 4 epochs on 5,200 unique examples, for a total of 21,600,000 tokens per epoch. Since data generation was done using spare GPU time, I may publish a further-trained version later.
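As a usage sketch, speculative decoding with a draft model can be run via llama.cpp's `llama-server` using the `-md`/`--model-draft` flag. The GGUF file names, quantization choices, and draft parameters below are assumptions for illustration; substitute your local paths and tune for your hardware.

```shell
# Minimal sketch: serve the 32B target model with this 0.5B draft model
# for speculative decoding in llama.cpp. File names are assumptions.

./llama-server \
  -m DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf \
  -md DeepSeek-R1-DRAFT-Qwen2.5-0.5B-Q8_0.gguf \
  -c 16384 \
  --draft-max 16
```

Here `-m` points at the target model, `-md` at the draft model, `-c 16384` sets the 16k context mentioned above, and `--draft-max` caps how many tokens the draft model proposes per step; accepted tokens are verified in a single pass by the target model, which is where the speedup comes from.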