T-pro 2.0: An Efficient Russian Hybrid-Reasoning Model and Playground
Abstract
We introduce T-pro 2.0, an open-weight Russian LLM for hybrid reasoning and efficient inference. The model supports direct answering and reasoning-trace generation, using a Cyrillic-dense tokenizer and an adapted EAGLE speculative-decoding pipeline to reduce latency. To enable reproducible and extensible research, we release the model weights, the T-Wix 500k instruction corpus, the T-Math reasoning benchmark, and the EAGLE weights on Hugging Face. These resources allow users to study Russian-language reasoning and to extend or adapt both the model and the inference pipeline. A public web demo exposes reasoning and non-reasoning modes and illustrates the speedups achieved by our inference stack across domains. T-pro 2.0 thus serves as an accessible open system for building and evaluating efficient, practical Russian LLM applications.
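The abstract's latency claim rests on speculative decoding: a small draft model proposes several tokens, and the target model verifies them in a single pass, so most steps emit more than one token. The paper's actual pipeline (EAGLE, which drafts from the target model's hidden states) is more involved; the following is only a toy greedy-decoding sketch of the draft-then-verify loop, with stand-in functions (`target_next`, `draft_next`) in place of real models.

```python
import random

def target_next(ctx):
    # Stand-in for the large target model: a deterministic "ground truth"
    # next-token function over a list of integer token ids.
    return (sum(ctx) * 31 + len(ctx)) % 100

def draft_next(ctx):
    # Stand-in for the small draft model: agrees with the target most of
    # the time, otherwise guesses.
    return target_next(ctx) if random.random() < 0.8 else random.randrange(100)

def speculative_step(ctx, k=4):
    """Draft k tokens, verify them against the target, and keep the
    longest agreeing prefix plus one token from the target itself."""
    draft = []
    for _ in range(k):
        draft.append(draft_next(ctx + draft))
    accepted = []
    for tok in draft:
        expected = target_next(ctx + accepted)
        if tok == expected:
            accepted.append(tok)
        else:
            # First disagreement: take the target's token and stop.
            accepted.append(expected)
            break
    else:
        # All k drafts accepted: the verification pass yields one bonus token.
        accepted.append(target_next(ctx + accepted))
    return accepted

random.seed(0)
ctx, out = [1, 2, 3], []
while len(out) < 16:
    out.extend(speculative_step(ctx + out))
print(len(out))
```

With greedy verification the output is provably identical to decoding with the target model alone; the speedup comes purely from emitting several tokens per target pass whenever the draft's guesses are accepted.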
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Nanbeige4-3B Technical Report: Exploring the Frontier of Small Language Models (2025)
- Increasing LLM Coding Capabilities through Diverse Synthetic Coding Tasks (2025)
- Learning to Reason: Training LLMs with GPT-OSS or DeepSeek R1 Reasoning Traces (2025)
- EffiReasonTrans: RL-Optimized Reasoning for Code Translation (2025)
- Efficient Reasoning via Thought-Training and Thought-Free Inference (2025)
- DeepSeek-V3.2: Pushing the Frontier of Open Large Language Models (2025)
- Motif 2 12.7B technical report (2025)
Models citing this paper: 2
Datasets citing this paper: 0
Spaces citing this paper: 0