Smol-reason Collection
My first use of GRPO fine-tuning; lessons learned from this model will be applied to future Andy models.
While making the Andy series of models, I have been training with PPO techniques.
But as the bleeding edge of small models becomes clearer, reasoning models are the winners.
So, to learn the nuances of reasoning training, I decided to train a small 3B model using GRPO instead of PPO.
The base model is Qwen2.5 3B; it is very smart as is, and even smarter with reasoning.
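Below is a minimal sketch of what a GRPO run like this could look like using TRL's GRPOTrainer. The dataset, reward function, and hyperparameters here are illustrative assumptions, not the exact setup used for this model.

```python
# Illustrative GRPO sketch with TRL's GRPOTrainer.
# The dataset, reward function, and hyperparameters are placeholder assumptions.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def format_reward(completions, **kwargs):
    # Reward completions that contain both a <think> block and an <answer> block.
    return [
        1.0 if "<think>" in c and "</answer>" in c else 0.0
        for c in completions
    ]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

training_args = GRPOConfig(
    output_dir="smol-reason-3b",
    num_generations=8,          # completions sampled per prompt (the "group" in GRPO)
    max_completion_length=512,
    learning_rate=1e-6,
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B",
    reward_funcs=format_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```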
This model uses the following format when responding:
<think>
--reasoning content here--
</think>
<answer>
--answer content here--
</answer>
It is similar to the XML reasoning format, but changed to use DeepSeek-R1 / QwQ-style thinking blocks.
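For reference, here is a small helper (my own illustration, not part of the model release) for splitting a response in this format into its reasoning and answer parts:

```python
import re

def parse_response(text: str):
    # Extract the contents of the <think> and <answer> blocks, if present.
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    return (
        think.group(1).strip() if think else "",
        answer.group(1).strip() if answer else "",
    )

reasoning, answer = parse_response(
    "<think>\n2 + 2 is 4.\n</think>\n<answer>\n4\n</answer>"
)
print(answer)  # prints "4"
```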
Available quantizations: 4-bit, 5-bit, 8-bit, 16-bit