🧠Smol-Reason2🧠
This is my second GRPO reasoning model. I was exploring fine-tuning on my own hardware and found that it works well with 3B models.
System prompt:
You are a reasoning model named Smol-reason2, developed by SweaterDog.
When asked for code, provide small snippets while reasoning and ensure everything will work.
Respond in the following format:
<think>
...your reasoning here...
</think>
...your answer here...
Remember to start your response with "<think>"
In accordance with this output format, the model responds like this:
<think>
Okay, let's break down the user's issue.
...more reasoning...
Therefore x should be the answer.
</think>
x is the answer because...
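A minimal usage sketch with Hugging Face transformers, assuming the model is published under a repo id like "Sweaterdog/Smol-reason2" (hypothetical; adjust to the actual repository) and uses the standard Qwen2.5 chat template:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sweaterdog/Smol-reason2"  # hypothetical repo id; adjust as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

system_prompt = "..."  # paste the full system prompt shown above

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Why is the sky blue?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(inputs, max_new_tokens=512)
text = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

# Split the <think> reasoning block from the final answer.
reasoning, _, answer = text.partition("</think>")
print(answer.strip())
```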
Features
Flexible reasoning
You can modify the system prompt to change the way the model reasons. By default, it is told to reason about code snippets, which I found works best across tasks.
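For example, here is a hypothetical variant that steers the model toward step-by-step math reasoning instead of code snippets (assumption: only the middle instruction changes, and the output-format lines are kept intact so the <think> tags still work):

```python
# Hypothetical variant system prompt; the format instructions are unchanged.
system_prompt = (
    "You are a reasoning model named Smol-reason2, developed by SweaterDog.\n"
    "When solving math problems, reason step by step and double-check your arithmetic.\n"
    "Respond in the following format:\n"
    "<think>\n"
    "...your reasoning here...\n"
    "</think>\n"
    "...your answer here...\n"
    'Remember to start your response with "<think>"'
)
```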
Logical reasoning
This is the first model I have seen that can answer "The Mango Puzzle," which goes like this:
If I give you 15 mangoes, and then you give 14 away, then receive 60 more mangoes, how many mangoes did you not sell?
The correct answer is 75 mangoes: nothing in the puzzle is ever sold, so all 15 + 60 = 75 mangoes received count as "not sold." Most LLMs treat "give away" as a form of sale, so they typically answer 61 mangoes.
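The arithmetic behind the intended answer, spelled out:

```python
# Nothing is ever sold, so every mango received counts as "not sold",
# including the 14 that were given away.
received = 15 + 60               # total mangoes received
given_away = 14                  # given away, which is not a sale
in_hand = received - given_away  # 61 still in hand (the usual wrong answer)
not_sold = in_hand + given_away  # 75: none were ever sold
print(not_sold)                  # 75
```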
Code reasoning
This model is capable of reasoning about code snippets before responding. Although it was not trained on code and was not designed for coding, it can still beat some 7B and 14B non-reasoning code models.
Design
This model was fine-tuned from Qwen2.5-3B on OpenAI's GSM8K dataset, as well as the Andy-4-preview-reasoning dataset.
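A minimal GRPO training sketch with trl, to illustrate the general recipe; this is an assumption-laden sketch, not the author's exact script or reward functions:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("openai/gsm8k", "main", split="train")

def format_reward(completions, **kwargs):
    # Hypothetical reward: +1 if the completion follows the <think> format.
    return [1.0 if c.strip().startswith("<think>") and "</think>" in c else 0.0
            for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B",
    reward_funcs=format_reward,
    args=GRPOConfig(output_dir="smol-reason2"),
    train_dataset=dataset.rename_column("question", "prompt"),
)
trainer.train()
```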
Model tree for Sweaterdog/Smol-reason2-LoRA
Base model: Qwen/Qwen2.5-3B
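A minimal sketch for loading the LoRA adapter on top of the base model, assuming standard transformers and peft APIs:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")
# Attach the LoRA weights from the adapter repository.
model = PeftModel.from_pretrained(base, "Sweaterdog/Smol-reason2-LoRA")
```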