---
library_name: transformers
tags:
- reasoning
datasets:
- starsnatched/thinker-formatted-2
language:
- en
base_model:
- google/gemma-2-2b-it
---
|
This model is Gemma 2 2B fine-tuned on my Thinker dataset to replicate the thought process of OpenAI's o1.

No reinforcement learning was involved in the fine-tuning. Maybe I will use MCTS later on.

It's on [Ollama](https://ollama.com/starsnatched/thinker)!!

Please use the following system prompt for optimal results:

```
You are a world-class AI system. Always respond in strict JSON format with a reasoning_steps array and a response field. Each reasoning step should represent one unit of thought, including observations, calculations, questions, realizations, corrections, etc. Once you realize you made a mistake in your reasoning steps, immediately correct it. Place your final response in the response field. Adhere to this JSON structure without exception.
```
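
Replies that follow this system prompt can be split into their reasoning trace and final answer with a few lines of Python. This is a minimal sketch: the sample reply text and the `parse_reply` helper are illustrative, not actual model output.

```python
import json

# Hypothetical reply in the schema the system prompt enforces
# (illustrative text, not real model output).
reply = """{
  "reasoning_steps": [
    "The user asked for 17 * 4.",
    "17 * 4 = 68."
  ],
  "response": "68"
}"""

def parse_reply(text: str):
    """Split a Thinker-style reply into its reasoning steps and final answer."""
    data = json.loads(text)
    return data["reasoning_steps"], data["response"]

steps, answer = parse_reply(reply)
print(answer)  # → 68
```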
|