
Model Card for Synthia-S1-27b
Community Page: Tesslate Community, Website: Tesslate
Creative Writing Samples: Sample creative output
Authors: Tesslate
Model Information
Description
Synthia-S1-27b is a reasoning AI model developed by Tesslate AI, fine-tuned specifically for advanced reasoning, coding, and roleplay use cases. Built upon the robust Gemma 3 architecture, Synthia-S1-27b excels in logical reasoning, creative writing, and deep contextual understanding. It supports multimodal inputs (text and images) with a large 128K-token context window, enabling complex analysis suitable for research, academic tasks, and enterprise-grade AI applications.
KEY PARAMS TO RUN:
Creative Writing System Prompt:
Your function as an assistant is to thoughtfully navigate inquiries by engaging in an in-depth, imaginative reasoning journey before arriving at a clear, accurate response. You are encouraged to roleplay when needed, embrace storytelling, and tune in closely to nuance and emotional tone like a perceptive conversational partner. Your approach should include a wide arc of contemplation, including interpretation, synthesis, creative ideation, critical re-evaluation, memory retrieval, and thoughtful iteration to shape a layered and expressive process of discovery. Please organize your response into two primary segments: Thought and Solution. In the Thought section, articulate your unfolding thought pattern using the format: <|begin_of_thought|> {layered reasoning with steps divided by '\n\n'} <|end_of_thought|> Each step should reflect rich mental activity such as questioning assumptions, distilling insights, generating vivid possibilities, checking alignment with prior context, reshaping flawed logic, and tracing ideas back to origin points. In the Solution section, based on your inner dialogue and creative problem solving from the Thought section, deliver the final response you believe to be most sound. The output should be expressed in a direct, coherent, and exact form that includes the vital steps needed to reach your conclusion, using this structure: <|begin_of_solution|> {final precise, neatly arranged, and insightful answer} <|end_of_solution|> Now, let’s explore the following prompt using this guided method:
Reasoning System Prompt:
Your role as an assistant is to engage in deep, methodical reasoning and provide comprehensive, accurate solutions. Before arriving at a final answer, you must undertake a structured, multi-phase thinking process that emphasizes depth, verification, and clarity. This involves thoroughly analyzing the question, identifying key elements, summarizing relevant insights, generating hypotheses, iteratively refining thoughts, verifying assumptions, cross-checking with prior knowledge, and reevaluating earlier conclusions as necessary. Your response must be structured into two main sections: Thought and Solution. In the Thought section, rigorously document your reasoning in the following format: <|begin_of_thought|> {thought process with each logical step separated by '\n\n'} <|end_of_thought|>. Each step should reflect deep analysis—such as decomposing the problem, synthesizing relevant information, exploring different possibilities, validating each phase, correcting errors, and revisiting earlier assumptions. In the Solution section, consolidate all your insights and reasoned steps into a concise, well-structured final answer. Present it clearly and logically using this format: <|begin_of_solution|> {final, precise, step-by-step solution} <|end_of_solution|>. This approach ensures that the final output reflects a high-confidence answer that results from critical thinking and iteration. Now, try to solve the following question through the above guidelines:
Coding System Prompt:
Your role as a coding assistant is to approach each problem with a rigorous, structured reasoning process that leads to accurate, maintainable, and efficient code. Before writing the final implementation, engage in deep exploration by analyzing requirements, understanding edge cases, evaluating possible approaches, debugging step-by-step if needed, and ensuring your solution aligns with best practices. Structure your response into two main sections: Thought and Solution. In the Thought section, document your reasoning using this format: <|begin_of_thought|> {step-by-step analysis and decision-making with each step separated by '\n\n'} <|end_of_thought|>. Your thought process should include identifying the problem scope, analyzing inputs/outputs, exploring algorithms or design choices, preemptively considering failure cases, optimizing performance, and validating logic with examples or test cases. In the Solution section, write the final, refined code based on all reasoning, formatted as: <|begin_of_solution|> {final, clean, and correct code implementation} <|end_of_solution|>. This structure ensures the code is well-reasoned, properly scoped, and production-ready. Now, try to solve the following coding task using the above guidelines:
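All three prompts instruct the model to wrap its output in <|begin_of_thought|> / <|begin_of_solution|> tag pairs, so downstream code will usually want to strip the reasoning and keep only the final answer. Below is a minimal parsing sketch; the tag names come from the prompts above, while the function itself is illustrative and assumes the model followed the format:

import re

def extract_solution(generated: str) -> str:
    # Pull out the text between the solution tags emitted per the system
    # prompts above; fall back to the raw output if the tags are missing.
    match = re.search(
        r"<\|begin_of_solution\|>(.*?)<\|end_of_solution\|>",
        generated,
        flags=re.DOTALL,
    )
    return match.group(1).strip() if match else generated.strip()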
Please use one of the following sampling configurations:
- temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0, with repeat penalty set to 1.3
- (recommended) temperature = 0.7, top_k = 40, top_p = 0.95, min_p = 0.05, repeat penalty = 1.1, with a rolling window
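In Transformers, the recommended settings map onto standard generation arguments roughly as follows. This is a sketch, not part of the official card: repetition_penalty is assumed here as the closest standard equivalent of "repeat penalty", and min_p is supported in the Transformers version required below.

from transformers import GenerationConfig

# Recommended sampling settings from above.
gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repetition_penalty=1.1,  # assumed equivalent of "repeat penalty"
)

Pass it to model.generate(..., generation_config=gen_config), or pass the same arguments directly in a pipeline call.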
Inputs and Outputs
Input:
- Text prompts for questions, instructions, coding tasks, or summarizations
- Total input context of 128K tokens
Output:
- Reasoned and structured text outputs
- Maximum output length of 8192 tokens
Key Metrics
Synthia-S1-27b achieves roughly a 10-20% improvement over the base model on most benchmarks.
I ran scaled-down versions of each benchmark listed and averaged the results, so I can't verifiably claim to have run the full suite for each. (I ran out of budget, and I'm running everything on a 4090 now.) Hopefully I can get some community help with benchmarking.
- GPQA Diamond (198 questions): 57% one-shot (improved from 24.3 on Gemma 3 PT 27B)
- MMLU Pro (15% of the full set): 75% averaged, more details here: output (beating Gemma 3 PT 27B at 67.5)
Based on this assessment and the heavy coding emphasis in the dataset, I'm making this claim. Of course, I'm happy to be proven wrong and to go back to the drawing board.
Usage
Install the latest version of Transformers (>=4.50.0):
pip install -U transformers
Running with Pipeline API
from transformers import pipeline
import torch

# Load Synthia-S1-27b as an image-text-to-text pipeline on the GPU in bf16.
pipe = pipeline(
    "image-text-to-text",
    model="tesslate/synthia-s1-27b",
    device="cuda",
    torch_dtype=torch.bfloat16,
)

# Chat-format messages; the user turn pairs an image with a text instruction.
messages = [
    {"role": "system", "content": [{"type": "text", "text": "You are a helpful, reasoning-focused assistant."}]},
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/sample.jpg"},
        {"type": "text", "text": "Explain the image."},
    ]},
]

output = pipe(text=messages, max_new_tokens=200)
# The pipeline returns the whole chat; the last message is the model's reply.
print(output[0]["generated_text"][-1]["content"])
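If you want more control than the pipeline offers, the same chat can be run through the processor and model directly. A sketch, assuming the standard Transformers image-text-to-text classes apply to this fine-tune (messages is the list defined above):

from transformers import AutoProcessor, AutoModelForImageTextToText
import torch

model_id = "tesslate/synthia-s1-27b"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Apply the chat template and tokenize text and image together.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    generated = model.generate(**inputs, max_new_tokens=200)

# Decode only the newly generated tokens, not the prompt.
print(processor.decode(generated[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))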
Training Data
Synthia-S1-27b was trained on diverse data including:
- Multiple web documents
- Programming debugging and solutions
- Mathematical solutions and thinking steps
Synthia-S1-27b was trained on an A100 for 205+ hours, with multiple rounds of SFT and RL.
Model Architecture
- Base Model: Gemma 3
- Size: 27 billion parameters
- Type: Decoder-only Transformer
- Precision: bf16, with int8 quantization (see the loading sketch after this list)
- Training Objective: Instruction tuning emphasizing reasoning, coding tasks, and factual accuracy
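The repository ships bf16 weights; one way to get the int8 path is on-the-fly 8-bit loading via bitsandbytes. A sketch, assuming a CUDA GPU and that bitsandbytes is installed; this is not an official pre-quantized release:

from transformers import AutoModelForImageTextToText, BitsAndBytesConfig

# Quantize the bf16 weights to int8 at load time.
model = AutoModelForImageTextToText.from_pretrained(
    "tesslate/synthia-s1-27b",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)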
Quantized Models
Limitations
- May require detailed prompt engineering for highly specific tasks
- Occasional hallucinations in less-explored domains
Citation
@misc{tesslate_synthias127b,
  title={Synthia-S1-27b: Advanced Reasoning and Coding Model},
  author={tesslate},
  year={2025},
  publisher={tesslate},
  url={https://tesslate.com}
}
Developed by Tesslate: Huggingface | Website