# Qwen3-0.6B-Sushi-Math-Code-Expert

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
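
SLERP (spherical linear interpolation) blends two checkpoints along the arc between their weight vectors rather than along a straight line, which tends to preserve each parent model's internal geometry better than plain averaging. The snippet below is an illustrative NumPy sketch of the operation on a single tensor, not mergekit's actual implementation:

```python
import numpy as np

def slerp(weights_a: np.ndarray, weights_b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors (illustrative sketch only)."""
    a, b = weights_a.ravel(), weights_b.ravel()
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    # Angle between the two normalized weight vectors.
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if omega < eps:  # nearly colinear: fall back to linear interpolation
        merged = (1.0 - t) * a + t * b
    else:
        so = np.sin(omega)
        merged = (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
    return merged.reshape(weights_a.shape)
```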

### Models Merged

The following models were included in the merge:

  • bigatuna-Qwen3-0.6B-Sushi-Coder
  • sayantan0013-math-stack_Qwen3-0
  • suayptalha-Qwen3-0.6B-Code-Expert

## Project Structure for Qwen3-0.6B-Sushi-Math-Code-Expert AI Implementation

This section describes a small but complete backend pipeline built around the Qwen3-0.6B-Sushi-Math-Code-Expert model from Hugging Face. The system handles math- and code-related queries and can enable thinking mode for more deliberate reasoning. The Python entry point, YAML configuration, JSON prompt templates, and file-based logging are kept consistent with the folder layout shown below.
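
Before wiring the model into the pipeline, it can be exercised directly with transformers. The following is a minimal sketch that assumes the merged model keeps the standard Qwen3 chat template and its enable_thinking switch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "gss1147/Qwen3-0.6B-Sushi-Math-Code-Expert"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Factor x^2 - 5x + 6 and explain each step."}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # Qwen3 soft switch; assumed to carry over after the merge
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the generated part, skipping the echoed prompt tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```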

### Folder Structure

```
qwen3-sushi-math-code-expert/
├── main.py               # Core Python script for model loading, inference pipeline, and query handling
├── requirements.txt      # Dependencies for correct implementation
├── config.yaml           # Configuration for model, device, and pipeline settings
├── prompts.json          # JSON file for predefined prompt templates (e.g., thinking mode)
├── logs/                 # Folder for runtime logs (created dynamically)
│   └── inference.log     # TXT log file (appended during runtime)
└── db/                   # Folder for simple SQLite DB for query history
    └── history.db        # SQLite DB file (created dynamically)
```

### requirements.txt

```text
transformers==4.45.1
torch==2.4.1
pyyaml==6.0.2
# sqlite3 is part of the Python standard library; no pip install needed
```

### config.yaml

```yaml
model:
  name: "gss1147/Qwen3-0.6B-Sushi-Math-Code-Expert"
  dtype: "float16"
  trust_remote_code: true

pipeline:
  max_length: 512
  temperature: 0.7
  top_p: 0.9
  thinking_mode: true  # Enable thinking mode for math/code reasoning

device:
  type: "cuda"  # Use "cpu" if no GPU

logging:
  log_file: "logs/inference.log"
  db_file: "db/history.db"
```

### prompts.json

```json
{
  "thinking_mode": "You are a math and code expert. Use /think to enable thinking mode for complex reasoning. Query: {query}",
  "non_thinking_mode": "You are a general assistant. Use /no_think for efficient response. Query: {query}"
}
```

### main.py

```python
import os
import json
import yaml
import sqlite3
import logging
from datetime import datetime
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Setup logging to TXT file
def setup_logging(log_file):
    logging.basicConfig(filename=log_file, level=logging.INFO, 
                        format='%(asctime)s - %(levelname)s - %(message)s')
    return logging.getLogger(__name__)

# Setup SQLite DB for query history
def setup_db(db_file):
    conn = sqlite3.connect(db_file)
    cursor = conn.cursor()
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS history (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            timestamp TEXT,
            query TEXT,
            response TEXT,
            mode TEXT
        )
    ''')
    conn.commit()
    return conn

# Load configuration from YAML
def load_config(config_file):
    with open(config_file, 'r') as f:
        return yaml.safe_load(f)

# Load prompts from JSON
def load_prompts(prompts_file):
    with open(prompts_file, 'r') as f:
        return json.load(f)

# Main AI inference pipeline
class QwenAISystem:
    def __init__(self, config, prompts, logger, db_conn):
        self.config = config
        self.prompts = prompts
        self.logger = logger
        self.db_conn = db_conn
        
        # Load tokenizer and model
        self.device = torch.device(config['device']['type'] if torch.cuda.is_available() else "cpu")
        self.tokenizer = AutoTokenizer.from_pretrained(config['model']['name'])
        self.model = AutoModelForCausalLM.from_pretrained(
            config['model']['name'],
            torch_dtype=torch.float16 if config['model']['dtype'] == "float16" else torch.bfloat16,
            trust_remote_code=config['model']['trust_remote_code']
        )
        # Move the model explicitly; device_map="auto" is omitted because it
        # conflicts with a manual .to(device) call.
        self.model.to(self.device)
        self.logger.info("Model loaded successfully on device: %s", self.device)

    def generate_response(self, query, use_thinking_mode=True):
        mode = "thinking" if use_thinking_mode else "non_thinking"
        prompt_template = self.prompts[f"{mode}_mode"]
        prompt = prompt_template.format(query=query)
        
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.device)
        outputs = self.model.generate(
            **inputs,
            max_length=self.config['pipeline']['max_length'],
            temperature=self.config['pipeline']['temperature'],
            top_p=self.config['pipeline']['top_p'],
            do_sample=True
        )
        
        # Decode only the newly generated tokens so the prompt is not echoed back.
        response = self.tokenizer.decode(
            outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        
        # Log to TXT
        self.logger.info("Query: %s | Response: %s | Mode: %s", query, response, mode)
        
        # Log to DB
        cursor = self.db_conn.cursor()
        cursor.execute('''
            INSERT INTO history (timestamp, query, response, mode)
            VALUES (?, ?, ?, ?)
        ''', (datetime.now().isoformat(), query, response, mode))
        self.db_conn.commit()
        
        return response

# Runtime execution
if __name__ == "__main__":
    # Ensure folders exist
    os.makedirs("logs", exist_ok=True)
    os.makedirs("db", exist_ok=True)
    
    config = load_config("config.yaml")
    prompts = load_prompts("prompts.json")
    logger = setup_logging(config['logging']['log_file'])
    db_conn = setup_db(config['logging']['db_file'])
    
    ai_system = QwenAISystem(config, prompts, logger, db_conn)
    
    # Example real-world usage loop (integrated as backend pipeline)
    while True:
        query = input("Enter math/code query (or 'exit' to quit): ")
        if query.lower() == 'exit':
            break
        response = ai_system.generate_response(query, use_thinking_mode=config['pipeline']['thinking_mode'])
        print("AI Response:", response)
    
    db_conn.close()
```
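
As an optional follow-up (not part of the folder layout above), a small helper can read back the SQLite history that main.py writes; it assumes the history schema created by setup_db:

```python
# inspect_history.py — optional helper for reviewing logged queries.
import sqlite3

def print_history(db_file: str = "db/history.db", limit: int = 10) -> None:
    conn = sqlite3.connect(db_file)
    cursor = conn.cursor()
    cursor.execute(
        "SELECT timestamp, mode, query, response FROM history ORDER BY id DESC LIMIT ?",
        (limit,),
    )
    for timestamp, mode, query, response in cursor.fetchall():
        print(f"[{timestamp}] ({mode}) {query!r} -> {response[:80]!r}")
    conn.close()

if __name__ == "__main__":
    print_history()
```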