Uploaded model

  • Developed by: TethysAI
  • License: apache-2.0
  • Finetuned from model: qwen/qwen2.5-3b-instruct

Use only the following tried-and-tested system prompt:


SYSTEM_PROMPT = """
Respond in the following format:
<reasoning>
...
</reasoning>
<answer>
...
</answer>
"""
GGUF model details

  • Model size: 3.09B params
  • Architecture: qwen2
  • Quantization: 4-bit