!-Open Source Week-!
This is an experimental long-context model; please use and transfer with caution.
IntelligentEstate/Cabbi_7B-Qwn2.6-Q4_K_M-GGUF
A transformative leap in Open Source tech and local intelligence
Presenting, to ovation: with a reckless disregard for the rules, this model brings accuracy in coding and mathematics to the masses. No GPU needed, just electricity, silicon, and the urge to bend fate to your slimy little human hands. It will get you where you need to go, and fast. This model beats o1/o3, NVIDIA's Llama 3.1-Nemotron 70B, and other models that look gigantically down at this little cabbi just doing its thing. The sheer ability this model provides can't be attributed to "unhobbling gains" alone; it reaches the frontier in IFEval and mathematics, beating all service AIs in short-form, non-reasoning answers. Be sure to open up the context and remove the governor on this puppy. It is uncensored and may be VERY steerable by the end user. System templates below for reasoning and multi-turn tool use.
This model was converted to GGUF format from gz987/qwen2.5-7b-cabs-v0.3
using a customized and curated imatrix for smoothing, applying QAT and TTT* methods to the math to increase code functionality and tool use. Don't be shy about commenting if you have questions or need help; you can reach us at [email protected]
Custom system template for advanced reasoning and tool use
{%- set system_message = 'You are a helpful assistant.' %}
{%- if messages[0]['role'] == 'system' %}
{%- set system_message = messages[0]['content'] %}
{%- if messages[1]['role'] == 'system' %}
{%- set format_message = messages[1]['content'] %}
{%- set loop_messages = messages[2:] %}
{%- else %}
{%- set loop_messages = messages[1:] %}
{%- endif %}
{%- else %}
{%- set loop_messages = messages %}
{%- endif %}
{%- if not tools is defined %}
{%- set tools = none %}
{%- endif %}
{%- if system_message is defined %}
{{- '<|im_start|>system
' + system_message + '<|im_end|>
' }}
{%- endif %}
{%- if tools is not none %}
{% set task_instruction %}You are a tool calling assistant. In order to complete the user's request, you need to select one or more appropriate tools from the following tools and fill in the correct values for the tool parameters. Your specific tasks are:
1. Make one or more function/tool calls to meet the request based on the question.
2. If none of the functions can be used, point it out and refuse to answer.
3. If the given question lacks the parameters required by the function, also point it out.
The following are the roles that may interact with you:
1. user: Provides query or additional information.
2. tool: Returns the results of the tool calling.
{% endset %}
{% set format_instruction %}
The output MUST strictly adhere to the following JSON format, and NO other text MUST be included.
The example format is as follows. Please make sure the parameter type is correct. If no function call is needed, please directly output an empty list '[]'.
""[
{"name": "func_name1", "arguments": {"argument1": "value1", "argument2": "value2"}},
... (more tool calls as required)
]""
{% endset %}
{{- '<|im_start|>user
[BEGIN OF TASK INSTRUCTION]
' + task_instruction + '
[END OF TASK INSTRUCTION]
'}}
{{- '[BEGIN OF AVAILABLE_TOOLS]
' }}
{{- tools|string }}
{{- '
[END OF AVAILABLE_TOOLS]
' }}
{{- '
[BEGIN OF TASK INSTRUCTION]
' + format_instruction + '
[END OF TASK INSTRUCTION]
<|im_end|>
' }}
{%- endif %}
{%- for message in loop_messages %}
{%- set role = message['role'] %}
{%- set content = message['content'] %}
{{- '<|im_start|>'+ role +'
' + content + '<|im_end|>
'}}
{%- endfor %}
{{- '<|im_start|>assistant
' }}
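When tools are supplied, the format instruction above constrains the model's reply to a bare JSON list of {"name", "arguments"} objects, or '[]' when no call is needed. A minimal sketch of parsing that reply on the client side (the function name and arguments shown are hypothetical placeholders, not real tools):

```python
import json

def parse_tool_calls(raw: str):
    """Parse a reply in the template's mandated format: a JSON list of
    {"name": ..., "arguments": {...}} objects, or an empty list []."""
    calls = json.loads(raw)
    if not isinstance(calls, list):
        raise ValueError("expected a JSON list of tool calls")
    for call in calls:
        if "name" not in call or "arguments" not in call:
            raise ValueError(f"malformed tool call: {call!r}")
    return calls

# Hypothetical model reply requesting one call:
reply = '[{"name": "func_name1", "arguments": {"argument1": "value1"}}]'
for call in parse_tool_calls(reply):
    print(call["name"], call["arguments"])

# An empty list means the model declined to call any tool:
assert parse_tool_calls("[]") == []
```

Anything the model emits outside this JSON shape (stray prose, a bare object instead of a list) is rejected, which is why the template insists that NO other text be included.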
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
brew install llama.cpp
Invoke the llama.cpp server or the CLI.
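For example (the exact GGUF filename below is an assumption; check the repo's file list for the real one):

```shell
# CLI: one-shot generation, pulling the quant straight from the Hugging Face repo
# (the --hf-file value is an assumed filename)
llama-cli --hf-repo IntelligentEstate/Cabbi_7B-Qwn2.6-Q4_K_M-GGUF \
  --hf-file cabbi_7b-qwn2.6-q4_k_m.gguf \
  -p "Write a Python function that reverses a string."

# Server: OpenAI-compatible endpoint; -c opens up the context window as advised above
llama-server --hf-repo IntelligentEstate/Cabbi_7B-Qwn2.6-Q4_K_M-GGUF \
  --hf-file cabbi_7b-qwn2.6-q4_k_m.gguf \
  -c 32768
```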