Qwen2.5 Coder, 4-bit quantized GGUF for llama.cpp and Ollama (with an accompanying Modelfile), packaged as a solver for ARC-AGI. Supervised fine-tuning (SFT) produced no appreciable improvement; an example Modelfile is sketched below.
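
A minimal Ollama Modelfile for the 4-bit GGUF might look like the following sketch. The GGUF filename, system prompt, and parameter values are illustrative assumptions, not the exact Modelfile shipped with this repository.

```
# Hypothetical filename -- point FROM at the 4-bit GGUF you downloaded
FROM ./qwen2.5-coder-7b-arc-q4_k_m.gguf

# Low temperature keeps grid outputs close to deterministic
PARAMETER temperature 0.2
PARAMETER num_ctx 8192

SYSTEM "You solve ARC-AGI tasks: given example input/output grids, produce the output grid for the test input."
```

The model can then be registered and run with Ollama's standard commands:

```
ollama create arc-solver -f Modelfile
ollama run arc-solver
```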

Qwen2.5 Coder, F16 GGUF solver for ARC-AGI. As with the 4-bit variant, SFT yielded no appreciable improvement.
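
The F16 GGUF can be run directly with llama.cpp; a typical invocation is sketched below. The file name and prompt are placeholders, not values taken from this repository.

```
# Hypothetical filename; point -m at the F16 GGUF from this repo
./llama-cli -m qwen2.5-coder-7b-arc-f16.gguf \
    --temp 0.2 -c 8192 -n 1024 \
    -p "Solve this ARC-AGI task: ..."
```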

Both variants were fine-tuned with Unsloth and then converted to GGUF.
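
For orientation, a minimal sketch of an Unsloth SFT plus GGUF export pipeline is shown below. The base checkpoint name, hyperparameters, and dataset file are assumptions for illustration, not the exact training script used for this model.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit for QLoRA-style SFT (assumed base checkpoint)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-Coder-7B-Instruct",
    max_seq_length=8192,
    load_in_4bit=True,
)

# Attach LoRA adapters (illustrative rank/alpha, not the values used here)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset of ARC-AGI prompt/solution pairs in a "text" column
dataset = load_dataset("json", data_files="arc_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Export merged weights to GGUF: 4-bit (q4_k_m) and 16-bit (f16) variants
model.save_pretrained_gguf("gguf-q4", tokenizer, quantization_method="q4_k_m")
model.save_pretrained_gguf("gguf-f16", tokenizer, quantization_method="f16")
```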

Model details:

- Format: GGUF
- Model size: 7.62B parameters
- Architecture: qwen2
- Quantizations: 4-bit and 16-bit
