Note: This model performs well only on simple coding problems with short token sequences. For better results, use the 4-bit version.
Model Details
This is Qwen/Qwen2.5-Coder-32B-Instruct quantized with AutoRound (symmetric quantization) and serialized in the GPTQ format at 2-bit precision. The model was created, tested, and evaluated by The Kaitchup.
Details on the quantization process and how to use the model are available here: The Recipe for Extremely Accurate and Cheap Quantization of 70B+ LLMs
- Developed by: The Kaitchup
- Language(s) (NLP): English
- License: Apache 2.0 license
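Since the card describes a GPTQ-serialized checkpoint, a minimal loading sketch with the `transformers` library may help. This is a hedged example, not the card's official instructions: the repo id placeholder is hypothetical (the card does not state it), and actual 2-bit GPTQ inference requires a compatible GPTQ backend (e.g. an installed `gptqmodel`/`auto-gptq`) and sufficient GPU memory.

```python
def load_quantized(model_id: str):
    """Load a GPTQ-serialized checkpoint via transformers.

    The import is deferred so this module can be inspected without the
    heavy dependency installed. `device_map="auto"` spreads the weights
    across available GPUs; a GPTQ backend must be installed for the
    2-bit kernels to work.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model


def build_chat(user_msg: str):
    """Qwen2.5 instruct models expect chat-template formatting;
    this builds the messages list passed to apply_chat_template."""
    return [{"role": "user", "content": user_msg}]


# Usage sketch (substitute the real repo id for the placeholder):
# tokenizer, model = load_quantized("<this-repo-id>")
# prompt = tokenizer.apply_chat_template(
#     build_chat("Write a Python hello world."),
#     tokenize=False, add_generation_prompt=True,
# )
```

Given the note above about short sequences, keeping prompts and `max_new_tokens` small is advisable with this 2-bit variant.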