Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF

This repo contains q6_K GGUF quantizations of the Qwen/Qwen2.5-14B, Qwen/Qwen2.5-14B-Instruct, and Qwen/Qwen2.5-Coder-14B-Instruct models, with the output and embedding tensors kept at the higher-precision q8_0.
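A typical way to use these files is with llama.cpp. The sketch below downloads one quantized file and runs it in conversation mode; the exact `.gguf` filename is a hypothetical example, so check the repo's file list for the variant you want (base, Instruct, or Coder-Instruct).

```shell
# Download one quantized file from this repo (filename is hypothetical --
# pick the actual file for your variant from the repo's file list).
huggingface-cli download ddh0/Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF \
  Qwen2.5-14B-Instruct-q8_0-q6_K.gguf --local-dir .

# Chat with the downloaded model using llama.cpp's llama-cli in
# conversation mode (-cnv), with a system prompt via -p.
llama-cli -m Qwen2.5-14B-Instruct-q8_0-q6_K.gguf -cnv \
  -p "You are a helpful assistant."
```

Any llama.cpp-compatible runtime (e.g. llama-cpp-python or Ollama with a Modelfile) can load the same `.gguf` files.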

Format: GGUF
Model size: 14.8B params
Architecture: qwen2
Quantization: 6-bit (q6_K), with q8_0 output and embedding tensors


Model tree for ddh0/Qwen2.5-14B-All-Variants-q8_0-q6_K-GGUF

Base model: Qwen/Qwen2.5-14B (this repo is one of its quantized derivatives)