Tags: Text Generation · Transformers · Safetensors · English · llama · text-generation-inference · 4-bit precision · awq
tulu-2-dpo-7B-AWQ / quant_config.json
AWQ model commit (1cfdb2b)
{
"zero_point": true,
"q_group_size": 128,
"w_bit": 4,
"version": "GEMM"
}
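These fields are the AutoAWQ quantization settings for this checkpoint: 4-bit weights (w_bit), a quantization group size of 128 (q_group_size), asymmetric quantization with zero points (zero_point), and the GEMM kernel layout (version). Below is a minimal loading sketch, not taken from this repo's model card: it assumes transformers >= 4.35 with autoawq installed on a CUDA machine, and that the checkpoint's config.json carries a matching quantization_config as the transformers AWQ integration expects.

```python
# Minimal sketch: how the quant_config.json fields map onto transformers'
# AwqConfig, and how a pre-quantized AWQ checkpoint is typically loaded.
# Assumptions (not from this repo): transformers >= 4.35, autoawq installed, CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig

# Field mapping: w_bit -> bits, q_group_size -> group_size,
# zero_point -> zero_point, version -> version ("gemm" kernel).
# Shown for illustration; when loading an already-quantized checkpoint,
# transformers reads the same settings from the model's own config.
awq_settings = AwqConfig(bits=4, group_size=128, zero_point=True, version="gemm")

model_id = "TheBloke/tulu-2-dpo-7B-AWQ"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Explain AWQ quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The GEMM kernel named in version is generally the better choice for batched or longer-context inference; AutoAWQ also provides a GEMV variant aimed at batch-size-1 decoding.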