W8A8-INT8 GPTQ + SmoothQuant quantization of cyberagent/Mistral-Nemo-Japanese-Instruct-2408, produced with LLM Compressor 0.4.0 using augmxnt/ultra-orca-boros-en-ja-v1 as the calibration set.
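A reproduction sketch of the quantization recipe. The card states only the tool (LLM Compressor 0.4.0), the scheme (W8A8-INT8 via SmoothQuant + GPTQ), and the calibration set; the smoothing strength, sample count, and sequence length below are assumed, commonly used defaults, not published values.

```python
import os

# Assumed LLM Compressor recipe in its YAML recipe format: SmoothQuant
# migrates activation outliers into the weights, then GPTQ quantizes
# Linear layers to the W8A8 (8-bit weight, 8-bit activation) scheme.
# lm_head is left unquantized, as is typical. Hyperparameter values here
# are assumptions, not the checkpoint's published settings.
RECIPE = """
quant_stage:
  quant_modifiers:
    SmoothQuantModifier:
      smoothing_strength: 0.8
    GPTQModifier:
      targets: ["Linear"]
      ignore: ["lm_head"]
      scheme: W8A8
"""

# The actual run needs `pip install llmcompressor==0.4.0` plus enough GPU
# memory for the 12.2B-parameter base model, so it is gated behind an
# environment variable here.
if os.environ.get("RUN_QUANT") == "1":
    from llmcompressor import oneshot

    oneshot(
        model="cyberagent/Mistral-Nemo-Japanese-Instruct-2408",
        dataset="augmxnt/ultra-orca-boros-en-ja-v1",  # calibration set from the card
        recipe=RECIPE,
        max_seq_length=2048,           # assumed
        num_calibration_samples=512,   # assumed
        output_dir="Mistral-Nemo-Japanese-Instruct-2408-SQ-GPTQ-W8A8-INT8",
    )
```

The resulting checkpoint stores INT8 weights alongside BF16 scales, matching the tensor types listed below.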

Format: Safetensors
Model size: 12.2B params
Tensor types: BF16, I8

Model: shisa-ai/Mistral-Nemo-Japanese-Instruct-2408-SQ-GPTQ-W8A8-INT8
