pythia-ggml / pythia-160m-q4_0-ggjt.meta
Upload new model file: 'pythia-160m-q4_0-ggjt.bin' (commit abbcc3f)
{
  "model": "GptNeoX",
  "quantization": "Q4_0",
  "quantization_version": "V2",
  "container": "GGJT",
  "converter": "llm-rs",
  "hash": "32c3b5d293da24f9bcd0c6372c056789025a62290bdb91b2264633d57b11e879",
  "base_model": "EleutherAI/pythia-160m"
}
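A downstream loader can use this sidecar metadata to sanity-check the model file before loading it. The sketch below is a minimal illustration, not part of llm-rs: it assumes the 64-hex-character `hash` field is a SHA-256 digest of the `.bin` file (an assumption inferred from the digest length, not confirmed by the source), and the `verify_model_file` helper name is hypothetical.

```python
import hashlib
import json

# The metadata file shown above, inlined here for illustration.
META = """
{
  "model": "GptNeoX",
  "quantization": "Q4_0",
  "quantization_version": "V2",
  "container": "GGJT",
  "converter": "llm-rs",
  "hash": "32c3b5d293da24f9bcd0c6372c056789025a62290bdb91b2264633d57b11e879",
  "base_model": "EleutherAI/pythia-160m"
}
"""


def verify_model_file(meta: dict, model_path: str) -> bool:
    """Compare the metadata's hash field (assumed to be a SHA-256 hex
    digest of the model binary) against the file on disk."""
    digest = hashlib.sha256()
    with open(model_path, "rb") as f:
        # Hash in 1 MiB chunks so large model files are not read into memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == meta["hash"]


meta = json.loads(META)
```

A loader would call `verify_model_file(meta, "pythia-160m-q4_0-ggjt.bin")` and refuse to proceed on a mismatch, catching truncated or corrupted downloads before attempting to parse the GGJT container.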