pythia-ggml / pythia-160m-q4_0.meta
{
  "model": "GptNeoX",
  "quantization": "Q4_0",
  "quantization_version": "V2",
  "container": "GGML",
  "converter": "llm-rs",
  "hash": "7236635d361e39a6bf09b48daceae99fdd70d2e1d75fd7f9ba124a48bc8f7488",
  "base_model": "EleutherAI/pythia-160m"
}
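
The .meta file accompanies the quantized weights binary (pythia-160m-q4_0.bin). A minimal sketch of how the recorded hash could be checked against a local download follows, assuming the "hash" field is a SHA-256 hex digest of the full .bin file and that both files sit in the working directory; the function name and file paths are illustrative, not part of the metadata format itself.

    # Sketch: verify a downloaded weights file against its .meta hash.
    # Assumption: the "hash" field is the SHA-256 digest of the .bin file.
    import hashlib
    import json
    from pathlib import Path

    def verify_weights(meta_path: str, weights_path: str) -> bool:
        """Return True if the weights file's SHA-256 matches the .meta record."""
        meta = json.loads(Path(meta_path).read_text())
        digest = hashlib.sha256(Path(weights_path).read_bytes()).hexdigest()
        return digest == meta["hash"]

    if __name__ == "__main__":
        ok = verify_weights("pythia-160m-q4_0.meta", "pythia-160m-q4_0.bin")
        print("hash matches" if ok else "hash mismatch")

If the digest differs, the download is likely incomplete or corrupted and should be re-fetched before loading the model.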