pythia-ggml / pythia-160m-q4_0.meta
Commit f6f77b1: Upload new model file: 'pythia-160m-q4_0.bin' (264 Bytes)
{
  "model": "GptNeoX",
  "quantization": "Q4_0",
  "quantization_version": "V2",
  "container": "GGML",
  "converter": "llm-rs",
  "hash": "848d706427cf96148a1d0abab614cba33ceecc80ab3482b463f4861c0c77127b",
  "base_model": "EleutherAI/pythia-160m"
}
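
The "hash" field lets you integrity-check the quantized weights after download. Below is a minimal sketch in Python, assuming the value is a SHA-256 hex digest of the accompanying 'pythia-160m-q4_0.bin' (its 64-character length is consistent with SHA-256) and that both the .meta and .bin files sit in the working directory; the file paths are placeholders.

```python
import hashlib
import json
from pathlib import Path

# Read the metadata shown above (placeholder path).
meta = json.loads(Path("pythia-160m-q4_0.meta").read_text())

# Hash the model file in 1 MiB chunks to avoid loading it all into memory.
sha256 = hashlib.sha256()
with open("pythia-160m-q4_0.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

digest = sha256.hexdigest()
# Assumption: the "hash" field is the SHA-256 digest of the .bin file.
if digest == meta["hash"]:
    print("hash matches:", digest)
else:
    print("hash mismatch:", digest, "expected", meta["hash"])
```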