pythia-ggml / pythia-70m-q4_0.meta
{
  "model": "GptNeoX",
  "quantization": "Q4_0",
  "quantization_version": "V2",
  "container": "GGML",
  "converter": "llm-rs",
  "hash": "2ced6b7a8803e139cc901e1a3c6922c74e2183daa1d30bcc0159215dc1c6289d",
  "base_model": "EleutherAI/pythia-70m"
}
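
The metadata above describes the quantized weights file that accompanies it: a GPT-NeoX architecture model (EleutherAI/pythia-70m) converted with llm-rs into a GGML container using Q4_0 quantization, plus a checksum of the resulting .bin file. Below is a minimal sketch of how the checksum could be used to verify a downloaded copy of the weights. It assumes the 64-character "hash" field is a hex-encoded SHA-256 digest of "pythia-70m-q4_0.bin" and that both files sit in the current directory; the file paths are illustrative, not part of the metadata.

import hashlib
import json

# Illustrative local paths; adjust to wherever the files were downloaded.
META_PATH = "pythia-70m-q4_0.meta"
BIN_PATH = "pythia-70m-q4_0.bin"

# Read the expected digest from the metadata file.
with open(META_PATH, "r", encoding="utf-8") as f:
    meta = json.load(f)
expected = meta["hash"]

# Assumption: "hash" is a SHA-256 digest of the quantized .bin file.
sha256 = hashlib.sha256()
with open(BIN_PATH, "rb") as f:
    # Hash in 1 MiB chunks so large model files do not need to fit in memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)
actual = sha256.hexdigest()

print("expected:", expected)
print("actual:  ", actual)
print("match" if actual == expected else "MISMATCH")

If the digests match, the download is intact; a mismatch suggests a truncated or corrupted file rather than a conversion problem.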