pythia-ggml / pythia-2.8b-q4_0.meta
Upload new model file: 'pythia-2.8b-q4_0.bin'
{
  "model": "GptNeoX",
  "quantization": "Q4_0",
  "quantization_version": "V2",
  "container": "GGML",
  "converter": "llm-rs",
  "hash": "23f579016976b2c45300374d11dea6545060b1349b21815de7bc2db618375b3f",
  "base_model": "EleutherAI/pythia-2.8b"
}
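
The metadata records the architecture (GptNeoX), the quantization scheme and its format version, the container type (GGML), the converter that produced the file (llm-rs), the original Hugging Face model it was derived from, and a content hash for the binary. Below is a minimal sketch of how that hash field could be used to verify a downloaded pythia-2.8b-q4_0.bin. It assumes the hash is a SHA-256 hex digest of the whole file (suggested by its 64-character length, but not stated in the metadata) and that both files sit in the current directory; it is a plain-Python check, not part of the llm-rs tooling.

import hashlib
import json
from pathlib import Path

meta_path = Path("pythia-2.8b-q4_0.meta")   # this metadata file
model_path = Path("pythia-2.8b-q4_0.bin")   # the model file it describes

meta = json.loads(meta_path.read_text())

# Hash the binary in 1 MiB chunks so the whole model never sits in memory.
# Assumption: the "hash" field is a SHA-256 hex digest of the .bin file.
digest = hashlib.sha256()
with model_path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

if digest.hexdigest() == meta["hash"]:
    print(f"OK: {model_path.name} matches the recorded hash "
          f"({meta['model']}, {meta['quantization']}, {meta['container']})")
else:
    print("Hash mismatch: the download may be corrupted or incomplete.")

The remaining fields (model, quantization, container, base_model) are the information a loader would need to pick the right architecture and decoder for the binary; a tool such as llm-rs presumably reads them for that purpose, though its loading API is not shown here.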