pythia-ggml / pythia-70m-q4_0.meta
Upload new model file: 'pythia-70m-q4_0.bin'
{"model": "GptNeoX", "quantization": "Q4_0", "quantization_version": "V2", "container": "GGML", "converter": "llm-rs", "hash": "dcf02bc6cf7cc685e2aaaeb7ec1cf06f859f007f739470f8095fbbcd91787899"}