4-bit quantization and 128 groupsize for LLaMA 7B

This is a Chinese instruction-tuned LoRA checkpoint based on llama-13B from this repo's work. It consumes approximately 5.4 GB of GPU memory.
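Group-wise 4-bit quantization (the "128 groupsize" in the title) stores each weight as a 4-bit integer, with every group of 128 consecutive values sharing one scale and offset. The sketch below is an illustrative NumPy implementation of this general idea, not the repo's actual quantization code; the function names and the min-max scaling scheme are assumptions for demonstration.

```python
import numpy as np

def quantize_4bit(weights, group_size=128):
    # Illustrative group-wise 4-bit quantization (not the repo's code).
    # Each group of `group_size` values shares one scale and offset,
    # so a 4-bit integer (0..15) spans that group's value range.
    w = weights.reshape(-1, group_size)
    w_min = w.min(axis=1, keepdims=True)
    w_max = w.max(axis=1, keepdims=True)
    scale = (w_max - w_min) / 15.0  # 15 = 2**4 - 1 quantization levels
    q = np.round((w - w_min) / scale).astype(np.uint8)  # codes in 0..15
    return q, scale, w_min

def dequantize_4bit(q, scale, w_min):
    # Reconstruct approximate float weights from 4-bit codes.
    return q.astype(np.float32) * scale + w_min

rng = np.random.default_rng(0)
w = rng.normal(size=(4096,)).astype(np.float32)
q, scale, offset = quantize_4bit(w)
w_hat = dequantize_4bit(q, scale, offset).reshape(-1)
print("max reconstruction error:", np.abs(w - w_hat).max())
```

Packing two 4-bit codes per byte (plus the per-group scales and offsets) is what brings a 13B-parameter model down to roughly the 5.4 GB memory footprint mentioned above.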

"input":the mean of life is
"output":the mean of life is 70 years.
the median age at death in africa was about what?