---
license: mit
---

# amanpreetsingh459/llama-2-7b-chat_q4_quantized_cpp
- This repository contains a 4-bit quantized version of the Llama 2 7B Chat model.
- It can be run locally on a CPU-only system using the C++ implementation available at: https://github.com/ggerganov/llama.cpp
- The model has been tested on Ubuntu Linux with 12 GB RAM and an Intel Core i5 processor.
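
A minimal sketch of running the quantized weights through llama.cpp, assuming the standard build-and-run workflow from that repository; the model file path and name below are placeholders, not the actual filename shipped here:

```shell
# Fetch and build llama.cpp (CPU-only build, no extra flags needed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run the 4-bit quantized model interactively on CPU.
# Replace the path with the actual quantized model file from this repo.
./main -m /path/to/llama-2-7b-chat-q4.bin -n 256 --color -i
```

On a machine like the one tested above (12 GB RAM, Core i5), a 4-bit 7B model fits comfortably in memory, which is the main reason to prefer the quantized weights over the full-precision ones for local CPU inference.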