LLaMA 65B GGML

From Meta: https://ai.meta.com/blog/large-language-model-llama-meta-ai


Original llama.cpp quant methods: q4_0, q4_1, q5_0, q5_1, q8_0

These files were quantized with an older version of llama.cpp and are compatible with llama.cpp as of May 19, commit 2d5db48.

New k-quant methods: q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K

These files are compatible with llama.cpp as of June 6, commit 2d43387.
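
For reference, here is a minimal sketch of building llama.cpp at the newer of the two commits, which loads both groups of files. The clone URL and `make` step are the repository's standard build instructions; the shell and paths are illustrative:

```sh
# Build llama.cpp at (or after) the June 6 commit so that both the
# original-format and k-quant files can be loaded.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 2d43387
make
```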


Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ---- |
| llama-65b.ggmlv3.q2_K.bin | q2_K | 2 | 27.33 GB | 29.83 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| llama-65b.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 34.55 GB | 37.05 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| llama-65b.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 31.40 GB | 33.90 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| llama-65b.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 28.06 GB | 30.56 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| llama-65b.ggmlv3.q4_0.bin | q4_0 | 4 | 36.73 GB | 39.23 GB | Original quant method, 4-bit. |
| llama-65b.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB | 43.31 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| llama-65b.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 39.28 GB | 41.78 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| llama-65b.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 36.73 GB | 39.23 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| llama-65b.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB | 47.39 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage, and slower inference. |
| llama-65b.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
| llama-65b.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 46.20 GB | 48.70 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |
| llama-65b.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 44.89 GB | 47.39 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| llama-65b.ggmlv3.q6_K.bin | q6_K | 6 | 53.56 GB | 56.06 GB | New k-quant method. Uses GGML_TYPE_Q6_K (6-bit quantization) for all tensors. |
| llama-65b.ggmlv3.q8_0.bin | q8_0 | 8 | 69.37 GB | 71.87 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow inference. Not recommended for most users. |
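
Note that the "Max RAM required" figures assume CPU-only inference; offloading layers to the GPU reduces RAM use in favour of VRAM. As a usage sketch, any of the files above can be run with llama.cpp's main binary. The flags below are common mid-2023 options; the thread count, context size, sampling settings, and prompt are all illustrative:

```sh
# Illustrative inference run; point -m at whichever quant fits your RAM.
./main -m ./llama-65b.ggmlv3.q4_K_M.bin \
  -t 10 -c 2048 -n 256 --temp 0.7 --repeat_penalty 1.1 \
  -p "Write a short story about llamas:"
```

With a GPU-enabled build (e.g. cuBLAS), adding `-ngl <n>` offloads n layers to the GPU and lowers the RAM requirement accordingly.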