Commit ab75ef7 • Update README.md
ThomasBaruzier committed • 1 Parent(s): f01d1d6

README.md CHANGED
@@ -192,6 +192,9 @@ extra_gated_button_content: Submit

# Llama.cpp imatrix quantizations of meta-llama/Meta-Llama-3.1-8B-Instruct

+<!-- Better pic but I would like to talk about my quants on LinkedIn so yeah <img src="https://cdn-uploads.huggingface.co/production/uploads/646410e04bf9122922289dc7/xlkSJli8IQ9KoTAuTKOF2.png" alt="llama" width="30%"/> -->
+<img src="https://cdn-uploads.huggingface.co/production/uploads/646410e04bf9122922289dc7/LQUL7YII8okA8CG54mQSI.jpeg" alt="llama" width="60%"/>
+
Using llama.cpp commit [b5e9546](https://github.com/ggerganov/llama.cpp/commit/b5e95468b1676e1e5c9d80d1eeeb26f542a38f42) for quantization, featuring the Llama 3.1 RoPE scaling factors. This fixes quality issues when using 8k-128k context lengths.

Original model: [https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
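For reference, here is a minimal sketch of how one of these GGUF quants might be downloaded and loaded with llama-cpp-python at a long context length. The repo id and filename below are assumptions for illustration only and are not taken from this README; substitute the actual quant file you want.

```python
# Minimal sketch (not from this README): fetch a GGUF quant and load it with
# llama-cpp-python at a long context length. Repo id and filename are assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="ThomasBaruzier/Meta-Llama-3.1-8B-Instruct-GGUF",  # assumed repo id
    filename="Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",         # assumed filename
)

# The Llama 3.1 RoPE scaling factors included since llama.cpp commit b5e9546 are
# what let large n_ctx values (up to 128k) work without the quality loss noted above.
llm = Llama(model_path=gguf_path, n_ctx=32768)

out = llm("Q: What does an importance matrix (imatrix) do in llama.cpp?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```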