Only IQ1_M and IQ2_XS use an importance matrix (iMatrix); the rest are made with the standard quantization algorithms.
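For reference, iMatrix quants in llama.cpp are produced in two steps: first compute the importance matrix from a calibration text, then pass it to the quantizer. A minimal sketch using the llama.cpp CLI tools; the model and calibration file names are placeholders, not files from this repo:

```shell
# 1. Compute the importance matrix from a calibration dataset
#    (model-f16.gguf and calibration.txt are placeholder names)
./llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize to IQ2_XS, feeding in the importance matrix
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-IQ2_XS.gguf IQ2_XS
```

Standard quants (e.g. Q4_K_M) skip step 1 and omit the `--imatrix` flag.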

Check out Meta's blog post for more information [here](https://ai.meta.com/blog/meta-llama-3/).

## Special thanks

🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.