---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- gguf
- imatrix
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
---

# Quant Infos

- ~Requires latest master + [Rope Scaling PR](https://github.com/ggerganov/llama.cpp/pull/8676).~ Rope scaling is merged, so just a recent master is required now.
- [@ubergarm](https://huggingface.co/ubergarm) explained how to set up your llama.cpp [here](https://huggingface.co/qwp4w3hyb/Meta-Llama-3.1-8B-Instruct-iMat-GGUF/discussions/1#66a26b63de4e162dd84c22c5)
- quants done with an importance matrix to reduce quantization loss
- Quantized ggufs & imatrix are generated from the hf bf16 weights, staying in bf16 the whole way: `safetensors bf16 -> gguf bf16 -> quant` for *optimal* quant loss (an illustrative pipeline is sketched at the end of this card).
- Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S
- experimental custom quant types
  - `_L` with `--output-tensor-type f16 --token-embedding-type f16`, which supposedly leads to better accuracy.
- Imatrix generated with [this](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) multi-purpose dataset by [bartowski](https://huggingface.co/bartowski).

```
./imatrix -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix
```

# Original Model Card:

TODO
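
# Example quantization pipeline (sketch)

A minimal sketch of the `safetensors bf16 -> gguf bf16 -> quant` flow described above, using the standard llama.cpp tools. Exact binary names (`convert_hf_to_gguf.py`, `llama-imatrix`, `llama-quantize`), paths, and the chosen quant type are assumptions for illustration; they depend on your llama.cpp checkout and build, and are not necessarily the exact commands used for these quants.

```
# 1. Convert the HF safetensors weights to a bf16 gguf
python convert_hf_to_gguf.py ./Meta-Llama-3.1-70B-Instruct \
  --outtype bf16 --outfile $model_name-bf16.gguf

# 2. Generate the importance matrix from the bf16 gguf (same step as the command above)
./llama-imatrix -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix

# 3. Quantize with the imatrix; the two f16 flags correspond to the experimental `_L` variants
./llama-quantize --imatrix $model_name.imatrix \
  --output-tensor-type f16 --token-embedding-type f16 \
  $model_name-bf16.gguf $model_name-Q4_K_M_L.gguf Q4_K_M
```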