---
language:
- en
---
# Input files for generating the Importance Matrix

## Which file to use for generating the importance matrix
Not all importance matrices are equal. The best results are obtained when using a source file similar to the training data. Size also matters: the bigger the model (e.g. 70B vs 13B) and the higher the quant (e.g. Q6_K vs IQ3_XS), the bigger the source file needs to be to make an impact. Multiple input files can be combined if needed; for example:
cat technical.txt multilingual.txt wiki.txt >custom.txt
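If you are unsure whether a combined file is big enough, a rough token estimate can help. The sketch below assumes roughly 4 characters per token and reuses the `custom.txt` name from the example above; both are only illustrative.

```bash
# Rough size check: estimate how many 512-token chunks custom.txt will yield.
# The ~4 characters per token ratio is only a rule of thumb for English text.
chars=$(wc -c < custom.txt)
echo "approx. tokens:           $((chars / 4))"
echo "approx. 512-token chunks: $((chars / 4 / 512))"
```

With the `-c 512` and `--chunks 100` settings used in the commands further down, that works out to roughly 51,200 tokens of text as a minimum.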
Below are descriptions of the various input files provided, to help you choose the right one.
## Community-provided files
- 8k_random_data
- 20k_random_data
- groups_merged
- group_10_merged
- ptb.train
## exllamav2 calibration data
https://github.com/turboderp/exllamav2/tree/master/conversion/standard_cal_data
- c4
- code: Programming.
- multilingual: English, Arabic, Chinese, French, German, Japanese, Polish, Russian, Spanish, Swedish, Turkish, Hebrew, Macedonian, Norwegian, Lithuanian, Greek, Italian, Afrikaans, Dutch, Danish.
- technical: Technical writing.
- tiny: Very short stories.
- wiki: Wikipedia dump.
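If you want to try the exllamav2 calibration data, the files can be downloaded straight from the repository linked above. The raw URL pattern and the `.utf8` file extension in this sketch are assumptions about the repository layout; check the directory listing if a download fails.

```bash
# Download a few exllamav2 calibration files and combine them into one input file.
# File names and the .utf8 extension are assumed from the repository layout; verify before use.
base=https://raw.githubusercontent.com/turboderp/exllamav2/master/conversion/standard_cal_data
for f in wiki.utf8 technical.utf8 multilingual.utf8; do
  curl -fLO "$base/$f"
done
cat wiki.utf8 technical.utf8 multilingual.utf8 > custom.txt
```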
## How to quantize with an imatrix in llama.cpp
- Get one of the input files collected here, or elsewhere.
- Convert or download the model you want to quantize, in fp16 GGUF format.
- Generate an imatrix file specific to the model you want to quantize:
cd <llama.cpp directory>
./imatrix -m <model_path>/ggml-model-f16.gguf -f <matrix_training_path>/<plain_text_matrix_file> -o <output_binary_file.matrix> -t 12 -ngl 144 --chunks 100 -b 512 -c 512
# -ngl : number of layers offloaded to the GPU (recommended: the number of layers the model contains)
# -t 12 : number of threads (should probably match the number of CPU cores)
# -c 512 : context size; testing suggests 512 is a good value (default=512, 0=loaded from model)
# -b 512 : batch size (default=512)
# --chunks 100 : number of chunks to process (recommended: 100)
# --mlock : keep the model in RAM (only use if you have sufficient RAM for the whole fp16 model)
- Use the generated binary matrix file to quantize the model:
./quantize --imatrix <matrix_file> <model_path>/ggml-model-f16.gguf <output_model_path>/ggml-model-IQ4_XS.gguf IQ4_XS
Note: normal quantization also benefits from using an imatrix file. It also seems that larger input data gives better results for higher-bit quants.
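For reference, the steps above can be chained into a single script. This is only a sketch: the `<...>` placeholder paths, the `imatrix.dat` output name, and the `-t`/`-ngl` values are assumptions to adapt to your own model and hardware.

```bash
# Sketch: generate an imatrix from custom.txt, then quantize with it.
# Replace the <...> placeholders; -t should match your CPU cores and -ngl your model's layer count.
cd <llama.cpp directory>

./imatrix -m <model_path>/ggml-model-f16.gguf \
          -f <matrix_training_path>/custom.txt \
          -o <model_path>/imatrix.dat \
          -t 12 -ngl 99 --chunks 100 -b 512 -c 512

./quantize --imatrix <model_path>/imatrix.dat \
           <model_path>/ggml-model-f16.gguf \
           <output_model_path>/ggml-model-IQ4_XS.gguf IQ4_XS
```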