# Quantization w/ imatrix of Replete-AI/Mistral-11b-v0.1
The `groups_merged.txt` file in this repo was used to generate the `imatrix.dat` (also in the repo), containing 350 importance matrix entries.
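For reference, an importance matrix like this is normally produced with llama.cpp's `imatrix` tool against the f16 GGUF. The command below is a sketch only; the f16 file name and offload setting are assumptions, not the exact invocation used for this repo:

```
# Generate an importance matrix from the calibration text.
# Model file name and -ngl value are illustrative assumptions.
./imatrix \
  -m Mistral-11b-v0.1-f16.gguf \
  -f groups_merged.txt \
  -o imatrix.dat \
  -ngl 99
```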
Quants:
```
3.3G Mistral-11b-v0.1_IQ2_XXS.gguf
4.3G Mistral-11b-v0.1_Q2_K.gguf
5.5G Mistral-11b-v0.1_Q3_K.gguf
5.9G Mistral-11b-v0.1_Q3_K_L.gguf
5.0G Mistral-11b-v0.1_Q3_K_S.gguf
6.3G Mistral-11b-v0.1_Q4_0.gguf
7.0G Mistral-11b-v0.1_Q4_1.gguf
6.7G Mistral-11b-v0.1_Q4_K.gguf
6.4G Mistral-11b-v0.1_Q4_K_S.gguf
7.6G Mistral-11b-v0.1_Q5_0.gguf
8.2G Mistral-11b-v0.1_Q5_1.gguf
7.6G Mistral-11b-v0.1_Q5_K_S.gguf
9.0G Mistral-11b-v0.1_Q6_K.gguf
```
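Each file above is then produced with llama.cpp's `quantize` tool, passing the imatrix so the low-bit types can make use of it. The following is a sketch for one quant type; the f16 source file name is an assumption:

```
# Quantize the f16 GGUF to Q4_K using the importance matrix.
# The f16 input file name is assumed, not taken from this repo.
./quantize --imatrix imatrix.dat \
  Mistral-11b-v0.1-f16.gguf \
  Mistral-11b-v0.1_Q4_K.gguf \
  Q4_K
```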
llama.cpp build info:
```
main: build = 2354 (e25fb4b1) SYCL, GGML_SYCL_F16: yes
main: built with Intel(R) oneAPI DPC++/C++ Compiler 2024.0.2 (2024.0.2.20231213) for x86_64-unknown-linux-gnu
```
Hardware:
```
2x Intel(R) Xeon(R) Platinum 8480+
4x Intel MAX 1100 GPU
```
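Any quant from the list can be run directly with llama.cpp. A minimal example follows; the prompt, context size, and offload settings are assumptions, not recommended values:

```
# Run the Q4_K quant; flags shown are illustrative defaults.
./main \
  -m Mistral-11b-v0.1_Q4_K.gguf \
  -ngl 99 \
  -c 4096 \
  -p "Write a short poem about quantization."
```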