legraphista committed • Commit fdadf96 • 1 Parent(s): 50556f5
Upload README.md with huggingface_hub
README.md
CHANGED
@@ -96,7 +96,7 @@ Link: [here](https://huggingface.co/legraphista/xLAM-7b-r-IMat-GGUF/blob/main/im
96    | [xLAM-7b-r.IQ2_M.gguf](https://huggingface.co/legraphista/xLAM-7b-r-IMat-GGUF/blob/main/xLAM-7b-r.IQ2_M.gguf) | IQ2_M | 2.50GB | ✅ Available | 🟢 IMatrix | 📦 No |
97    | [xLAM-7b-r.IQ2_S.gguf](https://huggingface.co/legraphista/xLAM-7b-r-IMat-GGUF/blob/main/xLAM-7b-r.IQ2_S.gguf) | IQ2_S | 2.31GB | ✅ Available | 🟢 IMatrix | 📦 No |
98    | [xLAM-7b-r.IQ2_XS.gguf](https://huggingface.co/legraphista/xLAM-7b-r-IMat-GGUF/blob/main/xLAM-7b-r.IQ2_XS.gguf) | IQ2_XS | 2.20GB | ✅ Available | 🟢 IMatrix | 📦 No |
99  - | xLAM-7b-r.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
99  + | [xLAM-7b-r.IQ2_XXS.gguf](https://huggingface.co/legraphista/xLAM-7b-r-IMat-GGUF/blob/main/xLAM-7b-r.IQ2_XXS.gguf) | IQ2_XXS | 1.99GB | ✅ Available | 🟢 IMatrix | 📦 No |
100   | xLAM-7b-r.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
101   | xLAM-7b-r.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |
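For reference, a minimal sketch of how one of the quants listed above could be fetched with huggingface_hub (the library named in the commit message). The repo id and filename are taken from the table's download links; the script itself is an illustrative assumption, not part of this commit.

```python
# Minimal sketch: download the newly added IQ2_XXS quant (1.99GB) from this repo.
# Assumes `pip install huggingface_hub`; repo id and filename come from the table above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="legraphista/xLAM-7b-r-IMat-GGUF",
    filename="xLAM-7b-r.IQ2_XXS.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```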