# T-Rex-mini GGUF I1 (Quantized, Imatrix)

This is a quantized GGUF version of saturated-labs/T-Rex-mini, converted using llama.cpp and quantized with an importance matrix (imatrix).
## Quantization Details

- **Original Model:** saturated-labs/T-Rex-mini
- **Format:** GGUF (`.gguf`)
- **Quantization Types:** IQ4_XS, Q4_K_S, Q4_K_M, Q5_K_S, Q5_K_M, Q6_K
- **Tool Used:** llama.cpp
- **Command:**

```
./llama-quantize.exe --imatrix imatrix.dat t-rex-mini-f16.gguf t-rex-mini-QX_X_X.gguf QX_X_X
```
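The command above is run once per quantization type, substituting the type name for `QX_X_X`. A minimal loop sketch covering all the listed types (the binary name and the presence of `imatrix.dat` and the f16 GGUF in the working directory are assumptions; on Linux/macOS the binary is `llama-quantize` without `.exe`):

```shell
# Sketch: quantize the f16 GGUF into each listed type.
# Paths and binary name are assumptions; adjust for your build.
for q in IQ4_XS Q4_K_S Q4_K_M Q5_K_S Q5_K_M Q6_K; do
  ./llama-quantize.exe --imatrix imatrix.dat \
    t-rex-mini-f16.gguf "t-rex-mini-${q}.gguf" "$q"
done
```

Each iteration produces one output file named after its quantization type, e.g. `t-rex-mini-Q4_K_M.gguf`.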
## Model Tree

- Repository: Deaquay/T-Rex-mini-I1-GGUF
- Base model: saturated-labs/T-Rex-mini
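A minimal sketch of running one of these quants locally with llama.cpp's `llama-cli` (the file name, prompt, and binary path are assumptions for illustration, not part of this card):

```shell
# Hypothetical usage: run a short completion with the Q4_K_M quant.
# Skips gracefully if the model file has not been downloaded yet.
MODEL="t-rex-mini-Q4_K_M.gguf"
if [ -f "$MODEL" ]; then
  ./llama-cli -m "$MODEL" -p "Hello" -n 64
else
  echo "model file not found: $MODEL"
fi
```

Any of the other quantization types listed above can be substituted for `Q4_K_M`; lower-bit quants trade output quality for smaller files and less memory.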