roleplaiapp/Codestral-22B-v0.1-Q3_K_S-GGUF

Repo: roleplaiapp/Codestral-22B-v0.1-Q3_K_S-GGUF
Original Model: Codestral-22B-v0.1
Organization: mistralai
Quantized File: codestral-22b-v0.1-q3_k_s.gguf
Quantization: GGUF
Quantization Method: Q3_K_S
Use Imatrix: False
Split Model: False

Overview

This is a GGUF Q3_K_S quantized version of Codestral-22B-v0.1.
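
For local use, a minimal sketch with llama-cpp-python is shown below. This is an assumption about tooling rather than the author's recommended workflow; any GGUF-compatible runtime such as llama.cpp should also work. The repo and file names match the metadata above, and the context size and GPU offload settings are illustrative.

```python
# Minimal sketch: download the quantized file and run a prompt with
# llama-cpp-python. Assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q3_K_S GGUF file from this repo.
model_path = hf_hub_download(
    repo_id="roleplaiapp/Codestral-22B-v0.1-Q3_K_S-GGUF",
    filename="codestral-22b-v0.1-q3_k_s.gguf",
)

# n_ctx and n_gpu_layers are illustrative; tune them to your hardware.
llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```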

Quantization By

I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai

Model Size: 22.2B params
Architecture: llama