
Pure quantizations of Codestral-22B-v0.1 for mistral.java.

In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure; for example, the output.weights tensor is quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization can be generated from a high-precision (F32, F16, BFLOAT16) .gguf source with the quantize utility from llama.cpp as follows:

./quantize --pure ./Codestral-22B-v0.1-F32.gguf ./Codestral-22B-v0.1-Q4_0.gguf Q4_0
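To confirm that the resulting file is actually pure, one way is to list the quantization type of every tensor. The sketch below uses the gguf Python package that ships with llama.cpp (`pip install gguf`); attribute names such as `tensor_type` can differ between gguf-py versions, so treat it as an illustration rather than a canonical check.

```python
# Sketch: list every tensor's quantization type in a GGUF file.
# Assumes the `gguf` package from llama.cpp (pip install gguf);
# field names may vary across gguf-py versions.
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("Codestral-22B-v0.1-Q4_0.gguf")

counts = Counter()
for tensor in reader.tensors:
    counts[tensor.tensor_type.name] += 1
    print(f"{tensor.name}: {tensor.tensor_type.name}")

# For a pure Q4_0 file, only Q4_0 (plus F32 for small 1-D tensors such
# as norm weights, which quantize leaves unquantized) should appear.
print(counts)
```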

Original model: https://huggingface.co/mistralai/Codestral-22B-v0.1

**Note that this model does not support a system prompt.**

Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the blog post). The model can be queried:

- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific instructions
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like those in VS Code); both prompt styles are sketched below
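For reference, here is a rough sketch of what the two prompt styles look like as plain strings. The [INST]/[SUFFIX]/[PREFIX] layout follows Mistral's published convention for Codestral, but the exact special tokens, spacing, and whether the BOS token is added by the tokenizer are assumptions here; verify them against the model's tokenizer and chat template before relying on them.

```python
# Sketch of the two query styles, assuming Mistral's prompt conventions
# for Codestral (verify against the model's tokenizer config):
#   - instruct:       <s>[INST] {instruction} [/INST]
#   - fill-in-middle: <s>[SUFFIX]{suffix}[PREFIX]{prefix}  -> model emits the middle
def instruct_prompt(instruction: str) -> str:
    # No system prompt: the template only has the [INST] block.
    return f"<s>[INST] {instruction} [/INST]"

def fim_prompt(prefix: str, suffix: str) -> str:
    # The suffix is placed first; the completion fills the gap
    # between prefix and suffix.
    return f"<s>[SUFFIX]{suffix}[PREFIX]{prefix}"

print(instruct_prompt("Explain what this function does:\n\ndef f(xs): return sorted(set(xs))"))
print(fim_prompt(prefix="def fibonacci(n: int) -> int:\n    ", suffix="\n    return result\n"))
```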