Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8

Model Overview

  • Model Architecture: Mistral3ForConditionalGeneration
    • Input: Text / Image
    • Output: Text
  • Model Optimizations:
    • Activation quantization: INT8
    • Weight quantization: INT8
  • Intended Use Cases: The model is ideal for:
    • Fast-response conversational agents.
    • Low-latency function calling.
    • Subject matter experts via fine-tuning.
    • Local inference for hobbyists and organizations handling sensitive data.
    • Programming and math reasoning.
    • Long document understanding.
    • Visual understanding.
  • Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages not officially supported by the model.
  • Release Date: 04/15/2025
  • Version: 1.0
  • Model Developers: Red Hat (Neural Magic)

Model Optimizations

This model was obtained by quantizing the activations and weights of Mistral-Small-3.1-24B-Instruct-2503 to the INT8 data type. This optimization reduces the number of bits used to represent weights and activations from 16 to 8, reducing GPU memory requirements by approximately 50% and increasing matrix-multiply compute throughput by approximately 2x. Weight quantization also reduces disk size requirements by approximately 50%.

Only the weights and activations of the linear operators within transformer blocks are quantized. Weights are quantized with a symmetric static per-channel scheme, whereas activations are quantized with a symmetric dynamic per-token scheme. A combination of the SmoothQuant and GPTQ algorithms is applied for quantization, as implemented in the llm-compressor library.
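
As a rough illustration of this scheme (a minimal sketch of the arithmetic only, not the llm-compressor implementation), symmetric INT8 quantization with per-channel scales for weights and per-token scales for activations can be written as:

import torch

def quantize_weights_per_channel(w: torch.Tensor):
    # Symmetric static per-channel: one scale per output channel (row) of the linear weight
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0
    q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    return q, scale

def quantize_activations_per_token(x: torch.Tensor):
    # Symmetric dynamic per-token: one scale per token (row), computed at runtime
    scale = x.abs().amax(dim=-1, keepdim=True) / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return q, scale

w = torch.randn(4096, 4096)  # example linear weight
x = torch.randn(16, 4096)    # example activations for 16 tokens
qw, sw = quantize_weights_per_channel(w)
qx, sx = quantize_activations_per_token(x)

# Dequantizing recovers an approximation of the original matmul: (qx*sx) @ (qw*sw).T ≈ x @ w.T
approx = (qx.float() * sx) @ (qw.float() * sw).T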

Deployment

This model can be deployed efficiently using the vLLM backend, as shown in the example below.

from vllm import LLM, SamplingParams
from transformers import AutoProcessor

model_id = "RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8"
number_gpus = 1

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]

prompts = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

llm = LLM(model=model_id, tensor_parallel_size=number_gpus)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)

vLLM also supports OpenAI-compatible serving. See the documentation for more details.
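
For a quick check against such a server, the snippet below is a minimal sketch using the OpenAI Python client; it assumes the default local endpoint at http://localhost:8000/v1 and a placeholder API key:

from openai import OpenAI

# Assumes an OpenAI-compatible vLLM server was started with, e.g.:
#   vllm serve RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
    max_tokens=256,
    temperature=0.7,
)
print(response.choices[0].message.content)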

Deploy on Red Hat AI Inference Server
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
  --ipc=host \
  --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
  --env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
  --name=vllm \
  registry.access.redhat.com/rhaiis/rh-vllm-cuda \
  vllm serve \
  --tensor-parallel-size 8 \
  --max-model-len 32768 \
  --enforce-eager --model RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8

See Red Hat AI Inference Server documentation for more details.

Deploy on Red Hat Enterprise Linux AI
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/mistral-small-3-1-24b-instruct-2503-quantized-w8a8:1.5
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/mistral-small-3-1-24b-instruct-2503-quantized-w8a8
  
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/mistral-small-3-1-24b-instruct-2503-quantized-w8a8

See Red Hat Enterprise Linux AI documentation for more details.

Deploy on Red Hat OpenShift AI
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
 name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
 annotations:
   openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
   opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
 labels:
   opendatahub.io/dashboard: 'true'
spec:
 annotations:
   prometheus.io/port: '8080'
   prometheus.io/path: '/metrics'
 multiModel: false
 supportedModelFormats:
   - autoSelect: true
     name: vLLM
 containers:
   - name: kserve-container
     image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
     command:
       - python
       - -m
       - vllm.entrypoints.openai.api_server
     args:
       - "--port=8080"
       - "--model=/mnt/models"
       - "--served-model-name={{.Name}}"
     env:
       - name: HF_HOME
         value: /tmp/hf_home
     ports:
       - containerPort: 8080
         protocol: TCP
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  annotations:
    openshift.io/display-name: mistral-small-3-1-24b-instruct-2503-quantized-w8a8 # OPTIONAL CHANGE
    serving.kserve.io/deploymentMode: RawDeployment
  name: mistral-small-3-1-24b-instruct-2503-quantized-w8a8         # specify model name. This value will be used to invoke the model in the payload
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  predictor:
    maxReplicas: 1
    minReplicas: 1
    model:
      modelFormat:
        name: vLLM
      name: ''
      resources:
        limits:
          cpu: '2'			# this is model specific
          memory: 8Gi		# this is model specific
          nvidia.com/gpu: '1'	# this is accelerator specific
        requests:			# same comment for this block
          cpu: '1'
          memory: 4Gi
          nvidia.com/gpu: '1'
      runtime: vllm-cuda-runtime	# must match the ServingRuntime name above
      storageUri: oci://registry.redhat.io/rhelai1/modelcar-mistral-small-3-1-24b-instruct-2503-quantized-w8a8:1.5
    tolerations:
    - effect: NoSchedule
      key: nvidia.com/gpu
      operator: Exists
# make sure you are in the project where you want to deploy the model
# oc project <project-name>

# apply both resources to run the model

# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml

# Apply the InferenceService
oc apply -f inferenceservice.yaml
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.

# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{
    "model": "mistral-small-3-1-24b-instruct-2503-quantized-w8a8",
    "stream": true,
    "stream_options": {
        "include_usage": true
    },
    "max_tokens": 1,
    "messages": [
        {
            "role": "user",
            "content": "How can a bee fly when its wings are so small?"
        }
    ]
}'

See Red Hat OpenShift AI documentation for more details.

Creation

This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.

from transformers import AutoProcessor
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot
from llmcompressor.transformers.tracing import TraceableMistral3ForConditionalGeneration
from datasets import load_dataset, interleave_datasets
from PIL import Image
import io

# Load model
model_stub = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
model_name = model_stub.split("/")[-1]

num_text_samples = 1024
num_vision_samples = 1024
max_seq_len = 8192

processor = AutoProcessor.from_pretrained(model_stub)

model = TraceableMistral3ForConditionalGeneration.from_pretrained(
    model_stub,
    device_map="auto",
    torch_dtype="auto",
)

# Text-only data subset
def preprocess_text(example):
    input = {
        "text": processor.apply_chat_template(
            example["messages"],
            add_generation_prompt=False,
        ),
        "images": None,
    }
    tokenized_input = processor(**input, max_length=max_seq_len, truncation=True)
    tokenized_input["pixel_values"] = tokenized_input.get("pixel_values", None)
    tokenized_input["image_sizes"] = tokenized_input.get("image_sizes", None)
    return tokenized_input

dst = load_dataset("neuralmagic/calibration", name="LLM", split="train").select(range(num_text_samples))
dst = dst.map(preprocess_text, remove_columns=dst.column_names)

# Text + vision data subset
def preprocess_vision(example):
    messages = []
    image = None
    for message in example["messages"]:
        message_content = []
        for content in message["content"]:
            if content["type"] == "text":
                message_content.append({"type": "text", "text": content["text"]})
            else:
                message_content.append({"type": "image"})
                image = Image.open(io.BytesIO(content["image"]))

        messages.append(
            {
                "role": message["role"],
                "content": message_content,
            }
        )

    input = {
        "text": processor.apply_chat_template(
            messages,
            add_generation_prompt=False,
        ),
        "images": image,
    }
    tokenized_input = processor(**input, max_length=max_seq_len, truncation=True)
    tokenized_input["pixel_values"] = tokenized_input.get("pixel_values", None)
    tokenized_input["image_sizes"] = tokenized_input.get("image_sizes", None)
    return tokenized_input

dsv = load_dataset("neuralmagic/calibration", name="VLM", split="train").select(range(num_vision_samples))
dsv = dsv.map(preprocess_vision, remove_columns=dsv.column_names)

# Interleave subsets
ds = interleave_datasets((dsv, dst))

# Configure the quantization algorithm and scheme
recipe = [
    SmoothQuantModifier(
      smoothing_strength=0.8,
      mappings=[
          [["re:.*q_proj", "re:.*k_proj", "re:.*v_proj"], "re:.*input_layernorm"],
          [["re:.*gate_proj", "re:.*up_proj"], "re:.*post_attention_layernorm"],
          [["re:.*down_proj"], "re:.*up_proj"],
      ],
    ),
    GPTQModifier(
        ignore=["language_model.lm_head", "re:vision_tower.*", "re:multi_modal_projector.*"],
        sequential_targets=["MistralDecoderLayer"],
        dampening_frac=0.01,
        targets="Linear",
        scheme="W8A8",
    ),
]

# Define data collator
def data_collator(batch):
    import torch
    assert len(batch) == 1
    collated = {}
    for k, v in batch[0].items():
        if v is None:
            continue
        if k == "input_ids":
            collated[k] = torch.LongTensor(v)
        elif k == "pixel_values":
            collated[k] = torch.tensor(v, dtype=torch.bfloat16)
        else:
            collated[k] = torch.tensor(v)
    return collated


# Apply quantization
oneshot(
    model=model,
    dataset=ds, 
    recipe=recipe,
    max_seq_length=max_seq_len,
    data_collator=data_collator,
    num_calibration_samples=num_text_samples + num_vision_samples,
)

# Save to disk in compressed-tensors format
save_path = model_name + "-quantized.w8a8"
model.save_pretrained(save_path)
processor.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")

Evaluation

The model was evaluated on the OpenLLM leaderboard tasks (version 1), MMLU-Pro, GPQA, HumanEval, and MBPP. Non-coding tasks were evaluated with lm-evaluation-harness, whereas coding tasks were evaluated with a fork of evalplus. vLLM was used as the engine in all cases.

Evaluation details

MMLU

lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
  --tasks mmlu \
  --num_fewshot 5 \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --batch_size auto

ARC Challenge

lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
  --tasks arc_challenge \
  --num_fewshot 25 \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --batch_size auto

GSM8k

lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.9,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
  --tasks gsm8k \
  --num_fewshot 8 \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --batch_size auto

Hellaswag

lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
  --tasks hellaswag \
  --num_fewshot 10 \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --batch_size auto

Winogrande

lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
  --tasks winogrande \
  --num_fewshot 5 \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --batch_size auto

TruthfulQA

lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
  --tasks truthfulqa \
  --num_fewshot 0 \
  --apply_chat_template \
  --batch_size auto

MMLU-pro

lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.5,max_model_len=8192,enable_chunked_prefill=True,tensor_parallel_size=2 \
  --tasks mmlu_pro \
  --num_fewshot 5 \
  --apply_chat_template \
  --fewshot_as_multiturn \
  --batch_size auto

MMMU

lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.9,max_images=8,enable_chunked_prefill=True,tensor_parallel_size=2 \
  --tasks mmmu \
  --apply_chat_template \
  --batch_size auto

ChartQA

lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8",dtype=auto,gpu_memory_utilization=0.9,max_images=8,enable_chunked_prefill=True,tensor_parallel_size=2 \
  --tasks chartqa \
  --apply_chat_template \
  --batch_size auto

Coding

The commands below can be used for mbpp by simply replacing the dataset name.

Generation

python3 codegen/generate.py \
  --model RedHatAI/Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8 \
  --bs 16 \
  --temperature 0.2 \
  --n_samples 50 \
  --root "." \
  --dataset humaneval

Sanitization

python3 evalplus/sanitize.py \
  humaneval/RedHatAI--Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8_vllm_temp_0.2

Evaluation

evalplus.evaluate \
  --dataset humaneval \
  --samples humaneval/RedHatAI--Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8_vllm_temp_0.2-sanitized

Accuracy

| Category | Benchmark | Mistral-Small-3.1-24B-Instruct-2503 | Mistral-Small-3.1-24B-Instruct-2503-quantized.w8a8 (this model) | Recovery |
|----------|-----------|-------------------------------------|------------------------------------------------------------------|----------|
| OpenLLM v1 | MMLU (5-shot) | 80.67 | 80.40 | 99.7% |
| OpenLLM v1 | ARC Challenge (25-shot) | 72.78 | 73.46 | 100.9% |
| OpenLLM v1 | GSM-8K (5-shot, strict-match) | 56.68 | 61.18 | 104.3% |
| OpenLLM v1 | Hellaswag (10-shot) | 83.70 | 82.26 | 98.3% |
| OpenLLM v1 | Winogrande (5-shot) | 83.74 | 80.90 | 96.6% |
| OpenLLM v1 | TruthfulQA (0-shot, mc2) | 70.62 | 69.15 | 97.9% |
| OpenLLM v1 | Average | 75.03 | 74.56 | 99.4% |
| | MMLU-Pro (5-shot) | 67.25 | 66.54 | 98.9% |
| | GPQA CoT main (5-shot) | 42.63 | 44.64 | 104.7% |
| | GPQA CoT diamond (5-shot) | 45.96 | 41.92 | 91.2% |
| Coding | HumanEval pass@1 | 84.70 | 84.20 | 99.4% |
| Coding | HumanEval+ pass@1 | 79.50 | 81.00 | 101.9% |
| Coding | MBPP pass@1 | 71.10 | 72.10 | 101.4% |
| Coding | MBPP+ pass@1 | 60.60 | 62.10 | 100.7% |
| Vision | MMMU (0-shot) | 52.11 | 53.11 | 101.9% |
| Vision | ChartQA (0-shot) | 81.36 | 82.36 | 101.2% |
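
Recovery is the quantized model's score expressed as a percentage of the unquantized baseline; for example, the MMLU entry is 80.40 / 80.67 ≈ 99.7%.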