mistral-small-instruct-2409-ov

mistral-small-instruct-2409-ov is an OpenVINO int4 quantized version of mistral-small-instruct-2409, a research-licensed 22B general-purpose chat/instruct model from Mistral AI.

This model is licensed under the MRL (Mistral Research License), which can be found here. Under this license, the model is available for research use only.

We are including this model in this collection to test the speed and quality of a 4-bit OpenVINO quantized 22B-parameter model running locally on an AI PC.
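For local testing, a minimal sketch of loading and running the model is below. It assumes the `optimum-intel` and `transformers` packages are installed; `OVModelForCausalLM` and `AutoTokenizer` are real Optimum Intel / Transformers classes, but the prompt and generation settings shown are illustrative, not the model card's prescribed usage.

```python
MODEL_ID = "llmware/mistral-small-instruct-2409-ov"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Run one generation against the int4 OpenVINO model.

    Imports are deferred so the heavy OpenVINO/transformers stack
    loads only when inference is actually requested.
    """
    from optimum.intel import OVModelForCausalLM
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # from_pretrained detects the exported OpenVINO IR in the repo;
    # execution targets CPU/GPU/NPU via the OpenVINO runtime.
    model = OVModelForCausalLM.from_pretrained(MODEL_ID)

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize the Mistral Research License in one sentence."))
```

Note that the first call downloads the quantized weights from the Hugging Face Hub, so allow time and disk space for the 22B int4 artifacts.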

Model Description

  • Developed by: mistralai
  • Quantized by: llmware
  • Model type: mistral-small-instruct-2409-ov
  • Parameters: 22 billion
  • Model Parent: mistralai/mistral-small-instruct-2409
  • Language(s) (NLP): English
  • License: MRL - research only - not commercial
  • Uses: General use
  • Quantization: int4

Model Card Contact

llmware on github

llmware on hf

llmware website
