granite-vision-3.1-2b-preview

Model Summary: granite-vision-3.1-2b-preview is a compact and efficient vision-language model, specifically designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more. The model was trained on a meticulously curated instruction-following dataset, comprising diverse public datasets and synthetic datasets tailored to support a wide range of document understanding and general image tasks. It was trained by fine-tuning a Granite large language model (https://huggingface.co/ibm-granite/granite-3.1-2b-instruct) with both image and text modalities.

Evaluations:

We evaluated Granite Vision 3.1 alongside other vision-language models (VLMs) in the 1B-4B parameter range using the standard lmms-eval evaluation framework. The evaluation spanned multiple public benchmarks, with particular emphasis on document understanding tasks, while also including general visual question-answering benchmarks.

| Benchmark | Molmo-E (1B) | InternVL2 (2B) | Phi3v (4B) | Phi3.5v (4B) | Granite Vision 3.1 (2B) |
|---|---|---|---|---|---|
| Document benchmarks | | | | | |
| DocVQA | 0.66 | 0.87 | 0.87 | 0.88 | 0.88 |
| ChartQA | 0.60 | 0.75 | 0.81 | 0.82 | 0.86 |
| TextVQA | 0.62 | 0.72 | 0.69 | 0.70 | 0.76 |
| AI2D | 0.63 | 0.74 | 0.79 | 0.79 | 0.78 |
| InfoVQA | 0.44 | 0.58 | 0.55 | 0.61 | 0.63 |
| OCRBench | 0.65 | 0.75 | 0.64 | 0.64 | 0.75 |
| LiveXiv VQA | 0.47 | 0.51 | 0.61 | - | 0.61 |
| LiveXiv TQA | 0.36 | 0.38 | 0.48 | - | 0.55 |
| Other benchmarks | | | | | |
| MMMU | 0.32 | 0.35 | 0.42 | 0.44 | 0.35 |
| VQAv2 | 0.57 | 0.75 | 0.76 | 0.77 | 0.81 |
| RealWorldQA | 0.55 | 0.34 | 0.60 | 0.58 | 0.65 |
| VizWiz VQA | 0.49 | 0.46 | 0.57 | 0.57 | 0.64 |
| OK VQA | 0.40 | 0.44 | 0.51 | 0.53 | 0.57 |
  • Paper: coming soon
  • Release Date: Jan 31st, 2025
  • License: Apache 2.0

Supported Languages: English

Intended Use: The model is intended to be used in enterprise applications that involve processing visual and text data. In particular, the model is well-suited for a range of visual document understanding tasks, such as analyzing tables and charts, performing optical character recognition (OCR), and answering questions based on document content. Additionally, its capabilities extend to general image understanding, enabling it to be applied to a broader range of business applications. For tasks that exclusively involve text-based input, we suggest using our Granite large language models, which are optimized for text-only processing and offer superior performance compared to this model.

Generation: This is a simple example of how to use the granite-vision-3.1-2b-preview model.

Install the following libraries:

pip install torch torchvision torchaudio
pip install vllm==0.6.6

Then, run the snippet below, which uses vLLM for inference.

from vllm import LLM, SamplingParams
from vllm.assets.image import ImageAsset

model_path = "ibm-granite/granite-vision-3.1-2b-preview"

model = LLM(
    model=model_path,
    limit_mm_per_prompt={"image": 1},
)

sampling_params = SamplingParams(
    temperature=0.2,
    max_tokens=64,
)

# Define the question we want to answer and format the prompt
image_token = "<image>"
system_prompt = "<|system|>\nA chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n"

question = "What type of flower is this?"
prompt = f"{system_prompt}<|user|>\n{image_token}\n{question}\n<|assistant|>\n"
# Load a sample image that ships with vLLM (a cherry blossom photo) and convert it to RGB.
image = ImageAsset("cherry_blossom").pil_image.convert("RGB")
print(image)

# Build the inputs to vLLM; the image is passed as `multi_modal_data`.
inputs = {
    "prompt": prompt,
    "multi_modal_data": {
        "image": image,
    }
}

outputs = model.generate(inputs, sampling_params=sampling_params)
print(f"Generated text: {outputs[0].outputs[0].text}")

Model Architecture:

The architecture of granite-vision-3.1-2b-preview consists of the following components:

(1) Vision encoder: SigLIP (https://huggingface.co/docs/transformers/en/model_doc/siglip).

(2) Vision-language connector: a two-layer MLP with a GELU activation function.

(3) Large language model: granite-3.1-2b-instruct with 128k context length (https://huggingface.co/ibm-granite/granite-3.1-2b-instruct).

We built upon LLaVA (https://llava-vl.github.io) to train our model. We use multi-layer encoder features and a denser grid resolution in AnyRes to enhance the model's ability to understand nuanced visual content, which is essential for accurately interpreting document images.
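
As a rough way to see these three components in code, the sketch below loads the published checkpoint with Hugging Face transformers and prints its configuration and top-level modules. It assumes the checkpoint is exposed through the generic AutoConfig / AutoModelForVision2Seq interfaces (a LLaVA-Next-style layout); the exact module names printed may differ across transformers versions.

# Hedged sketch: inspect the three architecture components described above,
# assuming the checkpoint loads through transformers' generic Auto classes.
from transformers import AutoConfig, AutoModelForVision2Seq

model_path = "ibm-granite/granite-vision-3.1-2b-preview"

# The config should expose a vision branch (the SigLIP encoder) and a text
# branch (the granite-3.1-2b-instruct language model).
config = AutoConfig.from_pretrained(model_path)
print("vision encoder:", config.vision_config.model_type)
print("language model:", config.text_config.model_type)

# Loading the full model lets us list its top-level submodules, which should
# include the vision tower, the MLP projector (connector), and the LLM.
model = AutoModelForVision2Seq.from_pretrained(model_path)
for name, module in model.named_children():
    print(name, "->", type(module).__name__)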

Training Data:

Overall, our training data comprises two key sources: (1) publicly available datasets, and (2) internally created synthetic data targeting specific capabilities, including document understanding tasks. A detailed attribution of datasets can be found in the technical report (coming soon).

Infrastructure: We train Granite Vision using IBM's supercomputing cluster, Blue Vela, which is outfitted with NVIDIA H100 GPUs. This cluster provides a scalable and efficient infrastructure for training our models across thousands of GPUs.

Ethical Considerations and Limitations: The use of large vision and language models involves risks and ethical considerations that people must be aware of, including but not limited to bias and fairness, misinformation, and autonomous decision-making. granite-vision-3.1-2b-preview is no exception in this regard. Although our alignment processes include safety considerations, the model may in some cases produce inaccurate, biased, or unsafe responses to user prompts. It also remains uncertain whether smaller models are more susceptible to hallucination, for example by copying text verbatim from the training dataset, owing to their reduced size and memorization capacity. This is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigation in this domain. Regarding ethics, a latent risk associated with all large language models is their malicious use. We urge the community to use granite-vision-3.1-2b-preview with ethical intentions and in a responsible way. We recommend using this model for document understanding tasks, and note that more general vision tasks may pose higher inherent risks of triggering biased or harmful output.
