UForm

Multi-Modal Inference Library
For Semantic Search Applications


UForm is a Multi-Modal Inference package, designed to encode Multi-Lingual Texts, Images, and, soon, Audio, Video, and Documents, into a shared vector space!

This is the model card of the English-only model with:

  • 12-layer BERT (6 layers for unimodal encoding and the remaining 6 for multimodal encoding)
  • ViT-L/14 (image resolution is 224x224)
  • Multiple embedding sizes: 64, 256, 512, 768

If you need a multilingual model, check this one.
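
The multiple embedding sizes listed above suggest that shorter vectors can be obtained by truncating the full 768-dimensional embedding and re-normalizing it (Matryoshka-style). That is an assumption here rather than something this card states, so treat the following as a minimal sketch and verify it against the official documentation:

import numpy as np

def truncate_embedding(embedding, dim=256):
    # Keep only the first `dim` components and re-normalize to unit length.
    # Assumes the leading dimensions are the most informative
    # (Matryoshka-style training); this is an assumption, not confirmed here.
    truncated = np.asarray(embedding)[..., :dim]
    return truncated / np.linalg.norm(truncated, axis=-1, keepdims=True)

Shorter vectors trade a little retrieval quality for smaller indexes and faster search.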

Evaluation

The following metrics were obtained with multimodal re-ranking (text-to-image retrieval):

Dataset             Recall@1   Recall@5   Recall@10
Zero-Shot Flickr    0.693      0.875      0.923
Zero-Shot MS-COCO   0.382      0.617      0.728

ImageNet-Top1: 0.518
ImageNet-Top5: 0.756

Installation

pip install uform[onnx-gpu]

Usage

To load the model:

import uform
model, processor = uform.get_model_onnx('unum-cloud/uform-vl-english-large', device='gpu', dtype='fp32')

To encode data:

from PIL import Image

text = 'a small red panda in a zoo'
image = Image.open('red_panda.jpg')

image_data = processor.preprocess_image(image)
text_data = processor.preprocess_text(text)

image_features, image_embedding = model.encode_image(image_data, return_features=True)
text_features, text_embedding = model.encode_text(text_data, return_features=True)
score, joint_embedding = model.encode_multimodal(
    image_features=image_features,
    text_features=text_features,
    attention_mask=text_data['attention_mask'],
    return_scores=True
)

There are two options to calculate semantic compatibility between an image and a text: cosine similarity and Matching Score.

Cosine Similarity

Pros:

  • Computationally cheap.
  • Only unimodal embeddings are required, and unimodal encoding is faster than joint encoding.
  • Suitable for retrieval in large collections.

Cons:

  • Takes into account only coarse-grained features.
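
For example, with the embeddings produced in the Usage section, cosine similarity is just a normalized dot product. A minimal sketch, assuming the embeddings can be converted to NumPy arrays:

import numpy as np

# Unimodal embeddings from encode_image / encode_text in the Usage section.
image_vec = np.asarray(image_embedding).squeeze()
text_vec = np.asarray(text_embedding).squeeze()

# Cosine similarity: dot product of the L2-normalized vectors.
cosine = float(
    image_vec @ text_vec / (np.linalg.norm(image_vec) * np.linalg.norm(text_vec))
)
print(f'cosine similarity: {cosine:.3f}')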

Matching Score

Unlike cosine similarity, unimodal embeddings are not enough; the joint embedding is also needed. The resulting score falls in the [0, 1] range, with 1 meaning a perfect match.

Pros:

  • Joint embedding captures fine-grained features.
  • Suitable for re-ranking, i.e. sorting retrieval results.

Cons:

  • Resource-intensive.
  • Not suitable for retrieval in large collections.
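
A common pattern is therefore to combine the two: retrieve candidates from a large collection by cosine similarity over precomputed unimodal embeddings, then re-rank only the top few with the matching score. The sketch below is illustrative only; it assumes a hypothetical candidates list of (image_id, image_features, image_embedding) tuples prepared ahead of time with encode_image, and uses only the calls shown in the Usage section:

import numpy as np

def rerank_top_k(model, text_features, text_embedding, text_data, candidates, k=10):
    # Stage 1: coarse ranking by cosine similarity of unimodal embeddings.
    text_vec = np.asarray(text_embedding).squeeze()
    text_vec = text_vec / np.linalg.norm(text_vec)
    coarse = []
    for image_id, image_features, image_embedding in candidates:
        image_vec = np.asarray(image_embedding).squeeze()
        image_vec = image_vec / np.linalg.norm(image_vec)
        coarse.append((float(text_vec @ image_vec), image_id, image_features))
    coarse.sort(key=lambda item: item[0], reverse=True)

    # Stage 2: re-rank only the top-k candidates with the joint matching score.
    reranked = []
    for _, image_id, image_features in coarse[:k]:
        score, _ = model.encode_multimodal(
            image_features=image_features,
            text_features=text_features,
            attention_mask=text_data['attention_mask'],
            return_scores=True,
        )
        # The exact return type of the score depends on the backend,
        # so coerce it to a plain Python float before sorting.
        reranked.append((float(np.asarray(score).ravel()[0]), image_id))
    reranked.sort(key=lambda item: item[0], reverse=True)
    return reranked

Keeping k small keeps the expensive joint encoding off the hot path while still improving the final ordering.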