
UForm

Multi-Modal Inference Library
For Semantic Search Applications


UForm is a Multi-Modal Inference package, designed to encode Multi-Lingual Texts, Images, and, soon, Audio, Video, and Documents, into a shared vector space!

This repository contains English and multilingual UForm models converted to the CoreML MLProgram format. Currently, only the unimodal parts of the models are converted.

Description

Each model is split into two parts: an image encoder and a text encoder.

Each checkpoint is a zip archive containing the MLProgram of the corresponding encoder.

Text encoders have the following input fields:

  • input_ids: int32
  • attention_mask: int32

and support a flexible batch size.
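As a minimal sketch of driving a text encoder with coremltools, assuming the checkpoint was unzipped to text_encoder.mlpackage and a 77-token sequence length (both assumptions, and prediction requires macOS):

import coremltools as ct
import numpy as np

# Load the unzipped MLProgram package (path is an assumption).
text_encoder = ct.models.MLModel("text_encoder.mlpackage")

# A batch of two pre-tokenized sequences; the 77-token length is assumed.
input_ids = np.zeros((2, 77), dtype=np.int32)
attention_mask = np.ones((2, 77), dtype=np.int32)

# CoreML prediction is only available on macOS.
outputs = text_encoder.predict({
    "input_ids": input_ids,
    "attention_mask": attention_mask,
})
print(outputs["features"].shape, outputs["embeddings"].shape)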

Image encoders have a single input field, image: float32, and support only a batch with a single image (due to a CoreML bug).

Both encoders return:

  • features: float32
  • embeddings: float32
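A matching sketch for the image encoder; the path image_encoder.mlpackage and the 3x224x224 float32 input layout are assumptions, and the batch dimension must stay at 1 as noted above:

import coremltools as ct
import numpy as np

# Load the unzipped MLProgram package (path is an assumption).
image_encoder = ct.models.MLModel("image_encoder.mlpackage")

# One preprocessed image; batch size is fixed to 1 (CoreML bug),
# and the 3x224x224 float32 layout is an assumption about the model.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = image_encoder.predict({"image": image})
print(outputs["features"].shape)    # pre-projection features
print(outputs["embeddings"].shape)  # shared-space embeddings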

If you want to convert a model with other parameters (e.g., fp16 precision or a different batch size range), you can use convert.py.
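For orientation only, here is a hedged, self-contained coremltools sketch of that kind of conversion: fp16 compute precision plus a flexible batch range via ct.RangeDim. The dummy module, the 1-64 batch range, and the 77-token length are placeholders, not convert.py's actual interface.

import coremltools as ct
import numpy as np
import torch
import torch.nn as nn

class DummyTextEncoder(nn.Module):
    """Stand-in for a real UForm text encoder, just to make the sketch runnable."""
    def __init__(self, vocab=30522, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)

    def forward(self, input_ids, attention_mask):
        # Mask out padding tokens, then mean-pool into one vector per sequence.
        hidden = self.embed(input_ids) * attention_mask.unsqueeze(-1)
        return hidden.mean(dim=1)

seq_len = 77  # assumed maximum sequence length
example_ids = torch.zeros(1, seq_len, dtype=torch.int32)
example_mask = torch.ones(1, seq_len, dtype=torch.int32)
traced = torch.jit.trace(DummyTextEncoder().eval(), (example_ids, example_mask))

mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    compute_precision=ct.precision.FLOAT16,  # fp16 instead of fp32
    inputs=[
        # Batch dimension flexible from 1 to 64 (assumed range).
        ct.TensorType(name="input_ids",
                      shape=(ct.RangeDim(1, 64), seq_len), dtype=np.int32),
        ct.TensorType(name="attention_mask",
                      shape=(ct.RangeDim(1, 64), seq_len), dtype=np.int32),
    ],
)
mlmodel.save("text_encoder_fp16.mlpackage")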
