---
license: apache-2.0
language:
- en
- de
- es
- fr
- it
- ja
- ko
- pl
- ru
- tr
- zh
- ar
---
# UForm

Multi-Modal Inference Library for Semantic Search Applications

---
UForm is a Multi-Modal Inference package, designed to encode Multi-Lingual Texts, Images, and, soon, Audio, Video, and Documents into a shared vector space!
This repository contains the [English](https://huggingface.co/unum-cloud/uform-vl-english/tree/main) and [multilingual](https://huggingface.co/unum-cloud/uform-vl-multilingual) UForm models converted to the CoreML MLProgram format.
Currently, only the __unimodal__ parts of the models are converted.
## Description
Each model is split into two parts, an `image-encoder` and a `text-encoder`:
* English image-encoder: [english.image-encoder.mlpackage](https://huggingface.co/unum-cloud/uform-coreml/blob/main/english.image-encoder.mlpackage.zip)
* English text-encoder: [english.text-encoder.mlpackage](https://huggingface.co/unum-cloud/uform-coreml/blob/main/english.text-encoder.mlpackage.zip)
* Multilingual image-encoder: [multilingual.image-encoder.mlpackage](https://huggingface.co/unum-cloud/uform-coreml/blob/main/multilingual.image-encoder.mlpackage.zip)
* Multilingual text-encoder: [multilingual.text-encoder.mlpackage](https://huggingface.co/unum-cloud/uform-coreml/blob/main/multilingual.text-encoder.mlpackage.zip)
Each checkpoint is a zip archive with an MLProgram of the corresponding encoder.
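For example, here is one way to fetch and unpack a checkpoint from Python. Using `huggingface_hub` is an assumption on my part; any download method works, and the archive is expected to unpack into the corresponding `.mlpackage` directory.

```python
# Hedged sketch: download one checkpoint archive and extract the MLProgram package.
import zipfile
from huggingface_hub import hf_hub_download

archive = hf_hub_download(
    repo_id="unum-cloud/uform-coreml",
    filename="english.text-encoder.mlpackage.zip",
)
with zipfile.ZipFile(archive) as zf:
    zf.extractall(".")  # expected to yield english.text-encoder.mlpackage
```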
Text encoders have the following input fields:
* `input_ids`: int32
* `attention_mask`: int32
and support a flexible batch size.
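Below is a minimal sketch of running the text encoder from Python with `coremltools` (predictions require macOS). The tokenizer repo and the `max_length` value are assumptions; use whichever tokenizer and sequence length the checkpoint was exported with.

```python
# Hedged sketch: load the unzipped MLProgram and encode a small batch of texts.
import coremltools as ct
import numpy as np
from transformers import AutoTokenizer

text_encoder = ct.models.MLModel("english.text-encoder.mlpackage")

# Assumption: the tokenizer shipped with the original PyTorch checkpoint.
tokenizer = AutoTokenizer.from_pretrained("unum-cloud/uform-vl-english")

batch = tokenizer(
    ["a cheesy pizza", "a glass of red wine"],
    padding="max_length",
    max_length=77,  # assumption: the sequence length the encoder was exported with
    truncation=True,
    return_tensors="np",
)

outputs = text_encoder.predict({
    "input_ids": batch["input_ids"].astype(np.int32),
    "attention_mask": batch["attention_mask"].astype(np.int32),
})
text_embeddings = outputs["embeddings"]  # float32, one row per input text
```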
Image encoders have a single input field `image`: float32 and support only a batch containing a single image (due to a CoreML bug).
Both encoders return:
* `features`: float32
* `embeddings`: float32
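A matching sketch for the image encoder, continuing from the text example above: it encodes one image and ranks the texts against it by cosine similarity. The 224x224 resolution and the CLIP-style normalization constants are assumptions about the pre-processing the checkpoint expects, and the batch dimension must stay at 1.

```python
# Hedged sketch: encode a single image and score it against `text_embeddings`
# from the text-encoder snippet. Pre-processing constants are assumptions.
import coremltools as ct
import numpy as np
from PIL import Image

image_encoder = ct.models.MLModel("english.image-encoder.mlpackage")

img = Image.open("pizza.jpg").convert("RGB").resize((224, 224))
pixels = np.asarray(img, dtype=np.float32) / 255.0
mean = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
std = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)
image = ((pixels - mean) / std).transpose(2, 0, 1)[np.newaxis]  # 1 x 3 x 224 x 224

image_embedding = image_encoder.predict({"image": image})["embeddings"]

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

print(cosine(image_embedding, text_embeddings))  # higher score = closer match
```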
If you want to convert a model with other parameters (e.g. fp16 precision or a different batch size range), you can use [convert_model.py](https://huggingface.co/unum-cloud/uform-coreml/blob/main/convert_model.py).
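For reference, the snippet below is not `convert_model.py` itself, just a hedged sketch of what such an export looks like with `coremltools`: a tiny placeholder module stands in for the real encoder, and the fp16 precision and batch-size range are the parameters you would adjust.

```python
# Hedged sketch of a CoreML MLProgram export. TinyTextEncoder is a placeholder,
# not the real UForm architecture, and the sequence length 77 is an assumption.
import coremltools as ct
import numpy as np
import torch
import torch.nn as nn

class TinyTextEncoder(nn.Module):
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, input_ids, attention_mask):
        hidden = self.embed(input_ids) * attention_mask.unsqueeze(-1).float()
        return hidden.mean(dim=1)  # pooled embedding

seq_len = 77
model = TinyTextEncoder().eval()
example = (torch.zeros(1, seq_len, dtype=torch.int32),
           torch.ones(1, seq_len, dtype=torch.int32))
traced = torch.jit.trace(model, example)

mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[
        ct.TensorType(name="input_ids", shape=(ct.RangeDim(1, 16), seq_len), dtype=np.int32),
        ct.TensorType(name="attention_mask", shape=(ct.RangeDim(1, 16), seq_len), dtype=np.int32),
    ],
    compute_precision=ct.precision.FLOAT16,  # export in fp16 instead of the default fp32
)
mlmodel.save("text-encoder.fp16.mlpackage")
```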