Commit 016f2b5 by kimihailv
1 Parent(s): 1ec7966

Create README.md

Files changed (1):
  1. README.md +113 -0

README.md ADDED
---
license: apache-2.0
---
<h1 align="center">UForm</h1>
<h3 align="center">
Multi-Modal Inference Library<br/>
For Semantic Search Applications<br/>
</h3>

---

UForm is a Multi-Modal Inference package, designed to encode Multi-Lingual Texts, Images, and, soon, Audio, Video, and Documents, into a shared vector space!
It extends the `transformers` package to support Mid-fusion Models.

This is the model card of the __English-only model__ with:

* a 4-layer BERT text encoder (2 layers for unimodal encoding, the remaining layers for multimodal encoding)
* a ViT-B/16 image encoder (224x224 input resolution)

If you need a multilingual model, check [this model](https://huggingface.co/unum-cloud/uform-vl-multilingual).

## Installation

```bash
pip install uform
```

## Usage

To load the model:

```python
import uform

model = uform.get_model('unum-cloud/uform-vl-english')
```
38
+
39
+ To encode data:
40
+
41
+ ```python
42
+ from PIL import Image
43
+
44
+ text = 'a small red panda in a zoo'
45
+ image = Image.open('red_panda.jpg')
46
+
47
+ image_data = model.preprocess_image(image)
48
+ text_data = model.preprocess_text(text)
49
+
50
+ image_embedding = model.encode_image(image_data)
51
+ text_embedding = model.encode_text(text_data)
52
+ joint_embedding = model.encode_multimodal(image=image_data, text=text_data)
53
+ ```
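
Encoding is a pure inference step, so it can optionally be wrapped in `torch.no_grad()` to avoid tracking gradients. This is a minimal sketch and assumes the model behaves like a regular PyTorch module (the Cosine Similarity section below already relies on `torch`):

```python
import torch

# Gradient tracking is unnecessary for encoding; disabling it saves memory.
with torch.no_grad():
    image_embedding = model.encode_image(image_data)
    text_embedding = model.encode_text(text_data)
    joint_embedding = model.encode_multimodal(image=image_data, text=text_data)
```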
54
+
55
+ To get features:
56
+
57
+ ```python
58
+ image_features, image_embedding = model.encode_image(image_data, return_features=True)
59
+ text_features, text_embedding = model.encode_text(text_data, return_features=True)
60
+ ```
61
+
62
+ These features can later be used to produce joint multimodal encodings faster, as the first layers of the transformer can be skipped:
63
+
64
+ ```python
65
+ joint_embedding = model.encode_multimodal(
66
+ image_features=image_features,
67
+ text_features=text_features,
68
+ attention_mask=text_data['attention_mask']
69
+ )
70
+ ```
71
+
72
+ There are two options to calculate semantic compatibility between an image and a text: [Cosine Similarity](#cosine-similarity) and [Matching Score](#matching-score).
73
+
74
+ ### Cosine Similarity
75
+
76
+ ```python
77
+ import torch.nn.functional as F
78
+
79
+ similarity = F.cosine_similarity(image_embedding, text_embedding)
80
+ ```
81
+
82
+ The `similarity` will belong to the `[-1, 1]` range, `1` meaning the absolute match.

__Pros__:

- Computationally cheap.
- Only unimodal embeddings are required, and unimodal encoding is faster than joint encoding.
- Suitable for retrieval in large collections (see the retrieval sketch below).

__Cons__:

- Takes into account only coarse-grained features.
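
Because only unimodal embeddings are involved, retrieval over a collection reduces to a normalized matrix product. Below is a minimal sketch of that pattern; the `captions` list and the `k=2` cut-off are illustrative placeholders, and only the `preprocess_*`/`encode_*` calls shown above are used.

```python
import torch
import torch.nn.functional as F

# Hypothetical toy collection; in practice these are your corpus items.
captions = [
    'a small red panda in a zoo',
    'a man riding a bicycle down a street',
    'a bowl of ramen on a wooden table',
]

# Encode and L2-normalize the collection once, offline.
collection = torch.cat([model.encode_text(model.preprocess_text(c)) for c in captions])
collection = F.normalize(collection, dim=-1)

# At query time, a single matrix product yields cosine similarities to every item.
query = F.normalize(image_embedding, dim=-1)
scores = query @ collection.T             # shape: [1, len(captions)]
top_scores, top_ids = scores.topk(k=2)    # best-matching captions first
```

For real collections the normalized embeddings would typically live in an approximate-nearest-neighbour index rather than a dense tensor, but the scoring stays the same.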

### Matching Score

Unlike cosine similarity, unimodal embeddings are not enough here.
A joint embedding is needed, and the resulting `score` will belong to the `[0, 1]` range, `1` meaning a perfect match.

```python
score = model.get_matching_scores(joint_embedding)
```

__Pros__:

- Joint embeddings capture fine-grained features.
- Suitable for re-ranking, i.e. sorting retrieval results (see the sketch below).

__Cons__:

- Resource-intensive.
- Not suitable for retrieval in large collections.
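
In practice the two scores complement each other: cosine similarity over unimodal embeddings selects a short list of candidates from a large collection, and the matching score re-ranks that short list. The sketch below continues the hypothetical `captions`/`scores` retrieval example from the Cosine Similarity section and reuses the feature-based `encode_multimodal` call shown above.

```python
# Stage 1: coarse retrieval with unimodal embeddings (cheap, scales to large collections).
top_scores, top_ids = scores.topk(k=2)

# Stage 2: re-rank the retrieved candidates with the joint encoder (accurate, but costly).
# The image features are computed once and reused for every candidate pair.
image_features, image_embedding = model.encode_image(image_data, return_features=True)

reranked = []
for idx in top_ids[0].tolist():
    text_data = model.preprocess_text(captions[idx])
    text_features, _ = model.encode_text(text_data, return_features=True)
    joint_embedding = model.encode_multimodal(
        image_features=image_features,
        text_features=text_features,
        attention_mask=text_data['attention_mask']
    )
    matching_score = model.get_matching_scores(joint_embedding)
    # Assumes get_matching_scores returns a single-element tensor for a single pair.
    reranked.append((captions[idx], float(matching_score)))

# Best match first.
reranked.sort(key=lambda pair: pair[1], reverse=True)
```

Reusing `image_features` keeps the expensive joint encoding proportional to the number of re-ranked candidates rather than to the collection size.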