bwang0911 committed
Commit 86a4ad9
1 Parent(s): 440a9f4

Update README.md

Files changed (1)
  1. README.md +71 -38
README.md CHANGED
@@ -22,35 +22,98 @@ tags:
  - transformers.js
  language:
  - multilingual
+ - af
+ - am
  - ar
+ - as
+ - az
+ - be
+ - bg
  - bn
+ - br
+ - bs
+ - ca
+ - cs
+ - cy
  - da
  - de
  - el
  - en
+ - eo
  - es
+ - et
+ - eu
+ - fa
  - fi
  - fr
+ - fy
+ - ga
+ - gd
+ - gl
+ - gu
+ - ha
+ - he
  - hi
+ - hr
+ - hu
+ - hy
  - id
+ - is
  - it
  - ja
+ - jv
  - ka
+ - kk
+ - km
+ - kn
  - ko
+ - ku
+ - ky
+ - la
+ - lo
+ - lt
  - lv
+ - mg
+ - mk
+ - ml
+ - mn
+ - mr
+ - ms
+ - my
+ - ne
  - nl
  - no
+ - om
+ - or
+ - pa
  - pl
+ - ps
  - pt
  - ro
  - ru
+ - sa
+ - sd
+ - si
  - sk
+ - sl
+ - so
+ - sq
+ - sr
+ - su
  - sv
+ - sw
+ - ta
+ - te
  - th
+ - tl
  - tr
+ - ug
  - uk
  - ur
+ - uz
  - vi
+ - xh
+ - yi
  - zh
  inference: false
  ---
@@ -70,15 +133,19 @@ inference: false
  <b>Jina CLIP: your CLIP model is also your text retriever!</b>
  </p>

+ ## Quick Start
+
+ [Blog](https://jina.ai/news/jina-embeddings-v3-a-frontier-multilingual-embedding-model/#parameter-dimensions) | [Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/jinaai.jina-clip-v2) | [AWS SageMaker](https://aws.amazon.com/marketplace/pp/prodview-kdi3xkt62lo32) | [API](https://jina.ai/embeddings)
+

  ## Intended Usage & Model Info

  `jina-clip-v2` is a state-of-the-art **multilingual and multimodal (text-image) embedding model**.

  `jina-clip-v2` is a successor to the [`jina-clip-v1`](https://huggingface.co/jinaai/jina-clip-v1) model and brings new features and capabilities, such as:
- * *support for multiple languages* - the text tower now supports 30 languages, including `en`, `zh`, `de`, `ar`, `hi`, `es`
- * *embedding truncation on both image and text vectors* - both towers are trained using [Matryoshka Representation Learning](https://arxiv.org/abs/2205.13147) which enables slicing the output vectors and in as a result computation and storage costs as well
- * *visual document retrieval performance boost* - with an image resolution of 384 (compared to 224 on `jina-clip-v1`) the image tower can now capture finer visual details. This feature along with a more diverse training set enable the model to perform much better on visual document retrieval tasks, as is evident by the performance gains on the [ViDoRe Benchmark](https://huggingface.co/spaces/vidore/vidore-leaderboard), compared to `jina-clip-v1`
+ * *support for multiple languages* - the text tower now supports 100 languages, with a tuning focus on **Arabic, Bengali, Chinese, Danish, Dutch, English, Finnish, French, Georgian, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Latvian, Norwegian, Polish, Portuguese, Romanian, Russian, Slovak, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu,** and **Vietnamese**.
+ * *embedding truncation on both image and text vectors* - both towers are trained using [Matryoshka Representation Learning](https://arxiv.org/abs/2205.13147), which makes it possible to slice the output vectors and, as a result, cut computation and storage costs as well (a minimal truncation sketch follows the diff below).
+ * *visual document retrieval performance boost* - with an image resolution of 512 (compared to 224 on `jina-clip-v1`), the image tower can now capture finer visual details. This feature, along with a more diverse training set, enables the model to perform much better on visual document retrieval tasks and makes `jina-clip-v2` a strong encoder for future vLLM-based retrievers.

  Similar to our predecessor model, `jina-clip-v2` bridges the gap between text-to-text and cross-modal retrieval. Via a single vector space, `jina-clip-v2` offers state-of-the-art performance on both tasks.
  This dual capability makes it an excellent tool for multimodal retrieval-augmented generation (MuRAG) applications, enabling seamless text-to-text and text-to-image searches within a single model.
@@ -210,38 +277,4 @@ If you find `jina-clip-v2` useful in your research, please cite the following paper:
  Year = {2024},
  Eprint = {arXiv:2405.20204},
  }
- ```
-
- ## FAQ
-
- ### I encounter this problem, what should I do?
-
- ```
- ValueError: The model class you are passing has a `config_class` attribute that is not consistent with the config class you passed (model has <class 'transformers_modules.jinaai.jina-clip-implementation.7f069e2d54d609ef1ad2eb578c7bf07b5a51de41.configuration_clip.JinaCLIPConfig'> and you passed <class 'transformers_modules.jinaai.jina-clip-implementation.7f069e2d54d609ef1ad2eb578c7bf07b5a51de41.configuration_cli.JinaCLIPConfig'>. Fix one of those so they match!
- ```
-
- There was a bug in Transformers library between 4.40.x to 4.41.1. You can update transformers to >4.41.2 or <=4.40.0
-
- ### Given one query, how can I merge its text-text and text-image cosine similarity?
-
- Our emperical study shows that text-text cosine similarity is normally larger than text-image cosine similarity!
- If you want to merge two scores, we recommended 2 ways:
-
- 1. weighted average of text-text sim and text-image sim:
-
- ```python
- combined_scores = sim(text, text) + lambda * sim(text, image) # optimal lambda depends on your dataset, but in general lambda=2 can be a good choice.
- ```
-
- 2. apply z-score normalization before merging scores:
-
- ```python
- # pseudo code
- query_document_mean = np.mean(cos_sim_text_texts)
- query_document_std = np.std(cos_sim_text_texts)
- text_image_mean = np.mean(cos_sim_text_images)
- text_image_std = np.std(cos_sim_text_images)
-
- query_document_sim_normalized = (cos_sim_query_documents - query_document_mean) / query_document_std
- text_image_sim_normalized = (cos_sim_text_images - text_image_mean) / text_image_std
- ```
+ ```
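
The Quick Start and Intended Usage additions above describe a single multilingual text-image embedding space used for both text-to-text and text-to-image retrieval. Below is a minimal sketch of how that is typically exercised. It assumes `jina-clip-v2` keeps the custom-code interface documented for `jina-clip-v1` (`AutoModel` with `encode_text` / `encode_image`); the README's own usage section is authoritative, and the example texts, placeholder image path, and `cos_sim` helper are illustrative only.

```python
# A minimal, hedged sketch. It assumes jina-clip-v2 keeps the custom-code API
# documented for jina-clip-v1 (AutoModel + encode_text / encode_image); check
# the usage section of the README for the authoritative interface.
import numpy as np
from transformers import AutoModel

model = AutoModel.from_pretrained("jinaai/jina-clip-v2", trust_remote_code=True)

texts = [
    "A photo of a cat sleeping on a sofa",               # English
    "Ein Foto einer Katze, die auf einem Sofa schläft",  # German
]
images = ["cat.jpg"]  # placeholder: local path or URL of any image

text_emb = np.asarray(model.encode_text(texts))     # shape (2, d)
image_emb = np.asarray(model.encode_image(images))  # shape (1, d)

def cos_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Cosine similarity between every row of a and every row of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

print(cos_sim(text_emb, text_emb))   # text-to-text, same vector space
print(cos_sim(text_emb, image_emb))  # text-to-image, same vector space
```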
 
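The embedding-truncation bullet in the diff above refers to Matryoshka Representation Learning. A common way to use it is to keep only the first `dim` components of each output vector and re-normalize before computing similarities. The sketch below shows that pattern with random stand-in vectors; the 1024-dimensional output size is an assumption, and the officially supported truncation dimensions should be taken from the model card.

```python
# Hedged sketch of Matryoshka-style truncation: keep the first `dim` components
# of each embedding and re-normalize before computing similarities. The 1024-d
# output size and the usable truncation dimensions are assumptions here; the
# model card is authoritative.
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Slice embeddings to their first `dim` components and L2-normalize."""
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / np.clip(norms, 1e-12, None)

# Random stand-ins for real jina-clip-v2 text/image embeddings:
full = np.random.randn(4, 1024).astype(np.float32)
small = truncate_embeddings(full, dim=256)  # 4x smaller vectors
scores = small @ small.T                    # cosine similarity after truncation
print(small.shape, scores.shape)
```

Smaller dimensions shrink index size and similarity-computation cost roughly in proportion to the cut, typically at the price of some retrieval quality; the right trade-off depends on the downstream task.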