---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- arctic
- snowflake-arctic-embed
- transformers.js
license: apache-2.0
language:
- af
- ar
- az
- be
- bg
- bn
- ca
- ceb
- cs
- cy
- da
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- gl
- gu
- he
- hi
- hr
- ht
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ky
- lo
- lt
- lv
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- pa
- pl
- pt
- qu
- ro
- ru
- si
- sk
- sl
- so
- sq
- sr
- sv
- sw
- ta
- te
- th
- tl
- tr
- uk
- ur
- vi
- yo
- zh
---
<h1 align="center">Snowflake's Arctic-embed-l-v2.0</h1>
<h4 align="center">
   <p>
       <a href="#models">Models</a> |
       <a href="#usage">Usage</a> |
       <a href="#evaluation">Evaluation</a> |
       <a href="#contact">Contact</a> |
       <a href="#faq">FAQ</a> |
       <a href="#license">License</a> |
       <a href="#acknowledgement">Acknowledgement</a>
   </p>
</h4>

## Models

Snowflake's snowflake-arctic-embed-l-v2.0 is a multilingual text embedding model that focuses on providing high retrieval quality in English and across many other languages. The tables below compare retrieval performance on BEIR (English) and on the multilingual MIRACL and CLEF benchmarks.

| Model Name | # params | # non-emb params | # dimensions | BEIR (15) | MIRACL (4) | CLEF (Focused) | CLEF (Full) |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| me5 base | 560M | 303M | 1024 | 0.514 | 0.540 | 0.430 | 0.346 |
| bge-m3 (BAAI) | 568M | 303M | 1024 | 0.488 | 0.568 | 0.408 | 0.413 |
| gte (Alibaba) | 305M | 113M | 768 | 0.511 | 0.523 | 0.477 | 0.531 |
| snowflake-arctic-m | 109M | 86M | 768 | 0.549 | 0.249 | 0.344 | 0.291 |
| snowflake-arctic-l | 335M | 303M | 1024 | 0.560 | 0.348 | 0.382 | 0.337 |
| snowflake-arctic-m-v2.0 | 305M | 113M | 768 | 0.554 | 0.552 | 0.517 | 0.539 |
| snowflake-arctic-l-v2.0 | 568M | 303M | 1024 | 0.556 | 0.558 | 0.529 | 0.543 |

The v2.0 models support Matryoshka Representation Learning (MRL): their embeddings can be truncated (for example, from 1024 or 768 dimensions down to 256) with only a small relative loss in retrieval quality.

| Model | # dimensions | BEIR (15) | Relative Performance | MIRACL (4) | Relative Performance | CLEF (5) | Relative Performance | CLEF (Full) | Relative Performance |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| snowflake-arctic-l-v2.0 | 1024 | 0.556 | N/A | 0.558 | N/A | 0.529 | N/A | 0.543 | N/A |
| snowflake-arctic-l-v2.0 | 256 | 0.543 | -2.34% | 0.543 | -2.70% | 0.519 | -1.81% | 0.534 | -1.53% |
| snowflake-arctic-m-v2.0 | 768 | 0.554 | N/A | 0.552 | N/A | 0.517 | N/A | 0.539 | N/A |
| snowflake-arctic-m-v2.0 | 256 | 0.544 | -1.81% | 0.540 | -2.17% | 0.506 | -2.13% | 0.523 | -3.06% |
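
To illustrate how MRL truncation works in practice, here is a minimal sketch (not taken from this card): keep the leading dimensions of the full embedding and re-normalize. The helper `truncate_embeddings` is hypothetical, not part of any library.

```python
import torch

def truncate_embeddings(embeddings: torch.Tensor, dim: int = 256) -> torch.Tensor:
    """Hypothetical helper: keep the first `dim` dimensions of each
    MRL-trained embedding and re-normalize to unit length."""
    truncated = embeddings[:, :dim]
    return torch.nn.functional.normalize(truncated, p=2, dim=1)

# Toy usage with random vectors standing in for real model outputs.
full = torch.nn.functional.normalize(torch.randn(2, 1024), p=2, dim=1)
small = truncate_embeddings(full, dim=256)
print(small.shape)  # torch.Size([2, 256])
```

Because the truncated vectors are re-normalized, downstream dot-product scoring works unchanged, just with smaller (and cheaper to store) vectors.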

The `snowflake-arctic-embed` models achieve **state-of-the-art performance on the MTEB/BEIR leaderboard** for each of their size variants. Evaluation is performed using these [scripts](https://github.com/Snowflake-Labs/snowflake-arctic-embed/tree/main/src). As shown below, each class of model size achieves SOTA retrieval accuracy compared to other top models.

The models are trained by leveraging existing open-source text representation models, such as bert-base-uncased, and are trained in a multi-stage pipeline to optimize their retrieval performance. First, the models are trained with large batches of query-document pairs in which negatives are derived in-batch; pretraining leverages about 400m samples drawn from a mix of public datasets and proprietary web search data. Following pretraining, the models are further optimized with long training on a smaller dataset (about 1m samples) of triplets of query, positive document, and negative document derived from hard negative mining. Mining of the negatives and data curation are crucial to retrieval accuracy. A detailed technical report can be found [here](https://arxiv.org/abs/2405.05374).
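
As a hedged illustration of the in-batch-negatives idea described above (this is not the actual training code, which is not included with this card; the temperature value is illustrative), an InfoNCE-style contrastive loss treats every other document in the batch as a negative for a given query:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb: torch.Tensor,
                              doc_emb: torch.Tensor,
                              temperature: float = 0.02) -> torch.Tensor:
    """InfoNCE with in-batch negatives: row i's positive document is
    column i; all other documents in the batch serve as negatives."""
    query_emb = F.normalize(query_emb, p=2, dim=1)
    doc_emb = F.normalize(doc_emb, p=2, dim=1)
    logits = query_emb @ doc_emb.T / temperature  # (batch, batch) similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random "embeddings"; real training would use model outputs.
loss = in_batch_contrastive_loss(torch.randn(8, 1024), torch.randn(8, 1024))
print(loss.item())
```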

| Name | MTEB Retrieval Score (NDCG @ 10) | Parameters (Millions) | Embedding Dimension |
| ----------------------------------------------------------------------- | -------------------------------- | --------------------- | ------------------- |
| [snowflake-arctic-embed-xs](https://huggingface.co/Snowflake/snowflake-arctic-embed-xs/) | 50.15 | 22 | 384 |
| [snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s/) | 51.98 | 33 | 384 |
| [snowflake-arctic-embed-m](https://huggingface.co/Snowflake/snowflake-arctic-embed-m/) | 54.90 | 110 | 768 |
| [snowflake-arctic-embed-m-long](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/) | 54.83 | 137 | 768 |
| [snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l/) | 55.98 | 335 | 1024 |

## Usage

### Using Huggingface transformers

You can use the transformers package to run a snowflake-arctic-embed model, as shown below. For optimal retrieval quality, use the CLS token to embed each text portion and add the query prefix below (to the query only).

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = 'Snowflake/snowflake-arctic-embed-l-v2.0'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, add_pooling_layer=False)
model.eval()

query_prefix = 'Represent this sentence for searching relevant passages: '
queries = ['what is snowflake?', 'Where can I get the best tacos?']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)

documents = ['The Data Cloud!', 'Mexico City of Course!']
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=512)

# Compute token embeddings; the CLS token (position 0) represents each text
with torch.no_grad():
    query_embeddings = model(**query_tokens)[0][:, 0]
    document_embeddings = model(**document_tokens)[0][:, 0]

# Normalize embeddings to unit length so the dot product equals cosine similarity
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
document_embeddings = torch.nn.functional.normalize(document_embeddings, p=2, dim=1)

scores = torch.mm(query_embeddings, document_embeddings.transpose(0, 1))
for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
    # Output passages & scores
    print("Query:", query)
    for document, score in doc_score_pairs:
        print(score, document)
```
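
### Using Sentence Transformers

The tags above also list sentence-transformers. As a minimal sketch (not taken from this card), equivalent usage with that library might look like the following; the same query prefix is prepended manually here rather than assuming the model ships a configured prompt:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('Snowflake/snowflake-arctic-embed-l-v2.0')

query_prefix = 'Represent this sentence for searching relevant passages: '
queries = ['what is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']

# encode() returns unit-length vectors with normalize_embeddings=True,
# so the dot product below is cosine similarity.
query_embeddings = model.encode([query_prefix + q for q in queries], normalize_embeddings=True)
document_embeddings = model.encode(documents, normalize_embeddings=True)

scores = query_embeddings @ document_embeddings.T
for query, query_scores in zip(queries, scores):
    print("Query:", query)
    for document, score in sorted(zip(documents, query_scores), key=lambda x: x[1], reverse=True):
        print(score, document)
```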

## Contact

Feel free to open an issue or pull request if you have any questions or suggestions about this project.
You can also email Daniel Campos ([email protected]).

## License

Arctic is licensed under the [Apache-2.0 license](https://www.apache.org/licenses/LICENSE-2.0). The released models can be used for commercial purposes free of charge.