arthurbr11 committed on
Commit 29e47dc · 1 Parent(s): 8cac9d0

Add support for Sentence Transformer

1_SpladePooling/config.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "pooling_strategy": "max",
+   "activation_function": "relu",
+   "word_embedding_dimension": null
+ }
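
The two settings above correspond to the usual SPLADE formulation: per-token MLM logits are passed through a log-saturated ReLU and then max-pooled over the sequence into one vocabulary-sized vector per text. The snippet below is a minimal sketch of that computation, not the library's actual module; the `(batch, seq_len, vocab)` logit shape and attention-mask handling are assumptions.

```python
import torch

def splade_max_pool(mlm_logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Sketch of "pooling_strategy": "max" with "activation_function": "relu".

    mlm_logits: (batch, seq_len, vocab) logits from a masked-language-model head.
    attention_mask: (batch, seq_len), 1 for real tokens and 0 for padding.
    Returns one sparse, non-negative (batch, vocab) vector per input text.
    """
    # log-saturated ReLU keeps term weights non-negative and dampens large logits
    scores = torch.log1p(torch.relu(mlm_logits))
    # zero out padding positions so they cannot win the max
    scores = scores * attention_mask.unsqueeze(-1)
    # max over the sequence dimension -> one weight per vocabulary term
    return scores.max(dim=1).values
```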
README.md CHANGED
@@ -10,6 +10,12 @@ tags:
  - query-expansion
  - document-expansion
  - bag-of-words
+ - sentence-transformers
+ - sparse-encoder
+ - sparse
+ - splade
+ pipeline_tag: feature-extraction
+ library_name: sentence-transformers
  ---

  # opensearch-neural-sparse-encoding-v2-distill
@@ -37,6 +43,91 @@ The training datasets includes MS MARCO, eli5_question_answer, squad_pairs, Wiki

  OpenSearch neural sparse feature supports learned sparse retrieval with lucene inverted index. Link: https://opensearch.org/docs/latest/query-dsl/specialized/neural-sparse/. The indexing and search can be performed with OpenSearch high-level API.

+ ## Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+
+ ```python
+ from sentence_transformers.sparse_encoder import SparseEncoder
+
+ # Download from the 🤗 Hub
+ model = SparseEncoder("opensearch-project/opensearch-neural-sparse-encoding-v2-distill")
+
+ query = "What's the weather in ny now?"
+ document = "Currently New York is rainy."
+
+ query_embed = model.encode_query(query)
+ document_embed = model.encode_document(document)
+
+ sim = model.similarity(query_embed, document_embed)
+ print(f"Similarity: {sim}")
+ # Similarity: tensor([[38.6113]])
+
+ decoded_query = model.decode(query_embed)
+ decoded_document = model.decode(document_embed)
+
+ for i in range(len(decoded_query)):
+     query_token, query_score = decoded_query[i]
+     doc_score = next((score for token, score in decoded_document if token == query_token), 0)
+     if doc_score != 0:
+         print(f"Token: {query_token}, Query score: {query_score:.4f}, Document score: {doc_score:.4f}")
+
+ # Token: york, Query score: 2.7273, Document score: 2.9088
+ # Token: now, Query score: 2.5734, Document score: 0.9208
+ # Token: ny, Query score: 2.3895, Document score: 1.7237
+ # Token: weather, Query score: 2.2184, Document score: 1.2368
+ # Token: current, Query score: 1.8693, Document score: 1.4146
+ # Token: today, Query score: 1.5888, Document score: 0.7450
+ # Token: sunny, Query score: 1.4704, Document score: 0.9247
+ # Token: nyc, Query score: 1.4374, Document score: 1.9737
+ # Token: currently, Query score: 1.4347, Document score: 1.6019
+ # Token: climate, Query score: 1.1605, Document score: 0.9794
+ # Token: upstate, Query score: 1.0944, Document score: 0.7141
+ # Token: forecast, Query score: 1.0471, Document score: 0.5519
+ # Token: verve, Query score: 0.9268, Document score: 0.6692
+ # Token: huh, Query score: 0.9126, Document score: 0.4486
+ # Token: greene, Query score: 0.8960, Document score: 0.7706
+ # Token: picturesque, Query score: 0.8779, Document score: 0.7120
+ # Token: pleasantly, Query score: 0.8471, Document score: 0.4183
+ # Token: windy, Query score: 0.8079, Document score: 0.2140
+ # Token: favorable, Query score: 0.7537, Document score: 0.4925
+ # Token: rain, Query score: 0.7519, Document score: 2.1456
+ # Token: skies, Query score: 0.7277, Document score: 0.3818
+ # Token: lena, Query score: 0.6995, Document score: 0.8593
+ # Token: sunshine, Query score: 0.6895, Document score: 0.2410
+ # Token: johnny, Query score: 0.6621, Document score: 0.3016
+ # Token: skyline, Query score: 0.6604, Document score: 0.1933
+ # Token: sasha, Query score: 0.6117, Document score: 0.2197
+ # Token: vibe, Query score: 0.5962, Document score: 0.0414
+ # Token: hardly, Query score: 0.5381, Document score: 0.7560
+ # Token: prevailing, Query score: 0.4583, Document score: 0.4243
+ # Token: unpredictable, Query score: 0.4539, Document score: 0.5073
+ # Token: presently, Query score: 0.4350, Document score: 0.8463
+ # Token: hail, Query score: 0.3674, Document score: 0.2496
+ # Token: shivered, Query score: 0.3324, Document score: 0.5506
+ # Token: wind, Query score: 0.3281, Document score: 0.1964
+ # Token: rudy, Query score: 0.3052, Document score: 0.5785
+ # Token: looming, Query score: 0.2797, Document score: 0.0357
+ # Token: atmospheric, Query score: 0.2712, Document score: 0.0870
+ # Token: vicky, Query score: 0.2471, Document score: 0.3490
+ # Token: sandy, Query score: 0.2247, Document score: 0.2383
+ # Token: crowded, Query score: 0.2154, Document score: 0.5737
+ # Token: chilly, Query score: 0.1723, Document score: 0.1857
+ # Token: blizzard, Query score: 0.1700, Document score: 0.4110
+ # Token: ##cken, Query score: 0.1183, Document score: 0.0613
+ # Token: unrest, Query score: 0.0923, Document score: 0.6363
+ # Token: russ, Query score: 0.0624, Document score: 0.2127
+ # Token: blackout, Query score: 0.0558, Document score: 0.5542
+ # Token: kahn, Query score: 0.0549, Document score: 0.1589
+ # Token: 2020, Query score: 0.0160, Document score: 0.0566
+ # Token: nighttime, Query score: 0.0125, Document score: 0.3753
+ ```

  ## Usage (HuggingFace)
  This model is supposed to run inside OpenSearch cluster. But you can also use it outside the cluster, with HuggingFace models API.
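
The `## Usage (HuggingFace)` section kept as context above is not expanded in this diff. For orientation only, the Transformers-only path for this model family typically looks roughly like the sketch below; this is a hedged reconstruction, not the README's verbatim code, and details such as special-token masking may differ.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "opensearch-project/opensearch-neural-sparse-encoding-v2-distill"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

def encode(texts):
    # tokenize, run the MLM head, then SPLADE max pooling over the sequence
    features = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**features).logits                      # (batch, seq_len, vocab)
    scores = torch.log1p(torch.relu(logits))
    scores = scores * features["attention_mask"].unsqueeze(-1)
    return scores.max(dim=1).values                            # (batch, vocab)

vectors = encode(["What's the weather in ny now?", "Currently New York is rainy."])
print(torch.dot(vectors[0], vectors[1]))  # dot-product relevance score
```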
config_sentence_transformers.json ADDED
@@ -0,0 +1,14 @@
+ {
+   "model_type": "SparseEncoder",
+   "__version__": {
+     "sentence_transformers": "5.0.0",
+     "transformers": "4.50.3",
+     "pytorch": "2.6.0+cu124"
+   },
+   "prompts": {
+     "query": "",
+     "document": ""
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "dot"
+ }
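
Two fields are worth noting: both prompts are empty strings, so `encode_query` and `encode_document` prepend no prefix text, and `"similarity_fn_name": "dot"` means `model.similarity()` is a plain, unnormalized dot product over the vocabulary dimension. A small sanity check, assuming the sparse vocabulary-sized outputs shown in the README example:

```python
from sentence_transformers.sparse_encoder import SparseEncoder

model = SparseEncoder("opensearch-project/opensearch-neural-sparse-encoding-v2-distill")

q = model.encode_query("What's the weather in ny now?")
d = model.encode_document("Currently New York is rainy.")

# "similarity_fn_name": "dot" -> no cosine normalization, just a dot product
manual = (q.to_dense() * d.to_dense()).sum()
print(manual, model.similarity(q, d))  # the two scores should agree (~38.61)
```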
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.sparse_encoder.models.MLMTransformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_SpladePooling",
+     "type": "sentence_transformers.sparse_encoder.models.SpladePooling"
+   }
+ ]
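
`modules.json` is what lets `SparseEncoder(...)` rebuild the pipeline: module 0 is the masked-language-model transformer stored at the repository root (empty `path`), and module 1 is the SPLADE pooling layer configured under `1_SpladePooling/`. Loading by repo id is equivalent to composing the modules by hand, roughly as sketched below; the constructor arguments are inferred from the config files in this commit and should be treated as assumptions.

```python
from sentence_transformers.sparse_encoder import SparseEncoder
from sentence_transformers.sparse_encoder.models import MLMTransformer, SpladePooling

# module 0: masked-language-model transformer at the repository root (path "")
mlm = MLMTransformer("opensearch-project/opensearch-neural-sparse-encoding-v2-distill")
# module 1: SPLADE pooling, mirroring 1_SpladePooling/config.json
pooling = SpladePooling(pooling_strategy="max", activation_function="relu")

model = SparseEncoder(modules=[mlm, pooling])
print(model)
```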
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
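
These two keys govern tokenization: inputs longer than 512 tokens are truncated before pooling, and text is not lowercased. The limit is exposed at runtime, as in this quick check (assuming the standard Sentence Transformers property):

```python
from sentence_transformers.sparse_encoder import SparseEncoder

model = SparseEncoder("opensearch-project/opensearch-neural-sparse-encoding-v2-distill")
print(model.max_seq_length)  # 512 -> longer inputs are truncated before encoding
```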