Sentence Similarity
sentence-transformers
PyTorch
Transformers
English
t5
text-embedding
embeddings
information-retrieval
beir
text-classification
language-model
text-clustering
text-semantic-similarity
text-evaluation
prompt-retrieval
text-reranking
feature-extraction
natural_questions
ms_marco
fever
hotpot_qa
mteb
Eval Results
multi-train committed
Commit 88f06d6 · 1 Parent(s): 48e04e4
Update README.md
README.md
CHANGED
@@ -10,10 +10,10 @@ tags:
 ---
 
 # hkunlp/instructor-base
-
-
+We introduce **Instructor**👨‍🏫, an instruction-finetuned text embedding model that can generate text embeddings tailored to any task (e.g., classification, retrieval, clustering, text evaluation, etc.) and any domain (e.g., science, finance, etc.) ***by simply providing the task instruction, without any finetuning***. Instructor achieves state-of-the-art results on 70 diverse embedding tasks!
 The model is easy to use with the `sentence-transformers` library.
 
+# Quick start
 ## Installation
 ```bash
 git clone https://github.com/HKUNLP/instructor-embedding
@@ -32,14 +32,24 @@ embeddings = model.encode([[instruction,sentence,0]])
 print(embeddings)
 ```
 
+# Use cases
+We provide a few specific use cases below. For more examples and applications, refer to [our paper](https://arxiv.org/abs/2212.09741).
+## Calculate embeddings for your customized texts
+If you want to calculate customized embeddings for specific sentences, you may follow the unified template to write instructions:
+
+Represent the `domain` `text_type` for `task_objective`; Input:
+* `domain` is optional, and it specifies the domain of the text, e.g., science, finance, medicine, etc.
+* `text_type` is required, and it specifies the encoding unit, e.g., sentence, document, paragraph, etc.
+* `task_objective` is optional, and it specifies the objective of embedding, e.g., retrieve a document, classify the sentence, etc.
+
 ## Calculate Sentence similarities
 You can further use the model to compute similarities between two groups of sentences, with **customized embeddings**.
 ```python
 from sklearn.metrics.pairwise import cosine_similarity
 sentences_a = [['Represent the Science sentence; Input: ','Parton energy loss in QCD matter',0],
-               ['Represent the Financial statement; Input: ','The Federal Reserve on Wednesday raised its benchmark interest rate.',0]
+               ['Represent the Financial statement; Input: ','The Federal Reserve on Wednesday raised its benchmark interest rate.',0]]
 sentences_b = [['Represent the Science sentence; Input: ','The Chiral Phase Transition in Dissipative Dynamics', 0],
-               ['Represent the Financial statement; Input: ','The funds rose less than 0.5 per cent on Friday',0]
+               ['Represent the Financial statement; Input: ','The funds rose less than 0.5 per cent on Friday',0]]
 embeddings_a = model.encode(sentences_a)
 embeddings_b = model.encode(sentences_b)
 similarities = cosine_similarity(embeddings_a,embeddings_b)
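To make the diff above easier to follow end to end: a minimal sketch of the quick start as the updated README lays it out. It assumes the `InstructorEmbedding` package from the cloned repository is importable and, per the second hunk's header, that `encode` takes `[instruction, sentence, 0]` triples; the example title is illustrative.
```python
# Sketch only: assumes the InstructorEmbedding package from the cloned repo
# is on the path, and the [instruction, sentence, 0] input format shown in
# the second hunk's header.
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR('hkunlp/instructor-base')

instruction = 'Represent the Science title; Input: '
sentence = '3D ActionSLAM: wearable person tracking in multi-floor environments'
embeddings = model.encode([[instruction, sentence, 0]])
print(embeddings.shape)  # one embedding vector per (instruction, sentence) pair
```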
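As a concrete instance of the instruction template added in the second hunk (Represent the `domain` `text_type` for `task_objective`; Input:), a hypothetical medicine-domain instruction might look as follows; the domain, text, and objective values are made up for illustration.
```python
# Filling the template slots (illustrative values, not from the README):
#   domain         -> 'Medicine'
#   text_type      -> 'sentence'
#   task_objective -> 'retrieving a duplicate sentence'
instruction = 'Represent the Medicine sentence for retrieving a duplicate sentence; Input: '
text = 'Aspirin irreversibly inhibits the cyclooxygenase enzymes.'
embedding = model.encode([[instruction, text, 0]])  # `model` as loaded in the sketch above
```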
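Finally, the `similarities` matrix computed at the end of the second hunk has one row per entry in `sentences_a` and one column per entry in `sentences_b`; a short, assumed continuation for inspecting it:
```python
# Continuation sketch: inspect the 2x2 cosine-similarity matrix.
import numpy as np

print(similarities)  # row i, column j: similarity of sentences_a[i] and sentences_b[j]
closest = np.argmax(similarities, axis=1)
print(closest)       # index of the most similar sentences_b entry for each sentences_a entry
```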