---
license: mit
widget:
- text: "привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]да, супер, вот только проснулся"
---


This classification model is based on [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2).
It predicts the relevance and specificity of the last message in the context of a dialogue.

It is pretrained on a corpus of dialogue data from social networks and fine-tuned on [tinkoff-ai/context_similarity](https://huggingface.co/tinkoff-ai/context_similarity).
Performance on the validation split of [tinkoff-ai/context_similarity](https://huggingface.co/tinkoff-ai/context_similarity) (with thresholds tuned on the validation samples):


|             | F0.5 | ROC AUC |
|:------------|-----:|--------:|
| relevance   | 0.82 |    0.74 |
| specificity | 0.81 |    0.80 |


The model can be loaded as follows:

```python
# pip install transformers
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("tinkoff-ai/context_similarity")
model = AutoModel.from_pretrained("tinkoff-ai/context_similarity")
# model.cuda()  # uncomment to move the model to GPU
```
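The widget example in the metadata suggests the expected input format: context turns joined with `[SEP]`, followed by the candidate response after a `[RESPONSE_TOKEN]` marker. A minimal sketch of building such an input string (the helper name `build_input` is ours, not part of the model's API):

```python
def build_input(context: list[str], response: str) -> str:
    """Join dialogue context turns with [SEP] and append the candidate
    response after [RESPONSE_TOKEN], matching the widget example format."""
    return "[SEP]".join(context) + "[RESPONSE_TOKEN]" + response


text = build_input(
    ["привет", "привет!", "как дела?"],
    "да, супер, вот только проснулся",
)
print(text)
# привет[SEP]привет![SEP]как дела?[RESPONSE_TOKEN]да, супер, вот только проснулся
```

The resulting string can then be passed to the tokenizer and model loaded above to obtain relevance and specificity scores.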