Tochka-AI committed on
Commit 901bfb3
1 Parent(s): d847760

Create README.md

Files changed (1): README.md (+94, −0)
README.md ADDED

---
library_name: transformers
language:
- ru
pipeline_tag: feature-extraction
datasets:
- uonlp/CulturaX
---

# ruRoPEBert Classic Model for Russian language

This is an encoder model from **Tochka AI** based on the **RoPEBert** architecture, using the cloning method described in [our article on Habr](https://habr.com/ru/companies/tochka/articles/797561/).

The [CulturaX](https://huggingface.co/papers/2309.09400) dataset was used for training. The **hivaze/ru-e5-base** model (only the English and Russian embeddings of **intfloat/multilingual-e5-base**) was used as the original; this model surpasses it and all other models in quality (at the time of creation) according to the `S+W` score of the [encodechka](https://github.com/avidale/encodechka) benchmark.

The model source code is available in the file [modeling_rope_bert.py](https://huggingface.co/Tochka-AI/ruRoPEBert-classic-base-2k/blob/main/modeling_rope_bert.py).

The model is trained on contexts of **up to 2048 tokens** in length, but it can be used on larger contexts (see "With RoPE scaling" below).

## Usage

**Important**: To load the model correctly, you must enable downloading of code from the model's repository by passing `trust_remote_code=True`; this downloads the **modeling_rope_bert.py** script and loads the weights into the correct architecture.
Alternatively, you can download this script manually and use the classes from it directly to load the model, as sketched below.
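
A minimal sketch of the manual route, assuming you have saved **modeling_rope_bert.py** next to your code; the class name `RoPEBertModel` is an assumption for illustration, so check the script for the classes it actually defines:

```python
from transformers import AutoTokenizer
from modeling_rope_bert import RoPEBertModel  # local copy of the script; class name is an assumption

model_name = 'Tochka-AI/ruRoPEBert-e5-base-2k'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = RoPEBertModel.from_pretrained(model_name)  # load weights through the locally imported class
```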

### Basic usage (no efficient attention)

```python
from transformers import AutoTokenizer, AutoModel

model_name = 'Tochka-AI/ruRoPEBert-e5-base-2k'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
```

### With SDPA (efficient attention)

```python
model = AutoModel.from_pretrained(model_name, trust_remote_code=True, attn_implementation='sdpa')
```

### Getting embeddings

The correct pooler (`mean`), which averages token embeddings using the attention mask, is already **built into the model architecture**. You can also select the `first_token_transform` pooler type, which performs a learnable linear transformation on the first token.

To change the built-in pooler implementation, use the `pooler_type` parameter of `AutoModel.from_pretrained`, for example:
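
A minimal sketch (it reuses `model_name` from above; `'first_token_transform'` is the value named in the previous paragraph):

```python
# Hedged sketch: load the same model, but with the first-token pooler
# ('first_token_transform' as described above; 'mean' is the built-in default).
model_ft = AutoModel.from_pretrained(
    model_name,
    trust_remote_code=True,
    pooler_type='first_token_transform'
)
```

Embeddings for a batch of texts can then be obtained from `pooler_output`: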

```python
import torch

test_batch = tokenizer.batch_encode_plus(["Привет, чем занят?", "Здравствуйте, чем вы занимаетесь?"], return_tensors='pt', padding=True)
with torch.inference_mode():
    pooled_output = model(**test_batch).pooler_output
```

In addition, you can compute cosine similarities between the texts in the batch using normalization and matrix multiplication:

```python
import torch.nn.functional as F

cosine_similarities = F.normalize(pooled_output, dim=1) @ F.normalize(pooled_output, dim=1).T
```

### Using as classifier

To load the model with a trainable classification head on top (set the `num_labels` parameter to the number of classes):

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    trust_remote_code=True,
    attn_implementation='sdpa',
    num_labels=4
)
```
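
A hedged usage sketch, reusing the tokenizer and `model_name` from above; note that the classification head is newly initialized and needs fine-tuning before its predictions are meaningful:

```python
import torch

# Feed a tokenized batch through the classification model loaded above.
# The head is randomly initialized until the model is fine-tuned.
inputs = tokenizer(["Привет, чем занят?"], return_tensors='pt', padding=True)
with torch.inference_mode():
    logits = model(**inputs).logits  # shape: (batch_size, num_labels)
```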

### With RoPE scaling

Allowed types for RoPE scaling are `linear` and `dynamic`. To extend the model's context window, you need to increase the tokenizer's max length and add the `rope_scaling` parameter.

If you want to scale your model's context by 2x:

```python
tokenizer.model_max_length = 4096
model = AutoModel.from_pretrained(
    model_name,
    trust_remote_code=True,
    attn_implementation='sdpa',
    rope_scaling={'type': 'dynamic', 'factor': 2.0}
)  # factor=2.0 for 2x scaling, 4.0 for 4x, etc.
```
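
A hedged variant using the `linear` type mentioned above (same call, different `rope_scaling` value):

```python
# Sketch: linear RoPE scaling instead of dynamic, with the same 2x factor.
model = AutoModel.from_pretrained(
    model_name,
    trust_remote_code=True,
    attn_implementation='sdpa',
    rope_scaling={'type': 'linear', 'factor': 2.0}
)
```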

P.S. Don't forget to specify the dtype and device you need in order to use resources efficiently.
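
For example, a minimal sketch assuming a CUDA device with bfloat16 support (adjust the dtype and device to your hardware):

```python
import torch

# Hedged sketch: half-precision weights on GPU; pick the dtype/device your hardware supports.
model = AutoModel.from_pretrained(
    model_name,
    trust_remote_code=True,
    attn_implementation='sdpa',
    torch_dtype=torch.bfloat16
).to('cuda')
```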

## Metrics

Evaluation of this model on the encodechka benchmark:

| Model name | STS | PI | NLI | SA | TI | IA | IC | ICX | NE1 | NE2 | Avg S (no NE) | Avg S+W (with NE) |
|---------------------|-----|------|-----|-----|-----|-----|-----|-----|-----|-----|---------------|-------------------|
| ruRoPEBert-e5-base-512 | 0.793 | 0.704 | 0.457 | 0.803 | 0.970 | 0.788 | 0.802 | 0.749 | 0.328 | 0.396 | 0.758 | 0.679 |
| **ruRoPEBert-e5-base-2k** | 0.787 | 0.708 | 0.460 | 0.804 | 0.970 | 0.792 | 0.803 | 0.749 | 0.402 | 0.423 | 0.759 | 0.689 |
| intfloat/multilingual-e5-base | 0.834 | 0.704 | 0.458 | 0.795 | 0.964 | 0.782 | 0.803 | 0.740 | 0.234 | 0.373 | 0.76 | 0.668 |

## Authors

- Sergei Bratchikov (Tochka AI Team, [HF](https://huggingface.co/hivaze), [GitHub](https://github.com/hivaze))
- Maxim Afanasiev (Tochka AI Team, [HF](https://huggingface.co/mrapplexz), [GitHub](https://github.com/mrapplexz))