datasets:
  - uonlp/CulturaX
---

# ruRoPEBert Classic Model for Russian language

This is an encoder model from **Tochka AI**, based on the **RoPEBert** architecture and using the cloning method described in [our article on Habr](https://habr.com/ru/companies/tochka/articles/797561/).

The model source code is available in the file [modeling_rope_bert.py](https://huggingface.co/Tochka-AI/ruRoPEBert-classic-base-512/blob/main/modeling_rope_bert.py).

The model is trained on contexts of **up to 512 tokens** in length, but it can be used on larger ones. For better quality on long contexts, use the version of this model with extended context: [Tochka-AI/ruRoPEBert-classic-base-2k](https://huggingface.co/Tochka-AI/ruRoPEBert-classic-base-2k).

## Usage

**Important**: To load the model correctly, you must enable downloading of code from the model's repository by passing `trust_remote_code=True`. This will download the **modeling_rope_bert.py** script and load the weights into the correct architecture.
Alternatively, you can download this script manually and use the classes from it directly to load the model.
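
If you take the manual route, loading might look like the sketch below (`RoPEBertModel` is a placeholder class name used for illustration; check **modeling_rope_bert.py** for the classes it actually defines):

```python
# Assumes modeling_rope_bert.py has been downloaded next to this script.
# RoPEBertModel is a hypothetical class name used for illustration only;
# check the file for the classes it actually defines.
from transformers import AutoTokenizer
from modeling_rope_bert import RoPEBertModel

model_name = 'Tochka-AI/ruRoPEBert-classic-base-512'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = RoPEBertModel.from_pretrained(model_name)
```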

### Basic usage (no efficient attention)

```python
from transformers import AutoTokenizer, AutoModel

model_name = 'Tochka-AI/ruRoPEBert-classic-base-512'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
```

### With SDPA (efficient attention)

```python
model = AutoModel.from_pretrained(model_name, trust_remote_code=True, attn_implementation='sdpa')
```
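
Note: SDPA here is PyTorch's `scaled_dot_product_attention`, so this option requires PyTorch 2.0 or newer.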

### Getting embeddings

The correct pooler (`mean`) is already **built into the model architecture**: it averages token embeddings according to the attention mask. You can also select the `first_token_transform` pooler type, which applies a learnable linear transformation to the first token.

```python
import torch
import torch.nn.functional as F

# Encode a batch of example texts (the sample sentences are placeholders)
test_batch = tokenizer(['Привет, чем занят?', 'Здравствуйте!'], return_tensors='pt', padding=True)
with torch.inference_mode():
    pooled_output = model(**test_batch).pooler_output

# Pairwise cosine similarity between the pooled embeddings
F.normalize(pooled_output, dim=1) @ F.normalize(pooled_output, dim=1).T
```
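
If the pooler is exposed as a config field (the key name `pooler_type` below is an assumption; check the model's config and **modeling_rope_bert.py** for the actual one), it could plausibly be switched at load time:

```python
# 'pooler_type' is an assumed config key, not confirmed by this card;
# keyword arguments that from_pretrained does not recognize are
# forwarded to the model's config.
model = AutoModel.from_pretrained(model_name,
                                  trust_remote_code=True,
                                  pooler_type='first_token_transform')
```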

### Using as classifier

To load the model with a trainable classification head on top, set the `num_labels` parameter:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(model_name, trust_remote_code=True, attn_implementation='sdpa', num_labels=4)
```
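
Assuming the remote-code class follows the standard `transformers` sequence-classification interface, the head can be trained by passing labels to the forward pass (a minimal sketch; the input text and label id are placeholders):

```python
import torch

# Placeholder batch: one text and one class id in [0, num_labels)
inputs = tokenizer(['пример текста'], return_tensors='pt')
outputs = model(**inputs, labels=torch.tensor([2]))
outputs.loss.backward()  # cross-entropy loss over num_labels classes
```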

### With RoPE scaling

Allowed types for RoPE scaling are `linear` and `dynamic`. To extend the model's context window, you need to increase the tokenizer's maximum length and add the `rope_scaling` parameter:

```python
# Extending the context window; the length and scaling factor are illustrative
tokenizer = AutoTokenizer.from_pretrained(model_name, model_max_length=2048)
model = AutoModel.from_pretrained(model_name,
                                  trust_remote_code=True,
                                  attn_implementation='sdpa',
                                  rope_scaling={'type': 'dynamic', 'factor': 4.0})
```

P.S. Don't forget to specify the dtype and device you need in order to use resources efficiently.
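
For example (a minimal sketch; the dtype and device below are placeholders for whatever your hardware supports):

```python
import torch

model = AutoModel.from_pretrained(model_name,
                                  trust_remote_code=True,
                                  attn_implementation='sdpa',
                                  torch_dtype=torch.float16).to('cuda')
```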

## Metrics

Evaluation of this model on the encodechka benchmark:

| Model name | STS | PI | NLI | SA | TI | IA | IC | ICX | NE1 | NE2 | Avg S | Avg S+W |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **ruRoPEBert-classic-base-512** | 0.695 | 0.605 | 0.396 | 0.794 | 0.975 | 0.797 | 0.769 | 0.386 | 0.410 | 0.609 | 0.677 | 0.630 |
| ai-forever/ruBert-base | 0.670 | 0.533 | 0.391 | 0.773 | 0.975 | 0.783 | 0.765 | 0.384 | - | - | 0.659 | - |

## Authors

- Sergei Bratchikov (Tochka AI Team, [HF](https://huggingface.co/hivaze), [GitHub](https://github.com/hivaze))
- Maxim Afanasiev (Tochka AI Team, [HF](https://huggingface.co/mrapplexz), [GitHub](https://github.com/mrapplexz))
 