---
license: mit
language:
- ru
metrics:
- f1
- roc_auc
- precision
- recall
pipeline_tag: text-classification
tags:
- sentiment-analysis
- multi-class-classification
- sentiment analysis
- rubert
- sentiment
- bert
- tiny
- russian
- multiclass
- classification
datasets:
- sismetanin/rureviews
- RuSentiment
- LinisCrowd2015
- LinisCrowd2016
- KaggleRussianNews
---
This is [RuBERT-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) model fine-tuned for __sentiment classification__ of short __Russian__ texts.
The task is a __multi-class classification__ with the following labels:
```yaml
0: neutral
1: positive
2: negative
```
Mapping from English to Russian labels:
```yaml
neutral: нейтральный
positive: позитивный
negative: негативный
```
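The numeric ids and label names above can be combined into a small lookup helper. This is only an illustrative sketch (the `ID2LABEL` and `LABEL_RU` dictionaries are copied from the mappings above; the helper function itself is not part of the model):

```python
# Class id to English label, as listed in the model card above.
ID2LABEL = {0: "neutral", 1: "positive", 2: "negative"}

# English label to Russian name, as listed above.
LABEL_RU = {
    "neutral": "нейтральный",
    "positive": "позитивный",
    "negative": "негативный",
}

def id_to_russian(class_id: int) -> str:
    """Translate a predicted class id into its Russian label name."""
    return LABEL_RU[ID2LABEL[class_id]]

print(id_to_russian(1))  # позитивный
```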
## Usage
```python
from transformers import pipeline
model = pipeline(model="seara/rubert-tiny2-russian-sentiment")
model("Привет, ты мне нравишься!")
# [{'label': 'positive', 'score': 0.9398769736289978}]
```
## Dataset
This model was trained on the union of the following datasets:
- Kaggle Russian News Dataset
- Linis Crowd 2015
- Linis Crowd 2016
- RuReviews
- RuSentiment
An overview of the training data can be found in [S. Smetanin's GitHub repository](https://github.com/sismetanin/sentiment-analysis-in-russian).
__Download links for all Russian sentiment datasets collected by Smetanin can be found in this [repository](https://github.com/searayeah/russian-sentiment-emotion-datasets).__
## Training
Training was done in this [project](https://github.com/searayeah/bert-russian-sentiment-emotion) with the following parameters:
```yaml
tokenizer.max_length: 512
batch_size: 64
optimizer: adam
lr: 0.00001
weight_decay: 0
epochs: 5
```
Train/validation/test splits are 80%/10%/10%.
## Eval results (on test split)
|         |neutral|positive|negative|macro avg|weighted avg|
|---------|-------|--------|--------|---------|------------|
|precision|0.70   |0.84    |0.74    |0.76     |0.75        |
|recall   |0.74   |0.83    |0.69    |0.75     |0.75        |
|f1-score |0.72   |0.83    |0.71    |0.75     |0.75        |
|auc-roc  |0.85   |0.95    |0.91    |0.90     |0.90        |
|support  |5196   |3831    |3599    |12626    |12626       |
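The macro and weighted averages in the table follow the usual definitions: the macro average is the unweighted mean over the three classes, while the weighted average weights each class by its support. A quick sketch that recomputes both from the per-class f1 scores above:

```python
# Per-class f1 scores and supports, taken from the evaluation table above.
f1 = {"neutral": 0.72, "positive": 0.83, "negative": 0.71}
support = {"neutral": 5196, "positive": 3831, "negative": 3599}

# Macro average: unweighted mean over classes.
macro_f1 = sum(f1.values()) / len(f1)

# Weighted average: mean weighted by each class's support.
total = sum(support.values())
weighted_f1 = sum(f1[c] * support[c] for c in f1) / total

print(round(macro_f1, 2))     # 0.75
print(round(weighted_f1, 2))  # 0.75
```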