khvatov committed
Commit 70b91f2 · 1 Parent(s): 9b20b73

Update README.md

Files changed (1): README.md +50 -2
---
tags:
- russian
- classification
- toxicity
widget:
- text: Нелепые лохи недовольны всегда и всем
---

BERT-based classifier for detecting toxicity in Russian texts, fine-tuned from [rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2).

Merged datasets:

- [Russian Language Toxic Comments from 2ch.hk and pikabu.ru](https://www.kaggle.com/datasets/blackmoon/russian-language-toxic-comments)
- [Toxic Russian Comments from ok.ru](https://www.kaggle.com/datasets/alexandersemiletov/toxic-russian-comments)

The merged dataset was split into train, validation, and test sets in an 80/10/10 proportion.
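An 80/10/10 split like this can be done with a shuffle-and-slice helper; a minimal sketch (the function name and seed are illustrative, not from the card):

```python
import random

def split_80_10_10(rows, seed=42):
    # Shuffle deterministically, then slice into 80% train / 10% val / 10% test
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return rows[:n_train], rows[n_train:n_train + n_val], rows[n_train + n_val:]

train, val, test = split_80_10_10(range(1000))
print(len(train), len(val), len(test))  # 800 100 100
```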
The metrics on the test split are as follows:
|             |precision|recall|f1-score|support|
|-------------|---------|------|--------|-------|
|0 (non-toxic)|0.9827   |0.9827|0.9827  |21216  |
|1 (toxic)    |0.9272   |0.9274|0.9273  |5054   |
|accuracy     |         |      |0.9720  |26270  |
|macro avg    |0.9550   |0.9550|0.9550  |26270  |
|weighted avg |0.9720   |0.9720|0.9720  |26270  |

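The average rows of the table follow from the per-class rows; a quick sanity check in plain Python:

```python
# Per-class f1 and support, copied from the table above
f1 = {0: 0.9827, 1: 0.9273}
support = {0: 21216, 1: 5054}
n = sum(support.values())  # 26270 test examples

macro = (f1[0] + f1[1]) / 2                          # unweighted mean over classes
weighted = sum(f1[c] * support[c] for c in f1) / n   # support-weighted mean
print(round(macro, 4), round(weighted, 4))  # 0.955 0.972
```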

### Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

PATH = 'khvatov/ru_toxicity_detector'
tokenizer = AutoTokenizer.from_pretrained(PATH)
model = AutoModelForSequenceClassification.from_pretrained(PATH)

# To run on GPU instead, use:
# if torch.cuda.is_available():
#     model.cuda()
model.to(torch.device("cpu"))


def get_toxicity_probs(text):
    # Returns [P(non-toxic), P(toxic)] for a single text
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors='pt', truncation=True, padding=True).to(model.device)
        proba = torch.nn.functional.softmax(model(**inputs).logits, dim=1).cpu().numpy()
    return proba[0]


TEXT = "Марк был хороший"  # "Mark was good"
print(f'text = {TEXT}, probs={get_toxicity_probs(TEXT)}')
# text = Марк был хороший, probs=[0.9940585 0.00594147]
```
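The returned vector is `[P(non-toxic), P(toxic)]`, as the example output suggests, so turning it into a binary label only needs a threshold; a minimal sketch, where the `toxic_label` helper and the 0.5 cutoff are illustrative choices, not part of the model:

```python
def toxic_label(probs, threshold=0.5):
    # probs = [P(non-toxic), P(toxic)]; the threshold is a free choice
    return int(probs[1] >= threshold)

print(toxic_label([0.9940585, 0.00594147]))  # 0 (non-toxic)
print(toxic_label([0.03, 0.97]))             # 1 (toxic)
```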

### Training
The model was trained with the Adam optimizer, a learning rate of 2e-5, and a batch size of 32 for 3 epochs.
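Assuming the test split's 26270 examples are 10% of the merged data, this schedule works out to roughly the following number of optimizer steps (a back-of-the-envelope sketch, not a figure from the card):

```python
import math

test_size = 26270              # support of the test split (10% of the data)
total = test_size * 10         # approximate size of the merged dataset
train_size = total * 8 // 10   # 80% train split
batch_size, epochs = 32, 3

steps_per_epoch = math.ceil(train_size / batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 6568 19704
```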