dardem committed
Commit d833608
1 Parent(s): 4678546

Update README.md

Files changed (1)
  1. README.md +10 -5
README.md CHANGED
@@ -13,10 +13,15 @@ base_model:
 - FacebookAI/xlm-roberta-base
 ---
 
+**Model Overview**
+
+This is the model presented in the paper ["Detecting Text Formality: A Study of Text Classification Approaches"](https://aclanthology.org/2023.ranlp-1.31/).
+
 XLM-Roberta-based classifier trained on [XFORMAL](https://aclanthology.org/2021.naacl-main.256.bib) -- a multilingual formality classification dataset.
 
 
-all languages
+**Results**
+All languages
 
 | | precision | recall | f1-score | support |
 |--------------|-----------|----------|----------|---------|
@@ -27,7 +32,7 @@ all languages
 | weighted avg | 0.813068 | 0.794405 | 0.789337 | 204864 |
 
 
-en
+EN
 
 | | precision | recall | f1-score | support |
 |--------------|-----------|----------|----------|---------|
@@ -37,7 +42,7 @@ en
 | macro avg | 0.872579 | 0.844440 | 0.847556 | 41600 |
 | weighted avg | 0.867869 | 0.852139 | 0.849273 | 41600 |
 
-fr
+FR
 
 | | precision | recall | f1-score | support |
 |--------------|-----------|----------|----------|---------|
@@ -47,7 +52,7 @@ fr
 | macro avg | 0.817007 | 0.788165 | 0.788686 | 40832 |
 | weighted avg | 0.813257 | 0.795504 | 0.790711 | 40832 |
 
-it
+IT
 
 | | precision | recall | f1-score | support |
 |--------------|-----------|----------|----------|---------|
@@ -57,7 +62,7 @@ it
 | macro avg | 0.793084 | 0.760902 | 0.759995 | 40896 |
 | weighted avg | 0.789292 | 0.769024 | 0.762454 | 40896 |
 
-pt
+PT
 
 | | precision | recall | f1-score | support |
 |--------------|-----------|----------|----------|---------|
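For readers of the updated card, below is a minimal usage sketch for the classifier it describes. It assumes the checkpoint is published on the Hugging Face Hub and loads with the standard `transformers` sequence-classification API; the repository id `s-nlp/xlmr_formality_classifier` and the `formal`/`informal` label names are assumptions not stated in this commit, so substitute the actual id of this model repository.

```python
# Hedged usage sketch for the XLM-R formality classifier described in the card.
# The repository id below is an assumption (not stated in this commit); replace it
# with the actual Hub id of this model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "s-nlp/xlmr_formality_classifier"  # assumed id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

texts = [
    "I would be most grateful if you could send the report by Friday.",
    "yo send me that report asap lol",
]

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)

# Label names come from the checkpoint's config (id2label); the exact strings
# (e.g. "formal"/"informal") depend on how the model was exported.
for text, p in zip(texts, probs):
    label = model.config.id2label[int(p.argmax())]
    print(f"{label:>10}  {p.max().item():.3f}  {text}")
```

The per-language tables above report precision, recall, and F1 for this binary formal vs. informal setup; the sketch simply surfaces whichever labels the checkpoint's config declares.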