Update README.md
README.md CHANGED
@@ -16,7 +16,7 @@ datasets:
 
 ## Cross-Encoder for MS Marco with TinyBert
 
-This is a fine-tuned version of the model checkpointed at [cross-encoder/ms-marco-TinyBert-L-2](https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-2).
+This is a fine-tuned version of the model checkpointed at [cross-encoder/ms-marco-TinyBert-L-2-v2](https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-2-v2).
 
 It was fine-tuned on html tags and labels generated using [Fathom](https://mozilla.github.io/fathom/commands/label.html).
 
@@ -51,31 +51,26 @@ More information on how the model was trained can be found here: https://github.
 # Model Performance
 ```
 Test Performance:
-Precision: 0.
-Recall: 0.
-F1: 0.
+Precision: 0.913
+Recall: 0.872
+F1: 0.887
 
-Last Name        0.667     0.800     0.727         5
-New Password     0.929     0.938     0.933        97
-Other            0.985     0.985     0.985      1235
-Phone            1.000     0.667     0.800         3
-Zip Code         0.909     0.938     0.923        32
+              precision    recall  f1-score   support
+
+      cc-csc      0.943     0.950     0.946       139
+      cc-exp      1.000     0.883     0.938        60
+cc-exp-month      0.954     0.922     0.938        90
+ cc-exp-year      0.904     0.934     0.919        91
+     cc-name      0.835     0.989     0.905        92
+   cc-number      0.953     0.970     0.961       167
+     cc-type      0.920     0.940     0.930       183
+       email      0.918     0.927     0.922       205
+  given-name      0.727     0.421     0.533        19
+   last-name      0.833     0.588     0.690        17
+       other      0.994     0.994     0.994      8000
+ postal-code      0.980     0.951     0.965       102
+
+    accuracy                          0.985      9165
+   macro avg      0.913     0.872     0.887      9165
+weighted avg      0.986     0.985     0.985      9165
 ```
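The updated card points the model at the cross-encoder/ms-marco-TinyBERT-L-2-v2 base and classifies serialized HTML tags into the field types listed above. Below is a minimal usage sketch with the standard `transformers` sequence-classification API; the repo id, the single-sequence input format, and the example tag string are illustrative assumptions, not taken from this README.

```python
# Minimal sketch: load a fine-tuned checkpoint like this one and score a single HTML tag.
# MODEL_ID is a placeholder, not the actual repository id of this model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "path/to/fine-tuned-checkpoint"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

# A serialized HTML tag of the kind labeled with Fathom (example input, invented here).
html_tag = '<input type="text" name="card-number" autocomplete="cc-number">'

inputs = tokenizer(html_tag, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class id back to its label name (e.g. "cc-number" or "other").
predicted = model.config.id2label[logits.argmax(dim=-1).item()]
print(predicted)
```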
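The performance block follows the layout of scikit-learn's `classification_report` (per-class precision/recall/F1 plus accuracy, macro, and weighted averages). A small sketch of producing a report in that format is shown below; the use of scikit-learn is an assumption about tooling, not stated in the diff, and the labels are toy data.

```python
# Sketch: generate a report in the format shown above from true vs. predicted labels.
# scikit-learn usage is assumed here; the label lists are toy examples, not real data.
from sklearn.metrics import classification_report

y_true = ["cc-number", "email", "other", "other", "postal-code"]
y_pred = ["cc-number", "other", "other", "other", "postal-code"]

# digits=3 matches the three-decimal formatting in the README table;
# the top-level Precision/Recall/F1 lines correspond to the "macro avg" row.
print(classification_report(y_true, y_pred, digits=3))
```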