Update README.md
README.md CHANGED
@@ -148,7 +148,7 @@ pipe(title_abs, return_all_scores=True)
 ```
 ## Evaluation Results
 
-The model was evaluated on a manually labeled test set of 828
+The model was evaluated on a manually labeled test set of 828 EMNLP 2022 papers. The following shows the average evaluation results for classifying papers according to the NLP taxonomy on three different training runs. Since the distribution of classes is very unbalanced, we report micro scores.
 
 * **F1:** 93.21
 * **Recall:** 93.99
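
The updated text reports micro scores because the label distribution is unbalanced: micro-averaging pools every (paper, label) decision into one set of counts, so frequent labels are not drowned out by rare ones the way they can be under macro-averaging. A minimal sketch of that computation, using made-up toy label matrices (not the actual 828-paper test set):

```python
def micro_scores(y_true, y_pred):
    """Micro-averaged precision, recall, and F1 for multi-label predictions.

    y_true and y_pred are lists of 0/1 rows, one row per example,
    one column per label. All (example, label) pairs are pooled into
    global TP/FP/FN counts before computing the scores.
    """
    tp = fp = fn = 0
    for true_row, pred_row in zip(y_true, y_pred):
        for t, p in zip(true_row, pred_row):
            tp += 1 if (t and p) else 0        # predicted and correct
            fp += 1 if (p and not t) else 0    # predicted but wrong
            fn += 1 if (t and not p) else 0    # missed a true label
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1


# Toy data: 3 papers, 3 taxonomy labels (illustrative only).
y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 0], [1, 0, 0]]

precision, recall, f1 = micro_scores(y_true, y_pred)
print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")  # P=1.00 R=0.60 F1=0.75
```

This matches `sklearn.metrics.f1_score(..., average="micro")` on the same matrices; the hand-rolled version is shown only to make the pooling explicit.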