wzkariampuzha committed · Commit 40b0209 · Parent(s): fff4533
Update README.md

README.md CHANGED
@@ -12,7 +12,7 @@ licenses:
 multilinguality:
 - monolingual
 size_categories:
--
+- 100K<n<1M
 task_categories:
 - structure-prediction
 task_ids:
@@ -84,7 +84,7 @@ Comparing the programmatically labeled test set to the manually corrected test s
 |:----------------:|:------------------------:|:---------:|:------:|:-----:|
 | Entity-Level     | Overall                  | 0.559     | 0.662  | 0.606 |
 |                  | Location                 | 0.597     | 0.661  | 0.627 |
-|                  | Epidemiologic
+|                  | Epidemiologic Type       | 0.854     | 0.911  | 0.882 |
 |                  | Epidemiologic Rate       | 0.175     | 0.255  | 0.207 |
 | Token-Level      | Overall                  | 0.805     | 0.710  | 0.755 |
 |                  | Location                 | 0.868     | 0.713  | 0.783 |
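The precision/recall/F1 triplets in the table above are internally consistent: F1 is the harmonic mean of precision and recall, and most rows reproduce to three decimals from the rounded P and R shown (rows can be off by one in the last digit, since the table rounds P and R before display). A minimal check:

```python
def f1(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Entity-level "Overall" row from the table: P=0.559, R=0.662
print(round(f1(0.559, 0.662), 3))  # 0.606
# Newly added "Epidemiologic Type" row: P=0.854, R=0.911
print(round(f1(0.854, 0.911), 3))  # 0.882
```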
@@ -134,7 +134,7 @@ Assisting 25-30 millions Americans with rare diseases. Additionally can be usefu
 - The [long short-term memory recurrent neural network epi classifier](https://pubmed.ncbi.nlm.nih.gov/34457147/) was used to sift the 7699 rare disease abstracts. This model had a hold-out validation F1 score of 0.886 and a test F1 (which was compared against a GARD curator who used full-text articles to determine truth-value of epidemiological abstract) of 0.701. With 620 epi abstracts filtered from 7699 original rare disease abstracts, there are likely several false positives and false negative epi abstracts.
 - Tokenization was done by spaCy which may be a limitation (or not) for current and future models trained on this set.
 - The programmatic labeling was very imprecise as seen by Table 1. This is likely the largest limitation of the [BioBERT-based model](https://huggingface.co/ncats/EpiExtract4GARD) trained on this set.
-- The test set was difficult to validate even for general NCATS researchers, which is why we relied on a rare disease expert to verify our modifications. As this task of epidemiological information identification is quite difficult for non-expert humans to complete, this set, and especially a gold-standard dataset in the possible future, represents a challenging gauntlet for NLP systems to compete on.
+- The test set was difficult to validate even for general NCATS researchers, which is why we relied on a rare disease expert to verify our modifications. As this task of epidemiological information identification is quite difficult for non-expert humans to complete, this set, and especially a gold-standard dataset in the possible future, represents a challenging gauntlet for NLP systems, especially those focusing on numeracy, to compete on.
 
 ## Additional Information
 
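The table in the second hunk reports both entity-level and token-level scores, which can diverge sharply (e.g. 0.559 vs 0.805 overall precision): entity-level matching typically requires an exact span-and-type match, while token-level scoring credits every correctly labeled token. A sketch of the distinction, using hypothetical BIO tags and made-up label names (`LOC`, `EPI`), not the dataset's actual scorer:

```python
def entity_spans(tags):
    """Extract (start, end, type) spans from a BIO tag sequence."""
    spans, start, typ = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" flushes the last open span
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.append((start, i, typ))
                start, typ = None, None
            if tag.startswith("B-"):
                start, typ = i, tag[2:]
    return spans

# Hypothetical gold vs predicted tag sequences for a 5-token sentence.
gold = ["B-LOC", "I-LOC", "O", "B-EPI", "I-EPI"]
pred = ["B-LOC", "I-LOC", "O", "B-EPI", "O"]

# Token-level: 4 of 5 tags agree.
token_acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
# Entity-level: only 1 of 2 gold entities is matched exactly (span + type).
matched = set(entity_spans(gold)) & set(entity_spans(pred))
print(token_acc, len(matched), len(entity_spans(gold)))  # 0.8 1 2
```

One clipped entity boundary costs a whole entity at entity level but only one token at token level, which is consistent with the gap between the two halves of the table.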