---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
paperswithcode_id: linnaeus
pretty_name: LINNAEUS
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B
'2': I
config_name: linnaeus
splits:
- name: train
num_bytes: 4772417
num_examples: 11936
- name: validation
num_bytes: 1592823
num_examples: 4079
- name: test
num_bytes: 2802877
num_examples: 7143
download_size: 18204624
dataset_size: 9168117
---
# Dataset Card for LINNAEUS
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [linnaeus](http://linnaeus.sourceforge.net/)
- **Repository:** https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/linnaeus-IOB
- **Paper:** [BMC Bioinformatics](https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-11-85)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The LINNAEUS corpus consists of 100 full-text documents randomly selected from the PMC Open Access (PMC OA)
document set. All mentions of species terms were manually annotated and normalized to the NCBI
taxonomy IDs of the intended species.
The original LINNAEUS corpus is available in a TAB-separated standoff format. The resource does not define training,
development or test subsets.
We converted the corpus into BioNLP shared task standoff format using a custom script, split it into 50-, 17-, and 33-
document training, development and test sets, and then converted these into the CoNLL format using standoff2conll.
As a full-text corpus, LINNAEUS contains comparatively frequent
non-ASCII characters, which were mapped to ASCII using the
standoff2conll -a option.
The conversion was highly accurate, but due to sentence-splitting errors within entity mentions,
the converted data contains four more annotations than the source data (100.09% of the original count).
99.77% of the names in the original annotation match names in the converted data.
### Supported Tasks and Leaderboards
This dataset is used for species Named Entity Recognition.
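A minimal sketch of how one might set up a model for this task, assuming the Hugging Face `datasets` and `transformers` libraries; the `bert-base-cased` checkpoint is only an illustrative choice, not part of the original release:
```python
# Sketch: size a token classification head to the three IOB labels of LINNAEUS.
# Assumes `datasets` and `transformers` are installed; the checkpoint is illustrative.
from datasets import load_dataset
from transformers import AutoModelForTokenClassification, AutoTokenizer

ds = load_dataset("linnaeus")
labels = ds["train"].features["ner_tags"].feature.names  # ['O', 'B', 'I']

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
```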
### Languages
The dataset is in English.
## Dataset Structure
### Data Instances
An example from the dataset is:
```
{'id': '2',
'tokens': ['Scp160p', 'is', 'a', '160', 'kDa', 'protein', 'in', 'the', 'yeast', 'Saccharomyces', 'cerevisiae', 'that', 'contains', '14', 'repeats', 'of', 'the', 'hnRNP', 'K', '-', 'homology', '(', 'KH', ')', 'domain', ',', 'and', 'demonstrates', 'significant', 'sequence', 'homology', 'to', 'a', 'family', 'of', 'proteins', 'collectively', 'known', 'as', 'vigilins', '.'],
'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]}
```
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of IOB tags for the tokens, where `0` (`O`) marks a token outside any species mention, `1` (`B`) the first token of a species mention, and `2` (`I`) subsequent tokens of the same mention (illustrated in the sketch below).
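A small sketch, assuming the Hugging Face `datasets` library, that recovers the IOB label strings behind the integer `ner_tags` and prints them next to the tokens of a sentence:
```python
# Sketch: map integer ner_tags back to their IOB label names for one sentence.
from datasets import load_dataset

ds = load_dataset("linnaeus")
label_names = ds["train"].features["ner_tags"].feature.names  # ['O', 'B', 'I']

example = ds["train"][2]  # an arbitrary training example
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{label_names[tag]}")
```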
### Data Splits
| name |train|validation|test|
|----------|----:|---------:|---:|
| linnaeus |11936| 4079|7143|
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
This version of the dataset is licensed under [Creative Commons Attribution 4.0 International](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/blob/master/LICENSE.md).
### Citation Information
```bibtex
@article{crichton2017neural,
title={A neural network multi-task learning approach to biomedical named entity recognition},
author={Crichton, Gamal and Pyysalo, Sampo and Chiu, Billy and Korhonen, Anna},
journal={BMC Bioinformatics},
volume={18},
number={1},
pages={368},
year={2017},
publisher={BioMed Central},
doi = {10.1186/s12859-017-1776-8},
issn = {1471-2105},
url = {https://doi.org/10.1186/s12859-017-1776-8},
}
@article{Gerner2010,
author = {Gerner, Martin and Nenadic, Goran and Bergman, Casey M},
doi = {10.1186/1471-2105-11-85},
issn = {1471-2105},
journal = {BMC Bioinformatics},
number = {1},
pages = {85},
title = {{LINNAEUS: A species name identification system for biomedical literature}},
url = {https://doi.org/10.1186/1471-2105-11-85},
volume = {11},
year = {2010}
}
```
### Contributions
Thanks to [@edugp](https://github.com/edugp) for adding this dataset.