ccasimiro committed
Commit 0805ad5
Parent: 2e7fd9b

upload dataset

Browse files
Files changed (6)
  1. .gitattributes +3 -0
  2. README.md +142 -0
  3. cantemist-ner.py +114 -0
  4. dev.conll +3 -0
  5. test.conll +3 -0
  6. train.conll +3 -0
.gitattributes CHANGED
@@ -25,3 +25,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ test.conll filter=lfs diff=lfs merge=lfs -text
+ train.conll filter=lfs diff=lfs merge=lfs -text
+ dev.conll filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,142 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ languages:
+ - es
+ multilinguality:
+ - monolingual
+ task_categories:
+ - text-classification
+ - multi-label-text-classification
+ task_ids:
+ - named-entity-recognition
+ ---
+
+ # CANTEMIST Corpus
+
+ ## BibTeX citation
+ If you use these resources in your work, please cite the following paper:
+
+ ```bibtex
+ @inproceedings{miranda2020named,
+ title={Named entity recognition, concept normalization and clinical coding: Overview of the cantemist track for cancer text mining in spanish, corpus, guidelines, methods and results},
+ author={Miranda-Escalada, A and Farr{\'e}, E and Krallinger, M},
+ booktitle={Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2020), CEUR Workshop Proceedings},
+ year={2020}
+ }
+ ```
+
+ ## Digital Object Identifier (DOI) and access to dataset files
+
+ TO DO: link to zenodo
+
+ ## Introduction
+
+ TO DO: This is a dataset for Named Entity Recognition (NER) from...
+
+ ### Supported Tasks and Leaderboards
+
+ Named Entity Recognition, Language Modelling
+
+ ### Languages
+
+ ES - Spanish
+
+ ### Directory structure
+
+ * cantemist-ner.py
+ * dev.conll
+ * test.conll
+ * train.conll
+ * README.md
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Three four-column files, one for each split.
+
+ ### Data Fields
+
+ Every file has four columns:
+ * 1st column: Word form or punctuation symbol
+ * 2nd column: Original BRAT file name
+ * 3rd column: Spans
+ * 4th column: IOB tag
+
+ ### Example:
+ <pre>
+ El cc_onco101 662_664 O
+ informe cc_onco101 665_672 O
+ HP cc_onco101 673_675 O
+ es cc_onco101 676_678 O
+ compatible cc_onco101 679_689 O
+ con cc_onco101 690_693 O
+ adenocarcinoma cc_onco101 694_708 B-MORFOLOGIA_NEOPLASIA
+ moderadamente cc_onco101 709_722 I-MORFOLOGIA_NEOPLASIA
+ diferenciado cc_onco101 723_735 I-MORFOLOGIA_NEOPLASIA
+ que cc_onco101 736_739 O
+ afecta cc_onco101 740_746 O
+ a cc_onco101 747_748 O
+ grasa cc_onco101 749_754 O
+ peripancreática cc_onco101 755_770 O
+ sobrepasando cc_onco101 771_783 O
+ la cc_onco101 784_786 O
+ serosa cc_onco101 787_793 O
+ , cc_onco101 793_794 O
+ infiltración cc_onco101 795_807 O
+ perineural cc_onco101 808_818 O
+ . cc_onco101 818_819 O
+ </pre>
+
+ ### Data Splits
+
+ * train: 18,916 tokens
+ * development: 17,656 tokens
+ * test: 10,886 tokens
+
+ ## Dataset Creation
+
+ ### Methodology
+
+ TO DO
+
+ ### Curation Rationale
+
+ For compatibility with similar datasets in other languages, we followed existing curation guidelines as closely as possible.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ TO DO
+
+ #### Who are the source language producers?
+
+ TO DO
+
+ ### Annotations
+
+ #### Annotation process
+
+ TO DO
+
+ #### Who are the annotators?
+
+ TO DO
+
+ ### Dataset Curators
+
+ TO DO: Martin?
+
+ ### Personal and Sensitive Information
+
+ No personal or sensitive information included.
+
+ ## Contact
+
+ TO DO: Casimiro?
+
+ ## License
+
+ <a rel="license" href="https://creativecommons.org/licenses/by/4.0/"><img alt="Attribution 4.0 International License" style="border-width:0" src="https://chriszabriskie.com/img/cc-by.png" width="100"/></a><br />This work is licensed under an <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">Attribution 4.0 International License</a>.
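As an aside (not part of the committed README): the raw columns described in Data Fields are exposed by the `cantemist-ner.py` loading script added in this commit as `tokens` and `ner_tags`. A minimal, illustrative sketch of loading the corpus, assuming a local clone of this repo and a `datasets` release that still supports dataset loading scripts:

```python
# Illustrative only: load the corpus via the loading script added in this commit.
# Assumes a local clone of the dataset repo and a `datasets` release that still
# accepts dataset loading scripts.
from datasets import load_dataset

ds = load_dataset("./cantemist-ner.py")
print(ds)  # DatasetDict with train / validation / test splits

example = ds["train"][0]
print(example["tokens"][:10])    # word forms (1st CoNLL column)
print(example["ner_tags"][:10])  # IOB tags encoded as ClassLabel ids (4th column)

# Recover the tag strings from the ClassLabel feature.
label_names = ds["train"].features["ner_tags"].feature.names
print([label_names[i] for i in example["ner_tags"][:10]])
```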
cantemist-ner.py ADDED
@@ -0,0 +1,114 @@
+ # Loading script for the Cantemist NER dataset.
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+
+ _CITATION = """\
+ @inproceedings{miranda2020named,
+ title={Named entity recognition, concept normalization and clinical coding: Overview of the cantemist track for cancer text mining in spanish, corpus, guidelines, methods and results},
+ author={Miranda-Escalada, A and Farr{\'e}, E and Krallinger, M},
+ booktitle={Proceedings of the Iberian Languages Evaluation Forum (IberLEF 2020), CEUR Workshop Proceedings},
+ year={2020}
+ }"""
+
+ _DESCRIPTION = """\
+ https://temu.bsc.es/cantemist/
+ """
+
+ _URL = "https://huggingface.co/datasets/PlanTL-GOB-ES/pharmaconer/resolve/main/"
+ # _URL = "./"
+ _TRAINING_FILE = "train.conll"
+ _DEV_FILE = "dev.conll"
+ _TEST_FILE = "test.conll"
+
+ class CantemistNerConfig(datasets.BuilderConfig):
+     """BuilderConfig for Cantemist Ner dataset"""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for CantemistNer.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(CantemistNerConfig, self).__init__(**kwargs)
+
+
+ class CantemistNer(datasets.GeneratorBasedBuilder):
+     """Cantemist Ner dataset."""
+
+     BUILDER_CONFIGS = [
+         CantemistNerConfig(
+             name="CantemistNer",
+             version=datasets.Version("1.0.0"),
+             description="CantemistNer dataset"),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "id": datasets.Value("string"),
+                     "tokens": datasets.Sequence(datasets.Value("string")),
+                     "ner_tags": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 "O",
+                                 "B-MORFOLOGIA_NEOPLASIA",
+                                 "I-MORFOLOGIA_NEOPLASIA",
+                             ]
+                         )
+                     ),
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://temu.bsc.es/cantemist/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         urls_to_download = {
+             "train": f"{_URL}{_TRAINING_FILE}",
+             "dev": f"{_URL}{_DEV_FILE}",
+             "test": f"{_URL}{_TEST_FILE}",
+         }
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
+         ]
+
+ def _generate_examples(self, filepath):
87
+ logger.info("⏳ Generating examples from = %s", filepath)
88
+ with open(filepath, encoding="utf-8") as f:
89
+ guid = 0
90
+ tokens = []
91
+ pos_tags = []
92
+ ner_tags = []
93
+ for line in f:
94
+ if line.startswith("-DOCSTART-") or line == "" or line == "\n":
95
+ if tokens:
96
+ yield guid, {
97
+ "id": str(guid),
98
+ "tokens": tokens,
99
+ "ner_tags": ner_tags,
100
+ }
101
+ guid += 1
102
+ tokens = []
103
+ ner_tags = []
104
+ else:
105
+ # Cantemist tokens are tab separated
106
+ splits = line.split("\t")
107
+ tokens.append(splits[0])
108
+ ner_tags.append(splits[-1].rstrip())
109
+ # last example
110
+ yield guid, {
111
+ "id": str(guid),
112
+ "tokens": tokens,
113
+ "ner_tags": ner_tags,
114
+ }
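As a reading aid (not part of the committed files): `_generate_examples` emits one sentence-level record per blank-line-delimited block, keeping only the word forms and IOB tags. Once the ClassLabel feature declared in `_info()` encodes the tags (`O` = 0, `B-MORFOLOGIA_NEOPLASIA` = 1, `I-MORFOLOGIA_NEOPLASIA` = 2), the fragment shown in the README example surfaces roughly as this abbreviated, illustrative sketch:

```python
# Abbreviated sketch of one user-facing example; values are illustrative only.
example = {
    "id": "7",  # position of the sentence within the split (placeholder value)
    "tokens": ["El", "informe", "HP", "es", "compatible", "con",
               "adenocarcinoma", "moderadamente", "diferenciado"],
    # ClassLabel ids: 0 = O, 1 = B-MORFOLOGIA_NEOPLASIA, 2 = I-MORFOLOGIA_NEOPLASIA
    "ner_tags": [0, 0, 0, 0, 0, 0, 1, 2, 2],
}
```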
dev.conll ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a6015c123a4af8af788aa5d59c4986d9fccd9af21f85d9f1684edc8fa9f67fe
+ size 11823842
test.conll ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f75db5b4c230af5c8365af762eb42cf3896dddb274181780b7d7c7f009f0fbf
+ size 7153088
train.conll ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1aec255970def922a78aaa2aa44b43cef90c8a9c413fc3ea2e1d27346bcec028
+ size 12978461
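A closing note (not part of the commit): the three `.conll` entries above are Git LFS pointer files, so the repository records only a sha256 `oid` and a byte `size`; the data itself lives in LFS storage. A minimal sketch, with hypothetical local paths, of checking a fetched file against its pointer:

```python
# Illustrative check that a fetched data file matches its Git LFS pointer.
# Paths are hypothetical; run after `git lfs pull` in a clone of the repo.
import hashlib
import os

def verify_lfs_object(path: str, expected_oid: str, expected_size: int) -> bool:
    """Return True if the file's sha256 digest and size match the pointer."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_oid and os.path.getsize(path) == expected_size

# Values copied from the train.conll pointer above.
print(verify_lfs_object(
    "train.conll",
    "1aec255970def922a78aaa2aa44b43cef90c8a9c413fc3ea2e1d27346bcec028",
    12978461,
))
```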