Datasets: ohsumed
Tasks: Text Classification
Formats: parquet
Sub-tasks: multi-label-classification
Languages: English
Size: 100K - 1M
License: cc-by-nc-4.0

Commit 06113a9

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
- .gitattributes +27 -0
- README.md +215 -0
- dataset_infos.json +1 -0
- dummy/ohsumed/1.1.0/dummy_data.zip +3 -0
- ohsumed.py +223 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
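These patterns route large binary artifacts through Git LFS, so only lightweight pointer files live in the repository itself. A rough way to sanity-check which filenames the simple extension patterns would capture is with `fnmatch` (note this is only an illustration: real gitattributes matching has more rules than `fnmatch`, e.g. for `saved_model/**/*`, and the helper name here is invented):

```python
from fnmatch import fnmatch

# A few of the simple extension patterns tracked above; fnmatch semantics
# are close enough to gitattributes globs for patterns like these.
LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.gz", "*.parquet", "*.zip", "*tfevents*"]


def tracked_by_lfs(filename):
    """Return True if the filename matches any of the LFS-tracked patterns."""
    return any(fnmatch(filename, pat) for pat in LFS_PATTERNS)


print(tracked_by_lfs("dummy_data.zip"))  # True: matches *.zip
print(tracked_by_lfs("ohsumed.py"))      # False: no pattern covers .py files
```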
README.md
ADDED
@@ -0,0 +1,215 @@
---
annotations_creators:
- human-annotated
language_creators:
- crowdsourced
languages:
- en
licenses:
- cc-by-nc-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<500K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-label-classification
---

# Dataset Card for ohsumed

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** http://davis.wpi.edu/xmdv/datasets/ohsumed.html
- **Repository:** https://trec.nist.gov/data/filtering/t9.filtering.tar.gz
- **Paper:** https://link.springer.com/chapter/10.1007/978-1-4471-2099-5_20
- **Leaderboard:**
- **Point of Contact:** [William Hersh](mailto:[email protected]), [Aakash Gupta](mailto:[email protected])

### Dataset Summary

The OHSUMED test collection is a set of 348,566 references from MEDLINE, the on-line medical information database, consisting of titles and/or abstracts from 270 medical journals over a five-year period (1987-1991). The available fields are title, abstract, MeSH indexing terms, author, source, and publication type. The National Library of Medicine has agreed to make the MEDLINE references in the test database available for experimentation, restricted to the following conditions:

1. The data will not be used in any non-experimental clinical, library, or other setting.
2. Any human users of the data will explicitly be told that the data is incomplete and out-of-date.

Please check this [readme](https://trec.nist.gov/data/filtering/README.t9.filtering) for more details.

### Supported Tasks and Leaderboards

[Text Classification](https://paperswithcode.com/sota/text-classification-on-ohsumed)

### Languages

The text is primarily in English. The BCP 47 code is `en`.

## Dataset Structure

### Data Instances

```
{'seq_id': 7770,
 'medline_ui': 87120420,
 'mesh_terms': 'Adult; Aged; Aneurysm/CO; Arteriovenous Fistula/*TH; Carotid Arteries; Case Report; Female; Human; Jugular Veins; Male; Methods; Middle Age; Neck/*BS; Vertebral Artery.',
 'title': 'Arteriovenous fistulas of the large vessels of the neck: nonsurgical percutaneous occlusion.',
 'publication_type': 'JOURNAL ARTICLE.',
 'abstract': 'We describe the nonsurgical treatment of arteriovenous fistulas of the large vessels in the neck using three different means of endovascular occlusion of these large lesions, which are surgically difficult to approach and treat.',
 'author': 'Vitek JJ; Keller FS.',
 'source': 'South Med J 8705; 80(2):196-200'}
```

### Data Fields

Here are the field definitions:

- seq_id: sequential identifier (important note: documents should be processed in this order)
- medline_ui: MEDLINE identifier (UI) (`<DOCNO>` used for relevance judgements)
- mesh_terms: human-assigned MeSH terms (MH)
- title: title (TI)
- publication_type: publication type (PT)
- abstract: abstract (AB)
- author: author (AU)
- source: source (SO)

Note: some abstracts are truncated at 250 words and some references have no abstracts at all (titles only). We do not have access to the full text of the documents.

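Since `mesh_terms` arrives as a single semicolon-delimited string, multi-label classification work usually starts by splitting it into descriptor labels. A minimal sketch, assuming one wants to drop the `/XX` subheading qualifiers and `*` major-topic markers (the helper name and these conventions are illustrative, not part of the dataset):

```python
def parse_mesh_terms(mesh_terms):
    """Split the semicolon-delimited MeSH string into bare descriptor labels.

    Strips the trailing period, '/XX' subheading qualifiers, and '*' markers.
    """
    labels = []
    for term in mesh_terms.rstrip(".").split(";"):
        # Keep only the descriptor before any '/' qualifier; drop '*' markers.
        term = term.strip().split("/")[0].replace("*", "").strip()
        if term:
            labels.append(term)
    return labels


# The mesh_terms value from the data instance above.
example = (
    "Adult; Aged; Aneurysm/CO; Arteriovenous Fistula/*TH; Carotid Arteries; "
    "Case Report; Female; Human; Jugular Veins; Male; Methods; Middle Age; "
    "Neck/*BS; Vertebral Artery."
)
print(parse_mesh_terms(example))
```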
### Data Splits

The data comes as train/test files: the training file contains abstracts from 1987, while the test file contains abstracts from 1988-1991.

Total number of documents:
- Train: 54710
- Test: 348567

## Dataset Creation

### Curation Rationale

The OHSUMED document collection was obtained by William Hersh ([email protected]) and colleagues for the experiments described in the papers below. [Check citation](#citation-information)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

The test collection was built as part of a study assessing the use of MEDLINE by physicians in a clinical setting (Hersh and Hickam, above). Novice physicians using MEDLINE generated 106 queries. Only a subset of these queries were used in the TREC-9 Filtering Track. Before they searched, they were asked to provide a statement of information about their patient as well as their information need. The data was collected by William Hersh and colleagues.

### Annotations

#### Annotation process

The existing OHSUMED topics describe actual information needs, but the relevance judgements probably do not have the same coverage provided by the TREC pooling process. The MeSH terms do not directly represent information needs; rather, they are controlled indexing terms. However, the assessment should be more or less complete, and there are a lot of them, so this provides an unusual opportunity to work with a very large topic sample.

The topic statements are provided in the standard TREC format.

#### Who are the annotators?

Each query was replicated by four searchers: two physicians experienced in searching and two medical librarians. The results were assessed for relevance by a different group of physicians, using a three-point scale: definitely, possibly, or not relevant. The list of documents explicitly judged to be not relevant is not provided here. Over 10% of the query-document pairs were judged in duplicate to assess inter-observer reliability. For evaluation, all documents judged here as either possibly or definitely relevant were considered relevant. TREC-9 systems were allowed to distinguish between these two categories during the learning process if desired.

### Personal and Sensitive Information

No PII data is present in the train, test or query files.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[Aakash Gupta](mailto:[email protected])
*Th!nkEvolve Consulting* and Researcher at CoronaWhy

### Licensing Information

CC BY-NC 4.0

### Citation Information

Hersh WR, Buckley C, Leone TJ, Hickam DH, OHSUMED: An interactive retrieval evaluation and new large test collection for research, Proceedings of the 17th Annual ACM SIGIR Conference, 1994, 192-201.

Hersh WR, Hickam DH, Use of a multi-application computer workstation in a clinical setting, Bulletin of the Medical Library Association, 1994, 82: 382-389.
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"ohsumed": {"description": "The OHSUMED test collection is a set of 348,566 references from\nMEDLINE, the on-line medical information database, consisting of\ntitles and/or abstracts from 270 medical journals over a five-year\nperiod (1987-1991). The available fields are title, abstract, MeSH\nindexing terms, author, source, and publication type.\n", "citation": "@InProceedings{10.1007/978-1-4471-2099-5_20,\nauthor=\"Hersh, William\nand Buckley, Chris\nand Leone, T. J.\nand Hickam, David\",\neditor=\"Croft, Bruce W.\nand van Rijsbergen, C. J.\",\ntitle=\"OHSUMED: An Interactive Retrieval Evaluation and New Large Test Collection for Research\",\nbooktitle=\"SIGIR '94\",\nyear=\"1994\",\npublisher=\"Springer London\",\naddress=\"London\",\npages=\"192--201\",\nabstract=\"A series of information retrieval experiments was carried out with a computer installed in a medical practice setting for relatively inexperienced physician end-users. Using a commercial MEDLINE product based on the vector space model, these physicians searched just as effectively as more experienced searchers using Boolean searching. The results of this experiment were subsequently used to create a new large medical test collection, which was used in experiments with the SMART retrieval system to obtain baseline performance data as well as compare SMART with the other searchers.\",\nisbn=\"978-1-4471-2099-5\"\n}\n", "homepage": "http://davis.wpi.edu/xmdv/datasets/ohsumed.html", "license": "CC BY-NC 4.0", "features": {"seq_id": {"dtype": "int64", "id": null, "_type": "Value"}, "medline_ui": {"dtype": "int64", "id": null, "_type": "Value"}, "mesh_terms": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "publication_type": {"dtype": "string", "id": null, "_type": "Value"}, "abstract": {"dtype": "string", "id": null, "_type": "Value"}, "author": {"dtype": "string", "id": null, "_type": "Value"}, "source": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "ohsumed", "config_name": "ohsumed", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 60117860, "num_examples": 54709, "dataset_name": "ohsumed"}, "test": {"name": "test", "num_bytes": 338533901, "num_examples": 293855, "dataset_name": "ohsumed"}}, "download_checksums": {"https://trec.nist.gov/data/filtering/t9.filtering.tar.gz": {"num_bytes": 139454017, "checksum": "39184391aab6d080699882dbfd87de4cbcb24cce8a0cffd611debf18914481b0"}}, "download_size": 139454017, "post_processing_size": null, "dataset_size": 398651761, "size_in_bytes": 538105778}}
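The `splits` block in the JSON above records per-split byte and example counts; the top-level `dataset_size` field is simply the sum of the split byte counts, which can be checked directly:

```python
# Split metadata copied from dataset_infos.json above.
splits = {
    "train": {"num_bytes": 60117860, "num_examples": 54709},
    "test": {"num_bytes": 338533901, "num_examples": 293855},
}

# dataset_size is the sum of per-split num_bytes.
dataset_size = sum(s["num_bytes"] for s in splits.values())
print(dataset_size)  # 398651761
```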
dummy/ohsumed/1.1.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fc43b3b89ff1bf945681b92833b312f10f0e734d52f83dbaeddf1347b5d7585f
size 7432
ohsumed.py
ADDED
@@ -0,0 +1,223 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""OHSUMED: An Interactive Retrieval Evaluation and New Large Test Collection for Research."""

from __future__ import absolute_import, division, print_function

import os

import datasets


_CITATION = """\
@InProceedings{10.1007/978-1-4471-2099-5_20,
author="Hersh, William
and Buckley, Chris
and Leone, T. J.
and Hickam, David",
editor="Croft, Bruce W.
and van Rijsbergen, C. J.",
title="OHSUMED: An Interactive Retrieval Evaluation and New Large Test Collection for Research",
booktitle="SIGIR '94",
year="1994",
publisher="Springer London",
address="London",
pages="192--201",
abstract="A series of information retrieval experiments was carried out with a computer installed in a medical practice setting for relatively inexperienced physician end-users. Using a commercial MEDLINE product based on the vector space model, these physicians searched just as effectively as more experienced searchers using Boolean searching. The results of this experiment were subsequently used to create a new large medical test collection, which was used in experiments with the SMART retrieval system to obtain baseline performance data as well as compare SMART with the other searchers.",
isbn="978-1-4471-2099-5"
}
"""

_DESCRIPTION = """\
The OHSUMED test collection is a set of 348,566 references from
MEDLINE, the on-line medical information database, consisting of
titles and/or abstracts from 270 medical journals over a five-year
period (1987-1991). The available fields are title, abstract, MeSH
indexing terms, author, source, and publication type.
"""

_HOMEPAGE = "http://davis.wpi.edu/xmdv/datasets/ohsumed.html"

_LICENSE = "CC BY-NC 4.0"

# The HuggingFace datasets library doesn't host the datasets but only points to the original files
_URLs = {"ohsumed": "https://trec.nist.gov/data/filtering/t9.filtering.tar.gz"}


class Ohsumed(datasets.GeneratorBasedBuilder):
    """OHSUMED: An Interactive Retrieval Evaluation and New Large Test Collection for Research."""

    VERSION = datasets.Version("1.1.0")

    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name="ohsumed",
            version=VERSION,
            description="Config for the entire ohsumed dataset. An Interactive Retrieval Evaluation and New Large Test Collection for Research",
        )
    ]

    def _info(self):
        features = datasets.Features(
            {
                "seq_id": datasets.Value("int64"),
                "medline_ui": datasets.Value("int64"),
                "mesh_terms": datasets.Value("string"),
                "title": datasets.Value("string"),
                "publication_type": datasets.Value("string"),
                "abstract": datasets.Value("string"),
                "author": datasets.Value("string"),
                "source": datasets.Value("string"),
            }
        )
        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types
            features=features,
            # If there's a common (input, target) tuple from the features,
            # specify them here. They'll be used if as_supervised=True in
            # builder.as_dataset.
            supervised_keys=None,
            # Homepage of the dataset for documentation
            homepage=_HOMEPAGE,
            # License for the dataset if available
            license=_LICENSE,
            # Citation for the dataset
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs.
        # By default the archives will be extracted and a path to a cached folder where they are extracted
        # is returned instead of the archive.
        my_urls = _URLs[self.config.name]
        data_dir = dl_manager.download_and_extract(my_urls)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={
                    "filepath": os.path.join(data_dir, "ohsu-trec/trec9-train/ohsumed.87"),
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                # These kwargs will be passed to _generate_examples
                gen_kwargs={"filepath": os.path.join(data_dir, "ohsu-trec/trec9-test/ohsumed.88-91"), "split": "test"},
            ),
        ]

    def _generate_examples(self, filepath, split):
        """Yields (key, example) tuples parsed from the tag-based OHSUMED files."""

        def ohsumed_dict():
            """Returns an empty record with all expected fields."""
            data = {
                "seq_id": -1,
                "medline_ui": -1,
                "mesh_terms": "",
                "title": "",
                "publication_type": "",
                "abstract": "",
                "author": "",
                "source": "",
            }
            return data

        tag = ""
        column_map = {
            ".I": "seq_id",
            ".U": "medline_ui",
            ".M": "mesh_terms",
            ".T": "title",
            ".P": "publication_type",
            ".W": "abstract",
            ".A": "author",
            ".S": "source",
        }

        with open(filepath, encoding="utf-8") as f:
            data = ohsumed_dict()

            for line in f.readlines():
                line = line.strip()

                if line.startswith(".I"):
                    tag = ".I"
                    if data["medline_ui"] != -1:
                        # A previous record is complete: emit it before starting the next one.
                        id_ = data["seq_id"]
                        yield id_, {
                            "seq_id": data["seq_id"],
                            "medline_ui": data["medline_ui"],
                            "mesh_terms": str(data["mesh_terms"]),
                            "title": str(data["title"]),
                            "publication_type": str(data["publication_type"]),
                            "abstract": str(data["abstract"]),
                            "author": str(data["author"]),
                            "source": str(data["source"]),
                        }
                    else:
                        data = ohsumed_dict()
                    line = line.replace(".I", "").strip()
                    data["seq_id"] = line
                elif tag and not line.startswith("."):
                    # A value line: store it under the most recently seen tag.
                    key = column_map[tag]
                    data[key] = line
                elif ".U" in line:
                    tag = ".U"
                elif ".M" in line:
                    tag = ".M"
                elif ".T" in line:
                    tag = ".T"
                elif ".P" in line:
                    tag = ".P"
                elif ".W" in line:
                    tag = ".W"
                elif ".A" in line:
                    tag = ".A"
                elif ".S" in line:
                    tag = ".S"
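The tag-based format that `_generate_examples` walks through (`.I <id>` opens a record; each other tag line is followed by its value on the next line) can be exercised standalone. A self-contained sketch of the same parsing idea, with an invented partial sample record for illustration (the function name and sample values are not from the dataset):

```python
import io

# Same tag-to-field mapping used by the loading script.
COLUMN_MAP = {
    ".I": "seq_id", ".U": "medline_ui", ".M": "mesh_terms", ".T": "title",
    ".P": "publication_type", ".W": "abstract", ".A": "author", ".S": "source",
}


def parse_ohsumed(text):
    """Parse OHSUMED-style records: '.I <id>' starts a record; every other
    tag line is followed by its value on the next line."""
    records, data, tag = [], {}, ""
    for line in io.StringIO(text):
        line = line.strip()
        if line.startswith(".I"):
            if data:
                records.append(data)
            data = {"seq_id": line.replace(".I", "").strip()}
            tag = ".I"
        elif line in COLUMN_MAP:
            tag = line
        elif tag and not line.startswith("."):
            data[COLUMN_MAP[tag]] = line
    if data:
        records.append(data)  # flush the final record
    return records


sample = """.I 7770
.U
87120420
.T
Arteriovenous fistulas of the large vessels of the neck.
.A
Vitek JJ; Keller FS.
"""
recs = parse_ohsumed(sample)
print(recs[0]["seq_id"], recs[0]["medline_ui"])
```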