parquet-converter committed
Commit c9a1e07
1 Parent(s): 6657c24

Update parquet files

.gitattributes DELETED
@@ -1,28 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- test.jsonl filter=lfs diff=lfs merge=lfs -text

README.md DELETED
@@ -1,98 +0,0 @@
- ## Dataset Summary
-
- A dataset for benchmarking keyphrase extraction and generation techniques on long-document English scientific papers. For more details about the dataset, please refer to the original paper: [https://www.semanticscholar.org/paper/Large-Dataset-for-Keyphrases-Extraction-Krapivin-Autaeu/2c56421ff3c2a69894d28b09a656b7157df8eb83](https://www.semanticscholar.org/paper/Large-Dataset-for-Keyphrases-Extraction-Krapivin-Autaeu/2c56421ff3c2a69894d28b09a656b7157df8eb83)
- Original source of the data - []()
-
-
- ## Dataset Structure
-
-
- ### Data Fields
-
- **id**: unique identifier of the document.
- **document**: whitespace-separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B marks the beginning of a keyphrase, I marks a word inside a keyphrase, and O marks a word outside any keyphrase (see the sketch after this diff).
- **extractive_keyphrases**: list of all the present keyphrases.
- **abstractive_keyphrases**: list of all the absent keyphrases.
-
-
- ### Data Splits
-
- | Split | #datapoints |
- | -- | -- |
- | Test | 2305 |
-
-
- ## Usage
-
- ### Full Dataset
-
- ```python
- from datasets import load_dataset
-
- # get entire dataset
- dataset = load_dataset("midas/krapivin", "raw")
-
- # sample from the test split
- print("Sample from test dataset split")
- test_sample = dataset["test"][0]
- print("Fields in the sample: ", list(test_sample.keys()))
- print("Tokenized Document: ", test_sample["document"])
- print("Document BIO Tags: ", test_sample["doc_bio_tags"])
- print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
- print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
- print("\n-----------\n")
- ```
- **Output**
-
- ```bash
-
- ```
-
- ### Keyphrase Extraction
- ```python
- from datasets import load_dataset
-
- # get the dataset only for keyphrase extraction
- dataset = load_dataset("midas/krapivin", "extraction")
-
- print("Samples for Keyphrase Extraction")
-
- # sample from the test split
- print("Sample from test data split")
- test_sample = dataset["test"][0]
- print("Fields in the sample: ", list(test_sample.keys()))
- print("Tokenized Document: ", test_sample["document"])
- print("Document BIO Tags: ", test_sample["doc_bio_tags"])
- print("\n-----------\n")
- ```
-
- ### Keyphrase Generation
- ```python
- from datasets import load_dataset
-
- # get the dataset only for keyphrase generation
- dataset = load_dataset("midas/krapivin", "generation")
-
- print("Samples for Keyphrase Generation")
-
- # sample from the test split
- print("Sample from test data split")
- test_sample = dataset["test"][0]
- print("Fields in the sample: ", list(test_sample.keys()))
- print("Tokenized Document: ", test_sample["document"])
- print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
- print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
- print("\n-----------\n")
- ```
-
- ## Citation Information
- ```
- @inproceedings{Krapivin2009LargeDF,
-   title={Large Dataset for Keyphrases Extraction},
-   author={Mikalai Krapivin and Aliaksandr Autaeu and Maurizio Marchese},
-   year={2009}
- }
- ```
-
- ## Contributions
- Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax), and [@ad6398](https://github.com/ad6398) for adding this dataset.
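
As a quick illustration of the `doc_bio_tags` field described in the deleted README above, here is a minimal sketch of how the BIO tags line up with the tokenized `document` and how present keyphrases can be recovered from them. The six tokens and the keyphrase are invented for illustration; only the B/I/O convention comes from the dataset card:

```python
# Hypothetical document in which "keyphrase extraction" is a present
# (extractive) keyphrase spanning two tokens.
document = ["we", "study", "keyphrase", "extraction", "from", "papers"]
doc_bio_tags = ["O", "O", "B", "I", "O", "O"]

# Group each B tag with the I tags that follow it to rebuild keyphrases.
keyphrases, current = [], []
for token, tag in zip(document, doc_bio_tags):
    if tag == "B":
        if current:
            keyphrases.append(" ".join(current))
        current = [token]
    elif tag == "I" and current:
        current.append(token)
    else:
        if current:
            keyphrases.append(" ".join(current))
        current = []
if current:
    keyphrases.append(" ".join(current))

print(keyphrases)  # ['keyphrase extraction']
```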
test.jsonl → extraction/krapivin-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:45fc8eab71b5148ff344fd4d483d19b6972d2e42b9cc8b4ad30465101855122d
- size 284854394
+ oid sha256:ebc3c74a8fa4c11837efd74d89f6512653e86a44c860b419f1f5449e6e1de7f4
+ size 55532723
generation/krapivin-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:faebf9431a5b4437c9911bb853d38b3ccca6c2fe974d1b556c38571120305f93
+ size 55328770
krapivin.py DELETED
@@ -1,136 +0,0 @@
1
- import json
2
- import datasets
3
-
4
- # _SPLIT = ['test']
5
- _CITATION = """\
6
- @inproceedings{Krapivin2009LargeDF,
7
- title={Large Dataset for Keyphrases Extraction},
8
- author={Mikalai Krapivin and Aliaksandr Autaeu and Maurizio Marchese},
9
- year={2009}
10
- }
11
-
12
- """
13
-
14
- _DESCRIPTION = """\
15
-
16
- """
17
-
18
- _HOMEPAGE = ""
19
-
20
- # TODO: Add the licence for the dataset here if you can find it
21
- _LICENSE = ""
22
-
23
- # TODO: Add link to the official dataset URLs here
24
-
25
- _URLS = {
26
- "test": "test.jsonl"
27
- }
28
-
29
-
30
- # TODO: Name of the dataset usually match the script name with CamelCase instead of snake_case
31
- class Krapivin(datasets.GeneratorBasedBuilder):
32
- """TODO: Short description of my dataset."""
33
-
34
- VERSION = datasets.Version("0.0.1")
35
-
36
- BUILDER_CONFIGS = [
37
- datasets.BuilderConfig(name="extraction", version=VERSION,
38
- description="This part of my dataset covers extraction"),
39
- datasets.BuilderConfig(name="generation", version=VERSION,
40
- description="This part of my dataset covers generation"),
41
- datasets.BuilderConfig(name="raw", version=VERSION, description="This part of my dataset covers the raw data"),
42
- ]
43
-
44
- DEFAULT_CONFIG_NAME = "extraction"
45
-
46
- def _info(self):
47
- if self.config.name == "extraction": # This is the name of the configuration selected in BUILDER_CONFIGS above
48
- features = datasets.Features(
49
- {
50
- "id": datasets.Value("int64"),
51
- "document": datasets.features.Sequence(datasets.Value("string")),
52
- "doc_bio_tags": datasets.features.Sequence(datasets.Value("string"))
53
-
54
- }
55
- )
56
- elif self.config.name == "generation":
57
- features = datasets.Features(
58
- {
59
- "id": datasets.Value("int64"),
60
- "document": datasets.features.Sequence(datasets.Value("string")),
61
- "extractive_keyphrases": datasets.features.Sequence(datasets.Value("string")),
62
- "abstractive_keyphrases": datasets.features.Sequence(datasets.Value("string"))
63
-
64
- }
65
- )
66
- else:
67
- features = datasets.Features(
68
- {
69
- "id": datasets.Value("int64"),
70
- "document": datasets.features.Sequence(datasets.Value("string")),
71
- "doc_bio_tags": datasets.features.Sequence(datasets.Value("string")),
72
- "extractive_keyphrases": datasets.features.Sequence(datasets.Value("string")),
73
- "abstractive_keyphrases": datasets.features.Sequence(datasets.Value("string")),
74
- "other_metadata": datasets.features.Sequence(
75
- {
76
- "text": datasets.features.Sequence(datasets.Value("string")),
77
- "bio_tags": datasets.features.Sequence(datasets.Value("string"))
78
- }
79
- )
80
-
81
- }
82
- )
83
- return datasets.DatasetInfo(
84
- # This is the description that will appear on the datasets page.
85
- description=_DESCRIPTION,
86
- # This defines the different columns of the dataset and their types
87
- features=features,
88
- homepage=_HOMEPAGE,
89
- # License for the dataset if available
90
- license=_LICENSE,
91
- # Citation for the dataset
92
- citation=_CITATION,
93
- )
94
-
95
- def _split_generators(self, dl_manager):
96
-
97
- data_dir = dl_manager.download_and_extract(_URLS)
98
- return [
99
- datasets.SplitGenerator(
100
- name=datasets.Split.TEST,
101
- # These kwargs will be passed to _generate_examples
102
- gen_kwargs={
103
- "filepath": data_dir['test'],
104
- "split": "test"
105
- },
106
- ),
107
- ]
108
-
109
- # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
110
- def _generate_examples(self, filepath, split):
111
- with open(filepath, encoding="utf-8") as f:
112
- for key, row in enumerate(f):
113
- data = json.loads(row)
114
- if self.config.name == "extraction":
115
- # Yields examples as (key, example) tuples
116
- yield key, {
117
- "id": data['paper_id'],
118
- "document": data["document"],
119
- "doc_bio_tags": data.get("doc_bio_tags")
120
- }
121
- elif self.config.name == "generation":
122
- yield key, {
123
- "id": data['paper_id'],
124
- "document": data["document"],
125
- "extractive_keyphrases": data.get("extractive_keyphrases"),
126
- "abstractive_keyphrases": data.get("abstractive_keyphrases")
127
- }
128
- else:
129
- yield key, {
130
- "id": data['paper_id'],
131
- "document": data["document"],
132
- "doc_bio_tags": data.get("doc_bio_tags"),
133
- "extractive_keyphrases": data.get("extractive_keyphrases"),
134
- "abstractive_keyphrases": data.get("abstractive_keyphrases"),
135
- "other_metadata": data["other_metadata"]
136
- }
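
One detail visible only in this deleted script: the `raw` config exposes an extra `other_metadata` field, defined as a sequence of `{text, bio_tags}` records. A minimal sketch of inspecting it, assuming the `raw` config of `midas/krapivin` still loads with this schema:

```python
from datasets import load_dataset

# Only the "raw" config carries other_metadata, per the Features
# definition in the deleted script above.
dataset = load_dataset("midas/krapivin", "raw")
sample = dataset["test"][0]

# A Sequence of dicts is returned as a dict of lists.
print(sample["other_metadata"]["text"])
print(sample["other_metadata"]["bio_tags"])
```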
raw/krapivin-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b99e32f57c5cd8dc1417985e74af369a8585084d09ed7553df404858b8a61f3e
+ size 55684384
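
Since this commit replaces `test.jsonl` and the `krapivin.py` loading script with pre-converted parquet files, the test split can also be read directly from parquet. A minimal sketch, assuming the files live in the `midas/krapivin` dataset repository under the layout shown in this diff (`extraction/`, `generation/`, `raw/`):

```python
from huggingface_hub import hf_hub_download
from datasets import load_dataset

# Fetch one converted parquet file from the Hub; the path is the one
# added in this commit (swap in generation/ or raw/ for the other configs).
local_path = hf_hub_download(
    repo_id="midas/krapivin",
    filename="extraction/krapivin-test.parquet",
    repo_type="dataset",
)

# Load it with the generic parquet builder instead of the deleted script.
dataset = load_dataset("parquet", data_files={"test": local_path})
print(dataset["test"][0].keys())
```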