Commit 82598f2 (1 parent: 28a47f4) by asawczyn: Create README.md

Files changed (1): README.md (+136, -0)

---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: 'Did you know?'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-question-answering
---

# klej-dyk

## Description

The Czy wiesz? (Eng. Did you know?) dataset consists of almost 5k question-answer pairs obtained from the "Czy wiesz..." section of Polish Wikipedia. Each question is written by a Wikipedia collaborator and answered with a link to a relevant Wikipedia article. In the Hugging Face version of this dataset, the negatives were chosen as the candidate answers with the largest token overlap with the question.

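The exact negative-selection procedure is not documented in this card; the sketch below is only an illustration of the token-overlap idea mentioned above, using plain whitespace tokenization and made-up candidate sentences.

```python
# Illustrative sketch only: this is NOT the procedure used to build the
# dataset, just a minimal example of scoring candidates by token overlap.
def token_overlap(question: str, answer: str) -> int:
    q_tokens = set(question.lower().split())
    a_tokens = set(answer.lower().split())
    return len(q_tokens & a_tokens)

question = "ile partii tworzy powołaną przez Wiktora Juszczenkę koalicję Blok Nasza Ukraina?"
# Made-up candidate answers; in a real pipeline these would be other
# Wikipedia passages that do not answer the question.
candidates = [
    "Blok Nasza Ukraina to koalicja partii politycznych na Ukrainie.",
    "Wisła jest najdłuższą rzeką w Polsce.",
]
hard_negative = max(candidates, key=lambda a: token_overlap(question, a))
print(hard_negative)
```
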
## Tasks (input, output, and metrics)

The task is to predict whether the answer to the given question is correct.

**Input** ('question', 'answer' columns): question and answer sentences

**Output** ('target' column): 1 if the answer is correct, 0 otherwise. Note that the test split does not have target values, so -1 is used instead.

**Domain**: Wikipedia

**Measurements**: F1-score

**Example**:
*Czym zajmowali się świątnicy? vs. Świątnik – osoba, która dawniej zajmowała się obsługą kościoła (świątyni).* → **1 (the answer is correct)**

(English gloss: *What did świątnicy do? vs. Świątnik: a person who in the past took care of a church (temple).*)

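A common way to use these two columns with a text classifier is to feed the question and the candidate answer together as a single input. The sketch below only illustrates one such pairing; the `"[SEP]"` separator and the `text` column name are arbitrary, illustrative choices, not part of the dataset.

```python
from datasets import load_dataset

dataset = load_dataset("allegro/klej-dyk")

# Join question and answer into a single classifier input string.
# "[SEP]" is an arbitrary separator chosen for illustration; a real model
# would typically rely on its tokenizer's own sentence-pair encoding.
def to_single_input(example):
    example["text"] = f"{example['question']} [SEP] {example['answer']}"
    return example

train = dataset["train"].map(to_single_input)
print(train[0]["text"][:120])
print(train[0]["target"])
```
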
## Data splits

| Subset | Cardinality |
| ------ | ----------: |
| train  |        4154 |
| val    |           0 |
| test   |        1029 |

## Class distribution

| Class     | train | validation | test  |
|:----------|------:|-----------:|------:|
| incorrect | 0.831 |          - | 0.831 |
| correct   | 0.169 |          - | 0.169 |

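The two tables above can be checked directly against the loaded dataset. A minimal sketch, assuming the splits are exactly those exposed by `allegro/klej-dyk`:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("allegro/klej-dyk")

# Print cardinality and class proportions for every available split.
for split_name, split in dataset.items():
    counts = Counter(split["target"])
    total = len(split)
    proportions = {label: round(count / total, 3) for label, count in counts.items()}
    print(split_name, total, proportions)
```
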
## Citation

```
@misc{11321/39,
    title = {Pytania i odpowiedzi z serwisu wikipedyjnego "Czy wiesz", wersja 1.1},
    author = {Marci{\'n}czuk, Micha{\l} and Piasecki, Dominik and Piasecki, Maciej and Radziszewski, Adam},
    url = {http://hdl.handle.net/11321/39},
    note = {{CLARIN}-{PL} digital repository},
    year = {2013}
}
```

## License

```
Creative Commons Attribution ShareAlike 3.0 license (CC BY-SA 3.0)
```

## Links

[HuggingFace](https://huggingface.co/datasets/dyk)

[Source](http://nlp.pwr.wroc.pl/en/tools-and-resources/resources/czy-wiesz-question-answering-dataset)

[Source #2](https://clarin-pl.eu/dspace/handle/11321/39)

[Paper](https://www.researchgate.net/publication/272685895_Open_dataset_for_development_of_Polish_Question_Answering_systems)

## Examples

### Loading

```python
from pprint import pprint

from datasets import load_dataset

dataset = load_dataset("allegro/klej-dyk")
pprint(dataset['train'][100])

# {'answer': '"W wyborach prezydenckich w 2004 roku, Moroz przekazał swoje '
#            'poparcie Wiktorowi Juszczence. Po wyborach w 2006 socjaliści '
#            'początkowo tworzyli ""pomarańczową koalicję"" z Naszą Ukrainą i '
#            'Blokiem Julii Tymoszenko."',
#  'q_id': 'czywiesz4362',
#  'question': 'ile partii tworzy powołaną przez Wiktora Juszczenkę koalicję '
#              'Blok Nasza Ukraina?',
#  'target': 0}
```

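The card lists pandas among the supported libraries; each split is an Arrow-backed `Dataset` and can be converted to a DataFrame for ad-hoc inspection. A minimal sketch:

```python
from datasets import load_dataset

dataset = load_dataset("allegro/klej-dyk")

# Convert the train split to a pandas DataFrame for quick inspection.
df = dataset["train"].to_pandas()
print(df.shape)
print(df.head())
```
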
### Evaluation

```python
import random
from pprint import pprint

from datasets import load_dataset, load_metric

dataset = load_dataset("allegro/klej-dyk")
dataset = dataset.class_encode_column("target")
references = dataset["test"]["target"]

# generate random predictions
predictions = [random.randrange(max(references) + 1) for _ in range(len(references))]

acc = load_metric("accuracy")
f1 = load_metric("f1")

acc_score = acc.compute(predictions=predictions, references=references)
f1_score = f1.compute(predictions=predictions, references=references, average="macro")

pprint(acc_score)
pprint(f1_score)

# {'accuracy': 0.5286686103012633}
# {'f1': 0.46700507614213194}
```
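Note that `datasets.load_metric` has been deprecated in recent versions of the datasets library. An equivalent sketch using the separate `evaluate` package (assuming it is installed) could look like this:

```python
import random

import evaluate
from datasets import load_dataset

dataset = load_dataset("allegro/klej-dyk")
dataset = dataset.class_encode_column("target")
references = dataset["test"]["target"]

# Random baseline predictions, as in the snippet above.
predictions = [random.randrange(max(references) + 1) for _ in range(len(references))]

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

print(accuracy.compute(predictions=predictions, references=references))
print(f1.compute(predictions=predictions, references=references, average="macro"))
```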