baobab-odashi committed
Commit 0525576 • Parent: cb388fe
Update README.md

README.md CHANGED
@@ -21,15 +21,24 @@ The dataset consists of following data:
 * List of references with either extracted paragraph or summarization from a Wikipedia
 article
 
+## Target situation and limitation
+
+We designed this dataset to ensure that the answers reflect only the exact information written in the cited references,
+and do not reflect any external information or implicit knowledge.
+This design is useful to measure/investigate QA tasks with accurate retrieval from the given data source.
+Please keep in mind that the dataset is not designed to provide QA with correct information.
+
+We strictly requested the workers to answer the questions based only on explicit citations from Wikipedia.
+That means the workers should write answers that may differ from their implicit knowledge,
+and should leave the answer empty if they couldn't find any information from Wikipedia,
+even if they know something to answer the questions.
+
+# Dataset chunks
+
 As well as successful sessions with answer paragraphs, we also recorded failed sessions:
 the worker failed to construct the answer from the search results.
 In this case we recorded at least the retrieval process despite lack of the answer.
 
-Importantly, we requested the workers strictly to answer the questions based on only
-explicit citation from Wikipedia.
-That means, the workers should leave the answer empty if they couldn't find any information
-from Wikipedia even if they have some implicit knowledge to answer the questions.
-
 We release this version of the dataset with the following dataset chunks:
 
 * "answered" chunk (838 examples): question, answer, and retrieval process