The benchmark framework requires a particular dataset structure by default, which has been created locally and uploaded here.

Acknowledgement: The dataset was initially created as "[germanDPR](https://www.deepset.ai/germanquad)" by Timo Möller, Julian Risch, Malte Pietsch, Julian Gutsch, Tom Hersperger, Luise Köhler, Iuliia Mozhina, and Justus Peter, during work done at deepset.ai.
## Dataset Creation
First, the original dataset [deepset/germanDPR](https://huggingface.co/datasets/deepset/germandpr) was converted into three files for BEIR compatibility:
- The first file, `queries.jsonl`, contains an ID and a question in each line.
- The second file, `corpus.jsonl`, contains an ID, a title, a text, and some metadata in each line.
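As an illustrative sketch of what these JSONL lines can look like: the field names below (`_id`, `text`, `title`, `metadata`) follow BEIR's usual conventions, and the qrels TSV maps query IDs to relevant corpus IDs. The IDs and German content here are made up for the example, not taken from the actual dataset:

```python
import json

# Hypothetical one-line examples of the BEIR-style files.
query = {"_id": "q0", "text": "Wie hoch ist die Zugspitze?"}
doc = {
    "_id": "d0",
    "title": "Zugspitze",
    "text": "Die Zugspitze ist der höchste Berg Deutschlands.",
    "metadata": {},
}

# queries.jsonl: one query object per line.
with open("queries.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(query, ensure_ascii=False) + "\n")

# corpus.jsonl: one document object per line.
with open("corpus.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(doc, ensure_ascii=False) + "\n")

# qrels TSV: header row, then query-id / corpus-id / relevance score.
with open("test.tsv", "w", encoding="utf-8") as f:
    f.write("query-id\tcorpus-id\tscore\n")
    f.write("q0\td0\t1\n")
```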

This means some of the results are missing.
A correct calculation of the overall result is no longer possible.
Have a look at [BEIR's evaluation.py](https://github.com/beir-cellar/beir/blob/c3334fd5b336dba03c5e3e605a82fcfb1bdf667d/beir/retrieval/evaluation.py#L49) for further understanding.
## Dataset Usage
As mentioned earlier, this dataset is intended to be used with the BEIR benchmark framework.
The file and data structure required by BEIR can only be used to a limited extent with Huggingface Datasets, or it becomes necessary to define multiple dataset repositories at once.
To make this easier, the [dl_dataset.py](https://huggingface.co/datasets/PM-AI/germandpr-beir/tree/main/dl_dataset.py) script is provided to download the dataset and to ensure the correct file and folder structure.

Now you can use the downloaded files in the BEIR framework:
- Just set the variable `dataset` to `"germandpr-beir-dataset/processed/test"` or `"germandpr-beir-dataset/original/test"`.
- The same goes for `"train"`.
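A minimal sketch of how such a split is consumed. The folder path below is the one described in this card; the file contents are tiny made-up stand-ins so the snippet is self-contained, and the commented-out `GenericDataLoader` call shows how BEIR itself would read the same folder:

```python
import csv
import json
from pathlib import Path

# Stand-in for the structure dl_dataset.py creates
# (real splits: germandpr-beir-dataset/{original,processed}/{train,test}).
dataset = Path("germandpr-beir-dataset/processed/test")
(dataset / "qrels").mkdir(parents=True, exist_ok=True)
(dataset / "corpus.jsonl").write_text(
    json.dumps({"_id": "d0", "title": "Zugspitze", "text": "..."}) + "\n",
    encoding="utf-8",
)
(dataset / "queries.jsonl").write_text(
    json.dumps({"_id": "q0", "text": "Wie hoch ist die Zugspitze?"}) + "\n",
    encoding="utf-8",
)
(dataset / "qrels" / "test.tsv").write_text(
    "query-id\tcorpus-id\tscore\nq0\td0\t1\n", encoding="utf-8"
)

# With the real download in place, BEIR would load the split like this:
#   from beir.datasets.data_loader import GenericDataLoader
#   corpus, queries, qrels = GenericDataLoader(data_folder=str(dataset)).load(split="test")
# A stdlib stand-in for the same read:
corpus = {
    json.loads(line)["_id"]: json.loads(line)
    for line in (dataset / "corpus.jsonl").open(encoding="utf-8")
}
queries = {
    json.loads(line)["_id"]: json.loads(line)["text"]
    for line in (dataset / "queries.jsonl").open(encoding="utf-8")
}
with (dataset / "qrels" / "test.tsv").open(encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t")
    qrels = {row["query-id"]: {row["corpus-id"]: int(row["score"])} for row in reader}
```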
## Dataset Sizes
- Original **train** `corpus`, `queries` and `qrels` sizes: `24009`, `9275` and `9275`
- Original **test** `corpus`, `queries` and `qrels` sizes: `2876`, `1025` and `1025`

- Processed **train** `corpus`, `queries` and `qrels` sizes: `23993`, `9275` and `9275`
- Processed **test** `corpus`, `queries` and `qrels` sizes: `2875`, `1025` and `1025`
## Languages
This dataset only supports German (aka. de, DE).
## Acknowledgment
The dataset was initially created as "[deepset/germanDPR](https://www.deepset.ai/germanquad)" by Timo Möller, Julian Risch, Malte Pietsch, Julian Gutsch, Tom Hersperger, Luise Köhler, Iuliia Mozhina, and Justus Peter, during work done at [deepset.ai](https://www.deepset.ai/).