---
language:
- en
license: cc-by-2.5
task_categories:
- question-answering
- sentence-similarity
dataset_info:
- config_name: question-answer-passages
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: id
    dtype: int64
  - name: relevant_passage_ids
    sequence: int64
  splits:
  - name: train
    num_bytes: 1615888.0491629583
    num_examples: 4012
  - name: test
    num_bytes: 284753.9508370418
    num_examples: 707
  download_size: 1309572
  dataset_size: 1900642.0
- config_name: text-corpus
  features:
  - name: passage
    dtype: string
  - name: id
    dtype: int64
  splits:
  - name: test
    num_bytes: 60166919
    num_examples: 40181
  download_size: 35304894
  dataset_size: 60166919
configs:
- config_name: question-answer-passages
  data_files:
  - split: train
    path: question-answer-passages/train-*
  - split: test
    path: question-answer-passages/test-*
- config_name: text-corpus
  data_files:
  - split: test
    path: text-corpus/test-*
tags:
- biology
- medical
- rag
---
This dataset is a subset of a training dataset by [the BioASQ Challenge](http://www.bioasq.org/), which is available [here](http://participants-area.bioasq.org/Tasks/11b/trainingDataset/).

It is derived from [`rag-datasets/rag-mini-bioasq`](https://huggingface.co/datasets/rag-datasets/rag-mini-bioasq).

Modifications include:
- filling in missing passages (some contained `"nan"` instead of actual text),
- changing the type of `relevant_passage_ids` from string to sequence of ints,
- deduplicating the passages (40 duplicates removed) and fixing the `relevant_passage_ids` in the question-answer-passages (QAP) triplets to point to the ids of the deduplicated passages,
- splitting the QAP triplets into train and test splits.
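The id-type conversion and deduplication steps above can be sketched as follows. This is an illustrative reconstruction, not the actual build script: the `"[…]"`-style string format of the original `relevant_passage_ids` and the helper names are assumptions.

```python
def parse_passage_ids(raw: str) -> list[int]:
    """Convert a string like "[4780, 4781]" into a list of ints.
    The bracketed, comma-separated format is an assumption."""
    return [int(tok) for tok in raw.strip("[] ").split(",") if tok.strip()]


def dedup_passages(passages: list[tuple[int, str]]):
    """Drop passages with duplicate text, keeping the first occurrence.
    Returns the deduplicated (id, text) pairs and a map old_id -> kept_id,
    which is then used to rewrite relevant_passage_ids in the QAP triplets."""
    kept: dict[str, int] = {}    # passage text -> id of first occurrence
    id_map: dict[int, int] = {}  # old id -> surviving id
    result: list[tuple[int, str]] = []
    for pid, text in passages:
        if text in kept:
            id_map[pid] = kept[text]
        else:
            kept[text] = pid
            id_map[pid] = pid
            result.append((pid, text))
    return result, id_map
```

With this mapping, each triplet's `relevant_passage_ids` can be rewritten as `[id_map[i] for i in ids]` so every reference points at a surviving passage.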