# Clinical Question Answering Dataset II (Farsi)

This dataset contains more than 211k questions and more than 700k answers, all produced in written form. The questions were posed by ordinary Persian speakers (Iranians), and the responses were provided by doctors from various specialties.

## Dataset Description

Question records without corresponding answers have been excluded from the dataset.
This dataset is NOT part of the [Clinical Question Answering I](https://huggingface.co/datasets/PerSets/cqai) dataset; it is an entirely separate dataset.

## Usage
<details>

Hugging Face datasets library:
```python
from datasets import load_dataset
dataset = load_dataset('PerSets/cqaii')
```
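
A quick sanity check after loading (a minimal sketch, assuming the records end up in a single `train` split, as the `train*.jsonl` file names below suggest; the exact field names follow the JSONL schema):
```python
# Show the available splits and row counts, then peek at one record.
print(dataset)
print(dataset["train"][0])
```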

Pandas library:
```python
import pandas as pd
import os

# Collect every train*.jsonl shard in the current directory.
data_files = [file for file in os.listdir() if file.startswith("train") and file.endswith(".jsonl")]

df = pd.DataFrame()
for file in data_files:
    df = pd.concat([df, pd.read_json(file, lines=True)], ignore_index=True)
```
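
Once the shards are concatenated, a couple of standard pandas calls (nothing dataset-specific is assumed here) show how many records were read and which columns the JSONL files expose:
```python
print(df.shape)    # (number of records, number of columns)
print(df.columns)  # column names as stored in the JSONL files
print(df.head())   # first few question/answer records
```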

Vanilla Python: <br>
(very slow - not recommended)
```python
import json
import os

data_files = [file for file in os.listdir() if file.startswith("train") and file.endswith(".jsonl")]

train = []
for file in data_files:
    with open(file, encoding="utf-8") as f:
        for line in f:
            obj = json.loads(line)
            train.append(obj)
```
</details>

## License
CC0