clefourrier (HF staff) committed on
Commit 21df6fb
1 Parent(s): 4fd53c3

Update README.md

Files changed (1)
  1. README.md +35 -35

README.md CHANGED
@@ -1,41 +1,41 @@
From "MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering"
(Pal et al.), MedMCQA is a "multiple-choice question answering (MCQA) dataset designed to address
real-world medical entrance exam questions." Per the paper's abstract, "More than 194k high-quality
AIIMS & NEET PG entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are
collected with an average token length of 12.77 and high topical diversity."

The following is an example from the dataset:

Question: In a patient of heart disease antibiotic prophylaxis for dental extraction is:
A. Amoxicillin.
B. Imipenem.
C. Gentamicin.
D. Erythromycin.
Answer: A

Paper: https://arxiv.org/abs/2203.14371
Code: https://github.com/MedMCQA/MedMCQA
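
For context, here is a minimal sketch of how one might load the dataset with the Hugging Face
`datasets` library and render a sample in the format shown above. The Hub path
(`openlifescienceai/medmcqa`) and the field names (`question`, `opa`-`opd`, `cop`) are assumptions
about the public Hub copy rather than part of this card; adjust them to match the actual schema.

```python
from datasets import load_dataset

# Assumed Hub path; depending on the hub version the dataset may also be
# reachable simply as "medmcqa".
dataset = load_dataset("openlifescienceai/medmcqa", split="validation")

sample = dataset[0]

# Assumed schema: `question`, one option per `opa`..`opd`, and `cop` as the
# 0-based index of the correct option.
options = [sample["opa"], sample["opb"], sample["opc"], sample["opd"]]
letters = "ABCD"

# Render the sample in the "Question / A. / B. / C. / D. / Answer" layout above.
prompt = sample["question"] + "\n" + "\n".join(
    f"{letter}. {text}" for letter, text in zip(letters, options)
)
answer = letters[sample["cop"]]

print(prompt)
print("Answer:", answer)
```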

```
@InProceedings{pmlr-v174-pal22a,
  title = {MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering},
  author = {Pal, Ankit and Umapathi, Logesh Kumar and Sankarasubbu, Malaikannan},
  booktitle = {Proceedings of the Conference on Health, Inference, and Learning},
  pages = {248--260},
  year = {2022},
  editor = {Flores, Gerardo and Chen, George H and Pollard, Tom and Ho, Joyce C and Naumann, Tristan},
  volume = {174},
  series = {Proceedings of Machine Learning Research},
  month = {07--08 Apr},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v174/pal22a/pal22a.pdf},
  url = {https://proceedings.mlr.press/v174/pal22a.html},
  abstract = {This paper introduces MedMCQA, a new large-scale, Multiple-Choice Question Answering (MCQA) dataset
  designed to address real-world medical entrance exam questions. More than 194k high-quality AIIMS & NEET PG
  entrance exam MCQs covering 2.4k healthcare topics and 21 medical subjects are collected with an average token
  length of 12.77 and high topical diversity. Each sample contains a question, correct answer(s), and other
  options which requires a deeper language understanding as it tests the 10+ reasoning abilities of a model across
  a wide range of medical subjects & topics. A detailed explanation of the solution, along with the above
  information, is provided in this study.}
}
```