The M-QALM Benchmark utilizes the following datasets:

  1. MedQA (USMLE dataset) [1]
  2. MedMCQA [2]
  3. BioASQ (2022) [3] [4]
  4. HEAD-QA [5]
  5. ProcessBank [6]
  6. PubMedQA [7]
  7. MMLU (subset of tasks focusing on clinical and medical knowledge) [8]
  8. BioMRC (Tiny A and B) [9]
  9. Fellowship of the Royal College of Ophthalmologists (FRCOphth) Exams [10] [11] [12]
  10. QA4MRE (Alzheimer's Questions)
  11. MedicationInfo
  12. MedQuAD [13]
  13. LiveQA dataset (ranked version of answers used to evaluate MedQuAD) [13] [14]
  14. MashQA [15]
  15. MEDIQA-ANS [16]

The HEAD-QA dataset was last modified on September 27th, 2023.
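
The component datasets are distributed through this repository's loading script, so they can be pulled programmatically with the `datasets` library. The sketch below is a minimal example of that workflow, not a definitive recipe: the repository ID and the subset name are placeholders, and `trust_remote_code=True` is assumed to be required because the repository relies on a loading script rather than auto-converted data files.

```python
from datasets import load_dataset

# Placeholder repository ID and subset name -- substitute the actual repo
# path and one of the configuration names exposed by this dataset card.
REPO_ID = "<org>/m-qalm-benchmark"
SUBSET = "medqa"

# The repo uses a custom loading script, so remote code execution has to be
# allowed explicitly (assumption based on how script-backed datasets load).
benchmark = load_dataset(REPO_ID, SUBSET, trust_remote_code=True)

print(benchmark)              # available splits and their sizes
print(benchmark["train"][0])  # inspect a single question/answer example
```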

References:

[1] Jin D, Pan E, Oufattole N, Weng W-H, Fang H, Szolovits P. What Disease Does This Patient Have? A Large-Scale Open Domain Question Answering Dataset from Medical Exams. Applied Sciences. 2021; 11(14):6421. https://doi.org/10.3390/app11146421

[2] Pal, A., Umapathi, L.K. & Sankarasubbu, M. (2022). MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering. Proceedings of the Conference on Health, Inference, and Learning, in Proceedings of Machine Learning Research 174:248-260. Available from https://proceedings.mlr.press/v174/pal22a.html.

[3] Tsatsaronis, G., Balikas, G., Malakasiotis, P. et al. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition. BMC Bioinformatics 16, 138 (2015). https://doi.org/10.1186/s12859-015-0564-6

[4] Krithara, A., Nentidis, A., Bougiatiotis, K. et al. BioASQ-QA: A manually curated corpus for Biomedical Question Answering. Sci Data 10, 170 (2023). https://doi.org/10.1038/s41597-023-02068-4

[5] David Vilares and Carlos Gómez-Rodríguez. 2019. HEAD-QA: A Healthcare Dataset for Complex Reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 960–966, Florence, Italy. Association for Computational Linguistics. http://dx.doi.org/10.18653/v1/P19-1092

[6] Jonathan Berant, Vivek Srikumar, Pei-Chun Chen, Abby Vander Linden, Brittany Harding, Brad Huang, Peter Clark, and Christopher D. Manning. 2014. Modeling Biological Processes for Reading Comprehension. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1499–1510, Doha, Qatar. Association for Computational Linguistics. http://dx.doi.org/10.3115/v1/D14-1159

[7] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. PubMedQA: A Dataset for Biomedical Research Question Answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567–2577, Hong Kong, China. Association for Computational Linguistics. http://dx.doi.org/10.18653/v1/D19-1259

[8] D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt, "Measuring massive multitask language understanding", in International Conference on Learning Representations, 2021. https://openreview.net/forum?id=d7KBjmI3GmQ.

[9] Dimitris Pappas, Petros Stavropoulos, Ion Androutsopoulos, and Ryan McDonald. 2020. BioMRC: A Dataset for Biomedical Machine Reading Comprehension. In Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing, pages 140–149, Online. Association for Computational Linguistics. http://dx.doi.org/10.18653/v1/2020.bionlp-1.15

[10] Raimondi, R., Tzoumas, N., Salisbury, T. et al. Comparative analysis of large language models in the Royal College of Ophthalmologists fellowship exams. Eye (2023). https://doi.org/10.1038/s41433-023-02563-3

[11] Royal College of Ophthalmologists. Part 1 FRCOphth Sample MCQs. https://www.rcophth.ac.uk/wp-content/uploads/2022/01/Part-1-FRCOphth-Sample-MCQs.pdf

[12] Royal College of Ophthalmologists. Part 2 FRCOphth Written Sample MCQs. https://www.rcophth.ac.uk/wp-content/uploads/2022/01/Part-2-FRCOphth-Written-Sample-MCQs-20160524.pdf

[13] Ben Abacha, A., Demner-Fushman, D. A question-entailment approach to question answering. BMC Bioinformatics 20, 511 (2019). https://doi.org/10.1186/s12859-019-3119-4

[14] Asma Ben Abacha, Eugene Agichtein, Yuval Pinter & Dina Demner-Fushman. Overview of the Medical Question Answering Task at TREC 2017 LiveQA. TREC, Gaithersburg, MD, 2017 (https://trec.nist.gov/pubs/trec26/papers/Overview-QA.pdf).

[15] Ming Zhu, Aman Ahuja, Da-Cheng Juan, Wei Wei, and Chandan K. Reddy. 2020. Question Answering with Long Multiple-Span Answers. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3840–3849, Online. Association for Computational Linguistics. http://dx.doi.org/10.18653/v1/2020.findings-emnlp.342

[16] Savery, M., Abacha, A.B., Gayen, S. et al. Question-driven summarization of answers to consumer health questions. Sci Data 7, 322 (2020). https://doi.org/10.1038/s41597-020-00667-z
