Tasks: Visual Question Answering
Formats: parquet
Languages: English
Size: 10K - 100K
Tags: medical
Commit 68167c7 · Parent(s): 7745354
Update README.md
README.md
CHANGED
@@ -55,8 +55,8 @@ There are a few image-question-answer triplets which occur more than once in the
 After dropping the duplicate image-question-answer triplets, the dataset contains 32,632 question-answer pairs on 4,289 images.
 
 #### Supported Tasks and Leaderboards
-This dataset has an active leaderboard
-
+This dataset has an active leaderboard on [Papers with Code](https://paperswithcode.com/sota/medical-visual-question-answering-on-pathvqa)
+where models are ranked based on three metrics: "Yes/No Accuracy", "Free-form accuracy" and "Overall accuracy". "Yes/No Accuracy" is
 the accuracy of a model's generated answers for the subset of binary "yes/no" questions. "Free-form accuracy" is the accuracy
 of a model's generated answers for the subset of open-ended questions. "Overall accuracy" is the accuracy of a model's generated
 answers across all questions.
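For reference, here is a minimal sketch of how the three leaderboard metrics described in the updated section could be computed from a model's generated answers. This is not the official PathVQA evaluation script: the `pathvqa_accuracies` and `exact_match` helpers are hypothetical names, and the sketch assumes case-insensitive exact-match scoring, with the binary subset identified by a reference answer of "yes" or "no".

```python
# Hypothetical sketch (not the official evaluation code): the three leaderboard
# metrics, assuming `predictions` and `references` are parallel lists of answer
# strings and that yes/no questions are those whose reference answer is "yes" or "no".

def exact_match(pred: str, ref: str) -> bool:
    """Case-insensitive exact match between a generated and a reference answer."""
    return pred.strip().lower() == ref.strip().lower()

def pathvqa_accuracies(predictions: list[str], references: list[str]) -> dict:
    yes_no_hits, yes_no_total = 0, 0
    free_form_hits, free_form_total = 0, 0
    for pred, ref in zip(predictions, references):
        hit = exact_match(pred, ref)
        if ref.strip().lower() in {"yes", "no"}:   # binary "yes/no" subset
            yes_no_total += 1
            yes_no_hits += hit
        else:                                      # open-ended subset
            free_form_total += 1
            free_form_hits += hit
    total = yes_no_total + free_form_total
    return {
        "yes_no_accuracy": yes_no_hits / yes_no_total if yes_no_total else 0.0,
        "free_form_accuracy": free_form_hits / free_form_total if free_form_total else 0.0,
        "overall_accuracy": (yes_no_hits + free_form_hits) / total if total else 0.0,
    }

# Example:
# pathvqa_accuracies(["yes", "kidney"], ["no", "kidney"])
# -> {"yes_no_accuracy": 0.0, "free_form_accuracy": 1.0, "overall_accuracy": 0.5}
```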