AtsuMiyai committed
Commit ceb493a
1 Parent(s): 0ca2470

update explanations on MM-UPD Bench

Files changed (1)
  1. constants.py +38 -33
constants.py CHANGED
@@ -35,40 +35,13 @@ LEADERBORAD_INTRODUCTION = """
  <a href='https://arxiv.org/abs/2403.20331'><img src='https://img.shields.io/badge/cs.CV-Paper-b31b1b?logo=arxiv&logoColor=red'></a>
  </div>
 
- ## What is MM-UPD Bench?
  MM-UPD Bench: A Comprehensive Benchmark for Evaluating the Trustworthiness of Vision Language Models (VLMs) in the Context of Unsolvable Problem Detection (UPD)
 
  Our MM-UPD Bench encompasses three benchmarks: MM-AAD, MM-IASD, and MM-IVQD.
 
- 1\. **MM-AAD:** Benchmark for Absent Answer Detection (AAD).
- MM-AAD Bench is a dataset where the correct answer option for each question is removed.
- MM-AAD tests the model's capability to recognize when the correct answer is absent from the provided choices.
+ Through these benchmarks, we aim to provide a comprehensive evaluation of VLMs across multiple scenarios.
 
- 2\. **MM-IASD:** Benchmark for Incompatible Answer Set Detection (IASD).
- MM-IASD Bench is a dataset where the answer set is completely incompatible with the context specified by the question and the image.
- MM-IASD tests the model's capability to recognize when the answer set is incompatible with the context.
-
- 3\. **MM-IVQD:** Benchmark for Incompatible Visual Question Detection (IVQD).
- MM-IVQD Bench is a dataset where the question is incompatible with the image.
- MM-IVQD evaluates the VLMs' capability to discern when a question and image are irrelevant or inappropriate.
-
-
- ## Characteristics of MM-UPD Bench
- We design MM-UPD Bench to provide a comprehensive evaluation of VLMs across multiple senarios.
-
- 1\. **Multiple Senario Evaluation:** We carefully design prompts choices and examine the three senario: (i) Base (w/o instruction), (ii) Option (w/ additional option), (iii) Instruction (w/ additional instruction).
-
- 2\. **Ability-Wise Evaluation:** We carefully decompose each benchmark into various abilities to reveal individual model's strengths and weaknesses.
-
-
- ## About Evaluation Metrics
- We evaluate the performance of VLMs on MM-UPD Bench using the following metrics:
- 1. **Dual accuracy:** The accuracy on standard-UPD pairs, where we count
- success only if the model is correct on both the standard and UPD questions.
-
- 2. **Standard accuracy:** The accuracy on standard questions.
-
- 3. **UPD (AAD/IASD/IVQD) accuracy:** The accuracy of AAD/IASD/IVQD questions.
+ For more detailed information, please refer to the 'About' section.
 
 
  Please follow the instructions in [UPD](https://github.com/AtsuMiyai/UPD) to upload the generated JSON file here. After clicking the `Submit Eval` button, click the `Refresh` button.
@@ -106,10 +79,42 @@ SUBMIT_INTRODUCTION = """# Submit on MM-UPD Benchmark Introduction
 
 
  LEADERBORAD_INFO = """
- MM-UPD Bench is a comprehensive benchmark for evaluating the trustworthiness of Vision Language Models
- (VLMs) in the context of Unsolvable Problem Detection (UPD). MM-UPD encompasses three benchmarks:
- MM-AAD, MM-IASD, and MM-IVQD. Each benchmark cover a wide range of abilities. Through these benchmarks,
- we aim to provide a comprehensive evaluation of VLMs across multiple senarios.
+ ## What is MM-UPD Bench?
+ MM-UPD Bench: A Comprehensive Benchmark for Evaluating the Trustworthiness of Vision Language Models (VLMs) in the Context of Unsolvable Problem Detection (UPD)
+
+ Our MM-UPD Bench encompasses three benchmarks: MM-AAD, MM-IASD, and MM-IVQD.
+
+ 1\. **MM-AAD:** Benchmark for Absent Answer Detection (AAD).
+ MM-AAD Bench is a dataset where the correct answer option for each question is removed.
+ MM-AAD tests the model's capability to recognize when the correct answer is absent from the provided choices.
+
+ 2\. **MM-IASD:** Benchmark for Incompatible Answer Set Detection (IASD).
+ MM-IASD Bench is a dataset where the answer set is completely incompatible with the context specified by the question and the image.
+ MM-IASD tests the model's capability to recognize when the answer set is incompatible with the context.
+
+ 3\. **MM-IVQD:** Benchmark for Incompatible Visual Question Detection (IVQD).
+ MM-IVQD Bench is a dataset where the question is incompatible with the image.
+ MM-IVQD evaluates the VLMs' capability to discern when a question and image are irrelevant or inappropriate.
+
+
+ ## Characteristics of MM-UPD Bench
+ We design MM-UPD Bench to provide a comprehensive evaluation of VLMs across multiple scenarios.
+
+ 1\. **Multiple Scenario Evaluation:** We carefully design prompt choices and examine three scenarios: (i) Base (w/o instruction), (ii) Option (w/ additional option), (iii) Instruction (w/ additional instruction).
+
+ 2\. **Ability-Wise Evaluation:** We carefully decompose each benchmark into various abilities to reveal each model's strengths and weaknesses.
+
+
+ ## About Evaluation Metrics
+ We evaluate the performance of VLMs on MM-UPD Bench using the following metrics:
+ 1. **Dual accuracy:** The accuracy on standard-UPD pairs, where we count
+ success only if the model is correct on both the standard and UPD questions.
+
+ 2. **Standard accuracy:** The accuracy on standard questions.
+
+ 3. **UPD (AAD/IASD/IVQD) accuracy:** The accuracy on AAD/IASD/IVQD questions.
+
+
  """
 
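
To make the new explanation concrete, here is a minimal, hypothetical sketch of (i) what a standard question and its AAD counterpart conceptually look like and (ii) how the three reported metrics relate on standard-UPD pairs. The field names (`options`, `standard_correct`, `upd_correct`) and the toy data are illustrative assumptions, not the actual MM-UPD schema or the official evaluation code in the UPD repository.

```python
# Hypothetical illustration of MM-UPD concepts; field names and data are assumptions,
# not the real MM-UPD data format or the official UPD evaluation code.

# A standard multiple-choice question and its AAD variant: in the AAD version the
# correct option ("Two") has been removed, so the expected behavior is to indicate
# that no listed option is correct rather than pick one.
standard_question = {
    "question": "How many cats are in the image?",
    "options": {"A": "One", "B": "Two", "C": "Three", "D": "Four"},
    "answer": "B",
}
aad_question = {
    "question": "How many cats are in the image?",
    "options": {"A": "One", "C": "Three", "D": "Four"},  # correct option removed
    "answer": None,  # no listed option is correct
}


def compute_accuracies(pairs):
    """Compute Standard, UPD, and Dual accuracy over standard-UPD pairs.

    Each pair records whether the model answered the standard question correctly
    and whether it behaved correctly on the UPD (AAD/IASD/IVQD) counterpart.
    """
    n = len(pairs)
    standard = sum(p["standard_correct"] for p in pairs) / n
    upd = sum(p["upd_correct"] for p in pairs) / n
    # Dual accuracy: a pair counts only if BOTH the standard and UPD answers are correct.
    dual = sum(p["standard_correct"] and p["upd_correct"] for p in pairs) / n
    return {"standard_accuracy": standard, "upd_accuracy": upd, "dual_accuracy": dual}


toy_results = [
    {"standard_correct": True, "upd_correct": True},   # solved and flagged the unsolvable variant
    {"standard_correct": True, "upd_correct": False},  # solved, but picked an option in the UPD variant
    {"standard_correct": False, "upd_correct": True},  # flagged the UPD variant, missed the standard one
]
print(compute_accuracies(toy_results))
# -> standard and UPD accuracy are each about 0.67, dual accuracy is about 0.33
```

Because dual accuracy requires success on both halves of a pair, it can never exceed either standard or UPD accuracy.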