SLMLAH committed
Commit 539f2aa · verified · 1 Parent(s): d8179fb

Update README.md

Files changed (1)
  1. README.md +203 -21
README.md CHANGED
---
license: mit
language:
- en
- ar
base_model:
- qwen2-VL-7B
pipeline_tag: visual-question-answering
tags:
- Arabic
---
 
<div style="display: flex; align-items: center;">
<img src="assets_hf/AIN.png" width="5%" alt="logo" style="margin-right: 10px;" />
<h1 style="margin: 0; font-size: 32px;">AIN: The Arabic INclusive Large Multimodal Model</h1>
</div>
 
[Ahmed Heakl](https://huggingface.co/ahmedheakl) <sup> * </sup> &nbsp;
[Sara Ghaboura](https://huggingface.co/SLMLAH) <sup> * </sup> &nbsp;

<em> <sup> *Equal Contribution </sup> </em>
<br>

#### **Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), UAE**
[![arXiv](https://img.shields.io/badge/arXiv-2502.00094-3399FF)](https://arxiv.org/abs/2502.00094)
[![Our Page](https://img.shields.io/badge/Visit-Our%20Page-8C7AFF?style=flat)](https://mbzuai-oryx.github.io/AIN/)
[![GitHub issues](https://img.shields.io/github/issues/mbzuai-oryx/Camel-Bench?color=FFF359&label=issues&style=flat)](https://github.com/mbzuai-oryx/AIN/issues)
[![GitHub stars](https://img.shields.io/github/stars/mbzuai-oryx/AIN?color=FF6A07&style=flat)](https://github.com/mbzuai-oryx/AIN/stargazers)
[![GitHub license](https://img.shields.io/github/license/mbzuai-oryx/Camel-Bench?color=FF6666)](https://github.com/mbzuai-oryx/AIN/blob/main/LICENSE)

---
 
<div class="abstract-container">
<h2>Abstract</h2>
<div class="abstract-content">
<p>
Amid the swift progress of large language models (LLMs) and their evolution into large multimodal models (LMMs), significant strides have been made in high-resource languages such as English and Chinese. While Arabic LLMs have seen notable progress, Arabic LMMs remain largely unexplored, often focusing narrowly on a few specific aspects of the language and of visual understanding. To bridge this gap, we introduce <b><em>AIN, the Arabic Inclusive Multimodal Model,</em></b> designed to excel across diverse domains.
AIN is an English-Arabic <b>bilingual LMM</b> trained on a carefully constructed corpus of <b>3.6 million</b> high-quality Arabic-English multimodal data samples. It demonstrates state-of-the-art Arabic performance while also possessing strong English-language visual capabilities.
</p>
</div>
</div>
 

## 🌟 Key Features
- The **first Arabic-centric inclusive Large Multimodal Model (LMM)**, trained on **3.6M samples**.
- Includes **35% authentic Arabic data** within its Arabic data subset.
- Achieves **superior performance over both closed-source models** (e.g., GPT-4o) **and open-source models** (e.g., Qwen2-VL-7B) across tasks such as OCR and specialized domains.
- Demonstrates **robust bilingual capabilities** (Arabic/English), **validated** through **comprehensive testing** and **human evaluation** across 17 Arab countries.
- Exhibits **advanced cultural understanding** and domain expertise in fields such as **medical imaging**, **agriculture**, and **scientific visualization**.
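Given the front matter above (`base_model: qwen2-VL-7B`, `pipeline_tag: visual-question-answering`), inference is expected to follow the standard Qwen2-VL path in Hugging Face Transformers. The snippet below is a minimal sketch under that assumption rather than the official AIN example; the repo id `MBZUAI/AIN` and the image URL are placeholders to replace with the released checkpoint and your own input.

```python
# Minimal VQA inference sketch, assuming AIN follows the standard Qwen2-VL
# loading path declared in the front matter (base_model: qwen2-VL-7B).
# NOTE: "MBZUAI/AIN" and the image URL below are placeholders.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

model_id = "MBZUAI/AIN"  # hypothetical repo id; replace with the released checkpoint
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/sample.jpg"},
        {"type": "text", "text": "ما الذي يظهر في هذه الصورة؟"},  # "What appears in this image?"
    ],
}]

# Build the chat prompt and collect the vision inputs in Qwen2-VL format.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate an answer and decode only the newly generated tokens.
output_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```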
<p align="center">
<img src="assets_hf/intro_bar.png" width="50%" alt="intro_bar" style="margin-right: 2px;"/>
<h6>
<em> <b>Figure 1.</b> Comparative performance of AIN-7B against other models across key domains, including OCR & Document Understanding, Remote Sensing, Agricultural Understanding, and overall performance across all domains. </em>
</h6>
</p>

<p align="center">
<img src="assets_hf/radar_chart.png" width="35%" alt="radar_chart" style="margin-right: 2px;"/>
<h6>
<em> <b>Figure 2.</b> Comprehensive performance analysis of AIN-7B across CAMEL-Bench domains, compared with prominent closed-source models and open-source counterparts. <strong>OCR:</strong> "OCR & Document Understanding", <strong>Video:</strong> "General Video & Multi-Image Understanding", <strong>RS:</strong> "Remote Sensing Understanding", <strong>CDT:</strong> "Chart, Diagram & Table Understanding", <strong>Agro.:</strong> "Agricultural Image Understanding", <strong>Cultural:</strong> "Cultural-Specific Understanding", <strong>Medical:</strong> "Medical Image Understanding". </em>
</h6>
</p>
---
## ⚖️ Quantitative Evaluation and Results
AIN demonstrates state-of-the-art performance across diverse domains, surpassing both open- and closed-source models. Notably, it achieves an aggregate performance score of 63.77%, with significant gains in OCR, remote sensing, and agricultural image understanding.
 
<div align="center">
<table>
<caption>
<h6>
<strong>Table 1. Performance comparison of AIN and different closed- and open-source LMMs across CAMEL-Bench domains.</strong>
<br> <em>Best performance is marked with 🥇; second-best is 🥈.</em>
<strong>OCR</strong>: "OCR & Document Understanding",
<strong>Video</strong>: "General Video & Multi-Image Understanding",
<strong>RS</strong>: "Remote Sensing Understanding",
<strong>CDT</strong>: "Chart, Diagram & Table Understanding",
<strong>Agro.</strong>: "Agricultural Image Understanding",
<strong>Cult.</strong>: "Cultural-Specific Understanding",
<strong>Med.</strong>: "Medical Image Understanding".
</h6>
</caption>
<thead>
<tr style="background-color: #e0e0e0;">
<th>Models</th> <th>VQA</th> <th>OCR</th> <th>Video</th> <th>RS</th> <th>CDT</th> <th>Agro.</th> <th>Cult.</th> <th>Med.</th> <th style="background-color: #d0d0d0;">Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>GPT-4o</td> <td>🥈55.15</td> <td>🥈54.98</td> <td>🥇69.65</td> <td>🥈27.36</td> <td>🥈62.35</td> <td>🥈80.75</td> <td>🥇80.86</td> <td>🥇49.91</td> <td style="background-color: #d0d0d0;">🥈60.13</td>
</tr>
<tr>
<td>GPT-4o-mini</td> <td>48.83</td> <td>39.38</td> <td>🥈66.28</td> <td>16.93</td> <td>56.37</td> <td>78.80</td> <td>65.92</td> <td>🥈47.37</td> <td style="background-color: #d0d0d0;">52.49</td>
</tr>
<tr>
<td>Gemini-1.5-Pro</td> <td>46.68</td> <td>28.68</td> <td>42.95</td> <td>17.07</td> <td>47.06</td> <td>72.14</td> <td>56.24</td> <td>33.78</td> <td style="background-color: #d0d0d0;">52.38</td>
</tr>
<tr>
<td>Gemini-1.5-flash</td> <td>45.59</td> <td>27.58</td> <td>53.31</td> <td>14.95</td> <td>48.26</td> <td>76.07</td> <td>46.54</td> <td>42.87</td> <td style="background-color: #d0d0d0;">44.40</td>
</tr>
<tr>
<td>InternVL-8B</td> <td>30.41</td> <td>15.91</td> <td>51.42</td> <td>5.36</td> <td>30.27</td> <td>44.47</td> <td>20.88</td> <td>29.48</td> <td style="background-color: #d0d0d0;">28.52</td>
</tr>
<tr>
<td>InternVL2.5-1B</td> <td>27.22</td> <td>19.45</td> <td>38.20</td> <td>3.39</td> <td>30.75</td> <td>39.53</td> <td>35.68</td> <td>21.27</td> <td style="background-color: #d0d0d0;">26.94</td>
</tr>
<tr>
<td>Qwen-VL-2B</td> <td>41.02</td> <td>22.93</td> <td>38.90</td> <td>12.56</td> <td>27.83</td> <td>52.02</td> <td>34.28</td> <td>29.12</td> <td style="background-color: #d0d0d0;">32.33</td>
</tr>
<tr>
<td>AIN-7B <em>(ours)</em></td> <td>🥇56.78</td> <td>🥇72.35</td> <td>64.09</td> <td>🥇45.92</td> <td>🥇64.10</td> <td>🥇85.05</td> <td>🥈78.09</td> <td>43.77</td> <td style="background-color: #d0d0d0;">🏆63.77</td>
</tr>
</tbody>
</table>
</div>
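For the AIN-7B and GPT-4o rows, the Total column is consistent with the unweighted mean of the eight per-domain scores. The snippet below is only an illustrative check, not part of the CAMEL-Bench tooling:

```python
# Total column = unweighted mean of the eight per-domain scores (illustrative check).
rows = {
    "AIN-7B": [56.78, 72.35, 64.09, 45.92, 64.10, 85.05, 78.09, 43.77],  # -> 63.77
    "GPT-4o": [55.15, 54.98, 69.65, 27.36, 62.35, 80.75, 80.86, 49.91],  # -> 60.13
}
for name, scores in rows.items():
    print(name, round(sum(scores) / len(scores), 2))
```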

---
## 🎯 Qualitative Evaluation
The qualitative evaluation showcases AIN's advanced capabilities in handling diverse, complex tasks, including OCR, medical imaging, remote sensing, and cultural-specific understanding, with remarkable precision and contextual relevance. Compared with GPT-4o and LLaVA, AIN demonstrates superior performance in identifying intricate details and maintaining accuracy across varied query formats and multi-domain challenges.
<div align="center">
<img src="assets_hf/qualitative.png" width="50%" alt="qualitative" />
<h6>
<em> <b>Figure 3.</b> Qualitative examples showcasing AIN-7B's capabilities across various domains, including general VQA, OCR & Document Understanding, Remote Sensing, Medical Imaging, Agricultural Understanding, and Cultural-Specific tasks. </em>
</h6>
</div>

---
## 🧐 Data Verification and Toxicity Filtering
A multi-step verification pipeline was implemented to ensure high-quality translations and safe visual data. Translation accuracy was assessed through human evaluation, in which native Arabic speakers rated outputs against reference translations, and through semantic similarity checks using **LaBSE**. Additionally, translated samples were reverse-translated and validated with **BLEU, METEOR, and ROUGE scores** to measure correctness, correlation, and overlap. For visual data, toxicity filtering was applied using **LLavaGuard's safety policies and GPT-4o**, identifying and removing unsafe content related to violence, substance abuse, and harmful imagery to ensure compliance with ethical AI standards.
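As a concrete illustration of the text-verification step, the sketch below scores a single sample in the way the pipeline describes: LaBSE cosine similarity between source and translation, plus BLEU and ROUGE-L on the reverse translation (METEOR omitted for brevity). It is a toy example assuming the `sentence-transformers`, `sacrebleu`, and `rouge-score` packages, not the project's released code.

```python
# Illustrative translation-quality checks (toy example, not the released pipeline).
from sentence_transformers import SentenceTransformer, util
import sacrebleu
from rouge_score import rouge_scorer

source_en = "The farmer inspects the wheat field at sunrise."
translation_ar = "يتفقد المزارع حقل القمح عند شروق الشمس."
back_translation_en = "The farmer checks the wheat field at sunrise."  # hypothetical reverse translation

# 1) Cross-lingual semantic similarity between source and translation via LaBSE.
labse = SentenceTransformer("sentence-transformers/LaBSE")
embeddings = labse.encode([source_en, translation_ar], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

# 2) Overlap between the source and its reverse translation (BLEU and ROUGE-L).
bleu = sacrebleu.sentence_bleu(back_translation_en, [source_en]).score
rouge_l = rouge_scorer.RougeScorer(["rougeL"]).score(source_en, back_translation_en)["rougeL"].fmeasure

# Samples falling below chosen thresholds would be routed to human review.
print(f"LaBSE similarity: {similarity:.3f} | BLEU: {bleu:.1f} | ROUGE-L: {rouge_l:.3f}")
```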
<p align="center">
<img src="assets_hf/verify_pipeline.png" width="45%" alt="verify" style="margin-right: 2px;"/>
<h6>
<em> <b>Figure 4.</b> Data verification and filtering pipeline for textual and visual data, ensuring high-quality training data through semantic similarity checks, translation quality evaluations, and toxicity screening for safety compliance. </em>
</h6>
</p>

<p align="center">
<img src="assets_hf/toxicity.png" width="30%" alt="toxicity" style="margin-right: 2px;"/>
<h6>
<em> <b>Figure 5.</b> Distribution of visual-data toxicity filtering results, showing that 95% of the data is classified as safe, while 5% is identified as unsafe due to categories such as weapons or substance abuse, violence, and animal cruelty. </em>
</h6>
</p>
 
---

## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 💬 Contact us
For questions or suggestions, feel free to reach out to us on [GitHub Discussions](https://github.com/mbzuai-oryx/AIN/discussions).

---

If you use AIN in your research, please cite our work as follows:

```