---
title: README
emoji: π
colorFrom: indigo
colorTo: blue
sdk: static
pinned: true
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/650bd036be6db1ec2139be92/0tQbv0-E0ik_RxI5-mULf.png
short_description: Multilingual Multimodal Model
license: apache-2.0
---
We introduce **Maya**, an open-source multilingual multimodal model. Our contributions are:

1) A multilingual image-text pretraining dataset covering eight languages, based on the LLaVA pretraining dataset;
2) A novel toxicity-free version of this dataset across all eight languages; and
3) A multilingual image-text 8B model supporting these languages, enhancing cultural and linguistic comprehension in vision-language tasks.
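To try the model, the released weights can be pulled from the Hugging Face Hub. The snippet below is a minimal download sketch; the repository id `maya-multimodal/maya` is an assumption and should be replaced with the actual model repo linked from this Space.

```python
# Minimal sketch: fetch the released checkpoint from the Hugging Face Hub.
# NOTE: the repo id below is an assumption, not confirmed by this README;
# replace it with the model repository advertised on the organization page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="maya-multimodal/maya")
print(f"Checkpoint files downloaded to: {local_dir}")
```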