Update README.md
README.md CHANGED
@@ -4,13 +4,14 @@ emoji: 🌍
 colorFrom: indigo
 colorTo: blue
 sdk: static
-pinned:
+pinned: true
 thumbnail: >-
   https://cdn-uploads.huggingface.co/production/uploads/650bd036be6db1ec2139be92/0tQbv0-E0ik_RxI5-mULf.png
 short_description: Multilingual Multimodal Model
+license: apache-2.0
 ---
 
-we introduce **Maya**, an open-source Multimodal
+we introduce **Maya**, an open-source Multilingual Multimodal model.
 
 1) a multilingual image-text pretraining dataset in eight languages, based on the LLaVA pretraining dataset;
 2) a novel toxicity-free version across eight languages; and
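For reference, below is a sketch of how the front-matter fields touched by this hunk read after the commit is applied. It is assembled only from the lines visible in the diff above; lines 1–3 of README.md (the opening `---` plus the title and the `emoji: 🌍` field named in the hunk header) are not part of the diff and are omitted here.

```yaml
# Space front-matter fields from this hunk, as they stand after the commit
# (the file's first three lines are not shown in the diff and are omitted).
colorFrom: indigo
colorTo: blue
sdk: static
pinned: true          # previously declared without a value
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/650bd036be6db1ec2139be92/0tQbv0-E0ik_RxI5-mULf.png
short_description: Multilingual Multimodal Model
license: apache-2.0   # newly added by this commit
```

Per the Hugging Face Spaces configuration reference, `pinned: true` pins the Space at the top of the owner's profile, and `license` records the repository license (here Apache 2.0) in the Space metadata.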