innocent-charles committed
Commit ad07bfb • 1 Parent(s): 125640d
Update README.md

README.md CHANGED
@@ -36,7 +36,7 @@ Learning Visual Concepts Directly From African Languages Supervision. [Paper is
 AViLaMa is a large open-source text-vision alignment pre-training model for African languages. It learns visual concepts directly from African-language supervision. It is inspired by OpenAI's CLIP, but grounded in African languages to capture the nuance, cultural context, and social usage that machine translation alone cannot recover. It combines techniques such as language-agnostic encoding and a data filtering network, covers more than 12 African languages, and is trained on the #AViLaDa-2B dataset of filtered image-text pairs.
 
 - **Developed by:** Sartify LLC (www.sartify.com)
-- **Authors:**
+- **Authors:** Sartify LLC Research Team
 - **Funded by:** Sartify LLC, the open-source community, and other donors (new donors are always welcome)
 - **Model type:** multilingual, multimodal transformer
 - **Language(s):** en (English), sw (Swahili), ha (Hausa), yo (Yoruba), ig (Igbo), zu (Zulu), sn (Shona), ar (Arabic), am (Amharic), fr (French), pt (Portuguese)
@@ -74,7 +74,7 @@ model = model.eval()
 AViLaMa paper
 @article{sartifyllc2023africanvision,
   title={AViLaMa: Learning Visual Concepts Directly From African Languages Supervision},
-  author={
+  author={Sartify LLC Research Team},
   journal={To be inserted},
   year={2024}
 }