readme: mention XLM-V experiments repo
README.md
CHANGED
@@ -103,7 +103,7 @@ XLM-V is a multilingual language model with a one million token vocabulary trained
 It was introduced in the [XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472)
 paper by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa.
 
-**Disclaimer**: The team releasing XLM-V did not write a model card for this model so this model card has been written by the Hugging Face team.
+**Disclaimer**: The team releasing XLM-V did not write a model card for this model so this model card has been written by the Hugging Face team. [This repository](https://github.com/stefan-it/xlm-v-experiments) documents all necessary integration steps.
 
 ## Model description
 