This model is one of the artifacts released with the paper [Massively Multilingual Lexical Specialization of Multilingual Transformers](https://aclanthology.org/2023.acl-long.426/). It was obtained by fine-tuning the representations of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [babelbert-dataset](https://huggingface.co/datasets/umanlp/babelbert-dataset).
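Since the checkpoint keeps the standard `xlm-roberta-base` architecture, it can be loaded with the Hugging Face `transformers` library like any other encoder. The sketch below shows one way to obtain a word representation by mean-pooling the token embeddings; the repository id in `model_name` is a placeholder (shown here with the base model) and should be replaced with this model's actual Hub id:

```python
from transformers import AutoModel, AutoTokenizer

# Placeholder id: substitute this model's Hub repository id.
model_name = "xlm-roberta-base"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a single word and mean-pool its token embeddings
inputs = tokenizer("dog", return_tensors="pt")
outputs = model(**inputs)
embedding = outputs.last_hidden_state.mean(dim=1)

print(tuple(embedding.shape))  # one 768-dim vector per input
```

The resulting vectors can then be compared across languages, e.g. with cosine similarity, which is the typical use case for lexically specialized encoders.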