XLM-R-Large-Tweet-Base

XLM-R-Large-Tweet-Base is a further-pretrained version of the XLM-RoBERTa large model (about 560M parameters), adapted to the social media domain. It was pretrained on 37,200 COVID-19 vaccination-related tweets in Serbian (approximately 1.3 million tokens), so that it better captures the informal writing style and linguistic features typical of social media platforms.
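
Since the base model is a masked language model, a quick way to try it is the fill-mask pipeline from the transformers library. This is a minimal sketch; the repo id below is a hypothetical placeholder, not confirmed by this card.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Hypothetical repo id -- replace with the actual Hugging Face model id.
model_id = "your-org/XLM-R-Large-Tweet-Base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# XLM-RoBERTa uses <mask> as its mask token; fill-mask is a quick sanity check.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Vakcina protiv <mask> je dostupna."))  # Serbian example sentence
```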

Its fine-tuned version for the five-class sentiment analysis task is available as XLM-R-Large-Tweet.
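
For the sentiment model, a text-classification pipeline is the simplest entry point. Again a minimal sketch with a placeholder repo id; the five output labels depend on the fine-tuned model's configuration.

```python
from transformers import pipeline

# Hypothetical repo id for the fine-tuned five-class sentiment model.
sentiment = pipeline("text-classification", model="your-org/XLM-R-Large-Tweet")
print(sentiment("Konačno sam primio vakcinu, odlično se osećam!"))
```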
