---
license: mit
datasets:
- jhu-clsp/bernice-pretrain-data
language:
- en
- es
- pt
- ja
- ar
- id
- ko
- tr
- fr
- tl
- ru
- und
- it
- th
- de
- hi
- pl
- nl
- fa
- et
- ht
- ur
- sv
- ca
- el
- fi
- cs
- he
- da
- vi
- zh
- ta
- ro
- 'no'
- uk
- cy
- ne
- hu
- eu
- sl
- lv
- lt
- bn
- sr
- bg
- mr
- ml
- is
- te
- gu
- kn
- ps
- ckb
- si
- hy
- or
- pa
- am
- sd
- my
- ka
- km
- dv
- lo
- ug
- bo
---

# Bernice
Bernice is a multilingual pre-trained encoder trained exclusively on Twitter data. The model was released with the EMNLP 2022 paper *Bernice: A Multilingual Pre-trained Encoder for Twitter* by Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, Mark Dredze, and Philip Resnik.

Please reach out to Alexandra DeLucia (aadelucia at jhu.edu) or open an issue if you have questions.
## Model description
The language of Twitter differs significantly from that of other domains commonly included in large language model training. While tweets are typically multilingual and contain informal language, including emoji and hashtags, most pre-trained language models for Twitter are either monolingual, adapted from other domains rather than trained exclusively on Twitter, or are trained on a limited amount of in-domain Twitter data. We introduce Bernice, the first multilingual RoBERTa language model trained from scratch on 2.5 billion tweets with a custom tweet-focused tokenizer. We evaluate on a variety of monolingual and multilingual Twitter benchmarks, finding that our model consistently exceeds or matches the performance of a variety of models adapted to social media data as well as strong multilingual baselines, despite being trained on less data overall. We posit that it is more efficient compute- and data-wise to train completely on in-domain data with a specialized domain-specific tokenizer.
## Training data
2.5 billion tweets with 56 billion subwords in 66 languages (as identified in Twitter metadata). The tweets were collected from the 1% public Twitter stream between January 2016 and December 2021.
## Training procedure
RoBERTa pre-training (i.e., masked language modeling) with the BERT-base architecture.
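For illustration only, a masked language modeling objective of this kind can be set up with the Transformers library as in the sketch below. This is not the original pre-training code; the 15% masking rate is the standard RoBERTa default rather than a confirmed training detail, and the example input is a placeholder.

```python
# Minimal MLM sketch (illustrative; NOT the original pre-training setup).
from transformers import AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/bernice", model_max_length=128)
# Note: loading the released encoder this way may re-initialize the MLM head
# if the head weights were not saved with the checkpoint.
model = AutoModelForMaskedLM.from_pretrained("jhu-clsp/bernice")

# Dynamically mask 15% of subword tokens, as in RoBERTa-style pre-training
# (assumed default, not a documented Bernice hyperparameter).
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

# Placeholder example text; a real run would stream batches of pre-processed tweets.
batch = collator([tokenizer("RT @USER check this out HTTPURL")])
outputs = model(**batch)  # outputs.loss holds the MLM loss when labels are present
```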
## Evaluation results
TBD
## How to use
You can use this model to obtain tweet representations. To use it with the Hugging Face PyTorch interface:
```python
import re

import torch
from transformers import AutoTokenizer, AutoModel

# Load model and tokenizer
model = AutoModel.from_pretrained("jhu-clsp/bernice")
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/bernice", model_max_length=128)

# Data
raw_tweets = [
    "So, Nintendo and Illimination's upcoming animated #SuperMarioBrosMovie is reportedly titled 'The Super Mario Bros. Movie'. Alrighty. :)",
    "AMLO se vio muy indignado porque propusieron al presidente de Ucrania para el premio nobel de la paz. ¿Qué no hay otros que luchen por la paz? ¿Acaso se quería proponer él?"
]

# Pre-process tweets for the tokenizer: replace user handles and URLs
# with placeholder tokens
URL_RE = re.compile(r"https?:\/\/[\w\.\/\?\=\d&#%_:/-]+")
HANDLE_RE = re.compile(r"@\w+")
tweets = []
for t in raw_tweets:
    t = HANDLE_RE.sub("@USER", t)
    t = URL_RE.sub("HTTPURL", t)
    tweets.append(t)

# Tokenize, then use the final hidden states as tweet representations
with torch.no_grad():
    inputs = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
    embeddings = model(**inputs).last_hidden_state
```
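If you need a single fixed-size vector per tweet, one common approach (a sketch continuing the snippet above, not an officially prescribed pooling strategy) is to mean-pool the final hidden states over non-padding tokens:

```python
# Continues the snippet above: mean-pool over non-padding tokens (illustrative only).
mask = inputs["attention_mask"].unsqueeze(-1)                    # (batch, seq_len, 1)
tweet_vectors = (embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(tweet_vectors.shape)  # e.g. torch.Size([2, 768]) for the BERT-base hidden size
```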
## Limitations and bias
TBD
## BibTeX entry and citation info
```bibtex
@inproceedings{delucia-etal-2022-bernice,
    title = "Bernice: A Multilingual Pre-trained Encoder for {T}witter",
    author = "DeLucia, Alexandra  and
      Wu, Shijie  and
      Mueller, Aaron  and
      Aguirre, Carlos  and
      Resnik, Philip  and
      Dredze, Mark",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.415",
    pages = "6191--6205",
    abstract = "The language of Twitter differs significantly from that of other domains commonly included in large language model training. While tweets are typically multilingual and contain informal language, including emoji and hashtags, most pre-trained language models for Twitter are either monolingual, adapted from other domains rather than trained exclusively on Twitter, or are trained on a limited amount of in-domain Twitter data.We introduce Bernice, the first multilingual RoBERTa language model trained from scratch on 2.5 billion tweets with a custom tweet-focused tokenizer. We evaluate on a variety of monolingual and multilingual Twitter benchmarks, finding that our model consistently exceeds or matches the performance of a variety of models adapted to social media data as well as strong multilingual baselines, despite being trained on less data overall.We posit that it is more efficient compute- and data-wise to train completely on in-domain data with a specialized domain-specific tokenizer.",
}
```