cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-dec2021 on the tweet_topic_multi dataset. It is fine-tuned on the train_all split and validated on the test_2021 split of tweet_topic.
The fine-tuning script can be found here. The model achieves the following results on the test_2021 set:
- F1 (micro): 0.7647668393782383
- F1 (macro): 0.6187022581213811
- Accuracy: 0.5485407980941036
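For context, the splits mentioned above can be inspected with the datasets library. This is a minimal sketch under the assumption that the dataset is published on the hub as cardiffnlp/tweet_topic_multi with the split names used in this card; the field names shown are assumptions, not stated in the card:

from datasets import load_dataset

# train_all: split used for fine-tuning; test_2021: held-out evaluation split
train = load_dataset("cardiffnlp/tweet_topic_multi", split="train_all")
test = load_dataset("cardiffnlp/tweet_topic_multi", split="test_2021")
print(len(train), len(test))
print(train[0])  # assumed fields: the tweet text plus a multi-hot label vector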
Usage
import math
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all")
model = AutoModelForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all", problem_type="multi_label_classification")
model.eval()
class_mapping = model.config.id2label

with torch.no_grad():
    text = "#NewVideo Cray Dollas- Water- Ft. Charlie Rose- (Official Music Video)- {{URL}} via {@YouTube@} #watchandlearn {{USERNAME}}"
    tokens = tokenizer(text, return_tensors='pt')
    output = model(**tokens)

# apply the sigmoid to each logit and keep every topic whose probability exceeds 0.5
flags = [sigmoid(s) > 0.5 for s in output[0][0].detach().tolist()]
topic = [class_mapping[n] for n, i in enumerate(flags) if i]
print(topic)
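The snippet above scores one tweet at a time. As a hedged sketch (not part of the original card), the reported test_2021 numbers could in principle be reproduced by batching the whole split and thresholding the sigmoid probabilities at 0.5. It reuses the tokenizer and model loaded above, assumes the datasets and scikit-learn packages and a multi-hot "label" field, and the official fine-tuning script may batch or preprocess differently:

import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score

test = load_dataset("cardiffnlp/tweet_topic_multi", split="test_2021")
preds, golds = [], []
with torch.no_grad():
    for i in range(0, len(test), 32):  # simple fixed-size batches
        batch = test[i:i + 32]  # slicing a Dataset yields a dict of columns
        tokens = tokenizer(batch["text"], return_tensors="pt", padding=True, truncation=True)
        probs = torch.sigmoid(model(**tokens).logits)
        preds.append((probs > 0.5).long().numpy())
        golds.append(np.array(batch["label"]))  # assumed multi-hot gold labels
preds, golds = np.concatenate(preds), np.concatenate(golds)
print("F1 (micro):", f1_score(golds, preds, average="micro"))
print("F1 (macro):", f1_score(golds, preds, average="macro"))
print("Accuracy:", accuracy_score(golds, preds))  # exact-match (subset) accuracy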
Reference
@inproceedings{dimosthenis-etal-2022-twitter,
    title = "{T}witter {T}opic {C}lassification",
    author = "Antypas, Dimosthenis and
      Ushio, Asahi and
      Camacho-Collados, Jose and
      Neves, Leonardo and
      Silva, Vitor and
      Barbieri, Francesco",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics"
}