# SocBERT model

Pretrained model on 20GB of English tweets and 72GB of Reddit comments using a masked language modeling (MLM) objective. The model was trained from scratch following the model architecture of RoBERTa-base. We benchmarked SocBERT on 40 text classification tasks with social media data.

The experiment results can be found in our paper:

```
@inproceedings{socbert:2023,
    title = {{SocBERT: A Pretrained Model for Social Media Text}},
    author = {Yuting Guo and Abeed Sarker},
    booktitle = {Proceedings of the Fourth Workshop on Insights from Negative Results in NLP},
    year = {2023}
}
```
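Since the model follows the RoBERTa-base architecture and was pretrained with an MLM objective, it can be queried directly with the `transformers` fill-mask pipeline. A minimal sketch, assuming the checkpoint is published on the Hugging Face Hub as `sarkerlab/SocBERT-base` (this repo id is an assumption, not stated above; adjust it to the actual checkpoint location):

```python
# Minimal fill-mask sketch for SocBERT.
# Assumption: the checkpoint is hosted as "sarkerlab/SocBERT-base" on the Hub.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sarkerlab/SocBERT-base")

# RoBERTa-style models use "<mask>" as the mask token.
for pred in fill_mask("COVID-19 vaccines are <mask>."):
    print(pred["token_str"], round(pred["score"], 3))
```

For downstream text classification (as in the 40 benchmark tasks), the same checkpoint can instead be loaded with `AutoModelForSequenceClassification` and fine-tuned.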