# NERToxicBERT
This model was trained for token classification of online comments, labeling each token as a vulgarity or not (swear words, insults, etc.).
This model is based on GBERT from deepset (https://huggingface.co/deepset/gbert-base), which was mainly trained on Wikipedia. To this model we added a freshly initialized token classification head, which had to be trained on our labeled data.
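As a minimal sketch of this setup (the label names are taken from the data preparation section below; the exact loading code is not part of this card), attaching a fresh token classification head to gbert-base with the `transformers` API looks like this:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Label scheme used for vulgarity tagging (see "Data preparation" below).
label2id = {"O": 0, "Vul": 1}
id2label = {v: k for k, v in label2id.items()}

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-base")

# num_labels=2 adds a randomly initialized classification head on top of the
# pretrained GBERT encoder; this head is what the labeled data trains.
model = AutoModelForTokenClassification.from_pretrained(
    "deepset/gbert-base",
    num_labels=2,
    id2label=id2label,
    label2id=label2id,
)
```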
## Training
For training, a dataset of 4500 German comments labeled for toxicity was used. This dataset is not publicly available, but can be requested from TU Wien (https://doi.org/10.5281/zenodo.10996203).
### Data preparation
The dataset contains the following additional tags:
- Target_Group
- Target_Individual
- Target_Other
- Vulgarity
We decided to use the Vulgarity tag to mark the words which are considered insults. 1306 comments contained a vulgarity, although 452 of these did not belong to a comment considered toxic. These comments were split into 1484 sentences containing vulgarities, yielding a sentence-by-sentence dataset in which each token is tagged with one of the labels ['O', 'Vul'] (1484 sentences). An 80/10/10 train/validation/test split was used.
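A minimal sketch of this preparation, assuming word-level 'O'/'Vul' annotations per sentence (the function and variable names are illustrative, not from the original pipeline): because GBERT's tokenizer splits words into subword pieces, the word-level labels must be aligned to tokens, with special tokens masked out via the label -100, which the loss ignores.

```python
def tokenize_and_align_labels(words, word_labels, tokenizer, label2id):
    # words: list of words in one sentence; word_labels: "O"/"Vul" per word.
    encoding = tokenizer(words, is_split_into_words=True, truncation=True)
    labels = []
    for word_id in encoding.word_ids():
        if word_id is None:
            labels.append(-100)  # [CLS]/[SEP] etc.: ignored by the loss
        else:
            # Every subword piece inherits its word's label (a common
            # simplification; labeling only the first piece also works).
            labels.append(label2id[word_labels[word_id]])
    encoding["labels"] = labels
    return encoding
```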
### Training Setup
Out of the 4500 comments, 1306 contained vulgarity tags. In order to identify an optimally performing model for classifying toxic speech, a set of models was trained and evaluated over the following hyperparameters (a training sketch follows this list):
- 2 and 6 frozen layers
- 5 and 10 epochs, with a batch size of 8
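The exact freezing scheme is not spelled out in this card; a plausible sketch, assuming the lowest encoder layers (plus embeddings) are frozen and reusing `model` and `tokenizer` from the sketch above together with hypothetical prepared `train_dataset`/`val_dataset`:

```python
from transformers import (
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

N_FROZEN = 2  # the grid above also tried 6

# Freeze the embeddings and the first N_FROZEN encoder layers of GBERT;
# only the upper layers and the fresh classification head are updated.
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:N_FROZEN]:
    for param in layer.parameters():
        param.requires_grad = False

args = TrainingArguments(
    output_dir="./saved_model",
    num_train_epochs=5,  # the grid above also tried 10
    per_device_train_batch_size=8,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```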
## Model Evaluation
The best model used 2 frozen layers and was evaluated on the training set with the following metrics:
| accuracy | f1 | precision | recall |
|---|---|---|---|
| 0.922 | 0.761 | 0.815 | 0.764 |
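The card does not specify how these metrics were aggregated; one common recipe (illustrative only, using scikit-learn on the flattened token labels with `Vul` as the positive class, and compatible with the `Trainer`'s `compute_metrics` hook) looks like this:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Flatten and drop the -100 positions (special tokens / padding).
    mask = labels != -100
    y_true = labels[mask]
    y_pred = predictions[mask]
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", pos_label=1
    )
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```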
## Usage
Here is how to use this model in PyTorch to tag vulgarities in a given text:
```python
from transformers import pipeline

# Replace this with your own checkpoint
model_checkpoint = "./saved_model"

# aggregation_strategy="simple" groups subword tokens back into word spans
token_classifier = pipeline(
    "token-classification", model=model_checkpoint, aggregation_strategy="simple"
)

print(token_classifier("Die Fpö hat also auch ein Bescheuert-Gen in ihrer politischen DNA."))
```

Expected output:

```python
[{'entity_group': 'Vul', 'score': 0.9548946, 'word': 'Bescheuert - Gen', 'start': 26, 'end': 40}]
```
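With `aggregation_strategy="simple"`, consecutive tokens that share a predicted label are merged into one entity group, which is why the subword pieces of "Bescheuert-Gen" come back as a single `Vul` span with character offsets into the input.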