---
license: cc-by-4.0
---

# Hatemoji Model

## Model description

This model is a fine-tuned version of the cased DeBERTa base model. It was trained over iterative rounds of adversarial data generation with a human-and-model-in-the-loop set-up. Each round of data contains emoji-containing statements labelled as either non-hateful (LABEL 0.0) or hateful (LABEL 1.0).

## Intended uses & limitations

The intended use of the model is to classify English-language, emoji-containing, short-form text documents as a binary task: non-hateful vs. hateful. The model has demonstrated strengths compared to commercial and academic models in classifying emoji-based hate, and it is also a strong classifier of text-only hate. Because the model was trained on synthetic, adversarially generated data, it may have some weaknesses when classifying emoji-based hate encountered 'in the wild'.

## How to use

The model can be loaded with the 🤗 Transformers library as a standard text-classification model.
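
A minimal usage sketch with the `pipeline` API is shown below. The repository ID `HannahRoseKirk/Hatemoji` is an assumption based on this repo; adjust it to the actual model path if it differs.

```python
# Minimal usage sketch. "HannahRoseKirk/Hatemoji" is an assumed repo ID;
# replace it with the actual model path if different.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="HannahRoseKirk/Hatemoji",
)

# Binary task: 0 = non-hateful, 1 = hateful. Depending on the model config,
# the returned label may surface as e.g. "LABEL_0" / "LABEL_1".
print(classifier("I 🧡 everyone on this platform"))
```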

## Training data

The model was trained on:

- The three rounds of emoji-containing, adversarially generated texts from HatemojiBuild
- The four rounds of text-only, adversarially generated texts from Vidgen et al. (2021), "Learning from the worst: Dynamically generated datasets to improve online hate detection", available on GitHub and explained in their paper
- A collection of widely available and publicly accessible datasets from https://hatespeechdata.com/

## Training procedure

The model was trained using Hugging Face's `run_glue.py` script with the following parameters:

```bash
python3 transformers/examples/pytorch/text-classification/run_glue.py \
  --model_name_or_path microsoft/deberta-base \
  --validation_file path_to_data/dev.csv \
  --train_file path_to_data/train.csv \
  --do_train --do_eval --max_seq_length 512 --learning_rate 2e-5 \
  --num_train_epochs 3 --evaluation_strategy epoch \
  --load_best_model_at_end --output_dir path_to_outdir/deberta123/ \
  --seed 123 \
  --cache_dir /.cache/huggingface/transformers/ \
  --overwrite_output_dir > ./log_deb 2> ./err_deb
```

We experimented with upsampling the train split of each round to improve performance, testing upsampling factors of 1, 5, 10 and 100, and carried the optimal factor forward to all subsequent rounds. The optimal upsampling ratios for R1-R4 (the text-only rounds from Vidgen et al.) are carried forward. This model was trained with upsampling ratios of `{'R0': 1, 'R1': , 'R2': , 'R3': , 'R4': , 'R5': , 'R6': , 'R7': }`.
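As a concrete illustration of this upsampling scheme, the sketch below repeats each round's training examples by a chosen factor before shuffling. It assumes the combined training data is a CSV with a hypothetical `round` column marking which round (R0-R7) each example came from; the file paths and factors shown are placeholders, not the tuned values.

```python
# Sketch of per-round upsampling, assuming a (hypothetical) 'round' column.
import pandas as pd

def upsample_rounds(train_df: pd.DataFrame, ratios: dict) -> pd.DataFrame:
    """Repeat each round's examples by its upsampling factor, then shuffle."""
    parts = []
    for round_name, factor in ratios.items():
        round_rows = train_df[train_df["round"] == round_name]
        parts.append(pd.concat([round_rows] * factor, ignore_index=True))
    return pd.concat(parts, ignore_index=True).sample(frac=1, random_state=123)

# Illustrative factors only -- substitute the tuned ratios reported above.
train_df = pd.read_csv("path_to_data/train.csv")
upsampled = upsample_rounds(train_df, {"R0": 1, "R5": 5, "R6": 5, "R7": 5})
upsampled.to_csv("path_to_data/train_upsampled.csv", index=False)
```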

## Variables and metrics

## Evaluation results