---
license: cc-by-4.0
---

# Hatemoji Model

## Model description

This model is a fine-tuned version of the [DeBERTa base model](https://huggingface.co/microsoft/deberta-base). The model is cased. It was trained on iterative rounds of adversarial data generation with humans and models in the loop. Each round of data contains emoji-containing statements labelled as either non-hateful (LABEL 0.0) or hateful (LABEL 1.0).

- **Data Repository:** https://github.com/HannahKirk/Hatemoji
- **Paper:** https://arxiv.org/abs/2108.05921
- **Point of Contact:** [email protected]

|
## Intended uses & limitations

The intended use of the model is to classify English-language, emoji-containing, short-form text documents as a binary task: non-hateful vs. hateful. The model has demonstrated strengths over commercial and academic models at classifying emoji-based hate, and is also a strong classifier of text-only hate. Because the model was trained on synthetic, adversarially-generated data, it may have some weaknesses on empirical emoji-based hate encountered 'in the wild'.

|
## How to use

The model can be used with a pipeline:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="HannahRoseKirk/Hatemoji", return_all_scores=True)

prediction = classifier("I 💜💜💜 emoji 💜")
print(prediction)

"""
Output
[[{'label': 'LABEL_0', 'score': 0.9999157190322876}, {'label': 'LABEL_1', 'score': 8.425049600191414e-05}]]
"""
```
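
With `return_all_scores=True`, the pipeline returns a score for every label. A small helper (hypothetical, not part of this model card) can map that output to a human-readable verdict:

```python
# Hypothetical helper: convert the pipeline's per-label scores into a
# (verdict, confidence) pair. LABEL_0 = non-hateful, LABEL_1 = hateful.
LABEL_NAMES = {"LABEL_0": "non-hateful", "LABEL_1": "hateful"}

def to_verdict(all_scores):
    # all_scores is one item of the pipeline output, e.g.
    # [{'label': 'LABEL_0', 'score': 0.9999...}, {'label': 'LABEL_1', 'score': 8.4e-05}]
    best = max(all_scores, key=lambda d: d["score"])
    return LABEL_NAMES[best["label"]], best["score"]

example = [
    {"label": "LABEL_0", "score": 0.9999157190322876},
    {"label": "LABEL_1", "score": 8.425049600191414e-05},
]
verdict, score = to_verdict(example)
print(verdict)  # non-hateful
```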
|

### Training data

The model was trained on:

* The three rounds of emoji-containing, adversarially-generated texts from [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild)
* The four rounds of text-only, adversarially-generated texts from Vidgen et al. (2021), _Learning from the worst: Dynamically generated datasets to improve online hate detection_. Available on [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset) and explained in their [paper](https://arxiv.org/abs/2012.15761).
* A collection of widely available and publicly accessible datasets from [hatespeechdata.com](https://hatespeechdata.com/)
|

## Training procedure

The model was trained using Hugging Face's [`run_glue.py` script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py), with the following parameters:

```bash
python3 transformers/examples/pytorch/text-classification/run_glue.py \
  --model_name_or_path microsoft/deberta-base \
  --validation_file path_to_data/dev.csv \
  --train_file path_to_data/train.csv \
  --do_train --do_eval --max_seq_length 512 --learning_rate 2e-5 \
  --num_train_epochs 3 --evaluation_strategy epoch \
  --load_best_model_at_end --output_dir path_to_outdir/deberta123/ \
  --seed 123 \
  --cache_dir /.cache/huggingface/transformers/ \
  --overwrite_output_dir > ./log_deb 2> ./err_deb
```
|

We experimented with upsampling the train split of each round to improve performance, using upsampling factors of [1, 5, 10, 100], with the optimum upsampling taken forward to all subsequent rounds. The optimal upsampling ratios for R1-R4 (the text-only rounds from Vidgen et al.) are carried forward. This model is trained with upsampling ratios of `{'R0': 1, 'R1': , 'R2': , 'R3': , 'R4': , 'R5': , 'R6': , 'R7': }`.
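
The upsampling scheme described above can be sketched as simply repeating each round's training examples by that round's ratio before concatenating the rounds (the ratios below are illustrative placeholders, not the values used for the released model):

```python
# Sketch of round-wise upsampling: repeat each round's training examples
# by its ratio, then concatenate. Ratios here are illustrative placeholders.
def upsample(rounds, ratios):
    # rounds: {round_name: [examples]}, ratios: {round_name: int}
    combined = []
    for name, examples in rounds.items():
        combined.extend(examples * ratios.get(name, 1))
    return combined

rounds = {"R0": ["ex_a", "ex_b"], "R1": ["ex_c"]}
ratios = {"R0": 1, "R1": 5}  # placeholder ratios
train = upsample(rounds, ratios)
print(len(train))  # 2*1 + 1*5 = 7
```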
|

## Variables and metrics

## Evaluation results