Text Classification
Transformers
PyTorch
English
deberta
hate-speech-detection
Inference Endpoints
HannahRoseKirk committed on
Commit
affeb8a
1 Parent(s): 3c511aa

Update README.md

Files changed (1)
  1. README.md +30 -2
README.md CHANGED
@@ -15,6 +15,34 @@ This model is a fine-tuned version of the [DeBERTa base model](https://huggingfa
  The intended use of the model is to classify English-language, emoji-containing, short-form text documents as a binary task: non-hateful vs hateful. The model has demonstrated strengths compared to commercial and academic models on classifying emoji-based hate, but is also a strong classifier of text-only hate. Because the model was trained on synthetic, adversarially-generated data, it may have some weaknesses when it comes to empirical emoji-based hate 'in-the-wild'.
 
  ## How to use
 
- ## Training data
- The model was trained on [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild), alongside the four rounds of text-only adversarial data from Vidgen, B., Thrush, T., Waseem, Z., & Kiela, D. (2020). Learning from the worst: Dynamically generated datasets to improve online hate detection. arXiv
 
+ Add
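+ 
+ A minimal inference sketch, assuming the fine-tuned checkpoint is hosted on the Hub under this model's repository id (the id used below is a placeholder) and that the classifier keeps the default `LABEL_0`/`LABEL_1` names for the non-hateful/hateful classes:
+ 
+ ```python
+ from transformers import pipeline
+ 
+ # Placeholder repository id -- substitute the actual id of this model card.
+ MODEL_ID = "HannahRoseKirk/Hatemoji"
+ 
+ # Binary classification of short-form, emoji-containing English text.
+ classifier = pipeline("text-classification", model=MODEL_ID, tokenizer=MODEL_ID)
+ 
+ print(classifier("I 💜 everyone in this community"))
+ # -> [{'label': ..., 'score': ...}]; check the config's id2label mapping to
+ #    see which label corresponds to hateful vs non-hateful.
+ ```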
+ 
+ ### Training data
+ The model was trained on:
+ * The three rounds of emoji-containing, adversarially-generated texts from [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild) (a loading sketch follows this list)
+ * The four rounds of text-only, adversarially-generated texts from Vidgen et al. (2021), _Learning from the worst: Dynamically generated datasets to improve online hate detection_. Available on [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset) and explained in their [paper](https://arxiv.org/abs/2012.15761).
+ * A collection of widely available and publicly accessible hate speech datasets from [hatespeechdata.com](https://hatespeechdata.com/)
+ 
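+ As a quick way to inspect the first source, a loading sketch with the `datasets` library, assuming [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild) loads with its default configuration:
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ # HatemojiBuild: the emoji-containing, adversarially-generated rounds.
+ hatemoji = load_dataset("HannahRoseKirk/HatemojiBuild")
+ 
+ print(hatemoji)                      # available splits and column names
+ first_split = next(iter(hatemoji))   # e.g. 'train', depending on the dataset card
+ print(hatemoji[first_split][0])      # one adversarially-generated example
+ ```
+ 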
+ ## Training procedure
+ The model was trained using Hugging Face's [run_glue script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py) with the following parameters:
+ ```
+ python3 transformers/examples/pytorch/text-classification/run_glue.py \
+ --model_name_or_path microsoft/deberta-base \
+ --validation_file path_to_data/dev.csv \
+ --train_file path_to_data/train.csv \
+ --do_train --do_eval --max_seq_length 512 --learning_rate 2e-5 \
+ --num_train_epochs 3 --evaluation_strategy epoch \
+ --load_best_model_at_end --output_dir path_to_outdir/deberta123/ \
+ --seed 123 \
+ --cache_dir /.cache/huggingface/transformers/ \
+ --overwrite_output_dir > ./log_deb 2> ./err_deb
+ ```
+ 
+ We experimented with upsampling the train split of each round to improve performance, using increments of [1, 5, 10, 100], with the optimal upsampling ratio carried forward to all subsequent rounds. The optimal upsampling ratios for R1-R4 (the text-only rounds from Vidgen et al.) are carried forward. This model was trained with upsampling ratios of `{'R0': 1, 'R1':, 'R2':, 'R3':, 'R4':, 'R5':, 'R6':, 'R7':}`.
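+ 
+ A sketch of the round-wise upsampling described above; the per-round file names, CSV layout, and the ratio values shown are illustrative assumptions rather than the tuned settings:
+ 
+ ```python
+ import pandas as pd
+ 
+ # Placeholder ratios -- the tuned per-round values are not reproduced here.
+ ratios = {"R0": 1, "R1": 1, "R2": 1, "R3": 1,
+           "R4": 1, "R5": 1, "R6": 1, "R7": 1}
+ 
+ frames = []
+ for round_id, ratio in ratios.items():
+     # Assumed layout: one run_glue-style CSV of training examples per round.
+     df = pd.read_csv(f"path_to_data/{round_id}_train.csv")
+     # Upsample by repeating the round's examples `ratio` times.
+     frames.append(pd.concat([df] * ratio, ignore_index=True))
+ 
+ # Single upsampled train file passed to run_glue.py via --train_file.
+ pd.concat(frames, ignore_index=True).to_csv("path_to_data/train.csv", index=False)
+ ```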
+ 
+ ## Variables and metrics
+ 
+ ## Evaluation results
+ 