---
task_categories:
- text-classification
- text-generation
language:
- en
tags:
- not-for-all-audiences
- legal
pretty_name: Dynamically Generated Hate Speech Dataset
---
# Dataset Card for the Dynamically Generated Hate Speech Dataset

## Dataset Description

- **Homepage:** [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset)
- **Point of Contact:** [[email protected]](mailto:[email protected])

### Dataset Summary

This is a copy of the Dynamically-Generated-Hate-Speech-Dataset, presented in [this paper](https://arxiv.org/abs/2012.15761) by **Bertie Vidgen**, **Tristan Thrush**, **Zeerak Waseem**, and **Douwe Kiela**.

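For quick experimentation, the table can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming this copy is hosted on the Hub; the repository id below is a placeholder, not the real id:

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual id of this Hub copy.
ds = load_dataset("some-user/dynamically-generated-hate-speech-dataset")
print(ds)
```
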
## Original README from [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset/blob/main/README.md)

  ## Dynamically-Generated-Hate-Speech-Dataset
  README for v0.2 of the Dynamically Generated Hate Speech Dataset from Vidgen et al. (2021). If you use the dataset, please cite our paper in the Proceedings of ACL 2021, also available on [arXiv](https://arxiv.org/abs/2012.15761).
  Contact Dr. Bertie Vidgen if you have feedback or queries: [email protected].
  
  The full author list is: Bertie Vidgen (The Alan Turing Institute), Tristan Thrush (Facebook AI Research), Zeerak Waseem (University of Sheffield) and Douwe Kiela (Facebook AI Research). This paper is an output of the Dynabench project: https://dynabench.org/tasks/5#overall
  
  ### Dataset descriptions
  v0.2.2.csv is the full dataset used in our ACL paper. 
  
  v0.2.3.csv removes duplicate entries, all of which occurred in round 1. Duplicates come from two sources: (1) annotators entering the same content multiple times and (2) different annotators entering the same content. The duplicates are interesting for understanding the annotation process, and the challenges of dynamically generating datasets. However, they are likely to be less useful for training classifiers and so are removed in v0.2.3. We did not lower case the text before removing duplicates as capitalisations contain potentially useful signals.
  
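  The deduplication can be reproduced with a case-sensitive drop. A minimal sketch, assuming the file names above and a `text` column (adjust names to the actual CSV header):

  ```python
  import pandas as pd

  # Case-sensitive duplicate removal, mirroring the v0.2.2 -> v0.2.3 step:
  # the text is NOT lower-cased first, so entries that differ only in
  # capitalisation are kept as distinct rows.
  v022 = pd.read_csv("v0.2.2.csv")
  v023 = v022.drop_duplicates(subset="text", keep="first")
  v023.to_csv("v0.2.3.csv", index=False)
  ```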
  
  ### Overview
  The Dynamically Generated Hate Speech Dataset is provided in one table.
  
  'acl.id' is the unique ID of the entry.
  
  'Text' is the content which has been entered. All content is synthetic.
  
  'Label' is a binary variable, indicating whether or not the content has been identified as hateful. It takes two values: hate, nothate.
  
  'Type' is a categorical variable, providing a secondary label for hateful content. For hate it can take five values: Animosity, Derogation, Dehumanization, Threatening and Support for Hateful Entities. Please see the paper for more detail. For nothate the 'type' is 'none'. In round 1 the 'type' was not given and is marked as 'notgiven'.
  
  'Target' is a categorical variable, providing the group that is attacked by the hate. It can include intersectional characteristics and multiple groups can be identified. For nothate the type is 'none'. Note that in round 1 the 'target' was not given and is marked as 'notgiven'.
  
  'Level' reports whether the entry is original content or a perturbation.
  
  'Round' is a categorical variable. It gives the round of data entry (1, 2, 3 or 4) with a letter for whether the entry is original content ('a') or a perturbation ('b'). Perturbations were not made for round 1.
  
  'Round.base' is a categorical variable. It gives the round of data entry, indicated with just a number (1, 2, 3 or 4).
  
  'Split' is a categorical variable. It gives the data split that the entry has been assigned to. This can take the values 'train', 'dev' and 'test'. The choice of splits is explained in the paper.
  
  'Annotator' is a categorical variable. It gives the annotator who entered the content. Annotator IDs are random alphanumeric strings. There are 20 annotators in the dataset.
  
  'acl.id.matched' is the ID of the matched entry, connecting the original (given in 'acl.id') and the perturbed version.
  
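  Put together, these columns support straightforward slicing. A minimal pandas sketch, assuming the CSV header matches the lower-cased column names used here and that 'Level' takes the values 'original' and 'perturbation':

  ```python
  import pandas as pd

  df = pd.read_csv("v0.2.3.csv")

  # Original (unperturbed) hateful entries from rounds 2-4; round 1 is
  # excluded because its 'type' and 'target' are marked "notgiven".
  hate_orig = df[
      (df["label"] == "hate")
      & (df["level"] == "original")
      & (df["round.base"] > 1)
  ]
  print(hate_orig["type"].value_counts())
  ```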
  
  For identities (recorded under 'Target') we use shorthand labels to construct the dataset. These can be converted (and grouped) as follows; a partial mapping is also sketched in code after the list:
  
	none -> for non-hateful entries
  	NoTargetRecorded -> for hateful entries with no target recorded
  	
  	mixed -> Mixed race background
  	ethnic minority -> Ethnic Minorities
  	indig -> Indigenous people
  	indigwom -> Indigenous Women
  	non-white -> Non-whites (attacked as 'non-whites', rather than specific non-white groups which are generally addressed separately)
  	trav -> Travellers (including Roma, gypsies)
  
  	bla -> Black people
  	blawom -> Black women
  	blaman -> Black men
  	african -> African (all 'African' attacks will also be an attack against Black people)
  	
  	jew -> Jewish people
  	mus -> Muslims
  	muswom -> Muslim women
  
  	wom -> Women	
  	trans -> Trans people
	gendermin -> Gender minorities
  	bis -> Bisexual
  	gay -> Gay people (both men and women)
  	gayman -> Gay men
  	gaywom -> Lesbians	
  	
  	dis -> People with disabilities
  	working -> Working class people
  	old -> Elderly people
  
  	asi -> Asians
  	asiwom -> Asian women
  	east -> East Asians
  	south -> South Asians (e.g. Indians)
  	chinese -> Chinese people
  	pak -> Pakistanis
  	arab -> Arabs, including people from the Middle East
  
  	immig -> Immigrants
  	asylum -> Asylum seekers
	ref -> Refugees
  	for -> Foreigners
  	
  	eastern european -> Eastern Europeans
  	russian -> Russian people
  	pol -> Polish people
	hispanic -> Hispanic people, including Latinx people and Mexicans
  
  	nazi -> Nazis ('Support' type of hate)
  	hitler -> Hitler ('Support' type of hate)
  	
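  The mapping translates directly into a lookup table. A partial sketch (illustrative, not the complete list; extend it with the remaining labels):

  ```python
  # Partial shorthand -> readable-group mapping, taken from the list above.
  TARGET_LABELS = {
      "none": "non-hateful entry",
      "NoTargetRecorded": "hateful entry, no target recorded",
      "bla": "Black people",
      "jew": "Jewish people",
      "mus": "Muslims",
      "wom": "Women",
      "immig": "Immigrants",
      "ref": "Refugees",
  }

  def expand_target(shorthand: str) -> str:
      """Return the readable group name, falling back to the raw label."""
      return TARGET_LABELS.get(shorthand, shorthand)
  ```
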
  ### Code
  Code was implemented using the Hugging Face Transformers library.
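
  As an illustration only (the paper's exact training setup is not reproduced here), a minimal Transformers skeleton for a binary hate/nothate classifier; `roberta-base` and the label ordering are assumptions:

  ```python
  import torch
  from transformers import AutoModelForSequenceClassification, AutoTokenizer

  # Minimal binary-classifier skeleton; fine-tuning on the dataset's
  # 'text'/'label' columns is omitted for brevity.
  tokenizer = AutoTokenizer.from_pretrained("roberta-base")
  model = AutoModelForSequenceClassification.from_pretrained(
      "roberta-base", num_labels=2  # assumed ordering: 0 = nothate, 1 = hate
  )

  inputs = tokenizer("example text", return_tensors="pt", truncation=True)
  with torch.no_grad():
      probs = model(**inputs).logits.softmax(dim=-1)
  print(probs)
  ```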

## Additional Information

### Licensing Information

The original repository does not provide a license, but the dataset is free to use with proper citation of the original paper in the Proceedings of ACL 2021, available on [arXiv](https://arxiv.org/abs/2012.15761).

### Citation Information

Cite as [arXiv:2012.15761](https://arxiv.org/abs/2012.15761)
or via its DOI: [https://doi.org/10.48550/arXiv.2012.15761](https://doi.org/10.48550/arXiv.2012.15761).