LennardZuendorf committed
Commit
6becf29
1 Parent(s): 34b4aee

Update README.md

Files changed (1)
  1. README.md +97 -103
README.md CHANGED
@@ -14,118 +14,112 @@ pretty_name: dynamically generated hate speech dataset
## Dataset Description

- **Homepage:** [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset)
- **Point of Contact:** [[email protected]](mailto:[email protected])

### Dataset Summary

This is a copy of the Dynamically-Generated-Hate-Speech-Dataset, presented in [this paper](https://arxiv.org/abs/2012.15761) by **Bertie Vidgen**, **Tristan Thrush**, **Zeerak Waseem** and **Douwe Kiela**.

This repository only contains v0.2.3; I have not made any additional changes to the data.

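Since this copy is hosted as a Hugging Face dataset repository, the table can be read either through the `datasets` library or directly from the CSV. The repository ID and file name in this sketch are assumptions inferred from this page and from the original README below, so adjust them as needed:

```python
from datasets import load_dataset
import pandas as pd

# Option 1: load from the Hub (repository ID assumed from this page's owner and dataset name).
ds = load_dataset("LennardZuendorf/Dynamically-Generated-Hate-Speech-Dataset", split="train")
print(ds)

# Option 2: read the v0.2.3 CSV directly (file name assumed from the original README below).
df = pd.read_csv("v0.2.3.csv")
print(df.columns.tolist())
print(df.head())
```
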
## Original README from [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset/blob/main/README.md)

# Dynamically-Generated-Hate-Speech-Dataset
ReadMe for v0.2 of the Dynamically Generated Hate Speech Dataset from Vidgen et al. (2021). If you use the dataset, please cite our paper in the Proceedings of ACL 2021, available on [arXiv](https://arxiv.org/abs/2012.15761).
Contact Dr. Bertie Vidgen if you have feedback or queries: [email protected].

The full author list is: Bertie Vidgen (The Alan Turing Institute), Tristan Thrush (Facebook AI Research), Zeerak Waseem (University of Sheffield) and Douwe Kiela (Facebook AI Research). This paper is an output of the Dynabench project: https://dynabench.org/tasks/5#overall

## Dataset descriptions
v0.2.2.csv is the full dataset used in our ACL paper.

v0.2.3.csv removes duplicate entries, all of which occurred in round 1. Duplicates come from two sources: (1) annotators entering the same content multiple times and (2) different annotators entering the same content. The duplicates are interesting for understanding the annotation process and the challenges of dynamically generating datasets; however, they are likely to be less useful for training classifiers and so are removed in v0.2.3. We did not lowercase the text before removing duplicates, as capitalisation contains potentially useful signals.

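The deduplication described above can be reproduced approximately with pandas; a minimal sketch, assuming the v0.2.2 CSV is available locally under the name used above and that the text column is named 'text' (check the actual column names before running):

```python
import pandas as pd

# Load the full v0.2.2 release (file name assumed from the description above).
df = pd.read_csv("v0.2.2.csv")

# Drop exact duplicates of the entered text WITHOUT lowercasing first,
# so that capitalisation differences remain distinct entries.
deduped = df.drop_duplicates(subset="text", keep="first")  # column name assumed
print(len(df), "->", len(deduped), "rows after removing duplicate text")
```

Note that this is only an approximation: the released v0.2.3 file removes duplicates from round 1 specifically, as described above.
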
## Overview
The Dynamically Generated Hate Speech Dataset is provided in one table.

'acl.id' is the unique ID of the entry.

'Text' is the content which has been entered. All content is synthetic.

'Label' is a binary variable, indicating whether or not the content has been identified as hateful. It takes two values: hate, nothate.

'Type' is a categorical variable, providing a secondary label for hateful content. For hate it can take five values: Animosity, Derogation, Dehumanization, Threatening and Support for Hateful Entities. Please see the paper for more detail. For nothate the 'type' is 'none'. In round 1 the 'type' was not given and is marked as 'notgiven'.

'Target' is a categorical variable, providing the group that is attacked by the hate. It can include intersectional characteristics and multiple groups can be identified. For nothate the 'target' is 'none'. Note that in round 1 the 'target' was not given and is marked as 'notgiven'.

'Level' reports whether the entry is original content or a perturbation.

'Round' is a categorical variable. It gives the round of data entry (1, 2, 3 or 4) with a letter for whether the entry is original content ('a') or a perturbation ('b'). Perturbations were not made for round 1.

'Round.base' is a categorical variable. It gives the round of data entry, indicated with just a number (1, 2, 3 or 4).

'Split' is a categorical variable. It gives the data split that the entry has been assigned to. This can take the values 'train', 'dev' and 'test'. The choice of splits is explained in the paper.

'Annotator' is a categorical variable. It gives the annotator who entered the content. Annotator IDs are random alphanumeric strings. There are 20 annotators in the dataset.

'acl.id.matched' is the ID of the matched entry, connecting the original (given in 'acl.id') and the perturbed version.

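These fields can be inspected directly once the table is loaded; a minimal sketch, assuming the pandas DataFrame `df` from the loading example above and lowercase column names ('label', 'split', 'level', 'text', 'acl.id', 'acl.id.matched') — verify the exact spelling with `df.columns`:

```python
# Label distribution and split sizes (column names assumed; verify with df.columns).
print(df["label"].value_counts())   # hate vs. nothate
print(df["split"].value_counts())   # train / dev / test sizes

# Recover the official splits.
train = df[df["split"] == "train"]
dev = df[df["split"] == "dev"]
test = df[df["split"] == "test"]

# Pair perturbations with their originals via the matched ID
# ('original' as the level value is an assumption; check df["level"].unique()).
perturbed = df[df["level"] != "original"]
pairs = perturbed.merge(
    df[["acl.id", "text"]].rename(columns={"text": "original_text"}),
    left_on="acl.id.matched",
    right_on="acl.id",
    how="left",
    suffixes=("", "_orig"),
)
```
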
For identities (recorded under 'Target') we used shorthand labels to construct the dataset, which can be converted (and grouped) as follows:

none -> for non-hateful entries
NoTargetRecorded -> for hateful entries with no target recorded

mixed -> Mixed race background
ethnic minority -> Ethnic Minorities
indig -> Indigenous people
indigwom -> Indigenous Women
non-white -> Non-whites (attacked as 'non-whites', rather than specific non-white groups which are generally addressed separately)
trav -> Travellers (including Roma, gypsies)

bla -> Black people
blawom -> Black women
blaman -> Black men
african -> African (all 'African' attacks will also be an attack against Black people)

jew -> Jewish people
mus -> Muslims
muswom -> Muslim women

wom -> Women
trans -> Trans people
gendermin -> Gender minorities
bis -> Bisexual
gay -> Gay people (both men and women)
gayman -> Gay men
gaywom -> Lesbians

dis -> People with disabilities
working -> Working class people
old -> Elderly people

asi -> Asians
asiwom -> Asian women
east -> East Asians
south -> South Asians (e.g. Indians)
chinese -> Chinese people
pak -> Pakistanis
arab -> Arabs, including people from the Middle East

immig -> Immigrants
asylum -> Asylum seekers
ref -> Refugees
for -> Foreigners

eastern european -> Eastern Europeans
russian -> Russian people
pol -> Polish people
hispanic -> Hispanic people, including Latinx and Mexicans

nazi -> Nazis ('Support' type of hate)
hitler -> Hitler ('Support' type of hate)

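If the full names are preferred over the shorthand codes, the conversion above can be applied as a simple lookup; a minimal sketch covering only a subset of the codes (extend the dictionary with the rest of the list as needed; the separator for multiple targets is an assumption, so inspect the raw values first):

```python
# Partial shorthand-to-full-name mapping taken from the list above (illustrative subset).
TARGET_NAMES = {
    "none": "No target (non-hateful entry)",
    "NoTargetRecorded": "No target recorded",
    "bla": "Black people",
    "jew": "Jewish people",
    "mus": "Muslims",
    "wom": "Women",
    "trans": "Trans people",
    "immig": "Immigrants",
    "ref": "Refugees",
}

def expand_targets(raw: str) -> list[str]:
    """Expand a target string into readable names, keeping unknown codes as-is.

    Assumes multiple targets are comma-separated; check the data for the real separator.
    """
    return [TARGET_NAMES.get(code.strip(), code.strip()) for code in raw.split(",")]

# Example: an entry recorded as targeting both Black people and women.
print(expand_targets("bla, wom"))
```
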
## Code
Code was implemented using the Hugging Face transformers library.
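The original training code is not reproduced here; the following is only a minimal sketch of how a binary hate/nothate classifier could be fine-tuned on this data with the transformers Trainer API. The base model, column names, and hyperparameters are illustrative assumptions, not the authors' setup:

```python
# Minimal fine-tuning sketch (assumed column names and hyperparameters, not the original setup).
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

raw = load_dataset("csv", data_files={"data": "v0.2.3.csv"})["data"]
raw = raw.map(lambda ex: {"labels": 1 if ex["label"] == "hate" else 0})

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize(ex):
    return tokenizer(ex["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize)

# Use the official splits from the 'split' column (column name assumed).
train_ds = tokenized.filter(lambda ex: ex["split"] == "train")
dev_ds = tokenized.filter(lambda ex: ex["split"] == "dev")

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
args = TrainingArguments(
    output_dir="dghs-classifier",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=dev_ds,
    tokenizer=tokenizer,  # default collator will pad batches dynamically
)
trainer.train()
```
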
 
## Additional Information