Djacon committed f921912 (parent: ec47d96): Update README.md
---
language:
- ru
license:
- mit
multilinguality:
- monolingual
task_categories:
- text-classification
task_ids:
- multi-class-classification
- multi-label-classification
pretty_name: RuGoEmotions
tags:
- emotion
---

# Dataset Card for RuGoEmotions

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

### Dataset Summary

The RuGoEmotions dataset contains 34k Reddit comments labeled for 9 emotion categories (joy, interest, surprise, sadness, anger, disgust, fear, guilt, and neutral).
The dataset ships with predefined train/validation/test splits.

### Supported Tasks and Leaderboards

This dataset is intended for multi-class, multi-label emotion classification.

### Languages

The data is in Russian.

## Dataset Structure

### Data Instances

Each instance is a Reddit comment with one or more emotion annotations (or neutral).

### Data Fields

The configuration includes:
- `text`: the Reddit comment
- `labels`: the emotion annotations

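For multi-label training, the `labels` field is typically converted to a fixed-length multi-hot vector. A minimal sketch, assuming `labels` holds integer label ids and that the ids follow the 9-category order listed in the summary (the exact id-to-emotion mapping is an assumption; check the dataset files for the real one):

```python
# Multi-hot encode the `labels` field for multi-label classification.
# The id order below is assumed from the category list in the summary.
EMOTIONS = ["joy", "interest", "surprise", "sadness",
            "anger", "disgust", "fear", "guilt", "neutral"]

def multi_hot(label_ids, num_classes=len(EMOTIONS)):
    """Turn a list of label ids into a 0/1 vector of length num_classes."""
    vec = [0] * num_classes
    for i in label_ids:
        vec[i] = 1
    return vec

# A hypothetical instance with two emotion annotations (joy + surprise):
example = {"text": "Какой приятный сюрприз!", "labels": [0, 2]}
print(multi_hot(example["labels"]))  # → [1, 0, 1, 0, 0, 0, 0, 0, 0]
```

A vector like this can be fed directly to a sigmoid output layer with a binary cross-entropy loss, the usual setup for multi-label classification.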
### Data Splits

The dataset provides train/validation/test splits with 26.9k, 3.29k, and 3.37k examples respectively.

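The quoted sizes work out to roughly an 80/10/10 split; a quick sanity check using the approximate figures above:

```python
# Approximate split sizes quoted in this card.
splits = {"train": 26_900, "val": 3_290, "test": 3_370}
total = sum(splits.values())  # ≈ 33.6k, i.e. the "34k" from the summary

# Fraction of the data in each split: roughly 0.80 / 0.10 / 0.10.
fractions = {name: n / total for name, n in splits.items()}
print(fractions)
```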
## Dataset Creation

### Curation Rationale

From the paper abstract:

> Understanding emotion expressed in language has a wide range of applications, from building empathetic chatbots to
> detecting harmful online behavior. Advancement in this area can be improved using large-scale datasets with a
> fine-grained typology, adaptable to multiple downstream tasks.

### Source Data

#### Initial Data Collection and Normalization

Data was collected from Reddit comments via a variety of automated methods discussed in Section 3.1 of the paper.

#### Who are the source language producers?

English-speaking Reddit users.

### Annotations

#### Who are the annotators?

Annotations were produced by 3 English-speaking crowdworkers in India.

### Personal and Sensitive Information

This dataset includes the original usernames of the Reddit users who posted each comment. Although Reddit usernames
are typically disassociated from personal real-world identities, this is not always the case. It may therefore be
possible to discover the identities of the individuals who created this content in some cases.

## Considerations for Using the Data

### Social Impact of Dataset

Emotion detection is a worthwhile problem which can potentially lead to improvements such as better human/computer
interaction. However, emotion detection algorithms (particularly in computer vision) have been abused in some cases
to make erroneous inferences in human monitoring and assessment applications such as hiring decisions, insurance
pricing, and student attentiveness (see
[this article](https://www.unite.ai/ai-now-institute-warns-about-misuse-of-emotion-detection-software-and-other-ethical-issues/)).

### Discussion of Biases

From the authors' GitHub page:

> Potential biases in the data include: Inherent biases in Reddit and user base biases, the offensive/vulgar word lists used for data filtering, inherent or unconscious bias in assessment of offensive identity labels, annotators were all native English speakers from India. All these likely affect labelling, precision, and recall for a trained model. Anyone using this dataset should be aware of these limitations of the dataset.

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Researchers at Amazon Alexa, Google Research, and Stanford. See the [author list](https://arxiv.org/abs/2005.00547).

### Licensing Information

The GitHub repository which houses the original GoEmotions dataset carries an
[Apache License 2.0](https://github.com/google-research/google-research/blob/master/LICENSE).

### Citation Information

    @inproceedings{demszky2020goemotions,
      author = {Demszky, Dorottya and Movshovitz-Attias, Dana and Ko, Jeongwoo and Cowen, Alan and Nemade, Gaurav and Ravi, Sujith},
      booktitle = {58th Annual Meeting of the Association for Computational Linguistics (ACL)},
      title = {{GoEmotions: A Dataset of Fine-Grained Emotions}},
      year = {2020}
    }

### Contributions

Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.