yonatanko committed
Commit e6f7b7c · verified · 1 Parent(s): f397922

Update README.md

Files changed (1): README.md (+18 -42)

README.md CHANGED
@@ -146,59 +146,35 @@ The posts and comments do not contain any personal information and are submitted
 
 ### Social Impact of Dataset
 
- The purpose of this dataset is to help develop better question answering systems.
 
- A system that succeeds at the supported task would be able to provide a coherent answer to even complex questions requiring a multi-step explanation, which is beyond the ability of even the larger existing models. The task is also thought of as a test-bed for retrieval models, which can show users which source text was used in generating the answer and allow them to confirm the information provided to them.
 
- It should be noted, however, that the provided answers were written by Reddit users, information which may be lost if models trained on it are deployed in downstream applications and presented to users without context. The specific biases this may introduce are discussed in the next section.
 
- ### Discussion of Biases
 
- While Reddit hosts a number of thriving communities with high-quality discussions, it is also widely known to have corners where sexism, hate, and harassment are significant issues. See, for example, the [recent post from Reddit founder u/spez](https://www.reddit.com/r/announcements/comments/gxas21/upcoming_changes_to_our_content_policy_our_board/) outlining some of the ways he thinks the website's historical policies have been responsible for this problem, [Adrienne Massanari's 2015 article on GamerGate](https://www.researchgate.net/publication/283848479_Gamergate_and_The_Fappening_How_Reddit's_algorithm_governance_and_culture_support_toxic_technocultures) and follow-up works, or a [2019 Wired article on misogyny on Reddit](https://www.wired.com/story/misogyny-reddit-research/).
 
- While there has been some recent work in the NLP community on *de-biasing* models (e.g. [Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings](https://arxiv.org/abs/1904.04047) for word embeddings trained specifically on Reddit data), this problem is far from solved, and the likelihood that a trained model might learn the biases present in the data remains a significant concern.
 
- We still note some encouraging signs for all of these communities: [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) and [r/askscience](https://www.reddit.com/r/askscience/) have similar structures and purposes, and [r/askscience](https://www.reddit.com/r/askscience/) was found in 2015 to show medium supportiveness and very low toxicity compared to other subreddits (see a [hackerfall post](https://hackerfall.com/story/study-and-interactive-visualization-of-toxicity-in), a [thecut.com write-up](https://www.thecut.com/2015/03/interactive-chart-of-reddits-toxicity.html), and supporting [data](https://chart-studio.plotly.com/~bsbell21/210/toxicity-vs-supportiveness-by-subreddit/#data)). Meanwhile, the [r/AskHistorians rules](https://www.reddit.com/r/AskHistorians/wiki/rules) state that the admins will not tolerate "_racism, sexism, or any other forms of bigotry_". However, further analysis of whether and to what extent these rules reduce toxicity is still needed.
 
- We also note that, given that Reddit is most widely used in the US and Europe, the answers will likely present a Western perspective, which is particularly important to note when dealing with historical topics.
 
- ### Other Known Limitations
 
- The answers provided in the dataset represent the opinions of Reddit users. While these communities strive to be helpful, they should not be considered a ground truth.
 
 ## Additional Information
 
- ### Dataset Curators
-
- The dataset was initially created by Angela Fan, Ethan Perez, Yacine Jernite, Jason Weston, Michael Auli, and David Grangier, during work done at Facebook AI Research (FAIR).
-
- ### Licensing Information
-
- The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data, which is unclear.
-
- ### Citation Information
-
- ```
- @inproceedings{eli5_lfqa,
-   author    = {Angela Fan and
-                Yacine Jernite and
-                Ethan Perez and
-                David Grangier and
-                Jason Weston and
-                Michael Auli},
-   editor    = {Anna Korhonen and
-                David R. Traum and
-                Llu{\'{\i}}s M{\`{a}}rquez},
-   title     = {{ELI5:} Long Form Question Answering},
-   booktitle = {Proceedings of the 57th Conference of the Association for Computational
-                Linguistics, {ACL} 2019, Florence, Italy, July 28 - August 2, 2019,
-                Volume 1: Long Papers},
-   pages     = {3558--3567},
-   publisher = {Association for Computational Linguistics},
-   year      = {2019},
-   url       = {https://doi.org/10.18653/v1/p19-1346},
-   doi       = {10.18653/v1/p19-1346}
- }
- ```
 
 ### Contributions
 
 ### Social Impact of Dataset
 
+ The Relationship Advice dataset has the potential to significantly impact how language models interact with users, especially in emotionally charged situations. By training models on this dataset, we can develop AI systems capable of providing sensitive, empathetic, and contextually appropriate responses in scenarios related to human relationships. This could enhance AI's ability to support individuals seeking advice or emotional support online, potentially alleviating feelings of isolation or distress.
+ However, it's crucial to consider the implications of deploying such models. While the AI can assist users, there is a risk of it reinforcing harmful stereotypes or providing advice that might not be suitable for all users. Continuous monitoring, improvement, and ethical review must therefore accompany the development and deployment of models trained on this dataset to ensure they contribute positively to users' well-being.
 
+ ### Discussion of Biases
 
+ Bias is an inherent risk in any dataset derived from human-generated content, especially from platforms like Reddit, where the user base may not be representative of the broader population. The Relationship Advice dataset could contain biases related to gender, cultural norms, socio-economic status, or other demographic factors that are not explicitly captured in the data but may influence the nature of the advice and comments.
+ Moreover, the subreddit communities from which the data is drawn may have their own subcultural biases, which can be reflected in the data. For example, the advice given might skew toward perspectives that are more prevalent in these online communities, potentially marginalizing other viewpoints. Additionally, the annotation process, despite efforts to standardize it through guidelines, might introduce biases based on the annotators' interpretations and backgrounds.
+ To mitigate these biases, users of the dataset should consider employing bias detection and mitigation techniques during model training and evaluation, for example with a screen like the one sketched below. It is also recommended that models trained on this dataset be tested across diverse user groups to ensure fairness and inclusivity in the responses generated.
 
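+ A minimal sketch of such a pre-training screen, assuming the dataset loads through the Hugging Face `datasets` library and exposes a text field named `comment`; the Hub path, the field name, and the 0.8 threshold are hypothetical placeholders, and the off-the-shelf classifier is only one possible choice.
+
+ ```python
+ from datasets import load_dataset
+ from transformers import pipeline
+
+ # Hypothetical Hub path and field name; substitute the real ones.
+ ds = load_dataset("relationship-advice", split="train")
+
+ # Off-the-shelf toxicity classifier; label names depend on the model chosen.
+ toxicity = pipeline("text-classification", model="unitary/toxic-bert")
+
+ flagged = []
+ for example in ds.select(range(1000)):  # screen a sample before a full pass
+     result = toxicity(example["comment"][:512])[0]  # truncate for the model
+     if result["label"] == "toxic" and result["score"] > 0.8:
+         flagged.append(example)
+
+ print(f"{len(flagged)} of 1000 sampled comments flagged for manual review")
+ ```
+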
+ ### Other Known Limitations
 
+ While the Relationship Advice dataset provides a valuable resource for training models to understand and generate emotionally appropriate responses, it has several limitations:
 
+ - Limited Scope of Advice: The dataset is derived from a specific subset of Reddit users who engage with particular topics. As a result, the advice and perspectives captured may not generalize to broader contexts or different cultures.
 
+ - Post Length and Comment Limitation: Posts and comments are truncated at 500 characters, which may omit context or nuance crucial for understanding the full scope of the conversation. This limitation could affect a model's ability to fully comprehend and appropriately respond to more complex or detailed scenarios (a simple truncation check is sketched after this list).
 
+ - Lack of Demographic Information: The dataset does not include detailed demographic information about the users who wrote the posts and comments. This lack of metadata makes it challenging to analyze how responses vary across demographic groups and limits the ability to account for or correct biases related to user backgrounds.
 
+ - Annotation Subjectivity: Despite the use of guidelines, the annotation process is inherently subjective. Different annotators may interpret the same post or comment differently, leading to inconsistencies in the labels. This subjectivity can affect models trained on the dataset, especially in tasks requiring a nuanced understanding of emotional content (an agreement check is sketched after this list).
 
+ - Potential for Misuse: As with any dataset involving sensitive topics, there is a risk of misuse. Models trained on this data should not be deployed in critical settings without thorough evaluation and safeguards to prevent harm. For instance, using such models in mental health support contexts without proper oversight could lead to inappropriate or harmful advice being given to users.
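+
+ The truncation caveat can be checked mechanically: records whose text sits exactly at the cap were almost certainly cut mid-sentence. A minimal sketch, again assuming a hypothetical Hub path and a hypothetical `post` text field:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("relationship-advice", split="train")  # hypothetical path
+
+ # Posts at the 500-character cap were likely cut off; flag them so downstream
+ # users can treat them as partial context rather than complete posts.
+ likely_truncated = ds.filter(lambda ex: len(ex["post"]) >= 500)
+ print(f"{len(likely_truncated)} of {len(ds)} posts sit at the truncation cap")
+ ```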
 
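+ For the annotation-subjectivity caveat, inter-annotator agreement can be quantified before trusting the labels. A minimal sketch using Cohen's kappa; the two annotators' label lists and the label names are hypothetical placeholders:
+
+ ```python
+ from sklearn.metrics import cohen_kappa_score
+
+ # Hypothetical parallel labels from two annotators over the same items.
+ annotator_a = ["supportive", "critical", "neutral", "supportive"]
+ annotator_b = ["supportive", "neutral", "neutral", "supportive"]
+
+ # Kappa corrects raw agreement for chance; values near 1 mean high agreement.
+ kappa = cohen_kappa_score(annotator_a, annotator_b)
+ print(f"Cohen's kappa: {kappa:.2f}")
+ ```
+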
 ## Additional Information
 
+ ### Dataset Creators
+
+ The dataset was created by Yonatan Koifman, Yael Hari, and Yahav Cohen as part of a project for the NLP Research course at the Data Science & Decisions Faculty at the Technion.
 
  ### Contributions
+
+ Special thanks to Or Cohen, Or Dado, Kere Gruetke, and Tal Shalom for being the external annotators of our dataset.