Update README.md

README.md
### Social Impact of Dataset

The Relationship Advice dataset has the potential to significantly impact how language models interact with users, especially in emotionally charged situations. By training models on this dataset, we can develop AI systems capable of providing sensitive, empathetic, and contextually appropriate responses in scenarios related to human relationships. This could enhance AI's ability to support individuals seeking advice or emotional support online, potentially alleviating feelings of isolation or distress.

However, it's crucial to consider the implications of deploying such models. While the AI can assist users, there is a risk of it reinforcing harmful stereotypes or providing advice that might not be suitable for all users. Therefore, continuous monitoring, improvement, and ethical considerations must accompany the development and deployment of models trained on this dataset to ensure they contribute positively to users' well-being.
### Discussion of Biases

Bias is an inherent risk in any dataset derived from human-generated content, especially from platforms like Reddit, where the user base may not be representative of the broader population. The Relationship Advice dataset could contain biases related to gender, cultural norms, socio-economic status, or other demographic factors that are not explicitly captured in the data but may influence the nature of the advice and comments.

Moreover, the subreddit communities from which the data is drawn may have their own subcultural biases, which can be reflected in the data. For example, advice given might skew toward certain perspectives that are more prevalent in these online communities, potentially marginalizing other viewpoints. Additionally, the annotation process, despite efforts to standardize it through guidelines, might also introduce biases based on the annotators' interpretations and backgrounds.

To mitigate these biases, users of the dataset should consider employing techniques such as bias detection and mitigation during model training and evaluation. It is also recommended that models trained on this dataset be tested across diverse user groups to ensure fairness and inclusivity in the responses generated.
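As a minimal illustration of the kind of bias probe suggested above, the sketch below counts gendered pronouns across a handful of comments. The sample texts and the term list are invented for illustration only and are not drawn from the dataset; a real audit would use a proper word-list resource and the full corpus.

```python
from collections import Counter
import re

# Hypothetical sample comments -- NOT taken from the dataset.
comments = [
    "He should apologize to his girlfriend immediately.",
    "She needs to set boundaries with her boyfriend.",
    "He was wrong; he owes her an apology.",
]

# Illustrative pronoun list; a real audit would use a curated lexicon.
gendered_terms = {"he", "his", "him", "she", "her", "hers"}

def term_counts(texts):
    """Count occurrences of the tracked terms across all texts."""
    tokens = re.findall(r"[a-z']+", " ".join(texts).lower())
    return Counter(t for t in tokens if t in gendered_terms)

counts = term_counts(comments)
male = counts["he"] + counts["his"] + counts["him"]
female = counts["she"] + counts["her"] + counts["hers"]
print(male, female)  # a large skew would flag the corpus for closer review
```

A disparity surfaced by even a crude frequency probe like this can motivate the more careful, model-level bias evaluation the paragraph above recommends.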
### Other Known Limitations

While the Relationship Advice dataset provides a valuable resource for training models to understand and generate emotionally appropriate responses, it has several limitations:

- Limited Scope of Advice: The dataset is derived from a specific subset of Reddit users who engage with particular topics. As a result, the advice and perspectives captured may not generalize to broader contexts or different cultures.

- Post Length and Comment Limitation: Posts and comments are truncated at 500 characters, which may omit important context or nuance that could be crucial for understanding the full scope of the conversation. This limitation could affect the model's ability to fully comprehend and appropriately respond to more complex or detailed scenarios.

- Lack of Demographic Information: The dataset does not include detailed demographic information about the users who generated the posts and comments. This lack of metadata makes it challenging to analyze how responses may vary across different demographic groups, and limits the ability to account for or correct potential biases related to user backgrounds.

- Annotation Subjectivity: Despite the use of guidelines, the annotation process is inherently subjective. Different annotators may interpret the same post or comment differently, leading to inconsistencies in the labels. This subjectivity could influence the performance of models trained on the dataset, especially in tasks requiring nuanced understanding of emotional content.

- Potential for Misuse: As with any dataset involving sensitive topics, there is a risk of misuse. Models trained on this data should not be deployed in critical settings without thorough evaluation and safeguards to prevent harm. For instance, using such models in mental health support contexts without proper oversight could lead to inappropriate or harmful advice being given to users.
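The 500-character truncation noted in the limitations above can be pictured with a short sketch; the helper name and sample text are hypothetical and not part of the dataset's actual tooling.

```python
def truncate_text(text: str, limit: int = 500) -> str:
    """Clip a post or comment to `limit` characters, mirroring the
    dataset's 500-character cutoff described above."""
    return text if len(text) <= limit else text[:limit]

# A long post loses everything past character 500, including any
# later context that might change the meaning of the advice sought.
post = "I need advice about my partner. " * 40  # ~1280 characters
clipped = truncate_text(post)
print(len(clipped))  # 500
```

Anything after the cutoff is silently lost, which is why models trained on these clipped texts may miss context present in the original threads.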
## Additional Information

### Dataset Creators

The dataset was created by Yonatan Koifman, Yael Hari, and Yahav Cohen as part of a project for the NLP Research course at the Data Science & Decisions Faculty at the Technion.
### Contributions
Special thanks to Or Cohen, Or Dado, Kere Gruetke, and Tal Shalom for being the external annotators of our dataset.