---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: context
    dtype: string
  - name: answer_1
    dtype: string
  - name: answer_2
    dtype: string
  - name: subreddit
    dtype: string
  - name: num_comments
    dtype: int64
  - name: score
    dtype: int64
  - name: upvote_ratio
    dtype: float64
  - name: ups
    dtype: float64
  - name: downs
    dtype: float64
  - name: author
    dtype: string
  - name: created_utc
    dtype: int64
  - name: retrieved_on
    dtype: float64
  - name: retrieved_utc
    dtype: float64
  - name: id
    dtype: string
  - name: comment_is_submission
    dtype: bool
  - name: score_answer1
    dtype: int64
  - name: upvote_ratio_answer1
    dtype: 'null'
  - name: ups_answer1
    dtype: float64
  - name: downs_answer1
    dtype: float64
  - name: author_answer1
    dtype: string
  - name: name_answer1
    dtype: string
  - name: id_answer1
    dtype: string
  - name: created_utc_answer1
    dtype: int64
  - name: retrieved_on_answer1
    dtype: float64
  - name: retrieved_utc_answer1
    dtype: float64
  - name: score_answer2
    dtype: int64
  - name: upvote_ratio_answer2
    dtype: 'null'
  - name: ups_answer2
    dtype: float64
  - name: downs_answer2
    dtype: float64
  - name: author_answer2
    dtype: string
  - name: name_answer2
    dtype: string
  - name: id_answer2
    dtype: string
  - name: created_utc_answer2
    dtype: int64
  - name: retrieved_on_answer2
    dtype: float64
  - name: retrieved_utc_answer2
    dtype: float64
  splits:
  - name: train
    num_bytes: 116393440
    num_examples: 53061
  - name: test
    num_bytes: 1076885
    num_examples: 500
  download_size: 73381121
  dataset_size: 117470325
task_categories:
- question-answering
language:
- en
tags:
- finance
size_categories:
- 10K<n<100K
---
# Dataset Card for SocialFinanceQA

SocialFinanceQA is the first social-media preference dataset for fine-tuning and aligning LLMs for the finance domain.

## Dataset Details

### Dataset Description

- **Curated by:** Kris-Fillip Kahl, Tolga Buz, Russa Biswas, Gerard de Melo
- **Language(s) (NLP):** English
- **License:** MIT

### Dataset Sources

- **Repository:** https://github.com/kris-fillip/SocialFinanceQA

## Uses

This dataset is intended for fine-tuning and aligning LLMs for the task of Long-Form Question Answering (LFQA).

### Direct Use

This dataset is suitable for supervised fine-tuning as well as preference alignment of LLMs.
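
The sketch below illustrates both use cases: supervised fine-tuning pairs built from the preferred answer, and preference triples built from both answers. It is a minimal sketch, not the authors' training setup; the Hugging Face dataset ID shown is a placeholder, while the field names (`text`, `context`, `answer_1`, `answer_2`) follow the schema declared in the metadata above.

```python
# Minimal sketch: turning SocialFinanceQA records into SFT pairs and preference triples.
# The dataset ID is a placeholder -- substitute the actual Hub ID of this repository.
from datasets import load_dataset

train = load_dataset("kris-fillip/SocialFinanceQA", split="train")  # placeholder ID

def build_prompt(example):
    # Combine the question title with its (possibly empty) context.
    context = example["context"] or ""
    return f"{example['text']}\n\n{context}".strip()

def to_sft_pair(example):
    # Supervised fine-tuning: learn to produce the preferred answer.
    return {"prompt": build_prompt(example), "completion": example["answer_1"]}

def to_preference_triple(example):
    # Preference alignment (e.g., DPO-style training): answer_1 is preferred over answer_2.
    return {
        "prompt": build_prompt(example),
        "chosen": example["answer_1"],
        "rejected": example["answer_2"],
    }

sft_data = train.map(to_sft_pair)
preference_data = train.map(to_preference_triple)
```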

## Dataset Structure

The dataset consists of questions taken from submissions and corresponding answers taken from comments within the selected subreddits.
It is a preference dataset constructed from the scores (upvotes and downvotes) of the individual submissions and comments within their subreddits.
All fields referring to answer_1 (e.g., `answer_1`, `score_answer1`) belong to the preferred answer for a given question, while the corresponding answer_2 fields belong to the non-preferred answer.
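
To make the preference relation concrete, the following minimal sketch loads the test split and compares the score fields of both answers for a single record. The dataset ID is again a placeholder; the field names come from the schema above.

```python
# Minimal sketch: inspecting one record and its preference signal.
from datasets import load_dataset

test = load_dataset("kris-fillip/SocialFinanceQA", split="test")  # placeholder ID

record = test[0]
print("Question:      ", record["text"][:120])
print("Preferred:     ", record["answer_1"][:120], "| score:", record["score_answer1"])
print("Non-preferred: ", record["answer_2"][:120], "| score:", record["score_answer2"])
```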

## Dataset Creation

### Curation Rationale

Retail investing is on the rise, and a growing number of users are relying on online finance communities to educate themselves.
However, recent years have positioned Large Language Models (LLMs) as powerful question-answering (QA) tools, shifting users away from interacting in communities toward discourse with AI-driven conversational interfaces.
These AI tools are currently limited by the availability of labeled data containing domain-specific financial knowledge.
Therefore, in this work, we curate SocialFinanceQA, a QA preference dataset for fine-tuning and aligning LLMs, extracted from over 7.4 million submissions and 82 million comments posted between 2008 and 2022 in Reddit’s 15 largest finance communities.

### Source Data

The data originally stems from Reddit submissions and comments made to 15 of the largest financial subreddits (personalfinance, financialindependence, FinancialPlanning, investing, wallstreetbets, Wallstreetbetsnew, stocks, StockMarket, pennystocks, options, RealEstate, Economics, realestateinvesting, AskEconomics, explainlikeimfive) between 2008 and 2022.

#### Data Collection and Processing

The submissions and comments were obtained from the Pushshift Reddit archives and filtered to retain high-quality question-answer pairs, leveraging Reddit’s internal mechanisms (such as scores and upvote ratios) together with additional techniques such as toxicity detection (see Bias, Risks, and Limitations below).
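
The sketch below is purely illustrative of this kind of score- and toxicity-based filtering; it is not the pipeline used to build the dataset. `toxicity_score` is a hypothetical stand-in for a real toxicity classifier, and the thresholds are arbitrary examples.

```python
# Illustrative sketch only -- not the actual pipeline used to build SocialFinanceQA.
from datasets import load_dataset

def toxicity_score(text: str) -> float:
    # Hypothetical stand-in for a real toxicity classifier; here it merely
    # checks a tiny keyword list for illustration purposes.
    flagged = {"idiot", "stupid", "moron"}
    words = text.lower().split()
    return sum(word.strip(".,!?") in flagged for word in words) / max(len(words), 1)

def keep_example(example) -> bool:
    a1, a2 = example["answer_1"] or "", example["answer_2"] or ""
    # Keep pairs where the preferred answer is better scored, both answers are
    # non-empty, and neither answer looks toxic.
    return (
        example["score_answer1"] > example["score_answer2"]
        and len(a1) > 0
        and len(a2) > 0
        and toxicity_score(a1) < 0.01
        and toxicity_score(a2) < 0.01
    )

train = load_dataset("kris-fillip/SocialFinanceQA", split="train")  # placeholder ID
filtered = train.filter(keep_example)
```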

#### Who are the source data producers?

Reddit users who contributed to the selected financial subreddits between 2008 and 2022.

#### Personal and Sensitive Information

The dataset contains the usernames of the authors of the texts.
These usernames are pseudonyms that do not reveal the author's identity unless the author actively chooses to share personal information in their Reddit profile.
We have not obtained dedicated permission from the authors of the texts to include them in our dataset.
However, this is mitigated by the users agreeing to Reddit’s terms, which allow data extraction services such as Pushshift, the source for our dataset, which was accessed legitimately.
Additionally, the texts may contain the names of individuals, usually public figures relevant to the stock market, whom the authors mentioned in their questions or responses.
We have decided to leave these names in the dataset, as they are usually critical for understanding the text (e.g., when a company’s CEO is mentioned in a text about how that company performs).

## Bias, Risks, and Limitations

Social media should never be the only source for investors planning to make financial decisions.
We have chosen 15 communities covering various topics to provide a broad foundation.
Nonetheless, we recommend that retail investors consult other sources, such as relevant books, scientific literature, and trained professionals, before making significant investment decisions.
Investing often bears the risk of losing money, so investors should be careful with money they cannot afford to lose.
Inappropriate, toxic, or offensive language is always a risk when working with social media data.
We mitigate this by filtering our dataset carefully, leveraging Reddit’s internal mechanisms and additional techniques, such as toxicity detection.
Nonetheless, there is a possibility that our dataset still harbors instances of text that may be considered offensive or may cause fine-tuned LLMs to generate such language.

## Dataset Card Authors

Kris-Fillip Kahl, Tolga Buz |