---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- unknown
multilinguality:
- monolingual
pretty_name: Quora Question Pairs
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- semantic-similarity-classification
paperswithcode_id: null
dataset_info:
  features:
  - name: questions
    sequence:
    - name: id
      dtype: int32
    - name: text
      dtype: string
  - name: is_duplicate
    dtype: bool
  splits:
  - name: train
    num_bytes: 58155622
    num_examples: 404290
  download_size: 58176133
  dataset_size: 58155622
---
# Dataset Card for "quora"
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://www.kaggle.com/c/quora-question-pairs](https://www.kaggle.com/c/quora-question-pairs)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 58.17 MB
- **Size of the generated dataset:** 58.15 MB
- **Total amount of disk used:** 116.33 MB
### Dataset Summary
The Quora dataset consists of question pairs, and the task is to determine whether the two questions in a pair are paraphrases of each other (i.e. whether they have the same meaning).
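A minimal sketch of loading the data with the 🤗 `datasets` library (assuming a standard installation; depending on your library version, script-based datasets may require extra arguments such as `trust_remote_code=True`):
```python
from datasets import load_dataset

# The dataset ships a single "train" split (see Data Splits below).
dataset = load_dataset("quora", split="train")

# Each record holds a pair of questions plus a boolean duplicate label.
example = dataset[0]
print(example["questions"]["text"])  # the two question strings
print(example["is_duplicate"])       # True if the pair is a paraphrase
```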
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 58.17 MB
- **Size of the generated dataset:** 58.15 MB
- **Total amount of disk used:** 116.33 MB
An example of 'train' looks as follows.
```
{
    "is_duplicate": true,
    "questions": {
        "id": [1, 2],
        "text": ["Is this a sample question?", "Is this an example question?"]
    }
}
```
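As a quick sanity check on the label distribution, the boolean `is_duplicate` column can be tallied directly; this sketch assumes the dataset has been loaded as `dataset` in the loading example above:
```python
# Count duplicate vs. non-duplicate pairs in the train split.
num_duplicates = sum(dataset["is_duplicate"])
print(f"{num_duplicates} of {len(dataset)} pairs are marked as duplicates")
```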
### Data Fields
The data fields are the same among all splits.
#### default
- `questions`: a dictionary feature containing:
  - `id`: an `int32` feature.
  - `text`: a `string` feature.
- `is_duplicate`: a `bool` feature.
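Because `questions` stores both questions of a pair as parallel lists, it is often convenient to flatten each record into two separate text columns before training a pair classifier. A sketch continuing from the loading example above (the column names `question1`, `question2`, and `label` are illustrative choices, not part of the dataset):
```python
def split_pair(example):
    # "questions" holds parallel lists: two ids and two question texts.
    q1, q2 = example["questions"]["text"]
    return {"question1": q1, "question2": q2, "label": int(example["is_duplicate"])}

flat = dataset.map(split_pair, remove_columns=["questions", "is_duplicate"])
print(flat[0])  # {"question1": ..., "question2": ..., "label": 0 or 1}
```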
### Data Splits
| name    |  train |
|---------|-------:|
| default | 404290 |
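Since only a `train` split is provided, users typically carve out their own held-out set. A sketch using `train_test_split` on the loaded dataset (the 10% fraction and the seed are arbitrary choices):
```python
# Reserve 10% of the pairs as a held-out validation set.
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_ds, valid_ds = splits["train"], splits["test"]
print(len(train_ds), len(valid_ds))
```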
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Unknown license.
### Citation Information
Unknown.
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@ghomasHudson](https://github.com/ghomasHudson), [@lewtun](https://github.com/lewtun) for adding this dataset. |