---
language:
- en
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: AllNLI
dataset_info:
- config_name: pair
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  splits:
  - name: train
    num_bytes: 43012118
    num_examples: 314315
  - name: dev
    num_bytes: 992955
    num_examples: 6808
  - name: test
    num_bytes: 1042254
    num_examples: 6831
  download_size: 27501136
  dataset_size: 45047327
- config_name: pair-class
  features:
  - name: premise
    dtype: string
  - name: hypothesis
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': entailment
          '1': neutral
          '2': contradiction
  splits:
  - name: train
    num_bytes: 138755142
    num_examples: 942069
  - name: dev
    num_bytes: 3034127
    num_examples: 19657
  - name: test
    num_bytes: 3142127
    num_examples: 19656
  download_size: 72651651
  dataset_size: 144931396
- config_name: pair-score
  features:
  - name: label
    dtype: float64
  - name: sentence_1
    dtype: string
  - name: sentence_2
    dtype: string
  splits:
  - name: train
    num_bytes: 138755142
    num_examples: 942069
  - name: dev
    num_bytes: 3034127
    num_examples: 19657
  - name: test
    num_bytes: 3142127
    num_examples: 19656
  download_size: 72656563
  dataset_size: 144931396
- config_name: triplet
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  splits:
  - name: train
    num_bytes: 197631954
    num_examples: 1115700
  - name: dev
    num_bytes: 2545182
    num_examples: 13168
  - name: test
    num_bytes: 2682532
    num_examples: 13218
  download_size: 65778763
  dataset_size: 202859668
configs:
- config_name: pair
  data_files:
  - split: train
    path: pair/train-*
  - split: dev
    path: pair/dev-*
  - split: test
    path: pair/test-*
- config_name: pair-class
  data_files:
  - split: train
    path: pair-class/train-*
  - split: dev
    path: pair-class/dev-*
  - split: test
    path: pair-class/test-*
- config_name: pair-score
  data_files:
  - split: train
    path: pair-score/train-*
  - split: dev
    path: pair-score/dev-*
  - split: test
    path: pair-score/test-*
- config_name: triplet
  data_files:
  - split: train
    path: triplet/train-*
  - split: dev
    path: triplet/dev-*
  - split: test
    path: triplet/test-*
---
# Dataset Card for AllNLI
This dataset is a concatenation of the SNLI and MultiNLI datasets. Although it was originally intended for Natural Language Inference (NLI), it can also be used to train or finetune an embedding model for semantic textual similarity.
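Each of the four configurations described below (`pair`, `pair-class`, `pair-score`, `triplet`) can be loaded separately with the 🤗 `datasets` library. A minimal loading sketch, assuming the dataset is hosted on the Hugging Face Hub under a repo id such as `sentence-transformers/all-nli` (the exact path is not stated in this card):

```python
from datasets import load_dataset

# The repo id below is an assumption; replace it with the actual Hub path of this dataset.
pairs = load_dataset("sentence-transformers/all-nli", "pair", split="train")
triplets = load_dataset("sentence-transformers/all-nli", "triplet", split="train")

print(triplets[0])
# e.g. {'anchor': 'A person on a horse jumps over a broken down airplane.',
#       'positive': 'A person is outdoors, on a horse.',
#       'negative': 'A person is at a diner, ordering an omelette.'}
```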
## Dataset Subsets
### `pair-class` subset

- Columns: "premise", "hypothesis", "label"
- Column types: `str`, `str`, `class` with `{"0": "entailment", "1": "neutral", "2": "contradiction"}`
- Examples:
  `{'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1}`
- Collection strategy: Reading the premise, hypothesis and integer label from the SNLI & MultiNLI datasets (a sketch follows below).
- Deduplicated: Yes
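The collection strategy can be reproduced roughly as follows. This is a minimal sketch, not the exact build script: the upstream repo ids (`stanfordnlp/snli`, `nyu-mll/multi_nli`) and the handling of unlabeled rows are assumptions.

```python
from datasets import load_dataset, concatenate_datasets

# Assumed upstream sources; SNLI marks unlabeled examples with label == -1.
snli = load_dataset("stanfordnlp/snli", split="train")
mnli = load_dataset("nyu-mll/multi_nli", split="train").select_columns(
    ["premise", "hypothesis", "label"]
)

# Concatenate both corpora and drop rows without a gold label.
pair_class = concatenate_datasets([snli, mnli]).filter(lambda ex: ex["label"] != -1)

# Deduplicate on the (premise, hypothesis) text pair.
seen = set()

def keep_first(example):
    key = (example["premise"], example["hypothesis"])
    if key in seen:
        return False
    seen.add(key)
    return True

pair_class = pair_class.filter(keep_first)
```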
### `pair-score` subset

- Columns: "sentence_1", "sentence_2", "label"
- Column types: `str`, `str`, `float`
- Examples:
  `{'sentence_1': 'A person on a horse jumps over a broken down airplane.', 'sentence_2': 'A person is training his horse for a competition.', 'label': 0.5}`
- Collection strategy: Taking the `pair-class` subset and remapping "entailment", "neutral" and "contradiction" to 1.0, 0.5 and 0.0, respectively (a sketch follows below).
- Deduplicated: Yes
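A sketch of that remapping with 🤗 `datasets`, again assuming the `pair-class` configuration is available under a repo id like `sentence-transformers/all-nli` (an assumption):

```python
from datasets import load_dataset

# Repo id is an assumption; integer labels follow the mapping given above.
pair_class = load_dataset("sentence-transformers/all-nli", "pair-class", split="train")
score_map = {0: 1.0, 1: 0.5, 2: 0.0}  # entailment, neutral, contradiction

pair_score = pair_class.rename_columns(
    {"premise": "sentence_1", "hypothesis": "sentence_2"}
)
# Add the float score first, then swap it in for the integer class label.
pair_score = pair_score.map(lambda ex: {"score": score_map[ex["label"]]})
pair_score = pair_score.remove_columns("label").rename_column("score", "label")
```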
### `pair` subset

- Columns: "anchor", "positive"
- Column types: `str`, `str`
- Examples:
  `{'anchor': 'A person on a horse jumps over a broken down airplane.', 'positive': 'A person is outdoors, on a horse.'}`
- Collection strategy: Reading the SNLI & MultiNLI datasets and taking the "premise" as the "anchor" and the "hypothesis" as the "positive" if the label is "entailment" (a sketch follows below). The reverse (the "hypothesis" as the "anchor" and the "premise" as the "positive") is not included.
- Deduplicated: Yes
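A sketch of that selection, assuming the `pair-class` configuration is available under the same (assumed) repo id:

```python
from datasets import load_dataset

# Repo id is an assumption; label 0 corresponds to "entailment".
pair_class = load_dataset("sentence-transformers/all-nli", "pair-class", split="train")

pair = (
    pair_class.filter(lambda ex: ex["label"] == 0)  # keep entailment pairs only
    .remove_columns("label")
    .rename_columns({"premise": "anchor", "hypothesis": "positive"})
)
```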
### `triplet` subset

- Columns: "anchor", "positive", "negative"
- Column types: `str`, `str`, `str`
- Examples:
  `{'anchor': 'A person on a horse jumps over a broken down airplane.', 'positive': 'A person is outdoors, on a horse.', 'negative': 'A person is at a diner, ordering an omelette.'}`
- Collection strategy: Reading the SNLI & MultiNLI datasets and, for each "premise", building one list of entailing sentences and one list of contradicting sentences from the dataset labels, then forming every possible (anchor, positive, negative) triplet from those two lists (a sketch follows below). The reverse (the entailing "hypothesis" as the "anchor" and the "premise" as the "positive") is also included.
- Deduplicated: Yes
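A sketch of that triplet construction, grouping hypotheses by premise and expanding all (positive, negative) combinations; the repo id and the exact handling of the reversed pairs are assumptions:

```python
from collections import defaultdict
from itertools import product

from datasets import Dataset, load_dataset

# Repo id is an assumption; labels: 0 = entailment, 2 = contradiction.
pair_class = load_dataset("sentence-transformers/all-nli", "pair-class", split="train")

entailments, contradictions = defaultdict(list), defaultdict(list)
for ex in pair_class:
    if ex["label"] == 0:
        entailments[ex["premise"]].append(ex["hypothesis"])
    elif ex["label"] == 2:
        contradictions[ex["premise"]].append(ex["hypothesis"])

# Every entailing hypothesis is paired with every contradicting one for the same premise.
triplet = Dataset.from_list(
    [
        {"anchor": premise, "positive": pos, "negative": neg}
        for premise, positives in entailments.items()
        for pos, neg in product(positives, contradictions[premise])
    ]
)
```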