NLI diagnostics test split (premise/hypothesis pairs). Column schema, as summarized by the dataset viewer: `premise` (string, 4 to 2.27k characters), `hypothesis` (string, 10 to 331 characters), `label` (string, 4 classes; only `entailment` and `not_entailment` occur in this excerpt), `num_options` (int64, values 2 or 3; 2 throughout this excerpt), `id` (string, 42 to 49 characters).
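Many rows below come in forward/reverse pairs: the same two sentences appear once as (premise, hypothesis) and once swapped, probing whether the inference holds in both directions. A minimal sketch of how a row and such pairs might be represented; `DiagnosticExample` and `reversed_twin` are illustrative names, not part of the dataset:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiagnosticExample:
    """One row of the split, mirroring the column schema above."""
    premise: str
    hypothesis: str
    label: str        # the schema allows 4 classes; this excerpt uses only
                      # "entailment" and "not_entailment"
    num_options: int  # int64 in the schema; 2 for every row shown here
    id: str           # e.g. "diagnostics_test_b679da64a985466895d1d8b730472fa9"


def reversed_twin(a: DiagnosticExample, b: DiagnosticExample) -> bool:
    """True when b tests the opposite direction of a, i.e. the two
    sentences are the same but premise and hypothesis are swapped."""
    return a.premise == b.hypothesis and a.hypothesis == b.premise
```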
| premise | hypothesis | label | num_options | id |
|---|---|---|---|---|
| If Charles' left wing, commanded by Nauendorf, united with Hotze's force, approaching from the east, Masséna would prepare for Charles to attack and very likely push him out of Zürich. | If Charles' left wing, commanded by Nauendorf, united with Hotze's force, approaching from the east, Masséna knew Charles would attack and very likely push him out of Zürich. | not_entailment | 2 | diagnostics_test_b679da64a985466895d1d8b730472fa9 |
| Ferdinand of Naples refused to pay agreed-upon tribute to France, and his subjects followed this refusal with a rebellion. | Ferdinand of Naples refused to pay France the agreed-upon tribute, and his subjects followed this refusal with a rebellion. | entailment | 2 | diagnostics_test_3907842329334591b266ebc66502c6d9 |
| Ferdinand of Naples refused to pay France the agreed-upon tribute, and his subjects followed this refusal with a rebellion. | Ferdinand of Naples refused to pay agreed-upon tribute to France, and his subjects followed this refusal with a rebellion. | entailment | 2 | diagnostics_test_32ef609d98d047089a418dd95137fab6 |
| Furthermore, the French dangerously underestimated Austrian tenacity and military skill. | Furthermore, the French dangerously underestimated Austrian military skill and tenacity. | entailment | 2 | diagnostics_test_42d18cc9803e4d87bbbdc7efab13941e |
| Furthermore, the French dangerously underestimated Austrian military skill and tenacity. | Furthermore, the French dangerously underestimated Austrian tenacity and military skill. | entailment | 2 | diagnostics_test_f69b5869bc9a4585b179933ee4b19b7e |
| Furthermore, the French dangerously underestimated Austrian tenacity and military skill. | Furthermore, the French dangerously underestimated Austrian military skill. | entailment | 2 | diagnostics_test_1e5907242cb54811b7ec1ab5adcb3315 |
| Furthermore, the French dangerously underestimated Austrian military skill. | Furthermore, the French dangerously underestimated Austrian tenacity and military skill. | not_entailment | 2 | diagnostics_test_a72bd3b4409b46c2b6d9fa0b1092946a |
| There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | There are four supraocular scales (above the eyes) in most specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | entailment | 2 | diagnostics_test_24c3aeeefb734fff9e5d01398f254ee2 |
| There are four supraocular scales (above the eyes) in most specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | not_entailment | 2 | diagnostics_test_3f32a8fd02644fd6b640069158041d6b |
| There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | There are four scales above the eyes in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | entailment | 2 | diagnostics_test_1bd793d9e1814981b4eb7f39f38cd95a |
| There are four scales above the eyes in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | There are four supraocular scales (above the eyes) in almost all specimens and five supraciliary scales (immediately above the eyes, below the supraoculars). | entailment | 2 | diagnostics_test_6755438882844a289c00b5278b825562 |
| All 860 officers and men on board, including Spee, went down with the ship. | Spee went down with the ship. | entailment | 2 | diagnostics_test_6c813ac4475e4151acaca7e71f2ff079 |
| Spee went down with the ship. | All 860 officers and men on board, including Spee, went down with the ship. | not_entailment | 2 | diagnostics_test_c21bd05775ed4920882dd957eb6ecd3f |
| Regional governors could not rely on the king for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. | Regional governors could not rely on anyone for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. | not_entailment | 2 | diagnostics_test_1037708d83f54761a7a0a6b99ce860e5 |
| Regional governors could not rely on anyone for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. | Regional governors could not rely on the king for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. | entailment | 2 | diagnostics_test_eb8adf815da7459fbb6eef6081011f75 |
| The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of the Middle Kingdom of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | entailment | 2 | diagnostics_test_ce5ddb095d63411a886bd83cde87a957 |
| The pharaohs of the Middle Kingdom of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | entailment | 2 | diagnostics_test_8619bccff21640f9805c294eff9034db |
| The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of the Middle Kingdom of China restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | not_entailment | 2 | diagnostics_test_6c141c6cb36e43eca3adc0849d811616 |
| The pharaohs of the Middle Kingdom of China restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | not_entailment | 2 | diagnostics_test_ed08382b2abc475eb591cf3a5317db69 |
| The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | entailment | 2 | diagnostics_test_7e95a62f198d4a1dac17ad6a33f7f10d |
| The pharaohs of Egypt restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | The pharaohs of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. | not_entailment | 2 | diagnostics_test_181ba15a74084485996f6c0061189dda |
| The 15th Tank Corps was a tank corps of the Soviet Union's Red Army. | The 15th Tank Corps was a corps of the Soviet Union's Red Army. | entailment | 2 | diagnostics_test_a706ad4e642f442288143da4313a76dd |
| The 15th Tank Corps was a corps of the Soviet Union's Red Army. | The 15th Tank Corps was a tank corps of the Soviet Union's Red Army. | entailment | 2 | diagnostics_test_0c8690cb60264fb2941102f24e308f42 |
| I can't believe it's not butter. | It's not butter. | not_entailment | 2 | diagnostics_test_151cfae939d6416595223ef8362d00bb |
| It's not butter. | I can't believe it's not butter. | not_entailment | 2 | diagnostics_test_7a1756d7750c4526a93b6eab631e7ae7 |
| I can't believe it's not butter. | It's butter. | not_entailment | 2 | diagnostics_test_1558f03a784641c98325513001bd63cc |
| It's butter. | I can't believe it's not butter. | not_entailment | 2 | diagnostics_test_f0141d03ca38427897e5721902739045 |
| However, these regularities are sometimes obscured by semantic and syntactic differences. | However, these regularities are always obscured by semantic and syntactic differences. | not_entailment | 2 | diagnostics_test_7dc81d0cb85746bcb2afce91219c3876 |
| However, these regularities are always obscured by semantic and syntactic differences. | However, these regularities are sometimes obscured by semantic and syntactic differences. | not_entailment | 2 | diagnostics_test_1689fef50a9644e8a0a385937140ce04 |
| However, these regularities are sometimes obscured by semantic and syntactic differences. | However, these regularities are sometimes obscured by syntactic differences. | entailment | 2 | diagnostics_test_c4df7b75f1c7432e941ecdd89899128d |
| However, these regularities are sometimes obscured by syntactic differences. | However, these regularities are sometimes obscured by semantic and syntactic differences. | not_entailment | 2 | diagnostics_test_770e2c6f5e504099acf71e44e1be84f1 |
| In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of pragmatic meaning enrichment. | In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of meaning enrichment. | entailment | 2 | diagnostics_test_84180ab743164c63a8db5148842bcbfa |
| In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of meaning enrichment. | In grounded communication tasks, speakers face pressures in choosing referential expressions that distinguish their targets from others in the context, leading to many kinds of pragmatic meaning enrichment. | entailment | 2 | diagnostics_test_2de6c3515c1b4e5bb6b77708b60569cc |
| Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others. | Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other colors. | entailment | 2 | diagnostics_test_ef296831c72848d4a2da721e24242088 |
| Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other colors. | Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others. | entailment | 2 | diagnostics_test_733b2a8736cc43ad86c59dbcf3f45427 |
| Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others. | Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other utterances. | not_entailment | 2 | diagnostics_test_eab69354e7bc423a8331440e4584dd85 |
| Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the other utterances. | Thus, a model of the speaker must process representations of the colors in the context and produce an utterance to distinguish the target color from the others. | not_entailment | 2 | diagnostics_test_3fc27328c14f41f2b1c35fb43ae083e4 |
| While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. | While most approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. | not_entailment | 2 | diagnostics_test_c9e8ca1331374b23ad565864696ea8ec |
| While most approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. | While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. | not_entailment | 2 | diagnostics_test_1a6e623d88f445a69d44c1e1dec42bf4 |
| Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can usually be inferred from the first few sentences. | Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can always be inferred from the first few sentences. | not_entailment | 2 | diagnostics_test_9d9399bf34bc4400b00921e88ad46a45 |
| Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can always be inferred from the first few sentences. | Due to the structure and short length of most Wikipedia documents (median number of sentences: 9), the answer can usually be inferred from the first few sentences. | entailment | 2 | diagnostics_test_14f60d18cba648bb846e600b9746d0df |
| Each captures only a single aspect of coherence, and all focus on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization. | Each captures only a single aspect of coherence and focuses on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization. | entailment | 2 | diagnostics_test_15f1b9276e564886972b005bbc5ba23e |
| Each captures only a single aspect of coherence and focuses on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization. | Each captures only a single aspect of coherence, and all focus on scoring existing sentences, rather than on generating coherent discourse for tasks like abstractive summarization. | entailment | 2 | diagnostics_test_ad49fbbc1b404475ba5d1c6e8cd579f3 |
| In a coherent context, a machine should be able to guess the next utterance given the preceding ones. | In a coherent context, a machine can guess the next utterance given the preceding ones. | not_entailment | 2 | diagnostics_test_d4decf59b6b046c396e21e63ec5dbea5 |
| In a coherent context, a machine can guess the next utterance given the preceding ones. | In a coherent context, a machine should be able to guess the next utterance given the preceding ones. | not_entailment | 2 | diagnostics_test_20b9e3aad0374d5c99f79edd134da8bb |
| We thus propose eliminating the influence of the language model, which yields the following coherence score. | The language model yields the following coherence score. | not_entailment | 2 | diagnostics_test_bb44ba6fe238459fac69b251384547b9 |
| The language model yields the following coherence score. | We thus propose eliminating the influence of the language model, which yields the following coherence score. | not_entailment | 2 | diagnostics_test_1b4f8368338e41d28efce19a6a498140 |
| We thus propose eliminating the influence of the language model, which yields the following coherence score. | Eliminating the influence of the language model yields the following coherence score. | entailment | 2 | diagnostics_test_6c7eb0fffa5646878f26ff97fff7b607 |
| Eliminating the influence of the language model yields the following coherence score. | We thus propose eliminating the influence of the language model, which yields the following coherence score. | not_entailment | 2 | diagnostics_test_a82e1f522b27410e8698824ba6273da6 |
| The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA. | The topic for the current sentence is drawn based on the global document-level topic distribution in vanilla LDA. | not_entailment | 2 | diagnostics_test_3f331d947b30434380b146b8307064e1 |
| The topic for the current sentence is drawn based on the global document-level topic distribution in vanilla LDA. | The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA. | not_entailment | 2 | diagnostics_test_6a9b04695dbb4f9e93a2f65e63e57947 |
| The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA. | The topic for the current sentence is drawn based on the topic of the preceding sentence (or word). | entailment | 2 | diagnostics_test_6ece5f96ac3a40a4bacbcc29fd58abc1 |
| The topic for the current sentence is drawn based on the topic of the preceding sentence (or word). | The topic for the current sentence is drawn based on the topic of the preceding sentence (or word) rather than on the global document-level topic distribution in vanilla LDA. | entailment | 2 | diagnostics_test_d6bc8e2a4c934bd0bf5ef09930057223 |
| We publicly share our dataset and code for future research. | We publicly share our dataset for future research. | entailment | 2 | diagnostics_test_028ab31053dc43598b3b90e0698a9552 |
| We publicly share our dataset for future research. | We publicly share our dataset and code for future research. | not_entailment | 2 | diagnostics_test_7b8c415b74454ce8a3aef09fd39bb09f |
| We publicly share our dataset and code for future research. | We code for future research. | not_entailment | 2 | diagnostics_test_de1572b2109f48dfad5d319af8da99ed |
| We code for future research. | We publicly share our dataset and code for future research. | not_entailment | 2 | diagnostics_test_c3161f2d32ac46b497cf7cb2225c506a |
| This gives the model a sense of the implied action dynamics of the verb between the agent and the world. | This gives to the model a sense of the implied action dynamics of the verb between the agent and the world. | entailment | 2 | diagnostics_test_9cb04d5011ed433186a31a2887057e56 |
| This gives to the model a sense of the implied action dynamics of the verb between the agent and the world. | This gives the model a sense of the implied action dynamics of the verb between the agent and the world. | entailment | 2 | diagnostics_test_40bc982eac6d4e2a9438ca4271cbbc8a |
| This gives the model a sense of the implied action dynamics of the verb between the agent and the world. | This gives the model to a sense of the implied action dynamics of the verb between the agent and the world. | not_entailment | 2 | diagnostics_test_a0bed779d202426b803923827b09cad4 |
| This gives the model to a sense of the implied action dynamics of the verb between the agent and the world. | This gives the model a sense of the implied action dynamics of the verb between the agent and the world. | not_entailment | 2 | diagnostics_test_abafafeb6526427f93d3462f2cf2191b |
| This attribute group specifies prominent body parts involved in carrying out the action. | This attribute group specifies prominent limbs involved in carrying out the action. | entailment | 2 | diagnostics_test_1516fc8f130e4639a969089c3951f38d |
| This attribute group specifies prominent limbs involved in carrying out the action. | This attribute group specifies prominent body parts involved in carrying out the action. | not_entailment | 2 | diagnostics_test_caa5f2de8c724b96a8c4ae452ba773d5 |
| This problem has been studied before for zero-shot object recognition, but there are several key differences. | This problem has been previously studied for zero-shot object recognition, but there are several key differences. | entailment | 2 | diagnostics_test_ac8e1c8b7e054535bb70b7175d1c47c5 |
| This problem has been previously studied for zero-shot object recognition, but there are several key differences. | This problem has been studied before for zero-shot object recognition, but there are several key differences. | entailment | 2 | diagnostics_test_42c0a240b6a84b33bcd36d4d193624d8 |
| This problem has been studied before for zero-shot object recognition, but there are several key differences. | This problem will be studied for zero-shot object recognition, but there are several key differences. | not_entailment | 2 | diagnostics_test_0c073613ff0c43d08aa6123247dcb3bf |
| This problem will be studied for zero-shot object recognition, but there are several key differences. | This problem has been studied before for zero-shot object recognition, but there are several key differences. | not_entailment | 2 | diagnostics_test_052b098de80042e28ef5ee8c87975f07 |
| Understanding a long document requires tracking how entities are introduced and evolve over time. | Understanding a long document requires evolving over time. | not_entailment | 2 | diagnostics_test_0606409a8657487d96025b20b5ee1538 |
| Understanding a long document requires evolving over time. | Understanding a long document requires tracking how entities are introduced and evolve over time. | not_entailment | 2 | diagnostics_test_865037a2b0c244b5b79997d84657017f |
| Understanding a long document requires tracking how entities are introduced and evolve over time. | Understanding a long document requires tracking how entities evolve over time. | entailment | 2 | diagnostics_test_fc66efb338814eefa0fd23c7c61cf09d |
| Understanding a long document requires tracking how entities evolve over time. | Understanding a long document requires tracking how entities are introduced and evolve over time. | not_entailment | 2 | diagnostics_test_b646be9689d8445fab9a59562808fbd3 |
| Understanding a long document requires tracking how entities are introduced and evolve over time. | Understanding a long document requires understanding how entities are introduced. | entailment | 2 | diagnostics_test_282de1ea982c4e6c9e02b40a3b9a9395 |
| Understanding a long document requires understanding how entities are introduced. | Understanding a long document requires tracking how entities are introduced and evolve over time. | not_entailment | 2 | diagnostics_test_346c8b6cc5e94a5bb14d641db538752b |
| We do not assume that these variables are observed at test time. | These variables are not observed at test time. | not_entailment | 2 | diagnostics_test_d838aeb6488d4c1793e6863f0d94fcf7 |
| These variables are not observed at test time. | We do not assume that these variables are observed at test time. | not_entailment | 2 | diagnostics_test_b2d5c15b7b1f43e0bce42700655c5798 |
| To compute the perplexity numbers on the test data, our model only takes account of log probabilities on word prediction. | To compute the perplexity numbers on the test data, our model doesn't take account of anything other than the log probabilities on word prediction. | entailment | 2 | diagnostics_test_e936a24b5d014a049e2e90e4dc17be17 |
| To compute the perplexity numbers on the test data, our model doesn't take account of anything other than the log probabilities on word prediction. | To compute the perplexity numbers on the test data, our model only takes account of log probabilities on word prediction. | entailment | 2 | diagnostics_test_31a111a1fe004c7f87f4595e02428196 |
| We also experiment with the option to either use the pretrained GloVe word embeddings or randomly initialized word embeddings (then updated during training). | We experiment with the option using randomly initialized word embeddings (then updated during training). | entailment | 2 | diagnostics_test_2890d9a90710434b8822096b3d2f305e |
| We experiment with the option using randomly initialized word embeddings (then updated during training). | We also experiment with the option to either use the pretrained GloVe word embeddings or randomly initialized word embeddings (then updated during training). | not_entailment | 2 | diagnostics_test_e1f26ce355c94abcbf63779e09f9a16c |
| The entity prediction task requires predicting xxxx given the preceding text either by choosing a previously mentioned entity or deciding that this is a “new entity”. | The entity prediction task requires predicting xxxx given the preceding text by choosing a previously mentioned entity. | not_entailment | 2 | diagnostics_test_d3c51c1a66c940edbcc7027afe6af085 |
| The entity prediction task requires predicting xxxx given the preceding text by choosing a previously mentioned entity. | The entity prediction task requires predicting xxxx given the preceding text either by choosing a previously mentioned entity or deciding that this is a “new entity”. | not_entailment | 2 | diagnostics_test_c96e079d3e9749ffb5b639db738a90d3 |
| So there is no dedicated memory block for every entity and no distinction between entity mentions and non-mention words. | So there is no dedicated high-dimensional memory block for every entity and no distinction between entity mentions and non-mention words. | entailment | 2 | diagnostics_test_0cd1e48c59fd45079e649f8b8ffa79cb |
| So there is no dedicated high-dimensional memory block for every entity and no distinction between entity mentions and non-mention words. | So there is no dedicated memory block for every entity and no distinction between entity mentions and non-mention words. | not_entailment | 2 | diagnostics_test_b9a40be9485341bf83e1f691bd3f3175 |
| Our approach complements these previous methods. | Our approach complements some previous methods. | entailment | 2 | diagnostics_test_6d6a999a37ac4bc69cf200d692441abc |
| Our approach complements some previous methods. | Our approach complements these previous methods. | not_entailment | 2 | diagnostics_test_389d011518cc45f8a800803d9f04f4bf |
| We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | We manually annotated over 650 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | entailment | 2 | diagnostics_test_7ff5e6b30ece45c3a77da62657113030 |
| We manually annotated over 650 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | not_entailment | 2 | diagnostics_test_5de274475ff34c0facba2bedb4e39c42 |
| We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | We manually annotated over 690 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | not_entailment | 2 | diagnostics_test_9682bbc74b4d479ebded5acfc779648f |
| We manually annotated over 690 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. | not_entailment | 2 | diagnostics_test_3dd6a70d87f34762a6d7ce3f9d771c70 |
| To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question. | To generate diversity, workers whose paraphrases had high edit distance compared to the MG question got a bonus. | entailment | 2 | diagnostics_test_ffd410b98b9f420eb208abae07cc5757 |
| To generate diversity, workers whose paraphrases had high edit distance compared to the MG question got a bonus. | To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question. | entailment | 2 | diagnostics_test_ca21c289e1974b94a5d3a4360de3db3e |
| To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question. | To generate diversity, workers got a bonus if the edit distance of a paraphrase was above 3 operations compared to the MG question. | not_entailment | 2 | diagnostics_test_9035973cb3c14c80af825a5cbd308c33 |
| To generate diversity, workers got a bonus if the edit distance of a paraphrase was above 3 operations compared to the MG question. | To generate diversity, workers got a bonus if the edit distance of a paraphrase was high compared to the MG question. | not_entailment | 2 | diagnostics_test_3cf44feb8dd2475bb5ce05b97c700ab2 |
| To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | To generate simple questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | not_entailment | 2 | diagnostics_test_f7f754a139084884a2579e945ffe17b5 |
| To generate simple questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | not_entailment | 2 | diagnostics_test_1e28eb60ba3e45cda3e49d6277f51c64 |
| To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | To generate highly compositional questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | entailment | 2 | diagnostics_test_e45d658d75c449ba9b2a84e0c800714f |
| To generate highly compositional questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | To generate complex questions we use the dataset WEBQUESTIONSSP, which contains 4,737 questions paired with SPARQL queries for Freebase. | entailment | 2 | diagnostics_test_507c8ac4f90b46d18eeccd002b243da7 |
| In this paper, we explore the idea of polyglot semantic translation, or learning semantic parsing models that are trained on multiple datasets and natural languages. | In this paper, we explore the idea of learning semantic parsing models that are trained on multiple datasets and natural languages. | entailment | 2 | diagnostics_test_705b7b62686f4ac08d8c870dfd5fba48 |
| In this paper, we explore the idea of learning semantic parsing models that are trained on multiple datasets and natural languages. | In this paper, we explore the idea of polyglot semantic translation, or learning semantic parsing models that are trained on multiple datasets and natural languages. | entailment | 2 | diagnostics_test_1137b2b6726e439fb4bcc3df26770f0c |
| They then use a discriminative model to rerank the translation output using additional nonworld level features. | They then use a generative model to rerank the translation output using additional nonworld level features. | not_entailment | 2 | diagnostics_test_c473c136d50f40578c3711b0ab21735e |
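For a quick sanity check over an excerpt like this, a sketch (reusing the hypothetical `DiagnosticExample` above) that tallies the label distribution and counts rows whose swapped sentence pair also occurs as a row of its own:

```python
from collections import Counter

def summarize(examples: list[DiagnosticExample]) -> tuple[Counter, int]:
    """Return label counts plus the number of rows that have a reversed
    twin, i.e. whose (hypothesis, premise) pair appears as another row's
    (premise, hypothesis)."""
    labels = Counter(ex.label for ex in examples)
    pairs = {(ex.premise, ex.hypothesis) for ex in examples}
    twins = sum((ex.hypothesis, ex.premise) in pairs for ex in examples)
    return labels, twins
```

Applied to the 100 rows above, a tally like this makes the set's design visible: many swapped pairs flip between `entailment` and `not_entailment`, since entailment is directional.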