---
license: cc-by-nc-sa-3.0
task_categories:
- question-answering
language:
- en
- fr
- de
- it
- es
- pt
pretty_name: mASNQ
size_categories:
- 100M<n<1B
configs:
- config_name: default
  data_files:
  - split: train_en
    path: eng-train.jsonl
  - split: train_de
    path: deu-train.jsonl
  - split: train_fr
    path: fra-train.jsonl
  - split: train_it
    path: ita-train.jsonl
  - split: train_po
    path: por-train.jsonl
  - split: train_sp
    path: spa-train.jsonl
  - split: validation_en
    path: eng-dev.jsonl
  - split: validation_de
    path: deu-dev.jsonl
  - split: validation_fr
    path: fra-dev.jsonl
  - split: validation_it
    path: ita-dev.jsonl
  - split: validation_po
    path: por-dev.jsonl
  - split: validation_sp
    path: spa-dev.jsonl
- config_name: en
  data_files:
  - split: train
    path: eng-train.jsonl
  - split: validation
    path: eng-dev.jsonl
- config_name: de
  data_files:
  - split: train
    path: deu-train.jsonl
  - split: validation
    path: deu-dev.jsonl
- config_name: fr
  data_files:
  - split: train
    path: fra-train.jsonl
  - split: validation
    path: fra-dev.jsonl
- config_name: it
  data_files:
  - split: train
    path: ita-train.jsonl
  - split: validation
    path: ita-dev.jsonl
- config_name: po
  data_files:
  - split: train
    path: por-train.jsonl
  - split: validation
    path: por-dev.jsonl
- config_name: sp
  data_files:
  - split: train
    path: spa-train.jsonl
  - split: validation
    path: spa-dev.jsonl
---
## Dataset Description

mASNQ is a translated version of ASNQ, an Answer Sentence Selection (AS2) dataset created by adapting the Natural Questions corpus from Machine Reading (MR) to the AS2 task.
The dataset has been translated into five European languages: French, German, Italian, Portuguese, and Spanish, as described in the paper [Datasets for Multilingual Answer Sentence Selection](https://arxiv.org/abs/2406.10172).
### Splits

For each language (English, French, German, Italian, Portuguese, and Spanish), we provide:
- a train split
- a validation split
### How to load them

To use these splits, you can use the following snippet of code, replacing `[LANG]` with a language identifier (en, fr, de, it, po, sp).

```python
from datasets import load_dataset

# if you want the whole corpora
corpora = load_dataset("matteogabburo/mASNQ")

# if you want the default splits of a specific language, replace [LANG] with an
# identifier in: en, fr, de, it, po, sp
dataset = load_dataset("matteogabburo/mASNQ", "[LANG]")

# example:
italian_dataset = load_dataset("matteogabburo/mASNQ", "it")
```
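The splits map directly onto the JSONL files listed in the metadata (one example per line). If you prefer to read a downloaded file directly instead of going through `datasets`, a minimal sketch (the helper name `read_jsonl` is ours, not part of the dataset):

```python
import json

def read_jsonl(path):
    """Yield one example dict per line of a mASNQ JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# example (assumes "ita-train.jsonl" has been downloaded locally):
# for ex in read_jsonl("ita-train.jsonl"):
#     print(ex["question"], ex["label"])
```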
### Format

Each example has the following format:

```python
{
    'eid': 25,
    'qid': 0,
    'cid': 18,
    'label': 1,
    'question': 'what is the use of fn key in mac',
    'candidate': 'While it is most common for the Fn key processing to happen directly in the keyboard micro-controller , offering no knowledge to the main computer of whether the Fn key was pressed , some manufacturers , like Lenovo , perform this mapping in BIOS running on the main CPU , allowing remapping the Fn key by modifying the BIOS interrupt handler ; and Apple , in which the Fn key is mappable and serves other uses too , as triggering the Dictation function by pressing the Fn key twice .'
}
```
Where:
- `eid`: the unique id of the example (question, candidate)
- `qid`: the unique id of the question
- `cid`: the unique id of the answer candidate
- `label`: whether the answer candidate is correct for the question (1 if correct, 0 otherwise)
- `question`: the question
- `candidate`: the answer candidate
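Since each row is a single (question, candidate) pair, AS2 evaluation typically regroups the candidates of each question by `qid`. A minimal sketch of that regrouping in the format above, using two hypothetical in-memory examples (the second one is invented for illustration):

```python
from collections import defaultdict

# Hypothetical examples in the mASNQ row format described above
# (candidate texts truncated for brevity).
examples = [
    {"eid": 25, "qid": 0, "cid": 18, "label": 1,
     "question": "what is the use of fn key in mac",
     "candidate": "... the Fn key is mappable and serves other uses too ..."},
    {"eid": 26, "qid": 0, "cid": 19, "label": 0,
     "question": "what is the use of fn key in mac",
     "candidate": "... an unrelated sentence about keyboards ..."},
]

# Group all candidates under their question id.
by_question = defaultdict(list)
for ex in examples:
    by_question[ex["qid"]].append(ex)

# For each question, keep the ids of the correct candidates (label == 1).
correct = {qid: [ex["cid"] for ex in exs if ex["label"] == 1]
           for qid, exs in by_question.items()}
print(correct)  # {0: [18]}
```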
## Citation

If you find this dataset useful, please cite the following paper:

**BibTeX:**

```bibtex
@misc{gabburo2024datasetsmultilingualanswersentence,
      title={Datasets for Multilingual Answer Sentence Selection},
      author={Matteo Gabburo and Stefano Campese and Federico Agostini and Alessandro Moschitti},
      year={2024},
      eprint={2406.10172},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.10172},
}
```