---
dataset_info:
  - config_name: nb
    features:
      - name: id
        dtype: string
      - name: question
        dtype: string
      - name: choices
        struct:
          - name: label
            sequence: string
          - name: text
            sequence: string
      - name: answer
        dtype: string
      - name: curated
        dtype: bool
    splits:
      - name: train
        num_bytes: 219533
        num_examples: 998
    download_size: 128333
    dataset_size: 219533
  - config_name: nn
    features:
      - name: id
        dtype: string
      - name: question
        dtype: string
      - name: choices
        struct:
          - name: label
            sequence: string
          - name: text
            sequence: string
      - name: answer
        dtype: string
      - name: curated
        dtype: bool
    splits:
      - name: train
        num_bytes: 24664
        num_examples: 95
    download_size: 20552
    dataset_size: 24664
configs:
  - config_name: nb
    data_files:
      - split: train
        path: nb/train-*
  - config_name: nn
    data_files:
      - split: train
        path: nn/train-*
license: mit
task_categories:
  - question-answering
language:
  - nb
  - nn
pretty_name: NorCommonSenseQA
size_categories:
  - 1K<n<10K
---

# Dataset Card for NorCommonSenseQA

## Dataset Details

### Dataset Description

NorCommonSenseQA is a multiple-choice question answering (QA) dataset designed for zero-shot evaluation of the commonsense reasoning abilities of language models. NorCommonSenseQA comprises 1093 examples across both written standards of Norwegian: Bokmål and Nynorsk (the minority variant). Each example consists of a question and five answer choices.

NorCommonSenseQA is part of a collection of Norwegian QA datasets, which also includes NRK-Quiz-QA, NorOpenBookQA, NorTruthfulQA (Multiple Choice), and NorTruthfulQA (Generation). We describe our high-level dataset creation approach here and provide more details, general statistics, and model evaluation results in our paper.

## Citation

```bibtex
@article{mikhailov2025collection,
  title={A Collection of Question Answering Datasets for Norwegian},
  author={Mikhailov, Vladislav and M{\ae}hlum, Petter and Lang{\o}, Victoria Ovedie Chruickshank and Velldal, Erik and {\O}vrelid, Lilja},
  journal={arXiv preprint arXiv:2501.11128},
  year={2025}
}
```

## Uses

NorCommonSenseQA is intended for zero-shot evaluation of language models for Norwegian.
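As a minimal sketch of such an evaluation, the snippet below formats one instance into a multiple-choice prompt. The repository path `ltg/norcommonsenseqa` and config names (`nb`, `nn`) follow this card's metadata; `build_prompt` is a hypothetical helper, and the inline record mirrors the Bokmål instance shown later in this card.

```python
# Loading the dataset would look roughly like this (requires network access):
# from datasets import load_dataset
# dataset = load_dataset("ltg/norcommonsenseqa", "nb", split="train")

def build_prompt(example: dict) -> str:
    """Format one example as a zero-shot multiple-choice prompt."""
    lines = [example["question"]]
    for label, text in zip(example["choices"]["label"], example["choices"]["text"]):
        lines.append(f"{label}. {text}")
    lines.append("Svar:")  # "Answer:" in Norwegian
    return "\n".join(lines)

example = {
    "question": "Hvor plasserer man et pizzastykke før man spiser det?",
    "choices": {
        "label": ["A", "B", "C", "D", "E"],
        "text": ["På bordet", "På en tallerken", "På en restaurant", "I ovnen", "Populær"],
    },
    "answer": "B",
}
print(build_prompt(example))
```

A model's answer would then be scored by comparing its predicted label against the `answer` field.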

## Dataset Creation

NorCommonSenseQA is created by adapting the English CommonSenseQA dataset through a two-stage annotation process. Our annotation team consists of 21 BA/BSc and MA/MSc students in linguistics and computer science, all native Norwegian speakers. The team is divided into two groups: 19 annotators focus on Bokmål, while two annotators work on Nynorsk.

**Stage 1: Human annotation and translation.** The annotation task here involves adapting the English examples from CommonSenseQA using two strategies.
  1. Manual translation and localization: The annotators manually translate the original examples, localizing them to reflect Norwegian contexts where necessary.
  2. Creative adaptation: The annotators create new examples in Bokmål and Nynorsk from scratch, drawing inspiration from the English examples shown to them.

**Stage 2: Data curation.** This stage aims to filter out low-quality examples collected during the first stage. Due to resource constraints, we have curated 91% of the examples (998 out of 1093), with each example validated by a single annotator. Each annotator receives pairs of original and translated/localized examples, or newly created examples, for review. The annotation task here involves two main steps.
  1. Quality judgment: The annotators judge the overall quality of an example and flag any example that is of low quality or requires substantial revision. Such examples are not included in our datasets.
  2. Quality control: The annotators check the spelling, grammar, and natural flow of an example, making minor edits if needed.
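Since curation status is recorded per example in the `curated` field, users who want only validated data can filter on that flag. A minimal sketch, using toy stand-in records with this card's schema:

```python
# Restrict evaluation to the curated subset via the `curated` flag.
# These records are illustrative stand-ins, not real dataset entries.
records = [
    {"id": "a", "curated": True},
    {"id": "b", "curated": False},
    {"id": "c", "curated": True},
]

curated = [r for r in records if r["curated"]]
coverage = len(curated) / len(records)
print(len(curated), f"{coverage:.0%}")
```

With the real data, the same filter reproduces the curated subset described above (998 of 1093 examples, about 91%).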

### Personal and Sensitive Information

The dataset does not contain information considered personal or sensitive.

## Dataset Structure

### Dataset Instances

Each dataset instance looks as follows:

#### Bokmål

```python
{
    'id': 'ec882fc3a9bfaeae2a26fe31c2ef2c07',
    'question': 'Hvor plasserer man et pizzastykke før man spiser det?',
    'choices': {
        'label': ['A', 'B', 'C', 'D', 'E'],
        'text': [
            'På bordet',
            'På en tallerken',
            'På en restaurant',
            'I ovnen',
            'Populær'
        ]
    },
    'answer': 'B',
    'curated': True
}
```

#### Nynorsk

```python
{
    'id': 'd35a8a3bd560fdd651ecf314878ed30f-78',
    'question': 'Viss du skulle ha steikt noko kjøt, kva ville du ha sett kjøtet på?',
    'choices': {
        'label': ['A', 'B', 'C', 'D', 'E'],
        'text': [
            'Olje',
            'Ein frysar',
            'Eit skinkesmørbrød',
            'Ein omn',
            'Ei steikepanne'
        ]
    },
    'answer': 'E',
    'curated': False
}
```

### Dataset Fields

- `id`: a unique example ID
- `question`: the question text
- `choices`: the answer choices (`label`: a list of labels; `text`: a list of possible answers)
- `answer`: the correct answer, given as one of the labels (A/B/C/D/E)
- `curated`: an indicator of whether the example has been curated or not
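The field descriptions above can be expressed as a simple schema check. This is a sketch based solely on this card; `validate_example` is a hypothetical helper, not part of the dataset's tooling.

```python
def validate_example(ex: dict) -> bool:
    """Check that one instance matches the documented NorCommonSenseQA schema."""
    labels = ex["choices"]["label"]
    texts = ex["choices"]["text"]
    return (
        isinstance(ex["id"], str)
        and isinstance(ex["question"], str)
        and labels == ["A", "B", "C", "D", "E"]   # five labeled choices
        and len(texts) == len(labels)              # one text per label
        and ex["answer"] in labels                 # answer is a valid label
        and isinstance(ex["curated"], bool)
    )

# Applied to the Nynorsk instance shown above:
example = {
    "id": "d35a8a3bd560fdd651ecf314878ed30f-78",
    "question": "Viss du skulle ha steikt noko kjøt, kva ville du ha sett kjøtet på?",
    "choices": {
        "label": ["A", "B", "C", "D", "E"],
        "text": ["Olje", "Ein frysar", "Eit skinkesmørbrød", "Ein omn", "Ei steikepanne"],
    },
    "answer": "E",
    "curated": False,
}
print(validate_example(example))  # → True
```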

## Dataset Card Contact