---
dataset_info:
  features:
    - name: context
      dtype: string
    - name: question
      dtype: string
    - name: answers
      struct:
        - name: answer_start
          sequence: int64
        - name: text
          sequence: string
    - name: id
      dtype: string
    - name: labels
      list:
        - name: end
          sequence: int64
        - name: start
          sequence: int64
  splits:
    - name: train
      num_bytes: 57635506.94441748
      num_examples: 18142
    - name: validation
      num_bytes: 3374870.9449192784
      num_examples: 1070
  download_size: 4666280
  dataset_size: 61010377.88933676
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
---

# Dataset Card for "newsqa"

This truncated dataset is derived from the NewsQA machine comprehension dataset. Its purpose is to keep only those instances of the original dataset whose inputs fit within the maximum context length of the BERT, RoBERTa, OPT, and T5 models.
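
For reference, the dataset can be loaded with the `datasets` library. A minimal sketch, assuming the Hub path `varun-v-rao/newsqa` (taken from this repository's name):

```python
from datasets import load_dataset

# Hub path assumed from this repository's name.
dataset = load_dataset("varun-v-rao/newsqa")

print(dataset)  # DatasetDict with train (18,142) and validation (1,070) splits

sample = dataset["train"][0]
print(sample["question"])
print(sample["answers"])  # {"answer_start": [...], "text": [...]}
```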

## Preprocessing and Filtering

Preprocessing tokenizes each sample with BertTokenizer (WordPiece), RobertaTokenizer (byte-level BPE), the OPT tokenizer (byte-pair encoding), and T5Tokenizer (SentencePiece). A sample is kept only if the length of its tokenized input is within the `model_max_length` of every tokenizer; a sketch of this filter is shown below.
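
A minimal sketch of this filter using Hugging Face `transformers`. The exact checkpoints and filtering code are assumptions; only the overall procedure (tokenize, then compare against `model_max_length`) comes from this card:

```python
from transformers import AutoTokenizer

# Checkpoint names are assumptions; the card only states which tokenizer
# families were used, not the exact checkpoints.
checkpoints = [
    "bert-base-uncased",  # BertTokenizer (WordPiece)
    "roberta-base",       # RobertaTokenizer (byte-level BPE)
    "facebook/opt-350m",  # OPT tokenizer (byte-pair encoding)
    "t5-base",            # T5Tokenizer (SentencePiece)
]
tokenizers = [AutoTokenizer.from_pretrained(name) for name in checkpoints]

def fits_all_models(example):
    """Keep an example only if the tokenized (question, context) pair
    fits within every tokenizer's model_max_length."""
    return all(
        len(tok(example["question"], example["context"])["input_ids"])
        <= tok.model_max_length
        for tok in tokenizers
    )

# `raw_dataset` stands for the full, unfiltered NewsQA dataset, loaded
# separately; this repository contains the already-filtered result.
# filtered = raw_dataset.filter(fits_all_models)
```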