---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: question
      dtype: string
    - name: choices
      sequence: string
    - name: answerID
      dtype: int64
  splits:
    - name: eval
      num_bytes: 396224
      num_examples: 1954
    - name: train
      num_bytes: 2017203
      num_examples: 10000
  download_size: 1321087
  dataset_size: 2413427
configs:
  - config_name: default
    data_files:
      - split: eval
        path: data/eval-*
      - split: train
        path: data/train-*
---

# siqa Dataset

## Overview

This repository contains a processed version of the siqa (Social IQa) dataset. The data is formatted as a collection of multiple-choice questions and is split into `train` and `eval` sets.

## Dataset Structure

Each example in the dataset contains the following fields:

```json
{
  "id": 0,
  "question": "Tracy didn't go home that evening and resisted Riley's attacks. What does Tracy need to do before this?",
  "choices": [
    "make a new plan",
    "Go home and see Riley",
    "Find somewhere to go"
  ],
  "answerID": 2
}
```

## Fields Description

- `id`: Unique integer identifier for each example
- `question`: The question or prompt text
- `choices`: List of the possible answers
- `answerID`: 0-based index of the correct answer in the `choices` list; in the example above, `2` points to "Find somewhere to go" (see the sketch below)
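
As an illustration of how these fields fit together, here is a minimal, self-contained sketch (not part of the dataset's tooling; the `format_example` helper is purely illustrative) that renders one example as a lettered multiple-choice prompt and resolves `answerID` to its answer text:

```python
# Illustrative helper: pairs each choice with a letter and looks up the
# gold answer via the 0-based answerID.
def format_example(example: dict) -> str:
    letters = "ABCDEFGH"
    lines = [example["question"]]
    for idx, choice in enumerate(example["choices"]):
        lines.append(f"{letters[idx]}. {choice}")
    lines.append(f"Answer: {letters[example['answerID']]}")
    return "\n".join(lines)

# The example from the section above, reproduced as a plain dict.
example = {
    "id": 0,
    "question": "Tracy didn't go home that evening and resisted Riley's attacks. "
                "What does Tracy need to do before this?",
    "choices": ["make a new plan", "Go home and see Riley", "Find somewhere to go"],
    "answerID": 2,
}

print(format_example(example))  # the gold answer is C ("Find somewhere to go")
```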

## Loading the Dataset

You can load this dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("DatologyAI/siqa")

# Access the data
for example in dataset['train']:
    print(example)
```
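
The dataset card above defines `eval` and `train` splits. If you only need one of them, the standard `split` argument of `load_dataset` returns it directly, as in this short sketch:

```python
from datasets import load_dataset

# Load only the evaluation split (1,954 examples per the dataset card)
eval_set = load_dataset("DatologyAI/siqa", split="eval")
print(len(eval_set), "evaluation examples")
```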

## Example Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("DatologyAI/siqa")

# Get a sample question
sample = dataset['train'][0]

# Print the question, the numbered choices, and the correct answer
print("Question:", sample['question'])
print("Choices:")
for idx, choice in enumerate(sample['choices']):
    print(f"{idx}. {choice}")
print("Correct Answer:", sample['choices'][sample['answerID']])
```