---
license: mit
task_categories:
  - question-answering
language:
  - en
size_categories:
  - 1M<n<10M
dataset_info:
  config_name: text-corpus
  features:
    - name: title
      dtype: string
    - name: article
      dtype: string
  splits:
    - name: depth_20_size_25_seed_1
      num_bytes: 12734
      num_examples: 26
    - name: depth_20_size_25_seed_2
      num_bytes: 12888
      num_examples: 25
    - name: depth_20_size_25_seed_3
      num_bytes: 11729
      num_examples: 25
    - name: depth_20_size_50_seed_1
      num_bytes: 25543
      num_examples: 51
    - name: depth_20_size_50_seed_2
      num_bytes: 25910
      num_examples: 50
    - name: depth_20_size_50_seed_3
      num_bytes: 25426
      num_examples: 51
    - name: depth_20_size_100_seed_1
      num_bytes: 52632
      num_examples: 101
    - name: depth_20_size_100_seed_2
      num_bytes: 52884
      num_examples: 101
    - name: depth_20_size_100_seed_3
      num_bytes: 51329
      num_examples: 101
    - name: depth_20_size_200_seed_1
      num_bytes: 105140
      num_examples: 201
    - name: depth_20_size_200_seed_2
      num_bytes: 104014
      num_examples: 203
    - name: depth_20_size_200_seed_3
      num_bytes: 103079
      num_examples: 202
    - name: depth_20_size_300_seed_1
      num_bytes: 159074
      num_examples: 302
    - name: depth_20_size_300_seed_2
      num_bytes: 155315
      num_examples: 303
    - name: depth_20_size_300_seed_3
      num_bytes: 155195
      num_examples: 304
    - name: depth_20_size_400_seed_1
      num_bytes: 208421
      num_examples: 403
    - name: depth_20_size_400_seed_2
      num_bytes: 207117
      num_examples: 403
    - name: depth_20_size_400_seed_3
      num_bytes: 206623
      num_examples: 404
    - name: depth_20_size_500_seed_1
      num_bytes: 259954
      num_examples: 503
    - name: depth_20_size_500_seed_2
      num_bytes: 258230
      num_examples: 503
    - name: depth_20_size_500_seed_3
      num_bytes: 257583
      num_examples: 504
    - name: depth_20_size_1000_seed_1
      num_bytes: 518503
      num_examples: 1007
    - name: depth_20_size_1000_seed_2
      num_bytes: 514399
      num_examples: 1006
    - name: depth_20_size_1000_seed_3
      num_bytes: 516161
      num_examples: 1008
    - name: depth_20_size_2500_seed_1
      num_bytes: 1294832
      num_examples: 2516
    - name: depth_20_size_2500_seed_2
      num_bytes: 1291796
      num_examples: 2518
    - name: depth_20_size_2500_seed_3
      num_bytes: 1291338
      num_examples: 2518
    - name: depth_20_size_5000_seed_1
      num_bytes: 2594123
      num_examples: 5030
    - name: depth_20_size_5000_seed_2
      num_bytes: 2588081
      num_examples: 5029
    - name: depth_20_size_5000_seed_3
      num_bytes: 2588663
      num_examples: 5039
    - name: depth_20_size_10000_seed_1
      num_bytes: 5175231
      num_examples: 10069
    - name: depth_20_size_10000_seed_2
      num_bytes: 5189283
      num_examples: 10058
    - name: depth_20_size_10000_seed_3
      num_bytes: 5179131
      num_examples: 10070
  download_size: 10322976
  dataset_size: 31192361
configs:
  - config_name: text-corpus
    data_files:
      - split: depth_20_size_25_seed_1
        path: text-corpus/depth_20_size_25_seed_1-*
      - split: depth_20_size_25_seed_2
        path: text-corpus/depth_20_size_25_seed_2-*
      - split: depth_20_size_25_seed_3
        path: text-corpus/depth_20_size_25_seed_3-*
      - split: depth_20_size_50_seed_1
        path: text-corpus/depth_20_size_50_seed_1-*
      - split: depth_20_size_50_seed_2
        path: text-corpus/depth_20_size_50_seed_2-*
      - split: depth_20_size_50_seed_3
        path: text-corpus/depth_20_size_50_seed_3-*
      - split: depth_20_size_100_seed_1
        path: text-corpus/depth_20_size_100_seed_1-*
      - split: depth_20_size_100_seed_2
        path: text-corpus/depth_20_size_100_seed_2-*
      - split: depth_20_size_100_seed_3
        path: text-corpus/depth_20_size_100_seed_3-*
      - split: depth_20_size_200_seed_1
        path: text-corpus/depth_20_size_200_seed_1-*
      - split: depth_20_size_200_seed_2
        path: text-corpus/depth_20_size_200_seed_2-*
      - split: depth_20_size_200_seed_3
        path: text-corpus/depth_20_size_200_seed_3-*
      - split: depth_20_size_300_seed_1
        path: text-corpus/depth_20_size_300_seed_1-*
      - split: depth_20_size_300_seed_2
        path: text-corpus/depth_20_size_300_seed_2-*
      - split: depth_20_size_300_seed_3
        path: text-corpus/depth_20_size_300_seed_3-*
      - split: depth_20_size_400_seed_1
        path: text-corpus/depth_20_size_400_seed_1-*
      - split: depth_20_size_400_seed_2
        path: text-corpus/depth_20_size_400_seed_2-*
      - split: depth_20_size_400_seed_3
        path: text-corpus/depth_20_size_400_seed_3-*
      - split: depth_20_size_500_seed_1
        path: text-corpus/depth_20_size_500_seed_1-*
      - split: depth_20_size_500_seed_2
        path: text-corpus/depth_20_size_500_seed_2-*
      - split: depth_20_size_500_seed_3
        path: text-corpus/depth_20_size_500_seed_3-*
      - split: depth_20_size_1000_seed_1
        path: text-corpus/depth_20_size_1000_seed_1-*
      - split: depth_20_size_1000_seed_2
        path: text-corpus/depth_20_size_1000_seed_2-*
      - split: depth_20_size_1000_seed_3
        path: text-corpus/depth_20_size_1000_seed_3-*
      - split: depth_20_size_2500_seed_1
        path: text-corpus/depth_20_size_2500_seed_1-*
      - split: depth_20_size_2500_seed_2
        path: text-corpus/depth_20_size_2500_seed_2-*
      - split: depth_20_size_2500_seed_3
        path: text-corpus/depth_20_size_2500_seed_3-*
      - split: depth_20_size_5000_seed_1
        path: text-corpus/depth_20_size_5000_seed_1-*
      - split: depth_20_size_5000_seed_2
        path: text-corpus/depth_20_size_5000_seed_2-*
      - split: depth_20_size_5000_seed_3
        path: text-corpus/depth_20_size_5000_seed_3-*
      - split: depth_20_size_10000_seed_1
        path: text-corpus/depth_20_size_10000_seed_1-*
      - split: depth_20_size_10000_seed_2
        path: text-corpus/depth_20_size_10000_seed_2-*
      - split: depth_20_size_10000_seed_3
        path: text-corpus/depth_20_size_10000_seed_3-*
---

# Dataset Card for PhantomWiki

This repository is a collection of PhantomWiki instances generated using the phantom-wiki Python package.

PhantomWiki is a framework for generating unique, factually consistent document corpora with diverse question-answer pairs. Unlike prior work, PhantomWiki is neither a fixed dataset nor based on any existing data. Instead, a new PhantomWiki instance is generated on demand for each evaluation.
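
A corpus split can be loaded with the Hugging Face `datasets` library. The snippet below is a minimal sketch: the repository id is an assumption (substitute the id shown on this repository's page), while the configuration and split names are taken from the metadata above.

```python
# Minimal loading sketch. The repository id below is an assumption; replace it
# with the id shown on this dataset's Hugging Face page.
from datasets import load_dataset

corpus = load_dataset(
    "kilian-group/phantom-wiki-v050",  # assumed repository id
    name="text-corpus",                # configuration provided by this repository
    split="depth_20_size_50_seed_1",   # one generated universe
)

print(corpus)               # Dataset with 'title' and 'article' columns
print(corpus[0]["title"])   # title of the first article
```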

## Dataset Details

### Dataset Description

PhantomWiki generates a fictional universe of characters along with a set of facts. We reflect these facts in a large-scale corpus, mimicking the style of fan-wiki websites. Then we generate question-answer pairs with tunable difficulty, encapsulating the types of multi-hop questions commonly considered in the question-answering (QA) literature.

- **Curated by:** Albert Gong, Kamilė Stankevičiūtė, Chao Wan, Anmol Kabra, Raphael Thesmar, Johann Lee, Julius Klenke, Carla P. Gomes, Kilian Q. Weinberger
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** English
- **License:** Apache License 2.0

### Dataset Sources [optional]

## Uses

PhantomWiki is intended to evaluate retrieval-augmented generation (RAG) systems and agentic workflows.

### Direct Use

Owing to its fully synthetic and controllable nature, PhantomWiki is particularly useful for disentangling the reasoning and retrieval capabilities of large language models.
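
For illustration, the sketch below treats the articles of one text-corpus split as the retrieval corpus for a simple RAG-style baseline. The TF-IDF retriever (via scikit-learn), the repository id, and the example question are all assumptions made for demonstration; any retriever can be substituted.

```python
# Hypothetical retrieval baseline over one PhantomWiki corpus split.
# TF-IDF stands in for whatever retriever is under evaluation.
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = load_dataset(
    "kilian-group/phantom-wiki-v050",  # assumed repository id
    name="text-corpus",
    split="depth_20_size_50_seed_1",
)
titles, articles = corpus["title"], corpus["article"]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(articles)  # one row per article

def retrieve(question: str, k: int = 4) -> list[str]:
    """Return the titles of the k articles most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    return [titles[i] for i in scores.argsort()[::-1][:k]]

# Illustrative query; real questions come from the question-answer configuration.
print(retrieve("Who is the friend of the person who lives in ...?"))
```

The retrieved articles would then be placed in the context of the language model under evaluation, together with questions from the question-answer configuration, so that retrieval quality and reasoning quality can be varied independently.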

### Out-of-Scope Use

[More Information Needed]

## Dataset Structure

PhantomWiki exposes three components, reflected in the three configurations:

1. `question-answer`: question-answer pairs generated using a context-free grammar
2. `text-corpus`: documents generated using natural-language templates
3. `database`: Prolog database containing the facts and clauses representing the universe

Each universe is saved as a separate split; the split name encodes the generation parameters (e.g., `depth_20_size_50_seed_1`).
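
Since split names encode the generation parameters, individual universes can be enumerated or selected programmatically. A small sketch, again assuming the repository id:

```python
# Sketch: enumerate available universes and load one by its parameters.
# The repository id is an assumption; substitute the actual one.
from datasets import get_dataset_split_names, load_dataset

repo_id = "kilian-group/phantom-wiki-v050"

# All splits of the text-corpus configuration, one per generated universe.
splits = get_dataset_split_names(repo_id, "text-corpus")
print(splits[:3])

# Construct a split name from its parameters and load that universe.
depth, size, seed = 20, 500, 1
universe = load_dataset(
    repo_id, name="text-corpus", split=f"depth_{depth}_size_{size}_seed_{seed}"
)
print(len(universe))  # number of articles in this universe
```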

## Dataset Creation

### Curation Rationale

Most mathematical and logical reasoning datasets do not explicitly evaluate retrieval capabilities, and few retrieval datasets incorporate complex reasoning, with notable exceptions such as BRIGHT and MultiHop-RAG. However, virtually all retrieval datasets are derived from Wikipedia or other internet articles, which are contained in LLM training data. We take the first steps toward a large-scale synthetic dataset that can evaluate LLMs' reasoning and retrieval capabilities.

### Source Data

This is a synthetic dataset.

#### Data Collection and Processing

This dataset was generated on commodity CPUs using Python and Prolog. See the paper for full details of the generation pipeline, including timings.

#### Who are the source data producers?

N/A

### Annotations [optional]

N/A

#### Annotation process

N/A

#### Who are the annotators?

N/A

### Personal and Sensitive Information

N/A

## Bias, Risks, and Limitations

N/A

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

Albert Gong

## Dataset Card Contact

[email protected]