
Dataset Card for PhantomWiki

This repository is a collection of PhantomWiki instances generated using the phantom-wiki Python package.

PhantomWiki is a framework for generating unique, factually consistent document corpora with diverse question-answer pairs. Unlike prior work, PhantomWiki is neither a fixed dataset nor derived from any existing data. Instead, a new PhantomWiki instance is generated on demand for each evaluation.

Dataset Details

Dataset Description

PhantomWiki generates a fictional universe of characters along with a set of facts. We reflect these facts in a large-scale corpus, mimicking the style of fan-wiki websites. Then we generate question-answer pairs with tunable difficulties, encapsulating the types of multi-hop questions commonly considered in the question-answering (QA) literature.

  • Curated by: Albert Gong, Kamilė Stankevičiūtė, Chao Wan, Anmol Kabra, Raphael Thesmar, Johann Lee, Julius Klenke, Carla P. Gomes, Kilian Q. Weinberger
  • Funded by [optional]: [More Information Needed]
  • Shared by [optional]: [More Information Needed]
  • Language(s) (NLP): English
  • License: Apache License 2.0

Dataset Sources [optional]

Uses

PhantomWiki is intended to evaluate retrieval-augmented generation (RAG) systems and agentic workflows.

Direct Use

Owing to its fully synthetic and controllable nature, PhantomWiki is particularly useful for disentangling the reasoning and retrieval capabilities of large language models.
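For instance, a minimal evaluation loop pairs the text-corpus and question-answer configurations (described under Dataset Structure below) for a single universe. The sketch below is illustrative, not an official harness: the column names question, answer, and article are assumptions (inspect the loaded features for the actual schema), answer_with_rag is a placeholder for the RAG system under evaluation, and exact-match scoring is a simplification.

```python
from datasets import load_dataset

# Placeholder for the RAG system under evaluation -- replace with your own.
def answer_with_rag(question: str, corpus: list[str]) -> str:
    raise NotImplementedError

# One universe (split). Column names "question", "answer", and "article"
# are assumptions; check qa.features before relying on them.
split = "depth_20_size_25_seed_1"
qa = load_dataset("kilian-group/phantom-wiki-v050", "question-answer", split=split)
docs = load_dataset("kilian-group/phantom-wiki-v050", "text-corpus", split=split)

corpus = [row["article"] for row in docs]
correct = 0
for row in qa:
    prediction = answer_with_rag(row["question"], corpus)
    correct += int(prediction == row["answer"])
print(f"Exact match: {correct / len(qa):.2%}")
```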

Out-of-Scope Use

[More Information Needed]

Dataset Structure

PhantomWiki exposes three components, reflected in the three configurations:

  1. question-answer: Question-answer pairs generated using a context-free grammar
  2. text-corpus: Documents generated using natural-language templates
  3. database: Prolog database containing the facts and clauses representing the universe

Each universe is saved as a separate split, named by its generation parameters (e.g., depth_20_size_25_seed_1).
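Assuming the configuration names above map directly to load_dataset config arguments, one universe can be loaded component by component:

```python
from datasets import load_dataset

REPO = "kilian-group/phantom-wiki-v050"
SPLIT = "depth_20_size_25_seed_1"  # one universe, named by generation parameters

qa_pairs = load_dataset(REPO, "question-answer", split=SPLIT)  # QA pairs from the grammar
corpus = load_dataset(REPO, "text-corpus", split=SPLIT)        # wiki-style articles
database = load_dataset(REPO, "database", split=SPLIT)         # Prolog facts and clauses

print(qa_pairs.features)  # inspect the schema before relying on column names
```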

Dataset Creation

Curation Rationale

Most mathematical and logical reasoning datasets do not explicitly evaluate retrieval capabilities, and few retrieval datasets incorporate complex reasoning, with a few exceptions (e.g., BRIGHT, MultiHop-RAG). Moreover, virtually all retrieval datasets are derived from Wikipedia or internet articles, which are contained in LLM training data. We take the first steps toward a large-scale synthetic dataset that can evaluate LLMs' reasoning and retrieval capabilities together.

Source Data

This is a synthetic dataset.

Data Collection and Processing

This dataset was generated on commodity CPUs using Python and Prolog. See the paper for full details of the generation pipeline, including timings.

Who are the source data producers?

N/A

Annotations [optional]

N/A

Annotation process

N/A

Who are the annotators?

N/A

Personal and Sensitive Information

N/A

Bias, Risks, and Limitations

N/A

Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Dataset Card Authors [optional]

Albert Gong

Dataset Card Contact

[email protected]
