---
license: apache-2.0
task_categories:
  - question-answering
  - text2text-generation
language:
  - en
tags:
  - chemistry
  - biology
  - legal
  - medical
configs:
  - config_name: amazon
    data_files:
      - split: test
        path: amazon.json
  - config_name: medicine
    data_files:
      - split: test
        path: medicine.json
  - config_name: physics
    data_files:
      - split: test
        path: physics.json
  - config_name: biology
    data_files:
      - split: test
        path: biology.json
  - config_name: chemistry
    data_files:
      - split: test
        path: chemistry.json
  - config_name: computer_science
    data_files:
      - split: test
        path: computer_science.json
  - config_name: healthcare
    data_files:
      - split: test
        path: healthcare.json
  - config_name: legal
    data_files:
      - split: test
        path: legal.json
  - config_name: literature
    data_files:
      - split: test
        path: literature.json
  - config_name: material_science
    data_files:
      - split: test
        path: material_science.json
---

# GRBench

GRBench is a comprehensive benchmark dataset built to support the development and evaluation of methods for augmenting large language models (LLMs) with external textual graphs.

## Dataset Details

### Dataset Description

GRBench includes 10 real-world graphs that can serve as external knowledge sources for LLMs, spanning five domains: academic, e-commerce, literature, healthcare, and legal. Each sample in GRBench consists of a manually designed question and an answer, which can be answered directly by referring to the graphs or by retrieving information from the graphs as context. To make the dataset comprehensive, we include samples of different difficulty levels: easy questions (which can be answered with single-hop reasoning on graphs), medium questions (which necessitate multi-hop reasoning on graphs), and hard questions (which call for inductive reasoning with information on graphs as context).

### Dataset Sources

- Repository: https://github.com/PeterGriffinJin/Graph-CoT
- Paper: https://arxiv.org/pdf/2404.07103.pdf

## Uses

### Direct Use

You can access the graph environment data for each domain here: https://drive.google.com/drive/folders/1DJIgRZ3G-TOf7h0-Xub5_sE4slBUEqy9. Then load the question-answering data for each domain:

```python
from datasets import load_dataset

# Choose from: amazon, medicine, physics, biology, chemistry,
# computer_science, healthcare, legal, literature, material_science
domain = 'amazon'
dataset = load_dataset("PeterJinGo/GRBench", data_files=f'{domain}.json')
```
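Since each domain is also registered as a config with a `test` split (see the metadata above), a domain can alternatively be loaded by its config name. Below is a minimal sketch; the `question` and `answer` field names are assumptions based on the description above, so check the printed keys before relying on them:

```python
from datasets import load_dataset

# Load one domain via its config name and inspect a single record.
dataset = load_dataset("PeterJinGo/GRBench", "healthcare", split="test")

sample = dataset[0]
print(sample.keys())  # verify the actual field names first
# Assumed fields: each sample pairs a question with its answer.
# print(sample["question"], sample["answer"])
```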

## Dataset Structure

Information on the structure of the graph files can be found here: https://github.com/PeterGriffinJin/Graph-CoT/tree/main/data.
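For a quick first look at a downloaded graph environment, something like the following can help. This is only a sketch that assumes the graph is stored as a single JSON file; the file name `graph.json` is a placeholder, and the real schema is documented in the repository linked above:

```python
import json

# Placeholder path: point this at a graph file downloaded from the
# Google Drive folder linked in the "Direct Use" section.
with open("graph.json") as f:
    graph = json.load(f)

# Print the top-level structure without assuming a particular schema.
print(type(graph))
if isinstance(graph, dict):
    print(list(graph.keys())[:10])
```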

## Dataset Creation

More details on how the dataset was constructed can be found in Section 3 of the paper (https://arxiv.org/pdf/2404.07103.pdf).

The raw graph data sources can be found here: https://github.com/PeterGriffinJin/Graph-CoT/tree/main/data/raw_data.

## Citation

BibTeX:

```bibtex
@article{jin2024graph,
  title={Graph Chain-of-Thought: Augmenting Large Language Models by Reasoning on Graphs},
  author={Jin, Bowen and Xie, Chulin and Zhang, Jiawei and Roy, Kashob Kumar and Zhang, Yu and Wang, Suhang and Meng, Yu and Han, Jiawei},
  journal={arXiv preprint arXiv:2404.07103},
  year={2024}
}
```

## Dataset Card Authors

Bowen Jin

## Dataset Card Contact

[email protected]