---
license: mit
dataset_info:
  features:
    - name: text_query
      dtype: string
    - name: language
      dtype: string
    - name: sparql_query
      dtype: string
    - name: knowledge_graphs
      dtype: string
    - name: context
      dtype: string
  splits:
    - name: train
      num_bytes: 374237004
      num_examples: 895166
    - name: test
      num_bytes: 230499
      num_examples: 788
  download_size: 97377947
  dataset_size: 374467503
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
task_categories:
  - text-generation
language:
  - en
  - de
  - he
  - kn
  - zh
  - es
  - it
  - fr
  - nl
  - ro
  - fa
  - ru
tags:
  - code
size_categories:
  - 100K<n<1M
---

## Dataset Description

This dataset contains 895,954 examples of natural language questions paired with their corresponding SPARQL queries. It spans 12 languages and targets 15 distinct knowledge graphs, with a significant portion focused on Wikidata and DBpedia.

The dataset was developed as part of the Master's thesis "Impact of Continual Multilingual Pre-training on Cross-Lingual Transferability for Source Languages". Its purpose is to facilitate research on text-to-SPARQL generation, particularly in multilingual settings.

**Key Features:**

- **Multilingual**: Covers 12 languages: English (en), German (de), Hebrew (he), Kannada (kn), Chinese (zh), Spanish (es), Italian (it), French (fr), Dutch (nl), Romanian (ro), Farsi (fa), and Russian (ru).
- **Diverse Knowledge Graphs**: Includes queries for 15 KGs, most prominently Wikidata and DBpedia.
- **Large Scale**: Nearly 900,000 question-SPARQL pairs.
- **Augmented Data**: Includes German translations of many English questions, as well as Wikidata entity/relationship mappings in the `context` column for most German- and English-language Wikidata examples.

## Dataset Structure

The dataset is provided in Parquet format and consists of the following columns:

- `text_query` (string): The natural language question (e.g., "What is the boiling point of water?").
- `language` (string): The language code of the `text_query` (e.g., 'de', 'en', 'es').
- `sparql_query` (string): The corresponding SPARQL query (e.g., `PREFIX dbo: <http://dbpedia.org/ontology/> ... SELECT DISTINCT ?uri WHERE { ... }`).
- `knowledge_graphs` (string): The knowledge graph targeted by the `sparql_query` (e.g., 'DBpedia', 'Wikidata').
- `context` (string, often null): Optional Wikidata entity/relationship mappings as a JSON string (e.g., `{"entities": {"United States Army": "Q9212"}, "relationships": {"spouse": "P26"}}`); see the parsing sketch below.
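
The `context` value, when present, is a plain JSON string, so it can be decoded with Python's standard `json` module. A minimal sketch, assuming the `{"entities": ..., "relationships": ...}` layout shown above:

```python
import json

from datasets import load_dataset

dataset = load_dataset("julioc-p/Question-Sparql", split="train")

# Scan for the first example that carries entity/relationship mappings.
example = next(ex for ex in dataset if ex["context"])
context = json.loads(example["context"])

print(context.get("entities", {}))       # e.g. {"United States Army": "Q9212"}
print(context.get("relationships", {}))  # e.g. {"spouse": "P26"}
```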

## Data Splits

- **train**: 895,166 rows.
- **test**: 788 rows.

## How to Use

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load a specific split (e.g., train)
dataset = load_dataset("julioc-p/Question-Sparql", split="train")

# Inspect the first example
for example in dataset:
    print(f"Question ({example['language']}): {example['text_query']}")
    print(f"Knowledge Graph: {example['knowledge_graphs']}")
    print(f"SPARQL Query: {example['sparql_query']}")
    if example['context']:
        print(f"Context: {example['context']}")
    print("-" * 20)
    break
```
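
Since splits load as regular `Dataset` objects, you can also slice the corpus by language or knowledge graph with the standard `filter` method. A minimal sketch, reusing `dataset` from above (the 'de' and 'Wikidata' values follow the spellings shown earlier):

```python
# Keep only German questions that target Wikidata.
german_wikidata = dataset.filter(
    lambda ex: ex["language"] == "de" and ex["knowledge_graphs"] == "Wikidata"
)
print(len(german_wikidata))
```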