---
license: mit
dataset_info:
  features:
  - name: text_query
    dtype: string
  - name: language
    dtype: string
  - name: sparql_query
    dtype: string
  - name: knowledge_graphs
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: train
    num_bytes: 374237004
    num_examples: 895166
  - name: test
    num_bytes: 230499
    num_examples: 788
  download_size: 97377947
  dataset_size: 374467503
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
task_categories:
- text-generation
language:
- en
- de
- he
- kn
- zh
- es
- it
- fr
- nl
- ro
- fa
- ru
tags:
- code
size_categories:
- 100K<n<1M
---

## Dataset Description

This dataset contains **895,954 examples** of natural language questions paired with their corresponding SPARQL queries. It spans **12 languages** and targets **15 distinct knowledge graphs**, with a significant portion focused on Wikidata and DBpedia.

The dataset was developed as part of the Master's thesis *"Impact of Continual Multilingual Pre-training on Cross-Lingual Transferability for Source Languages"*. Its purpose is to facilitate research on text-to-SPARQL generation, particularly with regard to multilinguality.

### Key Features
* **Multilingual:** Covers 12 languages: English (en), German (de), Hebrew (he), Kannada (kn), Chinese (zh), Spanish (es), Italian (it), French (fr), Dutch (nl), Romanian (ro), Farsi (fa), and Russian (ru).
* **Diverse Knowledge Graphs:** Includes queries for 15 KGs, most prominently Wikidata and DBpedia.
* **Large Scale:** Nearly 900,000 question-SPARQL pairs.
* **Augmented Data:** Includes German translations for many of the English questions, plus Wikidata entity/relationship mappings in the `context` column for most German and English Wikidata examples.

## Dataset Structure

The dataset is provided in Parquet format and consists of the following columns:

* `text_query` (string): The natural language question.
    * *(Example: "What is the boiling point of water?")*
* `language` (string): The language code of the `text_query` (e.g., 'de', 'en', 'es').
* `sparql_query` (string): The corresponding SPARQL query.
    * *(Example: `PREFIX dbo: <http://dbpedia.org/ontology/> ... SELECT DISTINCT ?uri WHERE { ... }`)*
* `knowledge_graphs` (string): The knowledge graph targeted by the `sparql_query` (e.g., 'DBpedia', 'Wikidata').
* `context` (string, often null): Optional Wikidata entity/relationship mappings, serialized as a JSON string (e.g., `{"entities": {"United States Army": "Q9212"}, "relationships": {"spouse": "P26"}}`); see the parsing sketch after this list.

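Since `context` is stored as a raw JSON string rather than a nested feature, it has to be decoded before use. A minimal parsing sketch, reusing the example mapping from above:

```python
import json

# `context` is a JSON string or None; decode it before use.
raw_context = '{"entities": {"United States Army": "Q9212"}, "relationships": {"spouse": "P26"}}'

if raw_context:
    context = json.loads(raw_context)
    # Surface forms mapped to Wikidata item IDs (Q...) and property IDs (P...)
    for name, qid in context.get("entities", {}).items():
        print(f"entity: {name} -> {qid}")
    for name, pid in context.get("relationships", {}).items():
        print(f"relation: {name} -> {pid}")
```
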
### Data Splits

* `train`: 895,166 rows.
* `test`: 788 rows.

## How to Use

You can load the dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load a specific split (e.g., train)
dataset = load_dataset("julioc-p/Question-Sparql", split="train")

# Inspect the first example
for example in dataset:
    print(f"Question ({example['language']}): {example['text_query']}")
    print(f"Knowledge Graph: {example['knowledge_graphs']}")
    print(f"SPARQL Query: {example['sparql_query']}")
    if example['context']:
        print(f"Context: {example['context']}")
    print("-" * 20)
    break
```
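
To work with a single language or knowledge graph, the standard `datasets` filtering API applies; and since the train split is fairly large (~374 MB), streaming avoids downloading it all up front. A short sketch of both:

```python
from datasets import load_dataset

# Filter to German questions that target Wikidata
dataset = load_dataset("julioc-p/Question-Sparql", split="train")
german_wikidata = dataset.filter(
    lambda ex: ex["language"] == "de" and ex["knowledge_graphs"] == "Wikidata"
)
print(len(german_wikidata))

# Alternatively, stream examples without materializing the full split
streamed = load_dataset("julioc-p/Question-Sparql", split="train", streaming=True)
first = next(iter(streamed))
print(first["text_query"])
```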