---
license: apache-2.0
task_categories:
- question-answering
language:
- en
tags:
- finance
- music
- medical
- food
- academic disciplines
- natural disasters
- software
- synthetic
pretty_name: Using KGs to test knowledge consistency in LLMs
size_categories:
- 10K<n<100K
---




## What it is:
Each dataset in this delivery is made up of query clusters that test an aspect of the consistency of an LLM's knowledge about a particular domain. All the questions in a
cluster are meant to be answered either 'yes' or 'no'. When the answers vary within a cluster, the knowledge is said to be inconsistent. When all the questions in a cluster
are answered 'no' although the expected answer is 'yes' (or vice versa), the knowledge is said to be 'incomplete' (i.e., the LLM may not have been trained on that particular domain).
In our experience, incomplete clusters are rare (less than 3% of clusters), meaning that the LLMs we have tested do know about the domains included here (see below for a list of the
individual datasets). Inconsistent clusters, by contrast, can account for 6%-20% of the total.
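
As a concrete illustration, cluster outcomes could be scored as follows. This is a minimal Python sketch; the function name and the normalization of model answers to 'yes'/'no' strings are our own assumptions, not part of the dataset tooling.

      def score_cluster(answers, expected):
          """Classify a query cluster from a model's normalized yes/no answers."""
          if len(set(answers)) > 1:
              return "inconsistent"   # answers vary within the cluster
          if answers[0] != expected:
              return "incomplete"     # uniformly wrong: domain likely not covered
          return "consistent"         # uniformly correct

      # e.g., three 'yes' and one 'no' on a cluster whose expected answer is 'yes':
      print(score_cluster(["yes", "yes", "no", "yes"], "yes"))  # inconsistent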

The image below indicates the types of edges the query clusters are designed to test. It is worth noting that these correspond to common-sense axioms about conceptualization, such as
the fact that subConceptOf is transitive (4) or that subconcepts inherit the properties of their parent concepts (5). These axioms are listed in the accompanying paper (see below).

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c80841d418013c77d9f1cd/Kdx6_qaipaZvbJKQZ_M9Y.png)
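
To make the two axioms concrete, here is a toy Python sketch with made-up edges and properties (not the paper's implementation): transitivity turns a path of edges into a virtual edge, and inheritance propagates a parent's properties down to its subconcepts.

      # toy data, assumed for illustration only
      sub = {("orthopedic pediatric surgeon", "orthopedic surgeon"),
             ("orthopedic surgeon", "surgeon")}
      props = {("surgeon", "field of work", "surgery")}

      def is_sub(a, b, edges):
          # axiom (4): subConceptOf is transitive -- a path of edges suffices
          # (assumes an acyclic hierarchy, which holds for this toy data)
          if (a, b) in edges:
              return True
          return any(x == a and is_sub(y, b, edges) for x, y in edges)

      def inherited(concept, edges, props):
          # axiom (5): a concept carries every property of its ancestors
          return {(p, v) for c, p, v in props
                  if c == concept or is_sub(concept, c, edges)}

      print(is_sub("orthopedic pediatric surgeon", "surgeon", sub))  # True
      print(inherited("orthopedic surgeon", sub, props))  # {('field of work', 'surgery')}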


## How it is made:
The questions and clusters are automatically generated from a knowledge graph, starting from seed concepts and properties. In our case, we have used Wikidata,
a well-known knowledge graph. The result is an RDF/OWL subgraph that can be queried and reasoned over using Semantic Web technology.
The figure below summarizes the steps involved. The last two steps refer to a possible use case for this dataset, namely using in-context learning to improve
LLM performance on the dataset.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c80841d418013c77d9f1cd/McMdDv_0IzBzrlrVMPfWs.png)
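
As a rough idea of the extraction step, the subclass subgraph around a seed QNode can be pulled from Wikidata with a SPARQL query. This is a hedged sketch, not the paper's pipeline; the endpoint usage is standard, but the query shape and limit are our own.

      import requests

      ENDPOINT = "https://query.wikidata.org/sparql"
      QUERY = """
      SELECT ?childLabel ?parentLabel WHERE {
        ?child wdt:P279+ wd:Q930752 .   # descendants of the seed ('medical specialty')
        ?child wdt:P279 ?parent .       # direct subclass-of edges
        SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
      }
      LIMIT 500
      """

      resp = requests.get(ENDPOINT, params={"query": QUERY, "format": "json"},
                          headers={"User-Agent": "kg-consistency-demo/0.1"})
      edges = [(b["childLabel"]["value"], b["parentLabel"]["value"])
               for b in resp.json()["results"]["bindings"]]
      print(edges[:5])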

## Types of query clusters

There are different types of query clusters depending on what aspect of the knowledge graph and its deductive closure they capture: 

Edge clusters test a single edge using different questions. For example, to test the edge ('orthopedic pediatric surgeon', IsA, 'orthopedic surgeon'), the positive
or 'edge_yes' cluster (expected answer 'yes') is:

      "is 'orthopedic pediatric surgeon' a subconcept of 'orthopedic surgeon' ?",
      "is 'orthopedic pediatric surgeon' a type of 'orthopedic surgeon' ?",
      "is every kind of 'orthopedic pediatric surgeon' also a kind of 'orthopedic surgeon' ?",
      "is 'orthopedic pediatric surgeon' a subcategory of 'orthopedic surgeon' ?"

There are also inverse edge clusters (with questions like "is 'orthopedic surgeon' a subconcept of 'orthopedic pediatric surgeon' ?") and negative or 'edge_no' clusters
(with questions like "is 'orthopedic pediatric surgeon' a subconcept of 'dermatologist' ?").
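
The four paraphrases are template-generated from the edge. A minimal sketch (the template strings are transcribed from the example above; the helper and its 'kind' argument are hypothetical):

      TEMPLATES = [
          "is '{a}' a subconcept of '{b}' ?",
          "is '{a}' a type of '{b}' ?",
          "is every kind of '{a}' also a kind of '{b}' ?",
          "is '{a}' a subcategory of '{b}' ?",
      ]

      def edge_cluster(child, parent, kind="edge_yes"):
          # 'edge_yes' phrases the true edge; the inverse cluster swaps the
          # arguments; an 'edge_no' cluster would pair the child with an
          # unrelated concept in place of its parent.
          a, b = (parent, child) if kind == "inverse" else (child, parent)
          return [t.format(a=a, b=b) for t in TEMPLATES]

      for q in edge_cluster("orthopedic pediatric surgeon", "orthopedic surgeon"):
          print(q)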

Hierarchy clusters measure the consistency of a given path, including n-hop virtual edges (in the graph's deductive closure). For example, the path
('orthopedic surgeon', 'surgeon', 'medical specialist', 'medical occupation') is tested by the cluster below:

      "is 'orthopedic surgeon' a subconcept of 'surgeon' ?",
      "is 'orthopedic surgeon' a type of 'surgeon' ?",
      "is every kind of 'orthopedic surgeon' also a kind of 'surgeon' ?",
      "is 'orthopedic surgeon' a subcategory of 'surgeon' ?",
      "is 'orthopedic surgeon' a subconcept of 'medical specialist' ?",
      "is 'orthopedic surgeon' a type of 'medical specialist' ?",
      "is every kind of 'orthopedic surgeon' also a kind of 'medical specialist' ?",
      "is 'orthopedic surgeon' a subcategory of 'medical specialist' ?",
      "is 'orthopedic surgeon' a subconcept of 'medical_occupation' ?",
      "is 'orthopedic surgeon' a type of 'medical_occupation' ?",
      "is every kind of 'orthopedic surgeon' also a kind of 'medical_occupation' ?",
      "is 'orthopedic surgeon' a subcategory of 'medical_occupation' ?"

Property inheritance clusters test the most basic property of conceptualization: if an orthopedic surgeon is a type of surgeon, we expect that
all the properties of surgeons, e.g., having to be board certified, having attended medical school or working in the field of surgery, are inherited by orthopedic surgeons.
The example below tests the latter:

      "is 'orthopedic surgeon' a subconcept of 'surgeon' ?",
      "is 'orthopedic surgeon' a type of 'surgeon' ?",
      "is every kind of 'orthopedic surgeon' also a kind of 'surgeon' ?",
      "is 'orthopedic surgeon' a subcategory of 'surgeon' ?",
      "is the following statement true? 'orthopedic surgeon works on the field of  surgery' ",
      "is the following statement true? 'surgeon works on the field of  surgery' ",
      "is it accurate to say that  'orthopedic surgeon works on the field of  surgery'? ",
      "is it accurate to say that  'surgeon works on the field of  surgery'? "



## List of datasets 

To show the versatility of our approach, we have constructed similar datasets in the domains below, testing one property inheritance per dataset. The main Wikidata QNode
(the node corresponding to the top concept) and PNode (the node corresponding to the property) are given in the table.

| domain  | top concept | WD concept | main property | WD property |
|----- | ----- | -----| ----- | ----- |
| Academic Disciplines | "Academic Discipline"  | https://www.wikidata.org/wiki/Q11862829 | "has use" | https://www.wikidata.org/wiki/Property:P366 |
| Dishes | "Dish"  | https://www.wikidata.org/wiki/Q746549 | "has parts"  | https://www.wikidata.org/wiki/Property:P527 | 
| Financial products | "Financial product" | https://www.wikidata.org/wiki/Q15809678 | "used by" | https://www.wikidata.org/wiki/Property:P1535 |
| Home appliances | "Home appliance" | https://www.wikidata.org/wiki/Q212920 | "has use" | https://www.wikidata.org/wiki/Property:P366 | 
| Medical specialties | "Medical specialty" | https://www.wikidata.org/wiki/Q930752 | "field of occupation" | https://www.wikidata.org/wiki/Property:P425 | 
| Music genres | "Music genre" | https://www.wikidata.org/wiki/Q188451 | "practiced by" | https://www.wikidata.org/wiki/Property:P3095 | 
| Natural disasters | "Natural disaster" | https://www.wikidata.org/wiki/Q8065 | "has cause" | https://www.wikidata.org/wiki/Property:P828 | 
| Software | "Software" | https://www.wikidata.org/wiki/Q7397 | "studied in" | https://www.wikidata.org/wiki/Property:P7397 | 




The size and configuration of each dataset are listed below ('edges_in' counts the inverse edge clusters):


| domain  | edges_yes | edges_no | edges_in | hierarchies | property hierarchies |
| ------------------- | :----: | :-----: | :-----: | :-----: | :-----: |
| Academic Disciplines  | 52  | 308 | 52 | 30 | 1 | 
| Dishes  | 225  | 521 | 224 | 72 | 178 |
| Financial products | 112 | 433 | 108 | 40 | 32 | 
| Home appliances | 58 | 261 | 58 | 31 | 13 | 
| Medical specialties | 122 | 386 | 114 | 55 | 63 | 
| Music genres | 490 | 807 | 488 | 212 | 139 | 
| Natural disasters | 45 | 225 | 44 | 21 | 22 | 
| Software | 80 | 572 | 79 | 114 | 4 | 
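
As a usage note, the files can be read with the Hugging Face `datasets` library. The file name below is a placeholder for illustration; check this repository's file listing for the actual names and formats.

      from datasets import load_dataset

      # 'medical_specialties.json' is a hypothetical file name; substitute the
      # actual file from this repository.
      ds = load_dataset("json", data_files="medical_specialties.json")
      print(ds["train"][0])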


## Want to know more?

For background and motivation on this dataset, please see https://arxiv.org/abs/2405.20163, also published at COLM 2024:

      @inproceedings{Uceda_2024_1,
        title={Reasoning about concepts with LLMs: Inconsistencies abound},
        author={Rosario Uceda Sosa and Karthikeyan Natesan Ramamurthy and Maria Chang and Moninder Singh},
        booktitle={Proc.\ 1st Conference on Language Modeling (COLM 24)},
        year={2024}
      }

## Questions? Comments?

Please contact [email protected], [email protected], [email protected] or [email protected]