jackkuo committed
Commit 68c6a8d · verified · 1 parent: 0627e6d

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. -NFQT4oBgHgl3EQfKDVk/content/2301.13258v1.pdf +3 -0
  2. -NFQT4oBgHgl3EQfKDVk/vector_store/index.faiss +3 -0
  3. -tAzT4oBgHgl3EQfSvv8/content/tmp_files/2301.01239v1.pdf.txt +758 -0
  4. -tAzT4oBgHgl3EQfSvv8/content/tmp_files/load_file.txt +382 -0
  5. .gitattributes +34 -0
  6. 1tE1T4oBgHgl3EQflQSM/vector_store/index.pkl +3 -0
  7. 29AzT4oBgHgl3EQfDvqc/content/tmp_files/2301.00982v1.pdf.txt +1521 -0
  8. 29AzT4oBgHgl3EQfDvqc/content/tmp_files/load_file.txt +0 -0
  9. 39AyT4oBgHgl3EQfcPct/content/tmp_files/2301.00277v1.pdf.txt +3607 -0
  10. 39AyT4oBgHgl3EQfcPct/content/tmp_files/load_file.txt +0 -0
  11. 39FAT4oBgHgl3EQfEhxj/content/tmp_files/2301.08422v1.pdf.txt +944 -0
  12. 39FAT4oBgHgl3EQfEhxj/content/tmp_files/load_file.txt +0 -0
  13. 3tFAT4oBgHgl3EQfERy2/content/tmp_files/2301.08421v1.pdf.txt +1420 -0
  14. 3tFAT4oBgHgl3EQfERy2/content/tmp_files/load_file.txt +0 -0
  15. 4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf +3 -0
  16. 4NFKT4oBgHgl3EQf9C5Z/vector_store/index.faiss +3 -0
  17. 5dAyT4oBgHgl3EQfpfgA/content/tmp_files/2301.00524v1.pdf.txt +4048 -0
  18. 5dAyT4oBgHgl3EQfpfgA/content/tmp_files/load_file.txt +0 -0
  19. 5dE4T4oBgHgl3EQf1Q2P/content/tmp_files/2301.05289v1.pdf.txt +0 -0
  20. 5dE4T4oBgHgl3EQf1Q2P/content/tmp_files/load_file.txt +0 -0
  21. 5tE1T4oBgHgl3EQfBAK1/content/2301.02847v1.pdf +3 -0
  22. 5tE1T4oBgHgl3EQfBAK1/vector_store/index.pkl +3 -0
  23. 69E1T4oBgHgl3EQf7AXC/content/tmp_files/2301.03530v1.pdf.txt +1375 -0
  24. 69E1T4oBgHgl3EQf7AXC/content/tmp_files/load_file.txt +0 -0
  25. 6tFAT4oBgHgl3EQfnx0W/vector_store/index.pkl +3 -0
  26. 89FLT4oBgHgl3EQfBi6R/vector_store/index.faiss +3 -0
  27. 8NFLT4oBgHgl3EQfsy_c/vector_store/index.faiss +3 -0
  28. ANFQT4oBgHgl3EQf8jdP/content/2301.13447v1.pdf +3 -0
  29. ANFQT4oBgHgl3EQf8jdP/vector_store/index.faiss +3 -0
  30. AtE1T4oBgHgl3EQfpAWX/content/tmp_files/2301.03327v1.pdf.txt +1369 -0
  31. AtE1T4oBgHgl3EQfpAWX/content/tmp_files/load_file.txt +0 -0
  32. AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf +3 -0
  33. AtE2T4oBgHgl3EQf8QmS/vector_store/index.pkl +3 -0
  34. AtE4T4oBgHgl3EQf5A4w/content/tmp_files/2301.05318v1.pdf.txt +589 -0
  35. AtE4T4oBgHgl3EQf5A4w/content/tmp_files/load_file.txt +505 -0
  36. BdFQT4oBgHgl3EQf9zeq/content/tmp_files/2301.13452v1.pdf.txt +2126 -0
  37. BdFQT4oBgHgl3EQf9zeq/content/tmp_files/load_file.txt +0 -0
  38. BtE1T4oBgHgl3EQfpgUG/vector_store/index.faiss +3 -0
  39. GNAyT4oBgHgl3EQfrfkX/content/tmp_files/2301.00560v1.pdf.txt +851 -0
  40. GNAyT4oBgHgl3EQfrfkX/content/tmp_files/load_file.txt +494 -0
  41. GdAyT4oBgHgl3EQfrflY/content/tmp_files/2301.00561v1.pdf.txt +1425 -0
  42. GdAyT4oBgHgl3EQfrflY/content/tmp_files/load_file.txt +0 -0
  43. INAzT4oBgHgl3EQfHvu0/vector_store/index.faiss +3 -0
  44. J9FLT4oBgHgl3EQfKi8f/vector_store/index.faiss +3 -0
  45. K9E1T4oBgHgl3EQfswUE/content/tmp_files/2301.03368v1.pdf.txt +1404 -0
  46. K9E1T4oBgHgl3EQfswUE/content/tmp_files/load_file.txt +0 -0
  47. KNE3T4oBgHgl3EQfvQuV/vector_store/index.faiss +3 -0
  48. L9E4T4oBgHgl3EQfKAwg/vector_store/index.faiss +3 -0
  49. LdFRT4oBgHgl3EQf1zij/content/2301.13658v1.pdf +3 -0
  50. LdFRT4oBgHgl3EQf1zij/vector_store/index.pkl +3 -0
-NFQT4oBgHgl3EQfKDVk/content/2301.13258v1.pdf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac38d1b693ff07e0099c0ee0be518faf5aa5f098a851e11e8104993b297b5e34
+size 11128650
-NFQT4oBgHgl3EQfKDVk/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b574fc534b7aa29d57f963611f6a98e2135fd8bd63a60faa4f46b2a7d90be18f
+size 10485805
-tAzT4oBgHgl3EQfSvv8/content/tmp_files/2301.01239v1.pdf.txt ADDED
@@ -0,0 +1,758 @@
Preprint accepted in WCEAM 2022, Seville

Use of survival analysis and simulation to improve maintenance planning of high voltage instrument transformers in the Dutch transmission system

Swasti R. Khuntia¹, Fatma Zghal¹, Ranjan Bhuyan¹, Erik Schenkel¹, Paul Duvivier², Olivier Blancke², Witold Krasny²

Abstract  This paper describes the use of survival analysis and simulation to model the lifetime of high voltage instrument transformers in the Dutch transmission system. To represent asset aging, the non-parametric Kaplan-Meier method is used to enable the fitting of a Weibull distribution. The approach is applied to three voltage levels, namely 110 kV, 150 kV, and 220/380 kV. Real failure and inspection data are used to obtain a realistic failure model of the instrument transformers; failure and maintenance records from 1989 to 2021 are used for this study. In spite of missing and low-quality data, a rich failure database could still be prepared. The study also offers insight into the factors (i.e., voltage level, in-service age) influencing remaining life, from both the graphical survival function and the parametric Weibull distribution analysis. Based on the derived statistics, possible future maintenance planning scenarios are simulated in a complex-system modelling framework on a digital-twin-enabled platform. Finally, the scenarios are evaluated in terms of replacement costs (CAPEX), inspection hours, and unavailability hours.
1 Introduction

TenneT, as a European transmission system operator, faces power supply reliability challenges that originate in a globally aging infrastructure and the increasing complexity of business operations in the context of the energy transition. While power transformers, due to the criticality of their function on the grid, have been the focus of many studies, concerns have been raised recently about the lack of focus on long-term asset management of instrument transformers (ITs). ITs play an important role in the metering of electrical quantities and the protection of other system components. Because of their importance, any unplanned unavailability due to failures can cause considerable outage costs to utilities. Consequently, it is crucial to properly characterize the aging of ITs using statistical approaches that make it possible to predict the evolution of IT population failures over the coming years. In addition, this yields valuable perspectives for optimizing maintenance and replacement policies accordingly. The reliability analysis of ITs depends strongly on the defined maintenance strategies, which must provide a reliable and safe power supply. By definition, asset management involves strategies to explore, identify, plan, invest, utilize, maintain, replace, and dispose of assets while maximizing their value and performance under some prescribed financial constraint (Khuntia et al., 2016). Since ITs play such an important role, it is expected that statistical failure analysis will give the asset management team at TenneT better insight into actual maintenance planning performance. Technically, in the reliability analysis of ITs, it is interesting to identify the independence or dependence of the specific covariates that characterize the operation of the IT.

For any kind of data-driven methodology and, in particular, asset reliability characterization, a robust database is needed, both in terms of volume and quality (Balzer and Neumann, 2011). However, it can be argued that there should be a preference for robust data and that there are techniques that can be used to cope with data discrepancies. In our case, the historical failure data play an important role in understanding the behavior of ITs. A literature study reveals that explosion is one of the most frequently reported failure modes. The impact of an explosion relates not only to the direct cost of IT replacement but also to the possible replacement of neighboring equipment damaged in the explosion. CIGRE reports are one of the primary sources of publicly available failure databases for ITs. Three series of CIGRE reports are available online. The first report, published in 1990, covered failures of ITs (voltage >72.5 kV) in about 15 countries; the survey covered 136,033 transformers in the period from 1970 to 1986 (CIGRE, 1990). The second report, published in 2009, presented results for 131,207 ITs (voltage >60 kV) in the period from 1985 to 1995 (CIGRE, 2009). The results of a third, wider international survey were published in 2012; it collected population and failure data for ITs of voltage >60 kV, excluding AIS ring current transformers, that were in service during the years 2004 to 2007 inclusive (CIGRE, 2012). Other failure investigations have also been reported (Poljak et al., 2010; Raetze et al., 2012; Tee et al., 2021), in which the authors focus on reducing IT explosions and on better condition monitoring of ITs. Nonetheless, failure is probabilistic in nature, and it requires investigation of the relationship between asset data and failure cause. The use of a semi-parametric Cox model was reported in (Tee et al., 2021); the authors elaborated the factors influencing the probability of failure through analysis of lifetime data, using both graphical survival function plots and the semi-parametric Cox model.

With the use of simulation digital twin technology from Cosmo Tech, TenneT analyzed various maintenance strategies. The digital twin was calibrated on the recorded historical failure data using a statistical technique relying on survival analysis. The literature shows that survival analysis has been used for power transformer reliability studies covering around 2000 units in a Canadian utility and around 6000 units in an Australian utility (Picher et al., 2014; Martin et al., 2018). Ref. (Picher et al., 2014) described data of the Canadian utility Hydro-Quebec, where a good match was obtained using the Kaplan-Meier estimate and the Weibull distribution; the authors concluded that the Weibull distribution is a better fit, and the results looked promising. Similarly, ref. (Martin et al., 2018) followed a comparable strategy for Australian data; the authors chose between the Kaplan-Meier estimate and the Weibull distribution depending on the voltage class. In practice, Weibull distributions fitted to empirical failure data are commonly used to calculate life expectancy. However, the challenge in applying such a distribution to electrical assets is that the root cause of failure is often not the normal aging of the asset but rather external factors. The aim of this paper is three-fold: (1) use real failure data to model a time-varying failure rate based on Weibull parameters obtained from Kaplan-Meier survival analysis, (2) investigate extrapolation methods to maximize the value of existing inspection results across the IT population, and (3) use digital-twin-enabled simulation to tune the resources required to realize TenneT's strategy for the maintenance and renewal of the considered substation equipment.

¹ S.R. Khuntia (corresponding author), F. Zghal, R. Bhuyan, E. Schenkel, Asset Management Onshore, TenneT TSO B.V., Arnhem, The Netherlands. e-mail: [email protected]
² P. Duvivier, O. Blancke, W. Krasny, Cosmo Tech, Lyon, France
2 Data and Methodology

2.1 Description of Data

As of the date of writing, TenneT owns and maintains a large fleet of ITs in the Dutch high voltage AC network (i.e., 110, 150, 220 and 380 kV), as shown in Figure 1(a). It is of interest to examine the age profile of the existing population, in terms of years since manufacture, because reliability is often related to age. However, lifetime data can be complicated, as the lifetimes of some ITs extend over several decades. At TenneT, the expected design life of an IT is 45 years. This age is affected and reduced, sometimes substantially, by the design or utilization of the IT, i.e., its loading or the environment to which it is exposed. In some cases, a good maintenance scheme can even increase the replacement age. Although there is no prescribed replacement age, it is the responsibility of the asset management department to formulate maintenance policies based on failure history. For this study, failure data were obtained from various sources, ranging from failure records and reports to interviews with experts. Fortunately, TenneT has not recorded a high number of major failures since 1989. A major failure is defined as a sudden explosive event that has caused an immediate emergency system outage or trip. Figure 1(b) lists the failure events with respect to manufacturer (coded for confidentiality) and IT age.

The failure list alone was not adequate for building a statistical model. In addition, maintenance reports (work orders) and expert knowledge were used to populate the list and extract as much information as possible. A work order is a document that provides all the information about a maintenance task and outlines a process for completing that task. For ITs, corrective work orders are used (the others being periodic maintenance and inspection work orders). Discussion with experts led us to use the work orders issued whenever an IT was out of service for any kind of maintenance. Figure 1(c) shows the total recorded failures for the IT population. One observation worth noting is that the number of failures has increased significantly in recent years.
+
146
+ (a)
147
+
148
+ (b)
149
+
150
+ 10000
151
+ SLI
152
+ 8000
153
+ Number of
154
+ 6000
155
+ 4000
156
+ 2000
157
+ 0
158
+ 110
159
+ 150
160
+ 220
161
+ 380
162
+ Voltage level (kV)5
163
+ Number of ITs
164
+ 4
165
+ m
166
+ 2
167
+ 1
168
+ 0
169
+ 990
170
+ 7
171
+ 68
172
+ 1
173
+ 3
174
+ 6
175
+ 80
176
+ 9
177
+ 0
178
+ 00
179
+ 05
180
+ 600
181
+ 2
182
+ 6
183
+ 7
184
+ 7
185
+ 7
186
+ 8
187
+ 9
188
+ 9
189
+ 9
190
+ 9
191
+ 9
192
+ 9
193
+ 9
194
+ 6
195
+ 9
196
+ 6
197
+ 0
198
+ 0
199
+ 0
200
+ 0
201
+ 0
202
+ 0
203
+ 1
204
+ 1
205
+ 1
206
+ 1
207
+ 1
208
+ L
209
+ 1
210
+ 1
211
+ L
212
+ 2
213
+ 2
214
+ 2
215
+ 2
216
+ 2
217
+ 2
218
+ Year of constructionS.R.Khuntia - Use of survival analysis and simulation to improve maintenance planning of high
219
+ voltage instrument transformers in the Dutch transmission system
220
+ 5
221
+ Preprint accepted in WCEAM 2022 Seville
222
+
223
+ (c)
224
+ Figure 1 (a) Voltage-based IT population, and (b) Actual failure list until July 2021,
225
+ (c) Populated failure from work order and expert opinion until July 2021
226
+ 2.2
227
+ Survival Analysis and Failure Rate Modelling
228
+ Survival analysis is a statistical technique used to estimate the lifespan of a par-
229
+ ticular population under study. It is an analysis of time-to-event data (Wang et al.,
230
+ 2019). One of the widely used survival analysis technique is the Kaplan-Meier
231
+ (KM) estimate (Bland and Altman, 1998). The KM estimator uses lifetime data to
232
+ perform survival analysis. Although it is widely used in medical research to gauge
233
+ the part of patients living for a specific measure of time after treatment, it has been
234
+ used in the power systems sector to model the survival of electric assets (Martin et
235
+ al., 2018). The use of KM estimate is supported by two reasons: one is that it does
236
+ not assume that the data fits a statistical distribution, and second is that it allows the
237
+ inclusion of censored data (when an IT had not failed by mid-2021).
238
+ For a population, the survival function 𝑆̂(𝑡) is defined as:
239
+ 𝑆̂(𝑡) = ∏ (1 − 𝑑𝑖
240
+ 𝑛𝑖
241
+ )
242
+ 𝑖:𝑡𝑖<𝑡
243
+
244
+ where, 𝑡𝑖is the time at least one event happened, 𝑑𝑖 is the number of events that
245
+ happened at time 𝑡𝑖 and 𝑛𝑖 is the number of individuals known to have survived up
246
+ to time 𝑡𝑖 (Davidson-Pilon, 2019). In our study, the estimates are calculated for three
247
+ different voltage levels and 𝑛𝑗 considers observations that occurred between the
248
+ oldest IT age and mid-2021. An important aspect in survival analysis is considering
249
+ the censored data. Censoring occurs when the value of an observation is only known
250
+
251
+ 1000
252
+ SI
253
+ 800
254
+ Number of
255
+ 009
256
+ 400
257
+ 200
258
+ YearofconstructionS.R.Khuntia - Use of survival analysis and simulation to improve maintenance planning of high
259
+ voltage instrument transformers in the Dutch transmission system
260
+ 6
261
+ Preprint accepted in WCEAM 2022 Seville
262
+ to some extent. Censored data is often encountered when analysing practical life
263
+ data, especially in case of electrical power systems where most of the installed
264
+ equipment is still in-service, and most of the time the exact age of equipment at the
265
+ moment of failure is unknown (CIGRE, 2017). In this study, a large amount of data
266
+ falls under the right censored data (suspended data) category. A dataset is termed as
267
+ right censored or suspended when it is composed of components that did not fail.
268
+ The term right censored indicates that the event is located to the right of the dataset,
269
+ which implies that certain components are still operating. In our dataset, we had to
270
+ deal with right censoring and no left truncation since the year of construction was
271
+ known to us. Ignoring truncation causes bias in model’s estimation.
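The product-limit estimate above can be sketched in a few lines of Python; the ages and failed/censored flags below are invented for illustration and are not TenneT data:

```python
from collections import Counter

def kaplan_meier(durations, observed):
    """Product-limit estimate: S(t) = prod over failure times t_i <= t of (1 - d_i/n_i).

    durations: age (years) at failure or at censoring (still in service)
    observed:  1 if the unit failed at that age, 0 if right-censored
    Returns a list of (t_i, S(t_i)) pairs at each distinct failure time.
    """
    deaths = Counter(t for t, e in zip(durations, observed) if e)
    exits = Counter(durations)          # failures + censorings leaving the risk set
    at_risk = len(durations)
    surv, curve = 1.0, []
    for t in sorted(exits):
        d = deaths.get(t, 0)
        if d:                           # survival only drops at failure times
            surv *= 1.0 - d / at_risk
            curve.append((t, surv))
        at_risk -= exits[t]             # everyone exiting at t leaves the risk set
    return curve

# Toy fleet: ages at failure (event=1) or still in service mid-2021 (event=0)
ages   = [40, 45, 45, 50, 55, 60, 60, 62]
events = [ 1,  1,  0,  1,  0,  1,  0,  1]
print(kaplan_meier(ages, events))
```

Note how the censored units at 45, 55 and 60 years never cause a drop in the curve; they only shrink the risk set for later failure times, which is exactly why the estimator tolerates in-service assets.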
[Figure panels: fitted Weibull curve vs. Kaplan-Meier estimate of the survival function over a 0–100 year timeline]
Figure 2 Kaplan-Meier estimates for the different voltage levels.
The IT dataset was split into three families based on voltage level, each with its own degradation law, as shown in Figure 2. A useful statistic in this analysis is the median survival time, which defines the point in time at which, on average, 50% of the population should have failed. For 110 kV, the median survival time is 61 years. However, the median survival time for 150, 220 and 380 kV is infinite, because there have been too few failures to determine it. In such cases, the two best options are:

1. use another quantile (e.g. 0.75) to compare the groups;
2. approximate the survival curve by a parametric fit and derive the median survival time from the model.

The second option is chosen in our study, since all three voltage levels can be modelled with a parametric fit assuming that failure times follow a Weibull distribution. In other words, the Weibull distribution is used to parameterize the KM estimate. The Weibull distribution is a widely used model for the statistical features of failure (Rinne, 2008). The probability density function f(t) and cumulative distribution function F(t) are defined as:

    f(t) = (β t^(β−1) / η^β) e^(−(t/η)^β)   and   F(t) = 1 − e^(−(t/η)^β)

where t is the time, β is the shape parameter and η is the scale parameter. Table 1 shows the parameters calculated for our study from the corresponding survival functions.
Table 1 Statistics and Weibull parameters.

Voltage (kV)   No. of ITs   No. censored   β      η       Median (years)
110            3168         255            6.67   63.79   61
150            10058        298            6.42   74.20   infinity
220 and 380    2982         25             5.65   77.05   infinity
+
364
+ 1.0
365
+ 0.8
366
+ 0.6
367
+ 0.4
368
+ 0.2
369
+ Weibull
370
+ Kaplan-Meier
371
+ 0.0
372
+ 0]
373
+ 20
374
+ 40
375
+ 60
376
+ 80
377
+ 100
378
+ timelineS.R.Khuntia - Use of survival analysis and simulation to improve maintenance planning of high
379
+ voltage instrument transformers in the Dutch transmission system
380
+ 8
381
+ Preprint accepted in WCEAM 2022 Seville
382
+ 3 Modelling in Cosmo Tech Asset and Simulations
383
+ Founded in 2010, Cosmo Tech is a technology company pioneer in the modeling
384
+ of complex systems (https://cosmotech.com/). Relying on its industry-validated
385
+ modeling and simulation software platform, Cosmo Tech has developed a solution
386
+ called Cosmo Tech Asset, henceforth called CTA. CTA allows to build digital twins
387
+ of asset portfolios with their full complexity such as network dependencies, opera-
388
+ tive strategies, or dynamical resources allocations.
389
+ 3.1
390
+ Cosmo Tech Asset Platform
391
+ The different steps involved in the CTA platform are:
392
+ 1. Experiment the CTA platform’s pre-built health assessment methods and com-
393
+ pare the results with internal initiatives. For health assessment, the asset health
394
+ index is a key simulation variable, and it is described in the next sub-section.
395
+ 2. Demonstrate the calibration of reliability law (using Weibull distribution) for
396
+ simulations against up-to-date condition of ITs, but also historical IT related
397
+ data, such as field observation or inspection data and measurement inputs.
398
+ 3. Investigate the functional possibilities that would allow to leverage existing in-
399
+ spection results across ITs using extrapolation methods when applicable, there-
400
+ fore maximize inspection result value.
401
+ 4. Finally, based on the achieved health assessment technique, use the simulation
402
+ platform to tune the required resources necessary to realize TenneT’s strategy
403
+ for considered IT maintenance and replacements.
404
+ 3.2
405
+ TenneT Asset Health Index
406
+ For health assessment, the TenneT asset health index (AHI) is considered and is
407
+ shown in Table 1(a) (TenneT, 2021). The AHI is based on asset age and failure
408
+ probability, and it is used to drive short-term maintenance and long-term replace-
409
+ ment strategies. It provides a consistent way to compare the overall asset health of
410
+ TenneT's assets.
411
+ The evaluation of the AHI is based on two metrics:
412
+ 1. probability of failure of IT in the coming years for AHI score of 1 to 6, and
413
+ 2. age of IT for AHI score of 7 to 10.
414
+ In addition to AHI, the study of IT uses reliability law over which failures are
415
+ drawn during the simulations. The reliability law corresponds to the KM survival
416
+ function and the Weibull estimates that are described in section 2. These laws have
417
+ a cumulative distribution function which represent the probability for a failure to
418
+ occur before a certain age. And the probability of failure over the next year can be
419
+ evaluated using the following formula:
420
+
421
+ S.R.Khuntia - Use of survival analysis and simulation to improve maintenance planning of high
422
+ voltage instrument transformers in the Dutch transmission system
423
+ 9
424
+ Preprint accepted in WCEAM 2022 Seville
425
+ 𝑃(𝑋 < 𝑡 + 3 | 𝑋 > 𝑡) = 1 − 𝑃(𝑋 > 𝑡 + 3 | 𝑋 > 𝑡)
426
+ = 1 −
427
+ 𝑃(𝑋>𝑡+3 ∩ 𝑋>𝑡)
428
+ 𝑃(𝑋 > 𝑡)
429
+ = 1 −
430
+ 𝑃(𝑋 > 𝑡+3)
431
+ 𝑃(𝑋 > 𝑡)
432
+ = 1 −
433
+ 1 − 𝑃(𝑋 < 𝑡+3)
434
+ 1 − 𝑃(𝑋 < 𝑡) = 1 −
435
+ 1 − 𝐹(𝑡+3)
436
+ 1 − 𝐹(𝑡)
437
+ where,
438
+
439
+ 𝐹is the cumulative distribution function of the reliability law
440
+
441
+ 𝑋 is a random variable representing the occurrence of a failure.
442
+
443
+ Table 1 (a)TenneT Asset Health Index (AHI) definition, (b) Classification of Resources (FTE:
444
+ Full Time Employment).
445
+
446
+ (a)
447
+
448
+ (b)
449
+ 3.3
450
+ Simulation
451
+ The reliability law was used to evaluate the different scenarios for an efficient
452
+ maintenance planning. A simulation period of 100 years is chosen for this study
453
+ since it is assumed that the most recent IT replacements will be in operation until
454
+ the end of this century. Time-based scenario is the current maintenance planning at
455
+ TenneT. It is compared against a condition-based scenario. Both the scenarios are
456
+ explained in detail in Table 2. The resources are listed in Table 1(b).
457
+ Table 2 Different Scenarios under Study.
458
+
459
+ Condition-based
460
+ Time-based
461
+ Replacement
462
+ 220/380kV
463
+ 45 years
464
+ 45 years
465
+ 110/150kV
466
+ AHI score red or
467
+ purple
468
+ 45 years
469
+ Inspections on bay
470
+ every 3,6,12 months
471
+ 220/380kV
472
+ No inspections
473
+ No inspections
474
+ 110/150kV
475
+ Time-based start-
476
+ ing at 25 years
477
+ Time-based start-
478
+ ing at 25 years
479
+ In principle, both scenarios are very similar in the sense that the same simulation
480
+ model dataset is used. The difference lies in the trigger for the replacement activities
481
+ of the 110/150kV assets. In fact, in time-based scenario, which represents the cur-
482
+ rent way of working, the trigger is based on the real age of the asset. As soon as the
483
+
484
+ AHI Score
485
+ Colour
486
+ Definition
487
+ Purple
488
+ Within 3 years, 80% of chance that the asset is ir-
489
+ reparably damaged
490
+ 2
491
+ Purple
492
+ Within 3 years, 50% of chance that the asset is ir-
493
+ reparably damaged
494
+ 3
495
+ Purple
496
+ Within 3 years, 20% of chance that the asset is ir-
497
+ reparably damaged
498
+ 4
499
+ Red
500
+ Within 7 years, 80% of chance that the asset is ir
501
+ reparably damaged
502
+ 5
503
+ Red
504
+ Within 7 years, 50% of chance that the asset is ir-
505
+ reparably damaged
506
+ 6
507
+ Red
508
+ Within 7 years, 20% of chance that the asset is ir-
509
+ Orange
510
+ reparably damaged
511
+ 7
512
+ Older than 75% of the average age
513
+ 8
514
+ Orange
515
+ Between 60% and 75% of the average age
516
+ 9
517
+ Older than 5 years old and less than 60% of the av-
518
+ _10
519
+ Green.
520
+ Younger than 5 years old
521
+ erage ageActivity name
522
+ Dura-
523
+ Required
524
+ Material
525
+ Workforce
526
+ Total
527
+ tion (h)
528
+ FTE
529
+ (?) 1503
530
+ () 1503
531
+ (?) 1503
532
+ Inspection
533
+ 0.5
534
+ I
535
+ 0
536
+ every 3 years
537
+ 41.624
538
+ 41.624
539
+ Inspection
540
+ 1.33
541
+ 2
542
+ 49.81
543
+ 180.18
544
+ 229.99
545
+ every 6 years
546
+ Replacement
547
+ IT 110kV
548
+ 40
549
+ 10
550
+ 8211
551
+ 35000
552
+ 43211
553
+ Replacement
554
+ 40
555
+ 10
556
+ IT 150kV
557
+ 10044
558
+ 35000
559
+ 45044
560
+ Replacement
561
+ IT 220kV
562
+ 40
563
+ 10
564
+ 15000
565
+ 35000
566
+ 50000
567
+ Replacement
568
+ IT 380kV
569
+ 40
570
+ 10
571
+ 15000
572
+ 35000
573
+ 50000S.R.Khuntia - Use of survival analysis and simulation to improve maintenance planning of high
574
+ voltage instrument transformers in the Dutch transmission system
575
+ 10
576
+ Preprint accepted in WCEAM 2022 Seville
577
+ asset reaches 45 years of age, replacement is triggered, and action is performed as
578
+ resources are unlimited. On the other hand, in the condition-based scenario, the trig-
579
+ ger is based on the apparent age of the asset. The apparent age is an attribute of
580
+ every asset that reflects its degradation rate and it can be different from the real age
581
+ of the asset. If the apparent age is higher than the real age, the asset degrades faster
582
+ than normal. If the apparent age is lower than the real age, the asset degrades slower
583
+ than normal. When the apparent age of the asset reaches 50 or 54, it means that the
584
+ asset is reaching AHI score of respectively 6 or 3 that is red or purple (see Table
585
+ 1(a)), and the replacement action is triggered.
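The two replacement triggers described above can be sketched as a single decision rule. The function name is hypothetical; the 45-year real-age threshold and the 50-year apparent-age (red) threshold come from the text, and the sketch deliberately ignores the 54-year purple threshold for brevity:

```python
def replacement_due(real_age, apparent_age, voltage_kv, condition_based):
    """Decide whether an IT replacement should be triggered.

    Time-based: replace at a real age of 45 years (all voltage levels).
    Condition-based (110/150 kV only): replace once the apparent age
    reaches 50 years, i.e. the asset has degraded into the red AHI range.
    """
    if not condition_based or voltage_kv in (220, 380):
        return real_age >= 45           # current way of working
    return apparent_age >= 50           # degradation-driven trigger

# A 46-year-old 150 kV IT that degrades slowly (apparent age 41):
print(replacement_due(46, 41, 150, condition_based=True))   # not replaced yet
```

This is exactly the mechanism that flattens the replacement curve: slowly degrading assets stay in service past 45 years, while fast degraders are replaced earlier.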
[Figures: TOTEX and HR used (FTE) over the simulation horizon (to 2100) for the time-based vs. condition-based strategies. Unconstrained: TOTEX 1.49M (time-based) vs. 1.36M (condition-based), average HR used 44.87 vs. 40.63 FTE. 40 FTE constrained: TOTEX 1.20M vs. 1.21M, average HR used 36.28 vs. 36.19 FTE.]

Figure 3 Unconstrained scenarios simulation.
Figure 4 40 FTE constrained scenarios simulation.
Figure 5 60 FTE constrained scenarios simulation.
658
+ From the figures, two conclusions can be made: (1) replacement activities are
659
+ the major cost driver in the TOTEX (Total Expenses), and (2) Human resources
660
+ (HR) costs are the major cost driver in the replacement costs. Simulation results
661
+ show that in case HR availability is restricted, there is no significant difference be-
662
+ tween the time-based and condition-based replacement strategies. In fact, switching
663
+ to a condition-based strategy might not be beneficial in that case since it comes with
664
+ change and investments for little to no reward. If HR availability is guaranteed for
665
+ the foreseeable future, then it is highly beneficial to switch from a time-based re-
666
+ placement strategy to a condition-based strategy as this would contribute to flatten-
667
+ ing the curve. Also, this would represent a lot of work at the beginning to prepare
668
+ the necessary processes and investments for the new strategy but would lead to sig-
669
+ nificant gains on the long term.
670
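The convergence of the two strategies under a tight HR cap can be illustrated with a toy fleet simulation (all numbers here are hypothetical illustrations, not TenneT parameters):

```python
import random

def simulate(strategy: str, fte_cap: float, years: int = 80, seed: int = 1) -> int:
    """Toy fleet model: returns the number of replacements performed.

    Hypothetical parameters: 100 assets, time-based replacement at age 45,
    condition-based replacement at apparent age 50, and 0.5 FTE-years of
    work per replacement, so the yearly FTE cap limits how many can be done.
    """
    rng = random.Random(seed)
    ages = [rng.uniform(0.0, 45.0) for _ in range(100)]
    rates = [rng.uniform(0.8, 1.2) for _ in range(100)]  # degradation rates

    done = 0
    for _ in range(years):
        if strategy == "time":
            due = [i for i, a in enumerate(ages) if a >= 45.0]
        else:  # condition-based: trigger on apparent age
            due = [i for i, a in enumerate(ages) if a * rates[i] >= 50.0]
        budget = int(fte_cap / 0.5)  # replacements the FTE cap allows per year
        for i in due[:budget]:
            ages[i] = 0.0
            done += 1
        ages = [a + 1.0 for a in ages]
    return done
```

With a small `fte_cap`, both strategies end up limited by the same resource budget, mirroring the observation above that a constrained workforce erases most of the difference between them.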
+ 4 Conclusion
+ Maintenance planning of high voltage ITs using real data from the Dutch transmission system operator was illustrated in this study. The study aimed at understanding how digital twin enabled technology, along with failure data, can help TenneT make better future maintenance strategies. The strategies aimed at easing financial decisions related to replacements (in terms of flattening the replacement curve) and to the unavailability of ITs in the network. Working on real data uncovered several challenges, including missing data (in both quantity and quality) and outliers. The non-parametric Kaplan-Meier survival analysis helped in the parameter estimation of the Weibull distribution. TenneT data could be translated to the data format used in the digital twin CTA tool, meaning that our data could be easily adapted to other software platforms. It is worth mentioning that in this study, neither data ownership nor data confidence hindered the progress. Data confidence was built up even though multiple data sources had to be aligned. TenneT partnered with Cosmo Tech to build the data ownership philosophy for a successful digital twin implementation for maintenance planning.
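The Kaplan-Meier step mentioned above can be sketched in a few lines of pure Python (toy durations; in practice a library such as lifelines, cited in the references, would be used):

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate S(t) at each distinct failure time.

    durations: time in service (years); observed: 1 if a failure was seen,
    0 if censored (asset still in service). Returns (time, survival) pairs.
    """
    s = 1.0
    curve = []
    for t in sorted(set(durations)):  # process event times in order
        deaths = sum(1 for d, e in zip(durations, observed) if d == t and e)
        n = sum(1 for d in durations if d >= t)  # number still at risk at t
        if deaths:
            s *= 1 - deaths / n
            curve.append((t, s))
    return curve

# Toy example: five ITs, failures at 20 and 40 years, three censored.
# S drops to 4/5 at t=20, then by a factor 2/3 at t=40 (3 assets at risk).
curve = kaplan_meier([20, 30, 40, 45, 50], [1, 0, 1, 0, 0])
```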
+
+ [Figure 5 panels (60 FTE, horizon 2050-2100): time-based replacement TOTEX 1.44M vs condition-based 1.34M; average HR used 43.35 vs 40.09 FTE.]
+ References
+ Balzer, G. and Neumann, C., 2011. Asset simulation and life cycle assessment for gas insulated substation. CIGRE, Germany.
+ Bland, J.M. and Altman, D.G., 1998. Survival probabilities (the Kaplan-Meier method). BMJ, 317(7172), pp.1572-1580.
+ CIGRÉ WG 23.07, 1990. The paper-oil insulated measurement transformer. CIGRÉ Technical Brochure no. 57.
+ CIGRÉ SC A3, 2009. State of the art of instrument transformers. CIGRÉ Technical Brochure no. 394.
+ CIGRE WG A3.06, 2012. Final report of the 2004-2007 international enquiry on reliability of high voltage equipment, Part 4: Instrument transformers.
+ CIGRE WG D1.39, 2017. Guidelines for the use of statistics and statistical tools on life data.
+ Davidson-Pilon, C., 2019. lifelines: survival analysis in Python. Journal of Open Source Software, 4(40), p.1317.
+ Khuntia, S.R., Rueda, J.L., Bouwman, S. and van der Meijden, M.A., 2016. A literature survey on asset management in electrical power [transmission and distribution] system. International Transactions on Electrical Energy Systems, 26(10), pp.2123-2133.
+ Martin, D., Marks, J., Saha, T.K., Krause, O. and Mahmoudi, N., 2018. Investigation into modeling Australian power transformer failure and retirement statistics. IEEE Transactions on Power Delivery, 33(4), pp.2011-2019.
+ Picher, P., Boudreau, J.F., Manga, A., Rajotte, C., Tardif, C., Bizier, G., Di Gaetano, N., Garon, D., Girard, B., Hamel, J.F. and Proulx, S., 2014. Use of health index and reliability data for transformer condition assessment and fleet ranking. A2-101, CIGRE.
+ Poljak, M. and Bojanić, B., 2010. Method for the reduction of in-service instrument transformer explosions. European Transactions on Electrical Power, 20(7), pp.927-937.
+ Raetzke, S., Koch, M. and Anglhuber, M., 2012. Modern insulation condition assessment for instrument transformers. In 2012 IEEE International Conference on Condition Monitoring and Diagnosis (pp. 52-55). IEEE.
+ Rinne, H., 2008. The Weibull distribution: a handbook. Chapman and Hall/CRC.
+ Tee, S., Liu, Q., Wang, Z., Hafid, F. and Tournet, P., 2021. Failure investigation and asset management of combined measuring instrument transformers. High Voltage, 6(1), pp.61-70.
+ Wang, P., Li, Y. and Reddy, C.K., 2019. Machine learning for survival analysis: A survey. ACM Computing Surveys (CSUR), 51(6), pp.1-36.
+
-tAzT4oBgHgl3EQfSvv8/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,382 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf,len=381
+ Use of survival analysis and simulation to improve maintenance planning of high voltage instrument transformers in the Dutch transmission system
+ Preprint accepted in WCEAM 2022 Seville
+ Swasti R. Khuntia1, Fatma Zghal1, Ranjan Bhuyan1, Erik Schenkel1, Paul Duvivier2, Olivier Blancke2, Witold Krasny2
+ Abstract This paper describes the use of survival analysis and simulation to model the lifetime of high voltage instrument transformers in the Dutch transmission system. To represent asset aging, the non-parametric Kaplan-Meier method is used to enable the fitting of a Weibull distribution. Such an approach is implemented on three different voltage levels, namely 110kV, 150kV, and 220/380kV. Real failure and inspection data is used to achieve a realistic failure model of the instrument transformers. Failure and maintenance data occurring between 1989 and 2021 have been used for this study. In spite of missing and low-quality data, a rich failure database could still be prepared. This study also offers insights into factors (i.e., voltage level, in-service age) influencing the remaining life from both graphical survival function and parametric Weibull distribution analysis. Based on the derived statistics, future possible maintenance planning scenarios are simulated under a complex system modelling framework in a digital twin enabled platform. Eventually, the scenarios are evaluated in terms of replacement costs (CAPEX), inspection hours, and unavailability hours.
+ 1 Introduction
+ TenneT, as European transmission system operator, is facing power supply reliability challenges that originate in a globally aging infrastructure and increasing complexity of business operations in the context of energy transition. While power transformers, due to the criticality of their function on the grid, have been the focus of many studies, concerns have been raised recently on the lack of focus on long-term asset management of Instrument Transformers (ITs). ITs play an important
+ 1 S.R. Khuntia, F. Zghal, R. Bhuyan, E. Schenkel: Asset Management Onshore, TenneT TSO B.V., Arnhem, The Netherlands, e-mail: firstname.lastname@tennet.eu
+ 2 P. Duvivier, O. Blancke, W. Krasny: Cosmo Tech, Lyon, France, email: firstname.lastname@cosmotech.com
+ role in the metering of electrical quantities and protection of other system components. Due to their importance, any unplanned unavailability due to failures can cause considerable outage costs to utilities. Consequently, it is crucial to properly characterize the aging of ITs using statistical approaches that will enable to predict the evolution of the IT population failure over the next years. In addition, it will yield valuable perspectives in terms of optimizing maintenance and replacement policies accordingly. The reliability analysis of ITs is very much dependent on the defined maintenance strategies which will provide a reliable and safe power supply.
+ By definition, asset management involves strategies to explore, identify, plan, invest, utilize, maintain, replace, and dispose of assets while maximizing their value and performance under some prescribed financial constraint (Khuntia et al., 2016). Since ITs play such an important role, it is expected that statistical failure analysis will give a better insight on actual maintenance planning performance to the asset management team at TenneT. Technically, in the reliability analysis of ITs, it is interesting to identify the independence or dependence of the specific covariates that indicate the operation of the IT.
+ For any kind of data-driven methodology and, in particular, asset reliability characterization, a robust database is needed, both in terms of volumetry and quality (Balzer and Neumann, 2011). However, it can be argued that there should be a preference for robust data and that there are techniques that could be used to cope with data discrepancies. In our case, the historical failure data play an important role in understanding the behavior of ITs. Literature study reveals that explosion is one of the highest reported failure modes. The impact of an explosion not only relates to the direct cost of IT replacement but also to the chance of replacing neighboring equipment damaged in the explosion. CIGRE reports are one of the primary sources for publicly available failure databases of ITs. Three series of CIGRE reports are available online. The first report was published in 1990 and covered failures of ITs (voltage >72.5kV) in about 15 countries. The survey covered 136033 transformers in the period from 1970 to 1986 (CIGRE, 1990). The second report, published in 2009, covered 131207 ITs (voltage >60kV) in the period from 1985 to 1995 (CIGRE, 2009). The third, a wider international survey, was published in 2012. It collected population and failure data for ITs of voltage >60kV and excluded AIS ring current transformers that were in service during the years 2004 to 2007 inclusive (CIGRE, 2012). Some other failure investigations were reported (Poljak et al., 2010; Raetzke et al., 2012; Tee et al., 2021), where authors focus on reduction of IT explosions and better condition monitoring of ITs. Nonetheless, the truth is that failure is probabilistic in nature, and it needs investigations on the relationship with asset data and failure cause. The use of a semi-parametric Cox model was reported in (Tee et al., 2021). The authors elaborated the factors influencing the probability of failures through analysis on the lifetime data from both graphical survival function plots and a semi-parametric Cox model.
+ With the use of Simulation Digital Twin technology from Cosmo Tech, TenneT analyzed various maintenance strategies. The Digital Twin has been calibrated based on the historical failure data that it recorded with a statistical technique relying on survival analysis. Literature study shows that survival analysis was used for power transformer reliability studies of around 2000 units in a Canadian and around 6000 units in an Australian utility (Picher et al., 2014; Martin et al., 2018). Ref. (Picher et al., 2014) described the data of the Canadian utility Hydro-Quebec, where a good match was obtained using the Kaplan-Meier and Weibull distribution. Finally, the method concluded that the Weibull distribution is a better fit and the results looked promising. Similarly, ref. (Martin et al., 2018) followed a similar strategy for Australian data. The authors deduced the choice of Kaplan-Meier or Weibull distribution based on the different voltage classes. In practice, Weibull distributions fitted to empirical failure data are commonly used to calculate life expectancy. However, the challenge in applying such a distribution to electrical assets is that often the root cause of failure is not related to the normal aging of the asset, but rather to external factors. The aim of this paper is three-fold: (1) use of real failure data to model a time-varying failure rate based on Weibull parameters obtained from Kaplan-Meier survival analysis, (2) investigate extrapolation methods to maximize value of existing inspection results across the IT population, and (3) use digital twin enabled simulation to tune the required resources necessary to realize TenneT's strategy for considered substation equipment maintenance and renewals.
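Aim (1), deriving Weibull parameters from a Kaplan-Meier curve, can be sketched as a log-log regression (illustrative data; the two-parameter form S(t) = exp(-(t/eta)^beta) is assumed):

```python
import math

def fit_weibull_from_km(km_points):
    """Least-squares fit of a two-parameter Weibull S(t) = exp(-(t/eta)**beta)
    to Kaplan-Meier survival points, via the linearization
    ln(-ln S) = beta*ln(t) - beta*ln(eta)."""
    xs = [math.log(t) for t, s in km_points]
    ys = [math.log(-math.log(s)) for t, s in km_points]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    eta = math.exp(mx - my / beta)  # from the regression intercept
    return beta, eta

# Points generated from a true Weibull (beta=3, eta=60) are recovered:
pts = [(t, math.exp(-(t / 60) ** 3)) for t in (20, 30, 40, 50)]
beta, eta = fit_weibull_from_km(pts)
```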
+ page_content=' 2 Data and Methodology 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
88
+ page_content='1 Description of Data As of the date of writing this paper, TenneT owns and maintains a large fleet of ITs in the Dutch high voltage AC network (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
89
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
90
+ page_content=', 110, 150, 220 and 380kV) as shown in Figure 1(a).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
91
+ page_content=' It is of interest to see the age profile of the existing population, in terms of years since manufacture because reliability is often related to age.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
92
+ page_content=' How- ever, lifetime data can be complicated as some ITs often extend over several dec- ades.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
93
+ page_content=' At TenneT, the expected design life of an IT is 45 years.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
94
+ page_content=' This age is affected and reduced, sometimes substantially, depending on the design or utilization of the IT, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
95
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
96
+ page_content=' its loading or the environment to which it is exposed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
97
+ page_content=' In some cases, a good maintenance scheme can even increase the replacement age.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
98
+ page_content=' Although there is no prescribed replacement age, it is the responsibility of the asset management depart- ment to formulate the maintenance policies based on failure history.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
99
+ page_content=' For this study, failure data was obtained from various sources, starting from failure records, reports to talking to experts.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
100
+ page_content=' Fortunately, TenneT did not record a high number of major failures since the 1989.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
101
+ page_content=' A major failure is defined as a sudden explosive event that has caused an immediate emergency system outage or trip.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
102
+ page_content=' Figure 1(b) lists the fail- ure events with respect to manufacturer (coded for confidentiality) and IT age.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
103
+ page_content=' The failure list was not adequate to come up with a statistical model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
104
+ page_content=' In addition, maintenance reports (or work orders) and expert knowledge was used to populate the list and gain utmost information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
105
+ page_content=' A work order is a document that provides all S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
106
+ page_content='R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
107
+ page_content='Khuntia - Use of survival analysis and simulation to improve maintenance planning of high voltage instrument transformers in the Dutch transmission system 4 Preprint accepted in WCEAM 2022 Seville the information about a maintenance task and outlines a process for completing that task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
108
+ page_content=' In case of IT, corrective work orders are used (the others being periodic maintenance and inspection work orders).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
109
+ page_content=' Discussion with experts led us to use the work orders when an IT was out of service for any kind of maintenance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
110
+ page_content=' Figure 1(c) shows the total recorded failures for the IT population.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
111
+ page_content=' In the recent years, one ob- servation worth noticing is that the number of failures has increased significantly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
112
+ page_content='[Figure 1(a),(b): bar charts of the number of ITs per voltage level (110, 150, 220 and 380 kV) and per year of construction.]' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
113
+ page_content='Figure 1 (a) Voltage-based IT population, (b) actual failure list until July 2021, and (c) populated failures from work orders and expert opinion until July 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
114
+ page_content='2.2 Survival Analysis and Failure Rate Modelling. Survival analysis is a statistical technique used to estimate the lifespan of a particular population under study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
116
+ page_content=' It is an analysis of time-to-event data (Wang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
117
+ page_content=', 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
118
+ page_content='One of the most widely used survival analysis techniques is the Kaplan-Meier (KM) estimator (Bland and Altman, 1998).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
119
+ page_content=' The KM estimator uses lifetime data to perform survival analysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
120
+ page_content='Although it is widely used in medical research to estimate the fraction of patients surviving for a certain amount of time after treatment, it has also been used in the power systems sector to model the survival of electric assets (Martin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
121
+ page_content=', 2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
122
+ page_content='The use of the KM estimate is supported by two reasons: first, it does not assume that the data fit a statistical distribution; second, it allows the inclusion of censored data (ITs that had not failed by mid-2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
123
+ page_content='For a population, the survival function is defined as \hat{S}(t) = \prod_{i: t_i < t} (1 - d_i / n_i), where t_i is a time at which at least one event happened, d_i is the number of events that happened at time t_i, and n_i is the number of individuals known to have survived up to time t_i (Davidson-Pilon, 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
124
+ page_content='In our study, the estimates are calculated for three different voltage levels, and n_i counts the observations that occurred between the oldest IT age and mid-2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
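The product-limit formula above can be sketched in a few lines of pure Python. This is an illustrative implementation, not the tooling used in the study; ties between a failure and a censoring at the same age are resolved by counting the failure first.

```python
from collections import Counter

def kaplan_meier(durations, event_observed):
    """Kaplan-Meier estimate of S(t) from right-censored lifetimes.

    durations: age (in years) at failure, or at censoring (mid-2021 for
    ITs still in service); event_observed: 1 = failure, 0 = censored.
    Returns (t_i, S(t_i)) pairs at each distinct failure time t_i.
    """
    deaths = Counter(t for t, e in zip(durations, event_observed) if e)
    exits = Counter(durations)  # failures and censorings both leave the risk set
    at_risk = len(durations)    # n_i: individuals known to survive up to t_i
    surv, curve = 1.0, []
    for t in sorted(exits):
        if t in deaths:                       # product runs over failure times only
            surv *= 1.0 - deaths[t] / at_risk
            curve.append((t, surv))
        at_risk -= exits[t]
    return curve
```

This mirrors what library implementations such as lifelines (Davidson-Pilon, 2019) compute; censored ITs contribute to the risk set n_i without ever contributing a factor to the product.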
125
+ page_content='An important aspect of survival analysis is the handling of censored data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
126
+ page_content=' Censoring occurs when the value of an observation is only known 1000 SI 800 Number of 009 400 200 YearofconstructionS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
127
+ page_content='R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
128
+ page_content='Khuntia - Use of survival analysis and simulation to improve maintenance planning of high voltage instrument transformers in the Dutch transmission system 6 Preprint accepted in WCEAM 2022 Seville to some extent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
129
+ page_content=' Censored data is often encountered when analysing practical life data, especially in case of electrical power systems where most of the installed equipment is still in-service, and most of the time the exact age of equipment at the moment of failure is unknown (CIGRE, 2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
130
+ page_content=' In this study, a large amount of data falls under the right censored data (suspended data) category.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
131
+ page_content='A dataset is termed right censored, or suspended, when it contains components that did not fail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
132
+ page_content=' The term right censored indicates that the event is located to the right of the dataset, which implies that certain components are still operating.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
133
+ page_content=' In our dataset, we had to deal with right censoring and no left truncation since the year of construction was known to us.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
134
+ page_content='Ignoring truncation would cause bias in the model’s estimates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
135
+ page_content='[Figure 2 panels: Kaplan-Meier survival curves with fitted Weibull curves plotted over a 0-100 year timeline.]' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
136
+ page_content='Figure 2 Kaplan-Meier estimates for the different voltage levels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
150
+ page_content='The IT dataset was split into three families, each with its own degradation law, based on voltage level, as shown in Figure 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
151
+ page_content=' A useful statistic in this analysis is calculating the median survival time, which defines the point in time where on average 50% of the population should have failed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
152
+ page_content=' For 110kV, the median survival time is 61 years.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
153
+ page_content='However, the median survival time for 150, 220 and 380kV is infinite, because there have been too few failures to determine it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
154
+ page_content='In such cases, the two best options are: (1) use another quantile (e.g. 0.75) to compare the groups; (2) approximate the survival curve by means of a parametric fit and derive the median survival time from the model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
161
+ page_content='The second option is chosen in our study, since all three voltage families can be modelled with a parametric fit assuming that failure times follow a Weibull distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
162
+ page_content=' In other words, Weibull distribution is used to parameterize the KM estimate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
163
+ page_content=' The Weibull distribution is a widely used method to analyse the statistical features of failure (Rinne, 2008).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
164
+ page_content='The probability density function f(t) and cumulative distribution function F(t) are defined as f(t) = (\beta t^{\beta-1} / \eta^{\beta}) e^{-(t/\eta)^{\beta}} and F(t) = 1 - e^{-(t/\eta)^{\beta}};' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
165
+ page_content=' where, 𝑡 is the time, 𝛽 is the shape and 𝜂 is the scale parameter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
166
+ page_content=' Table 1 shows the different parameters calculated for our study from the corresponding survival function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
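As a quick consistency check on the fitted parameters, the Weibull CDF and its closed-form median can be evaluated directly. The sketch below is illustrative; the β and η values used are the 110 kV entries from Table 1.

```python
import math

def weibull_cdf(t, beta, eta):
    """F(t) = 1 - exp(-(t/eta)**beta): probability of failure by age t."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def weibull_median(beta, eta):
    """Solve F(t) = 0.5 analytically: t_median = eta * ln(2)**(1/beta)."""
    return eta * math.log(2.0) ** (1.0 / beta)

# 110 kV family from Table 1: beta = 6.67, eta = 63.79
median_110 = weibull_median(6.67, 63.79)  # roughly 60 years, near the reported 61
```

The analytic median makes the advantage of the parametric fit concrete: it exists even when too few failures have occurred for the empirical KM median to be defined.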
167
+ page_content='Table 1 Statistics and Weibull parameters. Voltage (kV) | No. of ITs | No. censored | β | η | median: 110 | 3168 | 255 | 6.67 | 63.79 | 61 years; 150 | 10058 | 298 | 6.42 | 74.20 | infinity; 220 and 380 | 2982 | 25 | 5.65 | 77.05 | infinity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
168
+ page_content='[Figure 2 panel: Kaplan-Meier survival curve with fitted Weibull curve over a 0-100 year timeline.]' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
169
+ page_content='3 Modelling in Cosmo Tech Asset and Simulations. Founded in 2010, Cosmo Tech is a technology company and a pioneer in the modeling of complex systems (https://cosmotech.com/).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
186
+ page_content=' Relying on its industry-validated modeling and simulation software platform, Cosmo Tech has developed a solution called Cosmo Tech Asset, henceforth called CTA.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
187
+ page_content='CTA allows users to build digital twins of asset portfolios in their full complexity, including network dependencies, operating strategies, and dynamic resource allocation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
188
+ page_content='3.1 Cosmo Tech Asset Platform. The different steps involved in the CTA platform are:' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
189
+ page_content='1. Experiment with the CTA platform’s pre-built health assessment methods and compare the results with internal initiatives.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
191
+ page_content=' For health assessment, the asset health index is a key simulation variable, and it is described in the next sub-section.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
192
+ page_content='2. Demonstrate the calibration of the reliability law (using the Weibull distribution) for simulations against the up-to-date condition of ITs, as well as historical IT-related data such as field observations, inspection data and measurement inputs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
194
+ page_content='3. Investigate the functional possibilities that would allow leveraging existing inspection results across ITs using extrapolation methods when applicable, thereby maximizing inspection result value.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
196
+ page_content='4. Finally, based on the achieved health assessment technique, use the simulation platform to tune the resources required to realize TenneT’s strategy for IT maintenance and replacement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
198
+ page_content='3.2 TenneT Asset Health Index. For health assessment, the TenneT asset health index (AHI) is used; it is shown in Table 1(a) (TenneT, 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
200
+ page_content='The AHI is based on asset age and failure probability, and it is used to drive short-term maintenance and long-term replacement strategies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
201
+ page_content=" It provides a consistent way to compare the overall asset health of TenneT's assets." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
202
+ page_content='The evaluation of the AHI is based on two metrics: (1) the probability of failure of the IT in the coming years, for AHI scores 1 to 6; and (2) the age of the IT, for AHI scores 7 to 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
205
+ page_content='In addition to the AHI, the study uses a reliability law from which failures are drawn during the simulations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
206
+ page_content='The reliability law corresponds to the KM survival function and the Weibull estimates described in Section 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
207
+ page_content='These laws have a cumulative distribution function, which represents the probability that a failure occurs before a certain age.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
208
+ page_content='The probability of failure over the next three years can be evaluated using the following formula:' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
209
+ page_content='P(X < t+3 | X > t) = 1 - P(X > t+3 | X > t) = 1 - P(X > t+3 and X > t) / P(X > t) = 1 - P(X > t+3) / P(X > t) = 1 - (1 - P(X < t+3)) / (1 - P(X < t)) = 1 - (1 - F(t+3)) / (1 - F(t)),' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
210
+ page_content='where F is the cumulative distribution function of the reliability law and X is a random variable representing the age at which a failure occurs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
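A minimal sketch of this conditional-probability evaluation, taking the fitted Weibull CDF as the reliability law (the β and η values are the 110 kV entries from Table 1; function names are illustrative):

```python
import math

def weibull_cdf(t, beta, eta):
    """F(t) = 1 - exp(-(t/eta)**beta), the reliability law's CDF."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def conditional_failure_prob(t, horizon, beta, eta):
    """P(X < t + horizon | X > t) = 1 - (1 - F(t + horizon)) / (1 - F(t))."""
    survive_t = 1.0 - weibull_cdf(t, beta, eta)
    survive_h = 1.0 - weibull_cdf(t + horizon, beta, eta)
    return 1.0 - survive_h / survive_t

# Chance that a 45-year-old 110 kV IT fails within the next 3 years
p = conditional_failure_prob(45.0, 3.0, 6.67, 63.79)
```

Because β > 1 for all three voltage families, this conditional probability grows with age, which is what maps older ITs onto the higher-risk AHI scores of 1 to 6.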
212
+ page_content='Table 1 (a) TenneT Asset Health Index (AHI) definition, (b) Classification of Resources (FTE: Full Time Employment).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
213
+ page_content='3.3 Simulation. The reliability law was used to evaluate the different scenarios for efficient maintenance planning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
215
+ page_content=' A simulation period of 100 years is chosen for this study since it is assumed that the most recent IT replacements will be in operation until the end of this century.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
216
+ page_content='The time-based scenario represents the current maintenance planning at TenneT.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
217
+ page_content=' It is compared against a condition-based scenario.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
218
+ page_content=' Both the scenarios are explained in detail in Table 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
219
+ page_content=' The resources are listed in Table 1(b).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
220
+ page_content=' Table 2 Different Scenarios under Study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
221
+ page_content='Table 2, flattened: Replacement, 220/380kV: 45 years (condition-based) | 45 years (time-based). Replacement, 110/150kV: AHI score red or purple (condition-based) | 45 years (time-based). Inspections on bay every 3/6/12 months, 220/380kV: no inspections (both scenarios). Inspections, 110/150kV: time-based starting at 25 years (both scenarios). In principle, both scenarios are very similar in the sense that the same simulation model dataset is used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
222
+ page_content=' The difference lies in the trigger for the replacement activities of the 110/150kV assets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
223
+ page_content='In fact, in the time-based scenario, which represents the current way of working, the trigger is based on the real age of the asset. As soon as the asset reaches 45 years of age, replacement is triggered, and the action is performed, as resources are unlimited.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
225
+ page_content='Table 1(a), flattened: AHI score | Colour | Definition: 1 Purple | within 3 years, 80% chance the asset is irreparably damaged; 2 Purple | within 3 years, 50% chance; 3 Purple | within 3 years, 20% chance; 4 Red | within 7 years, 80% chance; 5 Red | within 7 years, 50% chance; 6 Red | within 7 years, 20% chance; 7 Orange | older than 75% of the average age; 8 Orange | between 60% and 75% of the average age; 9 | older than 5 years old and less than 60% of the average age; 10 Green | younger than 5 years old.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
226
+ page_content='Table 1(b), flattened: Activity | Duration (h) | Required FTE | Material | Workforce | Total: Inspection every 3 years | 0.5 | 1 | 0 | 41.624 | 41.624; Inspection every 6 years | 1.33 | 2 | 49.81 | 180.18 | 229.99; Replacement IT 110kV | 40 | 10 | 8211 | 35000 | 43211; Replacement IT 150kV | 40 | 10 | 10044 | 35000 | 45044; Replacement IT 220kV | 40 | 10 | 15000 | 35000 | 50000; Replacement IT 380kV | 40 | 10 | 15000 | 35000 | 50000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
243
+ page_content='On the other hand, in the condition-based scenario, the trigger is based on the apparent age of the asset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
244
+ page_content=' The apparent age is an attribute of every asset that reflects its degradation rate and it can be different from the real age of the asset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
245
+ page_content=' If the apparent age is higher than the real age, the asset degrades faster than normal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
246
+ page_content=' If the apparent age is lower than the real age, the asset degrades slower than normal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
247
+ page_content='When the apparent age of the asset reaches 50 or 54 years, the asset is reaching an AHI score of 6 or 3 respectively, that is red or purple (see Table 1(a)), and the replacement action is triggered.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
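The two replacement triggers described above can be summarized in a small helper. This is hypothetical code, not CTA's API; the 45-year real-age threshold and the 50-year apparent-age threshold (AHI red or worse) are taken from the scenario description.

```python
def replacement_trigger(real_age, apparent_age, strategy):
    """Return True when IT replacement should be triggered.

    time-based: real age reaches 45 years (current TenneT practice).
    condition-based: apparent age reaches 50 years, i.e. AHI score 6 (red)
    or worse (54 years corresponds to score 3, purple).
    """
    if strategy == "time-based":
        return real_age >= 45
    if strategy == "condition-based":
        return apparent_age >= 50
    raise ValueError("unknown strategy: " + strategy)
```

An asset that degrades faster than normal (apparent age above real age) is therefore replaced earlier under the condition-based strategy, and later if it degrades slower than normal.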
248
+ page_content=' Figure 3 Unconstrained Scenarios Simulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
249
+ page_content=' Figure 4 40 FTE constrained Scenarios Simulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
250
+ page_content='[Figures 3-5 panels: TOTEX and human resources (HR) used over the simulation horizon to 2100 for time-based vs condition-based replacement; e.g. unconstrained TOTEX 1.49M vs 1.36M with average HR used 44.87 vs 40.63 FTE, and 40 FTE constrained TOTEX 1.20M vs 1.21M with average HR used 36.28 vs 36.19 FTE.]' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
250
+ page_content='Figure 5 60 FTE constrained Scenarios Simulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
263
+ page_content=' From the figures, two conclusions can be made: (1) replacement activities are the major cost driver in the TOTEX (Total Expenses), and (2) human resources (HR) costs are the major cost driver in the replacement costs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
264
+ page_content=' Simulation results show that when HR availability is restricted, there is no significant difference between the time-based and condition-based replacement strategies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
265
+ page_content=' In fact, switching to a condition-based strategy might not be beneficial in that case, since it requires change and investment for little to no reward.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
266
+ page_content=' If HR availability is guaranteed for the foreseeable future, then it is highly beneficial to switch from a time-based replacement strategy to a condition-based strategy, as this would contribute to flattening the curve.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
267
+ page_content=' Also, this would represent a lot of work at the beginning to prepare the necessary processes and investments for the new strategy, but would lead to significant gains in the long term.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
268
+ page_content=' 4 Conclusion Maintenance planning of high voltage ITs using real data from the Dutch transmission system operator was illustrated in this study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
269
+ page_content=' The study aimed at understanding how digital twin-enabled technology, along with failure data, can help TenneT to make better future maintenance strategies.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
270
+ page_content=' The strategies aimed at easing financial decisions related to replacements (in terms of flattening the replacement curve) and unavailability of ITs in the network.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
271
+ page_content=' Working on real data uncovered several challenges including missing data (both quantity and quality) and outliers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
272
+ page_content=' The non-parametric Kaplan-Meier survival analysis helped in parameter estimation of the Weibull distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
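The non-parametric estimator mentioned here can be written in a few lines (a minimal sketch; the paper itself uses the lifelines implementation cited in its references, and the toy ages below are invented):

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier estimate: S(t) = product over failure times t_i <= t of
    (1 - d_i / n_i), where d_i is the number of observed failures at t_i and
    n_i the number of units still at risk at t_i.
    observed: 1 = failure observed, 0 = right-censored (asset still in service).
    Returns a list of (time, survival probability) pairs."""
    pairs = sorted(zip(durations, observed))
    survival = []
    s = 1.0
    for ti in sorted(set(durations)):
        d = sum(1 for t, e in pairs if t == ti and e == 1)  # failures at ti
        n = sum(1 for t, e in pairs if t >= ti)             # still at risk at ti
        if d > 0:
            s *= 1.0 - d / n
        survival.append((ti, s))
    return survival

# Toy example: three assets failing at ages 1, 2 and 3 (no censoring)
curve = kaplan_meier([1, 2, 3], [1, 1, 1])
```

The resulting survival curve (or its log transform) can then be used to fit Weibull shape and scale parameters, as the study does via lifelines.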
273
+ page_content=' TenneT data could be translated into the data format used by the digital twin CTA tool, meaning that our data could be easily adapted to other software platforms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
274
+ page_content=' It is worth mentioning that in this study, neither data ownership nor data confidence hindered the progress.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
275
+ page_content=' Data confidence was built up even though multiple data sources had to be aligned.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
276
+ page_content=' TenneT partnered with Cosmo Tech to build the data ownership philosophy for successful digital twin implementation for maintenance planning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
277
+ page_content=' [Figure 5 panels: 60 FTE constrained, Time-Based vs Condition-Based Replacement; TOTEX 1.44M vs 1.34M, 43.35 vs 40.09 average FTE used, over 2050-2100.]' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
283
+ page_content=' References' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' Balzer, G. and Neumann, C., 2011. Asset simulation and life cycle assessment for gas insulated substation. CIGRE, Germany.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' Bland, J.M. and Altman, D.G., 1998. Survival probabilities (the Kaplan-Meier method). BMJ, 317(7172), pp.1572-1580.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' CIGRÉ WG 23.07: The paper-oil insulated measurement transformer, CIGRÉ Technical Brochure no. 57, 1990.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' CIGRÉ SC A3: State of the art of instrument transformers, CIGRÉ Technical Brochure no. 394, 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' CIGRE. Final Report of the 2004-2007 International Enquiry on Reliability of High Voltage Equipment, Part 4 - Instrument Transformers. Working Group A3.06, 2012.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' CIGRE. Guidelines for the Use of Statistics and Statistical Tools on Life Data, Working Group D1.39, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' Davidson-Pilon, C., 2019. lifelines: survival analysis in Python. Journal of Open Source Software, 4(40), p.1317.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' Khuntia, S.R., Rueda, J.L., Bouwman, S. and van der Meijden, M.A., 2016. A literature survey on asset management in electrical power [transmission and distribution] system. International Transactions on Electrical Energy Systems, 26(10), pp.2123-2133.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' Martin, D., Marks, J., Saha, T.K., Krause, O. and Mahmoudi, N., 2018. Investigation into modeling Australian power transformer failure and retirement statistics. IEEE Transactions on Power Delivery, 33(4), pp.2011-2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' Picher, P., Boudreau, J.F., Manga, A., Rajotte, C., Tardif, C., Bizier, G., Di Gaetano, N., Garon, D., Girard, B., Hamel, J.F. and Proulx, S., 2014. Use of health index and reliability data for transformer condition assessment and fleet ranking. A2-101, CIGRE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' Poljak, M. and Bojanić, B., 2010. Method for the reduction of in-service instrument transformer explosions. European Transactions on Electrical Power, 20(7), pp.927-937.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' Raetzke, S., Koch, M. and Anglhuber, M., 2012, September. Modern insulation condition assessment for instrument transformers. In 2012 IEEE International Conference on Condition Monitoring and Diagnosis (pp. 52-55). IEEE.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' Rinne, H., 2008. The Weibull distribution: a handbook. Chapman and Hall/CRC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' Tee, S., Liu, Q., Wang, Z., Hafid, F. and Tournet, P., 2021. Failure investigation and asset management of combined measuring instrument transformers. High Voltage, 6(1), pp.61-70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
+ page_content=' Wang, P., Li, Y. and Reddy, C.K., 2019. Machine learning for survival analysis: A survey. ACM Computing Surveys (CSUR), 51(6), pp.1-36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-tAzT4oBgHgl3EQfSvv8/content/2301.01239v1.pdf'}
.gitattributes CHANGED
@@ -5483,3 +5483,37 @@ fNFST4oBgHgl3EQfGjh7/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -tex
5483
  vdFJT4oBgHgl3EQffCyN/content/2301.11555v1.pdf filter=lfs diff=lfs merge=lfs -text
5484
  69E1T4oBgHgl3EQfnASE/content/2301.03304v1.pdf filter=lfs diff=lfs merge=lfs -text
5485
  PNFPT4oBgHgl3EQfnjVw/content/2301.13130v1.pdf filter=lfs diff=lfs merge=lfs -text
5486
+ L9E4T4oBgHgl3EQfKAwg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5487
+ 4NFKT4oBgHgl3EQf9C5Z/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5488
+ k9FRT4oBgHgl3EQfYTew/content/2301.13549v1.pdf filter=lfs diff=lfs merge=lfs -text
5489
+ VNAzT4oBgHgl3EQfX_zt/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5490
+ X9AzT4oBgHgl3EQf1_4_/content/2301.01807v1.pdf filter=lfs diff=lfs merge=lfs -text
5491
+ h9E1T4oBgHgl3EQfMwOA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5492
+ KNE3T4oBgHgl3EQfvQuV/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5493
+ 4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf filter=lfs diff=lfs merge=lfs -text
5494
+ tNAyT4oBgHgl3EQfmvgr/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5495
+ ANFQT4oBgHgl3EQf8jdP/content/2301.13447v1.pdf filter=lfs diff=lfs merge=lfs -text
5496
+ _9FLT4oBgHgl3EQfwi-I/content/2301.12164v1.pdf filter=lfs diff=lfs merge=lfs -text
5497
+ -NFQT4oBgHgl3EQfKDVk/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5498
+ 89FLT4oBgHgl3EQfBi6R/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5499
+ jtAzT4oBgHgl3EQfpf1N/content/2301.01613v1.pdf filter=lfs diff=lfs merge=lfs -text
5500
+ J9FLT4oBgHgl3EQfKi8f/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5501
+ 5tE1T4oBgHgl3EQfBAK1/content/2301.02847v1.pdf filter=lfs diff=lfs merge=lfs -text
5502
+ V9E0T4oBgHgl3EQf3AJU/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5503
+ X9AzT4oBgHgl3EQf1_4_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5504
+ ydFST4oBgHgl3EQfTTiJ/content/2301.13769v1.pdf filter=lfs diff=lfs merge=lfs -text
5505
+ INAzT4oBgHgl3EQfHvu0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5506
+ PNFPT4oBgHgl3EQfnjVw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5507
+ AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf filter=lfs diff=lfs merge=lfs -text
5508
+ RNA0T4oBgHgl3EQfDv8H/content/2301.02006v1.pdf filter=lfs diff=lfs merge=lfs -text
5509
+ W9AyT4oBgHgl3EQfWPem/content/2301.00160v1.pdf filter=lfs diff=lfs merge=lfs -text
5510
+ -NFQT4oBgHgl3EQfKDVk/content/2301.13258v1.pdf filter=lfs diff=lfs merge=lfs -text
5511
+ atFIT4oBgHgl3EQflys0/content/2301.11306v1.pdf filter=lfs diff=lfs merge=lfs -text
5512
+ z9AzT4oBgHgl3EQfC_rr/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5513
+ ANFQT4oBgHgl3EQf8jdP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5514
+ BtE1T4oBgHgl3EQfpgUG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5515
+ 8NFLT4oBgHgl3EQfsy_c/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5516
+ jtAzT4oBgHgl3EQfpf1N/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5517
+ RNA0T4oBgHgl3EQfDv8H/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5518
+ k9FRT4oBgHgl3EQfYTew/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5519
+ LdFRT4oBgHgl3EQf1zij/content/2301.13658v1.pdf filter=lfs diff=lfs merge=lfs -text
1tE1T4oBgHgl3EQflQSM/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aa722dbea8c39827f618cb5bb42fbe05c5b98b2b25bec6ef11d56dd7201bd13c
3
+ size 242455
29AzT4oBgHgl3EQfDvqc/content/tmp_files/2301.00982v1.pdf.txt ADDED
@@ -0,0 +1,1521 @@
1
+ Analogical Inference Enhanced Knowledge Graph Embedding
2
+ Zhen Yao1*, Wen Zhang1*, Mingyang Chen2, Yufeng Huang1, Yi Yang4, Huajun Chen2,3,5†
3
+ 1School of Software Technology, Zhejiang University
4
+ 2College of Computer Science and Technology, Zhejiang University
5
+ 3Donghai Laboratory, Zhoushan 316021, China
6
+ 4Huawei Technologies Co., Ltd
7
+ 5Alibaba-Zhejiang University Joint Institute of Frontier Technologies
8
+ {yz0204, zhang.wen, mingyangchen, huangyufeng, huajunsir}@zju.edu.cn
9
10
+ Abstract
11
+ Knowledge graph embedding (KGE), which maps entities
12
+ and relations in a knowledge graph into continuous vector
13
+ spaces, has achieved great success in predicting missing links
14
+ in knowledge graphs. However, knowledge graphs often con-
15
+ tain incomplete triples that are difficult to inductively infer
16
+ by KGEs. To address this challenge, we resort to analogi-
17
+ cal inference and propose a novel and general self-supervised
18
+ framework AnKGE to enhance KGE models with analog-
19
+ ical inference capability. We propose an analogical object
20
+ retriever that retrieves appropriate analogical objects from
21
+ entity-level, relation-level, and triple-level. And in AnKGE,
22
+ we train an analogy function for each level of analogical in-
23
+ ference with the original element embedding from a well-
24
+ trained KGE model as input, which outputs the analogical
25
+ object embedding. In order to combine inductive inference
26
+ capability from the original KGE model and analogical in-
27
+ ference capability enhanced by AnKGE, we interpolate the
28
+ analogy score with the base model score and introduce the
29
+ adaptive weights in the score function for prediction. Through
30
+ extensive experiments on FB15k-237 and WN18RR datasets,
31
+ we show that AnKGE achieves competitive results on link
32
+ prediction task and well performs analogical inference.
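The score interpolation the abstract describes can be sketched schematically (an illustration under stated assumptions, not the authors' implementation; the adaptive weight is reduced here to a plain scalar in [0, 1], and the function name is hypothetical):

```python
def interpolated_score(base_score, analogy_score, weight):
    """Interpolate the analogy score with the base KGE model score, as the
    abstract describes; `weight` plays the role of the adaptive weight."""
    return weight * analogy_score + (1.0 - weight) * base_score
```

With weight 0 the prediction falls back to the original KGE model's inductive score; with weight 1 it relies entirely on the analogical inference score.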
33
+ 1 Introduction
+ Knowledge graphs (KGs), which store large numbers of triples in the form
+ (head entity, relation, tail entity), (h, r, t) for short, are popular data
+ structures for representing factual knowledge. Knowledge graph projects such
+ as Freebase (Bollacker et al. 2008), WordNet (Miller 1994), YAGO (Suchanek,
+ Kasneci, and Weikum 2007) and DBpedia (Lehmann et al. 2015) are significant
+ foundations for artificial intelligence applications. They have been
+ successfully used in downstream tasks such as word sense disambiguation
+ (Bevilacqua and Navigli 2020), question answering (Yasunaga et al. 2021), and
+ information extraction (Hu et al. 2021), gaining widespread attention.
+ However, most KGs are incomplete, so predicting the missing links between
+ entities, called link prediction, is a fundamental problem for KGs.
+ *These authors contributed equally.
+ †Corresponding Author.
+ Copyright © 2023, Association for the Advancement of Artificial
+ Intelligence (www.aaai.org). All rights reserved.
+ One common approach to this problem is knowledge graph embedding (KGE),
+ which makes predictions through a predefined triple score function that takes
+ learnt entity and relation embeddings as input. Many KGE models have been
+ proposed, such as TransE (Bordes et al. 2013), DistMult (Yang et al. 2015),
+ RotatE (Sun et al. 2019) and HAKE (Zhang et al. 2020). These methods have
+ achieved great success on the knowledge graph completion task.
+ For most KGE methods, the parametric learning paradigm can be viewed as
+ memorization: the training data is a book and predicting missing links is a
+ closed-book test (Chen et al. 2022), which belongs to inductive inference.
+ However, large knowledge graphs often contain incomplete triples that are
+ difficult to infer inductively under this memorization paradigm. The problem
+ may instead be solved by analogical inference, a referential method that
+ retrieves similar solutions to solve new problems, much like an open-book
+ examination. For example, most people probably never learnt, let alone
+ remember, what company Ron Wayne founded. However, if they know that Ron
+ Wayne and Steve Jobs were co-founders, i.e., Steve Jobs and Ron Wayne are
+ analogical objects in this context, and it is well known that Steve Jobs
+ founded Apple Inc., then they can analogically infer that Ron Wayne founded
+ Apple Inc.
+ To enhance KGEs with analogical inference capability, three problems must
+ be solved: 1) How do we define the analogical objects of elements for a
+ given task? 2) How do we enable the model to map elements to analogical
+ objects? 3) How do we combine the original inductive inference capability
+ with the enhanced analogical inference capability?
+ We propose AnKGE, a novel and general self-supervised framework that solves
+ these problems and enhances well-trained KGEs with analogical inference
+ capability. For problem 1, we assume that an analogical object is one that
+ solves the given task well; inspired by the nearest neighbor language model
+ (Khandelwal et al. 2020), we propose an analogical retriever covering
+ objects at three levels: entity, relation, and triple. Specifically, we
+ treat the score function of the KGE as an assessment of triple quality and
+ regard the highest-scoring replacement triples as the appropriate analogical
+ objects.
+ arXiv:2301.00982v1 [cs.AI] 3 Jan 2023
+ For problem 2, we train a projecting function using analogical objects as
+ supervision signals. This function projects original objects onto
+ appropriate analogical objects. For problem 3, we interpolate the analogy
+ score with the base model score to combine the original inductive inference
+ capability and the enhanced analogical inference capability. Moreover, we
+ introduce adaptive weights to adjust analogical inference in the knowledge
+ graph completion task.
+ Finally, through link prediction experiments on the FB15k-237 and WN18RR
+ datasets, we demonstrate that AnKGE is highly compatible and outperforms the
+ other baseline models. To the best of our knowledge, AnKGE is the first
+ framework to enhance KGEs with analogical inference ability.
+ In summary, our contributions in this work include:
+ • We explore the knowledge graph completion task from the analogical
+ inference view and propose an effective retrieval method covering three
+ levels to obtain appropriate analogy objects.
+ • We propose a novel analogy-enhanced framework called AnKGE, which
+ projects original objects onto appropriate objects for analogical inference.
+ To our knowledge, AnKGE is the first knowledge graph embedding framework to
+ enhance analogical inference ability.
+ • We conduct experimental evaluations demonstrating that AnKGE is highly
+ compatible and achieves competitive performance on the FB15k-237 and WN18RR
+ datasets, promising practical applications.
+ 2 Related Work
+ Knowledge graph embedding
+ According to previous work (Zhang et al. 2022), KGE methods can be divided
+ into two categories based on the scoring function and whether a global graph
+ structure is utilized. The first category is Conventional KGEs (C-KGEs),
+ which apply a geometric assumption in vector space for true triples and use
+ a single triple as input for triple scoring. Conventional KGEs use the score
+ function to measure the plausibility of a triple. TransE (Bordes et al.
+ 2013) is a representative conventional KGE method whose score function is
+ ∥h + r − t∥2. Moreover, many variants improve on TransE, such as RotatE
+ (Sun et al. 2019), DistMult (Yang et al. 2015) and HAKE (Zhang et al. 2020).
+ The other category is GNN-based methods, which, instead of embedding single
+ triples, score triples using representations of entities and relations
+ aggregated from their neighbors in the graph, capturing graph patterns
+ explicitly. R-GCN (Schlichtkrull et al. 2018) is the first GNN framework to
+ model relational data; it introduces relation-specific transformations for
+ neighbor aggregation. SE-GNN (Li et al. 2022) models three levels of
+ semantic evidence in knowledge embedding. Note that the three levels SE-GNN
+ introduces from the semantic evidence view differ from our three levels of
+ analogical objects.
+ Enhanced KGE framework
+ Recently, several frameworks and strategies have been proposed to improve
+ the performance of KGE models, called enhanced KGEs, such as CAKE (Niu et
+ al. 2022), PUDA (Tang et al. 2022) and REP (Wang et al. 2022). CAKE is a
+ commonsense-aware knowledge embedding framework that automatically extracts
+ commonsense from factual triples with entity concepts and generates
+ commonsense augmentations to facilitate high-quality negative sampling. PUDA
+ is a data augmentation strategy addressing the false negative and data
+ sparsity issues. REP is a post-processing technique that adapts pre-trained
+ KG embeddings with graph context. Our method, which enhances a well-trained
+ KGE model with analogical inference capability, belongs to the enhanced KGE
+ category.
+ Analogical inference
+ In classic artificial intelligence, analogical inference was an active
+ research topic. However, early computational models of analogy-making
+ (Gentner 1983; Turney 2008) mainly focused on structure mapping theory and
+ its implementation in the structure mapping engine. Recently, researchers
+ proposed the k-nearest neighbor language model (kNN-LM) (Khandelwal et al.
+ 2020), which directly queries training examples at test time and can be
+ considered an analogical inference model in natural language processing.
+ While effective, such models often require retrieval from a large datastore
+ at test time, significantly increasing the inference overhead. In the field
+ of knowledge graphs, analogical inference has barely been studied as a
+ solution to the incompleteness problem. ANALOGY (Liu, Wu, and Yang 2017) is
+ the first method to model analogical structures in multi-relational
+ embedding, but its performance is limited. Unlike our method, which uses the
+ nearest neighbor approach to perform explicit analogy, ANALOGY models
+ analogical relations implicitly through the commutativity constraint of
+ normal matrices.
+ 3 Analogical Object Retriever
+ Before introducing our method, in this section we first present the
+ background of knowledge graphs and analogical inference, and then propose
+ the analogical object retrievers that retrieve appropriate analogical
+ objects at the entity, relation, and triple levels. The retrieved analogical
+ objects will be used as supervision signals in our method.
+ Background
+ A knowledge graph is denoted as G = (E, R, F), where E represents the set
+ of entities, R represents the set of relations, and F = {(h, r, t)} ⊆
+ E × R × E represents the set of triple facts.
+ Analogical inference, long researched in artificial intelligence, maps a
+ target problem to a known source problem so that known knowledge can be
+ effectively utilized (Hall 1989). Applying analogical inference to the link
+ prediction task (h, r, ?) in knowledge graphs, instead of directly
+ predicting the tail entity t, we can make predictions through similar
+ triples that we already know, i.e., triples in the train dataset. We
+ consider similar triples to be composed of analogical objects of (h, r, t).
+ Specifically, we assume that analogy objects may come from three levels:
+ the analogy of the head entity h, yielding the similar triple (h′, r, t)
+ (entity-level); the analogy of the relation r, yielding the similar triple
+ (h, r′, t) (relation-level); and the analogy of the combination pair (h, r),
+ yielding the similar triple (h′, r′, t) (triple-level). Thus, we propose
+ three retrievers to obtain the analogical objects of each level.
+ Entity-Level Retriever
+ The retriever is designed based on the score function fkge(h, r, t)
+ predefined in a well-trained KGE model, where triples with higher scores are
+ assumed to have a higher probability of being true. Inspired by the nearest
+ neighbor language model (Khandelwal et al. 2020), we replace all possible
+ objects of the triple and regard the highest-scoring replacement triples as
+ the appropriate analogical objects. Given a triple (h, r, t), the
+ entity-level retriever retrieves similar true triples (h′, r, t) for
+ entity-level analogical inference. For example, we can infer that the answer
+ to (Sergey Brin, found, ?) is Google through (Larry Page, found, Google) if
+ we know Sergey Brin and Larry Page are co-founders.
+ Specifically, in the entity-level retriever, we first replace h with all
+ entities, resulting in |E| replacement triples, and then regard the triples
+ with the highest scores measured by the KGE as similar triples. We call the
+ head entities of the similar triples the analogical objects from the
+ entity-level retriever. Thus the analogical object set can be represented as
+ E^hrt_Ne = {hi | Top({fkge(hi, r, t) | hi ∈ E})Ne},   (1)
+ where Top(·)k denotes the k elements with the top-k values among all inputs,
+ fkge(·, ·, ·) is the predefined score function of the KGE model, and hrt
+ denotes a specific triple (h, r, t) as input. If not otherwise specified, we
+ omit hrt and use ENe instead of E^hrt_Ne for simplicity. Compared to
+ retrieving similar triples directly from the train dataset, retrieving
+ according to KGE scores helps overcome the incompleteness of KGs.
+ Relation-Level Retriever
+ Given (h, r, t), the relation-level retriever retrieves (h, r′, t) for
+ relation-level analogical inference, since KGs contain relations with
+ similar contexts. For example, the founder of a company is usually a board
+ member, so the relation-level analogy object of found is board member.
+ Similar to the entity-level retriever, the analogical object set of
+ (h, r, t) from the relation-level retriever is as follows:
+ RNr = {ri | Top({fkge(h, ri, t) | ri ∈ R})Nr}.   (2)
+ Triple-Level Retriever
+ Given (h, r, t), the triple-level retriever retrieves (h′, r′, t) for
+ triple-level analogical inference, which combines the entity-level and
+ relation-level retrievers. For instance, Sergey Brin is the founder of
+ Google and Sundar Pichai is the CEO of Google; therefore, the triple-level
+ analogical object of (Sergey Brin, found) is (Sundar Pichai, CEO). In
+ practice, the number of candidate (h′, r′) pairs runs into the millions in
+ most knowledge graphs. To reduce the cost of retrieving candidate pairs,
+ and inspired by the principle of locality, we select m entities and n
+ relations with high triple scores separately and then pair them with each
+ other. Thus the set of analogical objects, namely (h′, r′) pairs, from the
+ triple-level retriever is
+ TNt = {(hi, ri) | Top({fkge(hi, ri, t) | hi ∈ Em, ri ∈ Rn})Nt}.   (3)
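The three retrievers above are all top-k selections under the base model's score function. The following is a minimal pure-Python sketch of Eqs. (1)-(3), assuming a TransE-style score (negative distance, higher is more plausible); the toy embeddings and entity names are illustrative, not values from the paper.

```python
import math

def score_transe(h, r, t):
    # TransE-style plausibility: negative Euclidean distance of h + r - t.
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def topk(items, key, k):
    # Top(.)_k from the paper: the k items with the highest key values.
    return sorted(items, key=key, reverse=True)[:k]

def entity_level(r, t, entities, k):
    # Eq. (1): replace the head with every entity and keep the top-k scorers.
    return topk(list(entities), lambda e: score_transe(entities[e], r, t), k)

def relation_level(h, t, relations, k):
    # Eq. (2): replace the relation with every relation and keep the top-k.
    return topk(list(relations), lambda q: score_transe(h, relations[q], t), k)

def triple_level(h, r, t, entities, relations, m, n, k):
    # Eq. (3): pre-select m entities and n relations separately (the locality
    # shortcut), then score the m*n (h', r') pairs and keep the top k.
    pairs = [(e, q) for e in entity_level(r, t, entities, m)
                    for q in relation_level(h, t, relations, n)]
    return topk(pairs,
                lambda p: score_transe(entities[p[0]], relations[p[1]], t), k)

# Toy embeddings in which "larry_page" + "found" lands exactly on "google".
entities = {"sergey_brin": [0.9, 0.1], "larry_page": [1.0, 0.0],
            "google": [1.5, 0.5]}
relations = {"found": [0.5, 0.5], "ceo_of": [2.0, 2.0]}
print(entity_level(relations["found"], entities["google"], entities, 2))
```

The triple-level shortcut scores only m·n pairs instead of |E|·|R|, which is what makes retrieval over million-pair candidate spaces tractable.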
+ 4 Methodology
+ In this section, we present a novel KGE-enhancing framework called Analogy
+ Enhanced Knowledge Graph Embedding (AnKGE), which models the three levels of
+ analogical inference introduced in Section 3. We first introduce the
+ definition of the analogy function (Section 4.1) and how to train it using
+ analogical objects (Sections 4.2 and 4.3). Finally, we introduce how to
+ combine the original inductive inference capability with the enhanced
+ analogical inference capability in the knowledge graph completion task
+ (Section 4.4).
+ 4.1 Analogy Function
+ Given a well-trained KGE model M = {E, R, fkge, Θ}, where E, R and fkge are
+ the entity embedding table, relation embedding table, and score function of
+ M, and Θ is the set of other parameters, AnKGE enhances M with analogical
+ inference capability through a projecting function called the analogy
+ function f. We train an analogy function for each level of analogical
+ inference, which takes the original element embedding from E or R in M as
+ input and outputs the analogical object embedding used for link prediction.
+ Specifically, the analogy function for relation-level analogical inference,
+ frel, maps the original embedding of a relation r in (h, r, t) to the
+ analogical embedding through a relation projecting vector vR_r ∈ R^dr:
+ frel(r) = ra = vR_r ◦ r,   (4)
+ where dr is the relation hidden dimension and ◦ is the element-wise product.
+ Similarly, the analogy function for entity-level analogical inference,
+ fent, maps the original embedding of an entity h in (h, r, t) to the
+ analogical embedding. Considering that an entity generally tends to be
+ associated with multiple relations, we define fent as:
+ fent(h, r) = ha = vE_h ◦ h + λ Mtrans × (vR_r ◦ r),   (5)
+ where vE_h ∈ R^de is the entity projecting vector and de is the entity
+ hidden dimension. Mtrans ∈ R^(de×dr) denotes the transformation matrix that
+ takes the relation r into consideration, and λ is a weight hyper-parameter.
+ The analogy function for triple-level analogical inference, ftrp, outputs
+ the analogical embedding of an entity-relation pair by combining the
+ entity-level and relation-level embeddings according to the KGE:
+ ftrp(h, r) = za = gkge(ha, ra),   (6)
+ where gkge(·, ·) is the function in the KGE that maps a head entity
+ embedding to the tail entity embedding given a relation embedding.
+ gkge(·, ·) and fkge(·, ·, ·) of representative KGE models are provided in
+ Appendix A.
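Eqs. (4)-(6) can be sketched in a few lines of pure Python. This is a hedged illustration, not the paper's implementation: it instantiates gkge with TransE (tail = h + r) and uses toy vectors and an identity Mtrans.

```python
def hadamard(u, v):
    # Element-wise product, the ◦ operator in Eqs. (4)-(5).
    return [ui * vi for ui, vi in zip(u, v)]

def matvec(M, x):
    # Dense matrix-vector product for the Mtrans term.
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def f_rel(r, v_r):
    # Eq. (4): relation-level analogy embedding r_a = v_r ◦ r.
    return hadamard(v_r, r)

def f_ent(h, r, v_h, v_r, M_trans, lam):
    # Eq. (5): h_a = v_h ◦ h + λ · Mtrans (v_r ◦ r); the second term lets
    # the projection of h depend on the relation it appears with.
    proj = matvec(M_trans, hadamard(v_r, r))
    return [a + lam * b for a, b in zip(hadamard(v_h, h), proj)]

def g_transe(h, r):
    # g_kge for TransE: the tail implied by a head and a relation, h + r.
    return [hi + ri for hi, ri in zip(h, r)]

def f_trp(h, r, v_h, v_r, M_trans, lam):
    # Eq. (6): triple-level analogy embedding z_a = g_kge(h_a, r_a).
    return g_transe(f_ent(h, r, v_h, v_r, M_trans, lam), f_rel(r, v_r))

h, r = [1.0, 2.0], [0.5, -0.5]
v_h, v_r = [1.0, 1.0], [2.0, 0.0]
identity = [[1.0, 0.0], [0.0, 1.0]]
ha = f_ent(h, r, v_h, v_r, identity, lam=1.0)
za = f_trp(h, r, v_h, v_r, identity, lam=1.0)
```

Only the projecting vectors and Mtrans are trained; the base embeddings h and r stay frozen, so the analogy functions add little capacity on top of M.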
+ 4.2 Analogy Objects Aggregator
+ To enhance the framework's robustness for analogical inference, we use the
+ analogical objects retrieved following Section 3 as supervision signals for
+ the analogy functions. Specifically, we make the analogy embedding
+ introduced in Section 4.1 approach the weighted average of the analogical
+ objects from the KGE model M.
+ [Figure 1 diagram: base KGE model (top: score function, entity and relation
+ embeddings) and AnKGE (bottom: analogical retriever, analogy function,
+ entity/relation/triple analogy embeddings and losses, analogy score and
+ base model score for link prediction)]
+ Figure 1: The AnKGE structure diagram with TransE as the base model. For
+ simplicity, the number of analogical objects at each of the three levels is
+ set to 1. The upper half of the figure shows the base model module: the
+ predefined score function is applied to the learnt embeddings to obtain the
+ well-trained model. The lower half shows the AnKGE module. First, AnKGE
+ retrieves the analogy objects for training the analogy function (solid
+ arrows indicate the training process). Then, AnKGE recomputes the prediction
+ ranking by interpolating the analogy score (dashed arrows indicate the
+ testing process).
+ The aggregated embeddings at the entity level and relation level, h+ and r+
+ respectively, are calculated as follows:
+ h+ = Σ_{hi ∈ ENe} hi S(fkge(hi, r, t)),   (7)
+ r+ = Σ_{ri ∈ RNr} ri S(fkge(h, ri, t)),   (8)
+ where S(·) is the softmax function that converts a vector of K real numbers
+ into a probability distribution over K possible outcomes, formulated as
+ S(ci) = e^{ci} / Σ_{k=1}^{K} e^{ck}.
+ The triple-level aggregated embedding z+ is obtained by first aggregating
+ the entity and relation embeddings separately and then computing the
+ combined embedding, which can be formulated as:
+ z+ = gkge(z+_e, z+_r),
+ z+_e = Σ_{(hi,ri) ∈ TNt} hi S(fkge(hi, ri, t)),
+ z+_r = Σ_{(hi,ri) ∈ TNt} ri S(fkge(hi, ri, t)).   (9)
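The softmax-weighted aggregation of Eqs. (7)-(9) can be sketched as follows; this is a toy illustration with hand-picked vectors, assuming the retriever has already returned the embeddings and their base-model scores.

```python
import math

def softmax(xs):
    # S(.): converts K real scores into a probability distribution
    # (shifted by the max for numerical stability).
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def aggregate(embeddings, scores):
    # Eqs. (7)-(9): softmax-weighted average of retrieved analogical
    # objects, so higher-scoring objects dominate the supervision target.
    w = softmax(scores)
    dim = len(embeddings[0])
    return [sum(wi * e[d] for wi, e in zip(w, embeddings))
            for d in range(dim)]

# Equal scores average evenly; a much higher score dominates the target.
h_plus = aggregate([[1.0, 0.0], [0.0, 1.0]], [0.3, 0.3])
h_skew = aggregate([[1.0, 0.0], [0.0, 1.0]], [10.0, 0.0])
```

Because the weights come from the base model's own scores, a confident retrieval collapses the target onto a single analogical object, while uncertain retrievals blend several of them.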
+ 4.3 Loss Function
+ The training goal of the analogy function is to reduce the distance between
+ the analogy embedding and the aggregated embedding obtained following
+ Sections 4.1 and 4.2 respectively. In addition, considering that fkge acts
+ as a prior on the truth value of triples, we take the analogy triple score
+ as another supervision signal. Therefore, given a pair consisting of an
+ analogy embedding Xa and an aggregated embedding X+ for a triple (h, r, t),
+ the loss function is
+ L(X, (h, r, t)) = log σ(γ ∥Xa − X+∥2 − fkge(h, r, t)),   (10)
+ where γ is a hyper-parameter of the loss function, σ is the sigmoid
+ function, and ∥·∥2 is the Euclidean norm.
+ However, the three levels of analogical inference are not equally important
+ for different triples. We add weight parameters to the loss of each of the
+ three levels, and the final training objective is1:
+ min Loss = Σ_{(h,r,t) ∈ F} [ βE L(h, (ha, r, t)) + βR L(r, (h, ra, t))
+                              + βT L(z, (ha, ra, t)) ].   (11)
+ Considering the different contributions of the three levels, we introduce
+ βE, βR and βT to adjust the gradient descent. The distribution of the
+ three-level loss weights is positively correlated with the score of the
+ analogy triple. Due to page limitations, we put the calculation details in
+ Appendix B.
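A minimal sketch of the per-level loss in Eq. (10), with a numerically stable log-sigmoid; the vectors and score values are toy inputs, not from the paper.

```python
import math

def log_sigmoid(x):
    # Numerically stable log σ(x).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def ankge_loss(x_a, x_plus, triple_score, gamma=10.0):
    # Eq. (10): L = log σ(γ ||X_a − X+||_2 − f_kge(h, r, t)).
    # Minimizing pulls the analogy embedding X_a toward the aggregated
    # target X+, modulated by how plausible the base KGE finds the triple.
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_a, x_plus)))
    return log_sigmoid(gamma * dist - triple_score)

# A perfectly projected embedding yields a lower loss than a distant one.
close = ankge_loss([1.0, 0.0], [1.0, 0.0], triple_score=5.0)
far = ankge_loss([1.0, 0.0], [0.0, 1.0], triple_score=5.0)
```

At zero distance the loss reduces to log σ(−fkge), so well-scored analogy triples remain the strongest supervision signal, matching the prior role of fkge described above.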
+ 4.4 Link Prediction
+ For a test triple (h, r, t) in the test set Fte, we follow the kNN-LM
+ (Khandelwal et al. 2020) and interpolate the analogy
+ 1 During the gradient update, the parameters of the original model are
+ frozen.
+ Model                               |       FB15k-237        |        WN18RR
+                                     | MRR  Hit@1 Hit@3 Hit@10| MRR  Hit@1 Hit@3 Hit@10
+ Conventional KGE
+ TransE (Bordes et al. 2013)         |0.317 0.223 0.352 0.504 |0.224 0.022 0.390 0.520
+ ANALOGY (Liu, Wu, and Yang 2017)    |0.256 0.165 0.290 0.436 |0.405 0.363 0.429 0.474
+ RotatE (Sun et al. 2019)            |0.336 0.244 0.370 0.524 |0.473 0.428 0.491 0.564
+ HAKE (Zhang et al. 2020)            |0.349 0.252 0.385 0.545 |0.496 0.452 0.513 0.580
+ Rot-Pro (Song, Luo, and Huang 2021) |0.344 0.246 0.383 0.540 |0.457 0.397 0.482 0.577
+ PairRE (Chao et al. 2021)           |0.348 0.254 0.384 0.539 |0.455 0.413 0.469 0.539
+ DualE (Cao et al. 2021)             |0.365 0.268 0.400 0.559 |0.492 0.444 0.513 0.584
+ GNN-based KGE
+ R-GCN (Schlichtkrull et al. 2018)   |0.249 0.151 0.264 0.417 |  -     -     -     -
+ A2N (Bansal et al. 2019)            |0.317 0.232 0.348 0.486 |0.450 0.420 0.460 0.510
+ CompGCN (Vashishth et al. 2020)     |0.355 0.264 0.390 0.535 |0.479 0.443 0.494 0.546
+ SE-GNN (Li et al. 2022)             |0.365 0.271 0.399 0.549 |0.484 0.446 0.509 0.572
+ Enhanced KGE
+ CAKE (Niu et al. 2022)              |0.321 0.226 0.355 0.515 |  -     -     -     -
+ PUDA (Tang et al. 2022)             |0.369 0.268 0.408 0.578 |0.481 0.436 0.498 0.582
+ REP (Wang et al. 2022)              |0.354 0.262 0.388 0.540 |0.488 0.439 0.505 0.588
+ AnKGE-HAKE (ours)                   |0.385 0.288 0.428 0.572 |0.500 0.454 0.515 0.587
+ Table 1: Link prediction results on FB15k-237 and WN18RR. The best results
+ are in bold and the second-best results are underlined.
+ score with the base model score to obtain the final score function:
+ Score(h, r, t) = fkge(h, r, t) + λE fkge(ha, r, t)
+                + λR fkge(h, ra, t) + λT fkge(ha, ra, t),   (12)
+ where λ is the adaptive weight parameter, which dynamically adjusts the
+ analogy weight according to the training triples. λE is proportional to the
+ number of triples with the same (r, t) in the training set, λR to the
+ number of triples with the same (h, t), and λT to the number of triples
+ with the same tail entity. The formula for the adaptive weight parameters
+ is2:
+ λE = min(|{(hi, r, t) ∈ F}| / Ne, 1) × αE,
+ λR = min(|{(h, ri, t) ∈ F}| / Nr, 1) × αR,
+ λT = min(|{(hi, ri, t) ∈ F}| / Nt, 1) × αT,   (13)
+ where αE, αR and αT are basic weight hyper-parameters. The adaptive weights
+ use the train dataset to determine whether test triples are suitable for
+ the different levels of analogical inference. When no level of analogical
+ inference is suitable, the score function degenerates to the base KGE
+ model. In effect, AnKGE re-ranks the triples that are hard for the base
+ model to predict by analogical inference, improving prediction performance.
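The interpolation of Eqs. (12)-(13) can be sketched as below. The counts and scores are hypothetical toy values; `match_count` stands for the number of training triples sharing the relevant pattern for each level.

```python
def adaptive_weight(match_count, N, alpha):
    # Eq. (13): λ = min(count / N, 1) × α. `match_count` is how many
    # training triples share the pattern for this analogy level
    # (e.g. the same (r, t) for λE).
    return min(match_count / N, 1.0) * alpha

def final_score(f_base, f_ent, f_rel, f_trp, lam_e, lam_r, lam_t):
    # Eq. (12): interpolate the base model score with the three analogy
    # scores; with all λ = 0 this is exactly the base model's score.
    return f_base + lam_e * f_ent + lam_r * f_rel + lam_t * f_trp

# A frequent pattern saturates at its basic weight α; an unseen pattern
# gets weight 0, so that level contributes nothing to the final score.
lam_r = adaptive_weight(match_count=12, N=5, alpha=0.1)    # capped at α
lam_none = adaptive_weight(match_count=0, N=5, alpha=0.1)  # unseen: 0.0
s = final_score(-1.0, 0.0, 2.0, 0.0, lam_e=0.0, lam_r=lam_r, lam_t=0.0)
```

The `min(·, 1)` cap is what makes the weights adaptive rather than unbounded: frequently supported patterns all receive the same basic weight α, while rare patterns are discounted proportionally.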
+ 5 Experiments
+ In this section, we present and analyze the experimental results.3 We first
+ introduce the experimental settings in detail. Then we show the
+ effectiveness and compatibility of AnKGE with multiple base KGE models.
+ Besides, we further analyze the effect of the three levels of analogical
+ inference through an ablation study. Finally, we conduct a case study
+ presenting a new view of the explanations of knowledge graph inference via
+ analogical inference.
+ 2 For link prediction, we add reverse relations to expand the dataset and
+ predict the tail entity only, which is equivalent to predicting both head
+ and tail entities. Each prediction replaces the tail entity with all
+ entities, so there is no risk of label leakage.
+ 3 Our code is available at https://github.com/zjukg/AnKGE
+ 5.1 Experiments Setup
+ Dataset
+ We conduct experiments on the link prediction task on two well-known
+ benchmarks: WN18RR and FB15k-237, subsets of WN18 and FB15k respectively.
+ Previous work (Dettmers et al. 2018) indicated a test leakage flaw in WN18
+ and FB15k: test triples appear in the train dataset with inverse relations.
+ WN18RR and FB15k-237 are the modified versions with inverse relations
+ removed, so we use them as the experiment datasets. The statistics of these
+ datasets are summarized in Appendix C.
+ Evaluation protocol
+ We evaluate KGE framework performance with four common evaluation metrics:
+ the mean reciprocal rank of correct entities over the whole entity set
+ (MRR) and the percentage of test triples with correct entities ranked in
+ the top 1/3/10 (Hit@1, Hit@3, Hit@10). For a test task (h, r, ?) → t, we
+ replace the tail with all entities to create corrupted triples. Following
+ the filtered setting protocol, we exclude the other true triples appearing
+ in the train, valid and test datasets. Finally, we sort the filtered
+ corrupted triples according to the triple scores.
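The filtered evaluation protocol can be sketched as follows; the candidate scores are toy values, and `other_true` stands for the tails that form other known-true triples in train/valid/test.

```python
def filtered_rank(scores, target, other_true):
    # Rank the gold tail among all candidates under the filtered setting:
    # candidates completing other known-true triples are excluded first.
    target_score = scores[target]
    return 1 + sum(1 for e, s in scores.items()
                   if e != target and e not in other_true
                   and s > target_score)

def mrr(ranks):
    # Mean reciprocal rank over all test triples.
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at(ranks, k):
    # Fraction of test triples whose gold entity ranks in the top k.
    return sum(r <= k for r in ranks) / len(ranks)

# "a" and "b" complete other true triples, so filtering lifts the gold
# entity from rank 4 (raw) to rank 2 (filtered).
scores = {"a": 0.9, "b": 0.8, "c": 0.7, "gold": 0.6}
raw = filtered_rank(scores, "gold", other_true=set())
flt = filtered_rank(scores, "gold", other_true={"a", "b"})
```

Filtering matters because a model should not be penalized for ranking another correct answer above the test triple's gold entity.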
+ Implementation details
+ We train the AnKGE framework based on four representative KGE models:
+ TransE (Bordes et al. 2013), RotatE (Sun et al. 2019), HAKE (Zhang et al.
+ 2020) and PairRE (Chao et al. 2021). We use grid search to select the
+ hyper-parameters of our framework. We search the number of analogy objects
+ of the three levels Ne,
+ Model          |       FB15k-237        |        WN18RR
+                | MRR  Hit@1 Hit@3 Hit@10| MRR  Hit@1 Hit@3 Hit@10
+ TransE         |0.317 0.223 0.352 0.504 |0.224 0.022 0.390 0.520
+ AnKGE-TransE   |0.340 0.245 0.379 0.523 |0.232 0.031 0.402 0.526
+ RotatE         |0.336 0.244 0.370 0.524 |0.473 0.428 0.491 0.564
+ AnKGE-RotatE   |0.366 0.273 0.405 0.546 |0.480 0.431 0.499 0.578
+ HAKE           |0.349 0.252 0.385 0.545 |0.496 0.452 0.513 0.580
+ AnKGE-HAKE     |0.385 0.288 0.428 0.572 |0.500 0.454 0.515 0.587
+ PairRE         |0.348 0.254 0.384 0.539 |0.455 0.413 0.469 0.539
+ AnKGE-PairRE   |0.376 0.281 0.417 0.558 |0.462 0.415 0.480 0.556
+ Table 2: AnKGE applied on top of different base models on FB15k-237 and
+ WN18RR. The better results are in bold.
+ [Figure 2 heatmap: AnKGE ranking (vertical axis) versus HAKE ranking
+ (horizontal axis), both bucketed at 1/3/5/10/50/100; cell counts omitted]
+ Figure 2: Comparison of the ranking between AnKGE and the base model on
+ FB15k-237.
+ Nr and Nt ∈ {1, 3, 5, 10, 20}, the basic weights of the three levels αE,
+ αR and αT ∈ {0.01, 0.05, 0.1, 0.2, 0.3}, and the learning rate
+ α ∈ {1e−3, 1e−4, 1e−5}. The loss function weight γ in Equation (10) is set
+ to 10; the transformation matrix weight λ in Equation (5) is set to 1 on
+ FB15k-237 and 0 on WN18RR. Before training AnKGE, we retrieve the
+ analogical objects of the three levels from the train dataset once. In both
+ training and inference, AnKGE extends the scoring function of the original
+ model, so AnKGE has the same model complexity as the original model.
+ 5.2 Link Prediction Results
+ Main results
+ We use HAKE (Zhang et al. 2020) as the base model for AnKGE to compare with
+ other baselines. Baselines are selected from three categories: Conventional
+ KGE models, including TransE (Bordes et al. 2013), ANALOGY (Liu, Wu, and
+ Yang 2017), RotatE (Sun et al. 2019), HAKE, Rot-Pro (Song, Luo, and Huang
+ 2021), PairRE (Chao et al. 2021), and DualE (Cao et al. 2021); GNN-based
+ KGE models, including R-GCN (Schlichtkrull et al. 2018), A2N (Bansal et al.
+ 2019), CompGCN (Vashishth et al. 2020), and SE-GNN (Li et al. 2022); and
+ Enhanced KGE frameworks, including CAKE (Niu et al. 2022), PUDA
+ Models             | FB15k-237    | WN18RR
+                    | MRR   Hit@1  | MRR   Hit@1
+ AnKGE              | 0.385 0.288  | 0.500 0.454
+ w/o entity-level   | 0.384 0.288  | 0.497 0.451
+ w/o relation-level | 0.349 0.253  | 0.500 0.455
+ w/o triple-level   | 0.384 0.287  | 0.499 0.453
+ w/o all            | 0.349 0.252  | 0.496 0.452
+ Table 3: Ablation study of the three analogy levels, where w/o means
+ removing the corresponding level from AnKGE.
+ (Tang et al. 2022), and REP (Wang et al. 2022).
+ Table 1 summarizes the experimental results on FB15k-237 and WN18RR. The
+ result of ANALOGY is obtained from the released code4. The results of
+ TransE, RotatE, HAKE and PairRE are from our trained models; training
+ details of the base models and the AnKGE framework are provided in
+ Appendix D. The other results are taken from the published papers. We can
+ see that AnKGE enhances the analogical inference ability of the base model
+ HAKE and outperforms the baseline models on most evaluation metrics, except
+ Hit@10, where AnKGE is slightly lower than PUDA and REP and achieves the
+ second-best result. Overall, AnKGE re-ranks the triples that are hard for
+ HAKE to predict by analogical inference, achieving the best results on both
+ datasets.
+ Compatibility results
+ AnKGE is a framework for enhancing the analogical inference ability of KGE
+ models, retrieving analogical objects through the fkge predefined in the
+ KGE model. In principle, our framework is applicable to most KGE models
+ that define a score function for triples. We chose four C-KGE models
+ (TransE, RotatE, HAKE, PairRE) as base models to validate compatibility. As
+ Table 2 shows, AnKGE achieves a significant improvement over the base model
+ on all metrics; the MRR metric improves by about 3% on FB15k-237. These
+ results demonstrate that AnKGE is compatible with a wide range of KGE
+ models. Moreover, AnKGE based on HAKE achieves a more significant
+ improvement on the FB15k-237 dataset. HAKE makes the entities
+ 4 https://github.com/thunlp/OpenKE
+
+ Incomplete triple | Analogy object | AnKGE Rank | Original Rank
+ Entity
+ (diencephalon, has part, ?) → hypothalamus | brain | 5 | 25
+ (rest, derivationally related form, ?) → breath | drowse | 6 | 38
+ (roof, hypernym, ?) → protective covering | cap | 39 | 20
+ Relation
+ (felidae, member meronym, ?) → panthera | has part | 5 | 17
+ (monodontidae, member meronym, ?) → delphinapterus | hypernym Reverse | 1 | 64
+ (literary composition, hypernym, ?) → writing | has part | 88 | 18
+ Triple
+ (ticino, instance hypernym, ?) → swiss canton | (switzerland, has part) | 8 | 54
+ (south korea, has part, ?) → inchon | (port, instance hypernym Reverse) | 1 | 31
+ (elementary geometry, hypernym, ?) → geometry | (construct, synset domain topic of) | 39 | 12
+ Table 4: Analogical inference case study. The better ranks are bold.
+ hierarchical by using the depth of the entity to model different levels of the hierarchy, which is more helpful for analogical inference.
+ Compared with WN18RR, the improvement of the model on FB15k-237 is more significant, which we speculate is because FB15k-237 has richer relational patterns, so it benefits more from relation-level analogical inference. In addition, AnKGE is designed to predict hard-predicted triples. The overall accuracy on FB15k-237 is lower than on WN18RR; consequently, the boosting effect of the model is reflected more obviously.
+ 5.3 Model Analysis
+ Ranking study
+ In order to analyze the improvement effect of AnKGE, we compare the ranking results on FB15k-237 of AnKGE-HAKE and the original HAKE in Figure 2. The horizontal coordinate represents the ranking range of the HAKE model, and the vertical coordinate represents the ranking range of AnKGE. We found that ranking changes are less apparent when the ranking is greater than 100, so we selected the triples ranking within 100 and divided them into six ranking ranges for analysis. The diagonal line represents unchanged rankings, the lower right of the diagonal represents AnKGE rankings better than HAKE rankings, and the upper left represents worse. We find some triples with worse rankings, but their number is much smaller than those with better rankings. In addition, the change in ranking becomes less evident as the base model ranking increases; the better the base model ranking, the more likely AnKGE is to improve it.
+ Ablation Study
+ We conduct ablation experiments for the analogical inference part of AnKGE. Table 3 shows the results of the ablation study for AnKGE-HAKE on the two datasets. We can see that removing any part makes the model less effective, except the relation-level on the WN18RR dataset. Since there are only 11 relations in WN18RR, it is hard to retrieve suitable relation-level analogical objects. We explain this in more detail in the case study. In addition, WN18RR consists of a lexicon containing contextual words that naturally provide entity-level analogical objects, which makes the model more effective for entity-level analogical inference. The result on FB15k-237 is the opposite, which may be because it has rich relation patterns, making relation-level analogical inference more effective.
+ Case Study
+ Analogical inference can generate explanations for predicted triples, which are valuable for real-life applications. Our method also provides an analogy view for the explanations of knowledge graph inference. As Table 4 shows, we provide an intuitive demonstration of analogical inference. For each level, we select multiple example cases from the WN18RR test set, and list their corresponding analogical objects and prediction results based on RotatE. For the entity level, the idea is to retrieve a hypernym or hyponym as the analogy object. For example, the diencephalon is located in the core of the brain. The fact that hypothalamus is part of brain improves people's trust in the predicted result. However, if a hyponym entity becomes the analogy object, it will generate bad explanations and results. For instance, although cap can be regarded as a special type of roof, it is not the protective covering. Thus the misleading explanation (cap, hypernym, protective covering) downgrades the trustworthiness of the predicted result, which ranks the correct answer at 39. For the relation level, AnKGE tends to retrieve conceptually similar relations, such as (member meronym) and (has part). Nevertheless, there are only 11 relations in WN18RR, which makes AnKGE sometimes retrieve inappropriate analogy relations. For example, (hypernym) and (has part) are relations of opposite concepts, which leads to bad explanations and worse rankings. For the triple level, AnKGE typically focuses on the (h, r) pair structure. As proof, ticino being a canton of Switzerland means that the triple (switzerland, has part, swiss canton) is a good explanation. However, sometimes the (h, r) pair structure varies too much, leading to misclassification.
+ 6 Conclusion
+ In this paper, we resort to analogical inference to study the knowledge graph completion task. We propose an analogical object retriever that retrieves appropriate analogical objects at entity level, relation level, and triple level. Then, we design AnKGE, a novel and general self-supervised framework to enhance well-trained KGEs with analogical inference capability. Our method achieves competitive results on the knowledge graph completion task and exhibits enhanced analogical inference ability. Future directions include exploring more analogy patterns and a more general framework that adapts to GNN-based KGEs.
+ 7 Acknowledgments
+ This work is funded by NSFC U19B2027/91846204.
+ References
+ Bansal, T.; Juan, D.; Ravi, S.; and McCallum, A. 2019. A2N: Attending to Neighbors for Knowledge Graph Inference. In ACL (1), 4387–4392. Association for Computational Linguistics.
+ Bevilacqua, M.; and Navigli, R. 2020. Breaking Through the 80% Glass Ceiling: Raising the State of the Art in Word Sense Disambiguation by Incorporating Knowledge Graph Information. In ACL, 2854–2864. Association for Computational Linguistics.
+ Bollacker, K. D.; Evans, C.; Paritosh, P. K.; Sturge, T.; and Taylor, J. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD Conference, 1247–1250. ACM.
+ Bordes, A.; Usunier, N.; García-Durán, A.; Weston, J.; and Yakhnenko, O. 2013. Translating Embeddings for Modeling Multi-relational Data. In NIPS, 2787–2795.
+ Cao, Z.; Xu, Q.; Yang, Z.; Cao, X.; and Huang, Q. 2021. Dual Quaternion Knowledge Graph Embeddings. In AAAI, 6894–6902. AAAI Press.
+ Chao, L.; He, J.; Wang, T.; and Chu, W. 2021. PairRE: Knowledge Graph Embeddings via Paired Relation Vectors. In ACL/IJCNLP (1), 4360–4369. Association for Computational Linguistics.
+ Chen, X.; Li, L.; Zhang, N.; Tan, C.; Huang, F.; Si, L.; and Chen, H. 2022. Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning. In SIGIR, 2443–2448. ACM.
+ Dettmers, T.; Minervini, P.; Stenetorp, P.; and Riedel, S. 2018. Convolutional 2D Knowledge Graph Embeddings. In AAAI, 1811–1818. AAAI Press.
+ Gentner, D. 1983. Structure-Mapping: A Theoretical Framework for Analogy. Cogn. Sci., 7(2): 155–170.
+ Hall, R. P. 1989. Computational Approaches to Analogical Reasoning: A Comparative Analysis. Artif. Intell., 39(1): 39–120.
+ Hu, Z.; Cao, Y.; Huang, L.; and Chua, T. 2021. How Knowledge Graph and Attention Help? A Qualitative Analysis into Bag-level Relation Extraction. In ACL/IJCNLP (1), 4662–4671. Association for Computational Linguistics.
+ Khandelwal, U.; Levy, O.; Jurafsky, D.; Zettlemoyer, L.; and Lewis, M. 2020. Generalization through Memorization: Nearest Neighbor Language Models. In ICLR. OpenReview.net.
+ Lehmann, J.; Isele, R.; Jakob, M.; Jentzsch, A.; Kontokostas, D.; Mendes, P. N.; Hellmann, S.; Morsey, M.; van Kleef, P.; Auer, S.; and Bizer, C. 2015. DBpedia - A large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2): 167–195.
+ Li, R.; Cao, Y.; Zhu, Q.; Bi, G.; Fang, F.; Liu, Y.; and Li, Q. 2022. How Does Knowledge Graph Embedding Extrapolate to Unseen Data: A Semantic Evidence View. In AAAI, 5781–5791. AAAI Press.
+ Liu, H.; Wu, Y.; and Yang, Y. 2017. Analogical Inference for Multi-relational Embeddings. In ICML, volume 70 of Proceedings of Machine Learning Research, 2168–2178. PMLR.
+ Miller, G. A. 1994. WORDNET: A Lexical Database for English. In HLT. Morgan Kaufmann.
+ Niu, G.; Li, B.; Zhang, Y.; and Pu, S. 2022. CAKE: A Scalable Commonsense-Aware Framework For Multi-View Knowledge Graph Completion. In ACL (1), 2867–2877. Association for Computational Linguistics.
+ Schlichtkrull, M. S.; Kipf, T. N.; Bloem, P.; van den Berg, R.; Titov, I.; and Welling, M. 2018. Modeling Relational Data with Graph Convolutional Networks. In ESWC, volume 10843 of Lecture Notes in Computer Science, 593–607. Springer.
+ Song, T.; Luo, J.; and Huang, L. 2021. Rot-Pro: Modeling Transitivity by Projection in Knowledge Graph Embedding. In NeurIPS, 24695–24706.
+ Suchanek, F. M.; Kasneci, G.; and Weikum, G. 2007. Yago: a core of semantic knowledge. In WWW, 697–706. ACM.
+ Sun, Z.; Deng, Z.; Nie, J.; and Tang, J. 2019. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. In ICLR (Poster). OpenReview.net.
+ Tang, Z.; Pei, S.; Zhang, Z.; Zhu, Y.; Zhuang, F.; Hoehndorf, R.; and Zhang, X. 2022. Positive-Unlabeled Learning with Adversarial Data Augmentation for Knowledge Graph Completion. In IJCAI, 2248–2254. ijcai.org.
+ Turney, P. D. 2008. The Latent Relation Mapping Engine: Algorithm and Experiments. J. Artif. Intell. Res., 33: 615–655.
+ Vashishth, S.; Sanyal, S.; Nitin, V.; and Talukdar, P. P. 2020. Composition-based Multi-Relational Graph Convolutional Networks. In ICLR. OpenReview.net.
+ Wang, H.; Dai, S.; Su, W.; Zhong, H.; Fang, Z.; Huang, Z.; Feng, S.; Chen, Z.; Sun, Y.; and Yu, D. 2022. Simple and Effective Relation-based Embedding Propagation for Knowledge Representation Learning. In IJCAI, 2755–2761. ijcai.org.
+ Yang, B.; Yih, W.; He, X.; Gao, J.; and Deng, L. 2015. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In ICLR (Poster).
+ Yasunaga, M.; Ren, H.; Bosselut, A.; Liang, P.; and Leskovec, J. 2021. QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering. In NAACL-HLT, 535–546. Association for Computational Linguistics.
+ Zhang, W.; Chen, X.; Yao, Z.; Chen, M.; Zhu, Y.; Yu, H.; Huang, Y.; Xu, Y.; Zhang, N.; Xu, Z.; Yuan, Z.; Xiong, F.; and Chen, H. 2022. NeuralKG: An Open Source Library for Diverse Representation Learning of Knowledge Graphs. In SIGIR, 3323–3328. ACM.
+ Zhang, Z.; Cai, J.; Zhang, Y.; and Wang, J. 2020. Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction. In AAAI, 3065–3072. AAAI Press.
+ Dataset   | |E|    | |R| | Train   | Valid  | Test
+ FB15k-237 | 14,541 | 237 | 272,115 | 17,535 | 20,466
+ WN18RR    | 40,493 | 11  | 86,835  | 3,034  | 3,134
+ Table 5: Statistics of datasets. Train, Valid, and Test denote the size of the train set, validation set, and test set, respectively.
+ A KGE Models
+ We can divide knowledge graph embedding models into two categories: conventional knowledge graph embedding models and GNN-based models. Theoretically, the AnKGE framework applies to most conventional KGE models that define a score function for triples. In order to demonstrate the effectiveness and compatibility of AnKGE, we chose four representative conventional knowledge graph embedding models, TransE (Bordes et al. 2013), RotatE (Sun et al. 2019), HAKE (Zhang et al. 2020) and PairRE (Chao et al. 2021), as base models for AnKGE. Table 6 exhibits the gkge(·, ·) and fkge(·, ·, ·) defined in these knowledge graph embedding models. We introduce the four KGE models in turn.
+ TransE is the first knowledge graph embedding model proposing a geometric interpretation of the latent space. The TransE model is inspired by Word2vec vectors, requiring the sum of the head embedding and the relation embedding to be close to the tail embedding. This lets TransE successfully capture the relations between words through translations. However, due to the limits of translation, TransE cannot correctly handle N-to-one, one-to-N, and symmetric relations.
+ RotatE requires the embeddings of (h, r, t) to belong to Ck, and considers the relation embedding as a rotation vector in a complex latent space. Specifically, each element of the relation embedding conveys a rotation along that axis, with its modulus constrained to equal 1. Rotation has been demonstrated to model numerous relational patterns correctly, such as symmetry, anti-symmetry and inversion. However, RotatE cannot model relations with a hierarchy pattern.
+ HAKE is a hierarchy-aware knowledge graph embedding model that uses the depth of an entity to model different levels of the hierarchy. HAKE distinguishes entities into two categories, entities at different levels of the hierarchy and entities at the same level of the hierarchy, to model semantic hierarchies. Experiments demonstrate that HAKE can effectively model the semantic hierarchies in knowledge graphs.
+ PairRE is a method capable of simultaneously encoding complex relations and multiple relation patterns. The model uses paired relation representations to adjust the margin in the loss function to fit different complex relations. PairRE captures the semantic connection among relation vectors and has been demonstrated to encode three important relation patterns: symmetry/anti-symmetry, inversion and composition.
+ B Loss Weight
+ Considering the different contributions of the three levels, we introduce βE, βR and βT to adjust gradient descent. Similarly to the softmax function, we first replace the original element embedding with the three level-aggregated embeddings respectively, then calculate the exponential sum of the analogy triple scores and the original triple score, which is formulated as:
+ T = e^{fkge(h+, r, t)} + e^{fkge(h, r+, t)} + e^{fkge(z+e, z+r, t)} + e^{fkge(h, r, t)}.   (14)
+ The weights of the loss function in Equation (11) are the ratios over T:
+ βE = e^{fkge(h+, r, t)}/T,  βR = e^{fkge(h, r+, t)}/T,  βT = e^{fkge(z+e, z+r, t)}/T.   (15)
+ The highest analogy triple score means that mapping the original element embedding to the aggregated embedding is necessary. If the original triple score is the highest, the triple should not be analogized with other objects.
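+ The weight computation in Equations (14)-(15) is a softmax over four triple scores. A minimal Python sketch of that computation (function and variable names are ours, not taken from the paper's code):

```python
import math

def analogy_loss_weights(s_entity, s_relation, s_triple, s_original):
    """Compute beta_E, beta_R, beta_T as in Equations (14)-(15).

    Each argument is a triple score f_kge(...) obtained by replacing,
    respectively, the head entity, the relation, or the (h, r) pair by its
    aggregated analogy embedding, or replacing nothing (original triple).
    """
    exps = [math.exp(s) for s in (s_entity, s_relation, s_triple, s_original)]
    T = sum(exps)  # exponential sum in Equation (14)
    return exps[0] / T, exps[1] / T, exps[2] / T  # Equation (15)
```

+ When the original triple already scores highest, its term dominates T and all three β weights shrink, matching the remark above that such a triple should not be analogized with other objects.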
+ C Datasets
+ We evaluate the AnKGE framework on two widely-used datasets: WN18RR and FB15k-237. FB15k-237 is from the Freebase knowledge graph project, whose design is inspired by broadly used information communities such as The Semantic Web and Wikipedia. FB15k-237 contains information including locations, media, geography and people. WN18RR is from the WordNet knowledge graph project, a dataset that characterizes associations between English words. WN18RR contains information including symmetric, asymmetric and compositional relations. Statistics of these datasets are shown in Table 5.
+ D Implementation Details
+ Firstly, we train the four KGE models TransE, RotatE, HAKE and PairRE as base models. In the training stage, we apply a widely used negative sampling loss function with self-adversarial training:
+ L = log σ(γm − fkge(h, r, t)) + Σ_{i=1}^{n} p(h′i, r, t′i) log σ(fkge(h′i, r, t′i) − γm),
+ where γm is a fixed margin, σ is the sigmoid function, (h′i, r, t′i) is the i-th corrupting negative triple for (h, r, t) and n is the number of negative triples. Moreover, p(h′j, r, t′j) is the self-adversarial weight for this negative triple, calculated as:
+ p(h′j, r, t′j) = exp(αtemp fkge(h′j, r, t′j)) / Σ_i exp(αtemp fkge(h′i, r, t′i)),
+ which is the probability distribution over negative sampling triples, where αtemp is the adversarial temperature of sampling. When training and testing, we add reverse relations to
+ Model  | fkge(h, r, t)                                   | gkge(h, r)                               | Parameters
+ TransE | −∥h + r − t∥1                                   | h + r                                    | h, r, t ∈ R^k
+ RotatE | −∥h ◦ r − t∥2                                   | h ◦ r                                    | h, r, t ∈ C^k, |ri| = 1
+ HAKE   | −∥hm ◦ rm − tm∥2 − λ∥sin((hp + rp − tp)/2)∥1    | Cat[∥hm ◦ rm∥2, λ∥sin((hp + rp)/2)∥1]    | hm, tm ∈ R^k, rm ∈ R^k+, hp, rp, tp ∈ [0, 2π)^k, λ ∈ R^k
+ PairRE | −∥h ◦ rH − t ◦ rT∥1                             | h ◦ rH                                   | h, r, t ∈ R^k
+ Table 6: The details of knowledge graph embedding models, where ∥·∥1 denotes the absolute-value norm and Cat[·] denotes the concatenate vector function.
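+ To make the score functions in Table 6 concrete, here is a small plain-Python sketch of the TransE and RotatE scores, with Python complex numbers standing in for C^k. This is our own illustration, not the authors' implementation:

```python
import math

def transe_score(h, r, t):
    """TransE: f_kge(h, r, t) = -||h + r - t||_1; higher means more plausible."""
    return -sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))

def rotate_score(h, r, t):
    """RotatE: f_kge(h, r, t) = -||h o r - t||_2 with h, r, t in C^k, |r_i| = 1."""
    diff = [hi * ri - ti for hi, ri, ti in zip(h, r, t)]
    return -math.sqrt(sum(abs(d) ** 2 for d in diff))
```

+ A triple whose embeddings satisfy the model equation exactly (h + r = t for TransE, h ◦ r = t for RotatE) attains the maximum score of 0; larger violations give more negative scores.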
+ Dataset   | Model  | Embedding Dimension | Margin | Adversarial Temperature | Negative Samples | Batch Size | Inverse Relation
+ FB15k-237 | TransE | 500  | 9.0 | 1.0 | 256  | 1024 | True
+ FB15k-237 | RotatE | 500  | 9.0 | 1.0 | 256  | 1024 | True
+ FB15k-237 | HAKE   | 1000 | 9.0 | 1.0 | 512  | 1024 | True
+ FB15k-237 | PairRE | 1500 | 6.0 | 1.0 | 256  | 1024 | True
+ WN18RR    | TransE | 500  | 6.0 | 1.0 | 256  | 2048 | True
+ WN18RR    | RotatE | 500  | 6.0 | 0.5 | 1024 | 512  | True
+ WN18RR    | HAKE   | 500  | 6.0 | 0.5 | 1024 | 512  | True
+ WN18RR    | PairRE | 500  | 6.0 | 0.5 | 1024 | 512  | True
+ Dataset   | Model        | Entity Cand. Ne | Relation Cand. Nr | Triple Cand. Nt | Entity lambda αE | Relation lambda αR | Triple lambda αT
+ FB15k-237 | AnKGE-TransE | 1 | 1 | 3  | 0.01 | 0.2  | 0.02
+ FB15k-237 | AnKGE-RotatE | 1 | 1 | 5  | 0.01 | 0.2  | 0.05
+ FB15k-237 | AnKGE-HAKE   | 1 | 1 | 5  | 0.05 | 0.3  | 0.1
+ FB15k-237 | AnKGE-PairRE | 1 | 1 | 3  | 0.01 | 0.3  | 0.05
+ WN18RR    | AnKGE-TransE | 1 | 1 | 20 | 0.01 | 0.3  | 0.3
+ WN18RR    | AnKGE-RotatE | 1 | 1 | 3  | 0.1  | 0.05 | 0.1
+ WN18RR    | AnKGE-HAKE   | 1 | 1 | 3  | 0.1  | 0.05 | 0.1
+ WN18RR    | AnKGE-PairRE | 1 | 1 | 3  | 0.1  | 0.05 | 0.2
+ Table 7: The hyper-parameter settings of the base models and AnKGE over different datasets.
+ expand the dataset. Specifically, for a triple (h, r, t), we add a new reverse triple (t, r−1, h) to the dataset, where r−1 represents the reverse relation of r. In the link prediction task, the model then only predicts the tail entity, which is equivalent to predicting both head and tail entities.
+ Then, we use AnKGE to enhance the well-trained KGEs with analogical inference capability. We show that AnKGE achieves competitive results on the knowledge graph completion task and exhibits enhanced analogical inference ability. The loss function weight γ in Equation (10) is set to 10, and the transformation matrix weight λ in Equation (5) is set to 1 and 0 on FB15k-237 and WN18RR, respectively. We use grid search to select the other hyper-parameters, including the entity candidates Ne, relation candidates Nr, triple candidates Nt, entity lambda αE, relation lambda αR and triple lambda αT. Other experimental settings are the same. The experiment of AnKGE-TransE on WN18RR is the only exception: we use a fixed weight parameter instead of the adaptive weight parameter and cosine similarity instead of the Euclidean norm. In addition, since there is no negative sampling, the memory footprint and time cost are lower than for the base model, which is generally acceptable.
+ We implement all the models with PyTorch, and run experiments on NVIDIA RTX3090 GPUs with 24GB RAM and an Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz with 40 cores. The hyper-parameter settings of the base models and AnKGE are shown in Table 7.
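+ The training ingredients described in Appendix D, the self-adversarial negative sampling loss and the reverse-triple augmentation, can be sketched in plain Python as follows. This is our own illustration, following the formulas exactly as written above; names are ours:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def self_adversarial_weights(neg_scores, alpha_temp):
    """p(h'_j, r, t'_j): softmax over negative-triple scores, temperature alpha_temp."""
    exps = [math.exp(alpha_temp * s) for s in neg_scores]
    total = sum(exps)
    return [e / total for e in exps]

def nss_loss(pos_score, neg_scores, gamma_m, alpha_temp):
    """Negative sampling loss with self-adversarial weighting, as formulated above."""
    p = self_adversarial_weights(neg_scores, alpha_temp)
    loss = math.log(sigmoid(gamma_m - pos_score))
    loss += sum(pi * math.log(sigmoid(si - gamma_m)) for pi, si in zip(p, neg_scores))
    return loss

def add_reverse_triples(triples):
    """Dataset augmentation: for each (h, r, t), also add (t, r^-1, h)."""
    return triples + [(t, r + "_reverse", h) for (h, r, t) in triples]
```

+ Higher-scoring (harder) negatives receive larger softmax weights, so they contribute more to the loss, which is the point of the self-adversarial scheme.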
1521
+
29AzT4oBgHgl3EQfDvqc/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
39AyT4oBgHgl3EQfcPct/content/tmp_files/2301.00277v1.pdf.txt ADDED
@@ -0,0 +1,3607 @@
+ arXiv:2301.00277v1 [econ.EM] 31 Dec 2022
+ Higher-order Refinements of Small Bandwidth Asymptotics for Density-Weighted Average Derivative Estimators∗
+ Matias D. Cattaneo†  Max H. Farrell‡  Michael Jansson§  Ricardo Masini¶
+ January 3, 2023
+ Abstract
+ The density weighted average derivative (DWAD) of a regression function is a canonical parameter of interest in economics. Classical first-order large sample distribution theory for kernel-based DWAD estimators relies on tuning parameter restrictions and model assumptions leading to an asymptotic linear representation of the point estimator. Such conditions can be restrictive, and the resulting distributional approximation may not be representative of the underlying sampling distribution of the statistic of interest, in particular not being robust to bandwidth choices. Small bandwidth asymptotics offers an alternative, more general distributional approximation for kernel-based DWAD estimators that allows for, but does not require, asymptotic linearity. The resulting inference procedures based on small bandwidth asymptotics were found to exhibit superior finite sample performance in simulations, but no formal theory justifying that empirical success is available in the literature. Employing Edgeworth expansions, this paper shows that small bandwidth asymptotics lead to inference procedures with demonstrably superior higher-order distributional properties relative to procedures based on asymptotic linear approximations.
+ Keywords: density weighted average derivatives, Edgeworth expansions, small bandwidth asymptotics.
+ ∗Prepared for the Conference in Honor of James L. Powell at UC-Berkeley, March 25–26, 2022. We thank the conference participants for their comments. Cattaneo gratefully acknowledges financial support from the National Science Foundation through grants SES-1947805 and DMS-2210561, Jansson gratefully acknowledges financial support from the National Science Foundation through grant SES-1947662, and Masini gratefully acknowledges financial support from the National Science Foundation through grant DMS-2210561.
+ †Department of Operations Research and Financial Engineering, Princeton University.
+ ‡Booth School of Business, University of Chicago.
+ §Department of Economics, UC Berkeley.
+ ¶Center for Statistics and Machine Learning, Princeton University.
1 Introduction

Identification, estimation and inference in the context of two-step semiparametric models has a long tradition in econometrics (Powell, 1994). Canonical two-step semiparametric parameters are finite dimensional functionals of some other unknown infinite dimensional parameters in the model (e.g., a density or regression function), a leading example being the density weighted average derivative (DWAD) of a regression function (Stoker, 1986). This paper seeks to honor the many contributions of Jim Powell to semiparametric theory in econometrics by juxtaposing the higher-order distributional properties of Powell et al. (1989)'s two-step kernel-based DWAD estimator under two alternative large sample approximation regimes: one based on the classical asymptotic linear representation, and the other based on a more general quadratic distributional approximation.¹

¹Jim Powell's contributions to semiparametric theory are numerous. Honoré and Powell (1994), Powell and Stoker (1996), Blundell and Powell (2004), Aradillas-Lopez et al. (2007), Ahn et al. (2018), and Graham et al. (2023) are some of the most closely connected to our work. These papers employ U-statistics methods for two-step kernel-based estimators similar to those considered herein. See Powell (2017) for more discussion and references.

In a landmark contribution, Powell et al. (1989) proposed a kernel-based DWAD estimator and obtained first-order, asymptotically linear distribution theory employing ideas from the U-statistics literature, along with plug-in standard error estimators, to develop valid inference procedures in large samples. This work sparked a wealth of subsequent developments in the econometrics literature: Robinson (1995) obtained Berry-Esseen bounds, Powell and Stoker (1996) considered mean square error expansions, Nishiyama and Robinson (2000, 2001, 2005) developed Edgeworth expansions, and Newey et al. (2004) investigated bias properties, just to mention a few contributions. The two-step semiparametric estimator in this literature employs a preliminary kernel-based estimator of a density function, which requires choosing two main tuning parameters (a bandwidth and a kernel function), and their "optimal" choices depend on the goal of interest (e.g., point estimation vs. inference) as well as the features of the underlying data generating process (e.g., smoothness of the unknown density and dimensionality of the covariates).

Classical first-order distribution theory for kernel-based DWAD estimators has focused on cases where tuning parameter restrictions and model assumptions lead to an asymptotic linear representation of the two-step semiparametric point estimator (see Newey and McFadden, 1994; Ichimura and Todd, 2007, for overviews); that is, the two-step estimator is approximated by a sample average based on the so-called influence function. This approach can lead to semiparametric efficient inference procedures in large samples, but the implied distributional approximation may not be "robust" to tuning parameter choices and/or model features. More specifically, the limiting distribution obtained based on the asymptotic linear representation is invariant to the way that the preliminary nonparametric estimators are constructed, and requires potentially high smoothness levels of the underlying unknown functions and thus the use of higher-order kernels. At its core, asymptotic linear approximations assume away the contribution of additional terms forming the statistic of interest, despite the fact that these terms do contribute to the sampling variability of the two-step semiparametric estimator and, more importantly, do reflect the effect of tuning parameter choices in finite samples.

Cattaneo et al. (2014a) proposed an alternative distributional approximation for kernel-based DWAD estimators that allows for, but does not require, asymptotic linearity. The key idea is to capture the joint contribution to the sampling distribution of both linear and quadratic terms forming the kernel-based DWAD estimator. To operationalize this idea, Cattaneo et al. (2014a) introduced an asymptotic experiment where the bandwidth sequence is allowed to vanish at a speed that would render the classical asymptotic linear representation invalid because the quadratic term becomes first order even in large samples, which they termed "small bandwidth" asymptotics. This framework was carefully developed to obtain a distributional approximation that explicitly depends on both linear and quadratic terms, thereby forcing a more careful analysis of how the quadratic term contributes to the sampling distribution of the statistic.

Small bandwidth asymptotics inference methods for kernel-based DWAD estimators were found to perform well in simulations (Cattaneo et al., 2010, 2014a,b), but no formal justification for their finite sample success is available in the literature. Methodologically, this alternative distributional approximation leads to a new way of conducting inference (e.g., constructing confidence interval estimators) because the original standard error formula proposed by Powell et al. (1989) must be modified to make the asymptotic approximation valid across the full range of allowable bandwidths (including the region where asymptotic linearity fails). Theoretically, however, the empirical success of small bandwidth asymptotics could in principle come from two distinct sources: (i) it could deliver a better distributional approximation to the sampling distribution of the point estimator; or (ii) it could deliver a better distributional approximation to the sampling distribution of the Studentized t-statistic because the standard error formula was modified.

Employing Edgeworth expansions (Bhattacharya and Rao, 1976; Hall, 1992), this paper shows that the small bandwidth asymptotics approximation framework leads to inference procedures with demonstrably superior higher-order distributional properties relative to procedures based on asymptotic linear approximations. We study both standardized and Studentized t-statistics, under both asymptotic linearity and small bandwidth asymptotic regimes, and show that both standardized and Studentized t-statistics emerging from the small bandwidth regime offer higher-order corrections as measured by the second cumulant underlying their Edgeworth expansions. An immediate implication of our results is that the small bandwidth asymptotic framework delivers both a better distributional approximation (Theorem 1, standardized t-statistic) and a better standard error construction (Theorem 2, Studentized t-statistic). Therefore, our results have both theoretical and practical implications for empirical work in economics, in addition to providing a theory-based explanation for prior simulation-based findings exhibiting better numerical performance of inference procedures constructed using small bandwidth asymptotics relative to those constructed using classical distributional approximations.

The closest antecedent to our work is Nishiyama and Robinson (2000, 2001), who also studied Edgeworth expansions for kernel-based DWAD estimators. Their expansions, however, were motivated by the asymptotic linear approximation to the point estimator, and hence cannot be used to compare and contrast with the distributional approximation emerging from the alternative small bandwidth asymptotic regime. Therefore, from a technical perspective, this paper also offers novel Edgeworth expansions that allow for different standardization and Studentization schemes, thereby allowing us to plug-and-play when comparing the two competing asymptotic frameworks. More specifically, Theorem 1 below concerns a generic standardized t-statistic and is proven based on Theorem A in the appendix, which may be of independent technical interest due to its generality. Theorem 2 below concerns a more specialized class of Studentized t-statistics because establishing valid Edgeworth expansions is considerably harder when dealing with Studentization.

The idea of employing alternative (more general) asymptotic approximation frameworks that do not enforce asymptotic linearity for two-step semiparametric estimators has also featured in other contexts, such as partially linear series-based, many covariates, and many instruments estimation, as well as certain network estimation settings (Cattaneo et al., 2018a,b; Matsushita and Otsu, 2021), and other non-linear two-step semiparametric settings (Cattaneo et al., 2013; Cattaneo and Jansson, 2018; Cattaneo et al., 2019). While our theoretical developments and results focus specifically on the case of kernel-based DWAD estimation, their main conceptual conclusions can be extrapolated to those settings as well. The main takeaway is that employing alternative asymptotic frameworks can deliver improved inference with smaller higher-order distributional approximation errors, thereby offering more robust inference procedures in finite samples.

The paper continues as follows. Section 2 introduces the setup and main assumptions. Section 3 reviews the classical first-order distributional approximation based on asymptotic linearity and the more general small bandwidth distributional approximation, along with their corresponding choices of standard error formulas. Section 4 presents the main results of our paper. Section 5 concludes. The appendix is organized in three parts: Appendix A provides a self-contained generic Edgeworth expansion for second-order U-statistics, which may be of independent technical interest, Appendix B gives the proof of Theorem 1 (standardized t-statistic), and Appendix C gives the proof of Theorem 2 (Studentized t-statistic).
2 Setup and Assumptions

Suppose Z_i = (Y_i, X_i′)′, i = 1, . . . , n, is a random sample from the distribution of the random vector Z = (Y, X′)′, where Y is an outcome variable of interest and X takes values in R^d with Lebesgue density f. We consider the density weighted average derivative of the regression function g(X) = E[Y | X] given by

    θ := E[f(X)ġ(X)],

where for any function a we define ȧ(x) := ∂a(x)/∂x. To save notation, we also define e(X) := f(X)g(X) and v(X) := E[Y² | X]. We impose the following conditions on the underlying data generating process. Let ∥ · ∥ be the Euclidean norm.

Assumption 1.

(a) E[|Y|^p] < ∞, for some p ≥ 3.

(b) Σ := E[ψ(Z)ψ(Z)′] is positive definite, where ψ(Z) := 2(ė(X) − Y ḟ(X) − θ).

(c) f is (S + 1) times differentiable, and f and its (S + 1) derivatives are bounded, for 2S > d + 2;

(d) g is (S + 1) times differentiable and its first three derivatives are bounded;

(e) e and its first (S + 1) derivatives are bounded;

(f) v is twice differentiable, its first two derivatives are bounded, and vḟ and E[|Y|³ | X]f(X) are bounded;

(g) f, gf, ġf and vf vanish on the boundaries of their convex supports;

(h) Cramér Condition: sup_{ν∈R^d: ∥ν∥=1} limsup_{|t|→∞} |E exp(ιtℓ₁/σ̄_ν)| < 1, where σ̄_ν := ν′Σν.

Under Assumption 1 and using integration by parts, the DWAD vector can be expressed as

    θ = −2E[Y ḟ(X)],

which motivates the celebrated plug-in analog estimator of Powell et al. (1989) given by

    θ̂ = −(2/n) Σ_{i=1}^n Y_i ∂f̂_i(X_i)/∂x,    f̂_i(x) = (1/(n − 1)) Σ_{j=1, j≠i}^n (1/h^d) K((X_j − x)/h),

where f̂_i(·) is a "leave-one-out" kernel density estimator for kernel function K : R^d → R and positive vanishing (bandwidth) sequence h. For the kernel function, we impose the following conditions.

Assumption 2.

(a) K is even, differentiable, and K̇ is bounded;

(b) ∫_{R^d} K̇(u)K̇(u)′ du is positive definite;

(c) For some P ≥ 2, ∫_{R^d} |K(u)|(1 + ∥u∥^P) du + ∫_{R^d} ∥K̇(u)∥(1 + ∥u∥²) du < ∞ and

    ∫_{R^d} u^a K(u) du = 1 if [a] = 0;  = 0 if 0 < [a] < P;  = µ_a < ∞ if [a] = P,

where a ∈ Z_+^d is a multi-index.²

²We employ standard multi-index notation. For a := (a₁, . . . , a_d) we have (i) [a] := a₁ + · · · + a_d, (ii) a! := a₁! · · · a_d!, (iii) x^a := x₁^{a₁} · · · x_d^{a_d} for x ∈ R^d, and (iv) q^{(a)}(x) := ∂^{[a]}q/(∂^{a₁}x₁ · · · ∂^{a_d}x_d) for smooth enough q : R^d → R.

The estimator θ̂ can be expressed as a second-order U-statistic with n-varying kernel:

    θ̂ = (n choose 2)^{-1} Σ_{i<j} U_ij,    U_ij = −(1/h^{d+1}) K̇((X_i − X_j)/h)(Y_i − Y_j),    (2.1)

where Σ_{i<j} is shorthand notation for Σ_{i=1}^{n−1} Σ_{j=i+1}^n.
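Since every ingredient of the U-statistic form is explicit, the point estimator is easy to compute directly. Below is a minimal numerical sketch (not from the paper) for d = 1 using a Gaussian kernel, which satisfies Assumption 2 with P = 2; the data generating process is purely illustrative:

```python
import numpy as np

def dwad_estimate(x, y, h):
    """DWAD estimator via the U-statistic form (2.1) for d = 1.

    Uses the Gaussian kernel K(u) = phi(u), so that K'(u) = -u * phi(u),
    and U_ij = -K'((x_i - x_j) / h) * (y_i - y_j) / h^{d+1} with d = 1.
    """
    n = len(x)
    dx = (x[:, None] - x[None, :]) / h            # (X_i - X_j) / h
    dy = y[:, None] - y[None, :]                  # Y_i - Y_j
    kdot = -dx * np.exp(-dx ** 2 / 2) / np.sqrt(2 * np.pi)
    u = -kdot * dy / h ** 2                       # U_ij matrix; diagonal is zero
    i, j = np.triu_indices(n, k=1)                # all pairs with i < j
    return u[i, j].mean()                         # (n choose 2)^{-1} sum_{i<j} U_ij

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = x + rng.normal(scale=0.5, size=n)             # g(x) = x, so theta = E[f(X)]
theta_hat = dwad_estimate(x, y, h=n ** (-1 / 5))
```

For this design θ = ∫f(x)² dx = 1/(2√π) ≈ 0.282, and the estimate should land near that value for moderate n.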
3 First-order Theory

Before presenting our main results concerning the higher-order distributional properties of different statistics based on θ̂, we overview conventional and alternative asymptotic distributional approximations, and the variance estimation methods proposed in the literature emerging from those distinct approximation frameworks. Limits are taken as h → 0 and n → ∞ unless otherwise noted.

3.1 Distributional Approximation

In a landmark contribution, Powell et al. (1989) studied the first-order large sample distributional properties of θ̂. They showed that, under appropriate restrictions on h and K, the estimator θ̂ is asymptotically linear with (efficient) influence function ψ(z), and thus with semiparametric (efficient) asymptotic variance Σ. More precisely, Powell et al. (1989) showed that if Assumptions 1 and 2 hold, and if nh^{2 min(P,S)} → 0 and nh^{d+2} → ∞, then

    √n(θ̂ − θ) = (1/√n) Σ_{i=1}^n ψ(Z_i) + o_P(1) ⇝ N(0, Σ).    (3.1)

This result follows from the U-statistic representation in (2.1) and its Hoeffding decomposition, which gives θ̂ = E[U_ij] + L̄ + Q̄, where

    L̄ = (1/n) Σ_{i=1}^n L_i,    L_i = 2(E[U_ij | Z_i] − E[U_ij]),

and

    Q̄ = (n choose 2)^{-1} Σ_{i<j} Q_ij,    Q_ij = U_ij − E[U_ij | Z_i] − E[U_ij | Z_j] + E[U_ij],

both mean zero random vectors. Because E[U_ij] = θ + O(h^{min(P,S)}) and Q̄ = O_P(n^{-1}h^{-(d+2)/2}), it follows that

    √n(θ̂ − θ) = (1/√n) Σ_{i=1}^n (E[U_ij | Z_i] − E[U_ij]) + O_P(√n h^{min(P,S)} + (nh^{d+2})^{-1/2}),

from which the asymptotic linear representation based on the (efficient) influence function in (3.1) is established upon noting that E[∥L̄ − Σ_{i=1}^n ψ(Z_i)/n∥²] = O(n^{-1}h).
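The practical import of the Q̄ term can be seen in a small simulation: when nh^{d+2} is small, the variance of θ̂ is dominated by the quadratic term rather than by the influence-function term. A sketch with an illustrative DGP and bandwidths (not taken from the paper):

```python
import numpy as np

def dwad(x, y, h):
    """DWAD point estimator (2.1) for d = 1 with a Gaussian kernel derivative."""
    dx = (x[:, None] - x[None, :]) / h
    dy = y[:, None] - y[None, :]
    kdot = -dx * np.exp(-dx ** 2 / 2) / np.sqrt(2 * np.pi)
    u = -kdot * dy / h ** 2
    i, j = np.triu_indices(len(x), k=1)
    return u[i, j].mean()

rng = np.random.default_rng(1)
n, reps = 200, 200
mc_var = {}
for h in (0.5, 0.05):                  # nh^{d+2} = 25 vs. 0.025
    draws = []
    for _ in range(reps):
        x = rng.normal(size=n)
        y = x + rng.normal(scale=0.5, size=n)
        draws.append(dwad(x, y, h))
    mc_var[h] = np.var(draws)
# With h = 0.05 the quadratic-term variance, of order n^{-2} h^{-(d+2)},
# dominates, so the Monte Carlo variance is far larger than with h = 0.5.
```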
Conceptually, the Hoeffding decomposition and subsequent analysis of each of its terms shows that the estimator admits a bilinear form representation in general, which is then reduced to a sample average approximation by assuming a bandwidth sequence and kernel shape that make both the misspecification error (smoothing bias) and the variability introduced by Q̄ (the "quadratic term") negligible in large samples. As a result, provided that such tuning parameter choices are feasible, the estimator will be asymptotically linear.

Asymptotic linearity of a semiparametric estimator has several distinct features that may be considered attractive from a theoretical point of view (Newey, 1994). In particular, it is a necessary condition for semiparametric efficiency and it leads to a limiting distribution that is invariant to the choice of the first-step nonparametric estimator entering the two-step semiparametric procedure. However, insisting on asymptotic linearity may also have its drawbacks because it requires several potentially strong assumptions and leads to a large sample theory that may not accurately represent the finite sample behavior of the statistic. In the case of θ̂, asymptotic linearity requires P > 2 unless d = 1, thereby forcing restrictive smoothness conditions (S ≥ P) and the use of higher-order kernels or similar debiasing techniques (see, e.g., Chernozhukov et al., 2022, and references therein). In addition, classical asymptotic linear theory (whenever valid) leads to a limiting experiment which is invariant to the particular choices of smoothing (K) and bandwidth (h) tuning parameters involved in the construction of the estimator, and therefore it is unable to "adapt" to changes in those choices. As a result, asymptotically linear large sample distribution theory is silent with respect to the impact that tuning parameter choices may have on the finite sample behavior of the two-step semiparametric statistic.

To address the aforementioned limitations of classical asymptotic distribution theory, Cattaneo et al. (2014a) proposed a more general distributional approximation for kernel-based DWAD estimators that accommodates but does not enforce asymptotic linearity. The core idea is to characterize the asymptotic distributional features of the linear (L̄) and quadratic (Q̄) terms jointly, and in the process develop an alternative first-order asymptotic theory that accommodates weaker assumptions than those imposed in the classical asymptotically linear distribution theory. Formally, if Assumptions 1 and 2 hold, and if min(nh^{d+2}, 1) nh^{2 min(P,S)} → 0 and n²h^d → ∞, then

    (V[θ̂])^{-1/2}(θ̂ − θ) ⇝ N(0, I),    (3.2)

where

    V[θ̂] = V[L̄] + V[Q̄],    V[L̄] = (1/n)(Σ + o(1)),    V[Q̄] = (n choose 2)^{-1} h^{-d-2}(∆ + o(1)),

and ∆ = 2E[v(X)f(X)] ∫_{R^d} K̇(u)K̇(u)′ du.
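For a concrete sense of the constant ∆, the kernel factor ∫K̇(u)K̇(u)′ du can be evaluated directly; for example, for the standard Gaussian kernel with d = 1 it equals 1/(4√π). A side calculation for illustration (not from the paper):

```python
import numpy as np

# K(u) = phi(u), the standard Gaussian density, so K'(u) = -u * phi(u).
u = np.linspace(-10.0, 10.0, 200001)
kdot = -u * np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)
kernel_const = np.sum(kdot ** 2) * (u[1] - u[0])   # Riemann sum of K'(u)^2

closed_form = 1 / (4 * np.sqrt(np.pi))             # exact value, about 0.1410
```

Combining this constant with an estimate of E[v(X)f(X)] gives the magnitude of the quadratic-term variance in (3.2).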
This more general distributional approximation was developed explicitly in an attempt to better characterize the finite sample behavior of θ̂. The result in (3.2) shows that the conditions on the bandwidth sequence may be considerably weakened without invalidating the limiting Gaussian distribution, albeit the asymptotic variance formula may change. Importantly, if nh^{d+2} is bounded then θ̂ is no longer asymptotically linear and its limiting distribution will cease to be invariant with respect to the underlying preliminary nonparametric estimator. In particular, if nh^{d+2} → c > 0 then θ̂ is root-n consistent but not asymptotically linear. In addition, because the bandwidth is allowed to be "smaller" than usual, the bias of the estimator is controlled in a different way, removing the need for higher-order kernels. Interestingly, (3.2) allows for the point estimator to not even be consistent for θ, for sufficiently small bandwidth sequences.

Beyond the aforementioned technical considerations, the result in (3.2) can conceptually be interpreted as a more refined first-order distributional approximation for the standardized statistic (V[θ̂])^{-1/2}(θ̂ − θ), which, by relying on a quadratic approximation (i.e., capturing the stochastic contributions of both L̄ and Q̄), is expected to offer a "better" distributional approximation. The idea of standardizing a U-statistic by the joint variance of the linear and quadratic terms underlying its Hoeffding decomposition can be traced back to the original paper of Hoeffding (1948, p. 307). Furthermore, the asymptotic distribution theory proposed by Cattaneo et al. (2014a) can be viewed as highlighting the well-known trade-off between robustness and efficiency in two-step semiparametric settings: θ̂ is semiparametric efficient if and only if nh^{d+2} → ∞, while it seems possible to construct more robust inference procedures under considerably weaker conditions that would not be semiparametric efficient. Simulation evidence reported in Cattaneo et al. (2010, 2014a,b) corroborated those conceptual interpretations numerically, but no formal justification is available in the literature. Theorem 1 below will offer the first theoretical result in the literature highlighting specific robustness features of the distributional approximation in (3.2) by showing that such approximation has a demonstrably smaller higher-order distributional approximation error.
3.2 Variance Estimation

Based on the asymptotically linear distributional approximation in (3.1), Powell et al. (1989) also proposed the following variance estimator

    Σ̂ = (1/n) Σ_{i=1}^n L̂_i L̂_i′,    L̂_i = 2((1/(n − 1)) Σ_{j=1, j≠i}^n U_ij − θ̂),

and proved its consistency (i.e., Σ̂ →_P Σ) under the same bandwidth sequences (nh^{2 min(P,S)} → 0 and nh^{d+2} → ∞) required for asymptotic linearity. This result justifies employing the Studentized statistic

    Σ̂^{-1/2}√n(θ̂ − θ) ⇝ N(0, I)    (3.3)

for inference purposes, that is, to construct a confidence interval for θ and smooth transformations thereof, or to carry out statistical hypothesis testing in the usual way.
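A sketch of Σ̂ and the implied confidence interval for d = 1 with a Gaussian kernel (an illustrative setup with hypothetical DGP and bandwidth choices, not prescriptions from the paper):

```python
import numpy as np

def dwad_powell_se(x, y, h):
    """theta_hat and the plug-in variance estimator Sigma_hat, d = 1."""
    n = len(x)
    dx = (x[:, None] - x[None, :]) / h
    dy = y[:, None] - y[None, :]
    kdot = -dx * np.exp(-dx ** 2 / 2) / np.sqrt(2 * np.pi)
    u = -kdot * dy / h ** 2                       # U_ij matrix, zero diagonal
    i, j = np.triu_indices(n, k=1)
    theta = u[i, j].mean()
    l_hat = 2 * (u.sum(axis=1) / (n - 1) - theta) # L_hat_i
    sigma_hat = np.mean(l_hat ** 2)               # Sigma_hat
    return theta, sigma_hat

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=n)
y = x + rng.normal(scale=0.5, size=n)
theta, sigma_hat = dwad_powell_se(x, y, h=n ** (-1 / 5))
half = 1.96 * np.sqrt(sigma_hat / n)              # inverting the statistic (3.3)
ci = (theta - half, theta + half)
```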
However, motivated by their alternative asymptotic approximation, Cattaneo et al. (2014a) showed that

    (1/n)Σ̂ = (1/n)[Σ + o_P(1)] + 2 (n choose 2)^{-1} h^{-d-2}[∆ + o_P(1)],

which implies that the consistency result Σ̂ →_P Σ is valid if and only if nh^{d+2} → ∞; otherwise, Σ̂ is in general asymptotically upwards biased relative to V[θ̂] in (3.2). Because Σ̂ is asymptotically equivalent to the jackknife variance estimator of θ̂, Cattaneo et al. (2014b) also noted that the asymptotic bias of Σ̂ is a result of a more generic phenomenon underlying jackknife variance estimators studied in Efron and Stein (1981). See also Matsushita and Otsu (2021) for related discussion.

To conduct asymptotically valid inference under the more general small bandwidth asymptotic regime, Cattaneo et al. (2014a) proposed several "debiased" variance estimators, including the following:

    V̂ = (1/n)Σ̂ − (n choose 2)^{-1} h^{-d-2} ∆̂,    ∆̂ = h^{d+2} (n choose 2)^{-1} Σ_{i=1}^{n−1} Σ_{j=i+1}^n U_ij U_ij′,

and showed that ∆̂ →_P ∆ under the same bandwidth sequences (nh^{2 min(P,S)} → 0 and n²h^d → ∞) required for (3.2) to hold. The estimator ∆̂ is asymptotically equivalent to the debiasing procedure proposed in Efron and Stein (1981). This result justifies employing the Studentized statistic

    V̂^{-1/2}(θ̂ − θ) ⇝ N(0, I)    (3.4)

for more "robust" inference purposes relative to those constructed using (3.3).
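A sketch of the debiased variance construction: estimate ∆ from the squared pairwise kernels U_ij and subtract its contribution from Σ̂/n (illustrative d = 1 Gaussian-kernel setup; a deliberately small bandwidth is chosen so the correction is visible):

```python
import numpy as np

def dwad_variances(x, y, h):
    """Return theta_hat, Sigma_hat/n, and the debiased V_hat, for d = 1."""
    n = len(x)
    dx = (x[:, None] - x[None, :]) / h
    dy = y[:, None] - y[None, :]
    kdot = -dx * np.exp(-dx ** 2 / 2) / np.sqrt(2 * np.pi)
    u = -kdot * dy / h ** 2
    i, j = np.triu_indices(n, k=1)
    theta = u[i, j].mean()
    l_hat = 2 * (u.sum(axis=1) / (n - 1) - theta)
    sig_over_n = np.mean(l_hat ** 2) / n              # Sigma_hat / n
    pairs = n * (n - 1) / 2
    delta_hat = h ** 3 * np.mean(u[i, j] ** 2)        # Delta_hat, h^{d+2} = h^3
    v_hat = sig_over_n - delta_hat / (pairs * h ** 3) # debiased V_hat
    return theta, sig_over_n, v_hat

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
y = x + rng.normal(scale=0.5, size=n)
theta, sig_over_n, v_hat = dwad_variances(x, y, h=0.1)  # nh^{d+2} = 0.5
# Sigma_hat/n over-states V[theta_hat] in this small-bandwidth regime;
# V_hat removes the upward bias.
```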
Heuristically, robustness manifests in two distinct ways. First, the underlying Gaussian distributional approximation holds under weaker bandwidth restrictions and does not require asymptotic linearity, thereby making the limiting distribution explicitly depend on tuning parameter choices. Second, the new standard error formula V̂ is derived from the more general small bandwidth approximation and makes explicit the contribution of terms regarded as higher-order by classical large sample distributional approximations.

While not reproduced here to conserve space, the in-depth Monte Carlo evidence reported in Cattaneo et al. (2010, 2014a,b) also showed that employing inference procedures based on (3.4) leads to large improvements in terms of "robustness" to bandwidth choice and other tuning inputs, when compared to classical asymptotically linear inference procedures based on (3.3). Theorem 2 below will study those two feasible statistics and show formally that the distributional approximation (3.4) has demonstrably smaller higher-order errors than the distributional approximation (3.3).
4 Higher-order Distribution Theory

We present Edgeworth expansions for scalar standardized and Studentized statistics based on θ̂_ν − θ_ν with θ̂_ν := ν′θ̂ and θ_ν := ν′θ, where ν ∈ R^d is a fixed non-random vector. Considering scalar statistics substantially simplifies the developments and proofs without affecting the main conceptual and theoretical takeaways. The standardization sequence ϑ will first be non-random, thereby allowing us to investigate the role of classical distributional approximations based on asymptotic linearity vis-à-vis the more general distributional approximations based on small bandwidth asymptotics for standardized statistics. Then, the sequence ϑ will be taken to be random based on the two alternative variance estimators introduced in the previous section, thereby allowing us to investigate the role of variance estimation on the performance of distributional approximations for Studentized statistics.
4.1 Distributional Approximation

Our first theorem offers a valid Edgeworth expansion for the sampling distribution function

    F_ϑ(t) := P((θ̂_ν − θ_ν)/ϑ ≤ t),    t ∈ R,

with a precise characterization of the first three cumulants determining the leading errors in the distributional approximation of the standardized statistic. Define the following key quantities:

    β := 2(−1)^P Σ_{[k]=P} (µ_k/k!) E[g(X) (∂^k/∂X^k) ν′ḟ(X)],
    σ² := V[θ̂_ν],
    κ₁ := E[(ν′ψ(Z))³],
    κ₂ := 4E[δ(Z)η̇(Z)] − 8E[δ(Z)²]θ_ν + 4θ_ν³,

where δ(Z) := ν′ψ(Z)/2 + θ_ν and η(Z₂) := lim_{n→∞} E[δ(Z₁)ν′U₁₂ | Z₂].

Theorem 1 (Standardized). Suppose Assumptions 1 and 2 hold. If √n h^P → 0 and nh^{d+2} → ∞, then for any positive non-random sequence ϑ such that ϑ/σ → 1,

    sup_{t∈R} |F_ϑ(t) − G_ϑ(t)| = O(R_n) + o(n^{-1/2})

with

    G_ϑ(t) := Φ(t) − φ(t)[ (β/ϑ)h^P + ((σ²/ϑ²) − 1)(t/2) + ((κ₁ + κ₂)/(6n²ϑ³))(t² − 1) ],

and R_n := nh^{2P} + ((log n)³/(nh^{d+2}))^{3/2} + h^{d/3+1}/(nh^{d+2}) + (h^{d/9+2/3}/(nh^{d+2}))^{3/2}, where Φ and φ are the c.d.f. and p.d.f. of a standard Gaussian distribution. Furthermore, if (log n)³/(nh^{d+2}) → 0, then R_n = o(√n h^P + 1/(nh^{d+2})).
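To make the expansion concrete, a corrected approximation of this form can be evaluated numerically once the constants are supplied. The sketch below assumes the variance-mismatch term enters with the usual t/2 Edgeworth coefficient, and all constant values are hypothetical placeholders:

```python
import math

def edgeworth_cdf(t, beta, sigma2, vartheta2, kappa, h, n, P):
    """Edgeworth-corrected CDF: Phi(t) minus phi(t) times bias, variance,
    and third-cumulant correction terms (constants supplied by the user)."""
    phi = math.exp(-t ** 2 / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(t / math.sqrt(2)))
    vt = math.sqrt(vartheta2)
    corr = (beta / vt) * h ** P \
        + (sigma2 / vartheta2 - 1) * t / 2 \
        + kappa / (6 * n ** 2 * vt ** 3) * (t ** 2 - 1)
    return Phi - phi * corr

# With sigma2 == vartheta2 (small bandwidth standardization) the middle term
# vanishes exactly; the remaining corrections are bias and skewness.
g = edgeworth_cdf(1.96, beta=0.2, sigma2=1e-3, vartheta2=1e-3,
                  kappa=0.05, h=0.05, n=500, P=2)
```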
This theorem is proven by verifying the high-level conditions of a result in Appendix A establishing a valid Edgeworth expansion for a generic class of U-statistics with n-varying kernels, which may be of independent theoretical interest. Specifically, Theorem A.1 and its Corollary A.1 improve on Jing and Wang (2003) by allowing for n-varying kernels under more general conditions suitable for the semiparametric problem of interest herein. Theorem 1 also improves on Nishiyama and Robinson (2000, Theorem 1) in two respects: (i) it allows for a generic standardization scheme ϑ instead of their specific choice √(ν′Σν/n); and (ii) it presents a valid Edgeworth expansion with precise error rates with respect to the bandwidth. These improvements enable us to compare the two different distributional approximations of interest, (3.1) vs. (3.2).

The main conclusion in Theorem 1 follows the expected logic underlying Edgeworth expansions: (β/ϑ)h^P, (σ²/ϑ²) − 1, and (κ₁ + κ₂)/(6n²ϑ³) capture, respectively, the standardized bias, variance, and higher moments of the statistic. Inspection of these terms leads to interesting implications for large sample distribution theory, in particular leading to a sharp contrast between distribution theory based on asymptotic linear representations vis-à-vis alternative asymptotics, each with either fixed-bandwidth or leading asymptotic variance standardization. More specifically, we can consider four distinct standardization schemes: from first-order asymptotic linear theory (3.1) we have

    ϑ²_AL := V[ν′L̄] = (1/n)V[ν′L_i]    and    ϑ̆²_AL := (1/n)ν′Σν,

while from small bandwidth distribution theory (3.2) we have

    ϑ²_SB := V[θ̂_ν] = σ²    and    ϑ̆²_SB := (1/n)ν′Σν + (n choose 2)^{-1} h^{-d-2} ν′∆ν.
+ The standardizations ϑAL and ϑSB correspond to those constructed using the pre-asymptotic variance
616
+ of the point estimator, each justified according to the asymptotic regime considered (asymptotic
617
+ linear and small bandwidth, respectively). In contrast, the standardizations ˘ϑAL and ˘ϑSB correspond
618
+ to employing the leading term only in the large sample approximation of the pre-asymptotic variance
619
+ of the point estimator, again keeping only those terms that are justified by the asymptotic regime
620
+ considered. That is, ϑAL = ˘ϑAL + o(n−1) and ϑSB = ˘ϑSB + o(n−1) under the assumptions of Theorem
621
+ 1. For comparison, Nishiyama and Robinson (2000, Theorem 1) used ˘ϑAL.
622
+ Employing Theorem 1 we can now compare the different approaches to standardization and
623
+ their associated errors generated in the distributional approximation. Firstly, it is easy to see that
624
+ employing ˘ϑAL and ˘ϑSB will generate larger distributional approximation errors relative to their pre-
625
+ asymptotic counterparts, ϑAL and ϑSB, respectively. See the proof in the appendix for exact rates,
626
+ which are not reproduced here to conserve space. The main conceptual message is that one should
627
+ always employ variance formulas that capture the full variability of the statistic whenever possible,
628
+ as opposed to employing those that capture only the leading variability in large samples.
629
+ See
630
+ Calonico et al. (2018, 2022) for closely related results in the context of nonparametric kernel-based
631
+ density and local polynomial regression estimation and inference.
632
Secondly, and more importantly for our purposes, Theorem 1 shows that even if the full finite-sample variance of the point estimator is captured for standardization purposes, it is still crucial to incorporate the variability of both the linear and quadratic terms. More precisely, setting ϑ = ϑAL gives σ^2/ϑ^2 − 1 = O(n^{-1}h^{-d-2}), while setting ϑ = ϑSB implies that σ^2/ϑ^2 − 1 = 0. As a consequence, our first main result shows that employing the pre-asymptotic variance of the statistic, which is naturally justified by the more general asymptotic distributional approximation (3.2), leads to the smallest error in the distributional approximation of the sampling distribution of the standardized statistic. This result thus provides theory-based evidence in favor of employing small bandwidth asymptotics for kernel-based DWAD methods whenever the goal is to minimize the errors of inference procedures relying on large-sample Gaussian approximations.
The methodological implications of our first theoretical result can be illustrated by analyzing the coverage error of standardized confidence intervals. According to Theorem 1, for any α ∈ (0, 1), a 100(1 − α)% two-sided confidence interval based on asymptotic linearity satisfies

    P[ θν ∈ [ θ̂ν ± Φ_{1−α/2} ϑAL ] ] = 1 − α + K_AL/(n h^{d+2}) + o( √n h^P + n^{-1}h^{-d-2} + n^{-1/2} ),

where Φ_α := Φ^{-1}(α) and K_AL := 2 Φ_{1−α/2} φ(Φ_{1−α/2}) n h^{d+2} (σ^2/ϑAL^2 − 1) = O(1 + h^2), with the exact form of the leading terms described in the appendix. On the other hand, under the conditions in Theorem 1, a 100(1 − α)% two-sided confidence interval based on small bandwidth asymptotics satisfies

    P[ θν ∈ [ θ̂ν ± Φ_{1−α/2} ϑSB ] ] = 1 − α + o( √n h^P + n^{-1}h^{-d-2} + n^{-1/2} ),

implying a smaller coverage error distortion in large samples.
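To get a feel for which of the three error rates in these coverage expansions dominates, they can be tabulated for concrete sample sizes. The snippet below is a small illustrative calculation; the values of d, P and the power bandwidth rule h = n^{-ρ} are hypothetical choices made only for illustration (chosen so that √n h^P → 0 and n h^{d+2} → ∞), not taken from the paper.

```python
import math

def coverage_error_rates(n, d=1, P=4, rho=0.2):
    """Evaluate the three rates appearing in the coverage expansions:
    sqrt(n) h^P (smoothing bias), 1/(n h^(d+2)) (quadratic term),
    and n^(-1/2) (skewness), under the hypothetical rule h = n^(-rho).
    rho = 0.2 lies in (1/(2P), 1/(d+2)) = (0.125, 1/3), so all three shrink."""
    h = n ** (-rho)
    return (math.sqrt(n) * h ** P, 1.0 / (n * h ** (d + 2)), n ** (-0.5))

for n in (500, 2000, 8000):
    bias, quad, skew = coverage_error_rates(n)
    print(f"n={n:5d}: sqrt(n)h^P={bias:.4f}  1/(nh^(d+2))={quad:.4f}  1/sqrt(n)={skew:.4f}")
```

For this bandwidth rule the quadratic-term rate n^{-1}h^{-d-2} is of the same order of magnitude as the other two, which is exactly the situation in which the choice of standardization matters.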
The above coverage error comparison is conceptually useful, but it does not directly translate to practice because the confidence intervals are infeasible. To complement the results in this section, we next consider the implications of constructing variance estimators, and hence study feasible (Studentized) inference procedures.
4.2 Variance Estimation

We study the role of Studentization and thus obtain valid Edgeworth expansions for the sampling distribution functions

    F̂AL(t) := P[ (θ̂ν − θν)/ϑ̂AL ≤ t ],    ϑ̂AL := (1/n) ν′Σ̂ν,

and

    F̂SB(t) := P[ (θ̂ν − θν)/ϑ̂SB ≤ t ],    ϑ̂SB := (1/n) ν′Σ̂ν − C(n,2)^{-1} h^{-d-2} ν′∆̂ν.
Crucially, the estimators Σ̂ and ∆̂ target the total variability nV[L̄] = V[Li] and C(n,2) h^{d+2} V[Q̄] = h^{d+2} V[Qij], respectively, and not just their leading terms Σ and ∆. Therefore, in light of the results reported in the previous section, we do not explicitly consider naïve plug-in estimators of ˘ϑAL and ˘ϑSB, such as (2/n^2) Σ_{i=1}^n ( ν′[ ė̂(Xi) − Yi ḟ̂(Xi) − θ̂ ] )^2 for the former, where ė̂(x) and ḟ̂(x) are plug-in nonparametric estimators of ė(x) and ḟ(x), respectively. These alternative Studentization schemes lead to larger higher-order distributional approximation errors when compared to ϑ̂AL and ϑ̂SB.
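To make the distinction concrete, the total-variability targets V[Li] and V[Qij] can be estimated from the empirical Hoeffding projections of the pairwise kernel evaluations. The sketch below does this for a generic, hypothetical second-order kernel u(x, y) = (x + y)^2 — not the DWAD kernel of the paper; ell_hat and q_hat are the empirical analogues of ℓi and qij, and by construction they sum to zero.

```python
import numpy as np

def hoeffding_components(U):
    """U[i, j] = u(Z_i, Z_j) for a symmetric kernel u (diagonal ignored).
    Returns (Ubar, ell_hat, q_hat): the U-statistic and the empirical
    Hoeffding projections; by construction sum(ell_hat) = 0 and the
    off-diagonal entries of q_hat sum to 0."""
    n = U.shape[0]
    off = U.copy()
    np.fill_diagonal(off, 0.0)
    npairs = n * (n - 1) / 2.0
    Ubar = off.sum() / 2.0 / npairs            # C(n,2)^{-1} sum_{i<j} u(Z_i, Z_j)
    row_mean = off.sum(axis=1) / (n - 1)       # (n-1)^{-1} sum_{j != i} u(Z_i, Z_j)
    ell_hat = 2.0 * (row_mean - Ubar)          # empirical analogue of l(Z_i)
    q_hat = off - ell_hat[:, None] / 2 - ell_hat[None, :] / 2 - Ubar
    np.fill_diagonal(q_hat, 0.0)
    return Ubar, ell_hat, q_hat

# Toy illustration with the hypothetical kernel above.
rng = np.random.default_rng(0)
n = 40
Z = rng.standard_normal(n)
Ubar, ell_hat, q_hat = hoeffding_components((Z[:, None] + Z[None, :]) ** 2)
V_ell = np.mean(ell_hat ** 2)                              # targets V[l_i]
V_q = (np.triu(q_hat, 1) ** 2).sum() / (n * (n - 1) / 2.0) # targets V[q_ij]
```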
Theorem 2 (Studentized). Suppose Assumptions 1 and 2 hold with p ≥ 8. If √n h^P → 0 and n h^{d+2}/(log n)^9 → ∞, then

    sup_{t∈R} | F̂AL(t) − ĜAL(t) | = o(r_n)

with

    ĜAL(t) := Φ(t) − φ(t) [ √n h^P β/(ν′Σν)^{1/2} − (1/(n h^{d+2})) (ν′∆ν/ν′Σν) t − (1/(6√n (ν′Σν)^{3/2})) ( κ1(2t^2 + 1) + κ2(t^2 + 1) ) ],

and

    sup_{t∈R} | F̂SB(t) − ĜSB(t) | = o(r_n)

with

    ĜSB(t) := Φ(t) − φ(t) [ √n h^P β/(ν′Σν)^{1/2} − (1/(6√n (ν′Σν)^{3/2})) ( κ1(2t^2 + 1) + κ2(t^2 + 1) ) ],

where r_n := √n h^P + n^{-1}h^{-d-2} + n^{-1/2}.
This theorem shows that employing Studentization based on small bandwidth asymptotics offers demonstrable improvements in terms of distributional approximations for the resulting feasible t-test. The main practical implication of our second result can again be illustrated by analyzing the coverage error of Studentized confidence intervals. According to Theorem 2, and as was the case for standardized confidence intervals, a 100(1 − α)% two-sided confidence interval based on asymptotic linearity satisfies

    P[ θν ∈ [ θ̂ν ± Φ_{1−α/2} ϑ̂AL ] ] = 1 − α + (1/(n h^{d+2})) 2 Φ_{1−α/2} φ(Φ_{1−α/2}) (ν′∆ν/ν′Σν) + o(r_n),

while, under the conditions in Theorem 2, a 100(1 − α)% two-sided confidence interval based on small bandwidth asymptotics satisfies

    P[ θν ∈ [ θ̂ν ± Φ_{1−α/2} ϑ̂SB ] ] = 1 − α + o(r_n),

implying a smaller coverage error distortion in large samples. This result provides a theoretical justification for the simulation evidence reported in Cattaneo et al. (2014a,b, 2010), where feasible confidence intervals based on small bandwidth asymptotics were shown to offer better finite-sample performance in terms of coverage error than their counterparts based on classical asymptotic linear approximations.
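The mechanics of feasible Studentized intervals can be mimicked in a toy Monte Carlo experiment. The sketch below builds confidence intervals for a simple, hypothetical second-order U-statistic (kernel u(x, y) = (x + y)^2, so E u = 2 for standard normal data — not the kernel-based DWAD estimator of the paper), using a Hoeffding-based variance estimate that keeps both the linear and the quadratic variability; it illustrates only the construction, not the paper's rate comparisons.

```python
import numpy as np

def ustat_ci(Z, z_crit=1.96):
    """95% CI for E u(Z1, Z2) with u(x, y) = (x + y)^2, Studentized by a
    variance estimate V_ell/n + V_q/C(n,2) built from empirical Hoeffding
    projections (both variability terms retained, cf. Section 4.2)."""
    n = Z.size
    U = (Z[:, None] + Z[None, :]) ** 2
    np.fill_diagonal(U, 0.0)
    npairs = n * (n - 1) / 2.0
    Ubar = U.sum() / 2.0 / npairs
    ell = 2.0 * (U.sum(axis=1) / (n - 1) - Ubar)
    q = U - ell[:, None] / 2 - ell[None, :] / 2 - Ubar
    np.fill_diagonal(q, 0.0)
    var_hat = np.mean(ell ** 2) / n + (np.triu(q, 1) ** 2).sum() / npairs ** 2
    half = z_crit * np.sqrt(var_hat)
    return Ubar - half, Ubar + half

rng = np.random.default_rng(1)
theta = 2.0                      # E (Z1 + Z2)^2 for Z ~ N(0, 1)
reps, hits = 300, 0
for _ in range(reps):
    lo, hi = ustat_ci(rng.standard_normal(50))
    hits += (lo <= theta <= hi)
coverage = hits / reps
print(f"empirical coverage: {coverage:.3f}")  # near the nominal 0.95 in this toy setting
```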
5 Conclusion

Employing Edgeworth expansions, we study the higher-order properties of two alternative first-order distributional approximations and their associated inference procedures (e.g., confidence intervals) for the kernel-based DWAD estimator of Powell et al. (1989). We showed that small bandwidth asymptotics not only give demonstrably better distributional approximations than asymptotic linear approximations, but also justify employing a variance estimator for Studentization purposes that also improves the distributional approximation. The main takeaway from our results is that, in two-step semiparametric settings and related problems, alternative asymptotic approximations that capture higher-order terms ignored by the classical asymptotic linear approximation can deliver better distributional approximations and, by implication, better inference procedures with improved performance in finite samples.
While beyond the scope of this paper, it would be of interest to develop analogous Edgeworth expansions for non-linear two-step semiparametric procedures developed using alternative asymptotic approximations and resampling methods (Cattaneo et al., 2013; Cattaneo and Jansson, 2018; Cattaneo et al., 2019). For the special case of kernel-based DWAD estimators (a linear two-step kernel-based semiparametric estimator), Nishiyama and Robinson (2005) present results that could be contrasted with those obtained under small bandwidth asymptotics (Cattaneo et al., 2014b). We relegate such developments to future research due to the substantial amount of additional technical work required.
A Edgeworth Expansion for Second-Order U-Statistics

Consider the sequence of maps (u_n : R^d × R^d → R, n ∈ N), where u := u_n is symmetric in its two arguments for every n ∈ N. Given a random sample Z1, . . . , Zn, n ≥ 2, of the random variable Z taking values in R^d, the object of interest is the second-order U-statistic with an n-varying kernel,

    Ū := C(n,2)^{-1} Σ_{1≤i<j≤n} u(Zi, Zj).    (A.1)
We drop the subscript n to simplify notation. By the Hoeffding decomposition,

    (Ū − θ)/ϑ = B + L + Q,

where B := (E u(Z1, Z2) − θ)/ϑ, L := (1/(ϑn)) Σ_{i=1}^n ℓi, and Q := ϑ^{-1} C(n,2)^{-1} Σ_{1≤i<j≤n} qij, with ℓi := ℓ(Zi) and qij := q(Zi, Zj), where ℓ(Z1) := 2[ E(u(Z1, Z2) | Z1) − E u(Z1, Z2) ] and q(Z1, Z2) := u(Z1, Z2) − ℓ(Z1)/2 − ℓ(Z2)/2 − E u(Z1, Z2). Given the decomposition above,

    σ^2 := V[Ū] = (1/n) σℓ^2 + C(n,2)^{-1} σq^2,    (A.2)

where σℓ^2 := Eℓ1^2 and σq^2 := Eq12^2.
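The decomposition above is an exact algebraic identity, and for simple kernels the projections have closed forms. The snippet below checks it numerically for the hypothetical kernel u(x, y) = (x + y)^2 with Z ~ N(0, 1), for which ℓ(z) = 2(z^2 − 1), q(z1, z2) = 2 z1 z2, σℓ^2 = 8 and σq^2 = 4 (these closed forms are for this toy kernel only, not the DWAD kernel).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
Z = rng.standard_normal(n)

theta = 2.0                          # E u(Z1, Z2) = 2 for this toy kernel
i, j = np.triu_indices(n, k=1)
u_vals = (Z[i] + Z[j]) ** 2
Ubar = u_vals.mean()                 # C(n,2)^{-1} sum_{i<j} u(Z_i, Z_j)

# Exact Hoeffding projections for u(x, y) = (x + y)^2:
ell = 2.0 * (Z ** 2 - 1.0)           # l(z) = 2(z^2 - 1)
q_vals = 2.0 * Z[i] * Z[j]           # q(z1, z2) = 2 z1 z2

vartheta = 1.0 / np.sqrt(n)          # any positive scaling works for the identity
B = (2.0 - theta) / vartheta         # (E u - theta)/vartheta = 0 here
L = ell.sum() / (vartheta * n)
Q = q_vals.mean() / vartheta
# Exact identity: (Ubar - theta)/vartheta = B + L + Q, up to float roundoff.
print(abs((Ubar - theta) / vartheta - (B + L + Q)))
```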
We establish a valid third-order Edgeworth expansion for the sampling distribution of the centered and standardized version of Ū:

    F(t) := P[ (Ū − θ)/ϑ ≤ t ],    t ∈ R,    (A.3)

where θ ∈ R and ϑ > 0 are non-random.
Theorem A.1. Let the following conditions hold:

(a) E[(ℓ1/σℓ)^3] = O(1), E|q12|^{2+δ} < ∞, and σℓ > 0;

(b) σq/(√n σℓ) → 0 and σ/ϑ → 1;

(c) lim sup_{n→∞} lim sup_{|t|→∞} | E exp(ιtℓ1/σℓ) | < 1.
Then, sup_{t∈R} |F(t) − G(t)| = O(E) + o(n^{-1/2}), where G is the distribution function with characteristic function

    χ_G(t) := e^{ιtB − t^2/2} [ 1 + Σ_{j=2}^{9} (ιt)^j γj ],
with ι := √−1,

    γ2 = (1/2) (σ^2/ϑ^2 − 1),

    γ3 = (1/(6ϑ^3 n^2)) ( Eℓ1^3 + 6 Eℓ1ℓ2q12 ),

    γ4 = (1/(4ϑ^2)) (σ^2/ϑ^2 − 1) C(n,2)^{-1} σq^2,

    γ5 = (1/(12 n^2 ϑ^5)) [ C(n,2)^{-1} (Eℓ1^3) σq^2 + 6ϑ^2 (σℓ^2/(ϑ^2 n) − 1) Eℓ1ℓ2q12 ],

    γ6 = (1/(6ϑ^6 n^4)) [ (Eℓ1^3) Eℓ1ℓ2q12 + 12 C(n,2)^{-2} C(n,4) (Eℓ1ℓ2q12)^2 ],

    γ7 = 0,

    γ8 = (1/(4ϑ^6 n^4)) (σℓ^2/(ϑ^2 n) − 1) C(n,2)^{-2} C(n,4) (Eℓ1ℓ2q12)^2,

    γ9 = (1/(12ϑ^9 n^6)) C(n,2)^{-2} C(n,4) Eℓ1^3 (Eℓ1ℓ2q12)^2,
and

    E := ( log n/(n^{3/2} σℓ) )^{2+δ} Π_{2+δ}(n) + ( (log n)^{(4+δ)/(2+δ)} σq^2/(n σℓ^2) )^{(2+δ)/2} + ( log n/(n σℓ) )^{2+δ} Π_{2+δ}(log n)
       + (1/(σℓ^4 n)) E|ℓ1^2 ℓ2 q12| + (1/(σℓ^5 n^{3/2})) E|ℓ1^2 ℓ2^2 q12| + (1/(σℓ^2 n^2)) E|ℓ1 q12^2| + (1/(σℓ^5 n^{3/2})) E|ℓ1ℓ2ℓ3 q13 q23|
       + (1/(σℓ^7 n^{3/2})) (Eℓ1ℓ2q12)(E|ℓ1^2 ℓ2 q12|) + (1/(σℓ^8 n^2)) (Eℓ1ℓ2q12)(E|ℓ1^2 ℓ2^2 q12|),

with Π_{2+δ}(m) := E| Σ_{i=1}^{[m]−1} Σ_{j=i+1}^{n} qij |^{2+δ} for real m > 1, and [·] denoting the floor operator.
Corollary A.1. Let the assumptions of Theorem A.1 hold. If B → 0, then

    sup_{t∈R} |F(t) − G(t)| = O( B^2 + E ) + o(n^{-1/2}),

with

    χ_G(t) := e^{−t^2/2} [ 1 + Bιt + Σ_{j=2}^{9} (ιt)^j γj ].
Remark A.1. Lemma A.2 below gives the following simpler bound:

    Π_{2+δ}(m) ≲ (n m σq^2)^{(2+δ)/2} ∨ m n^{1+δ/2} E[ (E(q12^2 | Z1))^{1+δ/2} ] ∨ n m E|q12|^{2+δ},

where ≲ denotes bounded up to a fixed constant, and a ∨ b := max{a, b}.
Remark A.2. We can invert the characteristic function above to obtain a closed form for G, using the fact that, for a non-negative integer k, (1/(2π)) ∫_R exp(−ιtx − t^2/2)(ιt)^k dt = H_k(x)φ(x), where H_k(x) is the k-th order Hermite polynomial (e.g., H_0(x) = 1, H_1(x) = x, H_2(x) = x^2 − 1, H_3(x) = x^3 − 3x). Therefore, the distribution function corresponding to χ_G(t) from Corollary A.1 is

    G(x) = Φ(x) − φ(x) Σ_{j=1}^{9} γj H_{j−1}(x),

with γ1 := B.
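The inversion step can be sanity-checked numerically: the probabilists' Hermite polynomials H_k above are available as numpy's `hermite_e` family, and the Fourier integral can be evaluated on a grid (the trapezoid rule is spectrally accurate here because the integrand decays like exp(−t^2/2)). This is only a check of the displayed identity, not part of the proof.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

def hermite_phi(k, x):
    """H_k(x) * phi(x), with H_k the probabilists' Hermite polynomial."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return He.hermeval(x, c) * np.exp(-x * x / 2) / np.sqrt(2 * np.pi)

def fourier_side(k, x, tmax=30.0, num=60001):
    """(2 pi)^{-1} * integral of exp(-i t x - t^2/2) (i t)^k dt over [-tmax, tmax],
    computed with a manual trapezoid rule."""
    t = np.linspace(-tmax, tmax, num)
    f = np.exp(-1j * t * x - t * t / 2) * (1j * t) ** k
    dt = t[1] - t[0]
    return (f.sum() - (f[0] + f[-1]) / 2).real * dt / (2 * np.pi)

print(He.hermeval(2.0, [0.0, 0.0, 1.0]))        # H_2(2) = 2^2 - 1 = 3.0
print(abs(fourier_side(3, 0.7) - hermite_phi(3, 0.7)))  # tiny residual
```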
Remark A.3. To compare with Jing and Wang (2003), let u(·, ·) not depend on n, θ = E u(Z1, Z2), ϑ^2 = σℓ^2/n, and E|q12|^{2+δ} be bounded. Then E = o(n^{-1/2}) and χ_G(t) = exp(−t^2/2) [ 1 − ικ3 t^3/(6√n) ] + o(n^{-1/2}), giving

    G(x) = Φ(x) − φ(x) (1/(6√n)) [ E(ℓ1/σℓ)^3 + 6 Eℓ1ℓ2q12/σℓ^3 ] (x^2 − 1).
A.1 Proof of Theorem A.1

Let χ_F denote the characteristic function of F, and let g be the density of G. Using the well-known "smoothing inequality" (Bhattacharya and Rao, 1976; Hall, 1992), we write

    ρ(F, G) ≤ (1/π) [ ∫_{−υ}^{υ} | (χ_F(t) − χ_G(t))/t | dt + 24 sup_{x∈R} |g(x)| / υ ],    υ > 0,
where ρ is the Kolmogorov distance. We set υ = √n log n and split the range of integration into "low" and "high" frequencies. By the triangle inequality,

    ρ(F, G) ≲ I1 + I2 + I3 + I4 + 1/(√n log n),    (A.4)

where

    I1 := ∫_{|t|≤log n} | (χ_F(t) − χ_G(t))/t | dt,    I2 := ∫_{log n<|t|≤c√n} | χ_F(t)/t | dt,

    I3 := ∫_{c√n<|t|≤√n log n} | χ_F(t)/t | dt,    I4 := ∫_{|t|>log n} | χ_G(t)/t | dt.

Here c > 0 is a fixed constant to be specified later.
We now bound each of these integrals in turn. We use extensively the fact that

    | exp(ιx) − Σ_{j=0}^{2} (ιx)^j / j! | ≤ |x|^{2+δ},    ∀δ ∈ [0, 1].    (A.5)

Also, define ψ(t) := E exp(ιtℓ1) for t ∈ R; recall that σℓ is positive by Assumption (a).
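Inequality (A.5) interpolates between the standard Taylor bounds for the complex exponential: the remainder is at most min(|x|^3/6, |x|^2), which is ≤ |x|^{2+δ} both for |x| ≤ 1 (via |x|^3/6) and for |x| > 1 (via |x|^2). A quick numerical check of the displayed bound:

```python
import cmath

def taylor_remainder(x):
    """|exp(i x) - sum_{j=0}^{2} (i x)^j / j!|"""
    return abs(cmath.exp(1j * x) - (1 + 1j * x + (1j * x) ** 2 / 2))

# Verify the bound |remainder| <= |x|^(2 + delta) on a grid of x and delta.
xs = [k / 10 for k in range(-80, 81) if k != 0]
deltas = [0.0, 0.25, 0.5, 0.75, 1.0]
ok = all(taylor_remainder(x) <= abs(x) ** (2 + d) for x in xs for d in deltas)
print(ok)  # -> True
```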
Bound for I1

We start by decomposing χ_F(t) = E exp( ιt(Ū − θ)/ϑ ) = exp(ιtB) χ_{L+Q}(t), where χ_{L+Q}(t) := E[ exp(ιtL) exp(ιtQ) ]. Use (A.5) to expand the second exponential in χ_{L+Q}(t) to write

    χ_{L+Q}(t) = E exp(ιtL) [ 1 + ιtQ − (1/2)(tQ)^2 + O( (tQ)^{2+δ} ) ].    (A.6)
Since ℓ1, . . . , ℓn is an i.i.d. sequence (for any given n ≥ 2), the first term in (A.6) can be written as

    E exp(ιtL) = E exp( (ιt/(ϑn)) Σ_{i=1}^n ℓi ) = Π_{i=1}^n E exp( ιtℓi/(ϑn) ) = ψ^n( t/(ϑn) ).
For the second term in (A.6), we have

    E exp(ιtL) ιtQ = (ιt/ϑ) C(n,2)^{-1} Σ_{i<j} E[ Π_{k=1}^n exp( (ιt/(ϑn)) ℓk ) qij ]
                  = (ιt/ϑ) C(n,2)^{-1} Σ_{i<j} E[ Π_{k≠i,j} exp( (ιt/(ϑn)) ℓk ) exp( (ιt/(ϑn))(ℓi + ℓj) ) qij ]
                  = (ιt/ϑ) C(n,2)^{-1} Σ_{i<j} Π_{k≠i,j} E exp( (ιt/(ϑn)) ℓk ) E[ exp( (ιt/(ϑn))(ℓi + ℓj) ) qij ]
                  = (ιt/ϑ) ψ^{n−2}( t/(ϑn) ) E[ exp( (ιt/(ϑn))(ℓ1 + ℓ2) ) q12 ].
Similarly, for the third term in (A.6), we use

    E exp(ιtL)(ιtQ)^2 = ( (ιt/ϑ) C(n,2)^{-1} )^2 × [ Σ_{i<j} E( Π_{k=1}^n exp((ιt/(ϑn))ℓk) qij^2 )
                          + Σ_{i<j=k<l} E( Π_{m≠i,j,l} exp((ιt/(ϑn))ℓm) qij qjl )
                          + Σ_{i<j<k<l} E( Π_{m≠i,j,k,l} exp((ιt/(ϑn))ℓm) qij qkl ) ]

    = ( (ιt/ϑ) C(n,2)^{-1} )^2 × [ ψ^{n−2}( t/(ϑn) ) C(n,2) E( exp((ιt/(ϑn))(ℓ1 + ℓ2)) q12^2 )
                          + ψ^{n−3}( t/(ϑn) ) C(n,3) E( exp((ιt/(ϑn))(ℓ1 + ℓ2 + ℓ3)) q12 q23 )
                          + ψ^{n−4}( t/(ϑn) ) C(n,4) ( E exp((ιt/(ϑn))(ℓ1 + ℓ2)) q12 )^2 ].
For the last term in (A.6), we have

    | E exp(ιtL)(tQ)^{2+δ} | ≤ E|tQ|^{2+δ} = ( (|t|/ϑ) C(n,2)^{-1} )^{2+δ} E| Σ_{i<j} qij |^{2+δ} = O( (|t|/(ϑn^2))^{2+δ} Π_{2+δ}(n) ).
Using the last four displays, we simplify (A.6) to

    χ_{L+Q}(t) = ψ^n( t/(ϑn) )
      + ψ^{n−2}( t/(ϑn) ) [ (ιt/ϑ) E exp((ιt/(ϑn))(ℓ1 + ℓ2)) q12 + ((ιt)^2/(2ϑ^2)) C(n,2)^{-1} E exp((ιt/(ϑn))(ℓ1 + ℓ2)) q12^2 ]
      + (1/2) ( (ιt/ϑ) C(n,2)^{-1} )^2 ψ^{n−3}( t/(ϑn) ) C(n,3) E exp((ιt/(ϑn))(ℓ1 + ℓ2 + ℓ3)) q13 q23
      + (1/2) ( (ιt/ϑ) C(n,2)^{-1} )^2 ψ^{n−4}( t/(ϑn) ) C(n,4) ( E exp((ιt/(ϑn))(ℓ1 + ℓ2)) q12 )^2
      + O( (|t|/(ϑn^2))^{2+δ} Π_{2+δ}(n) ).    (A.7)
We now expand the exponentials inside the expectations and collect terms. For notational brevity, write a := ιt/(ϑn). For the first one, we have

    E exp(a(ℓ1 + ℓ2)) q12 = E (exp(aℓ1) − 1)(exp(aℓ2) − 1) q12
      = E[ (exp(aℓ1) − 1 − aℓ1)(exp(aℓ2) − 1 − aℓ2) q12 + aℓ1 (exp(aℓ2) − 1 − aℓ2) q12 + aℓ2 (exp(aℓ1) − 1 − aℓ1) q12 + a^2 ℓ1ℓ2 q12 ]
      = a^2 Eℓ1ℓ2q12 + O( |a|^3 E|ℓ1^2 ℓ2 q12| + |a|^4 E|ℓ1^2 ℓ2^2 q12| ),

for the second term we have

    E exp(a(ℓ1 + ℓ2)) q12^2 = σq^2 + E (exp(a(ℓ1 + ℓ2)) − 1) q12^2 = σq^2 + O( |a| E|ℓ1 q12^2| ),

and for the third term we have

    E Π_{i=1}^{3} exp(aℓi) q13 q23 = E Π_{i=1}^{3} (exp(aℓi) − 1) q13 q23 = O( |a|^3 E|ℓ1ℓ2ℓ3 q13 q23| ).
Plugging the above expansions back into (A.7) yields

    χ_{L+Q}(t) = ψ^n( t/(ϑn) ) + ψ^{n−2}( t/(ϑn) ) [ ((ιt)^3/(ϑ^3 n^2)) Eℓ1ℓ2q12 + ((ιt)^2/(2ϑ^2)) C(n,2)^{-1} σq^2 ]
      + ψ^{n−4}( t/(ϑn) ) (1/2) ((ιt)^6/(ϑ^6 n^4)) C(n,2)^{-2} C(n,4) (Eℓ1ℓ2q12)^2
      + O( ψ^{n−2}( t/(ϑn) ) [ (t^4/(ϑ^4 n^3)) E|ℓ1^2 ℓ2 q12| + (|t|^5/(ϑ^5 n^4)) E|ℓ1^2 ℓ2^2 q12| + (|t|^3/(ϑ^2 n^3)) E|ℓ1 q12^2| ] )
      + O( ψ^{n−3}( t/(ϑn) ) (|t|^5/(ϑ^5 n^4)) E|ℓ1ℓ2ℓ3 q13 q23| )
      + O( ψ^{n−4}( t/(ϑn) ) [ (|t|^7/(ϑ^7 n^5)) (Eℓ1ℓ2q12)(E|ℓ1^2 ℓ2 q12|) + (|t|^8/(ϑ^8 n^6)) (Eℓ1ℓ2q12)(E|ℓ1^2 ℓ2^2 q12|) ] )
      + O( (|t|/(ϑn^2))^{2+δ} Π_{2+δ}(n) ).    (A.8)
From the Edgeworth expansion theory for sums of i.i.d. random variables (Bhattacharya and Rao, 1976; Hall, 1992), we have, for |t| ≤ δ∗√n with some small enough δ∗ > 0,

    ψ^n( t/(σℓ√n) ) = exp(−t^2/2) [ 1 − (ιt^3/(6√n)) E(ℓ1/σℓ)^3 ] + o( ((|t|^3 + t^6)/√n) exp(−t^2/4) ).
Let αk := σℓ√(n−k)/(ϑn) for k ∈ {0, 2, 3, 4}. Since αk ≍ 1 by assumption, where ≍ denotes proportionality up to fixed finite positive constants, we obtain

    ψ^{n−k}( t/(ϑn) ) = ψ^{n−k}( αk t/(σℓ√(n−k)) ) = exp( −(αk t)^2/2 ) [ 1 − (ι(αk t)^3/(6√(n−k))) E(ℓ1/σℓ)^3 ] + o( ((|t|^3 + t^6)/√n) exp(−(αk t)^2/4) ).
A first-order Taylor expansion yields, for a fixed polynomial p(t),

    exp(−(αk t)^2/2) = exp(−t^2/2) [ 1 − (αk^2 − 1) t^2/2 + O( p(t) (αk^2 − 1)^2 ) ],
and plugging this back into the previous expression, we have

    ψ^{n−k}( t/(ϑn) ) = exp(−t^2/2) [ 1 − (αk^2 − 1) t^2/2 − (ι(αk t)^3/(6√(n−k))) E(ℓ1/σℓ)^3 ] + O( (αk^2 − 1)^2 p(t) exp(−t^2/2) ) + o( ((|t|^3 + t^6)/√n) exp(−(αk t)^2/4) ).
Use the fact that αk^2 = α0^2 (1 − k/n) = (σℓ/(ϑ√n))^2 + O(n^{-1}) to conclude that

    ψ^{n−k}( t/(ϑn) ) = exp(−t^2/2) [ 1 − (σℓ^2/(ϑ^2 n) − 1) t^2/2 − (ιt^3/(6ϑ^3 n^2)) Eℓ1^3 ] + O( (σℓ^2/(ϑ^2 n) − 1)^2 p(t) exp(−t^2/2) ) + o( ((|t|^3 + t^6)/√n) exp(−t^2/4) ),    (A.9)

for |t| ≤ δ∗√n.
Combine (A.8) and (A.9) to conclude that, for |t| ≤ δ∗√n,

    χ_{L+Q}(t) = exp(−t^2/2) [ 1 − (σℓ^2/(ϑ^2 n) − 1) t^2/2 − (ιt^3/(6ϑ^3 n^2)) Eℓ1^3 ]
        × [ 1 + ((ιt)^2/(2ϑ^2)) C(n,2)^{-1} σq^2 + ((ιt)^3/(ϑ^3 n^2)) Eℓ1ℓ2q12 + (1/2) ((ιt)^6/(ϑ^6 n^4)) C(n,2)^{-2} C(n,4) (Eℓ1ℓ2q12)^2 ]
      + O( exp(−t^2/2) [ 1 + (σℓ^2/(ϑ^2 n) − 1) t^2 + (σℓ^2/(ϑ^2 n) − 1)^2 p(|t|) + |t|^3/√n ] R(t) )
      + o( exp(−t^2/4) ((|t|^3 + t^6)/√n) R(t) )
      + O( (|t|/(ϑn^2))^{2+δ} Π_{2+δ}(n) ),    (A.10)
where

    R(t) := (t^4/(ϑ^4 n^3)) E|ℓ1^2 ℓ2 q12| + (|t|^5/(ϑ^5 n^4)) E|ℓ1^2 ℓ2^2 q12| + (|t|^3/(ϑ^2 n^3)) E|ℓ1 q12^2| + (|t|^5/(ϑ^5 n^4)) E|ℓ1ℓ2ℓ3 q13 q23|
       + (|t|^7/(ϑ^7 n^5)) (Eℓ1ℓ2q12)(E|ℓ1^2 ℓ2 q12|) + (|t|^8/(ϑ^8 n^6)) (Eℓ1ℓ2q12)(E|ℓ1^2 ℓ2^2 q12|).
After some rearrangement, the first term in (A.10) becomes

    χ̃_{L+Q}(t) := exp(−t^2/2) P(t) = exp(−t^2/2) [ 1 + Σ_{j=2}^{9} (ιt)^j γj ],
where

    P(t) := 1 + ((ιt)^2/2) (σ^2/ϑ^2 − 1) + ((ιt)^3/(6ϑ^3 n^2)) ( Eℓ1^3 + 6Eℓ1ℓ2q12 ) + ((ιt)^4/(4ϑ^2)) (σ^2/ϑ^2 − 1) C(n,2)^{-1} σq^2
      + ((ιt)^5/(12ϑ^5 n^2)) [ C(n,2)^{-1} (Eℓ1^3) σq^2 + 6ϑ^2 (σℓ^2/(ϑ^2 n) − 1) Eℓ1ℓ2q12 ]
      + ((ιt)^6/(6ϑ^6 n^4)) [ (Eℓ1^3) Eℓ1ℓ2q12 + 12 C(n,2)^{-2} C(n,4) (Eℓ1ℓ2q12)^2 ]
      + (1/4) ((ιt)^8/(ϑ^6 n^4)) (σℓ^2/(ϑ^2 n) − 1) C(n,2)^{-2} C(n,4) (Eℓ1ℓ2q12)^2
      + (1/12) ((ιt)^9/(ϑ^9 n^6)) C(n,2)^{-2} C(n,4) Eℓ1^3 (Eℓ1ℓ2q12)^2.
Since χ̃_{L+Q}(t) = exp(−ιtB) χ_G(t), we have, under Assumptions (a) and (b),

    | χ_F(t) − χ_G(t) | = O( exp(−t^2/4) R(t) + (|t|/(ϑn^2))^{2+δ} Π_{2+δ}(n) ).
Therefore,

    I1 = O( ∫_{|t|≤log n} |t|^{-1} exp(−t^2/4) R(t) dt + ( Π_{2+δ}(n)/(ϑn^2)^{2+δ} ) ∫_{|t|≤log n} |t|^{1+δ} dt ) = O( R(1) + ( log n/(ϑn^2) )^{2+δ} Π_{2+δ}(n) ).
Bound for I2

For 1 ≤ m < n, define Qm := ϑ^{-1} C(n,2)^{-1} Σ_{i=1}^{m} Σ_{j=i+1}^{n} qij. Using (A.5), we can write

    | χ_F(t) | = | χ_{L+Q}(t) | ≤ | E exp(ιt(L + Q − Qm)) Σ_{k=0}^{2} (ιtQm)^k / k! | + |t|^{2+δ} E|Qm|^{2+δ}.
Exploiting the fact that Q − Qm is a function of Z_{m+1}, . . . , Zn only, we have

    | E exp(ιt(L + Q − Qm)) | ≤ | ψ( t/(ϑn) ) |^m.
For the second term,

    | E exp(ιt(L + Q − Qm)) Qm | = ϑ^{-1} C(n,2)^{-1} | Σ_{i=1}^{m} Σ_{j=i+1}^{n} E exp(ιt(L + Q − Qm)) qij | ≲ (1/(ϑn^2)) | ψ( t/(ϑn) ) |^{m−2} m n E|q12|.
Similarly, using the fact that

    ( ϑ C(n,2) Qm )^2 = Σ_{i=1}^{m} Σ_{j=i+1}^{n} qij^2 + Σ_{i=1}^{m} Σ_{j=i+1}^{n} Σ_{k≤m, k≠i,j} qij qjk + Σ_{i=1}^{m} Σ_{j=i+1}^{n} Σ_{k≤m, k≠i,j} Σ_{l≥k+1, l≠i,j} qij qkl,

we conclude, for k ∈ {0, 1, 2},

    | E exp(ιt(L + Q − Qm)) Qm^k | ≲ | ψ( t/(ϑn) ) |^{m−2k} ( m n ϑ^{-1} C(n,2)^{-1} )^k E|q12|^k.
Finally, using the fact that ϑ = O(σℓ/√n) by Assumption (b), and combining the last displays,

    | χ_F(t) | ≲ Σ_{k=0}^{2} ( |t|m/√n )^k | ψ( t/(ϑn) ) |^{m−2k} E|q12/σℓ|^k + |t|^{2+δ} E|Qm|^{2+δ},    (A.11)

for 1 ≤ m < n and δ ∈ [0, 1].
By the triangle inequality followed by (A.5), we have

    | ψ( t/(ϑn) ) | − | 1 − t^2 σℓ^2/(2(ϑn)^2) | ≤ | ψ( t/(ϑn) ) − ( 1 − t^2 σℓ^2/(2(ϑn)^2) ) | ≤ (1/6) (|t|^3/(ϑn)^3) E|ℓ1|^3.
For |t| ≤ √2 ϑn/σℓ we have | 1 − t^2 σℓ^2/(2(ϑn)^2) | = 1 − t^2 σℓ^2/(2(ϑn)^2), hence

    | ψ( t/(ϑn) ) | ≤ 1 − t^2 σℓ^2/(2(ϑn)^2) + (1/6) (|t|^3/(ϑn)^3) E|ℓ1|^3,    for |t| ≤ √2 ϑn/σℓ.
Assumption (b) together with (A.2) implies that σℓ/(√n ϑ) → 1 as n → ∞. Then we can find N1 ∈ N such that √(5/6) ≤ σℓ/(√n ϑ) ≤ (6/5)^{1/3} for n ≥ N1. Also, Assumption (a) implies the existence of a constant C > 0 and N2 ∈ N such that E|ℓ1/σℓ|^3 ≤ C for n ≥ N2. Then, for |t| ≤ c√n, where c := ( √2/(6/5)^{1/3} ) ∧ ( 5/(12C) ) and n ≥ N0 := N1 ∨ N2, we have

    | ψ( t/(ϑn) ) | ≤ 1 − (t^2/n) [ (1/2) (σℓ/(ϑ√n))^2 − ( |t| E|ℓ1/σℓ|^3/(6√n) ) (σℓ/(ϑ√n))^3 ] ≤ 1 − t^2/(3n) ≤ exp(−t^2/(3n)).    (A.12)
For log n < |t| ≤ c√n, set m = [15 n log n/t^2] + 1 = O(n); then plugging into (A.12) gives | ψ( t/(ϑn) ) |^{m−2k} ≲ exp(−t^2 m/(3n)) ≲ n^{-5}. Combining this last bound with (A.11), we obtain

    | χ_F(t) | ≲ Σ_{k=0}^{2} ( |t|^k / n^{5−k} ) E|q12/σℓ|^k + |t|^{2+δ} E|Qm|^{2+δ},    (A.13)
for |t| ≤ c√n and n ≥ N0. Then,

    I2 ≲ Σ_{k=0}^{2} n^{-(5−k−k/2)} E|q12|^k/σℓ^k + ( (√n log n)/n )^{2+δ} ( σq/σℓ )^{2+δ} log n ≲ (1/n) ( σq^2/(n σℓ^2) ) + log n ( (log n) σq^2/(n σℓ^2) )^{(2+δ)/2}.
Therefore, since σq^2/(n σℓ^2) = o(1) by Assumption (b), we conclude

    I2 = o(n^{-1}) + O( ( (log n)^{(4+δ)/(2+δ)} σq^2/(n σℓ^2) )^{(2+δ)/2} ).
Bound for I3 and I4

Under Assumption (c), for sufficiently large n, we may find a b > 0 such that, for |t| > c√n,

    | ψ( t/(ϑn) ) | ≤ 1 − b < exp(−b),
where c > 0 is defined just before (A.12). Set m = [4 log n/b] + 1; then nm ≲ n log n and | ψ( t/(ϑn) ) |^{m−s} ≲ n^{-4} for sufficiently large n and s ∈ {1, 3, 4, 5}. Using these upper bounds in (A.11), we conclude that

    | χ_F(t) | ≲ n^{-4} [ 1 + |t| (log n) E|q12/σℓ| + t^2 (log n)^2 σq^2/σℓ^2 ] + |t|^{2+δ} (n log n)^{1+δ/2} E|q12|^{2+δ},    (A.14)
for sufficiently large n and |t| > c√n. Then,

    I3 = o(n^{-1/2}) + O( (n log n)^{1+δ/2} E|q12|^{2+δ} ∫_{c√n≤|t|≤√n log n} |t|^{1+δ} dt ) = o(n^{-1/2}) + O( ( log n/n )^{2+δ} Π_{2+δ}(log n) ).
Finally,

    I4 = ∫_{|t|>log n} |t|^{-1} exp(−t^2/2) | 1 + Σ_{j=2}^{9} (ιt)^j γj | dt ≤ C ∫_{t>log n} t^{-1} exp(−t^2/2) dt + Σ_{j=2}^{9} |γj| ∫_{t>log n} t^{j−1} exp(−t^2/2) dt,

where the first integral is o(n^{-1}) and the second is o(1). Therefore,

    I4 = o( n^{-1} + Σ_{j=2}^{9} |γj| ).

The proof is complete.
A.2 Auxiliary Lemmas

Lemma A.1. Let n ≥ 2, 1 ≤ l ≤ m < n, and p ≥ 2. Then

    E| Σ_{i=l}^{m} Σ_{j=i+1}^{n} qij |^p ≤ Cp (n − l)^{p/2} max_{l<j≤n} E| Σ_{i=l}^{(m∧j)−1} qij |^p ≤ Kp [ (n − l)(m − l) ]^{p/2} E|q12|^p,

where Cp := ( 8(p − 1)(1 ∨ 2^{p−3}) )^p and Kp is a constant depending only on p.
Proof. The double summation on the left-hand side can be written as Σ_{j=l+1}^{n} ξj, where ξj := Σ_{i=l}^{(m∧j)−1} qij. Notice that {ξj, Fj} is a martingale difference sequence, where Fj is the σ-algebra generated by {Z1, . . . , Zj} for j ≥ 1 and F0 is trivial. Then, by Dharmadhikari et al. (1968), followed by a trivial bound,

    E| Σ_{i=l}^{m} Σ_{j=i+1}^{n} qij |^p = E| Σ_{j=l+1}^{n} ξj |^p ≤ Cp (n − l)^{p/2−1} Σ_{j=l+1}^{n} E|ξj|^p ≤ Cp (n − l)^{p/2} max_{l<j≤n} E|ξj|^p.
Lemma A.2. For p ∈ [2, ∞) there exists a constant Cp depending only on p such that, for S ⊆ {(i, j) : 1 ≤ i < j ≤ n},

    E| Σ_{(i,j)∈S} qij |^p ≤ Cp [ |S|^{p/2} ( Eq12^2 )^{p/2} ∨ s E( (E(q12^2 | Z1))^{p/2} ) ∨ |S| E|q12|^p ],

where |S| denotes the cardinality of the set S, s := si ∨ sj with si := Σ_{(i,·)∈S} ( Σ_{(·,j)∈S} 1 )^{p/2} and sj := Σ_{(·,j)∈S} ( Σ_{(i,·)∈S} 1 )^{p/2}.
Proof. Combining Proposition 2.1 with expression (2.18) in Giné et al. (2000), we obtain the inequality above for the decoupled version of qij, defined as q̃ij := q(Zi^{(1)}, Zj^{(2)}), where {Zi^{(j)} : 1 ≤ i ≤ n, 1 ≤ j ≤ 2} are i.i.d. Finally, we can apply the decoupling inequalities in de la Peña and Montgomery-Smith (1995) to obtain the result, at the expense of increasing the constant without altering the order of the upper bound. For further details, see Section 2.5 in Giné et al. (2000).
B Proof of Theorem 1 (Standardized Edgeworth Expansion)

We apply Corollary A.1 with u(Zi, Zj) = ν′Uij in (2.1) and δ = 1. We assume throughout that Assumptions 1 and 2 hold. Condition (a) in Theorem A.1 is verified by direct calculations, as in Cattaneo et al. (2010, 2014a,b). Condition (b) in Theorem A.1 is verified because (A.2) gives σ^2 = (1/n) V[ν′Li] + C(n,2)^{-1} V[ν′Qij], which implies

    σℓ^2 = ν′Σν + O(h^P)    and    σq^2 = (1/h^{d+2}) [ ν′∆ν + h^2 ν′Vν ] + o(h^{-d}),

with V given in Cattaneo et al. (2010). These results imply σq^2 = o(n σℓ^2) if (and only if) n h^{d+2} → ∞. Therefore, we take ϑ ≍ σ ≍ 1/√n. Condition (c) in Theorem A.1 holds by assumption.
+ The additional condition B → 0 in Corollary A.1 holds if (and only if, when β ̸= 0) √nhP → 0.
2440
+ To see this, using integration by parts, E[U12|Z1] =
2441
+
2442
+ Rd ν′ ˙e(X1 + uh)K(u)du − Y1
2443
+
2444
+ Rd ν′ ˙f(X1 +
2445
+ uh)K(u)du. Then, repeated Taylor series expansions and integration by parts give E[u(Z1, Z2)|Z1] =
2446
+ δ(Z1) + hP (−1)P �
2447
+ [k]=P
2448
+ µk
2449
+ k! δ(1+k)(z) + o(hP ).
2450
+ In turn, this result implies that E[u(Z1, Z2)] =
2451
+ θν + hP β + o(hP ). As a consequence, B = (E[�θν] − θν)/ϑ = hP β/ϑ + o(√nhP ). See Cattaneo et al.
2452
+ (2010, 2014a,b) for details.
2453
+ Law of iterated expectations, integration by parts, and Taylor series expansions give
2454
+ E[ℓ3
2455
+ 1] = κ1 + O(hP ).
2456
+ Proceeding analogously, because E[ℓ1ℓ2q12] = E[ℓ2E[ℓ1q12|Z2]] = E[ℓ2ℓ1U12] and E[ℓ2ℓ1U12] =
2457
+ 4E[E[U12|Z1]E[U12|Z2]U12] − 8E[U12]E[E[U12|Z1]2] + 4E[U12]3, we have E[E[U12|Z1]E[U12|Z2]U12] =
2458
+ 23
2459
+
2460
+ E[δ(Z) ˙η(Z)] + O(hP ), E[E[U12|Z1]2] = E[δ(Z1)2] + O(hP ), and E[U12] = θ + O(hP ) and E[U12]3 =
2461
+ θ3 + O(hP ). Collecting these results, we verify
2462
+ E[ℓ1ℓ2q12] = κ2 + O(hP ).
2463
+ These results imply γ3 = O(n−2), γ4 = O(n−3h−d−2), γ5 = O(n−2), γ6 = O(n−1), γ8 = O(n−3),
2464
+ γ9 = O(n−7/2).
2465
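The variance formula from (A.2) used above, σ² = n⁻¹V[ν′Li] + (n choose 2)⁻¹V[ν′Qij], is an exact identity for a second-order U-statistic, since the linear and degenerate Hoeffding components are uncorrelated. A sketch verifying it by exhaustive enumeration on a toy discrete distribution (scalar kernel and names hypothetical):

```python
import itertools

vals = [0, 1, 2]                      # Z uniform on {0, 1, 2}
def u(a, b):                          # hypothetical symmetric kernel u(Z1, Z2)
    return a * b + a + b

theta = sum(u(a, b) for a in vals for b in vals) / 9
cond = {a: sum(u(a, b) for b in vals) / 3 for a in vals}       # E[u(a, Z)]
var_ell = sum((2 * (cond[a] - theta)) ** 2 for a in vals) / 3  # V[l_1]
var_q = sum((u(a, b) - cond[a] - cond[b] + theta) ** 2
            for a in vals for b in vals) / 9                   # V[q_12]

n = 4
pairs = list(itertools.combinations(range(n), 2))
m1 = m2 = 0.0
for z in itertools.product(vals, repeat=n):   # exact distribution of theta_hat
    th = sum(u(z[i], z[j]) for i, j in pairs) / len(pairs)
    m1 += th
    m2 += th * th
m1 /= 3 ** n
m2 /= 3 ** n
var_theta_hat = m2 - m1 * m1

# sigma^2 = n^{-1} V[l_i] + C(n,2)^{-1} V[q_ij], exactly
assert abs(var_theta_hat - (var_ell / n + var_q / len(pairs))) < 1e-9
```

The two variance contributions scale as n⁻¹ and n⁻², which is what makes the relative size of σ²q against nσ²ℓ (and hence the nh^{d+2} → ∞ condition) the binding consideration in the kernel setting above.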
+ It remains to bound E. First, by standard results, E|q12|³ = O(h^{−2d−3}), so
+ \[
+ \Pi_3(m) = O\Bigg( \frac{(mn)^{3/2}}{h^{(3/2)(d+2)}} \vee \frac{m n^{3/2}}{h^{(3/2)(d+2)}} \vee \frac{mn}{h^{2d+3}} \Bigg) = O\Bigg( \Big( \frac{mn}{h^{d+2}} \Big)^{3/2} \Bigg).
+ \]
+ Second, using the results in Cattaneo et al. (2014b, Supplemental Appendix), we have
+ E|ℓ1²ℓ2q12| = O(h^{−2d/3−1}), E|ℓ1²ℓ2²q12| = O(h^{−2d/3−1}), E|ℓ1q12²| = O(h^{−d−2}), and
+ E|ℓ1ℓ2ℓ3q13q23| = O(h^{−4d/3−2}). Thus, collecting all the bounds, we verify
+ \[
+ E = O\Bigg( \bigg( \frac{(\log n)^3}{nh^{d+2}} \bigg)^{3/2} + \frac{h^{d/3+1}}{nh^{d+2}} + \bigg( \frac{h^{d/9+2/3}}{nh^{d+2}} \bigg)^{3/2} \Bigg) = o\bigg( \frac{1}{nh^{d+2}} \bigg).
+ \]
+ This completes the proof.
+ C    Proof of Theorem 2 (Studentized Edgeworth Expansion)
+ For any estimated scale ϑ̂ and nonrandom centering ϑ, we have
+ \[
+ \frac{\hat\theta_\nu - \theta_\nu}{\hat\vartheta}
+ = \frac{\hat\theta_\nu - \theta_\nu}{\vartheta}
+ \Bigg( 1 - \frac{\hat\vartheta^2 - \vartheta^2}{2\vartheta^2}
+ + \frac{\hat\vartheta + 2\vartheta}{2\vartheta^2 \hat\vartheta} \cdot \frac{(\hat\vartheta^2 - \vartheta^2)^2}{(\hat\vartheta + \vartheta)^2} \Bigg).
+ \]
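The expansion of 1/ϑ̂ around ϑ used here, namely 1/ϑ̂ = (1/ϑ)[1 − (ϑ̂² − ϑ²)/(2ϑ²) + (ϑ̂ + 2ϑ)(ϑ̂² − ϑ²)²/(2ϑ²ϑ̂(ϑ̂ + ϑ)²)], is an exact algebraic identity with no remainder, which is why the cubic-type term can be shunted into a probability bound (R1 below). A sketch verifying it in exact rational arithmetic (variable names hypothetical):

```python
from fractions import Fraction as F

def rhs(a, b):
    """Right-hand side of the expansion of 1/a around b
    (a = estimated scale, b = nonrandom centering), exactly."""
    d = a * a - b * b
    return (1 - d / (2 * b * b)
            + (a + 2 * b) * d * d / (2 * b * b * a * (a + b) ** 2)) / b

for a, b in [(F(7, 5), F(3, 2)), (F(1, 3), F(2, 7)), (F(5, 2), F(5, 2))]:
    assert rhs(a, b) == 1 / a
```

Because the identity is exact, the Edgeworth analysis only needs the first two terms; the third term is quadratic in ϑ̂² − ϑ², which the moment bounds on the variance estimators make negligible.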
+ Recall that θ̂ν is a second-order U-statistic satisfying the H-decomposition (θ̂ν − θν)/ϑ = B + L̄/ϑ +
+ Q̄/ϑ. Using standard results for Edgeworth expansions (Bhattacharya and Rao, 1976; Hall, 1992),
+ \[
+ \sup_{t\in\mathbb{R}} \Bigg| P\bigg( \frac{\hat\theta_\nu - \theta_\nu}{\hat\vartheta} \le t \bigg) - G(t) \Bigg|
+ \le E + R_1 + R_2 + R_3 + O\Big( \frac{r_n}{\log n} \Big),
+ \]
+ where
+ \[
+ E := \sup_{t\in\mathbb{R}} \Bigg| P\Bigg( \Big( 1 - \frac{\hat\vartheta^2 - \vartheta^2}{2\vartheta^2} \Big) \big( \nu'\bar L/\vartheta + \nu'\bar Q/\vartheta \big) + B \le t \Bigg) - G(t) \Bigg|,
+ \]
+ B = ν′(E[U12] − θ)/ϑ, G denotes a distribution function later to be set to either GAL or GSB as
+ appropriate, and
+ \[
+ R_1 := P\Bigg( \bigg| \frac{\hat\vartheta + 2\vartheta}{2\vartheta^2 \hat\vartheta} \bigg| \frac{(\hat\vartheta^2 - \vartheta^2)^2}{(\hat\vartheta + \vartheta)^2} > C \frac{r_n}{(\log n)^2} \Bigg),
+ \qquad
+ R_2 := P\Bigg( \bigg| \frac{\hat\theta_\nu - \theta_\nu}{\vartheta} \bigg| > C \log n \Bigg),
+ \]
+ \[
+ R_3 := P\Bigg( \bigg| \frac{\hat\vartheta - \vartheta}{\vartheta}\, B \bigg| > C \frac{\sqrt{n} h^P}{\log n} \Bigg),
+ \]
+ with C denoting a generic constant, which can take different values in different places. The term
+ E will give the Edgeworth expansion upon setting ϑ̂ and ϑ appropriately, while the terms R1–R3
+ capture higher-order remainders.
+ Variance Estimators
+ The estimators ϑ̂²AL and ϑ̂²SB are linear combinations of U-statistics as follows:
+ \[
+ \hat\vartheta^2_{\mathrm{AL}} = \frac{1}{n}\nu'\hat\Sigma\nu = 2\binom{n}{2}^{-1}\bar W_1 + \frac{4}{n}\,\frac{n-2}{n-1}\,\bar W_2 - \frac{4}{n}\hat\theta_\nu^2
+ \qquad\text{and}\qquad
+ \binom{n}{2}^{-1} h^{-d-2}\,\nu'\hat\Delta\nu = \binom{n}{2}^{-1}\bar W_1,
+ \]
+ with
+ \[
+ \hat\theta_\nu = \binom{n}{2}^{-1}\sum_{i<j}(\nu' U_{ij}), \qquad
+ \bar W_1 = \binom{n}{2}^{-1}\sum_{i<j}(\nu' U_{ij})^2, \qquad
+ \bar W_2 = \binom{n}{3}^{-1}\sum_{i<j<k} W_{ijk},
+ \]
+ \[
+ W_{ijk} = \frac{(\nu' U_{ij})(\nu' U_{ik}) + (\nu' U_{ij})(\nu' U_{jk}) + (\nu' U_{ik})(\nu' U_{jk})}{3}.
+ \]
+ See Lemmas 3.1.1 and 3.1.2 in the Supplemental Appendix of Cattaneo et al. (2014b) for a proof.
+ Thus, for c ∈ R, we consider the following generic (debiased when c = 1) Studentization:
+ \[
+ \hat\vartheta^2_c := (2-c)\binom{n}{2}^{-1}\bar W_1 + \frac{4}{n}\big( 1 + o(n^{-1}) \big)\bar W_2 - \frac{4}{n}\hat\theta_\nu^2 .
+ \]
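Assuming a scalar stand-in for ν′Uij (a toy symmetric kernel; all names hypothetical), the combination ϑ̂²AL = 2(n choose 2)⁻¹W̄1 + (4/n)((n−2)/(n−1))W̄2 − (4/n)θ̂²ν coincides exactly with a jackknife-type sum of squared leave-one-out means, which is the form exploited again in Appendix C.1. A sketch checking this numerically:

```python
import itertools, math, random

random.seed(1)
n = 8
z = [random.gauss(0, 1) for _ in range(n)]
# hypothetical scalar stand-in for nu'U_ij (symmetric in i, j)
u = [[z[i] * z[j] + 0.5 * (z[i] + z[j]) for j in range(n)] for i in range(n)]

pairs = list(itertools.combinations(range(n), 2))
C2, C3 = math.comb(n, 2), math.comb(n, 3)
theta = sum(u[i][j] for i, j in pairs) / C2
W1 = sum(u[i][j] ** 2 for i, j in pairs) / C2
W2 = sum((u[i][j] * u[i][k] + u[i][j] * u[j][k] + u[i][k] * u[j][k]) / 3
         for i, j, k in itertools.combinations(range(n), 3)) / C3

# AL variance estimator as the linear combination of U-statistics...
var_AL = 2 * W1 / C2 + 4 / n * (n - 2) / (n - 1) * W2 - 4 / n * theta ** 2
# ...and as a jackknife-style sum over squared leave-one-out means
V = [sum(u[i][j] for j in range(n) if j != i) / (n - 1) for i in range(n)]
var_jack = 4 / n ** 2 * sum((v - theta) ** 2 for v in V)
assert abs(var_AL - var_jack) < 1e-10
```

The equivalence is an algebraic identity in the data (not an asymptotic statement), so the assertion holds for any sample; it is what connects the W̄1, W̄2 representation to the S²N statistic of Callaert and Veraverbeke (1981) discussed below.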
+ In particular, ϑ̂²AL = ϑ̂²0 and ϑ̂²SB = ϑ̂²1. The centering considered in the literature for ϑ̂²c is
+ \[
+ \vartheta^2_c := c\binom{n}{2}^{-1} E[\bar W_1] + \frac{4}{n} E[\bar W_2] - \frac{4}{n}\big( E[\hat\theta_\nu] \big)^2,
+ \]
+ which implies that ϑ²0 = ϑ²AL and ϑ²1 = ϑ²SB + o(n⁻¹).
+ The underlying U-statistics have the following mean square convergence rates:
+ \[
+ E[(\hat\theta_\nu - E[\hat\theta_\nu])^2] = O(n^{-1} + n^{-2}h^{-d-2}),
+ \]
+ \[
+ E[(\bar W_1 - E[(\nu' U_{12})^2])^2] = O(n^{-1}h^{-2d-4} + n^{-2}h^{-3d-4}),
+ \]
+ \[
+ E[(\bar W_2 - E[(E[\nu' U_{12}|Z_1])^2])^2] = O(n^{-1} + n^{-2}h^{-d-4} + n^{-3}h^{-2d-4}).
+ \]
+ The proof is given in Cattaneo et al. (2014b, Supplemental Appendix): see Lemma 3.1.3 for the first
+ two results, and Lemma 3.1.4 for the third result. (Note that while the statement of those lemmas
+ gives convergence rates in probability, the proof establishes mean square convergence rates.) Therefore,
+ \[
+ E\big[ (\hat\vartheta^2_c - \vartheta^2_c)^2 \big] \le C n^{-4} E[(\bar W_1 - E[(\nu' U_{12})^2])^2] + C n^{-2} E[(\bar W_2 - E[(E[\nu' U_{12}|Z_1])^2])^2]
+ + C n^{-2} E[(\bar U - E[\nu' U_{12}])^2] = O(n^{-3} + n^{-4}h^{-d-4}).
+ \]
+ Long calculations similar to those in Cattaneo et al. (2014b, Supplemental Appendix) show that
+ \[
+ E[(\hat\theta_\nu - E[\hat\theta_\nu])^4] = O(n^{-2} + n^{-4}h^{-d-4} + n^{-5}h^{-2d-4} + n^{-6}h^{-3d-4}),
+ \]
+ \[
+ E[(\bar W_1 - E[(\nu' U_{12})^2])^4] = O(n^{-2}h^{-4d-8} + n^{-4}h^{-6d-8} + n^{-5}h^{-6d-8} + n^{-6}h^{-7d-8}),
+ \]
+ \[
+ E[(\bar W_2 - E[(E[\nu' U_{12}|Z_1])^2])^4] = O(n^{-2} + n^{-4}h^{-d-8} + n^{-5}h^{-2d-8} + n^{-6}h^{-3d-8}),
+ \]
+ which gives
+ \[
+ E\big[ (\hat\vartheta^2_c - \vartheta^2_c)^4 \big] \le C n^{-8} E[(\bar W_1 - E[(\nu' U_{12})^2])^4] + C n^{-4} E[(\bar W_2 - E[(E[\nu' U_{12}|Z_1])^2])^4]
+ + C n^{-4} E[(\bar U - E[\nu' U_{12}])^4] = O(n^{-6} + n^{-8}h^{-d-8}).
+ \]
+ Consequently, for the remainder of the proof we set ϑ̂² = ϑ̂²c and ϑ² = ϑ²c.
+ Bounds for R1–R3
+ For n large enough, and using Markov's inequality,
+ \[
+ R_1 \le P\Bigg( (\hat\vartheta^2_c - \vartheta^2_c)^2 > \frac{C r_n}{n^2 (\log n)^2} \Bigg) + o(r_n)
+ \le C n^4 (\log n)^4 r_n^{-2}\, E\big[ (\hat\vartheta^2_c - \vartheta^2_c)^4 \big] + o(r_n)
+ \]
+ \[
+ = n^5 (\log n)^4\, O(n^{-6} + n^{-8}h^{-d-8}) + o(r_n) = o(r_n).
+ \]
+ Using Theorem A.1 and Corollary A.1, it follows that a valid Edgeworth expansion holds for
+ (θ̂ν − θν)/ϑc, which implies that
+ \[
+ R_2 = 1 - P\bigg( \frac{\hat\theta_\nu - \theta_\nu}{\vartheta} \le C\log n \bigg) + P\bigg( \frac{\hat\theta_\nu - \theta_\nu}{\vartheta} \le -C\log n \bigg)
+ \]
+ \[
+ = 1 - \Phi(C\log n) + \Phi(-C\log n) + C\,\frac{\phi(\log n)\log n}{nh^{d+2}} + o(r_n) = o(r_n),
+ \]
+ by properties of the Gaussian distribution. Finally, Markov's inequality implies
+ \[
+ R_3 \le C n(\log n)^2\, E\big[ (\hat\vartheta^2_c - \vartheta^2_c)^2 \big] = n(\log n)^2\, O(n^{-3} + n^{-4}h^{-d-4}) = o(r_n).
+ \]
+ Therefore, R1 + R2 + R3 = o(rn).
+ Expansion for E
+ We consider E = ρ(F̆c, Gc), where
+ \[
+ \breve F_c(t) := P\Bigg( \Big( 1 - \frac{\hat\vartheta^2_c - \vartheta^2_c}{2\vartheta^2_c} \Big) \Big( \frac{\nu'\bar L}{\vartheta_c} + \frac{\nu'\bar Q}{\vartheta_c} \Big) + B_c \le t \Bigg),
+ \qquad
+ B_c := \frac{E[\hat\theta_\nu] - \theta_\nu}{\vartheta_c},
+ \]
+ and
+ \[
+ G_c(t) := \Phi(t) - \phi(t)\Bigg( \frac{\sqrt{n}\, h^P \beta}{\sqrt{\nu'\Sigma\nu}} - \frac{1-c}{nh^{d+2}}\,\frac{\nu'\Delta\nu}{\nu'\Sigma\nu}\, t - \frac{1}{6\sqrt{n}\,(\nu'\Sigma\nu)^{3/2}} \big( \kappa_1(2t^2 + 1) + \kappa_2(t^2 + 1) \big) \Bigg).
+ \]
+ Recall that, in particular, c = 0 corresponds to the AL implementation and c = 1 corresponds to the
+ SB implementation (i.e., GAL(t) = G0(t) and GSB(t) = G1(t)). Then, applying the smoothing
+ inequality as in Theorem A.1,
+ \[
+ \rho(\breve F_c, G_c) \lesssim \breve I_1 + \breve I_2 + \breve I_3 + \breve I_4 + \frac{1}{\sqrt{n}\log n},
+ \]
+ where
+ \[
+ \breve I_1 := \int_{|t| \le \log n} \bigg| \frac{\chi_{\breve F_c}(t) - \chi_{G_c}(t)}{t} \bigg|\, dt, \qquad
+ \breve I_2 := \int_{\log n < |t| \le c\sqrt{n}} \bigg| \frac{\chi_{\breve F_c}(t)}{t} \bigg|\, dt,
+ \]
+ \[
+ \breve I_3 := \int_{c\sqrt{n} < |t| \le \sqrt{n}\log n} \bigg| \frac{\chi_{\breve F_c}(t)}{t} \bigg|\, dt, \qquad
+ \breve I_4 := \int_{|t| > \log n} \bigg| \frac{\chi_{G_c}(t)}{t} \bigg|\, dt.
+ \]
+ The last three integrals above can be upper bounded following the same arguments used in the
+ proof of Theorem A.1 to conclude that Ĭ2 + Ĭ3 + Ĭ4 = o(√n h^P + n⁻¹h^{−d−2} + n^{−1/2}). The first
+ integral, Ĭ1, is analyzed by expanding χF̆c(t), generalizing the proof of Theorem A.1 to account
+ for the contribution of the Studentization to the sampling distribution of the linearized version of
+ the statistic (F̆c).
+ First, by (A.5) we write
+ \[
+ \chi_{\breve F_c}(t) = \exp(\iota t B_c)\, E\exp(\iota t \tilde U_c) = \big( 1 + \iota t B_c + O(t^2 B_c^2) \big)\, E\exp(\iota t \tilde U_c),
+ \tag{C.1}
+ \]
+ where
+ \[
+ \tilde U_c = \Big( 1 - \frac{\hat\vartheta^2_c - \vartheta^2_c}{2\vartheta^2_c} \Big) \Big( \frac{\nu'\bar L}{\vartheta_c} + \frac{\nu'\bar Q}{\vartheta_c} \Big).
+ \]
+ From (C.8) below, we have
+ \[
+ \frac{\hat\vartheta^2_c - \vartheta^2_c}{2\vartheta^2_c} = H_c + T_c
+ \tag{C.2}
+ \]
+ with
+ \[
+ H_c := -\binom{n}{2}^{-1} \frac{1-c}{2\vartheta^2_c} E[q_{12}^2]
+ - \frac{1}{2n\vartheta^2_c}\,\frac{1}{n} \sum_{i=1}^{n} \Big( \big( \ell_i^2 - E[\ell_i^2] \big) + 4E[\ell_j q_{ij}\,|\,Z_i] \Big)
+ - \frac{2}{n\vartheta^2_c} \binom{n}{2}^{-1} \sum_{j<k} E[q_{ij} q_{ik}\,|\,Z_j, Z_k],
+ \]
+ where we define ℓi := ν′Li and qij := ν′Qij, and Tc := −Vc/(2ϑ²c) with Vc given in (C.8). Next,
+ applying (A.5) repeatedly, we are left with
+ \[
+ E\exp(\iota t \tilde U_c) = E\exp\Big( \iota t \Big( \frac{\nu'\bar L}{\vartheta_c} + \frac{\nu'\bar Q}{\vartheta_c} \Big) \Big)
+ + \iota t\, E\Big[ H_c\, \frac{\nu'\bar L}{\vartheta_c}\, \exp\Big( \iota t\, \frac{\nu'\bar L}{\vartheta_c} \Big) \Big] + O(E_1(t)),
+ \tag{C.3}
+ \]
+ where E1(t) = |t|E|Tc(ν′L̄) + (Hc + Tc)(ν′Q̄)| + t²(E(Hc(ν′L̄))² + E|Hc(ν′L̄)(ν′Q̄)|). The first term was
+ expanded in the proof of Theorem A.1, as it corresponds to the standardized version of the statistic.
+ The second term can be expanded analogously (see, e.g., Appendix B-(a) in Nishiyama and Robinson
+ (2001)):
+ \[
+ E\Big[ H_c\, \frac{\nu'\bar L}{\vartheta_c}\, \exp\Big( \iota t\, \frac{\nu'\bar L}{\vartheta_c} \Big) \Big]
+ \tag{C.4}
+ \]
+ \[
+ = -\Big[ \psi\Big( \frac{t}{n\vartheta_c} \Big) \Big]^{n-1} \Bigg( \frac{1-c}{2}\,\frac{\iota t}{\vartheta^2_c} \binom{n}{2}^{-1} E[q_{12}^2]
+ + O\Big( \frac{|t|}{n^2 h^{d+2}} + \frac{t^2}{n^{3/2} h^{d+2}} + \frac{|t| h^P}{h^{d+2}} \Big) \Bigg)
+ \]
+ \[
+ - \Big[ \psi\Big( \frac{t}{n\vartheta_c} \Big) \Big]^{n-1} \Bigg( \frac{1}{2\vartheta^3_c n^2}\big( E\ell_1^3 + 4E\ell_1\ell_2 q_{12} \big) + O\Big( \frac{|t|}{n} \Big) \Bigg)
+ \]
+ \[
+ - \Big[ \psi\Big( \frac{t}{n\vartheta_c} \Big) \Big]^{n-2} \Bigg( \frac{(\iota t)^2}{2\vartheta^3_c n^2}\big( E\ell_1^3 + 4E\ell_1\ell_2 q_{12} \big) + O\Big( \frac{t^2 + |t|^3}{n} + \frac{t^4}{n^{3/2}} \Big) \Bigg)
+ \]
+ \[
+ - \Big[ \psi\Big( \frac{t}{n\vartheta_c} \Big) \Big]^{n-3}\, O\Big( \frac{|t|^3 + |t|}{n} + \frac{t^2}{n^{3/2} h^{d+2}} + \frac{t^6}{n^3 h^{d+2}} + \frac{|t|^5}{n^{5/2} h^{d+2}} + \frac{t^4 + |t|^3}{n^2 h^{d+2}} \Big).
+ \tag{C.5}
+ \]
+ Combine (A.9), (A.10), (C.3), and (C.4) to obtain
+ \[
+ E\exp(\iota t \tilde U_c) = \exp\Big( -\frac{t^2}{2} \Big) \Bigg( \bigg( 1 + \frac{(\iota t)^2}{2}\Big( \frac{E[\ell_1^2]}{\vartheta^2_c n} - 1 \Big) \bigg) + \frac{(\iota t)^3}{6\vartheta^3_c n^2} E\ell_1^3 + O(E_2(t)) + o(E_3(t)) \Bigg)
+ \]
+ \[
+ \times \Bigg( 1 + \frac{(\iota t)^2}{2\vartheta^2_c} \binom{n}{2}^{-1} E[q_{12}^2] + \frac{(\iota t)^3}{\vartheta^3_c n^2} E\ell_1\ell_2 q_{12}
+ - \frac{1-c}{2}\,\frac{(\iota t)^2}{\vartheta^2_c} \binom{n}{2}^{-1} E[q_{12}^2]
+ - \frac{\iota t + (\iota t)^3}{\vartheta^3_c n^2} \Big( \frac{E\ell_1^3}{2} + 2E\ell_1\ell_2 q_{12} \Big) + O(E_4(t)) \Bigg) + O(E_1(t)),
+ \tag{C.6}
+ \]
+ where E2(t) and E3(t) are the last two rates appearing in (A.9), respectively. Also, proceeding as in
+ Nishiyama and Robinson (2001),
+ \[
+ E_4(t) = o\Big( \frac{t^2 + t^{10}}{nh^{d+2}} + \frac{t^2 + t^6}{\sqrt{n}} \Big).
+ \]
+ Combine (C.6) with (C.1) and expand the product to obtain
+ \[
+ \chi_{\breve F_c}(t) = \exp\Big( -\frac{t^2}{2} \Big) \Bigg( 1 + \sum_{j=1}^{3} (\iota t)^j \breve\gamma_{c,j} \Bigg) + O(E_5(t)),
+ \]
+ where
+ \[
+ \breve\gamma_{c,1} := \frac{\beta h^P}{\vartheta_c} - \frac{E\ell_1^3/2 + 2E\ell_1\ell_2 q_{12}}{\vartheta^3_c n^2},
+ \qquad
+ \breve\gamma_{c,2} := -(1-c)\binom{n}{2}^{-1} \frac{Eq_{12}^2}{2\vartheta^2_c},
+ \qquad
+ \breve\gamma_{c,3} := -\frac{1}{6n^2\vartheta^3_c}\big( 2E\ell_1^3 + 6E\ell_1\ell_2 q_{12} \big),
+ \]
+ and
+ \[
+ E_5(t) := \Big( e^{-t^2/2}\,\frac{|t|^3}{\sqrt{n}} + o\big( n^{-1/2}(t^6 + |t|^3)e^{-t^2/4} \big) \Big) \Big( \frac{t^2}{n^2 h^{d+2}} + \frac{|t|^3 + |t|}{\sqrt{n}} + E_4(t) \Big)
+ \]
+ \[
+ + e^{-t^2/2}\big( |t|\sqrt{n}\, h^P + t^2 h^{2P} + |t|\sqrt{n}\, h^{2P} \big) \Big( \frac{t^2}{n^2 h^{d+2}} + \frac{|t|^3 + |t|}{\sqrt{n}} + E_4(t) \Big)
+ \]
+ \[
+ + \big( |t|\sqrt{n}\, h^P + t^2 n h^{2P} \big) \Big( e^{-t^2/2}\,\frac{|t|^3}{\sqrt{n}} + o\big( n^{-1/2}(t^6 + |t|^3)e^{-t^2/4} \big) \Big)
+ \]
+ \[
+ + \big( |t|\sqrt{n}\, h^P + t^2 n h^{2P} \big) \Big( e^{-t^2/2}\,\frac{|t|^3}{\sqrt{n}} + o\big( n^{-1/2}(t^6 + |t|^3)e^{-t^2/4} \big) \Big) \Big( \frac{t^2}{n^2 h^{d+2}} + \frac{|t| + |t|^3}{\sqrt{n}} + E_4(t) \Big)
+ \]
+ \[
+ + \big( |t| + t^2\sqrt{n}\, h^P + |t|^3 n h^{2P} \big) \Big( E|T_c \bar L| + E|(H_c + T_c)\bar Q| \Big)
+ + \big( t^2 + |t|^3\sqrt{n}\, h^P + t^4 n h^{2P} \big) \Big( E(H_c \bar L)^2 + E|H_c \bar L \bar Q| \Big).
+ \]
+ We showed in the proof of Theorem 1 that Eℓ1³ = κ1 + O(h^P) and Eℓ1ℓ2q12 = κ2 + O(h^P), with
+ O(h^P) = o(1), and hence
+ \[
+ \chi_{\breve F_c}(t) = \exp\Big( -\frac{t^2}{2} \Big) \Bigg( 1 + \iota t\Big( \frac{\beta h^P}{\vartheta_c} - \frac{\kappa_1/2 + 2\kappa_2}{\vartheta^3_c n^2} \Big)
+ - (1-c)(\iota t)^2 \binom{n}{2}^{-1} \frac{Eq_{12}^2}{2\vartheta^2_c}
+ - \frac{(\iota t)^3}{6n^2\vartheta^3_c}\big( 2\kappa_1 + 6\kappa_2 \big) \Bigg)
+ + O(E_5(t)) + o\Big( \exp\Big( -\frac{t^2}{2} \Big)\,\frac{|t| + |t|^3}{\sqrt{n}} \Big).
+ \]
+ Note that the first term is the characteristic function of G. Finally, we bound the moments
+ appearing in E5(t): E|TcL̄|, E|(Hc + Tc)Q̄|, E(HcL̄)², and E|HcL̄Q̄|. Hölder's inequality combined
+ with the theorem assumptions gives
+ \[
+ E|T_c \bar L| \le \sqrt{ E|T_c|^2\, E|\bar L|^2 } = O(n^{-1} h^{-(d+2)/2}),
+ \]
+ \[
+ E|(H_c + T_c)\bar Q| \le \sqrt{ E|H_c + T_c|^2\, E|\bar Q|^2 } = O\big( (n^{-1/2} + n^{-1}h^{-d-2})(n^{-1/2}h^{-d/2-1}) \big),
+ \]
+ \[
+ E(H_c \bar L)^2 = O(n^{-1} + n^{-2}h^{-2d-4}),
+ \qquad
+ E|H_c \bar L \bar Q| = O\big( (n^{-1/2} + n^{-1}h^{-d-2})(n^{-1/2}h^{-d/2-1}) \big).
+ \]
+ Therefore, if (log n)⁹/(nh^{d+2}) → 0,
+ \[
+ \breve I_1 := \int_{|t| \le \log n} \frac{|\chi_{\breve F_c}(t) - \chi_{G_c}(t)|}{|t|}\, dt = o\big( \sqrt{n}\, h^P + n^{-1}h^{-d-2} + n^{-1/2} \big).
+ \]
+ This completes the proof.
+ C.1    Alternative Decomposition of ϑ̂²c
+ Let uij = ν′Uij and, following Callaert and Veraverbeke (1981) with S²N given in their main
+ theorem, we have
+ \[
+ S_N^2 := \frac{n^2(n-1)}{(n-2)^2}\,\hat\vartheta^2_{\mathrm{AL}} = \frac{4(n-1)}{(n-2)^2} \sum_{i=1}^{n} (\nu'\hat L_i/2)^2
+ \]
+ \[
+ = \frac{8}{(n-1)(n-2)^2} \Bigg( \sum_{i<j} (u_{ij} - Eu_{12})^2 + \sum_{i=1}^{n} \sum_{\substack{j<k \\ j,k\neq i}} (u_{ij} - Eu_{12})(u_{ik} - Eu_{12}) \Bigg) - \frac{4n(n-1)}{(n-2)^2}\,(\hat\theta - Eu_{12})^2 .
+ \]
+ Define gi := E[ℓjqij|Zi] and use the fact that uij − Eu12 = ℓi/2 + ℓj/2 + qij to further decompose
+ \[
+ S_N^2 = \frac{1}{n} \sum_{i=1}^{n} \big( \ell_i^2 + 4g_i \big) - \binom{n}{2}^{-1} \sum_{i<j} \ell_i \ell_j + 2\binom{n}{2}^{-1} \sum_{i<j} \big( (\ell_i + \ell_j) q_{ij} - g_i - g_j \big)
+ \]
+ \[
+ - \frac{4}{n} \binom{n-1}{2}^{-1} \sum_{i=1}^{n} \ell_i \sum_{\substack{j<k \\ j,k\neq i}} q_{jk} + \frac{4}{n-2} \binom{n-1}{2}^{-1} \sum_{i=1}^{n} \sum_{\substack{j<k \\ j,k\neq i}} q_{ij} q_{ik}
+ \]
+ \[
+ - \frac{4n(n-1)}{(n-2)^2} \Bigg( \binom{n}{2}^{-1} \sum_{i<j} q_{ij} \Bigg)^2 + \frac{4n}{(n-2)^2} \binom{n}{2}^{-1} \sum_{i<j} q_{ij}^2 .
+ \tag{C.7}
+ \]
+ We center the first term at its expectation,
+ \[
+ \frac{1}{n} \sum_{i=1}^{n} \big( \ell_i^2 + 4g_i \big) = E[\ell_1^2] + \frac{1}{n} \sum_{i=1}^{n} \Big( \big( \ell_i^2 - E[\ell_1^2] \big) + 4g_i \Big).
+ \]
+ Define φij := E[qkiqkj|Zi, Zj], and for the fifth term we write
+ \[
+ \frac{4}{n-2} \binom{n-1}{2}^{-1} \sum_{i=1}^{n} \sum_{\substack{j<k \\ j,k\neq i}} q_{ij} q_{ik}
+ = 4 \binom{n-1}{2}^{-1} \sum_{i<j} \varphi_{ij} + \frac{4}{n-2} \binom{n-1}{2}^{-1} \sum_{i=1}^{n} \sum_{\substack{j<k \\ j,k\neq i}} \big( q_{ij} q_{ik} - \varphi_{jk} \big).
+ \]
+ Finally, for the last term,
+ \[
+ \frac{4n}{(n-2)^2} \binom{n}{2}^{-1} \sum_{i<j} q_{ij}^2
+ = \frac{4n}{(n-2)^2} \binom{n}{2}^{-1} \sum_{i<j} \big( q_{ij}^2 - \varphi_{ii} - \varphi_{jj} + E[q_{12}^2] \big)
+ + \frac{8}{(n-2)^2} \sum_{i=1}^{n} \big( \varphi_{ii} - E[q_{12}^2] \big) + \frac{4n}{(n-2)^2} E[q_{12}^2].
+ \]
+ Plug the last three displays back into (C.7) to conclude
+ \[
+ S_N^2 = E[\ell_1^2] + \frac{4n}{(n-2)^2} E[q_{12}^2] + \frac{1}{n} \sum_{i=1}^{n} \Big( \big( \ell_i^2 - E[\ell_1^2] \big) + 4g_i \Big) + 4\binom{n-1}{2}^{-1} \sum_{i<j} \varphi_{ij} + S_c,
+ \]
+ where Sc collects the remaining terms.
+ Next, we have
+ \[
+ \binom{n}{2}^{-1} \sum_{i<j} u_{ij}^2 = \binom{n}{2}^{-1} \sum_{i<j} \big( q_{ij} + \ell_i/2 + \ell_j/2 + Eu_{12} \big)^2
+ \]
+ \[
+ = Eq_{12}^2 + \binom{n}{2}^{-1} \sum_{i<j} \big( q_{ij}^2 - Eq_{12}^2 \big) + (n-1)\binom{n}{2}^{-1} \sum_{i=1}^{n} (\ell_i/2)^2 + (Eu_{12})^2
+ \]
+ \[
+ + \binom{n}{2}^{-1} \sum_{i<j} q_{ij}(\ell_i + \ell_j) + 2Eu_{12} \binom{n}{2}^{-1} \sum_{i<j} \big( q_{ij} + \ell_i/2 + \ell_j/2 \big) + \frac{1}{2}\binom{n}{2}^{-1} \sum_{i<j} \ell_i \ell_j
+ \]
+ \[
+ = Eq_{12}^2 + Q_c,
+ \]
+ where Qc is defined by this display.
+ We have
+ \[
+ \hat\vartheta^2_c = \hat\vartheta^2_{\mathrm{AL}} - c \binom{n}{2}^{-1} h^{-d-2}\, \nu'\hat\Delta\nu
+ = \frac{(n-2)^2}{n^2(n-1)}\, S_N^2 - c \binom{n}{2}^{-1} \Bigg( \binom{n}{2}^{-1} \sum_{i<j} u_{ij}^2 \Bigg)
+ \]
+ \[
+ = (2-c)\binom{n}{2}^{-1} E[q_{12}^2] + \frac{1 + o(1)}{n}\, E[\ell_1^2] + \frac{1 + o(1)}{n^2} \sum_{i=1}^{n} \Big( \big( \ell_i^2 - E[\ell_1^2] \big) + 4g_i \Big)
+ \]
+ \[
+ + \frac{1 + o(1)}{n}\, 4\binom{n}{2}^{-1} \sum_{i<j} \varphi_{ij} + \frac{1 + o(1)}{n}\, S_c - c \binom{n}{2}^{-1} Q_c,
+ \]
+ which gives the following simplified expression for the class of Studentizations:
+ \[
+ \hat\vartheta^2_c = (2-c)\binom{n}{2}^{-1} E[q_{12}^2] + \frac{1}{n} E[\ell_1^2] + \frac{1}{n^2} \sum_{i=1}^{n} \Big( \big( \ell_i^2 - E[\ell_1^2] \big) + 4g_i \Big) + \frac{4}{n}\binom{n}{2}^{-1} \sum_{i<j} \varphi_{ij} + V_c,
+ \tag{C.8}
+ \]
+ where Vc is defined by this display.
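The pointwise Hoeffding identity uij − Eu12 = ℓi/2 + ℓj/2 + qij and the degeneracy E[qij | Zi] = 0 (which makes gi mean zero) are the two facts driving the decomposition in this appendix. Both can be checked exactly on a small discrete example (toy kernel; all names hypothetical):

```python
import itertools

vals = [0, 1, 2]                      # Z uniform on a small support
def u(a, b):                          # hypothetical symmetric kernel
    return (a - b) ** 2

Eu = sum(u(a, b) for a in vals for b in vals) / 9
cond = {a: sum(u(a, b) for b in vals) / 3 for a in vals}   # E[u(a, Z)]
ell = {a: 2 * (cond[a] - Eu) for a in vals}                # linear component
def q(a, b):                                               # degenerate component
    return u(a, b) - cond[a] - cond[b] + Eu

for a, b in itertools.product(vals, repeat=2):
    # pointwise Hoeffding identity: u_ij - Eu_12 = l_i/2 + l_j/2 + q_ij
    assert abs(u(a, b) - Eu - (ell[a] / 2 + ell[b] / 2 + q(a, b))) < 1e-12
for a in vals:
    # degeneracy: E[q(a, Z)] = 0 for every a
    assert abs(sum(q(a, b) for b in vals) / 3) < 1e-12
```

The degeneracy is what makes all cross terms between the ℓ-sums and q-sums vanish in expectation, so that only the centered squares and the φij terms survive in (C.8).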
+
+ References
+ Ahn, H., Ichimura, H., Powell, J. L., and Ruud, P. A. (2018). “Simple Estimators for Invertible Index Models,” Journal of Business & Economic Statistics, 36(1), 1–10.
+ Aradillas-Lopez, A., Honoré, B. E., and Powell, J. L. (2007). “Pairwise Difference Estimation with Nonparametric Control Variables,” International Economic Review, 48(4), 1119–1158.
+ Bhattacharya, R. N. and Rao, R. R. (1976). Normal Approximation and Asymptotic Expansions. John Wiley and Sons.
+ Blundell, R. W. and Powell, J. L. (2004). “Endogeneity in Semiparametric Binary Response Models,” Review of Economic Studies, 71(3), 655–679.
+ Callaert, H. and Veraverbeke, N. (1981). “The Order of the Normal Approximation for a Studentized U-statistic,” Annals of Statistics, 9, 194–200.
+ Calonico, S., Cattaneo, M. D., and Farrell, M. H. (2018). “On the Effect of Bias Estimation on Coverage Accuracy in Nonparametric Inference,” Journal of the American Statistical Association, 113(522), 767–779.
+ Calonico, S., Cattaneo, M. D., and Farrell, M. H. (2022). “Coverage Error Optimal Confidence Intervals for Local Polynomial Regression,” Bernoulli, 28(4), 2998–3022.
+ Cattaneo, M. D., Crump, R. K., and Jansson, M. (2010). “Robust Data-Driven Inference for Density-Weighted Average Derivatives,” Journal of the American Statistical Association, 105(491), 1070–1083.
+ Cattaneo, M. D., Crump, R. K., and Jansson, M. (2013). “Generalized Jackknife Estimators of Weighted Average Derivatives (with Discussions and Rejoinder),” Journal of the American Statistical Association, 108(504), 1243–1268.
+ Cattaneo, M. D., Crump, R. K., and Jansson, M. (2014a). “Small Bandwidth Asymptotics for Density-Weighted Average Derivatives,” Econometric Theory, 30(1), 176–200.
+ Cattaneo, M. D., Crump, R. K., and Jansson, M. (2014b). “Bootstrapping Density-Weighted Average Derivatives,” Econometric Theory, 30(6), 1135–1164.
+ Cattaneo, M. D. and Jansson, M. (2018). “Kernel-Based Semiparametric Estimators: Small Bandwidth Asymptotics and Bootstrap Consistency,” Econometrica, 86(3), 955–995.
+ Cattaneo, M. D., Jansson, M., and Ma, X. (2019). “Two-step Estimation and Inference with Possibly Many Included Covariates,” Review of Economic Studies, 86(3), 210–245.
+ Cattaneo, M. D., Jansson, M., and Newey, W. K. (2018a). “Alternative Asymptotics and the Partially Linear Model with Many Regressors,” Econometric Theory, 34(2), 277–301.
+ Cattaneo, M. D., Jansson, M., and Newey, W. K. (2018b). “Inference in Linear Regression Models with Many Covariates and Heteroscedasticity,” Journal of the American Statistical Association, 113(523), 1350–1361.
+ Chernozhukov, V., Escanciano, J. C., Ichimura, H., Newey, W. K., and Robins, J. M. (2022). “Locally Robust Semiparametric Estimation,” Econometrica, 90(4), 1501–1535.
+ Dharmadhikari, S. W., Fabian, V., and Jogdeo, K. (1968). “Bounds on the Moments of Martingales,” Annals of Mathematical Statistics, 39(5), 1719–1723.
+ Efron, B. and Stein, C. (1981). “The Jackknife Estimate of Variance,” Annals of Statistics, 9(3), 586–596.
+ Giné, E., Latała, R., and Zinn, J. (2000). “Exponential and Moment Inequalities for U-Statistics,” in E. Giné, D. M. Mason, and J. A. Wellner (eds.), High Dimensional Probability II, 13–38. Boston, MA: Birkhäuser Boston.
+ Graham, B. S., Niu, F., and Powell, J. L. (2023). “Kernel Density Estimation for Undirected Dyadic Data,” Journal of Econometrics.
+ Hall, P. (1992). The Bootstrap and Edgeworth Expansion. New York: Springer-Verlag.
+ Hoeffding, W. (1948). “A Class of Statistics with Asymptotically Normal Distribution,” Annals of Mathematical Statistics, 19(3), 293–325.
+ Honoré, B. E. and Powell, J. L. (1994). “Pairwise Difference Estimators of Censored and Truncated Regression Models,” Journal of Econometrics, 64(1-2), 241–278.
+ Ichimura, H. and Todd, P. E. (2007). “Implementing Nonparametric and Semiparametric Estimators,” in J. Heckman and E. Leamer (eds.), Handbook of Econometrics, Volume VIB. Elsevier Science B.V., 5370–5468.
+ Jing, B.-Y. and Wang, Q. (2003). “Edgeworth Expansion for U-statistics under Minimal Conditions,” Annals of Statistics, 31(4), 1376–1391.
+ Matsushita, Y. and Otsu, T. (2021). “Jackknife Empirical Likelihood: Small Bandwidth, Sparse Network and High-Dimensional Asymptotics,” Biometrika, 108(3), 661–674.
+ Newey, W. K. (1994). “The Asymptotic Variance of Semiparametric Estimators,” Econometrica, 62(6), 1349–1382.
+ Newey, W. K., Hsieh, F., and Robins, J. M. (2004). “Twicing Kernels and a Small Bias Property of Semiparametric Estimators,” Econometrica, 72, 947–962.
+ Newey, W. K. and McFadden, D. L. (1994). “Large Sample Estimation and Hypothesis Testing,” in R. F. Engle and D. L. McFadden (eds.), Handbook of Econometrics, Volume IV. New York: Elsevier Science B.V., 2111–2245.
+ Nishiyama, Y. and Robinson, P. M. (2000). “Edgeworth Expansions for Semiparametric Averaged Derivatives,” Econometrica, 68, 931–979.
+ Nishiyama, Y. and Robinson, P. M. (2001). “Studentization in Edgeworth Expansions for Estimates of Semiparametric Index Models,” in C. Hsiao, K. Morimune, and J. L. Powell (eds.), Nonlinear Statistical Modeling: Essays in Honor of Takeshi Amemiya. New York: Cambridge University Press, 197–240.
+ Nishiyama, Y. and Robinson, P. M. (2005). “The Bootstrap and the Edgeworth Correction for Semiparametric Averaged Derivatives,” Econometrica, 73, 197–240.
+ de la Peña, V. H. and Montgomery-Smith, S. J. (1995). “Decoupling Inequalities for the Tail Probabilities of Multivariate U-statistics,” Annals of Probability, 23(2), 806–816.
+ Powell, J. L. (1994). “Estimation of Semiparametric Models,” in R. F. Engle and D. L. McFadden (eds.), Handbook of Econometrics, Volume IV. New York: Elsevier Science B.V., 2443–2521.
+ Powell, J. L. (2017). “Identification and Asymptotic Approximations: Three Examples of Progress in Econometric Theory,” Journal of Economic Perspectives, 31(2), 107–124.
+ Powell, J. L., Stock, J. H., and Stoker, T. M. (1989). “Semiparametric Estimation of Index Coefficients,” Econometrica, 57(6), 1403–1430.
+ Powell, J. L. and Stoker, T. M. (1996). “Optimal Bandwidth Choice for Density-Weighted Averages,” Journal of Econometrics, 75, 291–316.
+ Robinson, P. M. (1995). “The Normal Approximation for Semiparametric Averaged Derivatives,” Econometrica, 63, 667–680.
+ Stoker, T. M. (1986). “Consistent Estimation of Scaled Coefficients,” Econometrica, 54, 1461–1481.
+
39AyT4oBgHgl3EQfcPct/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
39FAT4oBgHgl3EQfEhxj/content/tmp_files/2301.08422v1.pdf.txt ADDED
@@ -0,0 +1,944 @@
1
+ A vision-based autonomous UAV inspection framework for
2
+ unknown tunnel construction sites with dynamic obstacles
3
+ Zhefan Xu, Baihan Chen, Xiaoyang Zhan, Yumeng Xiu, Christopher Suzuki, and Kenji Shimada
4
+ Abstract—Tunnel construction using the drill-and-blast method
5
+ requires the 3D measurement of the excavation front to evaluate
6
+ underbreak locations. Considering the inspection and measure-
7
+ ment task’s safety, cost, and efficiency, deploying lightweight
8
+ autonomous robots, such as unmanned aerial vehicles (UAV),
9
+ becomes more necessary and popular. Most of the previous works
10
+ use a prior map for inspection viewpoint determination and do
11
+ not consider dynamic obstacles. To maximally increase the level
12
+ of autonomy, this paper proposes a vision-based UAV inspection
13
+ framework for dynamic tunnel environments without using a
14
+ prior map. Our approach utilizes a hierarchical planning scheme,
15
+ decomposing the inspection problem into different levels. The
16
+ high-level decision maker first determines the task for the robot
17
+ and generates the target point. Then, the mid-level path planner
18
+ finds the waypoint path and optimizes the collision-free static
19
+ trajectory. Finally, the static trajectory will be fed into the low-
20
+ level local planner to avoid dynamic obstacles and navigate to the
21
+ target point. Besides, our framework contains a novel dynamic
22
+ map module that can simultaneously track dynamic obstacles
23
+ and represent static obstacles based on an RGB-D camera.
24
+ After inspection, the Structure-from-Motion (SfM) pipeline is
25
+ applied to generate the 3D shape of the target. To our best
26
+ knowledge, this is the first time autonomous inspection has been
27
+ realized in unknown and dynamic tunnel environments. Our
28
+ flight experiments in a real tunnel prove that our method can
29
+ autonomously inspect the tunnel excavation front surface.
30
+ Index Terms—Field Robotics, Motion and Path Planning, Per-
31
+ ception and Autonomy, Robotics and Automation in Construction
32
I. INTRODUCTION

Drilling and blasting is a common tunnel construction and excavation method. The main cycle of this method includes steps such as drilling for explosives, blasting, measuring underbreaks, and spraying concrete. Among these steps, measuring underbreaks at the tunnel excavation front is dangerous for workers because of potential falling rocks. With the emergence of lightweight unmanned aerial vehicles (UAVs), the robot becomes suitable for handling measurement and inspection tasks, as it can avoid potential human dangers and inspect unreachable locations. Consequently, an autonomous inspection framework is essential to improve the safety and efficiency of underbreak measurement and tunnel construction.

There are two main challenges of autonomous UAV inspection in tunnel environments. First, since tunnel environments under construction change with time, it is unlikely to have up-to-date maps of the large construction

Zhefan Xu, Baihan Chen, Xiaoyang Zhan, Yumeng Xiu, Christopher Suzuki, and Kenji Shimada are with the Department of Mechanical Engineering, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15213.
Fig. 1. Illustration of the UAV navigating and inspecting the excavation front in the tunnel environment. (a) The tunnel under construction. (b) The target inspection area (the excavation front). (c) The robot navigates toward the inspection target and avoids obstacles. (d) The robot inspects the target area.
vehicles and equipment near the excavation front. Therefore, the robot should be able to navigate from arbitrary positions in the tunnel toward the excavation front area (i.e., the end of the tunnel) based on onboard sensing. Previous works on sampling-based unknown exploration [1][2][3][4] can make the robot successfully navigate and map unknown environments with the onboard sensor, and this exploration method has been applied to unknown tunnel inspection [5]. However, because these approaches only utilize the explored map information to randomly sample viewpoints, the output trajectory can be zigzag and over-conservative, making navigation less efficient. The second challenge comes from the moving workers and machines in tunnels, which the robot should track and avoid safely. Even though some recent research [6][7][8] has investigated UAV dynamic obstacle avoidance problems, their local planning strategies without global path fusion make them insufficient for complex inspection tasks in tunnel environments, which contain complicated static structures and unpredictable dynamic obstacles.

To solve these issues, this paper proposes a vision-based autonomous UAV inspection framework for unknown and dynamic tunnel environments. We develop a small, lightweight quadcopter with an RGB-D camera for safely sharing space and operating with vehicles, equipment, and workers in the tunnel. The proposed approach utilizes a hierarchical planning method, decomposing the entire inspection planning into high, mid, and low levels. The current task is determined at the high planning level to generate the goal position for navigation and

arXiv:2301.08422v1 [cs.RO] 20 Jan 2023
exploration. Then, the mid-level planner finds and optimizes a smooth trajectory toward the goal based on the static obstacle information from the incrementally built map. Finally, at the low level, our vision-aided gradient-based planner is applied to locally optimize the trajectory for avoiding dynamic obstacles. In addition, we propose a novel dynamic map representation that can simultaneously represent static obstacles and track dynamic obstacles. The example tunnel environment and the autonomous robot using the proposed method are shown in Fig. 1. The main contributions and novelties of this work are:

• Hierarchical inspection framework: This paper applies a hierarchical scheme to solve the autonomous inspection problem based on different planning layers.
• Depth-based 3D dynamic map: Our method utilizes depth images to detect and track dynamic obstacles and update the occupancy information of static environments.
• Gradient-based dynamic obstacle avoidance: We propose a gradient-based B-spline trajectory optimization to avoid dynamic obstacles in real time.
• Tunnel experiments with 3D reconstruction: The entire system is verified with a customized quadcopter in a tunnel, with 3D reconstruction results of the target surface.
II. RELATED WORK

This section first discusses the recent trends and approaches in construction site inspection by autonomous UAVs. Then, relevant works on the key challenges of tunnel inspection (i.e., exploration and dynamic obstacle avoidance) are reviewed.

There are mainly two categories of construction site and building inspection methods: model-based and non-model-based methods. For the model-based methods, the inspection target model is usually available, and the planner generates a set of optimal viewpoints based on the provided model. In [9], the target bridge is first partitioned into surfaces with inspection nodes, and a GTSP solver is then applied to find optimal paths for inspection. Similarly, some works use the BIM model to find viewpoints of interest (VPI) and solve the path-planning problem using TSP-based methods [10][11]. However, the target model can be unavailable for tunnel inspection, so the robot can only rely on onboard sensors. Accordingly, reactive methods have been proposed for unknown tunnel navigation using lidar point measurements [12][13]. These methods can navigate tunnels of arbitrary shapes but do not consider obstacle avoidance. Bendris et al. [5] utilize a sampling-based method to generate viewpoints for unknown exploration and inspection. Their method can successfully avoid static obstacles but might not be safe around dynamic obstacles due to the long replanning time. Besides, their random sampling strategy in the explored area can lead to zigzag and over-conservative paths for navigation. The work in [14] proposes a 3D reconstruction method for UAV tunnel inspection without a path-planning strategy.
The unknown exploration problem can be viewed as determining a series of informative viewpoints [15]. Yamauchi [16] first used the frontier exploration approach, allowing robots to visit the map boundary to gain environment information. Later, [17] extended frontier exploration to high-speed UAVs. Some approaches [18] apply information-theoretic methods to evaluate the information gains of viewpoints. Considering the limited computation power of lightweight UAVs, sampling-based methods [1][2][3][4] have been preferred in recent years. In [1], the RH-NBV planner grows an RRT with the information gains stored in each node. The robot then follows the highest-gain branch in a receding horizon manner. Selin et al. [2] combine the RH-NBV with frontier exploration, further improving exploration efficiency. To save and reuse the computation in each planning iteration, Schmid et al. [3] adopt the RRT* algorithm with rewiring to incrementally build the tree. With a similar incremental sampling idea, [4] proposes a PRM-based method for exploration and obstacle avoidance in dynamic environments.

The dynamic obstacle avoidance problem still remains open. In reactive methods, the robots directly generate control velocities to avoid obstacles. Khatib [19] constructs an artificial potential field to find the velocity for obstacle avoidance and navigation, and Berg et al. [20] use linear programming to optimize velocities based on the Velocity Obstacle [21]. These methods require less computation than trajectory-based methods but can lead to more myopic behavior. Trajectory-based methods have become more prevalent in UAV planning in recent years. Some [22][23][24][25] use the model predictive control scheme to generate collision-free trajectories based on kinematic constraints. The method in [8] utilizes B-spline optimization to generate collision-free trajectories with vision aid, and Chen et al. [26] evaluate trajectory risks using their dual-structure particle map.
III. PROBLEM DESCRIPTION

In an unknown tunnel space Vt ⊂ R³ with a straight tunnel centerline C of finite length, there exists an excavation front (i.e., the target wall for inspection) at the end of the tunnel. Inside the tunnel space Vt, there are static obstacles Ostatic and dynamic obstacles Odynamic of different sizes. A UAV with an onboard depth camera is deployed for the inspection task. Without a prior map M, the robot needs to first navigate toward the excavation front area from an arbitrary position in the space Vt, then generate an inspection path to collect RGB images of the inspection target, and finally return to the start location. During the forward navigation and return period, the robot should avoid all static obstacles O^all_static and the dynamic obstacles O^sensor_dynamic in its sensor range. The final output of the entire system should be the 3D shape of the inspection target reconstructed using the collected RGB images.
IV. PROPOSED METHOD

The proposed inspection framework has three main components, shown in Fig. 2: visual perception, hierarchical planning, and data post-processing. The visual perception step processes the sensor measurements from the onboard depth camera and the inertial measurement unit (IMU). The localization module runs the visual-inertial odometry (VIO) algorithm with EKF fusion to obtain the robot state estimate. Besides, the dynamic

Fig. 2. System framework for autonomous inspection. Our proposed framework contains three parts: visual perception, hierarchical planning, and data post-processing. In the visual perception step, the localization module applies the visual-inertial odometry with EKF fusion for state estimation. The dynamic map module builds the static voxel map and tracks dynamic obstacles based on depth images. In the hierarchical planning section, the high-level and mid-level planners use the static voxel map to generate the static trajectory. Then, the low-level planner uses the dynamic obstacle information to optimize the output trajectory for execution. The final data post-processing step takes the images collected from the inspection stage to reconstruct the target model for analysis.

map module utilizes depth images to track dynamic obstacles and update the occupancy information for static obstacles using the voxel map, which will be further discussed in Sec. IV-B. After the perception step, the hierarchical planning section generates collision-free trajectories for the robot to achieve the entire inspection task. Sec. IV-A will introduce the logic of our hierarchical planning for the tunnel inspection and the task decision maker in the high-level planner. Then, the obstacle avoidance based on the mid-level trajectory planner and low-level dynamic planner will be covered in Sec. IV-C. After finishing the inspection task, the data post-processing step, described in Sec. IV-D, takes the collected target images and performs 3D reconstruction to obtain the target model.
A. Hierarchical Planning and High-level Task Planner

Since our inspection problem consists of multiple complicated procedures, applying only one planner cannot efficiently accomplish the entire task. There are mainly three stages of the inspection: (a) approaching the inspection target (i.e., the end of the tunnel), (b) collecting target images, and (c) returning to the start location. Based on the inspection stages, we decompose the problem into the following abstract tasks:

ST = {Forward, Explore, Inspect, Return},    (1)

where the Forward task aims at approaching the inspection target, the Explore task helps the robot gain local map information for navigation, the Inspect task mode generates the path for collecting target images, and the Return task mode navigates the robot back to the starting location. During the inspection process, the robot constantly alternates the task mode using the proposed task planning algorithm (Alg. 1). For each abstract task, the task planner generates the corresponding goal positions and passes them to the lower-level planners for path planning and trajectory optimization.

At the beginning of task planning (Alg. 1), the task planner sets the robot to the Forward task mode, as the robot needs first to approach the tunnel end (Line 1). The task planner runs at a certain replanning frequency to select the current task mode for the robot. Before the robot arrives at the inspection location, the Forward mode (Lines 7-12) lets the robot generate a forward goal at a distance l from the current robot position for navigation. Since, at this stage, the robot does not have a complete environment map and can only rely on the map partially built from its flight, it will first try using the partial map to perform local obstacle avoidance to achieve the forward goal (Line 9). Suppose the lower-level planner fails to find a collision-free trajectory due to the lack of environmental knowledge. In that case, the task planner will switch the current task to the Explore mode to increase the local map information (Lines 10-12). In the Explore mode, the planner first samples to get the best viewpoints with the highest sensor information gain in the current map, then uses the lower-level planner to generate a feasible trajectory for exploration, and finally switches back to the previous task mode (Lines 13-16). For the information gain evaluation, refer to [1][2][3][4] for further details. At the start of each replanning iteration, the algorithm checks whether the robot has reached the inspection target (Lines 4-6). If the robot detects the inspection target wall, the planner will enter the Inspect mode and generate a zigzag path for collecting target images. However, when the built map around the target is not detailed enough for inspection path generation, the planner will switch to the Explore mode again to increase the explored map range (Lines 17-21). After finishing collecting images, the planner will enter the Return mode and navigate back to the start position (Lines 24-27). Note that in the returning step, the robot has already built a sufficiently informative map of static obstacles, accumulated incrementally from the forward and explore steps, to generate a global trajectory to the origin directly.
Algorithm 1: High-level Task Planning Algorithm
1  Tcurr ← Forward Mode ;              ▷ initial task
2  Ct ← false ;                        ▷ termination condition
3  while not Ct do
4      Icond ← reachInspectionTarget();
5      if Icond then
6          Tcurr ← Inspect Mode;
7      if Tcurr ≡ Forward Mode then
8          Pgoal ← getForwardGoal();
9          σtraj, success ← lowerLevelPlanner(Pgoal);
10         if not success then
11             Tcurr ← Explore Mode;
12             Tprev ← Forward Mode;
13     else if Tcurr ≡ Explore Mode then
14         Pgoal ← getBestViewpoint();
15         σtraj ← lowerLevelPlanner(Pgoal);
16         Tcurr ← Tprev;
17     else if Tcurr ≡ Inspect Mode then
18         σtraj, success ← getInspectionPath();
19         if not success then
20             Tcurr ← Explore Mode;
21             Tprev ← Inspect Mode;
22         else
23             Tcurr ← Return Mode;
24     else if Tcurr ≡ Return Mode then
25         Pgoal ← getReturnGoal();
26         σtraj ← lowerLevelPlanner(Pgoal);
27     Ct ← isInspectionComplete();
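As a concrete illustration, the mode-switching logic of Alg. 1 can be sketched as a small state machine. All goal-generation and planner calls below (`get_forward_goal`, `lower_level_planner`, etc.) are hypothetical stand-ins injected as callbacks, not the paper's actual implementation.

```python
# Minimal sketch of the high-level task planner (Alg. 1).
# Every callback name here is a hypothetical stand-in for a module in the paper.

def run_task_planner(callbacks):
    curr, prev = "Forward", "Forward"
    done = False
    while not done:
        # Lines 4-6: switch to Inspect as soon as the target wall is reached.
        if callbacks["reach_inspection_target"]():
            curr = "Inspect"
        if curr == "Forward":
            goal = callbacks["get_forward_goal"]()
            _, ok = callbacks["lower_level_planner"](goal)
            if not ok:                      # not enough map knowledge yet
                curr, prev = "Explore", "Forward"
        elif curr == "Explore":
            goal = callbacks["get_best_viewpoint"]()
            callbacks["lower_level_planner"](goal)
            curr = prev                     # resume the interrupted task
        elif curr == "Inspect":
            _, ok = callbacks["get_inspection_path"]()
            if not ok:                      # map around target not detailed enough
                curr, prev = "Explore", "Inspect"
            else:
                curr = "Return"
        elif curr == "Return":
            goal = callbacks["get_return_goal"]()
            callbacks["lower_level_planner"](goal)
        done = callbacks["is_inspection_complete"]()
    return curr
```

A failed low-level plan in Forward mode thus triggers exactly one Explore detour before resuming, mirroring Lines 10-16 of Alg. 1.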
B. Perception and 3D Dynamic Mapping

This section introduces our proposed 3D dynamic map for navigating dynamic environments, shown in Fig. 3d. Our dynamic map adopts a hybrid method to represent environments, using occupancy voxels for static obstacles and bounding boxes for dynamic obstacles. For static obstacles, we predefine a static voxel map size (i.e., the maximum number of voxels) based on the environment and store the occupancy information of each voxel in an array of reserved length. This allows our planners to access the occupancy information with O(1) time complexity. To update the occupancy information of each voxel, as most static occupancy mapping algorithms do, we apply the classic Bayesian filter with the Markov assumption:

l_t(x) = log [ p(x|z_t) / p(x̄|z_t) ] + log [ p(x̄) / p(x) ] + l_{t−1}(x),    (2)

where l_t(x) is the log odds of the voxel being occupied. By applying Eqn. 2, we can update the occupancy information (i.e., log odds) of each voxel by recursively adding the inverse sensor model log [p(x|z_t)/p(x̄|z_t)] with the predefined prior log [p(x̄)/p(x)]. Besides, since dynamic obstacles can also be mapped into the static voxel map, which can lead to noisy voxels, we iterate through each detected dynamic obstacle bounding box and set all voxels inside the dynamic regions to be free.
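To make Eqn. 2 concrete, a minimal log-odds update for a single voxel can be sketched as follows; the sensor-model probabilities are illustrative values, not the ones used on the robot.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

# Illustrative inverse sensor model: a voxel hit by a depth return is
# likely occupied; a voxel a ray passed through is likely free.
P_HIT, P_MISS, P_PRIOR = 0.7, 0.4, 0.5

def update_voxel(l_prev, hit):
    """One recursive Bayesian log-odds update (Eqn. 2) for a single voxel."""
    inv_sensor = logit(P_HIT if hit else P_MISS)   # log p(x|z_t)/p(xbar|z_t)
    prior = -logit(P_PRIOR)                        # log p(xbar)/p(x)
    return inv_sensor + prior + l_prev

def occupancy_prob(l):
    """Convert log odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

With a uniform prior of 0.5 the prior term vanishes, so repeated hits simply accumulate positive log odds until the voxel is confidently occupied.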
Fig. 3. Illustration of the proposed 3D dynamic map. (a) A person walks in front of the robot in the RGB camera view. (b) The person is detected as a dynamic obstacle in the depth image. (c) The detection results in the U-depth map for obstacle widths and thicknesses. (d) The 3D dynamic map shows the dynamic obstacle as a bounding box and static obstacles as the voxel map.
The dynamic obstacles are detected and tracked using the depth image and represented by axis-aligned 3D bounding boxes. There are mainly three steps in the proposed method: region proposal detection, map-depth fusion, and dynamic obstacle filtering. In the region proposal detection step, we use the method mentioned in [6] to generate the U-depth map, shown in Fig. 3c, by constructing a histogram of the depth values in the depth image. The vertical axis of the U-depth map, from top to bottom, represents the depth range with a user-defined bin width. Intuitively, the U-depth map can be viewed as a top-down view image. Inspired by [6][24], we apply the line grouping method to detect the obstacle regions in the U-depth map. From these detection results, we obtain the widths and thicknesses of obstacles and then find the corresponding heights in the original depth image, as shown in Fig. 3b. After this step, we get the "region proposal bounding boxes" for dynamic obstacles by applying a coordinate transformation into the map frame. Since the region proposals are only rough detection results, our second step, map-depth fusion, inflates these region proposals locally with a ratio λ and then searches occupied voxels in the static voxel map to get the refined bounding boxes of obstacles. With the refined bounding boxes, the dynamic obstacle filtering method is applied to identify and track dynamic obstacles. First, we utilize the Kalman filter to track and compute the velocity of each obstacle bounding box with the linear propagation model:

p_o^{k+1} = p_o^k + v_o^k (t_{k+1} − t_k),   v_o^k = (p_o^k − p_o^{k−1}) / (t_k − t_{k−1}),    (3)

where p_o^{k+1} is the predicted obstacle position at the next time step and v_o^k is the previously estimated velocity. Then, we identify bounding boxes with velocities greater than the threshold V_th as dynamic obstacles. Finally, we remove bounding boxes with jerky motions using the obstacles' velocity histories, accounting for detection noise that makes static obstacles appear to shake back and forth slightly.
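A simplified, filter-free version of the propagation and classification step of Eqn. 3 can be sketched as below (the full method uses a Kalman filter for smoothing; the velocity threshold here is an illustrative value, not the paper's V_th):

```python
import math

V_TH = 0.3   # velocity threshold [m/s] for "dynamic"; illustrative value only

def estimate_velocity(p_prev, p_curr, t_prev, t_curr):
    """Finite-difference velocity of a bounding-box center (Eqn. 3, right)."""
    dt = t_curr - t_prev
    return tuple((c - p) / dt for p, c in zip(p_prev, p_curr))

def predict_position(p, v, dt):
    """Linear propagation p_{k+1} = p_k + v_k * dt (Eqn. 3, left)."""
    return tuple(pi + vi * dt for pi, vi in zip(p, v))

def is_dynamic(v):
    """Classify a box as dynamic when its speed exceeds the threshold."""
    return math.sqrt(sum(vi * vi for vi in v)) > V_TH
```

The history-based jerky-motion filter described above would then reject boxes whose estimated velocity flips direction between consecutive frames.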
C. Navigation and Obstacle Avoidance
When a goal position is determined by the high-level task planner, the mid-level static planner first finds a smooth trajectory considering static obstacles. Then, using this static trajectory, the low-level dynamic planner optimizes a collision-free trajectory based on static and dynamic obstacles at a certain replanning frequency. For the mid-level static planner, we apply the RRT* planner to find the waypoint path and use minimum-snap polynomial optimization with corridor constraints [27][28] for trajectory generation. To achieve fast replanning for dynamic obstacle avoidance, the low-level planner adopts our gradient-based trajectory optimization. The B-spline trajectory of order k over a time knot vector can be parameterized as a series of control points:

Ŝ = {P_1, P_2, P_3, ..., P_{N−1}, P_N},  P_i ∈ R³,    (4)

where the optimization variable set S contains the N − 2(k − 1) intermediate control points P_i. With the trajectory optimization variables, we can write the objective function as follows:

C_total(S) = α_control · C_control + α_smooth · C_smooth + α_static · C_static + α_dynamic · C_dynamic,    (5)

and the weighted sum has four costs to minimize: the control limit cost, the smoothness cost, the static collision cost, and the dynamic collision cost. The control limit cost ensures the trajectory has feasible velocities and accelerations. The control points for velocity V_i and acceleration A_i are computed by:

V_i = (P_{i+1} − P_i) / δt,   A_i = (V_{i+1} − V_i) / δt,    (6)

where δt is the time step. We use the L2 norm to penalize infeasible velocities and accelerations:

C_control = Σ_i ( ||V_i − v_max||²₂ / λ_vel + ||A_i − a_max||²₂ / λ_acc ),    (7)

in which v_max and a_max are the maximum velocity and acceleration limits, and the λ terms are unit normalization factors. Note that the control limit costs are zero for velocities and accelerations below the limits. The smoothness cost reduces the jerk (i.e., the third derivative of position) of the trajectory:

C_smooth = Σ_i ||J_i||²₂,   J_i = (A_{i+1} − A_i) / δt.    (8)
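A 1-D sketch of the finite differences in Eqn. 6 and the clipped costs of Eqns. 7-8 is given below. The λ normalization factors are set to 1 here, and, following the note in the text, only the portion of a velocity or acceleration exceeding its limit is penalized:

```python
def diff(points, dt):
    """Finite-difference control points (Eqn. 6): positions -> velocities, etc."""
    return [(b - a) / dt for a, b in zip(points, points[1:])]

def control_cost(P, dt, v_max, a_max):
    """Eqn. 7 with clipping: zero cost below the limits, squared excess above."""
    V = diff(P, dt)
    A = diff(V, dt)
    c = sum(max(abs(v) - v_max, 0.0) ** 2 for v in V)
    c += sum(max(abs(a) - a_max, 0.0) ** 2 for a in A)
    return c

def smooth_cost(P, dt):
    """Eqn. 8: sum of squared jerk control points."""
    J = diff(diff(diff(P, dt), dt), dt)
    return sum(j * j for j in J)
```

In the actual optimizer these sums run over 3-D control points and are combined with the collision terms through the weights of Eqn. 5.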
The static collision cost is computed based on the proposed circle-based guide-point method shown in Fig. 4a. The initial trajectory is shown as the blue dotted line with the brown collision control points. To calculate the costs for those collision control points, we first search a collision-free path (purple dots and lines in Fig. 4a) using A* or Dijkstra to bypass the static obstacle. If there are N collision control points, we cast a ray for the collision control point of sequence order n at the angle (180/(N+1))·n degrees. Note that the angle is measured between the casting ray (dotted blue arrow) and the line connecting the first and last collision control points. The guide points P_guide are the intersection points of the casting rays with the searched path. The algorithm is circle-based because the direction angles sweep a semicircle. With the associated guide point for each collision control point, we design the total static collision cost, based on experiments, as a clipped cubic penalty function:

C_static = Σ_i [ max( d_safe − signDist(P_i, P^i_guide), 0 ) ]³,    (9)

where d_safe is the user-defined safe distance, and the signed distance function is positive when the control point is outside the obstacle and negative when it is inside. Intuitively, we penalize control points with small or negative distances to obstacles, and the static collision costs are zero for control points with a distance greater than the safe distance.
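The paper evaluates signDist against each control point's guide point; the sketch below substitutes a plain sphere signed distance as a stand-in to show the shape of the clipped cubic penalty of Eqn. 9 (the safe distance is an illustrative value):

```python
D_SAFE = 0.5   # user-defined safe distance [m]; illustrative value

def signed_dist_sphere(p, center, radius):
    """Stand-in signed distance: positive outside the obstacle, negative inside."""
    d = sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5
    return d - radius

def static_cost(control_points, center, radius):
    """Eqn. 9: clipped cubic penalty on control points too close to the obstacle."""
    c = 0.0
    for p in control_points:
        c += max(D_SAFE - signed_dist_sphere(p, center, radius), 0.0) ** 3
    return c
```

Because the penalty is clipped at zero and cubic inside the margin, the gradient grows smoothly as a control point approaches and then enters the obstacle, which suits gradient-based optimization.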
Since the dynamic obstacles are moving, it is unreliable to use only the currently detected information for cost computation. Therefore, we propose the receding horizon distance field to estimate the dynamic collision cost with future predictions, shown in Fig. 4b. In this figure, the dynamic obstacle with leftward velocity V_o is represented as the blue circle with center O and radius r. We apply linear prediction to get the obstacle's future position C with a prediction horizon of k time steps. Since the reliability of the prediction decreases with increasing prediction time, we linearly decrease the obstacle size to zero at the final predicted position C in a receding horizon manner. Thus, we obtain the collision region as the combination of a polygon region AOBC and a circular region enclosed by the arc AEB, line AO, and line BO. When the control point P_{i,p} is inside the polygon region, we draw a line through P_{i,p} perpendicular to the line AC, intersecting it at point D (the red line in Fig. 4b). The distance Δd_i to the safe area (outside the collision region) can be computed as:

Δd_i = ||D − O′||₂ − ||P_{i,p} − O′||₂.    (10)

On the other hand, when the control point P_{i,c} is inside the circular region, the distance Δd_i to the safe area is:

Δd_i = r − ||P_{i,c} − O||₂.    (11)

For the control points P_{i,out} that are outside both the polygon and circular regions, we set the distance Δd_i to the safe area to zero. With the distance to the safe area, we compute the final dynamic collision cost as:

C_dynamic = Σ_i [ max(Δd_i, 0) ]³.    (12)

For both static and dynamic collision costs, the gradients can be computed using the chain rule with Eqn. 9 and Eqn. 12.
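The polygon-plus-arc geometry above is tied to Fig. 4b; the sketch below captures the same receding-horizon idea in a simplified 2-D form, as an illustrative variant rather than the paper's exact distance field: the obstacle disc is propagated linearly, its radius shrinks to zero over the horizon, and a control point pays the clipped cubic penalty of Eqn. 12 for its deepest penetration into any predicted disc.

```python
def dynamic_cost(control_pt, obs_pos, obs_vel, radius, horizon_k, dt):
    """Simplified receding-horizon dynamic collision cost (spirit of Eqns. 10-12)."""
    worst = 0.0
    for k in range(horizon_k + 1):
        shrink = 1.0 - k / horizon_k     # radius decays linearly to zero at k = K
        center = tuple(p + v * k * dt for p, v in zip(obs_pos, obs_vel))
        dist = sum((a - b) ** 2 for a, b in zip(control_pt, center)) ** 0.5
        worst = max(worst, shrink * radius - dist)
    return max(worst, 0.0) ** 3          # clipped cubic penalty (Eqn. 12)
```

Note that a point sitting exactly at the final predicted obstacle position incurs no cost, since the predicted radius there has shrunk to zero, reflecting the decreasing trust in long-horizon predictions.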
D. Inspection and 3D Reconstruction

After finishing the entire inspection task, the data post-processing module applies Structure-from-Motion (SfM) to reconstruct the 3D shape of the inspection target from the collected target images. When the robot has reached the inspection target, it first explores the local area until it has enough map information about the target. Then, in our implementation, the robot generates a zigzag pattern path fully covering the target wall and collects color images during the flight. Our SfM pipeline for reconstruction is based on COLMAP [29]. The algorithm first extracts the features of each image using a numerical descriptor. Since our input images come from the stream of an RGB camera, the second step utilizes sequential matching to find correspondences across images. Finally, from an initial corresponding image pair, the algorithm incrementally reconstructs the 3D shape of the inspection target by triangulating new points.

Fig. 4. Illustration of the collision cost in our B-spline optimization. (a) The static collision cost is calculated using the proposed circle-based guide points (red dots). (b) The dynamic collision cost is obtained by the receding horizon distance field, which considers the future predictions of the obstacle positions.
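The zigzag coverage path over the target wall can be sketched as a simple boustrophedon waypoint generator in wall-local 2-D coordinates; the spacing parameter is hypothetical and would in practice be tied to the camera footprint and desired image overlap:

```python
def zigzag_path(width, height, spacing):
    """Hypothetical zigzag (boustrophedon) waypoints covering a width x height
    wall, sweeping horizontally and stepping up by `spacing` each row."""
    waypoints, y, left_to_right = [], 0.0, True
    while y <= height + 1e-9:
        row = [(0.0, y), (width, y)]
        waypoints += row if left_to_right else row[::-1]
        left_to_right = not left_to_right    # alternate sweep direction
        y += spacing
    return waypoints
```

Each consecutive waypoint pair defines one horizontal sweep during which the RGB images for the SfM pipeline are collected.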
V. RESULT AND DISCUSSION

A. Implementation Details

We conduct simulation experiments and physical flight tests in dynamic tunnel environments to evaluate the proposed method's performance. The simulation environments are based on ROS and Gazebo. For the physical experiments, we visited a tunnel under construction in Japan and applied our customized quadcopter (Fig. 5) to test the proposed framework. The quadcopter is equipped with an Intel RealSense D435i stereo camera, a PX4-based flight controller, and an NVIDIA Jetson Xavier NX onboard computer. It weighs ∼1.5 kg and has a 15-minute flight duration. We adopt the visual-inertial odometry (VIO) algorithm [30] for robot state estimation. All of the perception and planning computations are performed on the onboard computer. The color images are collected during the inspection stage with the RealSense D435i camera, and the data post-processing for 3D reconstruction is completed on a desktop with an NVIDIA RTX 3080 GPU.
B. Evaluation of Navigation and Obstacle Avoidance

The navigation and obstacle avoidance in the forward task (i.e., approaching the tunnel end) is the most challenging and time-consuming part of the entire inspection process, since the environment is cluttered and unknown. So, to evaluate the performance of forward navigation and obstacle avoidance, we prepared 5 simulation environments containing different static and dynamic obstacles, with one example environment shown in Fig. 6.

Fig. 5. The customized autonomous quadcopter for inspection experiments.

Fig. 6. Illustration of an example simulation tunnel environment in Gazebo. In the forward task, the robot needs to navigate from the tunnel start (left side) to the tunnel end (right side) and avoid static and dynamic obstacles.

For benchmarking, we select the sampling-based planning methods (SBP) [1][5] and the dynamic exploration planning (DEP) method [4], with modifications for the tunnel environments. Besides, we also include our method without the dynamic map (Sec. IV-B) to compare obstacle avoidance performance. In each experiment, we let the robot navigate from the start of the tunnel to its end. We run 10 experiments in each environment with different obstacles and record the average navigation time, the average replanning time for dynamic obstacle avoidance, and the collision rate over all experiments. Note that we set the navigation time and replanning time of the sampling-based planning methods (SBP) [1][5] to 100% for comparison. The collision rate is calculated as the number of experiments with collisions divided by the total number of experiments.
+ From the results in Table I, one can see that our method
672
+ has the second least navigation time, which is 81.69% of the
673
+ sampling-based planning (SBP) method, and takes almost the
674
+ same amount of time as its non-dynamic-map version. The
675
+ dynamic exploration planning (DEP) method uses less time
676
+ than the sampling-based method but a longer time than our
677
+ method. From our observations, both the SBP and the DEP
678
+ generate their trajectories inside the explored regions, which
679
+ is over-conservative, leading to more stop-and-go behavior. On
680
+ the contrary, since our planner adopts a hierarchical scheme,
681
+ the task planner first tries using the more aggressive local
682
+ planner for obstacle avoidance by planning in the unknown
683
+ regions and only applies the conservative exploration planner
684
+ when the local planning fails. This task-switching behavior
685
+ hugely reduces the navigation time. For the replanning time,
686
+ our method takes only 1.16% of the time compared to the
687
+ SBP and significantly less than the DEP. This huge difference
688
+ in the replanning speed is mainly due to our computationally
689
+ lightweight gradient-based trajectory optimization and the long
690
+ computation time in the information gain evaluation of the
697
+ SBP and the DEP. For the collision rate, it is shown that our
698
+ method has no collision among all experiment runs, and both
699
+ the SBP and our method without the dynamic map have a high
700
+ collision rate (around 30%). The DEP has a lower collision
701
+ rate than the SBP since it utilizes an incremental roadmap for
702
+ faster dynamic obstacle avoidance but still has more collisions
703
+ than our method. Comparing our method with and without the
704
+ dynamic map shows that the dynamic map version has a much
705
+ lower collision rate by using dynamic obstacle information.
706
+ TABLE I
+ THE BENCHMARK OF THE NAVIGATION TIME, THE REPLANNING TIME, AND THE COLLISION RATE BY RUNNING 50 SIMULATION EXPERIMENTS.
+ Methods      | Nav. Time      | Replan. Time | Collision Rate
+ SBP [1][5]   | 100 ± 0%       | 100%         | 30.00%
+ DEP [4]      | 92.80 ± 3.01%  | 54.30%       | 24.00%
+ Ours w/o DM  | 81.06 ± 4.40%  | 1.20%        | 32.00%
+ Ours         | 81.69 ± 3.66%  | 1.16%        | 0.00%
729
+ C. Evaluation of Dynamic Obstacle Tracking
730
+ We measure the average tracking errors in positions, ve-
731
+ locities, and obstacle sizes shown in Table II to evaluate the
732
+ dynamic obstacle detection and tracking performance. The
733
+ ground truth states of the obstacles can be easily obtained
734
+ in the simulation experiments, and we apply the OptiTrack
735
+ motion capture system in the physical tests to obtain the
736
+ ground truth states. We let two persons walk within the motion
737
+ capture area, compare the tracking results from the robot
738
+ and the motion capture system, and use the average value
739
+ differences as tracking errors. From Table II, one can see that
740
+ the position errors are 0.09m and 0.19m in simulation and
741
+ physical tests, respectively. The position errors in the physical
742
+ tests are larger than in simulation due to image noise
+ from the depth camera. Similarly, this camera noise
+ also makes the velocity errors in the physical tests greater than
+ those in simulation. The size errors are similar in both simulation and
746
+ physical tests. In the experiments, to account for the tracking
747
+ errors in the positions, velocities, and sizes, we increase the
748
+ safety distance to obstacles by a self-defined margin r, and our
+ experiment results show that our dynamic obstacle tracking
+ system enables the robot to successfully avoid moving obstacles.
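The margin inflation described above can be sketched as follows (assumed logic with illustrative names, not the authors' implementation): the tracked obstacle extent used for collision checking absorbs the measured size error plus the user-defined margin r.

```python
# Sketch of the safety-margin inflation described above (assumed logic with
# illustrative names, not the authors' implementation): the obstacle extent
# used for collision checking absorbs the tracking size error plus a
# user-defined margin r.

def inflated_halfwidth(tracked_size, margin_r, size_error=0.25):
    # size_error defaults to the 0.25 m measured in Table II
    return 0.5 * tracked_size + size_error + margin_r
```

With a 1.0 m tracked obstacle and a 0.1 m margin, the half-width used for checking grows to 0.85 m.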
751
+ TABLE II
+ MEASUREMENT OF THE DETECTION AND TRACKING ERRORS.
+ Errors               | Simulation Tests | Physical Tests
+ Position Error (m)   | 0.09             | 0.19
+ Velocity Error (m/s) | 0.10             | 0.21
+ Size Error (m)       | 0.25             | 0.25
765
+ D. Physical Flight Tests
766
+ To evaluate and verify the proposed framework, we ran
767
+ flight tests in a tunnel under construction in Japan, shown in
768
+ Fig. 1 and 7. In each flight test, the robot starts at 20 meters
769
+ Fig. 7. The physical inspection flight test in a tunnel under construction in
770
+ Japan. The bottom shows the Rviz of the obstacles and the trajectory.
771
+ in front of the tunnel excavation front and navigates toward
772
+ the inspection area. Note that there are static and dynamic
773
+ obstacles (i.e., walking workers) on the robot’s way to its target
774
+ location shown at the top of Fig. 7. The corresponding Rviz
775
+ visualization is shown at the bottom of Fig. 7, and one can
776
+ see that the robot can generate a collision-free trajectory for
777
+ navigation. After reaching the inspection area, the robot will
778
+ follow the zigzag path to inspect the tunnel excavation front
779
+ shown in Fig. 1d and collect RGB images for further 3D re-
780
+ construction. During the navigation period, the robot’s velocity
781
+ is maintained at 1.0m/s. The results show that our framework
782
+ can complete the entire inspection task autonomously.
783
+ E. Evaluation of 3D Reconstruction
784
+ The final output of our framework is the 3D shape of
785
+ the tunnel excavation front shown in Fig. 8. To obtain the
786
+ results, we run the SfM-based reconstruction mentioned in
787
+ Sec. IV-D with 294 color images of 640x480 resolution. The
788
+ total processing time is 30 minutes using an NVIDIA RTX
789
+ 3080 GPU, and the minimum number of images required for
790
+ this experiment is 60, which takes only 5 minutes for
791
+ reconstruction. In Fig. 8, the first row shows the reconstruction
792
+ results from different views, and the second row visualizes
793
+ the error heatmap from the comparison with the ground truth
794
+ model. Note that we use the Topcon laser scanner to obtain
795
+ the ground truth model of the inspection target. The red
796
+ and blue portions of the heatmap represent high and
+ low reconstruction error, respectively. The average error of the
798
+ reconstruction model is 5.38cm with a standard deviation of
799
+ 7.96cm. The third row shows the heatmap comparison with
800
+ the tunnel CAD model, the designed shape for the tunnel.
801
+ From the heatmap, the workers can identify the yellow and red
802
+ regions as the locations for concrete spraying and excavation.
803
+ VI. CONCLUSION AND FUTURE WORK
804
+ This paper presents a vision-based autonomous UAV in-
805
+ spection framework for tunnel environments. The proposed
806
+ framework adopts a hierarchical planning scheme to solve
807
+ the complicated inspection problem using different planning
808
+ layers. Our depth-based 3D dynamic map can represent static
809
+
810
+ Fig. 8.
814
+ The 3D reconstruction results of the excavation front of the tunnel
815
+ under construction in Japan. The first row shows the 3D reconstruction model
816
+ from different views. The second row visualizes the error heatmap obtained
817
+ from the comparison of the laser-scanned ground truth. The third row presents
818
+ the heatmap comparison of the reconstruction model with the CAD model.
819
+ obstacles and track dynamic obstacles simultaneously. The
820
+ experiment results prove that our framework can make the
821
+ quadcopter safely navigate toward the inspection target to
822
+ perform the inspection and return to the origin. The final
823
+ 3D reconstruction results obtained from our SfM-based data
824
+ post-processing pipeline have a low error compared to the
825
+ ground truth. For future work, we want to apply learning-based
826
+ methods to classify dynamic obstacles for better performance.
827
+ VII. ACKNOWLEDGEMENT
828
+ The authors would like to thank TOPRISE CO., LTD and
829
+ Obayashi Corporation for their financial support in this work
830
+ and for providing a tunnel construction site for the flight tests.
831
+ REFERENCES
832
+ [1] A. Bircher, M. Kamel, K. Alexis, H. Oleynikova, and R. Siegwart,
833
+ “Receding horizon ‘next-best-view’ planner for 3D exploration,” in 2016
834
+ IEEE International Conference on Robotics and Automation (ICRA),
835
+ 2016, pp. 1462–1468.
836
+ [2] M. Selin, M. Tiger, D. Duberg, F. Heintz, and P. Jensfelt, “Efficient
837
+ autonomous exploration planning of large-scale 3-d environments,”
838
+ IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1699–1706,
839
+ 2019.
840
+ [3] L. Schmid, M. Pantic, R. Khanna, L. Ott, R. Siegwart, and J. Nieto, “An
841
+ efficient sampling-based method for online informative path planning in
842
+ unknown environments,” IEEE Robotics and Automation Letters, vol. 5,
843
+ no. 2, pp. 1500–1507, 2020.
844
+ [4] Z. Xu, D. Deng, and K. Shimada, “Autonomous uav exploration
845
+ of dynamic environments via incremental sampling and probabilistic
846
+ roadmap,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp.
847
+ 2729–2736, 2021.
848
+ [5] B. Bendris and J. Cayero Becerra, “Design and experimental evaluation
849
+ of an aerial solution for visual inspection of tunnel-like infrastructures,”
850
+ Remote Sensing, vol. 14, no. 1, p. 195, 2022.
851
+ [6] H. Oleynikova, D. Honegger, and M. Pollefeys, “Reactive avoidance
852
+ using embedded stereo vision for mav flight,” in 2015 IEEE Interna-
853
+ tional Conference on Robotics and Automation (ICRA).
854
+ IEEE, 2015,
855
+ pp. 50–56.
856
+ [7] J. Lin, H. Zhu, and J. Alonso-Mora, “Robust vision-based obstacle
857
+ avoidance for micro aerial vehicles in dynamic environments,” in 2020
858
+ IEEE International Conference on Robotics and Automation (ICRA).
859
+ IEEE, 2020, pp. 2682–2688.
860
+ [8] Z. Xu, Y. Xiu, X. Zhan, B. Chen, and K. Shimada, “Vision-aided
861
+ uav navigation and dynamic obstacle avoidance using gradient-based b-
862
+ spline trajectory optimization,” arXiv preprint arXiv:2209.07003, 2022.
863
+ [9] P. Shanthakumar, K. Yu, M. Singh, J. Orevillo, E. Bianchi, M. Heb-
864
+ don, and P. Tokekar, “View planning and navigation algorithms for
865
+ autonomous bridge inspection with uavs,” in International Symposium
866
+ on Experimental Robotics.
867
+ Springer, 2020, pp. 201–210.
868
+ [10] N. Bolourian and A. Hammad, “Lidar-equipped uav path planning con-
869
+ sidering potential locations of defects for bridge inspection,” Automation
870
+ in Construction, vol. 117, p. 103250, 2020.
871
+ [11] Y. Tan, S. Li, H. Liu, P. Chen, and Z. Zhou, “Automatic inspection data
872
+ collection of building surface based on bim and uav,” Automation in
873
+ Construction, vol. 131, p. 103881, 2021.
874
+ [12] T. Elmokadem, “A 3d reactive navigation method for uavs in unknown
875
+ tunnel-like environments,” in 2020 Australian and New Zealand Control
876
+ Conference (ANZCC).
877
+ IEEE, 2020, pp. 119–124.
878
+ [13] T. Elmokadem and A. V. Savkin, “A method for autonomous collision-
879
+ free navigation of a quadrotor uav in unknown tunnel-like environ-
880
+ ments,” Robotica, vol. 40, no. 4, pp. 835–861, 2022.
881
+ [14] R. S. Pahwa, K. Y. Chan, J. Bai, V. B. Saputra, M. N. Do, and
882
+ S. Foong, “Dense 3d reconstruction for visual tunnel inspection using
883
+ unmanned aerial vehicle,” in 2019 IEEE/RSJ International Conference
884
+ on Intelligent Robots and Systems (IROS). IEEE, 2019, pp. 7025–7032.
885
+ [15] C. Connolly, “The determination of next best views,” in Proceedings.
886
+ 1985 IEEE international conference on robotics and automation, vol. 2.
887
+ IEEE, 1985, pp. 432–435.
888
+ [16] B. Yamauchi, “A frontier-based approach for autonomous exploration,”
889
+ in Proceedings 1997 IEEE International Symposium on Computational
890
+ Intelligence in Robotics and Automation CIRA’97.’Towards New Com-
891
+ putational Principles for Robotics and Automation’.
892
+ IEEE, 1997, pp.
893
+ 146–151.
894
+ [17] T. Cieslewski, E. Kaufmann, and D. Scaramuzza, “Rapid exploration
895
+ with multi-rotors: A frontier selection method for high speed flight,”
896
+ in 2017 IEEE/RSJ International Conference on Intelligent Robots and
897
+ Systems (IROS).
898
+ IEEE, 2017, pp. 2135–2142.
899
+ [18] B. Charrow, G. Kahn, S. Patil, S. Liu, K. Goldberg, P. Abbeel,
900
+ N. Michael, and V. Kumar, “Information-theoretic planning with tra-
901
+ jectory optimization for dense 3d mapping.” in Robotics: Science and
902
+ Systems, vol. 11, 2015, pp. 3–12.
903
+ [19] O. Khatib, “Real-time obstacle avoidance for manipulators and mobile
904
+ robots,” in Autonomous robot vehicles.
905
+ Springer, 1986, pp. 396–404.
906
+ [20] J. v. d. Berg, S. J. Guy, M. Lin, and D. Manocha, “Reciprocal n-body
907
+ collision avoidance,” in Robotics research.
908
+ Springer, 2011, pp. 3–19.
909
+ [21] P. Fiorini and Z. Shiller, “Motion planning in dynamic environments
910
+ using velocity obstacles,” The international journal of robotics research,
911
+ vol. 17, no. 7, pp. 760–772, 1998.
912
+ [22] L. Blackmore, M. Ono, and B. C. Williams, “Chance-constrained
913
+ optimal path planning with obstacles,” IEEE Transactions on Robotics,
914
+ vol. 27, no. 6, pp. 1080–1094, 2011.
915
+ [23] H. Zhu and J. Alonso-Mora, “Chance-constrained collision avoidance
916
+ for mavs in dynamic environments,” IEEE Robotics and Automation
917
+ Letters, vol. 4, no. 2, pp. 776–783, 2019.
918
+ [24] J. Lin, H. Zhu, and J. Alonso-Mora, “Robust vision-based obstacle
919
+ avoidance for micro aerial vehicles in dynamic environments,” in 2020
920
+ IEEE International Conference on Robotics and Automation (ICRA).
921
+ IEEE, 2020, pp. 2682–2688.
922
+ [25] Z. Xu, D. Deng, Y. Dong, and K. Shimada, “Dpmpc-planner: A real-
923
+ time uav trajectory planning framework for complex static environments
924
+ with dynamic obstacles,” in 2022 International Conference on Robotics
925
+ and Automation (ICRA).
926
+ IEEE, 2022, pp. 250–256.
927
+ [26] G. Chen, P. Peng, P. Zhang, and W. Dong, “Risk-aware trajectory
928
+ sampling for quadrotor obstacle avoidance in dynamic environments,”
929
+ arXiv preprint arXiv:2201.06645, 2022.
930
+ [27] D. Mellinger and V. Kumar, “Minimum snap trajectory generation
931
+ and control for quadrotors,” in 2011 IEEE international conference on
932
+ robotics and automation.
933
+ IEEE, 2011, pp. 2520–2525.
934
+ [28] C. Richter, A. Bry, and N. Roy, “Polynomial trajectory planning for
935
+ aggressive quadrotor flight in dense indoor environments,” in Robotics
936
+ research.
937
+ Springer, 2016, pp. 649–666.
938
+ [29] J. L. Schönberger and J.-M. Frahm, “Structure-from-motion revisited,”
939
+ in Conference on Computer Vision and Pattern Recognition (CVPR),
940
+ 2016.
941
+ [30] T. Qin, P. Li, and S. Shen, “Vins-mono: A robust and versatile monocular
942
+ visual-inertial state estimator,” IEEE Transactions on Robotics, vol. 34,
943
+ no. 4, pp. 1004–1020, 2018.
944
+
39FAT4oBgHgl3EQfEhxj/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
3tFAT4oBgHgl3EQfERy2/content/tmp_files/2301.08421v1.pdf.txt ADDED
@@ -0,0 +1,1420 @@
1
+ Peculiar properties in quasi-normal spectra from loop quantum
2
+ gravity effect
3
+ Guoyang Fu1,∗ Dan Zhang 2,† Peng Liu 3,‡ Xiao-Mei Kuang1,4,§ and Jian-Pin Wu1,4¶
4
+ 1 Center for Gravitation and Cosmology,
5
+ College of Physical Science and Technology,
6
+ Yangzhou University, Yangzhou 225009, China
7
+ 2
8
+ Key Laboratory of Low Dimensional Quantum
9
+ Structures and Quantum Control of Ministry of Education,
10
+ Synergetic Innovation Center for Quantum Effects and Applications,
11
+ and Department of Physics, Hunan Normal University, Changsha, Hunan 410081, China
12
+ 3 Department of Physics and Siyuan Laboratory,
13
+ Jinan University, Guangzhou 510632, P.R. China and
14
+ 4 Shanghai Frontier Science Center for Gravitational Wave Detection,
15
+ Shanghai Jiao Tong University, Shanghai 200240, China
16
+ Abstract
17
+ We investigate the quasi-normal mode (QNM) spectra for scalar and electromagnetic fields over
18
+ a covariant loop quantum gravity black hole (LQG-BH). For the fundamental modes, the LQG
19
+ effect reduces the oscillations in the scalar field; however, it induces stronger oscillations in the elec-
+ tromagnetic field, compared to the classical case. Under the scalar field perturbation, the system
21
+ enjoys faster decaying modes with more oscillations than the electromagnetic field. Some peculiar
22
+ phenomena emerge in the scalar field’s QNM spectra with high overtones for the angular quantum
23
+ numbers l > 0. Specifically, the LQG-BH has a larger real part of the QNM with high overtones than
24
+ the Schwarzschild black hole (SS-BH). Such an anomalous phenomenon results in the oscillation
25
+ of the scalar field in the LQG-BH being nearly identical to that in the SS-BH. Therefore, the high
26
+ overtone modes of the scalar field in LQG-BH play an important role in the modes with l > 0.
27
+ This anomalous phenomenon, however, does not occur in the electromagnetic field’s QNM spectra.
28
29
30
31
32
33
+ 1
34
+ arXiv:2301.08421v1 [gr-qc] 20 Jan 2023
35
+
36
+ I.
37
+ INTRODUCTION
38
+ A non-perturbative and background-independent technique, loop quantum gravity (LQG)
39
+ [1–4], provides a scenario for quantizing space-time structure. This approach has been suc-
40
+ cessfully applied to quantize symmetry reduced cosmological space-times, known as loop
41
+ quantum cosmology (LQC) [5–12]. Effective LQC theory can be constructed by incorporat-
42
+ ing two key quantum gravity effects, namely the inverse volume correction and the holonomy
43
+ correction, which can be achieved using both the canonical approach [13–18] and the path
44
+ integral perspectives [19–25]. The quantum gravity effects in LQC can be connected to
45
+ low-energy physics, resulting in a solvable cosmological model for studying quantum gravity
46
+ effects. In particular, the big bang singularity in classical general relativity (GR) is suc-
47
+ cessfully avoided by the quantum gravity effects [5–12, 26–31], which instead result in a
48
+ non-singular big bounce even at the semi-classical level [32, 33].
49
+ Following the same idea in LQC [5–12], several effective black holes (BH) models with
50
+ LQG corrections have been constructed. To date, most of the effective LQG-BHs are im-
51
+ plemented through the input of the holonomy correction; see, for example, [34–44] and
52
+ references therein. A common feature of LQG-BHs is that the singularity is replaced by a
53
+ transition surface between a trapped and an anti-trapped region, which can be understood
54
+ as the interior regions of a black hole and a white hole.
55
+ The heart of the holonomy correction is the phase space regularisation technique called
56
+ polymerisation [45]. Because of this, the effective LQG-BH with holonomy correction is also
57
+ known as the polymer BH. The basic idea behind polymerisation is the replacement of the
58
+ conjugate momentum p with its regularised counterpart sin(¯λp)/¯λ, where ¯λ is a quantity
+ known as the polymerisation scale, which is linked to the area gap. Depending on whether the
+ polymerisation scale is a constant or a phase-space-dependent function, the polymer BHs are
61
+ classified into two basic types:
62
+ • µ0-type scheme
63
+ In this scheme, the polymerization scale is assumed to remain constant over the whole
64
+ phase space [34–38]. This approach has the drawback that the final result is reliant on
65
+ the fiducial structures, which are introduced in the construction of the classical phase
66
+ space. In addition, even in the low-curvature regimes, significant quantum effects may
67
+ manifest, making these models unphysical.
68
+ 2
69
+
70
+ • ¯µ-type scheme
71
+ The polymerization scale in ¯µ-type scheme is chosen to be a function of the phase
72
+ space [39–42] such that the dependency on fiducial structures is removed, though in
73
+ this scheme significant quantum corrections near the horizon may still survive.
74
+ Further, to fix the problems mentioned above, the authors in [46, 47] came up with a
75
+ generalized version of the µ0-scheme in which the polymerization scale relies on the black hole
76
+ mass such that it stays the same only along the dynamical trajectories. This effective model
77
+ does remove the problems described above, namely, the dependence on fiducial structures
78
+ and the large quantum effects that emerge in the low-curvature regime. This model, however,
79
+ also suffers from mass amplifications when crossing the transition surface. More recently,
80
+ A. Ashtekar, J. Olmedo, and P. Singh proposed an improved version of the generalized
81
+ µ0-scheme [48, 49], which is now known as the AOS model. The polymerization scale in
82
+ this model is thought to depend on the effective Hamiltonian itself. This model is quite
83
+ interesting since it removes not only the drawbacks of the µ0-scheme, but also the mass
84
+ amplifications in the primitive version of the generalized µ0-scheme [46, 47].
85
+ However, the covariance in the aforementioned models is usually broken [50–53]. Follow-
86
+ ing the idea of the anomaly-free polymerization in [54], the authors in [55, 56] construct a
87
+ covariant model of a spherically symmetric BH with holonomy correction. The polymeriza-
88
+ tion scale ¯λ is a constant in this model, and it is related to a fundamental length scale r0
89
+ by a constant of motion m. The resulting geometry corresponds to a singularity-free inte-
90
+ rior region and two asymptotically flat exterior regions of equal mass. It is expected that
91
+ this covariant model can alleviate certain long-standing concerns in the LQG community.
92
+ Therefore, it certainly deserves further exploration.
93
+ In this paper, we will mainly study the properties of the quasi-normal modes (QNMs)
94
+ of a probe scalar field and a probe Maxwell field over this covariant polymer BH. As we all
95
+ know, during the ringdown phase of binary system coalescence, the BH emits the gravita-
96
+ tional waves (GWs) with typical discrete frequencies, i.e., quasi-normal frequencies (QNFs).
97
+ According to [57], QNFs encode decaying scales and damped oscillating frequencies . Cer-
98
+ tainly, quantum effects have the imprints in the QNM spectra, which are expected to be
99
+ detected in GW observations. Also, conversely, GW detection will serve as an important
100
+ criterion for the correctness of candidate quantum gravity theories.
101
+ 3
102
+
103
+ Our paper is organized as follows. In section II, we present a brief discussion on the
104
+ effective potentials of scalar and Maxwell fields over the covariant LQG-BH. Section III
105
+ is dedicated to the properties of the QNM spectra. Then, we further study the ringdown
106
+ waveform in section IV. We present the conclusions and discussions in section V. Appendixes
107
+ A and B present the detailed derivation of the wave equations and the QNMs in the eikonal
108
+ limit.
109
+ II.
110
+ SCALAR FIELD AND MAXWELL FIELD OVER A COVARIANT LQG-BH
111
+ In [55, 56], the authors proposed a novel effective LQG black hole model with holonomy
112
+ correction that is covariant. The exterior geometry of this covariant LQG black hole is given
113
+ by
114
+ ds^2 = -f(r)\,dt^2 + \frac{1}{g(r)f(r)}\,dr^2 + r^2 d\Omega^2 ,\qquad f(r) = 1 - \frac{2m}{r} ,\qquad g(r) = 1 - \frac{r_0}{r} .\qquad (1)
122
+ Thanks to the quantum gravity effects, a new length scale r0 is introduced
123
+ r_0 = 2m\,\frac{\bar\lambda^2}{1+\bar\lambda^2} ,\qquad (2)
127
+ where the parameter ¯λ is a dimensionless constant related to the fiducial length of the
128
+ holonomies and m is a constant of motion. The length scale r0 defines a minimum area r0^2
+ in this model [55, 56]. In the ¯λ → 0 limit (cf. Eq. (2)), this new length scale vanishes, i.e., r0 = 0,
+ and the classical Schwarzschild geometry is recovered. In addition, as we move to the
132
+ low curvature regions, the quantum gravity effects die off. Without loss of generality, we
133
+ shall set m = 1/2 throughout this paper, which places the horizon at rh = 1.
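A minimal numerical sketch of the metric functions in Eqs. (1)-(2), with m = 1/2 as adopted in the text (illustrative code, not from the original work); note from Eq. (2) that r0 → 0 as the polymerisation parameter goes to zero:

```python
# Metric functions of the covariant LQG black hole, Eqs. (1)-(2); m = 1/2.

def r0_of(lam, m=0.5):
    # quantum length scale r0 = 2m * lam^2 / (1 + lam^2)
    return 2.0 * m * lam**2 / (1.0 + lam**2)

def f(r, m=0.5):
    return 1.0 - 2.0 * m / r          # horizon at r_h = 2m = 1

def g(r, lam, m=0.5):
    return 1.0 - r0_of(lam, m) / r    # classical limit: lam -> 0 gives g -> 1
```

For instance, lam = 1 gives r0 = 1/2, i.e., g(1) = 1/2 at the horizon.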
134
+ We focus on the perturbations of the massless scalar field Φ and electromagnetic field Aµ
135
+ over this LQG black hole and study their response. We write down the covariant equations
136
+ for the test scalar field and electromagnetic field as follows:
137
+ \frac{1}{\sqrt{-g}}\left(\sqrt{-g}\,g^{\mu\nu}\,\Phi_{,\mu}\right)_{,\nu} = 0 ,\qquad (3)
+ \frac{1}{\sqrt{-g}}\left(\sqrt{-g}\,g^{\alpha\mu}g^{\sigma\nu}F_{\alpha\sigma}\right)_{,\nu} = 0 ,\qquad (4)
143
+ where Fασ = ∂αAσ − ∂σAα is the field strength of the Maxwell field. After the separation of
144
+ variables, the aforementioned equations can be packaged into the Schr¨odinger-like form (for
145
+ 4
146
+
147
+ more details, see Appendix A)
148
+ \frac{\partial^2 \Psi}{\partial r_*^2} + \left(\omega^2 - V_{\rm eff}\right)\Psi = 0 ,\qquad (5)
153
+ where r∗ is the tortoise coordinate and Veff is the effective potential:
154
+ V_{\rm eff} = \frac{f(r)\,l(l+1)}{r^2} + \frac{1-s}{r}\,\frac{d}{dr_*}\!\left[f(r)\sqrt{g(r)}\right] ,\qquad (6)
164
+ with l being the angular quantum numbers. s = 0 and s = 1 correspond to the scalar field
165
+ and electromagnetic field, respectively. Figs.1 and 2 demonstrate the effective potentials as
166
+ a function of r∗ for scalar and electromagnetic fields with different l and r0. It is found that
167
+ both effective potentials are positive, indicating the LQG black hole is stable under scalar
168
+ and electromagnetic perturbations. Furthermore, we would like to compare the differences
169
+ in effective potentials between scalar and electromagnetic fields. It is easy to find that for
170
+ the electromagnetic field (s = 1), the second term in Eq.(6) vanishes, such that all the peaks
171
+ of the effective potentials Vel have the same height for different r0 (see Fig.2). However, for
172
+ the scalar field, i.e., s = 0, the second term in Eq.(6) survives and the height of the effective
173
+ potential Vs depends on r0. In particular, with increasing r0, the height of Vs decreases
174
+ (Fig.1). The shape of the effective potentials shall definitely result in different properties
175
+ of the QNMs.
176
205
+ FIG. 1: The effective potentials Vs(r∗) of the scalar field for different r0 with fixed l.
206
+ 5
207
+
208
257
+ FIG. 2: The effective potentials Vel(r∗) of the electromagnetic field for different r0 with fixed l.
258
+ III.
259
+ QUASI-NORMAL MODES
260
+ In this section, we investigate the QNMs spectra and specially focus on the effects from
261
+ quantum gravity corrections. The nature of determining the QNMs is to solve the eigenvalue
262
+ problem. To this end, we will impose a purely outgoing wave at infinity and purely ingoing
263
+ wave at the horizon:
264
+ horizon: ∂tΨ − ∂r∗Ψ = 0,
265
+ infinity: ∂tΨ + ∂r∗Ψ = 0.
266
+ (7)
267
+ By solving Eq.(5) with the aforementioned boundary conditions, numerous techniques have
268
+ been developed for determining the QNM spectra, such as the WKB method [58–63],
269
+ Horowitz-Hubeny method [64], continued fraction method [65], asymptotic iteration method
270
+ [66–68], pseudo-spectral method [69, 70], and so on. In this paper, we will solve the eigen-
271
+ value problem using the pseudo-spectral method. For more applications of pseudo-spectral
272
+ method in determining the QNMs in the black hole physics, we can refer to [71–80] and
273
+ references therein. It is convenient to work in the Eddington-Finkelstein coordinate, which
274
+ makes the wave equation (5) linear in the frequency. To achieve this goal, it is direct to
275
+ make a transformation as
276
+ r \to 1/u \quad {\rm and} \quad \Psi = e^{-i\omega r_*(u)}\,\psi .\qquad (8)
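For reference, the tortoise coordinate entering Eqs. (5) and (8) obeys dr∗/dr = 1/(f√g); a simple trapezoidal integration (illustrative sketch, reference radius chosen arbitrarily) recovers the Schwarzschild result r∗ = r + ln(r − 1) up to a constant when r0 = 0:

```python
# Tortoise coordinate r_*(r), defined by dr_*/dr = 1/(f(r)*sqrt(g(r))),
# integrated with the trapezoidal rule from an arbitrary reference radius.
import math

def drstar_dr(r, r0, m=0.5):
    f = 1.0 - 2.0 * m / r
    g = 1.0 - r0 / r
    return 1.0 / (f * math.sqrt(g))

def r_star(r, r0, r_ref=3.0, n=4000):
    h = (r - r_ref) / n
    total = 0.5 * (drstar_dr(r_ref, r0) + drstar_dr(r, r0))
    for k in range(1, n):
        total += drstar_dr(r_ref + k * h, r0)
    return total * h
```

With r0 = 0 and 2m = 1, r_star(5.0, 0.0) matches [5 + ln 4] − [3 + ln 2] = 2 + ln 2 from the closed form.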
278
+ 6
279
+
280
+ Then, the wave equation (5) turns into the following form:
281
+ \psi''(u) + \left[\frac{f'(u)}{f(u)} + \frac{g'(u)}{2g(u)} + \frac{2i\omega}{u^2 f(u)\sqrt{g(u)}}\right]\psi'(u)
+ - \frac{1}{u}\left[\frac{2i\omega}{u^2 f(u)\sqrt{g(u)}} + \frac{V_{\rm eff}(u)}{u^3 f(u)^2 g(u)} + \frac{f'(u)}{f(u)} + \frac{g'(u)}{2g(u)}\right]\psi(u) = 0 .\qquad (9)
307
+ Combining with the boundary conditions (7), one can solve Eq.(9) by the pseudo-spectral
308
+ method.
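As a rough sanity check on such spectra (not the pseudo-spectral solver used in the text), the standard first-order WKB approximation of Schutz and Will, ω² ≈ V₀ − i(n + 1/2)√(−2V₀″), with the derivative taken with respect to r∗ at the potential peak, can be sketched as follows; all helper names are illustrative:

```python
# First-order WKB estimate of a quasi-normal frequency (Schutz-Will formula),
# used here only as an illustrative cross-check; m = 1/2 so r_h = 1.
import cmath
import math

def f(r, m=0.5):
    return 1.0 - 2.0 * m / r

def g(r, r0):
    return 1.0 - r0 / r

def V(r, l, s, r0, h=1e-6):
    # Eq. (6) with d/dr_* = f*sqrt(g)*d/dr taken by central differences
    fg = lambda x: f(x) * math.sqrt(g(x, r0))
    dfg = fg(r) * (fg(r + h) - fg(r - h)) / (2 * h)
    return f(r) * l * (l + 1) / r**2 + (1 - s) / r * dfg

def wkb1(l, s, r0, n=0):
    # coarse scan for the potential peak outside the horizon r_h = 1
    rp = max((1.01 + 0.001 * k for k in range(4000)), key=lambda r: V(r, l, s, r0))
    h = 1e-3
    fg_p = f(rp) * math.sqrt(g(rp, r0))
    dV = lambda r: f(r) * math.sqrt(g(r, r0)) * (V(r + h, l, s, r0) - V(r - h, l, s, r0)) / (2 * h)
    d2V = fg_p * (dV(rp + h) - dV(rp - h)) / (2 * h)   # d^2 V / dr_*^2 at the peak
    V0 = V(rp, l, s, r0)
    return cmath.sqrt(V0 - 1j * (n + 0.5) * math.sqrt(-2.0 * d2V))
```

For the electromagnetic field with l = 2 and r0 = 0 this lands near the Schwarzschild fundamental mode (about 0.92 − 0.19i in the m = 1/2 units used here), but first-order WKB is only a rough guide and cannot replace the pseudo-spectral computation, especially for high overtones.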
309
356
+ FIG. 3: QNFs as a function of r0 for the scalar field perturbation.
357
Now, we evaluate the QNM spectra for various values of the free parameter r0 to explore the LQG effects on these spectra, as well as the differences from those of the Schwarzschild black hole (r0 = 0). First, we focus on the fundamental modes and show the QNFs as functions of r0 in Fig.3 for the scalar field and Fig.4 for the electromagnetic field. The main properties are as follows.

• For the scalar field, the real part of the QNF, Reω, decreases with increasing r0 (left plots in Fig.3). This means that the LQG effect reduces the oscillations in comparison to the Schwarzschild black hole. By contrast, Reω of the electromagnetic field exhibits the opposite tendency: when r0 increases, so does Reω. As a result, the LQG effect produces stronger oscillations in the electromagnetic field than in the Schwarzschild black hole.
• Whether for the scalar or the electromagnetic field, the imaginary part of the QNF, Imω, always lies in the lower half-plane, and its absolute value is less than that of the Schwarzschild black hole. Therefore, the system is stable under scalar or electromagnetic field perturbations, and the LQG effect results in slower-decaying, longer-lived modes.
• When we fix r0, the scalar field has larger absolute values of Reω and Imω than the electromagnetic field (see Figs.3 and 4). This indicates that, in comparison to the electromagnetic field, the system under a scalar field perturbation enjoys faster-decaying modes with stronger oscillations.
FIG. 4: QNFs as a function of r0 for the electromagnetic field perturbation. (Four panels: Reω and Imω versus r0 for the modes l = 1, n = 0 and l = 2, n = 0.)
We also work out the QNM spectra with high overtones. Table I shows the results for the scalar field. For l = 0, we see that the LQG-BH has a lower Reω than the Schwarzschild black hole (SS-BH), which is consistent with the behavior of the fundamental mode. The most striking difference occurs for l > 0. To demonstrate this, we define the difference in QNFs between the SS-BH (r0 = 0) and the LQG-BH (r0 ≠ 0) as follows:

δω = ωLQG − ωSS .   (10)

Observing the left plot in Fig.5 (also see the second column in Table I), Reω with high overtones is larger for the LQG-BH than for the SS-BH for l > 0. This indicates that the overtones may play an important role in the modes with l > 0. We shall verify this point by the time evolution of the field.
We also briefly address the properties of the QNMs of the electromagnetic field. Table II shows the corresponding QNM spectra. It is found that, for all listed modes, Reω of the LQG-BH is larger than that of the SS-BH. This differs from the scalar field, for which Reω of the LQG-BH is smaller for the fundamental mode and larger only for the high overtones. This discrepancy also results in a corresponding difference in the evolutions of the scalar and electromagnetic fields, which will be addressed below.
        l = 0                                          l = 1
 n   ω (r0 = 0)           ω (r0 = 1/2)            ω (r0 = 0)           ω (r0 = 1/2)
 0   0.220910-0.209792i   0.200799-0.164527i      0.585872-0.195320i   0.579649-0.158265i
 1   0.172223-0.696106i   0.160438-0.534669i      0.528897-0.612515i   0.547089-0.487735i
 2   0.151564-1.202642i   0.126266-0.921455i      0.459079-1.080267i   0.502191-0.843179i
 3   0.142272-1.705216i   0.076951-1.314823i      0.406517-1.576596i   0.461392-1.216781i
 4   0.134739-2.211987i   0.053109-1.808749i      0.370218-2.081524i   0.426854-1.597881i
 5   0.129639-2.712112i   0.098367-2.213623i      0.344154-2.588236i   0.395550-1.981462i

TABLE I: The QNM spectra for the scalar field perturbation with different n, l, and r0.
FIG. 5: The difference of QNFs of the scalar field for the mode with l = 1 between the LQG-BH and the SS-BH. (Left panel: δReω versus n; right panel: δImω versus n; curves for r0 = 1/100, 1/10, 1/2.)
        l = 1                                          l = 2
 n   ω (r0 = 0)           ω (r0 = 1/2)            ω (r0 = 0)           ω (r0 = 1/2)
 0   0.496527-0.184975i   0.513377-0.152855i      0.915191-0.190009i   0.924716-0.155637i
 1   0.429031-0.587335i   0.476434-0.474099i      0.873085-0.581420i   0.901934-0.472399i
 2   0.349547-1.050375i   0.427459-0.825862i      0.802373-1.003175i   0.862549-0.803573i
 3   0.292353-1.543818i   0.385422-1.197325i      0.725190-1.460397i   0.816205-1.152343i
 4   0.253105-2.045090i   0.351646-1.576036i      0.657473-1.943219i   0.770903-1.515796i
 5   0.224562-2.547950i   0.322640-1.956797i      0.602986-2.439431i   0.730157-1.888692i

TABLE II: The QNM spectra for the electromagnetic field perturbation with different system parameters n, l, and r0.
Finally, we discuss the properties of the QNMs in the eikonal limit (l → ∞). In [81], Cardoso et al. demonstrated that, in the eikonal limit, the QNMs may be connected with the behavior of a null particle trapped on the unstable circular geodesic of the spacetime, which has been validated in most static, spherically symmetric, asymptotically flat spacetimes. Reω is determined by the angular velocity Ωc at the unstable null geodesic [82–86], whereas Imω is connected to the Lyapunov exponent λ [87, 88]. In the LQG-BH background, the QNMs in the eikonal limit are given by

ω = Ωc l − i (n + 1/2) |λ| .   (11)
For the detailed calculation, we refer to Appendix B. It is found that, as for the SS-BH, the angular velocity Ωc is completely determined by the black hole mass:

Ωc = 1/(3√3 m) .   (12)
Therefore, Reω is independent of the LQG parameter r0, while the Lyapunov exponent λ is given by

λ = √( −(rc²/2) f(rc) [d²/dr² (f(r)/r²)]|r=rc ) ,   (13)
where rc is the radius of the photon sphere. Obviously, the Lyapunov exponent is affected by the LQG correction. The left plot in Fig.6 shows the Lyapunov exponent λ as a function of r0. We see that the Lyapunov exponent decreases as r0 increases. Correspondingly, the absolute value of Imω is suppressed by the LQG effect (see the right plot in Fig.6).
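As a concrete check of these formulas, one can solve Eq.(B8) of Appendix B for the photon sphere numerically and then evaluate Ωc and λ. The sketch below works in the Schwarzschild limit r0 = 0, so f(r) = 1 − 2m/r is an assumption of this illustration (the explicit LQG-corrected metric functions of Eq.(1) are not reproduced here); in this limit it recovers rc = 3m and Ωc = λ = 1/(3√3 m):

```python
import math

def f(r, m=1.0):
    # Metric function in the Schwarzschild limit r0 = 0 (illustrative assumption).
    return 1.0 - 2.0 * m / r

def eikonal_data(m=1.0):
    # Photon sphere from Eq.(B8): 2 f(rc) = rc f'(rc), solved by bisection.
    def root_fn(r):
        h = 1e-6
        fp = (f(r + h, m) - f(r - h, m)) / (2.0 * h)   # central-difference f'(r)
        return 2.0 * f(r, m) - r * fp
    a, b = 2.1 * m, 10.0 * m
    fa = root_fn(a)
    for _ in range(200):
        c = 0.5 * (a + b)
        fc = root_fn(c)
        if fa * fc <= 0.0:
            b = c
        else:
            a, fa = c, fc
    rc = 0.5 * (a + b)
    omega_c = math.sqrt(f(rc, m)) / rc                  # Omega_c = 1/b_c, Eqs.(B9), (B14)
    g = lambda r: f(r, m) / r ** 2
    h = 1e-4
    d2 = (g(rc + h) - 2.0 * g(rc) + g(rc - h)) / h ** 2
    lam = math.sqrt(-0.5 * rc ** 2 * f(rc, m) * d2)     # Lyapunov exponent, Eq.(13)
    return rc, omega_c, lam

rc, Om, lam = eikonal_data()
```

For the LQG-BH one would only replace `f` (and include the √g factor in the tortoise coordinate); rc and Ωc are unchanged by the correction, while λ picks up the r0 dependence shown in Fig.6.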
FIG. 6: Left plot: the Lyapunov exponent λ (shown as −|λ|/2) as a function of the LQG corrected parameter r0. Right plot: the QNFs (Imω versus Reω) for different r0 at large l (l = 100, n = 0).
Since the real part of the QNF is independent of the LQG parameter r0 in the eikonal limit, we expect that as l increases, the difference in Reω between the LQG-BH and the SS-BH will be suppressed and vanish. Fig.7 validates this argument: as l increases, the difference rapidly decreases and goes to zero.
FIG. 7: The difference of QNFs of the scalar field between the LQG-BH and the SS-BH as a function of l. Left plot is for n = 0 and right plot for n = 1.
IV. RINGDOWN WAVEFORM

In this section, we study the time evolution of the scalar and electromagnetic perturbations, which helps us further understand the total contributions from the overtones. Here, we use the finite difference method (FDM) to implement the dynamical evolution. For more details on the FDM, we refer to Refs.[73, 89–92] and references therein. To this end, we write the wave equation in difference form as
−(Ψi+1,j − 2Ψi,j + Ψi−1,j)/△t² + (Ψi,j+1 − 2Ψi,j + Ψi,j−1)/△r∗² − VjΨi,j + O(△t²) + O(△r∗²) = 0 ,   (14)
where △t and △r∗ are the time and radial intervals, respectively, which are defined by t = i△t and r∗ = j△r∗, and Vj is the discrete form of the effective potential (6). Then, the iteration formula is derived as:
Ψi+1,j = −Ψi−1,j + (△t²/△r∗²)(Ψi,j+1 + Ψi,j−1) + (2 − 2△t²/△r∗² − △t²Vj)Ψi,j .   (15)
Notice that the Courant-Friedrichs-Lewy (CFL) condition for stability requires △t/△r∗ < 1. Using the iteration formula (15) with the initial Gaussian distribution Ψ(r∗, t < 0) = 0 and Ψ(r∗, t = 0) = exp[−(r∗ − a)²/(2b²)], one can obtain the ringdown profiles.
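A minimal sketch of this scheme is given below. Since the effective potential (6) requires the explicit metric functions, we substitute a Pöschl-Teller potential as a stand-in, and we use a time-symmetric start Ψ(−△t) = Ψ(0) instead of the strict Ψ(t < 0) = 0; the potential, grid sizes, and observer location are all illustrative assumptions rather than the settings used in this work:

```python
import math

def ringdown(V, r_lo=-50.0, r_hi=50.0, nr=401, nt=801, a=0.0, b=1.0, cfl=0.5):
    # Second-order scheme of Eq.(15):
    #   Psi_{i+1,j} = -Psi_{i-1,j} + (dt^2/dr^2)(Psi_{i,j+1} + Psi_{i,j-1})
    #                 + (2 - 2 dt^2/dr^2 - dt^2 V_j) Psi_{i,j},
    # with dt/dr < 1 enforcing the CFL stability bound.
    dr = (r_hi - r_lo) / (nr - 1)
    dt = cfl * dr
    rs = [r_lo + j * dr for j in range(nr)]
    Vj = [V(r) for r in rs]
    # Gaussian pulse at t = 0; time-symmetric start (illustrative choice).
    cur = [math.exp(-(r - a) ** 2 / (2.0 * b * b)) for r in rs]
    prev = cur[:]
    c2 = (dt / dr) ** 2
    j_obs = 3 * nr // 4          # hypothetical observer position
    signal = []
    for _ in range(nt):
        nxt = [0.0] * nr         # Psi fixed to zero at the grid edges
        for j in range(1, nr - 1):
            nxt[j] = (-prev[j] + c2 * (cur[j + 1] + cur[j - 1])
                      + (2.0 - 2.0 * c2 - dt * dt * Vj[j]) * cur[j])
        prev, cur = cur, nxt
        signal.append(abs(cur[j_obs]))
    return signal

# Poschl-Teller barrier as a stand-in for the effective potential (6).
sig = ringdown(lambda r: 0.15 / math.cosh(r) ** 2)
```

Recording |Ψ| at a fixed observer point, as done here, produces the time-domain profiles of the kind shown in Figs.8–10.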
In general, there are three different stages in a time-evolution profile: the initial outburst; the quasinormal ringing, which depends only on the black hole's characteristics and is very important for GW observations [57, 93–95]; and the late-time tail, which exhibits power-law behavior for asymptotically flat spacetimes or exponential behavior for asymptotically de Sitter spacetimes. We focus on the properties of the latter two stages in this section.
FIG. 8: The time evolution of the scalar field |Ψs(r)| for different r0 (Schwarzschild BH, r0 = 1/4, r0 = 1/2) with fixed l (left plot for l = 0 and right plot for l = 1).
The left plot in Fig.8 shows the time-domain profile for the scalar field perturbation with l = 0. In comparison to the SS-BH, the oscillation is slightly weaker and the decay becomes slower during the intermediate time. Recall that both the real and imaginary parts of the fundamental QNF decrease as the LQG corrected parameter r0 increases. This suggests that, in the scalar evolution of the LQG-BH with l = 0, the fundamental QNM dominates over the ones with high overtones, which is consistent with the case of the SS-BH. At asymptotically late times, the quasinormal ringing is suppressed, and it follows the same power-law tail, Ψ(t) ∼ t^(−(2l+3)), for both the LQG-BH and the SS-BH [96–98].
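The tail exponent can be read off as the slope of log|Ψ| versus log t at late times. A small sketch with synthetic tail data (not actual evolution data from this work) illustrates the procedure:

```python
import math

def log_log_slope(ts, ys):
    # Least-squares slope of log|y| versus log t; for a tail y ~ t^(-p)
    # this returns -p.
    xs = [math.log(t) for t in ts]
    ls = [math.log(abs(y)) for y in ys]
    n = len(xs)
    mx, ml = sum(xs) / n, sum(ls) / n
    num = sum((x - mx) * (l - ml) for x, l in zip(xs, ls))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic l = 0 tail, Psi(t) ~ t^(-3); amplitude 2.7 is arbitrary.
ts = [100.0 + 5.0 * k for k in range(100)]
ys = [2.7 * t ** -3.0 for t in ts]
p = -log_log_slope(ts, ys)   # recovered exponent, expect 2l + 3 = 3
```

Applied to the numerical profiles, this fit over a late-time window distinguishes the t^(−(2l+3)) tail from the exponentially damped ringing that precedes it.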
However, for multipoles l > 0, we observe some peculiar behavior that differs from that of l = 0. Carefully observing the right plot in Fig.8, we find that the slope of the quasinormal ringing is smaller for the LQG-BH than for the SS-BH. This observation is consistent with the QNFs, which show that the absolute value of Imω for all discrete overtones is smaller for the LQG-BH than for the SS-BH (see the right column in Table I). Nevertheless, the oscillation for the LQG-BH nearly coincides with that for the SS-BH (the right plot in Fig.8; also see Fig.9). Recall that for l > 0, the LQG-BH has a lower Reω for the fundamental mode than the SS-BH, whereas the case is reversed for the high overtones. This means that the contribution from the high overtone modes reduces the difference between the LQG-BH and SS-BH time-domain profiles. That is, the high overtone modes in Reω of the LQG-BH play an important role in determining the oscillation of the time evolution of the scalar field.
FIG. 9: The time evolution of the scalar field |Ψs(r)| for different r0 (r0 = 0, 1/100, 1/10, 1/4) with fixed l = 1. Notice that the right plot is the semi-log plot.
FIG. 10: The semi-log plot of the time evolution of the electromagnetic field |Ψel(r)| for different r0 (Schwarzschild BH and r0 = 1/2) with fixed l (left plot for l = 1 and right plot for l = 2).
We also study the time evolution of the electromagnetic field, as seen in Fig.10. We observe that the slope of the quasinormal ringing is smaller for the LQG-BH than for the SS-BH, which is similar to the scalar field. However, unlike in the case of the scalar field, the oscillations of the LQG-BH and the SS-BH are not the same. This is expected because, for l > 0, the anomalous phenomenon seen in the QNMs of the scalar field, namely that Reω with high overtones is larger for the LQG-BH than for the SS-BH, does not occur for the electromagnetic field.
V. CONCLUSION AND DISCUSSION

With the rapid development of GW detection techniques, it is expected that quantum gravity effects may be detected. To extract substantial information from GW detectors, one must thoroughly understand the main features and behaviors of the QNMs of LQG-BHs. As a first step, we have investigated the QNMs of scalar and electromagnetic fields over the covariant LQG-BH proposed in [55, 56]. The QNM spectra of the scalar and electromagnetic fields share some common features, but they also exhibit many different features and behaviors.

First, we focused on the fundamental modes. It is found that the system is always stable under scalar or electromagnetic field perturbations, and the LQG effect results in slower-decaying modes. The difference is that the LQG effect reduces the oscillations of the scalar field, whereas it enhances the oscillations of the electromagnetic field. In addition, we find that the system under the scalar field perturbation enjoys faster-decaying modes with more oscillations than under the electromagnetic field.

Some peculiar phenomena emerge in the scalar field QNM spectra with high overtones for l > 0: the LQG-BH has a larger Reω with high overtones than the SS-BH. Such an anomalous phenomenon results in the oscillation of the scalar field in the LQG-BH being nearly identical to that in the SS-BH. Therefore, the high overtone modes of the scalar field in the LQG-BH play an important role in the modes with l > 0. This anomalous phenomenon, however, cannot be observed in the electromagnetic field's QNM spectra.
Finally, we comment on some open questions deserving further exploration.

• It would be interesting to extend our investigation to the Dirac field and see whether the peculiar property still emerges in the QNM spectra.

• It is definitely interesting and valuable to further study the QNM spectrum of the gravitational perturbations. It provides a platform for detecting quantum gravity effects using GW detectors. In addition, we can examine whether isospectrality still holds in this LQG-BH model.

• In [99], an anomalous decay rate of the QNMs of a massive scalar field is observed. Depending on how large the mass of the scalar field is, the decay timescales of the QNMs either grow or decay with increasing angular harmonic number. This anomalous behavior is seen in a much larger class of models beyond a simple massive scalar field; see [100–104] and references therein. It will be interesting to see how the LQG effect affects this anomalous behavior.

• We can also construct an effective rotating LQG-BH solution using the Newman-Janis algorithm, starting with this spherically symmetric LQG-BH, and study the LQG effects on its QNM spectrum and shadow, allowing us to constrain the LQG parameters using GW detectors and the Event Horizon Telescope (EHT).
We plan to investigate these questions and publish our results in the near future.

Acknowledgments

This work is supported by the National Key R&D Program of China (No. 2020YFC2201400), the Natural Science Foundation of China under Grant No. 11905083, the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant No. KYCX20 2973, the Postgraduate Scientific Research Innovation Project of Hunan Province, the Science and Technology Planning Project of Guangzhou (202201010655), the Fok Ying Tung Education Foundation under Grant No. 171006, and the Natural Science Foundation of Jiangsu Province under Grant No. BK20211601. J.-P.W. is also supported by the Top Talent Support Program from Yangzhou University.
Appendix A: Wave equations

In this appendix, we derive the wave equations for the scalar and electromagnetic fields in detail. First, we provide a generic version of the wave equation in a static, spherically symmetric spacetime. The cases of the scalar field and the electromagnetic field are then discussed in detail.

Because the spacetime is static and spherically symmetric, we can separate variables using the spherical harmonics and write the radial equation in the form

(K(r)S(r)Ψ̂′(r))′ + [ ΛF(r) + K(r)ω²/S(r) ] Ψ̂(r) = 0 ,   (A1)

where Ψ̂ is the radial part of the wave function, the coefficient functions {K, F, S} depend only on the radial coordinate r, and Λ is the separation constant. After introducing the
tortoise coordinate r∗ and redefining the wave function as

dr∗/dr = 1/S(r) ,   Ψ̂(r) = Ψ/√K(r) ,   (A2)

Eq.(A1) can be recast into the following form:

d²Ψ(r∗)/dr∗² + (ω² − Veff(r∗))Ψ(r∗) = 0 .   (A3)
The above formula provides a general transformation from the usual wave equation to its Schrödinger-like counterpart.
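In practice, r∗ is obtained by integrating Eq.(A2) numerically. A sketch in the Schwarzschild limit, where S = f, g = 1, and f = 1 − 2m/r are assumptions of this illustration, checks a composite-Simpson integration against the closed form r∗ = r + 2m ln(r/2m − 1):

```python
import math

def tortoise_delta(f, r1, r2, n=2000):
    # Delta r_* = int_{r1}^{r2} dr / S(r), with S = f in the Schwarzschild
    # limit, evaluated by composite Simpson's rule (n must be even).
    h = (r2 - r1) / n
    s = 1.0 / f(r1) + 1.0 / f(r2)       # endpoint weights
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) / f(r1 + k * h)
    return s * h / 3.0

m = 1.0
f = lambda r: 1.0 - 2.0 * m / r
num = tortoise_delta(f, 3.0, 10.0)
# Closed-form difference r_*(10) - r_*(3) for Schwarzschild:
exact = (10.0 + 2.0 * m * math.log(10.0 / (2.0 * m) - 1.0)) \
      - (3.0 + 2.0 * m * math.log(3.0 / (2.0 * m) - 1.0))
```

For the LQG-BH one would replace the integrand by 1/(f√g); only the integration constant, which drops out of differences, is left unfixed.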
In the following, we go over the specific form of the wave equations for the scalar and electromagnetic fields. For the scalar field equation, we perform the separation Φ(t, r, θ, φ) = Σ_{l,m} Ψ̂(r)e^(−iωt)Ylm(θ, φ), where Ylm(θ, φ) are the spherical harmonics. When the particular form of the LQG-BH background (1) is substituted into the wave equation (A1), one obtains

( r²f(r)√g(r) Ψ̂′(r) )′ + [ r²ω²/(f(r)√g(r)) − l(l + 1)√g(r) ] Ψ̂(r) = 0 .   (A4)
We can read off the coefficient functions by comparing Eq.(A4) with Eq.(A1):

K(r) = r² ,   S = f(r)√g(r) .   (A5)

The Schrödinger-like version of the wave equation is then easily given as

∂²Ψ/∂r∗² + (ω² − Vs)Ψ = 0 ,   (A6)

Vs = f(r)l(l + 1)/r² + (1/(2r)) d/dr[ f(r)²g(r) ] .   (A7)
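In the Schwarzschild limit g(r) = 1, Eq.(A7) reduces to the familiar form Vs = f [l(l + 1)/r² + f′/r], whose peak approaches the photon sphere r = 3m at large l. A quick numerical sketch, in which the grid and the value l = 20 are illustrative choices, confirms this:

```python
import math

def Vs(r, l, m=1.0):
    # Scalar effective potential, Eq.(A7), in the Schwarzschild limit g = 1:
    # Vs = f l(l+1)/r^2 + (1/2r) d/dr[f^2], with f = 1 - 2m/r.
    f = 1.0 - 2.0 * m / r
    df = 2.0 * m / r ** 2
    return f * l * (l + 1) / r ** 2 + (1.0 / (2.0 * r)) * 2.0 * f * df

# Scan r in (2m, 30m) and locate the peak of the potential barrier.
rs = [2.01 + 0.001 * k for k in range(28000)]
vals = [Vs(r, 20) for r in rs]
r_peak = rs[vals.index(max(vals))]
```

This peak structure underlies both the WKB treatment and the eikonal-limit correspondence of Appendix B, where the barrier maximum sits at the unstable circular null geodesic.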
For the electromagnetic field, we can expand the gauge field Aµ in vector spherical harmonics [105, 106],

Aµ(t, r, θ, φ) = Σ_{l,m} { [ 0, 0, (alm(r)/sin θ) ∂φYlm, −alm(r) sin θ ∂θYlm ]ᵀ + [ plm(r)Ylm, hlm(r)Ylm, klm(r)∂θYlm, klm(r)∂φYlm ]ᵀ } e^(−iωt) ,   (A8)
where the first term is the odd (axial) perturbation and the second term is the even (polar) perturbation. In the following, we show how to derive the odd and even perturbation equations.
When we switch on the odd electromagnetic field perturbation, we can explicitly write down the radial equation as

( f(r)√g(r) a′lm(r) )′ + [ ω²/(f(r)√g(r)) − (l(l + 1)/r²)√g(r) ] alm(r) = 0 .   (A9)

It is easy to find that K = 1 and S = f(r)√g(r). Thus, we have

Vodd = f(r)l(l + 1)/r² ,   (A10)

where Ψ = alm(r).
For the even perturbation of the electromagnetic field, the radial equations become

p″lm(r) + q(r)p′lm(r) + iω( h′lm(r) + q(r)hlm(r) ) + [ l(l + 1)/(r²f(r)g(r)) ]( plm(r) + iωklm(r) ) = 0 ,
−iωp′lm(r) + ω²hlm(r) + [ l(l + 1)/r² ] f(r)( −hlm(r) + k′lm(r) ) = 0 ,   (A11)

where q(r) = 2/r + g′(r)/(2g(r)). After introducing a new variable,

Ψ̂(r) = −p′lm(r) − iωhlm(r) ,   (A12)
Eq.(A11) can be reduced to

( r⁴f(r)g(r)^(3/2) Ψ̂′(r) )′ + [ r⁴ω²√g(r)/f(r) − l(l + 1)r²√g(r) + (1/2)J(r) ] Ψ̂(r) = 0 ,   (A13)
where J(r) = r²√g(r) [ rf′(r)(4g(r) + rg′(r)) + f(r)(4g(r) + r(6g′(r) + rg″(r))) ]. Thus, the coefficient functions are K = r⁴g(r) and S = f(r)√g(r), and then we have

Veven = f(r)l(l + 1)/r² .   (A14)

We find that the effective potentials for the odd and even electromagnetic field perturbations are the same. Therefore, we use Vel to denote the effective potential of the electromagnetic field rather than Vodd and Veven.
Appendix B: QNMs in the eikonal limit

In this appendix, we show the connection between the QNMs in the eikonal limit and the behavior of a null particle trapped on the unstable circular geodesic. For a null particle,
the Lagrangian is¹

L(x, ẋ) = (1/2) gµν ẋ^µ ẋ^ν .   (B1)
We start with the spherically symmetric geometry (1). Thanks to the symmetry, one can restrict attention to geodesics in the equatorial plane, θ = π/2. Then the Lagrangian (B1) becomes

2L = −f(r)ṫ² + ṙ²/(f(r)g(r)) + r²φ̇² ,   (B2)
where the dot denotes the derivative with respect to the affine parameter τ. In this system, there are two constants of motion:

Pt = −f(r)ṫ = −E ,   Pφ = r²φ̇ = L .   (B3)
Using the canonical transformation and combining Eqs.(B2) and (B3), we have the following reduced Hamiltonian system:

2H = −Eṫ + ṙ²/(f(r)g(r)) + Lφ̇ .   (B4)
Since the Hamiltonian H satisfies the constraint H = 0, we have

ṙ² + Veff = 0 ,   (B5)

where the effective potential is

Veff = g(r) [ (L²/r²)f(r) − E² ] .   (B6)
Because ṙ² > 0, the photon can only move in the region of negative potential. When the angular momentum is small, the photon falls from infinity into the black hole; for large angular momentum, however, the photon escapes the black hole and goes back to infinity. Therefore, the critical circular orbit for the photon can be derived from the instability conditions

Veff = 0 ,   ∂Veff/∂r = 0 ,   ∂²Veff/∂r² < 0 .   (B7)
¹ For the calculation details of the geodesics of a null particle, please refer to [81, 82, 107, 108].
From the above conditions, we can obtain the equation for the critical radius rc:

2f(rc) = rc f′(rc) .   (B8)
Correspondingly, we have the critical impact parameter bc:

bc = L/E = rc/√f(rc) .   (B9)
Then, the shadow radius Rs and the Lyapunov exponent λ can be calculated as follows:

Rs = √(ζ² + η²) = bc = 3√3 m ,   (B10)

λ = √( V″eff/(2ṫ²) ) = √( −(rc²/2) f(rc) [d²/dr² (f(r)/r²)]|r=rc ) ,   (B11)
where {ζ, η} are the celestial coordinates. We find that the shadow radius reduces to that of the SS-BH [109, 110], which means that the LQG effect does not change the shadow radius. However, the LQG correction does affect the Lyapunov exponent λ.

On the other hand, we shall use the first-order WKB approximation to obtain the analytic form of the QNMs in the eikonal limit (l → ∞). In this limit, the last term of the effective potential (6) can be ignored, resulting in the following form of the effective potential:

V∞(r) = f(r) l²/r² .   (B12)
Noting that the potentials (B6) and (B12) share the same extremal structure, in the eikonal limit the QNMs may be obtained from the orbital frequency and the instability timescale of the unstable circular null geodesic [81]:

ω = Ωc l − i(n + 1/2)|λ| ,   (B13)

where Ωc is the angular velocity, which can be worked out as

Ωc = φ̇/ṫ = 1/bc .   (B14)
[1] C. Rovelli, Quantum Gravity, Cambridge University Press, Cambridge, UK (2004).
[2] T. Thiemann, Modern canonical quantum general relativity, gr-qc/0110034.
[3] A. Ashtekar and J. Lewandowski, Background independent quantum gravity: A status report, Class. Quant. Grav. 21 (2004) R53, [gr-qc/0404018].
[4] M. Han, W. Huang, and Y. Ma, Fundamental structure of loop quantum gravity, Int. J. Mod. Phys. D 16 (2007) 1397–1474, [gr-qc/0509064].
[5] M. Bojowald, Absence of singularity in loop quantum cosmology, Phys. Rev. Lett. 86 (2001) 5227–5230, [gr-qc/0102069].
[6] A. Ashtekar, T. Pawlowski, and P. Singh, Quantum nature of the big bang, Phys. Rev. Lett. 96 (2006) 141301, [gr-qc/0602086].
[7] A. Ashtekar, T. Pawlowski, and P. Singh, Quantum nature of the big bang: An analytical and numerical investigation. I., Phys. Rev. D 73 (2006) 124038, [gr-qc/0604013].
[8] A. Ashtekar, T. Pawlowski, and P. Singh, Quantum nature of the big bang: Improved dynamics, Phys. Rev. D 74 (2006) 084003, [gr-qc/0607039].
[9] A. Ashtekar, M. Bojowald, and J. Lewandowski, Mathematical structure of loop quantum cosmology, Adv. Theor. Math. Phys. 7 (2003), no. 2, 233–268, [gr-qc/0304074].
[10] M. Bojowald, Loop quantum cosmology, Living Rev. Rel. 8 (2005) 11, [gr-qc/0601085].
[11] A. Ashtekar and P. Singh, Loop quantum cosmology: A status report, Class. Quant. Grav. 28 (2011) 213001, [arXiv:1108.0893].
[12] E. Wilson-Ewing, Testing loop quantum cosmology, Comptes Rendus Physique 18 (2017) 207–225, [arXiv:1612.04551].
[13] V. Taveras, Corrections to the Friedmann equations from LQG for a universe with a free scalar field, Phys. Rev. D 78 (2008) 064072, [arXiv:0807.3325].
[14] Y. Ding, Y. Ma, and J. Yang, Effective scenario of loop quantum cosmology, Phys. Rev. Lett. 102 (2009) 051301, [arXiv:0808.0990].
[15] J. Yang, Y. Ding, and Y. Ma, Alternative quantization of the Hamiltonian in loop quantum cosmology II: Including the Lorentz term, Phys. Lett. B 682 (2009) 1–7, [arXiv:0904.4379].
[16] M. Bojowald and A. Tsobanjan, Effective constraints for relativistic quantum systems, Phys. Rev. D 80 (2009) 125008, [arXiv:0906.1772].
[17] M. Bojowald and A. Tsobanjan, Effective constraints and physical coherent states in quantum cosmology: A numerical comparison, Class. Quant. Grav. 27 (2010) 145004, [arXiv:0911.4950].
[18] M. Bojowald, D. Brizuela, H. H. Hernandez, M. J. Koop, and H. A. Morales-Tecotl, High-order quantum back-reaction and quantum cosmology with a positive cosmological constant, Phys. Rev. D 84 (2011) 043514, [arXiv:1011.3022].
[19] A. Ashtekar, M. Campiglia, and A. Henderson, Loop quantum cosmology and spin foams, Phys. Lett. B 681 (2009) 347–352, [arXiv:0909.4221].
[20] A. Ashtekar, M. Campiglia, and A. Henderson, Casting loop quantum cosmology in the spin foam paradigm, Class. Quant. Grav. 27 (2010) 135020, [arXiv:1001.5147].
[21] A. Ashtekar, M. Campiglia, and A. Henderson, Path integrals and the WKB approximation in loop quantum cosmology, Phys. Rev. D 82 (2010) 124043, [arXiv:1011.1024].
[22] H. Huang, Y. Ma, and L. Qin, Path integral and effective Hamiltonian in loop quantum cosmology, Gen. Rel. Grav. 45 (2013) 1191–1210, [arXiv:1102.4755].
[23] L. Qin, G. Deng, and Y.-G. Ma, Path integrals and alternative effective dynamics in loop quantum cosmology, Commun. Theor. Phys. 57 (2012) 326–332, [arXiv:1206.1131].
[24] L. Qin and Y. Ma, Coherent state functional integrals in quantum cosmology, Phys. Rev. D 85 (2012) 063515, [arXiv:1110.5480].
[25] L. Qin and Y. Ma, Coherent state functional integral in loop quantum cosmology: Alternative dynamics, Mod. Phys. Lett. A 27 (2012) 1250078, [arXiv:1206.1128].
[26] M. Bojowald, G. Date, and K. Vandersloot, Homogeneous loop quantum cosmology: The role of the spin connection, Class. Quant. Grav. 21 (2004) 1253–1278, [gr-qc/0311004].
[27] P. Singh and A. Toporensky, Big crunch avoidance in K=1 semiclassical loop quantum cosmology, Phys. Rev. D 69 (2004) 104008, [gr-qc/0312110].
[28] G. V. Vereshchagin, Qualitative approach to semi-classical loop quantum cosmology, JCAP 07 (2004) 013, [gr-qc/0406108].
[29] G. Date, Absence of the Kasner singularity in the effective dynamics from loop quantum cosmology, Phys. Rev. D 71 (2005) 127502, [gr-qc/0505002].
[30] G. Date and G. M. Hossain, Genericity of big bounce in isotropic loop quantum cosmology, Phys. Rev. Lett. 94 (2005) 011302, [gr-qc/0407074].
[31] R. Goswami, P. S. Joshi, and P. Singh, Quantum evaporation of a naked singularity, Phys. Rev. Lett. 96 (2006) 031302, [gr-qc/0506129].
[32] M. Bojowald, The early universe in loop quantum cosmology, J. Phys. Conf. Ser. 24 (2005) 77–86, [gr-qc/0503020].
[33] T. Stachowiak and M. Szydlowski, Exact solutions in bouncing cosmology, Phys. Lett. B 646 (2007) 209–214, [gr-qc/0610121].
[34] A. Ashtekar and M. Bojowald, Quantum geometry and the Schwarzschild singularity, Class. Quant. Grav. 23 (2006) 391–411, [gr-qc/0509075].
[35] L. Modesto, Loop quantum black hole, Class. Quant. Grav. 23 (2006) 5587–5602, [gr-qc/0509078].
[36] L. Modesto, Semiclassical loop quantum black hole, Int. J. Theor. Phys. 49 (2010) 1649–1683, [arXiv:0811.2196].
[37] M. Campiglia, R. Gambini, and J. Pullin, Loop quantization of spherically symmetric midi-superspaces, Class. Quant. Grav. 24 (2007) 3649–3672, [gr-qc/0703135].
[38] M. Bojowald and S. Brahma, Signature change in two-dimensional black-hole models of loop quantum gravity, Phys. Rev. D 98 (2018), no. 2, 026012, [arXiv:1610.08850].
[39] C. G. Boehmer and K. Vandersloot, Loop quantum dynamics of the Schwarzschild interior, Phys. Rev. D 76 (2007) 104030, [arXiv:0709.2129].
[40] D.-W. Chiou, Phenomenological loop quantum geometry of the Schwarzschild black hole, Phys. Rev. D 78 (2008) 064040, [arXiv:0807.0665].
[41] D.-W. Chiou, Phenomenological dynamics of loop quantum cosmology in Kantowski-Sachs spacetime, Phys. Rev. D 78 (2008) 044019, [arXiv:0803.3659].
[42] A. Joe and P. Singh, Kantowski-Sachs spacetime in loop quantum cosmology: bounds on expansion and shear scalars and the viability of quantization prescriptions, Class. Quant. Grav. 32 (2015), no. 1, 015009, [arXiv:1407.2428].
[43] J. Yang, C. Zhang, and Y. Ma, Loop quantum black hole in a gravitational collapse model, arXiv:2211.04263.
[44] W.-C. Gan, X.-M. Kuang, Z.-H. Yang, Y. Gong, A. Wang, and B. Wang, Non-existence of quantum black hole horizons in the improved dynamics approach, arXiv:2212.14535.
[45] A. Corichi, T. Vukasinac, and J. A. Zapata, Polymer quantum mechanics and its continuum limit, Phys. Rev. D 76 (2007) 044016, [arXiv:0704.0007].
[46] A. Corichi and P. Singh, Loop quantization of the Schwarzschild interior revisited, Class. Quant. Grav. 33 (2016), no. 5, 055006, [arXiv:1506.08015].
1259
+ [47] J. Olmedo, S. Saini, and P. Singh, From black holes to white holes: a quantum gravitational,
1260
+ symmetric bounce, Class. Quant. Grav. 34 (2017), no. 22 225011, [arXiv:1707.07333].
1261
+ 23
1262
+
1263
+ [48] A. Ashtekar, J. Olmedo, and P. Singh, Quantum Transfiguration of Kruskal Black Holes,
1264
+ Phys. Rev. Lett. 121 (2018), no. 24 241301, [arXiv:1806.00648].
1265
+ [49] A. Ashtekar, J. Olmedo, and P. Singh, Quantum extension of the Kruskal spacetime, Phys.
1266
+ Rev. D 98 (2018), no. 12 126003, [arXiv:1806.02406].
1267
+ [50] M. Bojowald, Black-Hole Models in Loop Quantum Gravity, Universe 6 (2020), no. 8 125,
1268
+ [arXiv:2009.13565].
1269
+ [51] M. Bojowald, No-go result for covariance in models of loop quantum gravity, Phys. Rev. D
1270
+ 102 (2020), no. 4 046006, [arXiv:2007.16066].
1271
+ [52] M. Bojowald, S. Brahma, and J. D. Reyes, Covariance in models of loop quantum gravity:
1272
+ Spherical symmetry, Phys. Rev. D 92 (2015), no. 4 045043, [arXiv:1507.00329].
1273
+ [53] A. Alonso-Bardaji and D. Brizuela, Holonomy and inverse-triad corrections in spherical
1274
+ models coupled to matter, Eur. Phys. J. C 81 (2021), no. 4 283, [arXiv:2010.14437].
1275
+ [54] A. Alonso-Bardaji and D. Brizuela, Anomaly-free deformations of spherical general
1276
+ relativity coupled to matter, Phys. Rev. D 104 (2021), no. 8 084064, [arXiv:2106.07595].
1277
+ [55] A. AlonsoBardaji, D. Brizuela, and R. Vera, An effective model for the quantum
1278
+ Schwarzschild black hole, Phys. Lett. B 829 (2022) 137075, [arXiv:2112.12110].
1279
+ [56] A. AlonsoBardaji, D. Brizuela, and R. Vera, Nonsingular spherically symmetric black-hole
1280
+ model with holonomy corrections, Phys. Rev. D 106 (2022), no. 2 024035,
1281
+ [arXiv:2205.02098].
1282
+ [57] E. Berti, V. Cardoso, and A. O. Starinets, Quasinormal modes of black holes and black
1283
+ branes, Class. Quant. Grav. 26 (2009) 163001, [arXiv:0905.2975].
1284
+ [58] B. F. Schutz and C. M. Will, Black hole normal modes - A semianalytic approach,
1285
+ Astrophys. J. Lett , Astrophys. J. Lett, 291 (Apr., 1985) L33–L36.
1286
+ [59] S. Iyer and C. M. Will, Black Hole Normal Modes: A WKB Approach. 1. Foundations and
1287
+ Application of a Higher Order WKB Analysis of Potential Barrier Scattering, Phys. Rev. D
1288
+ 35 (1987) 3621.
1289
+ [60] J. W. Guinn, C. M. Will, Y. Kojima, and B. F. Schutz, High Overtone Normal Modes of
1290
+ Schwarzschild Black Holes, Class. Quant. Grav. 7 (1990) L47.
1291
+ [61] R. A. Konoplya, Quasinormal modes of the Schwarzschild black hole and higher order WKB
1292
+ approach, J. Phys. Stud. 8 (2004) 93–100.
1293
+ [62] R. A. Konoplya, Quasinormal behavior of the d-dimensional Schwarzschild black hole and
1294
+ 24
1295
+
1296
+ higher order WKB approach, Phys. Rev. D 68 (2003) 024018, [gr-qc/0303052].
1297
+ [63] J. Matyjasek and M. Opala, Quasinormal modes of black holes. The improved semianalytic
1298
+ approach, Phys. Rev. D 96 (2017), no. 2 024011, [arXiv:1704.00361].
1299
+ [64] G. T. Horowitz and V. E. Hubeny, Quasinormal modes of AdS black holes and the approach
1300
+ to thermal equilibrium, Phys. Rev. D 62 (2000) 024027, [hep-th/9909056].
1301
+ [65] E. W. Leaver, An Analytic representation for the quasi normal modes of Kerr black holes,
1302
+ Proc. Roy. Soc. Lond. A 402 (1985) 285–298.
1303
+ [66] H. Ciftci, R. L. Hall, and N. Saad, Perturbation theory in a framework of iteration methods,
1304
+ Phys. Lett. A 340 (2005) 388–396, [math-ph/0504056].
1305
+ [67] H. T. Cho, A. S. Cornell, J. Doukas, and W. Naylor, Black hole quasinormal modes using
1306
+ the asymptotic iteration method, Class. Quant. Grav. 27 (2010) 155004, [arXiv:0912.2740].
1307
+ [68] H. T. Cho, A. S. Cornell, J. Doukas, T. R. Huang, and W. Naylor, A New Approach to
1308
+ Black Hole Quasinormal Modes: A Review of the Asymptotic Iteration Method, Adv. Math.
1309
+ Phys. 2012 (2012) 281705, [arXiv:1111.5024].
1310
+ [69] J. P. Boyd, Chebyshev & Fourier Spectral Methods, Courier Dover Publications.
1311
+ [70] A. Jansen, Overdamped modes in Schwarzschild-de Sitter and a Mathematica package for
1312
+ the numerical computation of quasinormal modes, Eur. Phys. J. Plus 132 (2017), no. 12
1313
+ 546, [arXiv:1709.09178].
1314
+ [71] J.-P. Wu and P. Liu, Quasi-normal modes of holographic system with Weyl correction and
1315
+ momentum dissipation, Phys. Lett. B 780 (2018) 616–621, [arXiv:1804.10897].
1316
+ [72] G. Fu and J.-P. Wu, EM Duality and Quasinormal Modes from Higher Derivatives with
1317
+ Homogeneous Disorder, Adv. High Energy Phys. 2019 (2019) 5472310,
1318
+ [arXiv:1812.11522].
1319
+ [73] G. Fu, D. Zhang, P. Liu, X.-M. Kuang, Q. Pan, and J.-P. Wu, Quasi-normal modes and
1320
+ Hawking radiation of a charged Weyl black hole, arXiv:2207.12927.
1321
+ [74] W. Xiong, P. Liu, C.-Y. Zhang, and C. Niu, Quasi-normal modes of the
1322
+ Einstein-Maxwell-aether Black Hole, arXiv:2112.12523.
1323
+ [75] P. Liu, C. Niu, and C.-Y. Zhang, Linear instability of charged massless scalar perturbation
1324
+ in regularized 4D charged Einstein-Gauss-Bonnet anti de-Sitter black holes, Chin. Phys. C
1325
+ 45 (2021), no. 2 025111.
1326
+ [76] P. Liu, C. Niu, and C.-Y. Zhang, Instability of regularized 4D charged
1327
+ 25
1328
+
1329
+ Einstein-Gauss-Bonnet de-Sitter black holes, Chin. Phys. C 45 (2021), no. 2 025104.
1330
+ [77] J. L. Jaramillo, R. Panosso Macedo, and L. Al Sheikh, Pseudospectrum and Black Hole
1331
+ Quasinormal Mode Instability, Phys. Rev. X 11 (2021), no. 3 031003, [arXiv:2004.06434].
1332
+ [78] J. L. Jaramillo, R. Panosso Macedo, and L. A. Sheikh, Gravitational wave signatures of
1333
+ black hole quasi-normal mode instability, arXiv:2105.03451.
1334
+ [79] K. Destounis, R. P. Macedo, E. Berti, V. Cardoso, and J. L. Jaramillo, Pseudospectrum of
1335
+ Reissner-Nordstr¨om black holes: Quasinormal mode instability and universality, Phys. Rev.
1336
+ D 104 (2021), no. 8 084091, [arXiv:2107.09673].
1337
+ [80] L. A. H. Mamani, A. D. D. Masa, L. T. Sanches, and V. T. Zanchin, Revisiting the
1338
+ quasinormal modes of the Schwarzschild black hole: Numerical analysis, arXiv:2206.03512.
1339
+ [81] V. Cardoso, A. S. Miranda, E. Berti, H. Witek, and V. T. Zanchin, Geodesic stability,
1340
+ Lyapunov exponents and quasinormal modes, Phys. Rev. D 79 (2009), no. 6 064016,
1341
+ [arXiv:0812.1806].
1342
+ [82] S.-W. Wei and Y.-X. Liu, Null Geodesics, Quasinormal Modes, and Thermodynamic Phase
1343
+ Transition for Charged Black Holes in Asymptotically Flat and dS Spacetimes, Chin. Phys.
1344
+ C 44 (2020), no. 11 115103, [arXiv:1909.11911].
1345
+ [83] K. Jusufi, Quasinormal Modes of Black Holes Surrounded by Dark Matter and Their
1346
+ Connection with the Shadow Radius, Phys. Rev. D 101 (2020), no. 8 084055,
1347
+ [arXiv:1912.13320].
1348
+ [84] H. Guo, H. Liu, X.-M. Kuang, and B. Wang, Acoustic black hole in Schwarzschild
1349
+ spacetime: quasi-normal modes, analogous Hawking radiation and shadows, Phys. Rev. D
1350
+ 102 (2020) 124019, [arXiv:2007.04197].
1351
+ [85] C. Liu, T. Zhu, Q. Wu, K. Jusufi, M. Jamil, M. Azreg-A¨ınou, and A. Wang, Shadow and
1352
+ quasinormal modes of a rotating loop quantum black hole, Phys. Rev. D 101 (2020), no. 8
1353
+ 084001, [arXiv:2003.00477]. [Erratum: Phys.Rev.D 103, 089902 (2021)].
1354
+ [86] R. Ling, H. Guo, H. Liu, X.-M. Kuang, and B. Wang, Shadow and near-horizon
1355
+ characteristics of the acoustic charged black hole in curved spacetime, Phys. Rev. D 104
1356
+ (2021), no. 10 104003, [arXiv:2107.05171].
1357
+ [87] L. Bombelli and E. Calzetta, Chaos around a black hole, Class. Quant. Grav. 9 (1992)
1358
+ 2573–2599.
1359
+ [88] N. J. Cornish and J. J. Levin, Lyapunov timescales and black hole binaries, Class. Quant.
1360
+ 26
1361
+
1362
+ Grav. 20 (2003) 1649–1660, [gr-qc/0304056].
1363
+ [89] K. Lin and W.-L. Qian, Echoes in star quasinormal modes using an alternative finite
1364
+ difference method, arXiv:2204.09531.
1365
+ [90] Z. Zhu, S.-J. Zhang, C. E. Pellicer, B. Wang, and E. Abdalla, Stability of
1366
+ Reissner-Nordstr¨om black hole in de Sitter background under charged scalar perturbation,
1367
+ Phys. Rev. D 90 (2014), no. 4 044042, [arXiv:1405.4931]. [Addendum: Phys.Rev.D 90,
1368
+ 049904 (2014)].
1369
+ [91] E. Abdalla, C. E. Pellicer, J. de Oliveira, and A. B. Pavan, Phase transitions and regions of
1370
+ stability in Reissner-Nordstr¨om holographic superconductors, Phys. Rev. D 82 (2010)
1371
+ 124033, [arXiv:1010.2806].
1372
+ [92] Z.-H. Yang, G. Fu, X.-M. Kuang, and J.-P. Wu, Instability of de-Sitter black hole with
1373
+ massive scalar field coupled to Gauss–Bonnet invariant and the scalarized black holes, Eur.
1374
+ Phys. J. C 82 (2022), no. 10 868, [arXiv:2112.15052].
1375
+ [93] LIGO Scientific, Virgo Collaboration, B. P. Abbott et al., Observation of Gravitational
1376
+ Waves from a Binary Black Hole Merger, Phys. Rev. Lett. 116 (2016), no. 6 061102,
1377
+ [arXiv:1602.03837].
1378
+ [94] R. A. Konoplya and A. Zhidenko, Quasinormal modes of black holes: From astrophysics to
1379
+ string theory, Rev. Mod. Phys. 83 (2011) 793–836, [arXiv:1102.4014].
1380
+ [95] K. D. Kokkotas and B. G. Schmidt, Quasinormal modes of stars and black holes, Living
1381
+ Rev. Rel. 2 (1999) 2, [gr-qc/9909058].
1382
+ [96] C. Gundlach, R. H. Price, and J. Pullin, Late time behavior of stellar collapse and
1383
+ explosions: 1. Linearized perturbations, Phys. Rev. D 49 (1994) 883–889, [gr-qc/9307009].
1384
+ [97] R. H. Price, Nonspherical Perturbations of Relativistic Gravitational Collapse. II.
1385
+ Integer-Spin, Zero-Rest-Mass Fields, Phys. Rev. D 5 (1972) 2439–2454.
1386
+ [98] R. H. Price, Nonspherical perturbations of relativistic gravitational collapse. 1. Scalar and
1387
+ gravitational perturbations, Phys. Rev. D 5 (1972) 2419–2438.
1388
+ [99] M. Lagos, P. G. Ferreira, and O. J. Tattersall, Anomalous decay rate of quasinormal modes,
1389
+ Phys. Rev. D 101 (2020), no. 8 084018, [arXiv:2002.01897].
1390
+ [100] A. Arag´on, P. A. Gonz´alez, E. Papantonopoulos, and Y. V´asquez, Anomalous decay rate of
1391
+ quasinormal modes in Schwarzschild-dS and Schwarzschild-AdS black holes, JHEP 08
1392
+ (2020) 120, [arXiv:2004.09386].
1393
+ 27
1394
+
1395
+ [101] A. Arag´on, R. B´ecar, P. A. Gonz´alez, and Y. V´asquez, Massive Dirac quasinormal modes
1396
+ in Schwarzschild–de Sitter black holes: Anomalous decay rate and fine structure, Phys. Rev.
1397
+ D 103 (2021), no. 6 064006, [arXiv:2009.09436].
1398
+ [102] R. D. B. Fontana, P. A. Gonz´alez, E. Papantonopoulos, and Y. V´asquez, Anomalous decay
1399
+ rate of quasinormal modes in Reissner-Nordstr¨om black holes, Phys. Rev. D 103 (2021),
1400
+ no. 6 064005, [arXiv:2011.10620].
1401
+ [103] P. A. Gonz´alez, E. Papantonopoulos, J. Saavedra, and Y. V´asquez, Quasinormal modes for
1402
+ massive charged scalar fields in Reissner-Nordstr¨om dS black holes: anomalous decay rate,
1403
+ JHEP 06 (2022) 150, [arXiv:2204.01570].
1404
+ [104] P. A. Gonz´alez, E. Papantonopoulos, A. Rinc´on, and Y. V´asquez, Quasinormal modes of
1405
+ massive scalar fields in four-dimensional wormholes: Anomalous decay rate, Phys. Rev. D
1406
+ 106 (2022), no. 2 024050, [arXiv:2205.06079].
1407
+ [105] R.Ruffini, in Black Holes: les Astres Occlus, Gordon and Breach Science Publishers.
1408
+ [106] V. Cardoso, Quasinormal modes and gravitational radiation in black hole spacetimes, other
1409
+ thesis, 12, 2003.
1410
+ [107] S. Chandrasekhar, The Mathematical Theory of Black Holes, Oxford University Press, New
1411
+ York.
1412
+ [108] V. Perlick, O. Y. Tsupko, and G. S. Bisnovatyi-Kogan, Influence of a plasma on the shadow
1413
+ of a spherically symmetric black hole, Phys. Rev. D 92 (2015), no. 10 104031,
1414
+ [arXiv:1507.04217].
1415
+ [109] H.-J. Blome and B. Mashhoon, Quasi-Normal Oscillations Of A Schwarzschild Black Hole,
1416
+ Phys. Lett. A110, 231.
1417
+ [110] M. S. Churilova, Analytical quasinormal modes of spherically symmetric black holes in the
1418
+ eikonal regime, Eur. Phys. J. C 79 (2019), no. 7 629, [arXiv:1905.04536].
1419
+ 28
1420
+
3tFAT4oBgHgl3EQfERy2/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
4NFKT4oBgHgl3EQf9C5Z/content/2301.11952v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cdbf45b8efa670b6cd5c9ad2af471284e1b70904496060274ba7d532704c9fb7
+ size 869962
4NFKT4oBgHgl3EQf9C5Z/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:22788fb4f60c9e862a3a5092a95ea02f1545a2388da7ab2f8bd125670ab72296
+ size 1769517
5dAyT4oBgHgl3EQfpfgA/content/tmp_files/2301.00524v1.pdf.txt ADDED
@@ -0,0 +1,4048 @@
1
+ In Quest of Ground Truth: Learning Confident Models and Estimating
2
+ Uncertainty in the Presence of Annotator Noise
3
+ Asma Ahmed Hashmi
4
+ Artem Agafonov
5
+ Aigerim Zhumabayeva
6
+ Mohammad Yaqub
7
+ Martin Takáˇc
8
+ Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
9
+ Masdar City, Abu Dhabi, UAE
10
+ https://mbzuai.ac.ae/
11
+ Abstract
12
+ The performance of the Deep Learning (DL) models de-
13
+ pends on the quality of labels. In some areas, the involvement
14
+ of human annotators may lead to noise in the data. When
15
+ these corrupted labels are blindly regarded as the ground
16
+ truth (GT), DL models suffer from performance deficiency.
17
+ This paper presents a method that aims to learn a confident
18
+ model in the presence of noisy labels. This is done in conjunc-
19
+ tion with estimating the uncertainty of multiple annotators.
20
+ We robustly estimate the predictions given only the noisy
21
+ labels by adding entropy or information-based regularizer
22
+ to the classifier network. We conduct our experiments on a
23
+ noisy version of MNIST, CIFAR-10, and FMNIST datasets.
24
+ Our empirical results demonstrate the robustness of our
25
+ method as it outperforms or performs comparably to other
26
+ state-of-the-art (SOTA) methods. In addition, we evaluated
27
+ the proposed method on the curated dataset, where the noise
28
+ type and level of various annotators depend on the input
29
+ image style. We show that our approach performs well and
30
+ is adept at learning annotators’ confusion. Moreover, we
31
+ demonstrate how our model is more confident in predicting
32
+ GT than other baselines. Finally, we assess our approach for
33
+ segmentation problem and showcase its effectiveness with
34
+ experiments.
35
+ 1. Introduction
36
+ Real world data is replete with noisy labels. Since the
37
+ labeling process of large-scale datasets is costly and time-
38
+ consuming, researchers often resort to less expensive options,
39
+ such as internet inquiries and crowdsourcing to circumvent
40
+ this issue [32,38]. Unfortunately, these methods are viable
41
+ in producing datasets with incorrect labels. Smaller datasets
42
+ are also vulnerable to the presence of corrupted labels. In
43
+ this case, usually the labelling process is either challenging
44
+ or the annotators have divergent opinions [3,21]. In medical
45
+ imaging, for example, it is imperative to procure annotations
46
+ from the clinical experts. However, it is not only expensive
47
+ to obtain annotated data, but it also suffers from high inter-
48
+ reader variability among domain’s experts [17,20].
49
+ Deep Neural Networks (DNN) noticeably suffer a degen-
50
+ eration in performance when trained on noisy labels. To
51
+ combat this issue, various algorithms have been devised to
52
+ adapt to the presence of noisy labels without compromis-
53
+ ing on the performance of DNNs. Sample Selection meth-
54
+ ods [10,12,22,34,40] started to gain momentum recently;
55
+ these methods involve a two network, Student-Teacher, for
56
+ learning from noisy labels. It uses a small loss trick to sam-
57
+ ple clean instances for additional training by its peer network.
58
+ While these methods aid in selecting the clean samples, the
59
+ small loss trick does not perform well when the loss distri-
60
+ bution of true-labelled and false-labelled examples overlap
61
+ substantially.
62
+ In the instance when there is a significant level of dispute
63
+ in the labels by the annotators, conventional training meth-
64
+ ods that consider such labels as "the truth" result in models
65
+ with low predictive ability. Tanno et al [31] proposed an
66
+ algorithm that jointly estimates the annotators’ confusion
67
+ and the underlying label distribution. The annotators’ con-
68
+ fusion is represented by a stochastic transition probability
69
+ matrix. In their approach, the loss function is augmented by
70
+ adding a regularization term that is the trace of annotators’
71
+ confusion matrix. However, the caveat is that this regular-
72
+ ization may still penalize in instances when the annotator is
73
+ not confused, therefore it will not learn the true annotator’s
74
+ noise distribution. Furthermore, there is no incentive in the
75
+ training process to enforce the classifier network to predict
76
+ the class probabilities.
77
+ Our work is inspired by [31, 41], with a motivation to
78
+ make our model confident in its predictions while also jointly
79
+ 1
80
+ arXiv:2301.00524v1 [cs.CV] 2 Jan 2023
81
+
82
+ estimating annotator’s confusion in the presence of noisy
83
+ labels. We explored entropy and information regularizer
84
+ techniques to encourage our classifier to make confident
85
+ predictions about each class.
86
+ Problem Statement. In this paper, we focus on supervised
87
+ learning problem with noisy labels. We assume that each
88
+ object xn, n = 1, . . . , N is assigned with a set of noisy
89
+ labels {˜y(r)
90
+ n }R
91
+ r=1, where ˜y(r)
92
+ n
93
+ is a label given to the object xn
94
+ by annotator R. Here N denotes the total number of samples
95
+ in the data, and R denotes the total number of annotators.
96
+ The main goal is to construct an algorithm that learns the
97
+ distribution of true labels p(y|x) and to make confident pre-
98
+ dictions about the classes. This is achieved in conjunction
99
+ with estimating annotator’s noise.
100
+ To achieve this, we use the classifier-annotator approach
101
+ [31]. We jointly train two neural networks: classifier and
102
+ annotator. The first network, the classifier, aims to learn
103
+ the ground truth/class true label. So it outputs the class
104
+ probability vector ˆpθ(x). The second network learns each
105
+ annotator’s confusion matrix U(x), which represents the
106
+ likelihood of the annotator being wrong in the class markup
107
+ for a given input.
108
+ However, it is not enough to minimize the loss between
109
+ matrix-vector product ˆUψ(x)ˆpθ(x) and annotator’s label ˜y
110
+ due to various reasons. First of all, there is no evidence why
111
+ annotator and classifier neural networks will learn confusion
112
+ matrix and class probabilities. There are infinite number
113
+ of pairs ( ˆUψ(x), ˆpθ(x)) that approximate ˜y well. Without
114
+ a modification of loss functions, it may turn out that they
115
+ just learn some features of inputs. Secondly, we want to be
116
+ confident in the predictions of the classifier. In evaluation
117
+ mode, this model will be used to make real-time predictions.
118
+ It is important to train the model in a such a way that it
119
+ makes confident and true predictions.
120
+ To tackle the aforementioned problems, we penalize the
121
+ classifier network for uncertainty. We propose two regular-
122
+ ization techniques based on Shannon’s entropy and infor-
123
+ mation. Our methodology for classification is summarized
124
+ in Figure 1. Moreover, we apply our methodology for seg-
125
+ mentation problem. In this case we make predictions and
126
+ estimate the confusion pixel-wise.
127
+ Contributions. The main contributions of our paper are
128
+ outlined as follows:
129
+ 1. Learning the ground truth label. Our approach is capa-
130
+ ble of disentangling the GT from the annotation noise. We
131
+ distinguish the noise through the usage of the annotator-
132
+ classifier methodology. We enforce the classifier network
133
+ to learn class probabilities, not some features of the in-
134
+ put, by regularizing its output via Shannons’ entropy and
135
+ information-based regularizer.
136
+ 2. Learning confident model. Our choice of regularization
137
+ technique is enforcing the classifier network to make con-
138
+ vincing predictions about the respective classes. This
139
+ CLASSIFIER
140
+ ANNOTATOR 2
141
+ Input
142
+ Annotators'
143
+ confusion
144
+ matrices
145
+ Annotators'
146
+ class
147
+ probabilities
148
+ Class
149
+ probabilities
150
+ Negative log
151
+ likelihood loss
152
+ ANNOTATOR 1
153
+ ANNOTATOR 3
154
+ or
155
+ Regularization
156
+ Regularization
157
+ Figure 1. Model architecture. We consider the problem with 4
158
+ classes and 3 annotators. Architecture consists of two neural net-
159
+ works: 1) classifier network predicts class probabilities ˆpθ(x), 2)
160
+ annotator NNs predict confusion matrix U (r)(x) for each annotator
161
+ r. Matrix-vector product U (r)(x)ˆpθ(x) estimates the annotator’s
162
+ prediction. Note, that 2nd annotator tends to confuse classes 1 and
163
+ 2. To jointly train two neural networks we minimize regularized
164
+ negative log likelihood loss (NLL). We propose two options for
165
+ regularization: information-based regularizer and entropy.
166
+ has various befitting practical applications in different
167
+ domains, including medical imaging. We use our regular-
168
+ izer to push the predicted probabilities of the first network
169
+ to be closer to 1 or 0 and to make the model to distinguish
170
+ between classes better.
171
+ 3. Competitive numerical experiments. We performed extensive
+ numerical experiments that compare our algorithm with other
+ SOTA baselines. We conducted experiments on the MNIST,
+ CIFAR-10 and FMNIST datasets to gauge the performance of our
+ algorithm in the presence of noisy labels. The noisy labels
+ were simulated using pairflip and symmetric noise. Our
+ experiments showed that our algorithm outperforms all the
+ evaluated baselines at higher noise levels such as pairflip
+ 45% and symmetric 50%. For smaller noise rates, we perform on
+ par with [10,31,34,35,40]. Moreover, we show better results
+ than the annotator-classifier setup with the trace regularizer
+ proposed in [30,31]. We also conduct experiments for
+ segmentation, where our model shows better accuracy and
+ confidence compared to trace regularization [41].
+ 4. Curated dataset. We also executed experiments on a curated
+ dataset, where the noise type and level for the various
+ annotators depend on the input image style. The proposed
+ approach with our choice of regularizer results in a more
+ confident model compared to the one without the regularizer.
+ Moreover, we show that our approach is able to learn the true
+ annotators' confusion.
+ 5. Open code. Our code is available online. Our implementation
+ includes a suite that easily allows researchers to compare
+ their approach against all benchmarks considered in this
+ paper.
+ Organization. The remainder of the paper is organized as
+ follows. Related works are described in Section 2. Section 3
+ presents the methodology and the probabilistic model behind
+ it. In Section 4 we describe the proposed regularizers.
+ Numerical experiments are provided in Section 5 (additional
+ experiments are provided in Appendix C). Section 6 is
+ dedicated to the segmentation problem, and concluding remarks
+ and potential future research directions are given in
+ Section 7.
+ 2. Related Literature
+ Learning with noisily labelled training data has been an
+ active area of research for some time. Various algorithms have
+ been introduced and have shown resistance to noise during
+ training. We highlight the core research in this domain.
+ 2.1. Classification
+ Noise Transition Matrix/Loss Correction. Loss correction
+ using a noise transition matrix T is a crucial branch of
+ methods in deep learning systems. The goal of loss correction
+ is for training on noisy labels with the corrected loss to be
+ roughly equivalent to training on clean labels with the
+ original loss. The majority of the early approaches dealing
+ with noisy labels relied on estimating a noise transition
+ matrix to figure out how labels switch across classes.
+ Patrini et al. [25] introduced two approaches for loss
+ correction using a stochastic matrix T that delineates the
+ probability of one class being flipped with another under a
+ certain noise: forward correction and backward correction.
+ The backward procedure corrects the loss by multiplying it
+ with the inverse transition matrix T^{-1}, while the forward
+ procedure corrects the network predictions by multiplying
+ them with T.
+ Hendrycks et al. [11] suggested Gold Loss Correction (GLC),
+ based on forward correction, to address extreme noise. The
+ transition matrix cannot be accurately estimated from noisy
+ data alone when significant noise is present; the main driver
+ is the assumption that a limited portion of the training data
+ is reliable and accessible.
+ Sukhbaatar et al. [30] demonstrated a method of forward loss
+ correction by introducing a stochastic matrix that quantifies
+ label corruption and cannot be calculated without access to
+ the true labels. To incorporate learning about the label
+ noise, forward loss correction adds a linear layer to the end
+ of the model and adjusts the loss accordingly.
+ Through the use of soft and hard bootstrapping, Reed et al.
+ added the concept of consistency to the prediction objective
+ [26]. The soft version is identical to softmax regression
+ with minimum-entropy regularization, whereas the hard version
+ adjusts regression targets by employing MAP estimation. This
+ bootstrapping process, intuitively, gives the learner the
+ opportunity to contest an inconsistent training label and
+ re-label the training data to enhance label quality.
+ Goldberger et al. [9], in turn, made use of the
+ expectation-maximization (EM) algorithm to determine the
+ optimal network and noise parameters. The use of transition
+ matrices has been investigated further [4,7,36].
+ Multi-Network Learning. Multi-network training frequently
+ employs collaborative learning and co-training: the sample
+ selection procedure is governed by the mentor network in the
+ case of joint learning and by the peer network in the case of
+ co-training. These algorithms can be described as
+ learning-to-teach methods, and they comprise a student and a
+ teacher network. The responsibility of the teacher network is
+ to select more informative samples for enhanced student
+ network training. Malach et al. proposed the decoupling
+ method, which simultaneously trains two DNNs while only
+ updating parameters on samples for which the two classifiers
+ disagree [22].
+ MentorNet [12] first trains the teacher network, which then
+ selects clean instances to guide the training of the student
+ network. Co-teaching [10] and [40] also employ two DNNs, but
+ each DNN selects a certain number of small-loss examples and
+ feeds them to its peer DNN for additional training.
+ Co-teaching+ additionally utilizes the disagreement strategy
+ of Decoupling. In comparison, JoCoR [34] reduces the diversity
+ of the two networks by means of co-regularization, making the
+ predictions of the two networks more similar.
+ Robust Regularization. Tanno et al. [31] showcased a method
+ for simultaneously learning the individual annotator model
+ and the underlying true label distribution. Each annotator's
+ confusion is represented by a confusion matrix, which is
+ estimated jointly with the classifier predictions; the loss
+ function includes a trace regularization term. Menon et al.
+ [23] suggest composite loss-based gradient clipping for label
+ noise robustness: clipping is expected to provide noise
+ robustness, given that one does not place excessive trust in
+ any single sample. Robust early-learning [35] distinguishes
+ between critical and non-critical parameters for fitting
+ clean and corrupted labels, respectively; only non-critical
+ updates are penalized with a different update rule.
+ Other Deep Learning/Statistical Methods. DivideMix [18] is a
+ framework that splits the training data into a labeled set
+ with clean samples and an unlabeled set with noisy samples
+ (samples whose labels are noisy); it trains the model on both
+ the labeled and unlabeled data in a semi-supervised manner.
+ Yi et al. [39] proposed a probabilistic end-to-end noise
+ correction in labels (PENCIL) framework. This method only
+ uses the noisy labels to initialize label distributions; the
+ label distributions are then updated by an iterative
+ correction of the noisy labels. Consequently, label
+ distributions are used in the calculation of the network loss
+ instead of the noisy labels. Xia et al. [35] suggested a
+ robust early-training method to diminish the side effect of
+ noisy labels prior to early stopping, which helps to improve
+ the memorization of clean labels. The parameters are split
+ into critical and non-critical parameters, each updated with
+ a different update rule.
+ 2.2. Segmentation
+ Several strategies have been developed to address
+ annotator-related bias for segmentation in medical imaging.
+ We review some prominent work in the field.
+ Inter-reader variability among annotators gave prominence to
+ the Simultaneous Truth and Performance Level Estimation
+ (STAPLE) [33] algorithm, which uses the
+ expectation-maximization method to merge segmentations from
+ various annotators into an estimate of a single ground truth.
+ Several algorithms drew their inspiration from the STAPLE
+ framework, such as [1,2,13,14,29]. These methods are
+ reflective of generative modelling of annotator behaviour:
+ the latent variables are the unobserved true labels and the
+ confidence/expertise of the various annotators.
+ Mirikharaji et al. [24] provide a sample re-weighting
+ strategy that considers the expertise level of annotators,
+ giving greater weight in the loss function to the samples
+ annotated by professionals. To disentangle annotator bias,
+ Tanno et al. [41] use two coupled CNNs. Similar to [31], the
+ CNN for segmentation estimates the label distribution, while
+ the CNN for annotation represents the annotator bias using a
+ confusion matrix.
+ Annotation distribution learning has been another active area
+ that inspired the pioneering work of the probabilistic U-Net
+ (PU-NET) [15]. Given an input, this method examines the
+ problem of learning a distribution over segmentations. The
+ proposed architecture is a generative segmentation model that
+ integrates U-Net [27] and conditional variational
+ autoencoders (VAE), and is effective in developing an
+ extensive number of conceivable hypotheses/segmentation
+ results.
+ 3. Methodology
+ 3.1. Probabilistic Model For Noisy Labels
+ Let X denote the space that contains a set of input data
+ X := {x1, . . . , xN}. Each object x in the input data is
+ assigned a corresponding label y such that
+ Y := {y1, y2, . . . , yN} ⊆ Y, where Y is the space of labels.
+ We synthetically induce noise in our original label set Y to
+ corrupt the clean labels. There are multiple ways through
+ which we create the noisy labels for our data, namely the
+ symmetric and pairflip noise types. In Section 5.1 and in the
+ Appendix, we discuss in detail the mainstream noise types
+ used to create noisy labels for the datasets utilized in this
+ paper.
+ We denote the set of noisy labels given by annotator r, who
+ labels objects from the set X, as
+ ˜Y(r) = {˜y1(r), . . . , ˜yN(r)}, where r = 1, . . . , R. Our
+ objective is to jointly estimate the annotator noise as a
+ function of the input x, as well as to estimate the
+ distribution of the latent GT label from the noisy dataset
+ D = {X, ˜Y(1), . . . , ˜Y(R)}. In our architecture we add an
+ entropy/information-based regularization term to the main
+ loss function. The goal is to enforce our algorithm to make
+ confident predictions while also learning the true labels.
+ Following the strategy of [31,41], we now show how to set up
+ a probabilistic model for data that has been annotated by
+ multiple sources.
+ To model annotator-specific characteristics, some pivotal
+ factors are to be considered. In modelling multiple
+ annotators, it is common to assume that annotators label an
+ input data point xi independently, according to their
+ expertise and experience. The precision of an annotator's
+ labeling may depend on the properties of the data point
+ itself. Thus, we do not assume that annotators are equally
+ competent (or incompetent) at labeling all the data; rather,
+ their competence depends on the input they observe. This can
+ be represented as a probabilistic model for the random
+ variables x, y, and ˜y. Following the work of [31,38], we
+ describe the joint conditional distribution of our
+ probabilistic model as:
+ P(˜Y(1), . . . , ˜Y(R), Y | X) = ∏_{i=1}^N p(yi|xi) ∏_{r=1}^R p(˜yi(r)|xi, yi).
+ Here p(yi|xi) represents the distribution of the clean labels
+ of the data samples. The conditional distribution
+ p(˜yi(r)|xi, yi) signifies that the model estimates a noisy
+ version of the clean labels, represented as ˜yi(r), for each
+ annotator r. This makes intuitive sense, as the noisy labels
+ are conditional not only on the true latent labels but also
+ on the input data: annotators are likely to label some
+ instances xi with more precision than other samples. Since
+ the annotators' noise depends on the sample x, we can model
+ the noisy label distribution as
+ p(˜y(r) = j | y = i, x) =: u(r)_{j,i}(x). We denote by U(x) a
+ C × C confusion matrix [U(x)]_{j,i} = u_{j,i}(x), where C is
+ the number of classes for the true labels, y ∈ {1, . . . , C}.
+ Using the confusion U(x), we can express the probability that
+ input data x, originally labelled as i, is mislabelled as j
+ in the set of noisy data:
+ p(˜y = j | x) = ∑_{i=1}^C p(˜y = j | y = i, x) · p(y = i | x)
+               = ∑_i u_{j,i}(x) · p(y = i | x).    (1)
+ To represent the joint probability distribution of noisy
+ labels using the confusion matrix of each annotator r, we can
+ extend (1) as:
+ p(˜y(1), . . . , ˜y(R) | x) = ∏_{r=1}^R ∑_{y=1}^C u(r)_{˜y(r),y}(x) · p(y|x).
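To make Eq. (1) concrete, the noisy-label distribution of one annotator is a matrix-vector product of a confusion matrix with the clean class probabilities. The sketch below is our illustration, not the paper's code: it uses a fixed 3-class confusion matrix, whereas in the paper U(r) depends on the input x.

```python
import numpy as np

# Hypothetical annotator with C = 3 classes who confuses classes 0 and 1.
# Rows are indexed by the noisy label j, columns by the true label i,
# so U[j, i] = p(y_tilde = j | y = i); each column sums to one.
U = np.array([
    [0.8, 0.3, 0.0],
    [0.2, 0.7, 0.0],
    [0.0, 0.0, 1.0],
])
p_clean = np.array([0.6, 0.3, 0.1])  # classifier estimate p(y = i | x)

# Eq. (1): p(y_tilde = j | x) = sum_i U[j, i] * p(y = i | x)
p_noisy = U @ p_clean
print(p_noisy)
```

Because the columns of U each sum to one, the result is again a valid probability distribution over noisy labels.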
+ 3.2. Jointly Optimizing the Two Networks to Estimate the
+ Ground Truth and Confusion
+ We minimize the negative log-likelihood (NLL) to jointly
+ optimize the parameters θ and ψ of the classification and
+ annotator networks, respectively. Given the data that
+ comprises training inputs and noisy labels, we minimize the
+ negative log-likelihood between the observed noisy labels and
+ the predictions from the annotator label distribution:
+ − log p(˜Y(1), . . . , ˜Y(R) | X) = ∑_{i=1}^N ∑_{r=1}^R NLL(ˆU(r)ψ(xi) ˆpθ(xi), ˜yi(r)),
+ where ˆpθ(xi) is the output of the classification network and
+ ˆU(r)ψ(x) is the output of annotator r's network. Minimizing
+ this loss function alone can cause several problems. First,
+ it does not ensure that the predictions of the classification
+ network are class probabilities: the network can learn
+ arbitrary features of the inputs in order to minimize the NLL
+ loss between the pipeline output ˆU(r)ψ(x)ˆpθ(x) and the
+ corrupted labels ˜y(r). Second, there is no guarantee that
+ the annotator matrices ˆU(r)ψ(x) are correctly learned to
+ distinguish the noise from the true labels; they can likewise
+ learn uninterpretable features of the inputs x such that
+ ˆU(r)ψ(x)ˆpθ(x) is close to ˜y(r).
+ To tackle these problems, we add a regularization term
+ attached to the base classifier, which helps in estimating
+ the true class probabilities of the predicted ground truth.
+ The main NLL loss is then jointly optimized with a
+ regularization term R(ˆpθ(x)). We propose two options for
+ regularization: entropy regularization
+ R(p) = − ∑_i pi log pi and information-based regularization
+ R(p) = − log max_i pi. The combined loss is then given as:
+ L(θ, ψ) = ∑_{i=1}^N ∑_{r=1}^R NLL(ˆU(r)ψ(xi) ˆpθ(xi), ˜yi(r)) + λ (1/N) ∑_{i=1}^N R(ˆpθ(xi)).
+ Our classification network learns the features of our data
+ and gives us an estimate of the ground truth. The outputs of
+ this network are probabilities of dimension B × C, where B is
+ the batch size and C is the number of classes. Ideally, we
+ want the predictions of the classifier network to be pushed
+ to 1 for the most probable class and 0 elsewhere. In
+ practice, the regularization term is averaged over the batch
+ samples and multiplied by the regularization parameter λ.
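The combined objective can be sketched in plain numpy. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the classifier outputs and confusion matrices are passed in as arrays rather than produced by the two networks, and the entropy option is used for R.

```python
import numpy as np

def nll(pred, label):
    """Negative log-likelihood of the observed noisy label under pred."""
    return -np.log(pred[label] + 1e-12)

def entropy_reg(p):
    """Entropy regularizer R(p) = -sum_i p_i log p_i."""
    return -np.sum(p * np.log(p + 1e-12))

def combined_loss(p_hat, U_hats, noisy_labels, lam=0.01):
    """Regularized NLL over N samples and R annotators.

    p_hat:        (N, C) classifier probabilities p_theta(x_i)
    U_hats:       (R, N, C, C) annotator confusion matrices U_psi^(r)(x_i)
    noisy_labels: (R, N) integer noisy labels y_tilde_i^(r)
    """
    N = p_hat.shape[0]
    R = U_hats.shape[0]
    loss = 0.0
    for i in range(N):
        for r in range(R):
            # annotator's predicted distribution over noisy labels
            pred = U_hats[r, i] @ p_hat[i]
            loss += nll(pred, noisy_labels[r, i])
    # regularizer averaged over the batch, scaled by lambda
    loss += lam * np.mean([entropy_reg(p_hat[i]) for i in range(N)])
    return loss
```

With identity confusion matrices, a confident and correct classifier attains a lower combined loss than a uniform one, which is exactly the behavior the regularizer encourages.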
+ 4. Confident Regularization
+ In this section, we explain in detail the motivation for the
+ choice of our regularizer. We use entropy and
+ information-based regularizers with the first network to
+ enhance the predictions of our model in learning the ground
+ truth.
+ 4.1. Entropy Regularizer
+ Entropy is regarded as a measure for gauging uncertainty:
+ the higher the entropy, the more disordered the state.
+ Shannon [28] mathematically described entropy as:
+ R(p) := − ∑_i pi log pi = E[− log p],
+ where pi denotes the i-th class probability. Note that
+ entropy is a feasible choice as it is a smooth function: when
+ pi = 0 the function is still differentiable, since 0 log 0 is
+ defined as lim_{pi→0} pi log pi = 0.
+ 4.2. Information Regularizer
+ We also evaluated our experiments with another regularizer,
+ which resembles the information part of entropy. The
+ motivation behind this regularizer is the same as for the
+ entropy regularizer in Section 4.1. It is expressed as:
+ R(p) = min_i (− log pi).    (2)
+ This regularizer would also push the classifier to make
+ confident predictions. The caveat in using this regularizer
+ is that it becomes undefined when pi = 0. To counter that, we
+ rewrite the regularizer function as:
+ R(p) = − log(max_i pi).
+ We show that it achieves results similar to the entropy
+ regularizer. The advantage of the entropy regularizer is that
+ it is a smooth function, unlike (2).
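The two regularizers can be compared numerically: each vanishes on a one-hot (fully confident) prediction and is maximal, equal to log C, on the uniform one. The sketch below uses our own conventions and is not the paper's code.

```python
import numpy as np

def entropy_reg(p):
    # R(p) = -sum_i p_i log p_i, using the convention 0 * log 0 = 0
    p = np.asarray(p, dtype=float)
    return float(-np.sum(np.where(p > 0, p * np.log(p), 0.0)))

def info_reg(p):
    # R(p) = -log(max_i p_i), the rewritten form of min_i(-log p_i)
    return float(-np.log(np.max(p)))

one_hot = [1.0, 0.0, 0.0, 0.0]
uniform = [0.25, 0.25, 0.25, 0.25]

print(entropy_reg(one_hot), info_reg(one_hot))  # both are (numerically) zero
print(entropy_reg(uniform), info_reg(uniform))  # both equal log 4
```

On intermediate distributions the two differ: entropy penalizes the full shape of the distribution, while the information regularizer looks only at the largest coordinate.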
+ 4.3. Motivation for Confident Regularization
+ As mentioned before, it is not enough to minimize the loss
+ between ˆUψ(x)ˆpθ(x) and ˜y. Indeed, let U be the true
+ confusion of the annotator and Pθ(x) the confusion matrix of
+ the classifier. Ideally, we want Pθ(x) = I and ˆUψ(x) = U.
+ However, there can be many pairs (ˆUψ(x), Pθ(x)) that satisfy
+ ˆUψ(x)Pθ(x) = U. Therefore, we add an entropy regularizer to
+ enforce the convergence of Pθ to I.
+ Theorem 1. Assume that the classifier is confident, i.e.
+ ˆpθ = ei if y = i, where ei is the basis vector of the i-th
+ coordinate. Then, minimizing the NLL loss between
+ ˆU(r)ψ(x)ˆpθ(x) and ˜y(r) over ˆU(r)ψ(x), we get
+ [ˆU(r)ψ(x)]_{j,i} = p(˜y(r) = j | y = i, x).
+ In Tables 1, 2, and 3, we show the performance of our
+ algorithm with the entropy and information-based regularizers
+ on CIFAR-10, MNIST, and FMNIST for the symmetric and pairflip
+ noise types. The theoretical comparison between the two
+ regularizers is further discussed in the Appendix.
+ 5. Classification Experiments
+ 5.1. Implementation Details
+ In this section we describe the implementation details. We
+ used a convolutional neural network (CNN) as a classifier
+ model which estimates the ground truth. The predictions of
+ the classifier network are multiplied by the outputs of a
+ fully connected annotator network that learns the confusion
+ of the noisy labels. True labels are never introduced to the
+ model during training. In our experiments, we synthetically
+ introduce noise to the training data at various noise rates:
+ 20%, 30%, 45% and 50%.
+ We evaluate the performance of our algorithm through the
+ classifier network, as it aids in estimating the GT by making
+ confident predictions about the true class. We are
+ particularly interested in the performance of the classifier
+ because, at the evaluation stage, this network is used
+ separately to make predictions.
+ Baselines. We compare our algorithm with the following
+ approaches: (i) Co-teaching [10], which simultaneously trains
+ two DNN models, with each network selecting the batch of data
+ for the other based on the instances with a small loss.
+ (ii) Co-teaching+ [40], which also employs samples with small
+ loss but requires disagreement about predictions; this is the
+ selection criterion for the networks to pick data for each
+ other. (iii) JoCoR [34], which extends the idea of [10,40]
+ but uses co-regularization to minimize the diversity of the
+ two networks, thus bringing the predictions of the two
+ networks closer together. (iv) Robust early-learning (CDR)
+ [35], which categorizes the parameters as critical and
+ non-critical for clean and noisy label fitting, respectively,
+ and applies a different update rule to each. (v) Annotator
+ Confusion (Trace) [31], a regularized approach that assumes
+ the existence of various annotators to simultaneously learn
+ the individual annotator model and the underlying true label
+ distribution, using only noisy observations.
+ We used the standard benchmark datasets:
593
+ MNIST [8], FMNIST [37], and CIFAR-10 [16] to demon-
594
+ strate the effectiveness of our methodology.
595
+ Types of Noise. The noise types, used in the experiments,
596
+ are described below.
597
+ 1. Pairflip Noise. The pairflip noise involves swapping the
598
+ labels of two adjacent categories/classes based on a preset
599
+ ratio. [19]
600
+ 2. Symmetric Noise. In symmetric noise, a portion of the
601
+ original labels are retained, while the remainder are uni-
602
+ formly reassigned to all other categories [21]. This noise
603
+ type is intended to imitate the random noise in the actual
604
+ world, which is typically the result of random web crawl-
605
+ ing or manual annotation errors. It does not consider the
606
+ similarities between classes.
607
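The two noise types can be sketched as simple label corruptions. This is our illustrative implementation; the function names and the use of numpy's Generator API are our own choices, not the paper's code.

```python
import numpy as np

def pairflip_noise(labels, num_classes, rate, rng):
    """With probability `rate`, flip a label to the adjacent (next) class."""
    labels = labels.copy()
    flip = rng.random(labels.shape) < rate
    labels[flip] = (labels[flip] + 1) % num_classes
    return labels

def symmetric_noise(labels, num_classes, rate, rng):
    """With probability `rate`, reassign a label uniformly to another class."""
    labels = labels.copy()
    flip = rng.random(labels.shape) < rate
    # draw a uniformly random *different* class via an offset in 1..C-1
    offsets = rng.integers(1, num_classes, size=flip.sum())
    labels[flip] = (labels[flip] + offsets) % num_classes
    return labels

rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=10_000)
y_pf = pairflip_noise(y, 10, 0.45, rng)
y_sym = symmetric_noise(y, 10, 0.50, rng)
print((y != y_pf).mean(), (y != y_sym).mean())  # close to 0.45 and 0.50
```

Note that pairflip only ever moves a label to one specific neighboring class, while symmetric noise spreads it over all the other classes.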
+ 5.2. Comparison with the State of the Art
+ Results on MNIST. We used the same backbone architecture to
+ compare our algorithm against the baselines. Table 1 shows
+ the performance comparison of our algorithm with the other
+ methods. For smaller noise rates such as 20% and 30%,
+ evidently the least challenging cases, all algorithms show
+ comparable performance above 97% for both pairflip and
+ symmetric noise. However, when the noise rate increases to
+ 45% or above, a distinct contrast in performance emerges, as
+ the accuracy of some methods visibly declines to below 90% in
+ the case of pairflip noise. Our method achieves an accuracy
+ of 99.10% for pairflip 45% noise using the entropy
+ regularizer, followed closely by the Trace method with an
+ accuracy of 97.95%, whereas Co-teaching, JoCoR and CDR
+ achieve test accuracies of 87.63%, 85.86% and 87.04%,
+ respectively. For symmetric 50% noise, we obtain a test
+ accuracy of 98.94% with the information regularizer, with
+ Trace and CDR following closely behind at 98.87% and 97.72%,
+ respectively. The results of our methodology with the
+ entropy and information-based regularizers are comparable.
+ Results on CIFAR-10. Table 2 shows the test accuracy results
+ on the CIFAR-10 dataset. Our algorithm performs distinctly
+ better when the noise gets extreme; we achieve 80.03%
+ accuracy for symmetric 50% noise with the information
+ regularizer, surpassing all the baselines. For pairflip 45%,
+ we distinctly outperform all the baselines by a considerable
+ margin: we obtain an accuracy of 83.43%, which is about 8%
+ better than the Trace method, while all the other baselines
+ achieve accuracies below 70%. This reinforces that for higher
+ noise ratios our algorithm consistently gives better
+ performance, as the entropy and information regularization
+ strategy helps the model to be more certain in its
+ predictions.
+ Our algorithm also leads when the noise rate is small for
+ both symmetric and pairflip noise types. For symmetric noise
+ 20% and 30%, we achieved accuracies of 84.22% and 83.85%,
+ respectively. The other algorithms contested closely for
+ symmetric 20% noise, with test accuracies of 82.86% for
+ Trace, 82.82% for Co-teaching, 81.12% for JoCoR and 81.01%
+ for CDR, and with Co-teaching+ settling at 79.51%.
+ For pairflip noise 20% and 30%, we again outperform the
+ other methods with accuracies of 84.92% and 84.5%,
+ respectively. Here, CDR and Trace follow closely behind with
+ accuracies of 82.89% and 83.86%, respectively, for pairflip
+ 20% noise. For pairflip 30%, CDR attained an accuracy of
+ 82.08%, while Trace achieved 83.15%.
+ Results on FMNIST. The experimental results of our algorithm
+ compared with the other baselines are shown in Table 3. Our
+ algorithm shows robust performance across most baselines.
+ All algorithms perform comparably when the noise rate is 20%
+ or 30% for both the symmetric and pairflip noise types; the
+ performance differences appear when the noise gets extreme.
+ At symmetric 50% noise, we perform about 12% better
+ than the Co-teaching+ algorithm, while we outperformed CDR
+ Table 1. Test accuracy (%) on MNIST dataset.
+ Noise rate       Ours-Inf  Ours-Ent  Co-tea.  Co-tea.+  JoCoR  Trace  CDR
+ symmetric 20%    99.20     99.48     99.01    98.88     98.82  99.16  98.97
+ symmetric 30%    99.11     99.09     98.78    98.38     98.40  99.01  98.75
+ symmetric 50%    98.94     98.93     92.24    95.26     96.83  98.87  97.72
+ pairflip 20%     99.08     99.55     98.84    98.59     98.89  99.13  98.88
+ pairflip 30%     98.94     99.54     98.57    97.95     98.56  99.08  98.50
+ pairflip 45%     98.77     99.10     87.63    71.36     85.86  97.95  87.04
+ Table 2. Test accuracy (%) on CIFAR-10 dataset.
+ Noise rate       Ours-Inf  Ours-Ent  Co-tea.  Co-tea.+  JoCoR  Trace  CDR
+ symmetric 20%    84.22     84.00     81.82    79.51     82.12  82.86  81.01
+ symmetric 30%    83.85     83.26     80.69    79.29     80.95  80.45  78.90
+ symmetric 50%    80.03     79.64     75.74    73.19     76.60  77.82  69.68
+ pairflip 20%     84.92     84.78     81.17    79.59     81.86  83.86  82.89
+ pairflip 30%     84.36     84.54     79.53    77.83     79.52  83.15  82.08
+ pairflip 45%     83.43     81.23     59.04    47.72     67.59  75.88  58.56
+ Table 3. Test accuracy (%) on FMNIST dataset.
+ Noise rate       Ours-Inf  Ours-Ent  Co-tea.  Co-tea.+  JoCoR  Trace  CDR
+ symmetric 20%    90.67     90.79     90.48    88.69     91.88  90.61  88.69
+ symmetric 30%    91.35     90.34     90.36    88.50     91.33  89.64  87.38
+ symmetric 50%    89.51     89.49     89.37    77.96     89.21  88.94  85.36
+ pairflip 20%     90.90     90.77     90.68    89.12     91.37  90.40  90.01
+ pairflip 30%     90.38     90.65     90.11    89.06     89.67  90.33  88.78
+ pairflip 45%     89.37     89.02     78.86    52.61     88.10  89.08  64.63
+ by about 4%. We surpassed the other baselines by small
+ margins for this noise instance. For pairflip 45%, we
+ performed significantly better than the Co-teaching+ and CDR
+ algorithms, which achieved accuracies of 52.61% and 64.63%,
+ respectively. The Trace algorithm comes second in performance
+ with 89.08% accuracy, followed closely by JoCoR at 88.10%.
+ Two-network architectures such as Co-teaching, Co-teaching+,
+ and JoCoR suffer in performance as the noise level increases
+ for both the symmetric and pairflip noise types. Trace comes
+ closest to our algorithm, but we outperform it in all
+ experiments. The entropy and information-based regularizers
+ perform on par with each other.
+ 5.3. Curated Dataset
+ We assembled a dataset based on MNIST in which the noise
+ level depends on the input image style for the various
+ annotators. Three types of image styles were simulated by
+ performing morphological transformations (in particular,
+ thinning and thickening) on the original images using the
+ Morpho-MNIST software [5]. In addition to the noise types
+ described in Section 5.1, asymmetric noise and pairflip with
+ permutation were applied. In the latter, the ordered label
+ categories are first permuted randomly, and the labels of two
+ categories adjacent after permutation are swapped based on a
+ preset ratio. Asymmetric noise is a block matrix
+ transformation in which a portion of the original labels is
+ retained and the remainder is uniformly reassigned to the
+ closest four categories. The type and level of the noise
+ applied to the original labels are provided in Table 4.
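Pairflip with permutation can be sketched as follows. This is our illustrative implementation, not the curation code used for the dataset.

```python
import numpy as np

def pairflip_permutation_noise(labels, num_classes, rate, rng):
    """Pairflip after randomly permuting the ordered label categories.

    A label is flipped (with probability `rate`) to the class that
    follows it in a random permutation of the categories.
    """
    perm = rng.permutation(num_classes)   # random ordering of the classes
    pos = np.argsort(perm)                # position of each class in perm
    nxt = perm[(pos + 1) % num_classes]   # class adjacent after permutation
    labels = labels.copy()
    flip = rng.random(labels.shape) < rate
    labels[flip] = nxt[labels[flip]]
    return labels

rng = np.random.default_rng(1)
y = rng.integers(0, 10, size=5_000)
y_noisy = pairflip_permutation_noise(y, 10, 0.40, rng)
print((y != y_noisy).mean())  # close to 0.40
```

Unlike plain pairflip, the "adjacent" class here is determined by the random permutation rather than by the original label order.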
+ For a dataset consisting of three different types of images
+ (original, thin, and thick) and three different annotators
+ (Table 4), we compare (i) a classifier model without
+ annotators and regularization, (ii) our approach without
+ regularization,
+ Table 4. Annotator Information for three different styles (MNIST).
+ Annotators    Original                        Thin                            Thick
+ Annotator 1   symmetric 80%                   asymmetric 40%                  pairflip 95%
+ Annotator 2   pairflip with permutation 40%   symmetric 95%                   asymmetric 70%
+ Annotator 3   pairflip 60%                    pairflip with permutation 40%   symmetric 80%
+ (a) Training accuracy    (b) Training entropy
+ Figure 2. Accuracy and entropy for Curated MNIST training
+ data. Compared: the model without annotators and λ=0, our
+ approach (λ=0), and our approach (λ=0.01, m=2).
+ (a) Testing accuracy    (b) Testing entropy
+ Figure 3. Accuracy and entropy for Curated MNIST testing
+ data. Compared: the model without annotators and λ=0, our
+ approach (λ=0), and our approach (λ=0.01, m=2).
[Figure 4 panels: for Annotator1 (Thin), Annotator2 (Original) and Annotator3 (Thick), the original 10 × 10 confusion matrix alongside the confusion matrices predicted by our approach with and without the regularizer.]
Figure 4. Original and Predicted confusion for different Annotators using different models: our approach with regularizer (λ = 2, m=1.5) and without it (λ = 0). (MNIST).
and (iii) our approach with information-based regularization. Each annotator NN has a similar architecture to the classifier model and takes images as input. Everything else is the same as described in Section 3.

The results of the experiments can be seen in Figures 2 and 3. Our approach is more accurate and more confident than the classifier model. The accuracy of our approach with the regularizer is higher, and its predictions are more confident, than those of the model without the regularizer. This observation holds for both training and testing data. The proposed approach is able to learn the annotators' confusion. Predicted confusion matrices for each annotator and different image types are provided in
Figure 4. More results can be found in Appendix C.

Table 5. Test DICE (%) and entropy evaluation on MNIST dataset for segmentation.

Metrics    Ours-Inf   Trace
DICE       96.97      96.62
Entropy    0.0453     0.0696
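The entropy curves in Figures 2 and 3 (and the entropy row of Table 5) use the Shannon entropy of the model's predictive distribution as a confidence proxy: lower entropy means more peaked, more confident predictions. A minimal sketch, assuming `probs` holds softmax outputs of shape (batch, classes):

```python
import numpy as np

def mean_prediction_entropy(probs, eps=1e-12):
    """Average Shannon entropy (in nats) of a batch of softmax outputs.
    Low entropy = confident (peaked) predictions."""
    p = np.clip(probs, eps, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

confident = np.array([[0.98, 0.01, 0.01]])
uncertain = np.array([[1 / 3, 1 / 3, 1 / 3]])
low, high = mean_prediction_entropy(confident), mean_prediction_entropy(uncertain)
```

The uniform distribution attains the maximum entropy log C, which is why confident models sit far below it on the entropy axis.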
6. Segmentation Experiments

We explored the performance of our algorithm with information-based regularization for segmentation. The whole approach is the same as for classification, but the predictions are now pixel-wise. Holistically, we followed the same idea in both the classification and segmentation settings. The inputs of the model are the original MNIST images with Gaussian noise. Annotators (thin, thick, fractured) were simulated using morphological transformations [5] as mentioned in Section 5.3. The details of the MNIST segmentation dataset are provided in Appendix B.2.

For our method, we used the same model architecture as in [41], which is implemented as a U-Net [27] with multiple output layers: the first predicts the true segmentation and the second predicts the noisy segmentations. We compared our method, which has an information-based regularizer, with the trace-regularized approach [41].
6.1. Results

Table 5 shows the accuracy of our method in comparison with the trace approach, along with the entropy calculated for both methods. Our model achieves a better DICE similarity score and is more confident. Furthermore, in Figure 5, we visualised the predictions of our method. For an input image with Gaussian noise, our algorithm produces excellent predictions of the true segmentation, given the annotators, as it closely matches the GT image.
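The DICE similarity score used in Table 5 is the standard overlap measure 2|A ∩ B| / (|A| + |B|) between a predicted and a ground-truth mask; a minimal sketch for binary masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """DICE similarity between two binary masks: 2|A∩B| / (|A| + |B|).
    The small eps keeps the score defined when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 foreground pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 pixels, 4 overlapping
score = dice_score(a, b)                            # 2*4 / (4+6) = 0.8
```

For multi-class segmentation, the per-class scores are typically computed this way and averaged.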
7. Discussion & Conclusion

In this research, we proposed an approach for jointly training a two-network model in a confident way. We improve the classification/segmentation network by attaching a regularization term (information and entropy) to make assured predictions. Moreover, our algorithm also learns the annotators' noise and separates it from the true labels under extremely noisy supervision. We evaluated our algorithm on standard datasets such as CIFAR-10, FMNIST and MNIST. In comparison with other state-of-the-art methods, our method secured robust results. In the classification task, we outperformed all baselines at extreme noise levels such as pairflip 45% and symmetric 50%. For smaller noise levels, we achieved comparable performance with SOTAs. In the segmentation problem, we achieved a better DICE similarity score than [41]. We also show that the predictions of the classifier/segmentation model are more confident compared to other baselines. This demonstrates the effectiveness of our algorithm in making confident, robust predictions about the true class labels/ground truth.
References

[1] Andrew J Asman and Bennett A Landman. Non-local statistical label fusion for multi-atlas segmentation. Medical Image Analysis, 17(2):194–208, 2013. 4
[2] Andrew J. Asman and Bennett A. Landman. Robust statistical label fusion through consensus level, labeler accuracy, and truth estimation (COLLATE). IEEE Transactions on Medical Imaging, pages 1779–94, 10 2011. 4
[3] Ella Barkan, Alon Hazan, and Vadim Ratner. Reduce discrepancy of human annotators in medical imaging by automatic visual comparison to similar cases, Feb. 9 2021. US Patent 10,916,343. 1
[4] Alan Joseph Bekker and Jacob Goldberger. Training deep neural-networks based on unreliable labels. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2682–2686, 2016. 3
[5] Daniel Coelho Castro, Jeremy Tan, Bernhard Kainz, Ender Konukoglu, and Ben Glocker. Morpho-MNIST: Quantitative assessment and diagnostics for representation learning. CoRR, abs/1809.10780, 2018. 7, 8
[6] Daniel C. Castro, Jeremy Tan, Bernhard Kainz, Ender Konukoglu, and Ben Glocker. Morpho-MNIST: Quantitative assessment and diagnostics for representation learning. Journal of Machine Learning Research, 20(178), 2019. 11
[7] Xinlei Chen and Abhinav Gupta. Webly supervised learning of convolutional networks, 2015. 3
[8] Li Deng. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141–142, 2012. 6
[9] Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In ICLR, 2017. 3
[10] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor W. Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in Neural Information Processing Systems, 2018-December (NeurIPS):8527–8537, 2018. 1, 2, 3, 6
[11] Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. Using trusted data to train deep networks on labels corrupted by severe noise. Advances in Neural Information Processing Systems, 2018-December (NeurIPS):10456–10465, 2018. 3
[12] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li Jia Li, and Li Fei-Fei. MentorNet: Learning data-driven curriculum for very deep neural networks on corrupted labels. 35th International Conference on Machine Learning, ICML 2018, 5:3601–3620, 2018. 1, 3
[Figure 5 panels: Input, Thin, Thick, Fractured, Prediction and GT images.]
Figure 5. Visualisation of the predictions of the true segmentation along with the predictions of multiple annotators (Thin, Thick and Fractured) using our algorithm in comparison with the test image and GT.
[13] M. Jorge Cardoso, K. Leung, and M. Modat. STEPS: Similarity and truth estimation for propagated segmentations and its application to hippocampal segmentation and brain parcelation. Medical Image Analysis, 17:671–84, 02 2013. 4
[14] Eytan Kats, Jacob Goldberger, and Hayit Greenspan. A soft STAPLE algorithm combined with anatomical knowledge. Lecture Notes in Computer Science, 11766 LNCS:510–517, 2019. 4
[15] Simon A.A. Kohl, Bernardino Romera-Paredes, Clemens Meyer, Jeffrey De Fauw, Joseph R. Ledsam, Klaus H. Maier-Hein, S. M. Ali Eslami, Danilo Jimenez Rezende, and Olaf Ronneberger. A probabilistic U-net for segmentation of ambiguous images. Advances in Neural Information Processing Systems, 2018-December (NeurIPS):6965–6975, 2018. 4
[16] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009. 6
[17] Elizabeth Lazarus, Martha B Mainiero, Barbara Schepps, Susan L Koelliker, and Linda S Livingston. BI-RADS lexicon for US and mammography: interobserver variability and positive predictive value. Radiology, 239(2):385–391, 2006. 1
[18] Junnan Li, Richard Socher, and Steven CH Hoi. DivideMix: Learning with noisy labels as semi-supervised learning. arXiv preprint arXiv:2002.07394, 2020. 3
[19] Xuefeng Liang, Xingyu Liu, and Longshan Yao. Review–A survey of learning from noisy labels. ECS Sensors Plus, 1(2):021401, 2022. 6
[20] Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A.W.M. van der Laak, Bram van Ginneken, and Clara I. Sánchez. A survey on deep learning in medical image analysis. Medical Image Analysis, 42:60–88, 2017. 1
[21] Kede Ma, Xuelin Liu, Yuming Fang, and Eero P. Simoncelli. Blind image quality assessment by learning from multiple annotators. In 2019 IEEE International Conference on Image Processing (ICIP), pages 2344–2348, 2019. 1, 6
[22] Eran Malach and Shai Shalev-Shwartz. Decoupling "when to update" from "how to update". Advances in Neural Information Processing Systems, 2017-December:961–971, 2017. 1, 3
[23] Aditya Krishna Menon, Ankit Singh Rawat, Sashank J. Reddi, and Sanjiv Kumar. Can gradient clipping mitigate label noise? In International Conference on Learning Representations, 2020. 3
[24] Zahra Mirikharaji, Yiqi Yan, and Ghassan Hamarneh. Learning to segment skin lesions from noisy annotations, 2019. 4
[25] Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017-January:2233–2241, 2017. 3
[26] Scott E. Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. 3rd International Conference on Learning Representations, ICLR 2015 - Workshop Track Proceedings, pages 1–11, 2015. 3
[27] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation, 2015. 4, 8
[28] Claude Elwood Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27(3):379–423, 1948. 5
[29] Songbai Ji, David W. Roberts, Alex Hartov, and Keith D. Paulsen. Combining multiple true 3D ultrasound image volumes through re-registration and rasterization. Med Image Comput Comput Assist Interv, 23(7):903–921, 2005. 4
[30] Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. 3rd International Conference on Learning Representations, ICLR 2015 - Workshop Track Proceedings, pages 1–11, 2015. 2, 3
[31] Ryutaro Tanno, Ardavan Saeedi, Swami Sankaranarayanan, Daniel C Alexander, and Nathan Silberman. Learning from noisy labels by regularized estimation of annotator confusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11244–11253, 2019. 1, 2, 3, 4, 6
[32] Andreas Veit, Neil Alldrin, Gal Chechik, Ivan Krasin, Abhinav Gupta, and Serge Belongie. Learning from noisy large-scale datasets with minimal supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 839–847, 2017. 1
[33] Simon Warfield, Kelly Zou, and William Wells. Simultaneous truth and performance level estimation (STAPLE): An algorithm for the validation of image segmentation. IEEE Transactions on Medical Imaging, 23:903–21, 08 2004. 4
[34] Hongxin Wei, Lei Feng, Xiangyu Chen, and Bo An. Combating noisy labels by agreement: A joint training method with co-regularization. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 13723–13732, 2020. 1, 2, 3, 6
[35] Xiaobo Xia, Tongliang Liu, Bo Han, Chen Gong, Nannan Wang, Zongyuan Ge, and Yi Chang. Robust early-learning: Hindering the memorization of noisy labels. In International Conference on Learning Representations, 2020. 2, 3, 6
[36] Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, and Masashi Sugiyama. Are anchor points really indispensable in label-noise learning? In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. 3
[37] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017. 6
[38] Yan Yan, Rómer Rosales, Glenn Fung, Ramanathan Subramanian, and Jennifer Dy. Learning from multiple annotators with varying expertise. Machine Learning, 95(3):291–327, 2014. 1, 4
[39] Kun Yi and Jianxin Wu. Probabilistic end-to-end noise correction for learning with noisy labels. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019-June:7010–7018, 2019. 3
[40] Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor W. Tsang, and Masashi Sugiyama. How does disagreement help generalization against label corruption? 36th International Conference on Machine Learning, ICML 2019, 2019-June:12407–12417, 2019. 1, 2, 3, 6
[41] Le Zhang, Ryutaro Tanno, Mou Cheng Xu, Chen Jin, Joseph Jacob, Olga Ciccarelli, Frederik Barkhof, and Daniel C. Alexander. Disentangling human error from the ground truth in segmentation of medical images. Advances in Neural Information Processing Systems, 2020-December (NeurIPS):1–13, 2020. 1, 2, 4, 8, 11
A. Proof of Theorem 1

Proof. Given samples x, y = i, we want to minimize

\[
\frac{1}{R}\sum_{r=1}^{R}\mathbb{E}_{\tilde{y}\mid x,y}\!\left[l\big(\hat{U}^{(r)}_{\psi}(x)\,p_{\theta}(x),\,\tilde{y}\big)\right]
= \frac{1}{R}\sum_{r=1}^{R}\sum_{j=1}^{C} p(\tilde{y}=j\mid x,y=i)\; l\big(\hat{U}^{(r)}_{\psi}(x)\,e_{i},\,\tilde{y}\big)
= -\frac{1}{R}\sum_{r=1}^{R}\sum_{j=1}^{C} p(\tilde{y}=j\mid x,y=i)\,\log\frac{[\hat{U}^{(r)}_{\psi}(x)]_{j,i}}{\sum_{j'=1}^{C}[\hat{U}^{(r)}_{\psi}(x)]_{j',i}}
\]

w.r.t. \(\hat{U}^{(r)}_{\psi}(x)\), r = 1, ..., R. Since \(\hat{U}^{(r)}_{\psi}(x)\) is a stochastic matrix, we have \(\sum_{j=1}^{C}[\hat{U}^{(r)}_{\psi}(x)]_{j,i} = 1\). Taking the derivative over \([\hat{U}^{(r)}_{\psi}(x)]_{j,i}\), we get

\[
[\hat{U}^{(r)}_{\psi}(x)]_{j,i} = p(\tilde{y}=j\mid x,\,y=i).
\]
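As a numerical sanity check on Theorem 1, the expected cross-entropy over a stochastic column is minimized exactly when the column equals the noise distribution p(ỹ = j | x, y = i) (Gibbs' inequality): any other stochastic column incurs an extra KL term. A minimal sketch:

```python
import numpy as np

def expected_ce(u, p, eps=1e-12):
    """Expected cross-entropy -sum_j p_j log u_j for a stochastic column u."""
    return float(-(p * np.log(np.clip(u, eps, 1.0))).sum())

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(10))      # noise distribution p(noisy=j | x, y=i)
best = expected_ce(p, p)            # column set to the true noise distribution
for _ in range(100):                # any other stochastic column does no better
    u = rng.dirichlet(np.ones(10))
    assert expected_ce(u, p) >= best
```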
B. Experimental Details

B.1. Classification Datasets

MNIST: The dataset comprises 60,000 training samples and 10,000 samples reserved for testing. The number of classes in the dataset is 10.

CIFAR-10: The CIFAR-10 dataset contains 60,000 colour images in 10 classes, with 6,000 images per class. There are 50,000 training and 10,000 test images. The dataset is divided into five training batches and one test batch, each containing 10,000 images. The test batch is a collection of exactly 1,000 samples randomly selected from each class. The training batches comprise the remaining images in random order, so certain training batches may contain more images from one class than another.

FMNIST: Fashion-MNIST is a dataset of article images from Zalando. It consists of a training set of 60,000 instances and a test set of 10,000 instances. Each instance is a 28 × 28 grayscale image with a label from one of ten classes.
B.2. Segmentation Datasets

MNIST. We also use the dataset from [41]; synthetic noisy annotations were created based on the assumed GT to demonstrate the effectiveness of the method in a hypothetical setting where the GT is known. Applying morphological alterations (such as thinning, thickening, fractures, etc.) to the ground-truth (GT) segmentation labels using the Morpho-MNIST software [6], we mimic a group of five annotators with a variety of distinguishing features/transformations. In particular, the first annotator segments the image nearly accurately ("good-segmentation"), looking similar to the GT. The second tends to over-segment ("thick-segmentation"), the third tends to under-segment ("thin-segmentation"), and the fourth is prone to a combination of over-segmentation and small fractures ("fracture-segmentation").

In a multi-class scenario, we first select a target class and then perform morphological operations on the provided GT mask to produce four different types of synthetic noisy labels: over-segmentation, under-segmentation, fracture segmentation, and good segmentation. Through the use of simulated annotators, we derive labels to create training data. However, the good segmentations remain latent and are not included during training of our algorithm.
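The over- and under-segmenting annotators can be approximated with binary dilation and erosion of the GT mask. A minimal numpy sketch using a 3 × 3 cross structuring element (an assumption for illustration; Morpho-MNIST applies richer morphological transforms):

```python
import numpy as np

def shift(mask, dy, dx):
    """Shift a binary mask by (dy, dx), padding with zeros."""
    out = np.zeros_like(mask)
    h, w = mask.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        mask[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def dilate(mask):
    """'Thick' annotator: over-segmentation via one cross-shaped dilation."""
    return mask | shift(mask, 1, 0) | shift(mask, -1, 0) | shift(mask, 0, 1) | shift(mask, 0, -1)

def erode(mask):
    """'Thin' annotator: under-segmentation via one cross-shaped erosion."""
    return mask & shift(mask, 1, 0) & shift(mask, -1, 0) & shift(mask, 0, 1) & shift(mask, 0, -1)

gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True                  # a 4x4 square as the GT mask
thick, thin = dilate(gt), erode(gt)  # over- and under-segmented variants
```

Erosion always yields a subset of the GT and dilation a superset, matching the "thin"/"thick" annotator behaviour described above.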
B.3. Types of Noise

Figure 6 shows an example of noise transition matrices for the pairflip 20% and symmetric 50% noise types. In addition, Figure 7 shows the noisy-label distributions of the CIFAR-10 dataset for the pairflip 45% and symmetric 50% noise types; this distribution of the label noise is used in the training process.
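The two transition matrices in Figure 6 can be constructed directly; a minimal sketch matching the values shown there (0.8/0.2 for pairflip 20%, 0.5/0.06 for symmetric 50%):

```python
import numpy as np

def symmetric_T(C, eps):
    """Keep the label w.p. 1-eps; flip uniformly to the other C-1 classes."""
    T = np.full((C, C), eps / (C - 1))
    np.fill_diagonal(T, 1.0 - eps)
    return T

def pairflip_T(C, eps):
    """Keep the label w.p. 1-eps; flip to the next class (cyclically) w.p. eps."""
    T = np.eye(C) * (1.0 - eps)
    for i in range(C):
        T[i, (i + 1) % C] = eps
    return T

Tp = pairflip_T(10, 0.2)    # 0.8 on the diagonal, 0.2 on the next class
Ts = symmetric_T(10, 0.5)   # 0.5 on the diagonal, 0.5/9 ≈ 0.056 elsewhere
```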
B.4. Fine-tuning/Training

In this section, we further elaborate on the experimental details for each dataset used to validate our algorithm.

MNIST. We used a LeNet model as the classifier backbone network. For the annotator network, we have a linear layer of size C × C, where C denotes the number of classes. This linear layer represents our annotator confusion matrices, and we apply a softmax to it along a certain dimension to make it a stochastic matrix. We fine-tuned our model over a combination of learning rates, α = [0.01, 0.001, 0.0001, 0.000001, 0.0016, 0.008, 0.0064, 0.005], and about 50 different values λ of our regularizer hyper-parameter. We started with a very small value of λ = 0.001506746 and slowly increased it exponentially (geometric progression) per epoch with rate r = 1.18; we trained the model for 70 epochs and used Adam as the optimizer. In addition, experiments were run to assess the performance of the model when the confusion matrix is initialized as the identity. This fine-tuning was done across both noise types described in Section 5.1 with the respective noise rates associated with each noise type.
+ that is associated with each of the noise types.
1598
+ CIFAR-10.
1599
+ For CIFAR-10, we used ResNet-18 as our
1600
+ backbone network for the classifier. The annotator network
1601
+ remains unchanged (still has one linear layer that represents
1602
+ the confusion matrices of class C × C that are stochas-
1603
+ tic). For this dataset, we fine-tuned the model for an assort-
1604
+ ment of learning rates, such as α = [0.001, 0.00064, 0.0016,
1605
+ 0.000001, 0.005, 0.008, 0.0016, 0.00064]. We ran the model
1606
+ for 150 epochs; the hyperparameter λ for our regularizer was
1607
+ slowly increased exponentially again with a rate, r= 1.11.
1608
+ 11
1609
+
1610
+ 0.8 0.2
1611
+ 0
1612
+ 0
1613
+ 0
1614
+ 0
1615
+ 0
1616
+ 0
1617
+ 0
1618
+ 0
1619
+ 0
1620
+ 0.8 0.2
1621
+ 0
1622
+ 0
1623
+ 0
1624
+ 0
1625
+ 0
1626
+ 0
1627
+ 0
1628
+ 0
1629
+ 0
1630
+ 0.8 0.2
1631
+ 0
1632
+ 0
1633
+ 0
1634
+ 0
1635
+ 0
1636
+ 0
1637
+ 0
1638
+ 0
1639
+ 0
1640
+ 0.8 0.2
1641
+ 0
1642
+ 0
1643
+ 0
1644
+ 0
1645
+ 0
1646
+ 0
1647
+ 0
1648
+ 0
1649
+ 0
1650
+ 0.8 0.2
1651
+ 0
1652
+ 0
1653
+ 0
1654
+ 0
1655
+ 0
1656
+ 0
1657
+ 0
1658
+ 0
1659
+ 0
1660
+ 0.8 0.2
1661
+ 0
1662
+ 0
1663
+ 0
1664
+ 0
1665
+ 0
1666
+ 0
1667
+ 0
1668
+ 0
1669
+ 0
1670
+ 0.8 0.2
1671
+ 0
1672
+ 0
1673
+ 0
1674
+ 0
1675
+ 0
1676
+ 0
1677
+ 0
1678
+ 0
1679
+ 0
1680
+ 0.8 0.2
1681
+ 0
1682
+ 0
1683
+ 0
1684
+ 0
1685
+ 0
1686
+ 0
1687
+ 0
1688
+ 0
1689
+ 0
1690
+ 0.8 0.2
1691
+ 0.2
1692
+ 0
1693
+ 0
1694
+ 0
1695
+ 0
1696
+ 0
1697
+ 0
1698
+ 0
1699
+ 0
1700
+ 0.8
1701
+ Pairflip, ε = 20%
1702
+ 0.0
1703
+ 0.1
1704
+ 0.2
1705
+ 0.3
1706
+ 0.4
1707
+ 0.5
1708
+ 0.6
1709
+ 0.7
1710
+ 0.8
1711
+ (a)
1712
+ 0.5
1713
+ 0.06
1714
+ 0.06
1715
+ 0.06
1716
+ 0.06
1717
+ 0.06
1718
+ 0.06
1719
+ 0.06
1720
+ 0.06
1721
+ 0.06
1722
+ 0.06
1723
+ 0.5
1724
+ 0.06
1725
+ 0.06
1726
+ 0.06
1727
+ 0.06
1728
+ 0.06
1729
+ 0.06
1730
+ 0.06
1731
+ 0.06
1732
+ 0.06
1733
+ 0.06
1734
+ 0.5
1735
+ 0.06
1736
+ 0.06
1737
+ 0.06
1738
+ 0.06
1739
+ 0.06
1740
+ 0.06
1741
+ 0.06
1742
+ 0.06
1743
+ 0.06
1744
+ 0.06
1745
+ 0.5
1746
+ 0.06
1747
+ 0.06
1748
+ 0.06
1749
+ 0.06
1750
+ 0.06
1751
+ 0.06
1752
+ 0.06
1753
+ 0.06
1754
+ 0.06
1755
+ 0.06
1756
+ 0.5
1757
+ 0.06
1758
+ 0.06
1759
+ 0.06
1760
+ 0.06
1761
+ 0.06
1762
+ 0.06
1763
+ 0.06
1764
+ 0.06
1765
+ 0.06
1766
+ 0.06
1767
+ 0.5
1768
+ 0.06
1769
+ 0.06
1770
+ 0.06
1771
+ 0.06
1772
+ 0.06
1773
+ 0.06
1774
+ 0.06
1775
+ 0.06
1776
+ 0.06
1777
+ 0.06
1778
+ 0.5
1779
+ 0.06
1780
+ 0.06
1781
+ 0.06
1782
+ 0.06
1783
+ 0.06
1784
+ 0.06
1785
+ 0.06
1786
+ 0.06
1787
+ 0.06
1788
+ 0.06
1789
+ 0.5
1790
+ 0.06
1791
+ 0.06
1792
+ 0.06
1793
+ 0.06
1794
+ 0.06
1795
+ 0.06
1796
+ 0.06
1797
+ 0.06
1798
+ 0.06
1799
+ 0.06
1800
+ 0.5
1801
+ 0.06
1802
+ 0.06
1803
+ 0.06
1804
+ 0.06
1805
+ 0.06
1806
+ 0.06
1807
+ 0.06
1808
+ 0.06
1809
+ 0.06
1810
+ 0.06
1811
+ 0.5
1812
+ Symmetric, ε = 50%
1813
+ 0.10
1814
+ 0.15
1815
+ 0.20
1816
+ 0.25
1817
+ 0.30
1818
+ 0.35
1819
+ 0.40
1820
+ 0.45
1821
+ 0.50
1822
+ (b)
1823
+ Figure 6. Noise Transition matrices for Pairflip and Symmetric noise.
1824
+ y
1825
+ y
1826
+ 2713
1827
+ 2287
1828
+ 0
1829
+ 0
1830
+ 0
1831
+ 0
1832
+ 0
1833
+ 0
1834
+ 0
1835
+ 0
1836
+ 0
1837
+ 2846
1838
+ 2154
1839
+ 0
1840
+ 0
1841
+ 0
1842
+ 0
1843
+ 0
1844
+ 0
1845
+ 0
1846
+ 0
1847
+ 0
1848
+ 2766
1849
+ 2234
1850
+ 0
1851
+ 0
1852
+ 0
1853
+ 0
1854
+ 0
1855
+ 0
1856
+ 0
1857
+ 0
1858
+ 0
1859
+ 2764
1860
+ 2236
1861
+ 0
1862
+ 0
1863
+ 0
1864
+ 0
1865
+ 0
1866
+ 0
1867
+ 0
1868
+ 0
1869
+ 0
1870
+ 2742
1871
+ 2258
1872
+ 0
1873
+ 0
1874
+ 0
1875
+ 0
1876
+ 0
1877
+ 0
1878
+ 0
1879
+ 0
1880
+ 0
1881
+ 2745
1882
+ 2255
1883
+ 0
1884
+ 0
1885
+ 0
1886
+ 0
1887
+ 0
1888
+ 0
1889
+ 0
1890
+ 0
1891
+ 0
1892
+ 2797
1893
+ 2203
1894
+ 0
1895
+ 0
1896
+ 0
1897
+ 0
1898
+ 0
1899
+ 0
1900
+ 0
1901
+ 0
1902
+ 0
1903
+ 2746
1904
+ 2254
1905
+ 0
1906
+ 0
1907
+ 0
1908
+ 0
1909
+ 0
1910
+ 0
1911
+ 0
1912
+ 0
1913
+ 0
1914
+ 2780
1915
+ 2220
1916
+ 2299
1917
+ 0
1918
+ 0
1919
+ 0
1920
+ 0
1921
+ 0
1922
+ 0
1923
+ 0
1924
+ 0
1925
+ 2701
1926
+ CIFAR-10, Pairflip-45%
1927
+ 0
1928
+ 500
1929
+ 1000
1930
+ 1500
1931
+ 2000
1932
+ 2500
1933
+ (a)
1934
+ y
1935
+ y
1936
+ 2562
1937
+ 282
1938
+ 257
1939
+ 264
1940
+ 274
1941
+ 292
1942
+ 272
1943
+ 268
1944
+ 264
1945
+ 265
1946
+ 287
1947
+ 2486
1948
+ 273
1949
+ 285
1950
+ 271
1951
+ 290
1952
+ 272
1953
+ 284
1954
+ 271
1955
+ 281
1956
+ 308
1957
+ 272
1958
+ 2492
1959
+ 285
1960
+ 284
1961
+ 267
1962
+ 256
1963
+ 308
1964
+ 284
1965
+ 244
1966
+ 260
1967
+ 238
1968
+ 280
1969
+ 2560
1970
+ 266
1971
+ 264
1972
+ 270
1973
+ 302
1974
+ 280
1975
+ 280
1976
+ 264
1977
+ 286
1978
+ 288
1979
+ 289
1980
+ 2461
1981
+ 275
1982
+ 309
1983
+ 272
1984
+ 277
1985
+ 279
1986
+ 280
1987
+ 258
1988
+ 275
1989
+ 285
1990
+ 263
1991
+ 2531
1992
+ 291
1993
+ 259
1994
+ 275
1995
+ 283
1996
+ 279
1997
+ 297
1998
+ 273
1999
+ 277
2000
+ 311
2001
+ 271
2002
+ 2465
2003
+ 270
2004
+ 271
2005
+ 286
2006
+ 278
2007
+ 245
2008
+ 272
2009
+ 308
2010
+ 250
2011
+ 306
2012
+ 279
2013
+ 2523
2014
+ 281
2015
+ 258
2016
+ 266
2017
+ 268
2018
+ 275
2019
+ 287
2020
+ 287
2021
+ 282
2022
+ 298
2023
+ 230
2024
+ 2548
2025
+ 259
2026
+ 271
2027
+ 266
2028
+ 277
2029
+ 252
2030
+ 287
2031
+ 282
2032
+ 278
2033
+ 273
2034
+ 280
2035
+ 2534
2036
+ CIFAR-10, Symmetric-50%
2037
+ 500
2038
+ 1000
2039
+ 1500
2040
+ 2000
2041
+ 2500
2042
+ (b)
2043
+ Figure 7. Confusion matrix between clean (y) and noisy labels (˜y) of CIFAR-10 dataset for (a) Pairflip-45% and (b) Symmetric-50% noise.
2044
However, the starting value this time was λ = 3.0517578125e-05. We used a standard batch size, BS = 128, and the standard augmentations of random crops of size 32 × 32 and horizontal random flipping. These are the standard augmentations used across all the baselines we evaluated. The remaining settings are the same as described for MNIST above.
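The exponentially increasing regularizer weight can be sketched as a simple geometric schedule, shown here with the CIFAR-10 values from above:

```python
def lambda_at_epoch(lam0, r, epoch):
    """Geometric schedule: lambda grows by a factor r each epoch."""
    return lam0 * (r ** epoch)

lam0, r = 3.0517578125e-05, 1.11
schedule = [lambda_at_epoch(lam0, r, t) for t in range(150)]
```

Starting tiny and ramping up lets the classifier fit first and tightens the regularizer only once training has stabilized.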
FMNIST. For the FMNIST dataset, we kept the same settings as CIFAR-10, such as the ResNet-18 model and batch size of 128. The model was again fine-tuned over the same set of hyperparameters. However, the starting value of λ was 6.103515625e-05, and it was increased exponentially with rate r = 1.12. We also retained the same set of augmentations used for the CIFAR-10 dataset.
C. Additional experimental results

In our earlier experiments, we kept the same type and level of noise across all annotators in the annotator network. This is usually not representative of the noise in real-world data, as each annotator is likely to be independent in how it is confused when labelling and annotating the data (subject to its own biases). Therefore, we confuse each annotator with different types and levels of noise. Table 6 shows the test accuracy of the classifier network on the CIFAR-10, FMNIST and MNIST datasets for different types of noise per annotator. We achieved comparable results, with accuracies of 84.12%, 91.62% and 98.97% for CIFAR-10, FMNIST and MNIST respectively. It is particularly notable that the accuracy of the classifier network remains at par even with high noise levels, such as pairflip 45% and symmetric 50%, for two of the three annotators.
Table 6. Test accuracy (%) with three different annotators (Annotator1: Pairflip 45%, Annotator2: Symmetric 20%, Annotator3: 50%) representing different noise types and noise levels on CIFAR-10, FMNIST and MNIST datasets.

CIFAR-10   FMNIST   MNIST
84.12      91.62    98.97
[Figure 8 panels: test accuracy vs. epoch on MNIST for (a) Pairflip-45%, (b) Symmetric-30% and (c) Symmetric-50%, comparing Ours, Trace, Co-teaching, Co-teaching+, CDR and JoCoR.]
Figure 8. Test accuracy (%) vs. number of epochs on MNIST dataset.
[Figure 9 panels: test accuracy vs. epoch on CIFAR-10 for (a) Pairflip-45%, (b) Symmetric-30% and (c) Symmetric-50%, comparing Ours, Trace, Co-teaching, Co-teaching+, CDR and JoCoR.]
Figure 9. Test accuracy (%) vs. epochs on CIFAR-10 dataset.
+ MNIST. In Figure 8, we plot the test accuracy vs. number of epochs. For the symmetric noise types, all algorithms give comparable performance, although our test accuracy starts to decline slightly in the later epochs; this could be alleviated with an early-stopping criterion, which was not incorporated in these experiments. For pairflip 45%, the test accuracy increases and stabilizes in the later epochs of the experiment, surpassing all the baselines.
+ CIFAR-10. Figure 9 shows the test accuracy vs. number of epochs. In all three plots, our algorithm performs on par with the other algorithms, and its performance becomes robustly superior under the extreme noise setting of pairflip 45%. This shows that our method is particularly robust against harder noise, as it is able to make confident predictions.
+ FMNIST. Figure 10 shows the test accuracy vs. number of epochs on the FMNIST dataset, comparing our algorithm with the other baselines. For all noise instances, our algorithm performs on par with high-achieving methods such as JoCoR. We perform considerably better than sample-selection methods such as Co-teaching and Co-teaching+, as well as other methods such as CDR, in the pairflip 45% setting.
+ In addition, Figure 11 highlights the confusion matrices of the true class and the predicted class by the classifier network of our algorithm. We show the confusion-matrix plots for the two extreme noise types, pairflip 45% and symmetric 50%, for all the datasets used. The confusion matrices are clearly diagonally dominant, highlighting the robust performance of our method.
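Diagonal dominance of a confusion matrix, which the text uses as evidence of robustness, can be checked directly. A minimal sketch (illustrative only, assuming integer class labels):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes: int) -> np.ndarray:
    """C[i, j] counts samples whose true class is i and predicted class is j."""
    C = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        C[t, p] += 1
    return C

def is_diagonally_dominant(C: np.ndarray) -> bool:
    """True if every row's diagonal entry exceeds the sum of its off-diagonal entries."""
    off_diagonal = C.sum(axis=1) - np.diag(C)
    return bool(np.all(np.diag(C) > off_diagonal))

# Toy example: class 1 is only half-correct, so dominance fails on its row.
C = confusion_matrix([0, 0, 1, 1, 2, 2], [0, 0, 1, 2, 2, 2], num_classes=3)
```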
+ MNIST Curated Dataset. In Figure 12 we demonstrate the annotators' confusion learned by our algorithm on the curated MNIST dataset, which showcases the different image styles Original, Thin and Thick. The strength of the regularizer, λ=0.01, is increased by the multiplicative scalar m=2 every epoch. Figures 13, 14 and 15 highlight the original and predicted confusion of annotator 1, annotator 2 and annotator 3 using our approach with the regularizer and the non-regularized approach (that is, when λ=0).
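The schedule above (λ starts at 0.01 and is multiplied by m=2 every epoch) amounts to a geometric ramp-up of the regularizer strength. A minimal sketch of such a schedule, hedged since the surrounding training loop is not shown in the text:

```python
def regularizer_schedule(lam0: float = 0.01, m: float = 2.0, num_epochs: int = 10):
    """Yield the regularizer strength for each epoch: lam0 * m**epoch."""
    lam = lam0
    for _ in range(num_epochs):
        yield lam
        lam *= m

lams = list(regularizer_schedule())
# Geometric growth: by the 10th epoch the strength is 0.01 * 2**9 = 5.12.
```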
+ MNIST Segmentation. In Figure 16, the annotators' (Thin, Thick and Fractured) predictions are visualised for our algorithm. The results demonstrate that our algorithm produces good predictions for the annotators.
+ [Figure 10 panels, test accuracy vs. epoch on FMNIST for Ours, Trace, Co-teaching, Co-teaching+, CDR and JoCoR: (a) Pairflip-45%, (b) Symmetry-30%, (c) Symmetry-50%.]
+ Figure 10. Results of test accuracy vs. number of epochs on FMNIST dataset.
+ [Figure 11 panels, 10×10 confusion-matrix heatmaps of true vs. predicted class, diagonally dominant in all cases: (a) MNIST, Pairflip 45% and Symmetric 50%; (b) FMNIST, Pairflip 45% and Symmetric 50%; (c) CIFAR-10, Pairflip 45% and Symmetric 50%.]
+ Figure 11. Confusion matrices of true class and predicted class for our algorithm for CIFAR-10, MNIST and FMNIST datasets.
+ [Figure 12 panels: learned confusion heatmaps for Annotator 1, Annotator 2 and Annotator 3 under the Original, Thin and Thick image styles.]
+ Figure 12. Learned Annotators' confusion for different image styles using our approach with the regularizer (λ=0.01, m=2) on MNIST dataset.
+ [Figure 13 panels: Annotator 1 confusion heatmaps for the Original, Thick and Thin image styles, comparing the original confusion with the predictions of our approach with the regularizer and without it.]
+ Figure 13. Original and Predicted confusion for Annotator 1 using different models: our approach with regularizer (λ = 0.01, m=2) and without it (λ = 0) on MNIST dataset.
+ [Figure 14 panels: Annotator 2 confusion heatmaps for the Original, Thick and Thin image styles, comparing the original confusion with the predictions of our approach with the regularizer and without it.]
+ Figure 14. Original and Predicted confusion for Annotator 2 using different models: our approach with regularizer (λ = 0.01, m=2) and without it (λ = 0) on MNIST dataset.
+ [Figure 15 panels: Annotator 3 confusion heatmaps for the Original, Thick and Thin image styles, comparing the original confusion with the predictions of our approach with the regularizer and without it.]
+ Figure 15. Original and Predicted confusion for Annotator 3 using different models: our approach with regularizer (λ = 0.01, m=2) and without it (λ = 0) on MNIST dataset.
+ [Figure 16 panels: test Input with the Thin, Thick and Fractured annotator segmentations, together with Pred and GT rows.]
+ Figure 16. Visualisation of the predictions of the annotators' segmentations (Thin, Thick and Fractured) together with the predictions of the estimated true labels using our algorithm in comparison with the test image and GT. Black is true positive, White is true negative, Red represents false positive, whilst Green is false negative.
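The colour coding described in the Figure 16 caption (black = true positive, white = true negative, red = false positive, green = false negative) corresponds to a per-pixel comparison of a binary prediction against the ground truth. A minimal sketch, where the specific RGB values are our own illustrative choice:

```python
import numpy as np

# Illustrative RGB codes matching the Figure 16 legend.
TP, TN = (0, 0, 0), (255, 255, 255)  # black, white
FP, FN = (255, 0, 0), (0, 255, 0)    # red, green

def colour_code(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Map binary prediction and ground-truth masks to an RGB error image."""
    out = np.zeros(pred.shape + (3,), dtype=np.uint8)
    out[(pred == 1) & (gt == 1)] = TP
    out[(pred == 0) & (gt == 0)] = TN
    out[(pred == 1) & (gt == 0)] = FP
    out[(pred == 0) & (gt == 1)] = FN
    return out

pred = np.array([[1, 0], [1, 0]])
gt = np.array([[1, 0], [0, 1]])
img = colour_code(pred, gt)  # one pixel of each outcome
```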
5dAyT4oBgHgl3EQfpfgA/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
5dE4T4oBgHgl3EQf1Q2P/content/tmp_files/2301.05289v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
5dE4T4oBgHgl3EQf1Q2P/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
5tE1T4oBgHgl3EQfBAK1/content/2301.02847v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e685d637f7b520b2b2bb93615bcb8c1690b0103d118df1f70fb8524c37283372
3
+ size 574361
5tE1T4oBgHgl3EQfBAK1/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:94d7f9fd698a013f492aa18b4e3b8796039c1c568366a5b25570b32c860bfbe4
3
+ size 164922
69E1T4oBgHgl3EQf7AXC/content/tmp_files/2301.03530v1.pdf.txt ADDED
@@ -0,0 +1,1375 @@
1
+ Exciton dissociation mediated by phonons in organic photovoltaics
2
+ Stepan Fomichev,1, 2, ∗ Leonard Ruocco,1, 2, ∗ Alexandra Tully,1, 2 and Mona Berciu1, 2, 3
3
+ 1Department of Physics and Astronomy, University of British Columbia, Vancouver, British Columbia, V6T 1Z1 Canada
4
+ 2Stewart Blusson Quantum Matter Institute, University of British Columbia, Vancouver, British Columbia, V6T 1Z4 Canada
5
+ 3Leibniz Institute for Solid State and Materials Research (IFW) Dresden, Helmholtzstrasse 20, 01069 Dresden, Germany
6
+ (Dated: January 10, 2023)
7
+ It is well known that phonons can overscreen the bare Coulomb electron-electron repulsion, turning
8
+ it into the effective attraction that binds the Cooper pairs responsible for BCS superconductivity.
9
+ Here, we use a simple lattice model to prove that the counterpart of this is also possible, whereby
10
+ phonons overscreen the bare electron-hole attraction and may turn it repulsive at short distances,
11
+ driving exciton dissociation in certain regions of the parameter space. We argue that this phonon-
12
+ mediated short-range screening plays an important role in the physics of organic solar cell materials
13
+ (and other materials with strong electron-phonon coupling) and could point the way to new strategies
14
+ for optimizing their efficiencies.
15
+ I.
16
+ INTRODUCTION
17
+ Organic solar cells (OSCs) have been heralded as a rev-
18
+ olutionary technology in the renewable energy sector due
19
+ to their flexible and light-weight nature and low produc-
20
+ tion cost.1–4 While power conversion efficiencies of OSC
21
+ devices have been improving,5 they have not yet reached
22
+ levels high enough for OSCs to realize their promise; this
23
+ is largely due to the challenge of efficiently extracting free
24
+ charge carriers without detrimental losses.6,7
25
+ All light-harvesting devices start by capturing a pho-
26
+ ton to excite a bound electron-hole pair – an exciton.
27
+ Voltage is ultimately produced through the generation
28
+ of free charge carriers, requiring the dissociation of the
29
+ exciton through some internal mechanism.
30
+ Conventional (inorganic) solar cells, such as those
31
+ based on Si or GaAs, have highly effective charge screen-
32
+ ing. Because the screened Coulomb attraction is weak,
33
+ the Wannier excitons it creates are highly extended and
34
+ have small binding energies (few tens of meV). A combi-
35
+ nation of thermal fluctuations and external electric fields
36
+ is therefore sufficient to drive dissociation.
37
+ By contrast, OSC materials have poor charge screen-
38
+ ing, resulting in small Frenkel excitons with large binding
39
+ energies of a hundred meV or more.8,9 These are stable
40
+ against thermal fluctuations and fairly long-lived, lead-
41
+ ing to high recombination losses and reduced efficiencies.
42
+ This is why understanding and engineering exciton dis-
43
+ sociation in OSCs remains a foundational challenge.
44
+ To date, the most investigated approach to engi-
45
+ neering dissociation is to use bulk-heterojunction inter-
46
+ faces combining donor and acceptor materials, chosen so
47
+ that the potential gradient at their interface helps over-
48
+ come the high binding energies. This setup was shown
49
+ to produce higher yields, which was attributed to en-
50
+ hanced dissociation of so-called charge-transfer states at
51
+ the donor/acceptor (D/A) interface.10,11 Charge-transfer
52
+ states are believed to be relatively short-lived excitons
53
+ ∗ These authors contributed equally.
54
+ FIG. 1.
55
+ Lattice distortion from an exciton.
56
+ Left panel:
57
+ when the electron and the hole are far apart (red and blue
58
+ circles, respectively) their excess charge induces local lattice
59
+ distortions, giving rise to polarons.
60
+ Right panel: A small
61
+ Frenkel exciton produces a much weaker electric potential and
62
+ thus a much smaller lattice distortion.
63
+ composed of an electron and a hole that span neigh-
64
+ bouring molecular sites. While such excitons are quite
65
+ commonly generated in the bulk,12 they delocalize more
66
+ easily when they span a D/A interface.13–15 However,
67
+ more work is needed to understand both the nature of
68
+ these states, and how they can be engineered to optimize
69
+ exciton dissociation.
70
+ Alongside charge screening, the vibrational character-
71
+ istics (phonon modes) of OSCs are also relevant to disso-
72
+ ciation – and even less well-understood. Phonons couple
73
+ strongly to molecular orbitals, as evidenced by photoe-
74
+ mission experiments,16 and thus may be playing a role
75
+ in the exciton dynamics.17 Most of the studies to date
76
+ have focused on the role of phonons in the formation of
77
+ charge transfer states,18,19 and how electron-phonon cou-
78
+ pling affects the yield across the D/A interface.20–25
79
+ Here we present a fundamentally different way whereby
80
+ electron-phonon coupling can influence exciton dissocia-
81
+ tion, even in the absence of a D/A interface. We show
82
+ that sufficiently strong electron-phonon coupling can be
83
+ directly responsible for exciton dissociation, despite the
84
+ presence of significant Coulomb attraction between the
85
+ electron and the hole.
86
arXiv:2301.03530v1 [cond-mat.str-el] 9 Jan 2023

The basic idea is sketched out in Fig. 1, where we compare the effects of electron-phonon coupling when the hole and electron are far apart (left panel) versus when bound in a small exciton (right panel). The addition of an excess carrier results in a local lattice distortion that dresses that carrier into a polaron. Because the electron and the hole have opposite charges, in a polar material they create opposite lattice distortions in their vicinity. However, when they are bound into a small exciton, their clouds essentially cancel each other out, and locally there is no distortion. Another way to say this is that there is no excess local charge in the presence of a small exciton – hence no local lattice distortion is expected.
In this picture, electron-phonon coupling is seen to lower the energy of the dissociated state through polaron formation, while having little effect on the exciton binding energy. For large enough electron-phonon coupling this leads to outright dissociation, as we show next. Even when that is not the case, our work shows that one must take polaron formation into consideration when choosing the donor/acceptor materials, because the polaronic contribution to the energetic landscape can be considerable.

It is important to acknowledge that the idea of exciton dissociation driven by electron-phonon coupling was proposed previously by Sumi in Ref. 26, where he used a variational approximation to study the effect of Fröhlich coupling on an exciton. His prediction of a sharp transition between bound (exciton) and unbound (free electron and hole polarons) states was later discredited by Gerlach and Löwen,27 who proved that sharp transitions are forbidden in this class of Hamiltonians and concluded that overscreening is impossible in this context. We find a smooth crossover between the two types of states, fully consistent with the mathematical proof of Ref. 27. Our work shows that the contradiction between Refs. 26 and 27 is not because overscreening is impossible, but because the predicted sharp transition was an artifact of the variational approximation28 used by Sumi.
The article is organized as follows: Sec. II introduces the model we use to study this problem, and Sec. III explains our formalism and approach. Key results are shown in Sec. IV, while Sec. V contains an extended discussion of the various approximations made in the model and the relevance of this phenomenology in the context of OSCs.
II. THE MODEL

We consider a single electron-hole pair in a one-dimensional (1D) ionic chain, where each site supports a single on-site orbital and a dispersionless Einstein phonon mode. The single electron-hole pair assumption is reasonable if, for example, the concentration of photo-generated electron-hole pairs in the material is very low. We focus on the 1D chain because here it is known that Coulomb attraction always results in the formation of strongly bound excitons, unlike in higher dimensions where excitons can be either exponentially weakly bound (in 2D) or unstable unless the attraction is sufficiently strong (in 3D).29 Thus, demonstrating dissociation in 1D would imply similar behaviour in higher dimensions, given that the exciton is even more loosely bound there.
Our Hamiltonian reads:

\hat{H} = \hat{T}_e + \hat{V}_{e-h} + \hat{H}_{ph} + \hat{V}_{e-ph} + \hat{V}_{h-ph}.   (1)

Here, $\hat{T}_e = \sum_{k\sigma} \epsilon_k c^\dagger_{k\sigma} c_{k\sigma}$ is the kinetic energy of free electrons in the conduction band, described by a tight-binding model with a dispersion $\epsilon_k = -2t\cos k$ defined by the hopping $t$ and momentum $k \in (-\pi, \pi]$ of the bare electron (the lattice constant is set to $a = 1$, also $\hbar = 1$). The creation operator $c^\dagger_{k\sigma}$ adds an electron with momentum $k$ and spin $\sigma$ in this band. Its real space counterpart is $c^\dagger_{n\sigma}$, where $n = 1 \ldots N$ indexes the sites of the chain, with $N \to \infty$. Hole creation operators in real space are denoted by $h^\dagger_{n\sigma}$. For simplicity, we assume that holes are localized (we reflect on this assumption in Sec. V).

The electron-hole interaction $\hat{V}_{e-h}$ is modeled as an on-site Coulomb attraction

\hat{V}_{e-h} = -U \sum_{n,\sigma,\sigma'} h^\dagger_{n\sigma} h_{n\sigma} c^\dagger_{n\sigma'} c_{n\sigma'},   (2)

characterized by $U > 0$. Longer (but finite) range attractions can be treated similarly and lead to quantitative changes only, at the cost of adding more parameters.

Optical phonons are described with an Einstein model: $\hat{H}_{ph} = \Omega \sum_n b^\dagger_n b_n$, where $b^\dagger_n$ creates a phonon with energy $\Omega$ at site $n$.

Finally, the Holstein carrier-lattice couplings are:

\hat{V}_{e-ph} = M_e \sum_{n\sigma} c^\dagger_{n\sigma} c_{n\sigma} (b_n + b^\dagger_n)   (3)

\hat{V}_{h-ph} = M_h \sum_{n\sigma} h^\dagger_{n\sigma} h_{n\sigma} (b_n + b^\dagger_n)   (4)

with electron/hole-phonon couplings $M_e$ and $M_h$, respectively.
Even after all these simplifications, there are four dimensionless parameters: $U/t$, $\Omega/t$, $M_e/t$, $M_h/t$. To avoid further complications, we set the temperature $T = 0$. This is justified because we are interested in cases where all energy scales (including the exciton binding energy) are much larger than the thermal energy, as is typically the case in organic photovoltaics.
III. METHODS

Finite Coulomb attraction in 1D always leads to a ground-state with a stable, bound exciton. Our aim is to investigate the influence of the carrier-phonon couplings on the stability of the exciton. To do this, we calculate the Green's function

G_{ij}(z) \equiv \langle 0| c_i h_i \hat{G}(z) h^\dagger_i c^\dagger_j |0\rangle   (5)
where we reserve the index $i$ to label the site hosting the immobile hole (the spin degree of freedom is irrelevant for this calculation, and we ignore it from now on). The electron can move, and the propagator above is the Fourier transform (at energy $z = \omega + i\eta$) of the probability amplitude that, if the hole is at site $i$, the electron moves from site $j$ to site $i$ within a given time interval, with both the initial and the final states having no phonons, $b_n|0\rangle = 0$. The broadening $\eta \to 0$ introduces an artificial lifetime $\propto 1/\eta$ for the pair to recombine, and $\hat{G}(z) = (z - \hat{H})^{-1}$ is the resolvent. The associated local density of states (LDOS), plotted in the figures, is defined as $A(\omega) = -\mathrm{Im}\, G_{ii}(z)/\pi$; invariance to translations ensures that the LDOS is the same at all sites $i$.
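To illustrate this definition (a minimal sketch, not part of the MA machinery; the pole position and parameter values below are arbitrary), a discrete pole of the resolvent at energy $E_0$ appears in $A(\omega)$ as a Lorentzian of half-width $\eta$ carrying the pole's spectral weight:

```python
import numpy as np

eta = 0.01                        # artificial broadening, lifetime ~ 1/eta
E0 = -2.3                         # position of a discrete pole (arbitrary example)
omega = np.linspace(-3.0, -1.5, 20001)

G = 1.0 / (omega + 1j * eta - E0)   # resolvent matrix element with a single pole
A = -G.imag / np.pi                 # LDOS: A(w) = -Im G(w + i*eta) / pi

peak_height = A.max()               # ~ 1/(pi*eta), reached at w = E0
weight = np.trapz(A, omega)         # ~ 1, the spectral weight of the pole
```

In the full calculation $G_{ii}(z)$ also has a branch cut from the continuum of unbound states, where the LDOS is a smooth function rather than a set of isolated Lorentzians.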
The propagator of Eq. (5) for the full interacting Hamiltonian is calculated using a novel, generalized version of the Momentum Average approximation (MA) – a method well established and validated for studying single polarons30–33 and bipolarons.34–36 This generalization allows us, for the first time, to include in the variational space configurations with two phonon clouds located arbitrarily far apart: a hole cloud at site $i$, and an electron cloud elsewhere in the chain.

We now briefly describe this method, before moving on to discuss the results.
A. Non-interacting spectrum

The first step is to obtain the Green's function in the absence of carrier-phonon coupling ($M_e = M_h = 0$). The Green's function $G^{(i,0)}_{ij}(z)$ corresponding to $\hat{H}_0 = \hat{T}_e + \hat{V}_{e-h} + \hat{H}_{ph}$, i.e. for the system without carrier-phonon coupling, can be calculated analytically (see Appendix A for details). The spectrum extracted from the poles of this Green's function has a discrete eigenstate at $\omega = -\sqrt{4t^2 + U^2}$ and a continuum for $\omega \in [-2t, 2t]$. The continuum describes the electron unbound to the hole, i.e. free to move throughout the system. The discrete eigenstate is the energy of the bound exciton, lying below this continuum for any value of $U > 0$. All these features would be shifted by $n\Omega$ if there were $n$ phonons in the system, but for the propagator of interest to us $n = 0$.
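This spectrum is easy to verify numerically. The sketch below (illustrative only; the chain length and parameter values are arbitrary choices, not taken from the paper) diagonalizes a finite chain with the attractive site at its center and recovers the discrete bound state at $-\sqrt{4t^2+U^2}$ below the band edge $-2t$:

```python
import numpy as np

t, U, N = 1.0, 1.0, 101        # hopping, on-site attraction, chain length (odd)

# One-electron sector: nearest-neighbour hopping plus -U on the hole's site.
H = np.zeros((N, N))
for n in range(N - 1):
    H[n, n + 1] = H[n + 1, n] = -t
H[N // 2, N // 2] = -U         # static hole at the centre of the chain

E = np.linalg.eigvalsh(H)
E_exciton = E[0]               # discrete level below the continuum
# infinite-chain result: E_exciton -> -sqrt(4 t^2 + U^2); the remaining
# eigenvalues fill the band [-2t, 2t]
```

The finite-size error is exponentially small because the bound state is localized over only a few sites for these parameters.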
B. Turning on interactions: Lang-Firsov transformation

In the presence of carrier-phonon couplings (finite $M_e, M_h$), if the carriers are not bound then they each create phonon clouds in their vicinity, turning into polarons. In the bound state their clouds combine, resulting in an exciton-polaron.

Because the hole cannot move in our simplified model, and because its coupling to the lattice is local, its phonon cloud is definitely located at the hole site $i$. We then use the Lang-Firsov transformation $U_i = \exp[\frac{M_h}{\Omega}(b_i - b^\dagger_i)]$ to integrate out the hole-phonon coupling:
\tilde{H}_i = U^\dagger_i \hat{H} U_i = \hat{T}_e - \left( U + \frac{2M_eM_h}{\Omega} \right) c^\dagger_i c_i - \frac{M_h^2}{\Omega} + \Omega \sum_l b^\dagger_l b_l + M_e \sum_l c^\dagger_l c_l (b^\dagger_l + b_l)   (6)
after noting that $U^\dagger_i b_l U_i = b_l - \delta_{i,l} \frac{M_h}{\Omega}$. This transformation is exact and shows the hole-polaron formation energy $-M_h^2/\Omega$ but also a change of the effective Coulomb attraction experienced by the electron when at site $i$, $U \to \tilde{U} = U + \frac{2M_eM_h}{\Omega}$, arising from the electron's coupling to the hole's cloud in addition to the Coulomb interaction with the hole. The propagator for the electron-hole pair

G_{ij}(z) \equiv \langle 0| c_i h_i \hat{G}(z) h^\dagger_i c^\dagger_j |0\rangle   (7)

is then rewritten in terms of the transformed Hamiltonian:
G_{ij}(z) = e^{-\frac{M_h^2}{2\Omega^2}} \sum_{n=0}^{\infty} \frac{1}{n!} \left( \frac{M_h}{\Omega} \right)^n H_{ij}(n, \tilde{z})   (8)

where the new propagators

H_{ij}(n, \tilde{z}) = \langle 0| h_i c_i U_i \tilde{G}(\tilde{z}) h^\dagger_i c^\dagger_j (b^\dagger_i)^n |0\rangle   (9)
describe the propagation of the electron in the presence of phonons created by the hole. To obtain Eq. (8) we used the Baker–Campbell–Hausdorff formula to rewrite $U^\dagger_i |0\rangle = e^{-M_h^2/2\Omega^2} \sum_{n=0}^{\infty} \frac{1}{n!} \left( b^\dagger_i \frac{M_h}{\Omega} \right)^n |0\rangle$, and we introduced $\tilde{z} = z + M_h^2/\Omega$ and the transformed resolvent $U^\dagger_i \hat{G}(z) U_i \equiv \tilde{G}(\tilde{z}) = (\tilde{z} - \hat{h}_i)^{-1}$, where

\hat{h}_i = \tilde{H}_i + \frac{M_h^2}{\Omega} = \hat{T}_e - \tilde{U} c^\dagger_i c_i + \hat{H}_{ph} + \hat{V}_{e-ph}

describes the electron's kinetic energy, effective interaction with the hole located at $i$, and coupling to the lattice. So far, everything is exact.
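The bookkeeping produced by the transformation can be made concrete with a short sketch (the parameter values are illustrative choices matching the scales used later, not results of this paper): it evaluates the hole-polaron formation energy $-M_h^2/\Omega$ and the dressed attraction $\tilde U = U + 2M_eM_h/\Omega$, which for opposite-sign couplings $M_e = -M_h$ weakens with coupling and changes sign at $M_e = \sqrt{U\Omega/2}$.

```python
U, Omega = 1.0, 0.5              # Coulomb attraction and phonon energy (units of t)

def lang_firsov_shifts(Me, Mh, U=U, Omega=Omega):
    """Return (hole-polaron formation energy, dressed attraction U~) of Eq. (6)."""
    return -Mh**2 / Omega, U + 2 * Me * Mh / Omega

# opposite-sign couplings, as expected for an electron and a hole
E_form, U_eff = lang_firsov_shifts(Me=0.6, Mh=-0.6)
# here U~ = 1 - 2*0.36/0.5 = -0.44: the on-site interaction has become repulsive
Me_star = (U * Omega / 2) ** 0.5   # coupling at which U~ changes sign
```

This sign change of $\tilde U$ is the overscreening mechanism at the heart of the dissociation discussed below.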
C. Analogy to the disorder MA

Note that $\hat{h}_i$ obtained above is formally equivalent to the Hamiltonian for an electron with Holstein coupling in the presence of an on-site 'disorder' at site $i$. In previous work, we have already demonstrated that for such problems, even the simplest version of the variational momentum average (MA) approximation, namely the one-site MA(0) version, is quantitatively accurate if $t/\Omega$ is not too large.37,38 We use the same approximation here, straightforwardly generalized to include the presence of phonons created by the hole at site $i$. Specifically, we implement an MA where the variational space allows for the presence of two phonon clouds: one at site $i$ due primarily to the hole, and one at any other site of the system, created by the electron. We note that the electron cloud can be allowed to spread over more sites,39 increasing the accuracy of the approximation; however, the resulting improvements are quantitatively small and do not affect the physics. For our purposes it suffices to proceed with the one-site cloud approximation, which predicts energies to within an accuracy of a few percent.30,37,38,40
Proceeding by analogy with the disorder MA calculation, the equations of motion (EOMs) for the propagators in this two-cloud generalization of MA are obtained by repeated use of the Dyson identity $\hat{G} = \hat{G}_0 + \hat{G}\hat{V}\hat{G}_0$ with $\hat{V} = \hat{V}_{e-ph}$. The resulting system of equations (B1-B3) and its derivation are shown for completeness in Appendix B. The linear system that emerges turns out to be amenable to further simplifications, driven by the intuition that not all propagators contribute equally: indeed, we find that about half the propagators may be set to zero (halving the size of the system) with no noticeable changes to the resulting spectrum. More details on this further approximation and the intuition behind it are given in Appendix C, and in Appendix D we show some results that justify the validity of this further approximation.
D. Exciton wavefunction and the phonon cloud

Once the Green's functions $G_{ij}$ are obtained by solving the linear system, to further elucidate the nature of the ground-state properties of our model we characterize the spatial extent of the exciton wavefunction, as well as calculate the size of its phonon cloud. To obtain the former, we use the Lehmann decomposition $G_{ij}(z) = \sum_n \langle 0|h_i c_i|\psi_n\rangle \langle\psi_n|c^\dagger_j h^\dagger_i|0\rangle/(z - E_n)$, where $\hat{H}|\psi_n\rangle = E_n|\psi_n\rangle$ are the eigenstates with one electron and one hole. At the exciton energy $E_0$, and if $\eta$ is much smaller than the gap to the continuum, there is only one dominant contribution to the Lehmann sum: $G_{ij}(z = E_0 + i\eta) \approx \langle 0|h_i c_i|\psi_0\rangle \langle\psi_0|c^\dagger_j h^\dagger_i|0\rangle/i\eta$. Therefore we can use

\rho_{ij}(E_0) = \frac{|\langle 0|h_i c_j|\psi_0\rangle|^2}{|\langle 0|h_i c_i|\psi_0\rangle|^2} \approx \frac{|G_{ij}(E_0)|^2}{|G_{ii}(E_0)|^2}   (10)

to characterize the probability that the electron is at a distance $|j - i|$ from the hole in the exciton ground-state, scaled such that $\rho_{ii}(E_0) = 1$.
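For the bare exciton of Sec. III A ($M_e = M_h = 0$) this ratio has a closed form: Eq. (A3) of Appendix A gives $G_{ij}/G_{ii} = g_{j-i}/g_0 = \zeta^{|j-i|}$, so $\rho_{ij} = |\zeta|^{2|j-i|}$ decays exponentially with distance. A quick sketch (the parameters are arbitrary illustrative choices):

```python
import cmath

t, U = 1.0, 1.0
E0 = -(4 * t**2 + U**2) ** 0.5      # bare exciton energy, -sqrt(5) here

def zeta(z, t=t):
    # zeta(z) = z/2t - sqrt(z/2t - 1)*sqrt(z/2t + 1), as in Appendix A
    return z / (2 * t) - cmath.sqrt(z / (2 * t) - 1) * cmath.sqrt(z / (2 * t) + 1)

zt = zeta(complex(E0))
rho = [abs(zt) ** (2 * d) for d in range(5)]   # rho_{ij} at distance d = |j - i|
# rho ~ [1, 0.38, 0.15, ...]: a small, strongly bound exciton for U = t
```

The rapid decay confirms the Frenkel-like character of the bare exciton that the phonons must undo.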
To calculate the average number of phonons $N_{ph}$ in the exciton cloud, we use the Hellmann-Feynman theorem:41,42

N_{ph} = \langle\psi_0| \sum_l b^\dagger_l b_l |\psi_0\rangle = \frac{\partial E_0}{\partial \Omega}.   (11)

The derivative is computed numerically with a finite-difference approach. Both of these metrics give additional glimpses into the impact of phonons on the dissociation process.
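As a sanity check of this recipe in a limit where the answer is known exactly (a sketch; the atomic limit $t = 0$ is not the regime studied in this work): for a single Holstein polaron at $t = 0$, $E_0(\Omega) = -M^2/\Omega$, and Eq. (11) returns the familiar cloud size $N_{ph} = M^2/\Omega^2$.

```python
M, Omega, delta = 0.6, 0.5, 1e-6   # coupling, phonon energy, finite-difference step

def E0(om, M=M):
    # atomic-limit (t = 0) Holstein polaron ground-state energy
    return -M**2 / om

# Hellmann-Feynman: N_ph = dE0/dOmega, via a central finite difference
N_ph_fd = (E0(Omega + delta) - E0(Omega - delta)) / (2 * delta)
N_ph_exact = (M / Omega) ** 2      # known atomic-limit cloud size
```

In the full problem $E_0$ comes from the MA spectrum and the same central difference is applied.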
IV. RESULTS

A. Exciton dissociation driven by electron-phonon coupling

Armed with the methods from the previous section, we calculate the spectrum of a system with one electron and one hole, in the presence of a short-range (on-site) Coulomb attraction of magnitude $U > 0$, and of Holstein carrier-phonon couplings $M_e$ and $M_h$, respectively, to an optical dispersionless phonon mode of energy $\Omega$. As stated previously, we focus on 1D chains, where the carriers' tendency to bind into an exciton is enhanced. The electron's nearest neighbor hopping is $t = 1$; meanwhile the hole is localized, modeling either a valence band with a very large effective mass or a hole trapped by an acceptor impurity.
Exciton dissociation driven by the electron-phonon coupling is demonstrated graphically in Fig. 2. The panels show the contour plot of the LDOS $A(\omega)$ at the hole site versus energy and coupling $M_e$, when $U = 1$, $\Omega = 0.5$ and $M_e = -M_h$ (panel a); $M_e = -0.5M_h$ (panel b); $M_e = -2M_h$ (panel c); and $M_e = M_h$ (panel d).

At $M_e = M_h = 0$, the lowest energy feature in the electron+hole spectrum is a discrete peak marking the existence of the exciton, just as discussed in Sec. III A. If $M_eM_h < 0$, the discrete peak merges smoothly with the continuum at $M_e^{(c)}$ and the exciton dissociates into unbound electron- and hole-polarons for $M_e > M_e^{(c)}$. There is no discontinuity in the LDOS at $M_e^{(c)}$: thus, there is no contradiction between our result and Ref. 27. By contrast, if $M_eM_h > 0$ (panel d), the exciton is further stabilized by increasing coupling.29
The carrier-phonon coupling $M$ is set by the gradient of the carrier-lattice potential with respect to a small lattice displacement. Because the hole and the electron have opposite charge, their respective carrier-lattice potentials have opposite signs, and thus $M_e$ and $M_h$ have opposite signs. Physically, this is because a lattice distortion that is energetically favorable for an electron is generically unfavorable for a hole (left panel of Fig. 1). Moreover, a very small Frenkel exciton, with the electron and hole at the same site, creates no local charge imbalance, so no lattice distortion is expected (right panel of Fig. 1). In the atomic limit ($t = 0$), a vanishing exciton-polaron binding energy $-(M_e + M_h)^2/\Omega \approx 0$ implies that $M_e \approx -M_h$. Of course, one can envision more complex situations where $|M_e| \neq |M_h|$; however, panels (b) and (c) of Fig. 2 show the same dissociation phenomenology for different ratios $M_h/M_e < 0$, demonstrating that exciton dissociation does not require fine-tuning: it is guaranteed to happen at large enough couplings. On the other hand, the exciton is always stable if $M_h/M_e > 0$ (see panel (d) of Fig. 2), because in this case the cloud created by the exciton is larger than the sum of the individual clouds created by the two unbound carriers, further stabilizing the exciton.27,29
FIG. 2. Contour plots of the LDOS $A(\omega)$ at the hole site when $\Omega = 0.5$ and $U = 1$. The electron-phonon coupling $M_e$ is shown on the x axis (the corresponding $M_h$ is indicated on the figure). The dashed red line shows where we expect the lower edge of the continuum of eigenstates describing unbound electron- and hole-polarons, based on their individually calculated MA energies. Its good agreement with the calculated spectral weight provides a validation of the generalized MA we developed. The fast oscillations in the continuum weight are finite size effects, due to the cutoff $|l - i|_m = 50$ for the maximum distance between the two clouds; the maximum numbers of phonons in the two clouds are set to $n_m = k_m = 20$, sufficient for convergence. The discrete peak appearing below the continuum at small $M_e$ is the exciton bound state, broadened into a Lorentzian by the finite $\eta = 0.01$. With increasing coupling, the exciton approaches the continuum and eventually merges smoothly with it, marking its dissociation into a pair of unbound electron and hole polarons. This behaviour is robust so long as the couplings are of opposite sign, so that $M_eM_h < 0$, see panels (a)-(c). In contrast, when $M_eM_h > 0$, the exciton is always stable, see panel (d).
B. Exciton dissociation phase diagram

Figure 3 traces the crossover (blue line) between the ground-states with an exciton-polaron and those with unbound electron- and hole-polarons. The dashed line shows the perturbation theory prediction (details in Appendix E). The agreement is excellent at small $U$, as expected, while at larger $U$ perturbation theory overestimates the critical coupling needed for dissociation.
C. Exciton-polaron characteristics

Next, we calculate the average number of phonons $N_{ph}$ in the exciton cloud, and also the probability $\rho_{ij}$ that the electron is at a distance $|j - i|$ from the hole in the exciton ground-state, scaled such that $\rho_{ii} = 1$ (see Sec. III D for details).

Representative results are shown in Fig. 4. For completeness, panel (a) shows the LDOS versus $\omega$ and $M_e$, with dissociation occurring slightly above $M_e = 0.6$. Panel (b) shows $N_{ph}$ of the exciton-polaron (solid yellow line), compared to the sum of the ground-state average numbers of phonons in the electron-polaron and the hole-polaron clouds (red dashed line); the latter are calculated individually and then summed. As expected, when tightly bound by an attractive $U$, the electron and the hole largely cancel each other's lattice distortions, resulting in many fewer phonons than for the free polarons.

Panels (c)-(e) show $\rho_{ij}$ vs. $j - i$ for $M_e = 0.4, 0.5, 0.6$, respectively (see vertical dotted lines in panels (a) and (b)). At small couplings, $\rho_{ij}$ is sharply peaked at the hole
[Fig. 2 image: four contour panels (a)-(d) of $\tanh A_{00}(\omega)$ versus $M_e$ and $\omega$, for the coupling ratios listed in the caption.]
FIG. 3. Exciton dissociation phase diagram. The critical electron-phonon coupling for dissociation increases with the Coulomb attraction $U$; it is calculated with MA (blue solid) and with perturbation theory (orange dashed). The orange region above the critical line is where we expect dissociated electron and hole polarons, whereas the blue region below the line corresponds to the bound exciton-polaron. Other parameters are $\Omega = 0.5$, $M_e = -M_h$.
site $i$, as expected for a strongly bound, small Frenkel exciton. As the coupling increases, $\rho_{ij}$ acquires "fat tails" that are consistent with a larger exciton. Just before dissociation, $\rho_{ij}$ spreads over very many sites, consistent with the smooth crossover to an unbound electron-polaron that is (nearly) equally likely to be at any distance from the hole.
V. CONCLUSIONS
We have shown that strong carrier-phonon coupling favors the dissociation of excitons into free polarons, even on 1D chains where excitons should be stable for any electron-hole attraction. This phenomenology is the counterpart to what drives BCS superconductivity.43 There, phonons overscreen the electron-electron repulsion, turning it into an effective attraction. Here, phonons screen the electron-hole attraction and can turn it repulsive, at sufficiently strong coupling.

This phenomenology is robust and should be considered when analyzing exciton stability in materials with carrier-phonon coupling, because the critical coupling for dissociation need not be very large. Figure 4 shows a critical value $M_e = -M_h \approx 0.6$, which corresponds to a weak effective Holstein coupling $\lambda_c = M_e^2/2t\Omega \approx 0.36$ for the electron, even though the bare exciton binding energy is a considerable $0.5t$ for those parameters. Indeed, panel (b) of Fig. 4 confirms that the average phonon numbers are small. Of course, to some extent this is because of the rather large phonon frequency $\Omega = 0.5t$ used there, although such ratios are reasonable in some organic materials.
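The numbers quoted above follow directly from the definitions; a quick check using the Fig. 4 parameters:

```python
t, Omega, U = 1.0, 0.5, 1.5     # parameters of Fig. 4
Me = 0.6                         # approximate critical coupling read off the figure

lam_c = Me**2 / (2 * t * Omega)             # effective Holstein coupling, ~0.36
E_bind = (4 * t**2 + U**2) ** 0.5 - 2 * t   # bare exciton binding energy, 0.5 t
# a weak coupling (lam_c well below 1) undoes a binding energy of half the hopping
```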
Regarding the main approximations in our model:
FIG. 4. Characterization of the phonon cloud of the exciton-polaron. a) Contour plot of the LDOS at the hole site, as a function of the coupling $M_e$ and energy $\omega$. The yellow solid line tracks the exciton energy while the dashed red line tracks the lower edge of the continuum; their intersection marks the dissociation point. We track the exciton energy up to $M_e = 0.6$, where its binding energy becomes comparable to $\eta$. b) Average number of phonons $N_{ph}$ in the exciton cloud (solid yellow line) and in the combined electron- and hole-polaron clouds (red dashed line). c)-e) Probability $\rho_{ij}$ that the electron is at a distance $|j - i|$ from the hole in the exciton ground-state, scaled such that $\rho_{ii} = 1$, for $M_e = 0.4, 0.5, 0.6$, respectively. Other parameters are $\Omega = 0.5$, $U = 1.5$, $M_e = -M_h$, $\eta = 0.01$, $n_m = k_m = 20$, $|l - i|_m = 50$.
(i) we do not expect different dimensionality to change this phenomenology. In 3D, a bare exciton is stable only if the Coulomb attraction is above a critical value.29 Whether the critical value is 0 (like in 1D) or finite (like in 3D) is irrelevant: strong enough carrier-phonon coupling will lower the effective attraction below this critical value and make the exciton unstable. Our MA method can be straightforwardly used to study higher-D systems.

(ii) the assumption that the hole is immobile is also not essential: 'releasing' the hole does not change this picture qualitatively, only quantitatively. Moreover, in
[Fig. 3 image: phase diagram in the $(U, M_e)$ plane separating the exciton-polaron region from the unbound electron + hole polarons region. Fig. 4 image: contour map of $\tanh A_{00}(\omega)$ with the exciton and polaron branches marked, and the average phonon numbers of the exciton and of the free polarons versus $M_e$.]
FIG. 5. Schematic of the effective potential for the exciton-polaron. Screened electron-hole interaction (black line), obtained by summing the bare long-range Coulomb attraction (dashed line) and the contribution from phonon screening (blue line). Top: when the coupling is weak, the combined polaron radius $D$ is large and the screening is weak. Middle: strong coupling leads to small polarons with a strong short-range repulsion. The total potential has a minimum at $r \sim D$. Bottom: for a rapidly-decreasing bare attraction, a metastable exciton may be trapped at the $r = 0$ local minimum, before tunneling into a dissociated state.
the context of OSC materials doped with either acceptor or donor molecules, it is possible to envision trapping one species of the carriers on such molecules.
+ optical mode and that it is of Holstein type are also not
796
+ essential. Regardless of such details, polaron formation
797
+ associated with local excess charge leads to a lowering of
798
+ the energy. That is the only ingredient necessary for the
799
+ mechanism discussed here.
800
(iv) The assumption of a short-range Coulomb attraction is non-trivial, however, and relaxing it can lead to qualitative changes. This is because the phonon screening discussed here acts only at electron-hole distances $r < D$, where $D$ is the sum of the radii of the two polarons. If the electron and hole are sufficiently far apart, so that each can create its own polaron cloud ($r > D$), the phonon screening vanishes. This contribution looks roughly like the blue lines in Fig. 5, where $\Delta E_B \approx -2M_eM_h/\Omega$ is the difference between the exciton-polaron and the free polarons formation energies. While $\Delta E_B$ increases with increasing coupling, $D$ decreases as the polarons become smaller. If the Coulomb attraction decreases rather slowly with $r$ (dashed line), it is possible that as the coupling goes from weak (top panel) to strong (middle panel), the total potential has a well whose minimum moves from $r \sim 0$ to $r \sim D$. The latter well can still trap a stable exciton in 1D, because both the lower dimensionality and the increased effective mass of strongly-coupled Holstein polarons would favor a bound state. We believe that this explains why exciton dissociation was not observed in Ref. 44. However, in higher dimensions relevant for OSCs and/or for lighter Peierls polarons,33 such a 'donut'-shaped trap might not suffice to bind the polarons, and the ground-state at strong coupling would still exhibit dissociation.
A new scenario can occur if the bare Coulomb attraction decreases significantly from $r = 0$ to $r = D$. As sketched in Fig. 5(c), $r = 0$ can be a local minimum of the total potential (black line), followed by a potential barrier and a very shallow potential well for $r > D$. A Frenkel exciton with radius smaller than $D$ can then be metastable, with a lifetime inversely proportional to the probability of tunneling through the barrier.
Even though the ground state is the dissociated state, small excitons loaded optically into the metastable state might live long enough to control the OSC's behavior. This may explain the very puzzling fact that some OSC materials, like pure C60 films, have both very strongly bound excitons45,46 and finite, albeit small, charge separation efficiency.47 The latter would represent the small fraction of excitons that tunnel out and dissociate. This scenario is also qualitatively consistent with the observation that a dilute (∼10%) concentration of donor molecules increases the charge separation efficiency. Such molecules boost light absorption, so the metastable exciton state is populated more efficiently. This will increase the concentration of charge-separated pairs accordingly, if the donor molecules are dilute enough to allow charge separation to proceed, explaining why peak efficiency occurs at a very low donor molecule concentration.47 The above scenario cannot be verified with MA; however, a recent study found a weak potential barrier due to nonlocal phonon screening in lead halide perovskites.48 While their parameters are very different from ours, their finding supports the possible appearance of this new scenario in the right circumstances.

The results presented in this work illustrate some of the interesting physics expected in the many OSCs that have strong carrier-phonon coupling, and point towards possible ways to exploit it. We plan to investigate some of these topics in more detail in future works.
ACKNOWLEDGMENTS

We thank Sarah Burke for bringing this problem to our attention and for many useful discussions. We thank David Reichman, Holger Fehske and Krzysztof Bieniasz for insightful comments. We acknowledge support from the Max Planck-UBC-UTokyo Centre for Quantum Materials and the Canada First Research Excellence Fund, Quantum Materials and Future Technologies Program of the Stewart Blusson Quantum Matter Institute, and from the Natural Sciences and Engineering Research Council of Canada (NSERC). We gratefully acknowledge the use of computing resources from the Stewart Blusson Quantum Matter Institute computing cluster LISA.
+ Appendix A: Free carrier Green’s function
894
+ The Green’s function G^{(i,0)}_{ij}(z) corresponding to Ĥ_0 = T̂_e + V̂_{e−h} + Ĥ_ph, i.e.
+ for the system without carrier-phonon coupling, can be calculated analytically. In the
+ absence of electron-phonon coupling there is only an electron hopping on a 1D
+ tight-binding lattice, subject to an on-site attractive potential from the static hole
+ located at i. The corresponding Hamiltonian is H_0 = T − U c†_i c_i.
+ Here we calculate its lattice Green’s function:
+ G^{(i,0)}_{lj}(z) = ⟨0| c_l [z − H_0]^{−1} c†_j |0⟩   (A1)
911
+ Applying Dyson’s identity, we find the EOM:
+ G^{(i,0)}_{l,j}(z) = g_{l−j}(z) − U g_{i−j}(z) G^{(i,0)}_{l,i}(z)   (A2)
+ where the free lattice Green’s functions g_{l−j}(z) = ⟨0| c_l [z − T]^{−1} c†_j |0⟩ can be
+ calculated analytically: g_δ(z) = [ζ(z)]^{|δ|}/(√(z − 2t) √(z + 2t)), with
+ ζ(z) = z/2t − √(z/2t − 1) √(z/2t + 1).
930
+ Equation (A2) can be solved trivially to find:
+ G^{(i,0)}_{li}(z) = G^{(i,0)}_{l−i}(z) = g_{l−i}(z)/[1 + U g_0(z)]   (A3)
938
+ and
+ G^{(i,0)}_{ll}(z) = g_0(z) − U [g_{i−l}(z)]²/[1 + U g_0(z)].   (A4)
944
+ The propagators G̃^{(i,0)}_{il}(z) appearing in the main text and in other appendices
+ have the same expressions but with U → Ũ, where Ũ is the overscreened Coulomb
+ attraction defined in Sec. III.
950
+ Appendix B: Green’s function with carrier-lattice
951
+ coupling
952
+ Here the MA equations of motion are obtained by repeated application of the Dyson
+ identity G̃(z̃) = Ĝ^{(i)}_0(z̃) + G̃(z̃) V̂_{e−ph} Ĝ^{(i)}_0(z̃), where
+ Ĝ^{(i)}_0(z) = (z − T̂_e + Ũ c†_i c_i)^{−1} is the resolvent in the absence of
+ electron-phonon coupling. We note that its corresponding Green’s functions
+ G̃^{(i,0)}_{ij}(z) = ⟨0| c_i Ĝ^{(i)}_0(z) c†_j |0⟩ equal those calculated in Sec. A upon
+ replacing U → Ũ.
967
+ Using Dyson’s identity once, we find:
+ H_{ij}(n, z̃) = G̃^{(i,0)}_{ij}(z̃ − nΩ) (M_h/Ω)^n e^{−M_h²/2Ω²}
+ + G̃^{(i,0)}_{ij}(z̃ − nΩ) M_e [n H_{ii}(n − 1, z̃) + H_{ii}(n + 1, z̃)]
+ + Σ_{l≠i} G̃^{(i,0)}_{lj}(z̃ − nΩ) M_e F_{ill}(n, 1, z̃).   (B1)
986
+ Here, the terms on the 2nd line arise when the electron
987
+ travels to site i and adds to or removes from the phonons
988
+ already present there, while the last line describes terms
989
+ where the electron moves to some other site l and starts
990
+ a new cloud there, with the corresponding generalized
991
+ two-cloud propagator:
992
+ Fijl(n, k, ˜z) ≡ ⟨0| cihiUi ˜G(˜z)h†
993
+ ic†
994
+ j(b†
995
+ i)n(b†
996
+ l )k |0⟩ .
997
+ (B2)
998
+ The equation of motion (B1) is exact. Solving it neces-
999
+ sitates calculating the propagators Fill that appear in it.
1000
+ We generate their equations of motion using again the
1001
+ Dyson identity, but now also imposing the variational
1002
+ constraint consistent with the one-site MA(0) approxi-
1003
+ mation for the electron cloud, namely that additional
1004
+ phonons cannot be created away from the two existing
1005
+ clouds. The resulting EOMs are
1006
+ F_{ill}(n, k, z̃) = M_e G̃^{(i,0)}_{ll}(z̃ − (n + k)Ω) [k F_{ill}(n, k − 1, z̃) + F_{ill}(n, k + 1, z̃)]
+ + M_e G̃^{(i,0)}_{il}(z̃ − (n + k)Ω) [k F_{iil}(n, k − 1, z̃) + F_{iil}(n, k + 1, z̃)]
+ F_{iil}(n, k, z̃) = M_e G̃^{(i,0)}_{ii}(z̃ − (n + k)Ω) [k F_{iil}(n, k − 1, z̃) + F_{iil}(n, k + 1, z̃)]
+ + M_e G̃^{(i,0)}_{il}(z̃ − (n + k)Ω) [k F_{iil}(n, k − 1, z̃) + F_{iil}(n, k + 1, z̃)].   (B3)
1019
+ Eqs. (B1-B3) define a linear, inhomogeneous system
1020
+ of coupled equations that can be numerically solved for
1021
+
1022
+ 9
1023
+ each value of z, with the resulting Hij(n, ˜z) then used in
1024
+ Eq. (8) to construct Gij(z). However, this approach is
1025
+ computationally intensive because one needs large cut-
1026
+ offs for the maximum numbers km, nm of phonons in the
1027
+ two clouds, as well as for the maximum distance |l − i|m
1028
+ between the clouds, before convergence is reached. An
1029
+ improved approach is discussed in Appendix C.
1030
+ Appendix C: Simplifying the EOMs
1031
+ A much more efficient yet still accurate solution to Eqs.
1032
+ (B1-B3) can be obtained by taking advantage of the fact
1033
+ that for the energies of interest, which lie below the free
1034
+ electron continuum, the free propagators ˜G(i,0)
1035
+ il
1036
+ (z) de-
1037
+ crease exponentially with the distance |l − i|. If we keep
1038
+ only the largest term with l = i, then Eqs. (B3) split into
1039
+ two uncoupled recurrence relations, one for Fill and one
1040
+ for Fiil, with only the former needed in Eq. (B1). This
1041
+ former recurrence relation can be solved with the ansatz:
1042
+ F_{ill}(n, k, z̃) = A^{(i,l)}_k(z̃ − nΩ) F_{ill}(n, k − 1, z̃)   (C1)
+ where we note that F_{ill}(n, 0, z̃) ≡ H_{il}(n, z̃). The continued fractions
+ A^{(i,l)}_k(z) = k M_e G̃^{(i,0)}_{ll}(z − kΩ) / [1 − M_e G̃^{(i,0)}_{ll}(z − kΩ) A^{(i,l)}_{k+1}(z)]   (C2)
1059
+ are calculated starting from A^{(i,l)}_{k_m+1}(z) = 0 for a sufficiently large k_m to
+ ensure the desired accuracy. In particular, this means that we can replace
+ F_{ill}(n, 1, z̃) = A^{(i,l)}_1(z̃ − nΩ) H_{il}(n, z̃) in Eq. (B1) to convert it into a
1066
+ linear system linking only the Hij propagators. This still
1067
+ requires a summation over all the sites in the system,
1068
+ which in practice means summing over sites l up to a
1069
+ distance large enough from i that the sum converges.
1070
+ An efficient solution of such a linear system was pro-
1071
+ posed in Refs. 37 and 38 and we adopt it here. It is
1072
+ based on the observation that for |l − i| ≫ 1, the local potential Ũ created by the
+ hole becomes irrelevant and the impurity Green’s function reduces to the free
+ electron propagator
+ G̃^{(i,0)}_{ll}(z̃) → g_0(z̃) = (1/N) Σ_k 1/(z̃ − ϵ_k) = 1/(√(z̃ − 2t) √(z̃ + 2t)).   (C3)
1088
+ As a result, for |l − i| ≫ 1, the continued fractions approach an asymptotic value
+ that becomes independent of i, l: A^{(i,l)}_1(z̃ − nΩ) → Σ_MA(z̃ − nΩ)/M_e. Physically,
+ Σ_MA(z) is the MA^{(0)} self-energy of the electron-polaron in the absence of the
+ ‘impurity’ potential created by the hole located at i (see Ref. 30 for a derivation)
+ Σ_MA(z) = M_e² g_0(z − Ω) / [1 − 2M_e² g_0(z − Ω) g_0(z − 2Ω) / (1 − 3M_e² g_0(z − 2Ω) g_0(z − 3Ω) / (1 − ...))]   (C4)
1106
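The continued fraction (C4) can be evaluated with the same backward recursion used for the A^{(i,l)}_k of Eq. (C2): start from zero at a sufficiently deep level and iterate upward. A minimal Python sketch, assuming units t = 1 and the free propagator g_0 of Eq. (C3); parameter values and function names are illustrative, not taken from the paper:

```python
import cmath

def g0(z, t=1.0):
    # Free 1D tight-binding lattice Green's function, Eq. (C3):
    # g0(z) = 1 / (sqrt(z - 2t) * sqrt(z + 2t))
    return 1.0 / (cmath.sqrt(z - 2 * t) * cmath.sqrt(z + 2 * t))

def sigma_ma(z, Me, Omega, t=1.0, n_max=60):
    # Evaluate the MA(0) self-energy, Eq. (C4), by backward recursion:
    # A_n = n * Me * g0(z - n*Omega) / (1 - Me * g0(z - n*Omega) * A_{n+1}),
    # starting from A_{n_max+1} = 0; then Sigma_MA(z) = Me * A_1.
    A = 0.0
    for n in range(n_max, 0, -1):
        g = g0(z - n * Omega, t)
        A = n * Me * g / (1.0 - Me * g * A)
    return Me * A

# A small imaginary part plays the role of the broadening eta used for
# the spectral functions in the main text.
sigma = sigma_ma(-2.5 + 0.01j, Me=0.5, Omega=0.5)
```

The recursion converges rapidly in n_max because each deeper level enters only through a factor of order M_e g_0, which is small below the band.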
+ Because this asymptotic value is independent of l, we
1107
+ can define a renormalized energy
1108
+ v_{il}(z̃ − nΩ) = M_e A^{(i,l)}_1(z̃ − nΩ) − Σ_MA(z̃ − nΩ)   (C5)
1112
+ which vanishes fast with increasing |l − i|. The sum in
1113
+ Eq. (B1) can be recast in terms of it by renormalizing
1114
+ the energy argument of the free propagators:
1115
+ H_{ij}(n, z̃) = G̃^{(i,0)}_{ij}(˜˜z_n) (M_h/Ω)^n e^{−M_h²/2Ω²} + G̃^{(i,0)}_{ij}(˜˜z_n) M_e [n H_{ii}(n − 1, z̃) + H_{ii}(n + 1, z̃)]
+ + Σ_{l≠i} G̃^{(i,0)}_{lj}(˜˜z_n) v_{il}(z̃ − nΩ) H_{il}(n, z̃)   (C6)
1134
+ where we defined ˜˜zn ≡ ˜z −nΩ−ΣMA(˜z −nΩ). Equations
1135
+ (C6) converge much more quickly with the summation
1136
+ over l and can be solved efficiently.
1137
+ The accuracy of the approximation of replacing the
1138
+ coupled Eqs. (B1-B3) with the much more compact and
1139
+ efficient Eq. (C6) is validated in Appendix D.
1140
+ Appendix D: Full vs approximate variational
1141
+ solutions
1142
+ The full variational solution of the particle+hole prop-
1143
+ agator can be obtained by simultaneously solving Eqs.
1144
+ (8), (B1) and (B3). They can be solved numerically, but
1145
+ this is slow because exceedingly large truncation cutoffs
1146
+ (system sizes) are required for convergence.
1147
+ Above in
1148
+ Appendix C, we proposed a much more efficient approx-
1149
+ imation which replaces Eqs. (B1)-(B3) with Eqs. (C6).
1150
+ To validate this approximation, in Fig. 6 we show a
1151
+ typical comparison of the results of the two methods for
1152
+ the LDOS at the hole site, focusing on the lower-energy
1153
+ part of the spectrum, of interest for the dissociation issue.
1154
+ Evidently, the agreement is very good. Similar diagrams
1155
+ were produced in all parameter regimes explored in this
1156
+ paper, thus effectively validating our approximation.
1157
+
1158
+ 10
1159
+ FIG. 6. Comparison of the LDOS at the hole site from solving
1160
+ the full variational solution described by Eqs. (8),(B1),(B3),
1161
+ shown in the left panel, versus the simplified and much more
1162
+ efficient Eq. (C6), shown in the right panel. Visually, the
1163
+ two are nearly indistinguishable, with most differences coming
1164
+ in at higher energies.
1165
+ Model parameters are U = 1, Ω =
1166
+ 0.5, η = 0.01, Me = −Mh and convergence parameters are
1167
+ nm = km = 12, |l − i|m = 50.
1168
+ Appendix E: Perturbation theory for exciton
1169
+ dissociation
1170
+ Here we summarize the perturbation theory (PT) cal-
1171
+ culation used to draw the dissociation line in Fig.
1172
+ 3
1173
+ in the main text. We begin by estimating the ground-
1174
+ state energies for the individual polarons.
1175
+ The result for the (static) hole-polaron is E^h_P = −M_h²/Ω. To find the
+ electron-polaron’s PT counterpart, we use the single polaron Green’s function at the
+ same one-site MA^{(0)} level of approximation:30 G(k, z) = [z − ϵ_k − Σ_MA(z)]^{−1}
+ where the full expression for Σ_MA(z) is shown in Eq. (C4). To lowest non-trivial
+ order in PT, it becomes Σ_MA ≈ M_e² g_0(ω − Ω). Using this expression to find the
1187
+ lowest k = 0 pole, we find the polaron ground-state en-
1188
+ ergy to be:
1189
+ E^e_P(k = 0) = −2t − M_e²/√(Ω(Ω + 4t))   (E1)
1196
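As a numerical cross-check of Eq. (E1), one can locate the k = 0 pole of G(k, z) = [z − ϵ_k − Σ_MA(z)]^{−1} with the lowest-order self-energy and compare it to the closed form. A short Python sketch under illustrative, assumed parameter values (t = 1, weak coupling), not values from the paper:

```python
import math

def g0_below(z, t=1.0):
    # Free 1D lattice Green's function on the real axis below the band
    # (z < -2t), where g0(z) = -1 / sqrt(z**2 - 4*t**2).
    return -1.0 / math.sqrt(z * z - 4.0 * t * t)

def polaron_energy(Me, Omega, t=1.0):
    # Lowest k = 0 pole of G(k, z) = [z - eps_k - Sigma(z)]^{-1} with
    # eps_0 = -2t and Sigma(z) = Me^2 * g0(z - Omega): solve
    # f(z) = z + 2t - Me^2 * g0_below(z - Omega) = 0 by bisection.
    f = lambda z: z + 2.0 * t - Me * Me * g0_below(z - Omega, t)
    lo, hi = -2.0 * t - 5.0, -2.0 * t - 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

E_num = polaron_energy(Me=0.2, Omega=0.5)
E_pt = -2.0 - 0.2**2 / math.sqrt(0.5 * (0.5 + 4.0))  # Eq. (E1), t = 1
```

For weak coupling the self-consistent root and the perturbative estimate (E1) should agree up to O(M_e⁴) corrections.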
+ The PT-predicted lower edge of the continuum is then at E_min = E^e_P(k = 0) + E^h_P.
1200
+ To find the bound exciton energy, we proceed similarly,
1201
+ essentially solving the EOMs to lowest order in the cou-
1202
+ plings, and then finding the location of the lowest peak
1203
+ for k = 0.
1204
+ For simplicity, we only list here the result
1205
+ when M_e = −M_h. We find the exciton ground-state energy to be given by
+ E_exc = z_0 + α g_0(z_0 − Ω) M_e², where z_0 = −√(U² + 4t²) is the bare exciton
+ energy, and
+ α = 4G^{(i,0)}_{ii}(z_0)[g_0(z_0 − Ω) − 2G^{(i,0)}_{ii}(z_0 − Ω)] / {1 + 2G^{(i,0)}_{ii}(z_0)[g_0(z_0 − Ω) − 2G^{(i,0)}_{ii}(z_0 − Ω)]}
1222
+ The dissociation occurs when Eexc = Emin.
1223
+ 1 M. Kaltenbrunner, M. S. White, E. D. Głowacki, T. Sekitani, T. Someya,
+ N. S. Sariciftci, and S. Bauer, Nat. Commun. 3 (2012).
1226
+ 2 X. Xu, K. Fukuda, A. Karki, S. Park, H. Kimura, H. Jinno,
1227
+ N. Watanabe, S. Yamamoto, S. Shimomura, D. Kitazawa,
1228
+ T. Yokota, S. Umezu, T.-Q. Nguyen,
1229
+ and T. Someya,
1230
+ PNAS 115, 4589 (2018).
1231
+ 3 A. Gambhir, P. Sandwell,
1232
+ and J. Nelson, Sol. Energy
1233
+ Mater Sol. Cells 156, 49 (2016).
1234
+ 4 L. X. Chen, ACS Energy Lett. 4,10, 2537 (2019).
1235
+ 5 E. H. dos Santos Rosa, E. L. Kowalski and L. F. Toledo,
1236
+ Sol Energy 221 (2021).
1237
+ 6 A. J. Heeger, Adv. Mater. 26 (2013).
1238
+ 7 J. B. K. Vandewal, S. Mertens and Q. Liu, J. Phys. Chem.
1239
+ Lett. 11 (2020).
1240
+ 8 S. E. Gledhill, B. Scott, and B. A. Gregg, J. Mater. Res.
1241
+ 20, 3167 (2005).
1242
+ 9 J. Nelson, Elsevier 6, 87 (2002).
1243
+ 10 S. G´elinas, A. Rao, A. Kumar, S. L. Smith, A. W. Chin,
1244
+ J. Clark, T. S. Van Der Poll, G. C. Bazan,
1245
+ and R. H.
1246
+ Friend, Science 343, 512 (2014).
1247
+ 11 S. Sutty, G. Williams, and H. Aziz, J. Photonics Energy
1248
+ 4, 1 (2014).
1249
+ 12 S. Emmerich, S. Hedwig, B. Arnoldi, J. Stockl, F. Haag,
1250
+ R. Hemm, M. Cinchetti, S. Mathias, B. Stadtm¨uller, and
1251
+ M. Aeschlimann, J. Phys. Chem. C 124, 23579 (2020).
1252
+ 13 A. A. Bakulin, A. Rao, V. G. Pavelyev, P. H. M. van Loos-
1253
+ drecht, M. S. Pshenichnikov, D. Niedzialek, J. Cornil, D.
1254
+ Beljonne & R. H. Friend, Science 335, 1340 (2012).
1255
+ 14 B. Bernardo, D. Cheyns, B. Verreet, R.D. Schaller, B.P.
1256
+ Rand & N.C. Giebink, Nat. Commun. 5 (2014).
1257
+ 15 F. J. Kahle, C. Saller, S. Olthof, C. Li, J. Lebert, S. Weiß,
1258
+ E. M. Herzig, S. H¨uttner, K. Meerholz, P. Strohriegl, and
1259
+ A. K¨ohler, J. Phys. Chem. C 122, 21792–21802 (2018).
1260
+ 16 S. E. Canton, A. J. Yencha, E. Kukk, J. D. Bozek, M.C.A
1261
+ Lopes, G. Snell and N. Berrah, Phys. Rev. Lett. 89, 045502
1262
+ (2002).
1263
+ 17 A. Zhugayevych and S. Tretiak, Annu. Rev. Phys. Chem.
1264
+ 66:1, 305 (2015).
1265
+ 18 S. M. Falke, C. A. Rozzi, D. Brida , M. Maiuri, M. Amato,
1266
+ E. Sommer, A. De Sio, A. Rubio, G. Cerullo, E. Molinari
1267
+ and C. Lienau, Science 344, 1001 (2014).
1268
+ 19 Y. Song, S. N. Clafton, R. D. Pensack, T. W. Kee & G. D.
1269
+ Scholes, Nat. Commun. 5, 4933 (2014).
1270
+ 20 S. Bera, N. Gheeraert, S. Fratini, S. Ciuchi & S. Florens,
1271
+ Phys. Rev. B 91, 041107(R) (2015).
1272
+ 21 Z. X. Z. Hu and G. Chen, J. Chem. Phys. 154 (2021).
1273
+ 22 A. E. Jailaubekov, A. P. Willard, J. R. Tritsch, W. L.
1274
+ Chan, N. Sai, R. Gearba, L. G. Kaake, K. J. Williams,
1275
+ K. Leung, P. J. Rossky & X-Y. Zhu , Nat. Mater. 12, 66
1276
+ (2013).
1277
+ 23 H. Tamura and I. Burghardt, J. Am. Chem. Soc. 135,
1278
+ 16364 (2013).
1279
+ 24 H. Tamura,
1280
+ J. G. S. Ramon,
1281
+ E. R. Bittner and I.
1282
+ Burghardt, J. Phys. Chem. B. 112, 495 (2008).
1283
+ 25 E. R. Bittner and C. Silva, Nat. Commun. 5, 3119 (2014).
1284
+ 26 A. Sumi, J. Phys. Soc. Jpn. 43, 1286 (1977).
1285
+ 27 B. Gerlach and H. L¨owen, Phys. Rev. B 42, 3537 (1990).
1286
+ 28 J. Pollmann and H. B¨uttner, Phys. Rev. B 16, 4480 (1977).
1287
+ 29 E. Burovski, H. Fehske, and A. S. Mishchenko, Phys. Rev.
1288
+ Lett. 101, 116403 (2008).
1289
+ 30 M. Berciu, Phys. Rev. Lett. 97, 036402 (2006).
1290
+
1291
+ [Fig. 6 image data: two colour maps of tanh(A_00(k, ω)), axes ω ∈ [−3.0, −2.0] and M_e ∈ [0, 1].]
1332
+ 31 M. Berciu and G. L. Goodvin, Phys. Rev. B 76, 165109
1333
+ (2007).
1334
+ 32 M. Berciu and H. Fehske, Phys. Rev. B 82, 085116 (2010).
1335
+ 33 D. Marchand, G. De Filippis, V. Cataudella, M. Berciu,
1336
+ N. Nagaosa, N. Prokof’Ev, A. Mishchenko, and P. Stamp,
1337
+ Phys. Rev. Lett. 105, 266605 (2010).
1338
+ 34 C. P. Adolphs and M. Berciu, Phys. Rev. B 90, 085149
1339
+ (2014).
1340
+ 35 J. Sous, M. Chakraborty, C. Adolphs, R. Krems,
1341
+ and
1342
+ M. Berciu, Scientific reports 7, 1 (2017).
1343
+ 36 J. Sous, M. Chakraborty, R. V. Krems,
1344
+ and M. Berciu,
1345
+ Phys. Rev. Lett. 121, 247001 (2018).
1346
+ 37 M. Berciu, A. S. Mishchenko, and N. Nagaosa, Europhys.
1347
+ Lett. 89 (2010).
1348
+ 38 H. Ebrahimnejad and M. Berciu, Phys. Rev. B 85 (2012).
1349
+ 39 D. Marchand, P. C. E. Stamp, and M. Berciu, Phys. Rev.
1350
+ B 95 (2017).
1351
+ 40 G. L. Goodvin, M. Berciu,
1352
+ and G. A. Sawatzky, Phys.
1353
+ Rev. B 74 (2006).
1354
+ 41 H. Hellmann, Einfuhrung in die Quantenchemie (Leipzig:
1355
+ Franz Deuticke, 1937).
1356
+ 42 R. P. Feynman, Phys. Rev. 56, 340 (1939).
1357
+ 43 J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev.
1358
+ 106 (1957).
1359
+ 44 M. Hohenadler, P. B. Littlewood,
1360
+ and H. Fehske, Phys.
1361
+ Rev. B 76, 184303 (2007).
1362
+ 45 Y. Ishijima and T. Ishiguro, J. Phys. Soc. Jpn. 65, 1574
1363
+ (1996).
1364
+ 46 H. Schlaich,
1365
+ M. Muccini,
1366
+ J. Feldmann,
1367
+ H. B¨assler,
1368
+ E. G¨obel, R. Zamboni, C. Taliani, J. Erxmeyer,
1369
+ and
1370
+ A. Weidinger, Chem. Phys. Lett. 236, 135 (1995).
1371
+ 47 M. Zhang, H. Wang, H. Tian, Y. Geng, and C. W. Tang,
1372
+ Adv. Mater. 23, 4960 (2011).
1373
+ 48 Y. Park, A. Obliger, and D. T. Limmer, Nano Lett. 22,
1374
+ 2398 (2022).
1375
+
69E1T4oBgHgl3EQf7AXC/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
6tFAT4oBgHgl3EQfnx0W/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5dc31e369b7c87d7eacd9da7f2ee7dd895ca6a9dd0bde984c62818302fb3adb2
3
+ size 129303
89FLT4oBgHgl3EQfBi6R/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4b2ee4984c14c41d8ee792bf921ec9f407870a8de0c60de9c0522c64910dd0e8
3
+ size 6094893
8NFLT4oBgHgl3EQfsy_c/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0e3b871f20500423481b0d2314a1062dfc13ad0f6226a878a223b31a736745e9
3
+ size 4128813
ANFQT4oBgHgl3EQf8jdP/content/2301.13447v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:31aa9f1fdaa1a3385d99eafa13fb25cd5c04768ea21852ad3f859490bf354421
3
+ size 2224545
ANFQT4oBgHgl3EQf8jdP/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9ac924d3a8e04a2abecd4983ee1efe83cdec6e8b0ab371359a3f7f990e459ba2
3
+ size 2621485
AtE1T4oBgHgl3EQfpAWX/content/tmp_files/2301.03327v1.pdf.txt ADDED
@@ -0,0 +1,1369 @@
1
+ A-posteriori QMC-FEM error estimation
2
+ for Bayesian inversion and optimal control
3
+ with entropic risk measure
4
+ Marcello Longo∗, Christoph Schwab∗, Andreas Stein∗†
5
+ January 10, 2023
6
+ Abstract
7
+ We propose a novel a-posteriori error estimation technique where the
8
+ target quantities of interest are ratios of high-dimensional integrals, as occur
9
+ e.g. in PDE constrained Bayesian inversion and PDE constrained optimal
10
+ control subject to an entropic risk measure. We consider in particular
11
+ parametric, elliptic PDEs with affine-parametric diffusion coefficient, on
12
+ high-dimensional parameter spaces. We combine our recent a-posteriori
13
+ Quasi-Monte Carlo (QMC) error analysis, with Finite Element a-posteriori
14
+ error estimation. The proposed approach yields a computable a-posteriori
15
+ estimator which is reliable, up to higher order terms. The estimator’s
16
+ reliability is uniform with respect to the PDE discretization, and robust
17
+ with respect to the parametric dimension of the uncertain PDE input.
18
+ 1
19
+ Introduction
20
+ The efficient numerical approximation of high-dimensional, parametric partial
21
+ differential equations (PDEs for short) received increasing attention during the
22
+ past years. In this work, we address two classes of high-dimensional numerical
23
+ integration problems which arise in connection with data assimilation and PDE
24
+ constrained optimization. We illustrate the abstract concepts for a parametric,
25
+ linear elliptic PDE. The first class is the so-called Bayesian inverse problem (BIP).
26
+ There, we are interested in the posterior expectation of a (linear) functional of
27
+ the solution u of a parametric PDE, conditional on observation data subject to
28
+ additive, centered Gaussian observation noise [18,19]. See also [10]. A second
29
+ problem class is PDE-constrained optimization. Specifically, the optimal control
30
+ problem (OCP) of parametric PDEs under an entropic risk measure [7], where
31
+ the state variable satisfies a parametric PDE constraint [9].
32
+ When numerically approximating solutions of a BIP or an OCP, it is essential
33
+ to quantify the error due to the numerical discretization in order to meet a
34
+ prescribed numerical tolerance without wasting computational resources. As
35
+ solving PDEs exactly is in general not possible, discretizations such as Finite
36
+ Element Methods (FEM) must be used instead. Additionally, the parametric
37
+ ∗Seminar for Applied Mathematics, ETH Zürich
38
+ †Corresponding author. Email: [email protected]
39
+ 1
40
+ arXiv:2301.03327v1 [math.NA] 9 Jan 2023
41
+
42
+ uncertainty in the forward PDE model is passed on to the solution, and this
43
+ must be taken into account both in the computation and in the error estimation.
44
+ This justifies the need for an a-posteriori error analysis.
45
+ Assuming that the uncertain PDE coefficients can be described by means of a
46
+ parameter vector y ∈ U :=
47
+
48
+ − 1
49
+ 2, 1
50
+ 2
51
+ �s where s ∈ N, s ≫ 1, e.g. (6) below, we can
52
+ employ suitable quasi-Monte Carlo (QMC) rules to approximate integrals over U.
53
+ Here, we select extrapolated polynomial lattice (EPL) rules as first introduced
54
+ in [3,5]. This choice is motivated by the deterministic nature or their quadrature
55
+ nodes Pm, |Pm| = bm for some prime b [15], and good convergence properties
56
+ under quantified parametric regularity of the integrand functions with respect
57
+ to y ∈ U, uniformly in the dimension s [3]. Moreover, it was shown in [5,14]
58
+ that under assumptions, EPL quadratures allow for computable a-posteriori
59
+ quadrature error estimators that are asymptotically exact as m → ∞, with
60
+ dimension robust ratio between estimated and actual quadrature error.
61
+ Both, BIP and OCP problems for PDEs with parametric input take the form
62
+ Z′
63
+ Z ∈ Y,
64
+ where Z =
65
+
66
+ U
67
+ Θ(y) dy,
68
+ Z′ =
69
+
70
+ U
71
+ Θ′(y) dy,
72
+ (1)
73
+ for some suitable integrable functions Θ: U → R and Θ′ : U → Y, where Y
74
+ is a separable Hilbert space and Z, Z′ are Bochner integrals with respect to a
75
+ product measure dy on the possibly high-dimensional parameter space U. In
76
+ particular, we have Y = R for the BIP case and Y ∈ L2(D) for the OCP case,
77
+ with D being the physical domain of the considered PDE. Approximating the
78
+ high-dimensional integrals with averages over polynomial lattices Pm ⊂ U yields
79
+ a first approximation
80
+ Z′
81
+ m
82
+ Zm
83
+ ∈ Y,
84
+ where Zm = 1
85
+ bm
86
+
87
+ y∈Pm
88
+ Θ(y),
89
+ Z′
90
+ m = 1
91
+ bm
92
+
93
+ y∈Pm
94
+ Θ′(y).
95
+ (2)
96
+ Then, since the integrands Θ, Θ′ depend on the solution of a y-parametric
97
+ PDE, we discretize the parametric PDEs for y ∈ Pm, resulting in computable,
98
+ parametric integrand functions Θh(y), Θ′
99
+ h(y) and in the computable estimates
100
+ Z′
101
+ m,h
102
+ Zm,h
103
+ ∈ Y,
104
+ where Zm,h = 1
105
+ bm
106
+
107
+ y∈Pm
108
+ Θh(y),
109
+ Z′
110
+ m,h = 1
111
+ bm
112
+
113
+ y∈Pm
114
+ Θ′
115
+ h(y).
116
+ (3)
117
+ Here, the parameter h > 0 denotes the meshwidth of conforming Lagrangian
118
+ Finite Element discretizations. We present a computable a-posteriori estimator
119
+ for the combined Finite Element discretization and quadrature error
120
+ err =
121
+ ����
122
+ Z′
123
+ Z −
124
+ Z′
125
+ m,h
126
+ Zm,h
127
+ ����
128
+ Y
129
+ .
130
+ (4)
131
+ In the rest of this section we introduce the setting and we describe the two
132
+ problems of interest, namely the BIP and the OCP with entropic risk measure.
133
+ Section 2 and Section 3 are devoted to the QMC and the FEM a-posteriori
134
+ error analysis, respectively, and these results will be combined in Section 4. We
135
+ present numerical experiments in Section 5 and summary and conclusions in
136
+ Section 6.
137
+ 2
138
+
139
+ 1.1
140
+ Affine-Parametric Forward PDE
141
+ For brevity of presentation, we consider a model, linear elliptic PDE with
142
+ homogeneous Dirichlet boundary conditions. Given a bounded polygon D ⊆ R2
143
+ and a parameter sequence y ∈ U, s ∈ N, consider the following parametric,
144
+ linear, second order elliptic PDE in variational form: find u(·, y) ∈ X = H1
145
+ 0(D)
146
+ such that
147
+
148
+ D
149
+ a(x, y)∇xu(x, y) · ∇xv(x) dx =
150
+
151
+ D
152
+ f(x)v(x) dx
153
+ ∀v ∈ X.
154
+ (5)
155
+ We assume that a is affine-parametric, namely that we are given a family of
156
+ functions {ψj}j∈N0 ⊆ L∞(D) such that, with essinf ψ0 > κ > 0 and bj :=
157
+ 1
158
+ κ ∥ψj∥L∞(D), we have
159
+ a(x, y) = ψ0(x) +
160
+ s
161
+
162
+ j=1
163
+ yjψj,
164
+
165
+ j≥1
166
+ bj < 2.
167
+ (6)
168
+ Then essinf a(·, y) > essinf ψ0 − κ > 0 for all y ∈ U. By the Lax-Milgram
169
+ lemma, the parametric weak solution u(·, y) ∈ X is well defined for any f ∈
170
+ X ∗ = H−1(D), where ∗ denotes the topological dual. To justify the a-posteriori
171
+ QMC error analysis of Section 2, we will additionally require the summability
172
+ b = (bj)j≥1 ∈ ℓp(N)
173
+ for some p ∈ (0, 1/2].
174
+ (7)
175
+ For the FEM approximation, we consider conforming subspaces1 Xh ⊆ X h ∈
176
+ H ⊆ (0, ∞), dim(Xh) < ∞, that are linked to shape-regular, simplicial partitions
177
+ {Th}h∈H of D [6, Section 8]. Assume that the resulting spaces are nested and
178
+ conforming, that is Xh ⊆ Xh′ for any h, h′ ∈ H, h > h′ and that H accumulates
179
+ at 0. We construct the Galerkin discretizations uh(y) ∈ Xh of (5), by solving
180
+
181
+ D
182
+ a(x, y)∇xuh(x, y) · ∇xv(x) dx =
183
+
184
+ D
185
+ f(x)v(x) dx
186
+ ∀v ∈ Xh.
187
+ (8)
188
+ To simplify notation, we write a(y) = a(·, y) and u(y) = u(·, y) and we omit the
189
+ variable x for ∇x = ∇ and divx = div.
190
+ 1.2
191
+ Bayesian inverse problem (BIP)
192
+ Let X = {a ∈ L∞(D) : essinf a > 0} and fix f ∈ X ∗. Then, we can define the
193
+ data-to-solution map S : X → X for the forward problem (5). We also define the
194
+ observation functional O ∈ (X ∗)K, with a finite number K ∈ N of observations
195
+ (e.g. representing sensors), and a goal functional (also called quantity of interest)
196
+ G ∈ X ∗. We define the prior measure π0 to be the uniform distribution on U.
197
+ The observations O(S(a)) are assumed to be additionally subject to additive
198
+ observation noise η, which we assume to be centered Gaussian, i.e., η ∼ N(0, Γ)
199
+ for some known, nondegenerate covariance matrix Γ ∈ RK×K. In other words,
200
+ we assume given noisy observation data δ ∈ RK modeled as
201
+ δ = O(S(a)) + η ∈ L2
202
+ Γ(RK).
203
+ (9)
204
+ 1In practice, h either parametrizes the local mesh-size maxT ∈Th |T|1/2, for quasi-uniform
205
+ collections of partitions, or it relates to the refinement level in case of adaptive refinement [16].
206
+ 3
207
+
208
+ We consider the Bayesian inverse problem of recovering the expected value of
209
+ G(u), given observation data δ, that is Eπ0[G(u)|δ]. By Bayes’ theorem [19], the
210
+ posterior distribution πδ of y|δ is absolutely continuous with respect to π0 and
211
+ its Radon-Nikodym derivative with respect to the prior π0 is
212
+ dπδ
213
+ dπ0
214
+ (y) = Θ(y)
215
+ Z
216
+ ,
217
+ (10)
218
+ where Θ(y) := exp(− 1
219
+ 2|δ − O(S(a(y)))|2
220
+ Γ) = exp(− 1
221
+ 2|δ − O(u(y))|2
222
+ Γ) denotes the
223
+ likelihood with the observation noise covariance-weighted, data-to-observation
224
+ misfit, where |x|2
225
+ Γ := x⊤Γ−1x and Z is defined in (1). As Θ(y) > 0 for all y ∈ U,
226
+ Z > 0. In the present setting, Bayesian inversion amounts to the numerical
227
+ evaluation of the posterior mean
228
+ Eπδ[G(u)] = 1
229
+ Z
230
+
231
+ U
232
+ G(u(y))Θ(y) dy.
233
+ This is (1) upon setting Θ′(y) := G(u(y))Θ(y) and Y = R. Define the FE
234
+ solution operator Sh : X → Xh as the mapping Sha(y) = uh(y) via (8). The FE
235
+ approximations of Θ, Θ′ used in (3) are then Θh = exp(− 1
236
+ 2|δ − O(uh(y))|2
237
+ Γ) and
238
+ Θ′
239
+ h = G(uh(y))Θh(y), respectively.
240
+ 1.3
241
+ Optimal control with entropic risk measure (OCP)
242
+ Let Y = L2(D), assume a parameter independent target state ˆu ∈ Y and a
243
+ nonempty, closed and convex set X ⊆ Y of admissible controls. Throughout
244
+ the rest of the paper, we identify Y with its dual via Riesz representation and
245
+ write ⟨·, ·⟩ for the inner product on Y. Once the affine parametric diffusion
246
+ coefficient a(y) is fixed, (5) defines a linear solution operator Ly : Y → Y by
247
+ Lyf = ι ◦ u(y) for all f ∈ Y, where ι denotes the continuous embedding X ⊂ Y.
248
+ In particular, we view u(y) as a function of the right-hand side f of (5). For
249
+ a function Φ: U → R and some θ ∈ (0, ∞), the entropic risk measure [12] is
250
+ defined by
251
+ R(Φ) = 1
252
+ θ log
253
+ ��
254
+ U
255
+ exp(θΦ(y)) dy
256
+
257
+ .
258
+ (11)
259
+ The entropic risk is especially relevant when favoring a risk averse behavior [7].
260
+ We consider the following minimization problem, for fixed constants α1, α2 > 0
261
+ f ∗ := argmin
262
+ f∈X
263
+ J(f),
264
+ J(f) := R( α1
265
+ 2 ∥Lyf − ˆu∥2
266
+ Y) + α2
267
+ 2 ∥f∥2
268
+ Y .
269
+ (12)
270
+ Due to convexity of R and α2 > 0, the functional J is strongly convex so that
271
+ (12) is a well-posed minimization problem [9,12].
272
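A sample-average version $\mathcal{R}_m$ of (11) over a QMC point set appears in the discrete formulation below; a minimal sketch (the log-sum-exp shift is an implementation detail, not from the paper):

```python
import numpy as np

def entropic_risk(phi_vals, theta):
    """R_m(Phi) = (1/theta) * log( mean_j exp(theta * Phi(y_j)) ) over QMC
    nodes y_j; the max-shift keeps exp() from overflowing for large theta*Phi."""
    a = theta * np.asarray(phi_vals, dtype=float)
    amax = a.max()
    return (amax + np.log(np.mean(np.exp(a - amax)))) / theta
```

As $\theta \to 0$ this tends to the mean of $\Phi$ over the nodes, and as $\theta \to \infty$ to its maximum, which is the risk-averse limit mentioned above.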
Define the shorthand notation $\Phi_f(y) = \frac{\alpha_1}{2}\|L^y f - \hat u\|_Y^2$ and the adjoint state given by $q_f(y) = \alpha_1 L^y (L^y f - \hat u) \in Y$. Under the above conditions on $\mathcal{X}$, (12) is equivalent to the inequality $\langle J'(f^*), f - f^* \rangle \ge 0$ for all $f \in \mathcal{X}$, where, in analogy with [9, Lemma 3.6], the Fréchet derivative $J'(f) \in Y$ of $J$ at $f \in \mathcal{X}$ is
\[
J'(f) = \frac{1}{\int_U \exp(\theta \Phi_f(y)) \,\mathrm{d}y} \int_U \exp(\theta \Phi_f(y))\, q_f(y) \,\mathrm{d}y + \alpha_2 f. \tag{13}
\]
Next, we replace in (12) the integral over $U$ by QMC rules and the exact solution operator $L^y$ by the Galerkin solution operator $L^y_h : Y \to X_h$ defined by $L^y_h f = u_h(y)$. Then we obtain the discrete formulation
\[
f^*_{m,h} := \operatorname*{argmin}_{f \in \mathcal{X}} J_{m,h}(f), \qquad J_{m,h}(f) := \mathcal{R}_m\bigl( \tfrac{\alpha_1}{2}\|L^y_h f - \hat u\|_Y^2 \bigr) + \tfrac{\alpha_2}{2}\|f\|_Y^2, \tag{14}
\]
where $\mathcal{R}_m(\Phi) = \frac{1}{\theta}\log\bigl( \frac{1}{b^m} \sum_{y \in P_m} \exp(\theta\Phi(y)) \bigr)$ is again convex, due to the positivity of the QMC quadrature weights. The derivative $J'_{m,h}(f) \in Y$ of $J_{m,h}$ is analogous to (13), again replacing the integrals by sample averages over $P_m$, with $\Phi_{h,f}(y) = \frac{\alpha_1}{2}\|L^y_h f - \hat u\|_Y^2$ and $q_{h,f}(y) = \alpha_1 L^y_h(L^y_h f - \hat u) \in Y$. The next proposition recasts the error in the approximation $f^*_{m,h} \approx f^*$ to the form (4). Whenever it does not cause confusion, we will write $q(y) = q_{f^*_{m,h}}(y)$ and $q_h(y) = q_{h,f^*_{m,h}}(y)$.
Proposition 1. For $Y = L^2(D)$, assume $Z, Z'$ in (1) are defined by $\Theta(y) = \exp(\theta \Phi_{f^*_{m,h}}(y))$ and $\Theta'(y) = q(y)\Theta(y)$. Similarly, let $Z_{m,h}, Z'_{m,h}$ in (3) be defined by $\Theta_h(y) = \exp(\theta \Phi_{h,f^*_{m,h}}(y))$ and $\Theta'_h(y) = q_h(y)\Theta_h(y)$. Then
\[
\bigl\|f^* - f^*_{m,h}\bigr\|_Y \le \frac{1}{\alpha_2} \bigl\|J'(f^*_{m,h}) - J'_{m,h}(f^*_{m,h})\bigr\|_Y = \frac{1}{\alpha_2} \left\| \frac{Z'}{Z} - \frac{Z'_{m,h}}{Z_{m,h}} \right\|_Y. \tag{15}
\]
Proof. The inequalities $\langle J'(f^*), f - f^*\rangle \ge 0$ and $\langle J'_{m,h}(f^*_{m,h}), f - f^*_{m,h}\rangle \ge 0$ for all $f \in \mathcal{X}$ imply that $\langle J'_{m,h}(f^*_{m,h}) - J'(f^*), f^* - f^*_{m,h}\rangle \ge 0$. Moreover, strong convexity of $J$ yields the relation $\langle J'(f^*) - J'(f^*_{m,h}) - \alpha_2(f^* - f^*_{m,h}), f^* - f^*_{m,h}\rangle \ge 0$. Thus, we get
\[
\begin{aligned}
\alpha_2 \bigl\|f^* - f^*_{m,h}\bigr\|_Y^2 &\le \langle J'_{m,h}(f^*_{m,h}) - J'(f^*) + \alpha_2(f^* - f^*_{m,h}), f^* - f^*_{m,h}\rangle \\
&\le \langle J'_{m,h}(f^*_{m,h}) - J'(f^*_{m,h}), f^* - f^*_{m,h}\rangle \\
&\le \bigl\|J'_{m,h}(f^*_{m,h}) - J'(f^*_{m,h})\bigr\|_Y \bigl\|f^* - f^*_{m,h}\bigr\|_Y,
\end{aligned}
\]
which implies the inequality in (15). The equality follows by substituting $J'_{m,h}(f^*_{m,h}) = \frac{Z'_{m,h}}{Z_{m,h}} + \alpha_2 f^*_{m,h}$ and (13).
Remark 1. Note that $\Theta, \Theta'$ as defined in Proposition 1, and hence $Z, Z'$, also implicitly depend on $h$ via the discrete minimizer $f^*_{m,h}$. In particular, the exact minimizer $f^*$ does not appear in the right-hand side of (15). This fact will be crucial for the ensuing a-posteriori error estimation methodology.
2 A-posteriori QMC error estimation

We develop computable a-posteriori error estimators for the PDE discretization error and for the QMC quadrature error, the latter being reliable independently of the integration dimension $s$.
2.1 Parametric regularity

To leverage the results from [5,14] and overcome the curse of dimensionality, we need to quantify the regularity with respect to $y \in U$. To this end, we write $|\nu| = \sum_j \nu_j$, $\operatorname{supp}(\nu) = \{j : \nu_j \ne 0\}$ and, given smooth $F : U \to Y$ and $\nu \in \mathcal{F} := \{\nu \in \mathbb{N}_0^{\mathbb{N}} : |\operatorname{supp}(\nu)| < \infty\}$, we introduce the multi-index notation $\partial_y^\nu F(y) = \prod_{j \in \operatorname{supp}(\nu)} \partial_{y_j}^{\nu_j} F(y)$ for the derivatives with respect to $y$. Moreover, given $\beta = (\beta_j)_{j \in \mathbb{N}} \in \ell^p(\mathbb{N})$ for some $p \in (0,1)$, $n \in \mathbb{N}$ and $c > 0$, we define the SPOD weights $\gamma = (\gamma_{\mathfrak{u}})_{\mathfrak{u} \subseteq \{1:s\}}$ via
\[
\gamma_{\mathfrak{u}} = \sum_{\nu \in \{1:\alpha\}^{|\mathfrak{u}|}} (|\nu| + n)! \prod_{j \in \mathfrak{u}} c\, \beta_j^{\nu_j}. \tag{16}
\]
Definition 1. Let $Y$ be a separable Hilbert space, $\alpha \in \mathbb{N}$, $\alpha \ge 2$. We define the weighted unanchored Sobolev space $\mathcal{W}_{s,\alpha,\gamma}$ with dominating mixed smoothness as the completion of $C^\infty(U, Y)$ with respect to the norm
\[
\|F\|_{s,\alpha,\gamma} := \max_{\mathfrak{u} \subseteq \{1:s\}} \gamma_{\mathfrak{u}}^{-1} \sum_{\mathfrak{v} \subseteq \mathfrak{u}} \; \sum_{\nu_{\mathfrak{u}\setminus\mathfrak{v}} \in \{1:\alpha\}^{|\mathfrak{u}\setminus\mathfrak{v}|}} \int_{[-\frac12,\frac12]^{|\mathfrak{v}|}} \left\| \int_{[-\frac12,\frac12]^{s-|\mathfrak{v}|}} \partial_y^{(\nu_{\mathfrak{u}\setminus\mathfrak{v}},\, \alpha_{\mathfrak{v}})} F(y) \,\mathrm{d}y_{\{1:s\}\setminus\mathfrak{v}} \right\|_Y \mathrm{d}y_{\mathfrak{v}},
\]
where $\mu = (\nu_{\mathfrak{u}\setminus\mathfrak{v}}, \alpha_{\mathfrak{v}}) \in \mathcal{F}$ is such that $\mu_j = \nu_j$ if $j \in \mathfrak{u}\setminus\mathfrak{v}$, $\mu_j = \alpha$ if $j \in \mathfrak{v}$, and $\mu_j = 0$ otherwise. The inner integral is interpreted as a Bochner integral.
The relevance of the space $\mathcal{W}_{s,\alpha,\gamma}$ in our context is justified by the following result, which will be the starting point of our analysis. It is a consequence of the so-called component-by-component (CBC) construction described in [5,14], which takes as input $s, \alpha, \gamma$ and $m$ and returns a polynomial lattice $P_m$.
Theorem 1. Let $\alpha \in \mathbb{N}$, $\alpha \ge 2$, and let $F : U \to Y$ for some separable Hilbert space $Y$ be such that $F \in \mathcal{W}_{s,\alpha,\gamma}$ for some weights $\gamma$ of the form (16) with $\beta \in \ell^p(\mathbb{N})$, $p \in (0, 1/2]$. Then there exists a sequence $(P_m)_{m \in \mathbb{N}}$ of polynomial lattice rules such that, as $m \to \infty$,
\[
E_m(F) := \int_U F - \frac{1}{b^m} \sum_{y \in P_m} F(y) = \|F\|_{s,\alpha,\gamma} \, O(b^{-m}),
\]
as well as, for all $\varepsilon > 0$,
\[
E_m(F) = \frac{1}{b-1}\left( \frac{1}{b^m} \sum_{y \in P_m} F(y) - \frac{1}{b^{m-1}} \sum_{y \in P_{m-1}} F(y) \right) + \|F\|_{s,\alpha,\gamma} \, O(b^{-2m+\varepsilon}).
\]
Here the constants hidden in $O(\cdot)$ are independent of $s, m \in \mathbb{N}$ and $F$, but depend on $\varepsilon$ and $\gamma$. In addition, the point sets $(P_m)_{m \in \mathbb{N}}$ can be constructed explicitly by a CBC construction in $O(smb^m + s^2 b^m)$ operations.
Proof. For the case $Y = \mathbb{R}$, this is proved in [5, Theorem 4.1]. Otherwise, let $v \in Y$ be arbitrary with $\|v\|_Y = 1$. Then $\partial_y^\nu \langle F(y), v\rangle = \langle \partial_y^\nu F(y), v\rangle$ for all $\nu \in \mathcal{F}$ implies $\|\langle F, v\rangle\|_{s,\alpha,\gamma} \le \|F\|_{s,\alpha,\gamma}$. Hence we have reduced to the case $Y = \mathbb{R}$, and by linearity of the integral and of the quadrature we conclude.
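The second identity in Theorem 1 is what makes the a-posteriori approach work: the unknown quadrature error at level $m$ is, up to a higher-order term, the computable difference of two consecutive levels. A small numerical illustration of this difference-based error indicator, using SciPy's Sobol' sequence merely as a stand-in for a CBC-constructed polynomial lattice (an assumption for illustration; the theorem concerns the rules of [5,14]):

```python
import numpy as np
from scipy.stats import qmc

def qmc_level(m, b=2):
    """Average of a smooth integrand over b^m unscrambled Sobol' points."""
    pts = qmc.Sobol(d=2, scramble=False).random(b ** m)
    return float(np.mean(np.exp(pts[:, 0] * pts[:, 1])))

Q = {m: qmc_level(m) for m in (8, 9, 10)}
# Computable surrogate for the true error |I(f) - Q_10|, cf. Theorem 1:
err_indicator = abs(Q[10] - Q[9])
```

The reference value is $\int_{[0,1]^2} e^{x_1 x_2}\,\mathrm{d}x = \sum_{n\ge1} 1/(n \cdot n!) \approx 1.3179022$; the indicator decays together with the true error as $m$ grows.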
Corollary 1. Assume that the PDE (5) satisfies (6) and (7). Then we can construct polynomial lattice rules $(P_m)_{m \in \mathbb{N}}$ (depending on $b$) in $O(smb^m + s^2 b^m)$ operations such that for all $\varepsilon > 0$ it holds for the BIP that
\[
Z - Z_m = \frac{1}{b-1}(Z_m - Z_{m-1}) + O(b^{-2m+\varepsilon}), \qquad Z' - Z'_m = \frac{1}{b-1}(Z'_m - Z'_{m-1}) + O(b^{-2m+\varepsilon}),
\]
as $m \to \infty$. The hidden constants in $O(\cdot)$ are independent of $s, m \in \mathbb{N}$.
Proof. From [18, Section 4.1], [4, Theorem 3.1] and $\nu! := \prod_{j \in \operatorname{supp}(\nu)} \nu_j! \le |\nu|!$, we have that $\sup_s \|\Theta\|_{s,\alpha,\gamma} + \sup_s \|\Theta'\|_{s,\alpha,\gamma} < \infty$ for $\alpha = 1 + \lfloor 1/p \rfloor$ and some SPOD weights (16) defined by a sequence $\beta \sim b$ and $n = 0$. Hence we can apply Theorem 1 and conclude.
A similar result can be given for the OCP, based on the results in [8,9].

Corollary 2. Assume that the PDE (5) satisfies (6) and (7). Then we can construct polynomial lattice rules $(P_m)_{m \in \mathbb{N}}$ (depending on $b$) in $O(smb^m + s^2 b^m)$ operations such that for all $\varepsilon > 0$ it holds for the OCP that
\[
Z - Z_m = \frac{1}{b-1}(Z_m - Z_{m-1}) + O(b^{-2m+\varepsilon}), \qquad Z' - Z'_m = \frac{1}{b-1}(Z'_m - Z'_{m-1}) + O(b^{-2m+\varepsilon}),
\]
as $m \to \infty$. The hidden constants in $O(\cdot)$ are independent of $s, m \in \mathbb{N}$.

Proof. Applying the result of [8, Lemma 4.6] in [9, Theorems 5.4 and 5.6], we conclude that $\sup_s \|\Theta\|_{s,\alpha,\gamma} + \sup_s \|\Theta'\|_{s,\alpha,\gamma} < \infty$ for $\alpha = 1 + \lfloor 1/p \rfloor$ and some SPOD weights (16) defined by a sequence $\beta \sim b$ and $n = 2$.
2.2 Ratio error estimator

Proposition 2. Assume that the PDE (5) satisfies (6) and (7). Then, for the BIP and the OCP, we can construct polynomial lattice rules $(P_m)_{m \in \mathbb{N}}$ such that
\[
\frac{Z'}{Z} - \frac{Z'_m}{Z_m} = \frac{1}{bZ_m - Z_{m-1}} \left( \frac{Z_{m-1} Z'_m - Z_m Z'_{m-1}}{Z_m} \right) + O(b^{-2m+\varepsilon}) \tag{17}
\]
holds for all $\varepsilon > 0$ as $m \to \infty$. The hidden constant in $O(\cdot)$ is independent of $s, m$.
Proof. First, $Z - Z_m \to 0$ as $m \to \infty$ implies that $Z_m > Z/2 > 0$ for $m$ sufficiently large. Next, due to either Corollary 1 or 2, for all $\varepsilon > 0$ we have
\[
\begin{aligned}
\frac{Z'}{Z} - \frac{Z'_m}{Z_m} &= \frac{(Z' - Z'_m)Z_m - (Z - Z_m)Z'_m}{(Z - Z_m)Z_m + Z_m^2} \\
&= \frac{\frac{1}{b-1}\bigl(Z'_m - Z'_{m-1} + O(b^{-2m+\varepsilon})\bigr)Z_m - \frac{1}{b-1}\bigl(Z_m - Z_{m-1} + O(b^{-2m+\varepsilon})\bigr)Z'_m}{\frac{1}{b-1}\bigl(Z_m - Z_{m-1} + O(b^{-2m+\varepsilon})\bigr)Z_m + Z_m^2} \\
&= \frac{\frac{1}{b-1}(Z'_m - Z'_{m-1})}{\frac{1}{b-1}(bZ_m - Z_{m-1}) + O(b^{-2m+\varepsilon})} - \frac{\frac{1}{b-1}(Z_m - Z_{m-1})Z'_m}{\bigl(\frac{1}{b-1}(bZ_m - Z_{m-1}) + O(b^{-2m+\varepsilon})\bigr)Z_m} + O(b^{-2m+\varepsilon}).
\end{aligned}
\]
Clearly $A_m := \frac{1}{b-1}(bZ_m - Z_{m-1}) \to Z$ as $m \to \infty$, so that $A_m > Z/2 > 0$ for $m$ sufficiently large. Hence, by a geometric sum argument, collecting in $B_m$ all terms contained in $O(b^{-2m+\varepsilon})$ in the denominator, we obtain for $m \to \infty$ that
\[
\frac{1}{A_m + B_m} = \frac{1}{A_m} \sum_{k=0}^{\infty} (-1)^k \left( \frac{B_m}{A_m} \right)^k = \frac{1}{A_m} + O(b^{-2m+\varepsilon}).
\]
Therefore, as $m \to \infty$,
\[
\frac{Z'}{Z} - \frac{Z'_m}{Z_m} = \frac{\frac{1}{b-1}(Z'_m - Z'_{m-1})}{A_m} - \frac{\frac{1}{b-1}(Z_m - Z_{m-1})Z'_m}{A_m Z_m} + O(b^{-2m+\varepsilon}), \tag{18}
\]
which, upon rearranging the terms, is the claim.

Since $\frac{Z'}{Z} - \frac{Z'_m}{Z_m} = O(b^{-m}) \gg O(b^{-2m+\varepsilon})$, Proposition 2 states that
\[
E_{b^m}(\Theta', \Theta) := \frac{1}{bZ_m - Z_{m-1}} \left( \frac{Z_{m-1} Z'_m - Z_m Z'_{m-1}}{Z_m} \right) \tag{19}
\]
is a computable, asymptotically exact error estimator.
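The estimator (19) needs only the level-$m$ and level-$(m-1)$ QMC sums. A direct transcription (illustrative sketch; $Z'_m$ may be vector-valued, in which case NumPy broadcasting applies):

```python
def qmc_ratio_error_estimator(Zm, Zm_prev, Zpm, Zpm_prev, b=2):
    """E_{b^m}(Theta', Theta) of (19): estimates Z'/Z - Z'_m/Z_m from the
    QMC approximations at levels m and m-1 only.

    Zm, Zm_prev   : Z_m and Z_{m-1}
    Zpm, Zpm_prev : Z'_m and Z'_{m-1}
    b             : base of the polynomial lattice (b = 2 in the experiment)
    """
    return (Zm_prev * Zpm - Zm * Zpm_prev) / ((b * Zm - Zm_prev) * Zm)
```

If $Z'_m/Z_m = Z'_{m-1}/Z_{m-1}$, i.e. the ratio has already converged between the two levels, the numerator and hence the estimator vanish, consistent with an asymptotically exact indicator.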
3 A-posteriori FEM error estimation

In practice the parametric solution $u(y) \in X$ is not exactly available, and hence $Z_m, Z'_m$ are not computable. For any $y \in U$, $u(y)$ will be approximated by the corresponding Galerkin discretization $u_h(y) \in X_h$.
3.1 Ratio error estimator

Let $\Theta_h, \Theta'_h, Z_h, Z'_h$ be defined by replacing $u$ and $q$ by $u_h, q_h$ in the definitions of $\Theta, \Theta', Z, Z'$, respectively. Similarly, let $Z_{m,h}, Z'_{m,h}$ be defined by replacing $u$ and $q$ by $u_h, q_h$ in the definitions of $Z_m, Z'_m$, respectively.
Proposition 3. Assume there exist $\zeta_{m,h}, \zeta'_{m,h}$ such that, for some $c > 0$ independent of $h \in H$, we have $|Z_m - Z_{m,h}| \le c\zeta_{m,h}$ and $\|Z'_m - Z'_{m,h}\|_Y \le c\zeta'_{m,h}$. Assume further that $\zeta_{m,h} \to 0$ as $h \to 0$. Then there exists $h_0 \in H$ such that for all $h \le h_0$
\[
\left\| \frac{Z'_m}{Z_m} - \frac{Z'_{m,h}}{Z_{m,h}} \right\|_Y \le c\, \frac{Z_{m,h}\zeta'_{m,h} + \|Z'_{m,h}\|_Y\, \zeta_{m,h}}{Z_{m,h}^2 - c\,\zeta_{m,h} Z_{m,h}}.
\]
Proof. Due to the limit $\zeta_{m,h} \to 0$ there exists $h_1 \in H$ such that $Z_{m,h} \ge Z_m/2 > 0$ for all $h \le h_1$, $h \in H$. Therefore, we can pick $h_0 \in H$ such that $Z_{m,h} > c\zeta_{m,h}$ for all $h \le h_0$, $h \in H$. Thus
\[
\begin{aligned}
\left\| \frac{Z'_m}{Z_m} - \frac{Z'_{m,h}}{Z_{m,h}} \right\|_Y &= \left\| \frac{(Z'_m - Z'_{m,h})Z_{m,h} - (Z_m - Z_{m,h})Z'_{m,h}}{(Z_m - Z_{m,h})Z_{m,h} + Z_{m,h}^2} \right\|_Y \\
&\le \frac{\|Z'_m - Z'_{m,h}\|_Y\, Z_{m,h} + |Z_m - Z_{m,h}|\, \|Z'_{m,h}\|_Y}{Z_{m,h}^2 - |Z_m - Z_{m,h}|\, Z_{m,h}} \\
&\le c\, \frac{Z_{m,h}\zeta'_{m,h} + \|Z'_{m,h}\|_Y\, \zeta_{m,h}}{Z_{m,h}^2 - c\,\zeta_{m,h} Z_{m,h}}.
\end{aligned}
\]
Due to this result, we are left with the task of finding computable $\zeta_{m,h}, \zeta'_{m,h}$ satisfying the conditions $|Z_m - Z_{m,h}| \le c\zeta_{m,h}$ and $\|Z'_m - Z'_{m,h}\|_Y \le c\zeta'_{m,h}$ for some $c > 0$ independent of $m, h$. In the following sections we provide such error estimators for the BIP and the OCP.
3.2 FEM error estimators for BIP

For a finite collection of observation functionals $O = (O_1, \ldots, O_K) \in (X^*)^K$, define $\|O\|_{X^*} = \sqrt{\sum_{k=1}^K \|O_k\|_{X^*}^2}$. The starting point for estimating the FEM error is the following well-known result, see, e.g., [16]. Let $\{\mathcal{T}_h\}_{h \in H}$ be a family of shape-regular, simplicial meshes of $D$ and let $P^k(\mathcal{T}_h)$ be the set of piecewise polynomial functions on $\mathcal{T}_h$ of degree at most $k \in \mathbb{N}_0$ in each $T \in \mathcal{T}_h$. Let $E_h$ be the set of interior edges of all elements $T \in \mathcal{T}_h$. We assume that $X_h := P^1(\mathcal{T}_h) \cap X$, that $f \in L^2(D)$ and that $a(y) \in W^{1,\infty}(D)$. Let $h_T = |T|^{1/2}$ for $T \in \mathcal{T}_h$ and let $h_e$ be the length of an edge $e \in E_h$. Then we define the a-posteriori error estimator
\[
\eta_{y,h}^2 := \sum_{T \in \mathcal{T}_h} \Bigl( h_T^2\, \|f + \operatorname{div}(a(y)\nabla u_h(y))\|_{L^2(T)}^2 + \frac{1}{2} \sum_{e \subseteq \partial T,\, e \in E_h} h_e\, \|\llbracket a(y)\nabla u_h(y) \rrbracket\|_{L^2(e)}^2 \Bigr). \tag{20}
\]
By [16, Theorem 6.3] there exists some $c^* > 0$, depending only on $D$ and the shape-regularity constant of $\{\mathcal{T}_h\}_{h \in H}$, and in particular independent of $y \in U$ and $h \in H$, such that
\[
\|u(y) - u_h(y)\|_X \le c^* \eta_{y,h}. \tag{21}
\]
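Given the elementwise interior residuals and edge flux jumps (assumed precomputed by a FE backend; the names below are illustrative), the estimator (20) is a plain weighted aggregation:

```python
import numpy as np

def residual_estimator(hT, elem_res, edges_of_elem, he, jump):
    """eta_{y,h} from (20):
    eta^2 = sum_T [ hT^2 * ||f + div(a grad u_h)||_{L2(T)}^2
                    + 1/2 * sum_{e in dT} he * ||[[a grad u_h]]||_{L2(e)}^2 ].

    hT            : element sizes h_T = |T|^{1/2}
    elem_res      : L2(T) norms of the interior residual, one per element
    edges_of_elem : per element, indices of its interior edges
    he, jump      : interior edge lengths and L2(e) norms of the flux jump
    """
    eta2 = 0.0
    for T, edges in enumerate(edges_of_elem):
        eta2 += hT[T] ** 2 * elem_res[T] ** 2
        # each interior edge is visited from both neighbouring elements;
        # the 1/2 factor splits its contribution between them
        eta2 += 0.5 * sum(he[e] * jump[e] ** 2 for e in edges)
    return float(np.sqrt(eta2))
```

The same aggregation with weights $h_T^4, h_e^3$ gives the $L^2$-type estimators appearing below.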
For the important special case $O \in (L^2(D))^K$ and $G \in L^2(D)$ we may derive sharper estimates. To simplify the presentation, we assume here that the physical domain $D \subseteq \mathbb{R}^2$ is a convex polygon (see also Remark 3 below), and introduce the $L^2(D)$-residual estimator
\[
\tilde\eta_{y,h}^2 := \sum_{T \in \mathcal{T}_h} \Bigl( h_T^4\, \|f + \operatorname{div}(a(y)\nabla u_h(y))\|_{L^2(T)}^2 + \frac{1}{2} \sum_{e \subseteq \partial T,\, e \in E_h} h_e^3\, \|\llbracket a(y)\nabla u_h(y) \rrbracket\|_{L^2(e)}^2 \Bigr). \tag{22}
\]
The additional factors $h_T, h_e$ are derived from a standard duality argument, see, e.g., [20, Section 1.11]. Then there exists some $c^* > 0$ depending only on $D$ and the shape-regularity constant of $\{\mathcal{T}_h\}_{h \in H}$, and in particular independent of $y \in U$ and $h \in H$, such that
\[
\|u(y) - u_h(y)\|_{L^2(D)} \le c^* \tilde\eta_{y,h}. \tag{23}
\]
Lemma 1. Fix a regular mesh $\mathcal{T}_h$ of simplices in $D$ and a parameter vector $y \in U$, and assume (21). Then
\[
|\Theta(y) - \Theta_h(y)| \le \Theta_h(y)\bigl(e^{\chi_{y,h}} - 1\bigr) =: \zeta_{y,h}
\]
holds for
\[
\chi_{y,h} := \|\Gamma^{-1/2} O\|_{X^*} \Bigl( |\delta - O(u_h(y))|_\Gamma + \tfrac{1}{2} \|\Gamma^{-1/2} O\|_{X^*}\, c^* \eta_{y,h} \Bigr) c^* \eta_{y,h}.
\]
Furthermore, if $O \in (L^2(D))^K$ and $G \in L^2(D)$, then
\[
|\Theta(y) - \Theta_h(y)| \le \Theta_h(y)\bigl(e^{\tilde\chi_{y,h}} - 1\bigr) =: \tilde\zeta_{y,h}
\]
holds for
\[
\tilde\chi_{y,h} := \|\Gamma^{-1/2} O\|_{L^2(D)} \Bigl( |\delta - O(u_h(y))|_\Gamma + \tfrac{1}{2} \|\Gamma^{-1/2} O\|_{L^2(D)}\, c^* \tilde\eta_{y,h} \Bigr) c^* \tilde\eta_{y,h}.
\]
Proof. We define $\Delta_h(y) := -\frac{1}{2}|\delta - O(u(y))|_\Gamma^2 + \frac{1}{2}|\delta - O(u_h(y))|_\Gamma^2$ to obtain
\[
|\Theta(y) - \Theta_h(y)| = |\Theta_h(y)(e^{\Delta_h(y)} - 1)| \le \Theta_h(y)(e^{|\Delta_h(y)|} - 1).
\]
The first part of the claim now follows with (21) and
\[
\begin{aligned}
|\Delta_h(y)| &\le \tfrac{1}{2} \|\Gamma^{-1/2} O\|_{X^*}\, |2\delta - O(u(y) + u_h(y))|_\Gamma\, \|u(y) - u_h(y)\|_X \\
&\le \|\Gamma^{-1/2} O\|_{X^*}\, |\delta - O(u_h(y))|_\Gamma\, \|u(y) - u_h(y)\|_X + \tfrac{1}{2} \|\Gamma^{-1/2} O\|_{X^*}^2\, \|u(y) - u_h(y)\|_X^2.
\end{aligned}
\]
The second part of the claim follows analogously by replacing $X$ by $L^2(D)$ and using (23) instead of (21).
Lemma 2. Fix a regular mesh of simplices $\mathcal{T}_h$ and $y \in U$, and assume (21). Then there exists a constant $c^* > 0$ such that for all $y \in U$ and $h \in H$
\[
|\Theta'(y) - \Theta'_h(y)| \le \|G\|_{X^*} \bigl( c^* \eta_{y,h}\, \Theta_h(y) e^{\chi_{y,h}} + \zeta_{y,h}\, \|u_h(y)\|_X \bigr) =: \zeta'_{y,h}.
\]
Furthermore, if $O \in (L^2(D))^K$ and $G \in L^2(D)$, then
\[
|\Theta'(y) - \Theta'_h(y)| \le \|G\|_{L^2(D)} \bigl( c^* \tilde\eta_{y,h}\, \Theta_h(y) e^{\tilde\chi_{y,h}} + \tilde\zeta_{y,h}\, \|u_h(y)\|_{L^2(D)} \bigr) =: \tilde\zeta'_{y,h}.
\]
Proof. Since $\Theta'(y) = G(u(y))\Theta(y) = G(u(y)\Theta(y))$, we get
\[
\begin{aligned}
|\Theta'(y) - \Theta'_h(y)| &\le \|G\|_{X^*}\, \|u(y)\Theta(y) - u_h(y)\Theta_h(y)\|_X \\
&\le \|G\|_{X^*} \bigl( \|u(y) - u_h(y)\|_X\, \Theta(y) + \|u_h(y)\|_X\, |\Theta(y) - \Theta_h(y)| \bigr),
\end{aligned}
\]
and hence the claim follows with Lemma 1, since $\Theta(y) \le \Theta_h(y)e^{\chi_{y,h}}$. The second part of the claim follows analogously from the second part of Lemma 1.
Remark 2. We remark that the estimates in the proof of Lemma 2 are conservative: we used that $K > 1$ in Lemma 1. For $K = 1$, i.e. for a single observation functional, goal-oriented AFEM results from [1] and the references there can be used to obtain sharper a-posteriori error bounds.
We can now define $\zeta_{m,h}$ by averaging $\zeta_{y,h}$ over $y \in P_m$, that is, $\zeta_{m,h} := \frac{1}{b^m}\sum_{y \in P_m} \zeta_{y,h}$. Then we get from Lemma 1
\[
|Z_m - Z_{m,h}| \le \frac{1}{b^m} \sum_{y \in P_m} |\Theta(y) - \Theta_h(y)| \le \zeta_{m,h}. \tag{24}
\]
Analogously, with $\zeta'_{m,h} = \frac{1}{b^m}\sum_{y \in P_m} \zeta'_{y,h}$, Lemma 2 implies
\[
\bigl|Z'_m - Z'_{m,h}\bigr| \le \frac{1}{b^m} \sum_{y \in P_m} |\Theta'(y) - \Theta'_h(y)| \le \zeta'_{m,h}. \tag{25}
\]
In particular, if we construct $\mathcal{T}_h$ such that $\eta_{y,h} \to 0$ as $h \to 0$ for all $y \in U$, then (24) and (25) verify the hypotheses of Proposition 3 with $c = 1$.
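In code, (24) amounts to one average over the lattice points; a sketch with $\Theta_h(y)$ and $\chi_{y,h}$ assumed already evaluated (illustrative names):

```python
import numpy as np

def zeta_mh(theta_h, chi):
    """zeta_{m,h} = (1/b^m) sum_{y in P_m} Theta_h(y) * (exp(chi_{y,h}) - 1),
    the computable bound (24) for |Z_m - Z_{m,h}| from Lemma 1.
    np.expm1 evaluates exp(chi) - 1 accurately for small chi (fine meshes)."""
    return float(np.mean(np.asarray(theta_h) * np.expm1(chi)))
```

By Lemma 1, whenever $|\log\Theta(y) - \log\Theta_h(y)| \le \chi_{y,h}$ pointwise, the averaged value bounds $|Z_m - Z_{m,h}|$; the same averaging applied to $\zeta'_{y,h}$ gives (25).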
3.3 FEM error estimators for OCP with entropic risk

In the case of the OCP we require error estimates for the parametric state at the discrete optimal control $f^*_{m,h}$, i.e. $u(y) = L^y f^*_{m,h}$, and for the corresponding adjoint state $q(y) = \alpha_1 L^y (L^y f^*_{m,h} - \hat u)$. The error will be measured in the $L^2(D)$-norm. Again we assume that $X_h := P^1(\mathcal{T}_h) \cap X$, that $f \in L^2(D)$ and $a(y) \in W^{1,\infty}(D)$, and that $D \subseteq \mathbb{R}^2$ is a convex polygon.
Lemma 3. Fix a mesh $\mathcal{T}_h$ and $y \in U$ and impose (23). With the notation of Proposition 1 and $\tilde\eta_{y,h}$ defined as in (22) (with $f$ replaced by the discrete optimal control $f^*_{m,h}$), we have
\[
|\Theta(y) - \Theta_h(y)| \le \Theta_h(y)(e^{\chi_{y,h}} - 1) =: \zeta_{y,h},
\]
where
\[
\chi_{y,h} := \theta c^* \Bigl( \frac{\alpha_1}{2} \tilde\eta_{y,h}^2 + \alpha_1 \bigl\|L^y_h f^*_{m,h} - \hat u\bigr\|_{L^2(D)} \tilde\eta_{y,h} \Bigr).
\]
Proof. By twofold application of the triangle inequality we have
\[
\begin{aligned}
\bigl| \Phi_{f^*_{m,h}}(y) - \Phi_{h,f^*_{m,h}}(y) \bigr| &\le \frac{\alpha_1}{2} \bigl\|(L^y - L^y_h) f^*_{m,h}\bigr\|_{L^2(D)}^2 + \alpha_1 \bigl\|L^y_h f^*_{m,h} - \hat u\bigr\|_{L^2(D)} \bigl\|(L^y - L^y_h) f^*_{m,h}\bigr\|_{L^2(D)} \\
&\le c^* \Bigl( \frac{\alpha_1}{2} \tilde\eta_{y,h}^2 + \alpha_1 \bigl\|L^y_h f^*_{m,h} - \hat u\bigr\|_{L^2(D)} \tilde\eta_{y,h} \Bigr) =: c^* \xi_{y,h}.
\end{aligned}
\]
Note that $\xi_{y,h}$ is computable due to Remark 1. Then it follows with $\Delta_h(y) := \theta\bigl(\Phi_{f^*_{m,h}}(y) - \Phi_{h,f^*_{m,h}}(y)\bigr)$ that
\[
|\Theta(y) - \Theta_h(y)| = \bigl| \Theta_h(y)(e^{\Delta_h(y)} - 1) \bigr| \le \Theta_h(y)(e^{|\Delta_h(y)|} - 1).
\]
Using a residual estimator of the form (22) for the adjoint problem yields
\[
\|q(y) - q_h(y)\|_{L^2(D)} \le 2 \max(c^*, 1)\, c^*\, \tilde{\tilde\eta}_{y,h}, \tag{26}
\]
where
\[
\tilde{\tilde\eta}_{y,h}^2 := \alpha_1^2 \sum_{T \in \mathcal{T}_h} \Bigl( h_T^4\, \|u_h(y) - \hat u + \operatorname{div}(a(y)\nabla q_h(y))\|_{L^2(T)}^2 + \frac{1}{2} \sum_{e \subseteq \partial T,\, e \in E_h} h_e^3\, \|\llbracket a(y)\nabla q_h(y) \rrbracket\|_{L^2(e)}^2 \Bigr) + \Bigl( \max_{T \in \mathcal{T}_h} h_T^4 \Bigr) \tilde\eta_{y,h}^2. \tag{27}
\]
Lemma 4. Let $Y = L^2(D)$. Fix a mesh $\mathcal{T}_h$ and $y \in U$ and impose (23) and (26). With the notation of Proposition 1, we have
\[
\|\Theta'(y) - \Theta'_h(y)\|_Y \le \zeta_{y,h}\, \|q_h(y)\|_Y + 2c^*\, \Theta_h(y) e^{\chi_{y,h}}\, \tilde{\tilde\eta}_{y,h} =: \zeta'_{y,h}.
\]
Proof. As in the proof of Lemma 2, we get by $\Theta(y) \le \Theta_h(y)e^{\chi_{y,h}}$ that
\[
\begin{aligned}
\|q(y)\Theta(y) - q_h(y)\Theta_h(y)\|_Y &\le |\Theta(y) - \Theta_h(y)|\, \|q_h(y)\|_Y + \Theta(y)\, \|q(y) - q_h(y)\|_Y \\
&\le \zeta_{y,h}\, \|q_h(y)\|_Y + 2c^*\, \Theta_h(y) e^{\chi_{y,h}}\, \tilde{\tilde\eta}_{y,h}.
\end{aligned}
\]
Remark 3. If $D \subseteq \mathbb{R}^2$ is a non-convex polygon, the reliability assumption (23) and the corresponding definitions (22), (27) of $\tilde\eta_{y,h}, \tilde{\tilde\eta}_{y,h}$ must be adapted by using weighted $L^2$ norms, with weights near the re-entrant corners. We refer to [21, Theorem 3.1] for a precise result in the case of the Poisson equation.
4 Combined QMC-FEM estimator

In view of Propositions 2 and 3 we employ the computable a-posteriori estimator
\[
\mathrm{EST}_{b^m,h} := \|E_{b^m}(\Theta'_h, \Theta_h)\|_Y + \frac{Z_{m,h}\zeta'_{m,h} + \|Z'_{m,h}\|_Y\, \zeta_{m,h}}{Z_{m,h}^2 - \zeta_{m,h} Z_{m,h}}. \tag{28}
\]
Note that the QMC error estimator $\|E_{b^m}(\Theta', \Theta)\|_Y$ derived from Proposition 2 is itself approximated by the computable expression $\|E_{b^m}(\Theta'_h, \Theta_h)\|_Y$. In the next proposition we make precise that the additional error committed due to this extra approximation is of higher asymptotic order as $m \to \infty$. We equip the set $C^0(U, Y)$ with the norm $\|F\|_\infty = \sup_{y \in U} \|F(y)\|_Y$.
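Assembled from the pieces introduced above, (28) reads in code (illustrative sketch; the inputs are assumed already computed by the QMC and FEM routines):

```python
def combined_estimator(E_qmc_norm, Zmh, Zpmh_norm, zeta, zeta_prime):
    """EST_{b^m,h} of (28): QMC part plus FEM part.

    E_qmc_norm : ||E_{b^m}(Theta'_h, Theta_h)||_Y, cf. (19) with the
                 discrete quantities
    Zmh        : Z_{m,h};   Zpmh_norm : ||Z'_{m,h}||_Y
    zeta, zeta_prime : zeta_{m,h}, zeta'_{m,h} from Sections 3.2/3.3
    Requires Z_{m,h} > zeta_{m,h}, i.e. h small enough (Proposition 3).
    """
    fem = (Zmh * zeta_prime + Zpmh_norm * zeta) / (Zmh ** 2 - zeta * Zmh)
    return E_qmc_norm + fem
```

The two summands are exactly the quantities used as separate stopping criteria in the numerical experiment of Section 5.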
Proposition 4. Fix a family of regular meshes $\{\mathcal{T}_h\}_{h \in H}$ such that, for some $\tilde C > 0$ independent of $s \in \mathbb{N}$, $h \in H$ and some SPOD weights (16), it holds that
\[
\max\bigl( \|\Theta\|_{s,\alpha,\gamma}, \|\Theta'\|_{s,\alpha,\gamma}, \|\Theta_h\|_{s,\alpha,\gamma}, \|\Theta'_h\|_{s,\alpha,\gamma} \bigr) \le \tilde C. \tag{29}
\]
Assume that the spaces $\{X_h\}_h$ are contained in $X$ and that they are selected so that $\|\Theta - \Theta_h\|_\infty \to 0$ as $h \to 0$. Then we can construct a sequence of polynomial lattices $(P_m)_{m \in \mathbb{N}}$ in $O(smb^m + s^2b^m)$ operations such that, for some $h_0 \in H$ and some constant $C > 0$ (independent of $m, h, s$), we have for any $h < h_0$ that
\[
\|E_{b^m}(\Theta', \Theta) - E_{b^m}(\Theta'_h, \Theta_h)\|_Y \le C b^{-m} \bigl( \|\Theta - \Theta_h\|_\infty + \|\Theta' - \Theta'_h\|_\infty + \|\Theta - \Theta_h\|_{s,\alpha,\gamma} + \|\Theta' - \Theta'_h\|_{s,\alpha,\gamma} \bigr).
\]
Proof. Throughout the proof, $C > 0$ is a generic constant independent of $m, h, s$. We compute the difference of the numerators
\[
\begin{aligned}
\Delta_1 &:= Z_{m-1}Z'_m - Z_m Z'_{m-1} - Z_{m-1,h}Z'_{m,h} + Z_{m,h}Z'_{m-1,h} \\
&= -(Z_m - Z_{m-1})(Z'_m - Z'_{m,h}) - (Z_m - Z_{m,h} - Z_{m-1} + Z_{m-1,h})Z'_{m,h} \\
&\quad + (Z_m - Z_{m,h})(Z'_m - Z'_{m-1}) + Z_{m,h}(Z'_m - Z'_{m,h} - Z'_{m-1} + Z'_{m-1,h}).
\end{aligned}
\]
We have $|Z_m - Z_{m,h}| \le \|\Theta - \Theta_h\|_\infty$ and $\|Z'_m - Z'_{m,h}\|_Y \le \|\Theta' - \Theta'_h\|_\infty$. From Theorem 1, we know that $Z_m = Z_{m-1} + \|\Theta\|_{s,\alpha,\gamma}O(b^{-m})$ and $Z'_m = Z'_{m-1} + \|\Theta'\|_{s,\alpha,\gamma}O(b^{-m})$ as $m \to \infty$, with hidden constants in $O(\cdot)$ independent of $s, m, h$. Furthermore, $Z_m - Z_{m,h} - Z_{m-1} + Z_{m-1,h} = \|\Theta - \Theta_h\|_{s,\alpha,\gamma}O(b^{-m})$ and $Z'_m - Z'_{m,h} - Z'_{m-1} + Z'_{m-1,h} = \|\Theta' - \Theta'_h\|_{s,\alpha,\gamma}O(b^{-m})$ also follow from Theorem 1. Therefore, we have
\[
\|\Delta_1\|_Y \le C b^{-m} \bigl( \|\Theta - \Theta_h\|_\infty + \|\Theta' - \Theta'_h\|_\infty + \|\Theta - \Theta_h\|_{s,\alpha,\gamma} + \|\Theta' - \Theta'_h\|_{s,\alpha,\gamma} \bigr).
\]
Next, we define $T_1 := Z_{m-1,h}Z'_{m,h} - Z_{m,h}Z'_{m-1,h}$ and obtain the estimate $\|T_1\|_Y \le C(\|\Theta_h\|_{s,\alpha,\gamma} + \|\Theta'_h\|_{s,\alpha,\gamma})b^{-m}$. Moreover,
\[
\begin{aligned}
\Delta_2 &:= (bZ_m - Z_{m-1})Z_m - (bZ_{m,h} - Z_{m-1,h})Z_{m,h} \\
&= \bigl( b(Z_m - Z_{m,h}) + (Z_{m-1,h} - Z_{m-1}) \bigr) Z_m + (bZ_{m,h} - Z_{m-1,h})(Z_m - Z_{m,h})
\end{aligned}
\]
gives $|\Delta_2| \le C\|\Theta - \Theta_h\|_\infty$. Next, we observe that $T_2 := (bZ_{m,h} - Z_{m-1,h})Z_{m,h}$ is bounded from below away from $0$ for $h$ sufficiently small. Therefore, for $h$ sufficiently small we apply the elementary inequality
\[
\left\| \frac{T_1 + \Delta_1}{T_2 + \Delta_2} - \frac{T_1}{T_2} \right\|_Y \le \max(\|\Delta_1\|_Y, \|T_1\Delta_2\|_Y)\, \frac{1 + |T_2|}{|T_2|\,(|T_2| - |\Delta_2|)},
\]
valid for $T_1, \Delta_1 \in Y$ and $T_2, \Delta_2 \in \mathbb{R}$ with $|\Delta_2| < |T_2|$, which is satisfied since $|\Delta_2| \to 0$ as $h \to 0$. Combining all these observations, we obtain the claim.
Theorem 2. For either the BIP or the OCP, assume that $D \subseteq \mathbb{R}^2$ is a convex polygon and that the PDE (5) satisfies (6), (7), $f \in L^2(D)$ and $b' \in \ell^p(\mathbb{N})$, $p \in (0, 1/2]$, with $b'_j = \|\psi_j\|_{W^{1,\infty}(D)}$. Let $X_h = P^1(\mathcal{T}_h) \cap X$ for a family of shape-regular meshes $\mathcal{T}_h$ with $h = \max_{T \in \mathcal{T}_h} h_T$. Then we can construct polynomial lattices $(P_m)_{m \in \mathbb{N}}$ such that the estimator $\mathrm{EST}_{b^m,h}$ in (28) satisfies
\[
\left\| \frac{Z'}{Z} - \frac{Z'_{m,h}}{Z_{m,h}} \right\|_Y \le \mathrm{EST}_{b^m,h} + O(b^{-2m+\varepsilon} + b^{-m}h)
\]
for any $\varepsilon > 0$ as $m \to \infty$, $h \to 0$. The constant in $O(\cdot)$ is independent of $s, m$ and $h$, but depends on $\varepsilon$.
Proof. Since $b_j \le b'_j$, from either Corollary 1 or 2 there exist SPOD weights $\gamma'$ as in (16) with $\beta' \sim b'$ such that $\sup_s \|\Theta\|_{s,\alpha,\gamma'} + \|\Theta'\|_{s,\alpha,\gamma'} < \infty$. Combining (20) with Lemma 1 and $h \to 0$, we have $\zeta_{m,h} \to 0$ in the BIP case for any $m \in \mathbb{N}$. Similarly, combining (22) with Lemma 3 yields the same observation in the OCP case. Therefore we can apply Propositions 2 and 3 to conclude that we can construct polynomial lattices so that, as $m \to \infty$,
\[
\left\| \frac{Z'}{Z} - \frac{Z'_{m,h}}{Z_{m,h}} \right\|_Y \le \|E_{b^m}(\Theta', \Theta)\|_Y + \frac{Z_{m,h}\zeta'_{m,h} + \|Z'_{m,h}\|_Y\, \zeta_{m,h}}{Z_{m,h}^2 - \zeta_{m,h}Z_{m,h}} + O(b^{-2m+\varepsilon}).
\]
Next, we say that $\rho \in (1,\infty)^{\mathbb{N}}$ is $(b', \varepsilon)$-admissible if $\sum_{j \ge 1} (\rho_j - 1)b'_j \le \varepsilon$, see [4]. Then we define $T_{b',\varepsilon} = \bigcup_{\rho\, (b',\varepsilon)\text{-adm.}} \{y \in \mathbb{C}^s : \operatorname{dist}(y_j, [-\tfrac12, \tfrac12]) \le \rho_j - 1\}$. Following the computations in [2, Theorem 4.1] yields $h_0 \in H$ and $\varepsilon > 0$ sufficiently small such that for all $h \le h_0$, $h \in H$, we have $\sup_{y \in T_{b',\varepsilon}} |\Theta(y) - \Theta_h(y)| + \|\Theta'(y) - \Theta'_h(y)\|_Y \le Ch$. By [4, Theorem 3.1], this implies that, for a constant $C$ independent of $h, s$,
\[
\|\Theta - \Theta_h\|_\infty + \|\Theta' - \Theta'_h\|_\infty + \|\Theta - \Theta_h\|_{s,\alpha,\gamma'} + \|\Theta' - \Theta'_h\|_{s,\alpha,\gamma'} \le Ch.
\]
Thus, (29) and $\|\Theta - \Theta_h\|_\infty \to 0$ hold, and we apply Proposition 4 to conclude.
5 Numerical experiment

We consider the PDE (5) on the physical domain $D := (0,1)^2$, with $f \equiv 10$ and parametric diffusion coefficient given by
\[
a(x, y) = \frac{1}{2} + \sum_{j=1}^{16} \frac{y_j}{(k_{j,1}^2 + k_{j,2}^2)^2} \sin(k_{j,1}x_1)\sin(k_{j,2}x_2).
\]
The pairs $(k_{j,1}, k_{j,2}) \in \mathbb{N}^2$ are defined by an ordering of $\mathbb{N}^2$ such that $k_{j,1}^2 + k_{j,2}^2 \le k_{j+1,1}^2 + k_{j+1,2}^2$ for $j \in \mathbb{N}$; the ordering is arbitrary when equality holds. We investigate a BIP with observation functional $O = (O_1, \ldots, O_4) \in (L^2(D))^4$, given by $O_k(v) := \frac{1}{0.01}\int_{I_k} v \,\mathrm{d}x$ for $v \in L^2(D)$ and $k = 1, \ldots, 4$, where $I_1 := [0.1, 0.2]^2$, $I_2 := [0.1, 0.2] \times [0.8, 0.9]$, $I_3 := [0.8, 0.9] \times [0.1, 0.2]$, $I_4 := [0.8, 0.9]^2$. We draw a (random) sample of $a$ to compute the "ground truth" observations $O(S(a))$ on a sequence of regular FE meshes of triangles obtained by uniform refinement, with 525,313 degrees of freedom (dofs). We add random noise $\eta \sim \mathcal{N}(0, \sigma^2 I_4)$ to the observations, where $\sigma$ is set to 10% of the average of $O(S(a)) \in \mathbb{R}^4$. The realized synthetic data is then given by $\delta = (0.5205, 0.5037, 0.5443, 0.4609)^\top$.
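The coefficient above is straightforward to reproduce; a sketch of the wavenumber ordering and of $a(x,y)$ (ties in the ordering broken lexicographically here, which is one of the admissible choices):

```python
import numpy as np
from itertools import product

def wavenumber_pairs(n=16, kmax=30):
    """First n pairs (k1, k2) in N^2 ordered by k1^2 + k2^2; ties are broken
    lexicographically (the ordering is arbitrary on ties)."""
    pairs = sorted(product(range(1, kmax), repeat=2),
                   key=lambda k: (k[0] ** 2 + k[1] ** 2, k))
    return pairs[:n]

def diffusion(x1, x2, y):
    """a(x, y) = 1/2 + sum_j y_j / (k_{j,1}^2 + k_{j,2}^2)^2
                               * sin(k_{j,1} x1) * sin(k_{j,2} x2)."""
    a = 0.5
    for yj, (k1, k2) in zip(y, wavenumber_pairs(len(y))):
        a += yj / (k1 ** 2 + k2 ** 2) ** 2 * np.sin(k1 * x1) * np.sin(k2 * x2)
    return a
```

Since $\sum_j (k_{j,1}^2 + k_{j,2}^2)^{-2} < 1$, the coefficient stays uniformly positive for $y \in [-\frac12, \frac12]^{16}$, so the ellipticity assumption (6) holds.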
Our aim is to approximate $\mathbb{E}^{\pi^\delta}[G(u)]$ by the ratio estimator $\frac{Z'_{m,h}}{Z_{m,h}}$, where $G \in L^2(D)$ is given by $G(v) := \frac{1}{0.5}\int_{[0.25, 0.75]^2} v \,\mathrm{d}x$ for $v \in L^2(D)$. The FE mesh and the polynomial lattice rule, which eventually determine $h$ and $m$, are refined successively based on the combined estimator in (28). For tolerances $\tau_{\mathrm{FEM}}, \tau_{\mathrm{QMC}} > 0$, we start from an initial FE mesh of $D$ that is uniformly refined until the stopping criterion
\[
\frac{Z_{m,h}\zeta'_{m,h} + \|Z'_{m,h}\|_Y\, \zeta_{m,h}}{Z_{m,h}^2 - \zeta_{m,h}Z_{m,h}} \le \tau_{\mathrm{FEM}}
\]
is met. Thereafter, we increase the number $b^m$ of lattice points until $|E_{b^m}(\Theta'_h, \Theta_h)| \le \tau_{\mathrm{QMC}}$ holds. We initialize with a FE mesh with 41 dofs and $b^{m_0}$ QMC points with base $b = 2$ and $m_0 = 2$, and set the tolerances to $\tau_{\mathrm{FEM}} = \tau_{\mathrm{QMC}} = 2^{-6}$. To assess the total realized error, we compute a reference solution $\frac{Z'_{\mathrm{ref}}}{Z_{\mathrm{ref}}}$ by a multilevel Monte Carlo ratio estimator, see [17], and report the absolute error $\bigl| \frac{Z'_{m,h}}{Z_{m,h}} - \frac{Z'_{\mathrm{ref}}}{Z_{\mathrm{ref}}} \bigr|$. The reference estimator uses 6 refinement levels with 545/525,313 dofs on the coarsest/finest level, respectively, and uniform (pseudo-)random numbers $y$. The number of samples is adjusted to balance the statistical error and the discretization bias on each level. The experiment has been implemented in MATLAB using the MooAFEM library [11] for the FE discretization. All arising linear systems are solved directly by the backslash operator in MATLAB.
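The two-stage refinement just described can be summarized by the following driver (schematic sketch; `solve_and_estimate` and `refine` are placeholders for a FE backend such as the MooAFEM setup used here):

```python
def adaptive_qmc_fem(solve_and_estimate, refine, mesh0, tol_fem, tol_qmc, m0=2):
    """Two-stage a-posteriori refinement of the experiment:
    1) refine the FE mesh until the FEM part of (28) is below tol_fem,
    2) then raise the QMC level m (i.e. use b^m points) until the QMC
       estimator (19) is below tol_qmc.
    solve_and_estimate(mesh, m) -> (fem_estimate, qmc_estimate)."""
    mesh, m = mesh0, m0
    fem_est, qmc_est = solve_and_estimate(mesh, m)
    while fem_est > tol_fem:
        mesh = refine(mesh)                     # uniform refinement
        fem_est, qmc_est = solve_and_estimate(mesh, m)
    while qmc_est > tol_qmc:
        m += 1                                  # next polynomial lattice level
        fem_est, qmc_est = solve_and_estimate(mesh, m)
    return mesh, m
```

A mock backend with estimates decaying like $2^{-\mathrm{level}}$ and $2^{-m}$ reproduces the expected stopping levels for $\tau_{\mathrm{FEM}} = \tau_{\mathrm{QMC}} = 2^{-6}$.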
The estimated and realized errors versus the number of iterations (in the sense of refinement steps) are depicted in Figure 1. Here, the FE a-posteriori estimator gives negative values on rather coarse meshes, where $c^*\tilde\eta_{y,h} > 1$; we therefore discarded these "pre-asymptotic" values in the plot. We see that the FE a-posteriori estimator from Proposition 3 is rather conservative at first, but eventually approaches the actual error for finer meshes. The QMC estimator $|E_{b^m}(\Theta'_h, \Theta_h)|$ is of the same magnitude as $\sigma$ at first, and only two more refinement steps are needed once the FE mesh is sufficiently fine. The combined error estimate $\mathrm{EST}_{b^m,h}$ aligns well with the realized error, as expected from our theoretical analysis.
Figure 1: Results for the QMC-FEM ratio estimator with a-posteriori ratio refinement. First the FE mesh is refined until the tolerance $\tau_{\mathrm{FEM}}$ is achieved (dashed w. circles), then the QMC a-posteriori refinement takes place (dashed w. triangles). The estimated error (solid w. stars) is conservative for coarse meshes, but eventually approaches the realized error (solid w. diamonds).
6 Conclusion

In this paper we outlined the construction of an a-posteriori QMC-FEM estimator that quantifies the approximation error of a) the posterior expectation in Bayesian inverse problems and b) the optimal control under the entropic risk measure. The estimator is computable and viable for a large number of parameters $s$, and it is asymptotically an upper bound for the errors in a) and b). Furthermore, the particular ratio structure $\frac{Z'}{Z}$ of the sought quantities allows us to tackle both the BIP and the OCP in a unified manner. In either case, we work under the assumption that the underlying model is a parametric elliptic PDE with affine-parametric diffusion. Nevertheless, the present QMC methodology for high-dimensional integration is applicable to non-affine parametric PDEs with quantified, holomorphic-parametric dependence, see [4] and the references there. Since the error estimators we consider, $\eta_{y,h}, \tilde\eta_{y,h}, \tilde{\tilde\eta}_{y,h}$, are expressed as sums of local error contributions over $T \in \mathcal{T}_h$, a possible direction of research is to employ the presently proposed estimators $\zeta_{y,h}, \zeta'_{y,h}$ to steer an adaptive QMC-FEM algorithm [13].
References
+ # IterationsReferences
1312
+ [1] R. Becker, M. Brunner, M. Innerberger, J. M. Melenk, and D. Praetorius. Rate-
1313
+ optimal goal-oriented adaptive FEM for semilinear elliptic PDEs. Comput. Math.
1314
+ Appl., 118:18–35, 2022.
1315
+ [2] J. Dick, R. N. Gantner, Q. T. L. Gia, and Ch. Schwab. Multilevel higher-order
1316
+ quasi-Monte Carlo Bayesian estimation. Math. Mod. Meth. Appl. Sci., 27(5):953–
1317
+ 995, 2017.
1318
+ [3] J. Dick, T. Goda, and T. Yoshiki. Richardson extrapolation of polynomial lattice
1319
+ rules. SIAM J. Numer. Anal., 57(1):44–69, 2019.
1320
+ [4] J. Dick, Q. T. Le Gia, and Ch. Schwab. Higher order quasi-Monte Carlo integra-
1321
+ tion for holomorphic, parametric operator equations. SIAM/ASA J. Uncertain.
1322
+ Quantif., 4(1):48–79, 2016.
1323
+ [5] J. Dick, M. Longo, and Ch. Schwab. Extrapolated polynomial lattice rule inte-
1324
+ gration in computational uncertainty quantification. SIAM/ASA J. Uncertain.
1325
+ Quantif., 10(2):651–686, 2022.
1326
+ [6] A. Ern and J.-L. Guermond. Finite elements I—Approximation and interpolation,
1327
+ volume 72 of Texts in Applied Mathematics. Springer, Cham, 2021.
1328
+ [7] H. Föllmer and T. Knispel. Entropic risk measures: coherence vs. convexity,
1329
+ model ambiguity, and robust large deviations. Stoch. Dyn., 11(2-3):333–351, 2011.
1330
+ [8] P. A. Guth, V. Kaarnioja, F. Y. Kuo, C. Schillings, and I. H. Sloan. A quasi-Monte
1331
+ Carlo method for optimal control under uncertainty. SIAM/ASA J. Uncertain.
1332
+ Quantif., 9(2):354–383, 2021.
1333
+ [9] P. A. Guth, V. Kaarnioja, F. Y. Kuo, C. Schillings, and I. H. Sloan. Parabolic
1334
+ PDE-constrained optimal control under uncertainty with entropic risk measure
1335
+ using quasi-Monte Carlo integration. arXiv:2208.02767, 2022.
1336
+ [10] L. Herrmann, M. Keller, and Ch. Schwab. Quasi-Monte Carlo Bayesian estimation
1337
+ under Besov priors in elliptic inverse problems. Math. Comp., 90:1831–1860, 2021.
1338
+ [11] M. Innerberger and D. Praetorius. MooAFEM: An object oriented Matlab code
1339
+ for higher-order (nonlinear) adaptive FEM. arXiv:2203.01845, 2022.
1340
+ [12] D. P. Kouri and T. M. Surowiec. Existence and optimality conditions for risk-averse
1341
+ PDE-constrained optimization. SIAM/ASA J. Uncertain. Quantif., 6(2):787–815,
1342
+ 2018.
1343
+ [13] M. Longo. Adaptive Quasi-Monte Carlo Finite Element Methods for Parametric
1344
+ Elliptic PDEs. J. Sci. Comput., 92(1), 2022.
1345
+ [14] M. Longo. Extrapolated polynomial lattices and adaptivity in computational
+ Uncertainty Quantification. PhD thesis, ETH Zürich, 2022.
1348
+ [15] H. Niederreiter. Low-discrepancy point sets obtained by digital constructions over
1349
+ finite fields. Czechoslovak Math. J., 42(117)(1):143–166, 1992.
1350
+ [16] R. H. Nochetto, K. G. Siebert, and A. Veeser. Theory of adaptive finite element
1351
+ methods: an introduction. In Multiscale, nonlinear and adaptive approximation,
1352
+ pages 409–542. Springer, Berlin, 2009.
1353
+ 16
1354
+
1355
+ [17] R. Scheichl, A.M. Stuart, and A.L. Teckentrup. Quasi-Monte Carlo and multilevel
1356
+ Monte Carlo methods for computing posterior expectations in elliptic inverse
1357
+ problems. SIAM/ASA J. Uncertain. Quantif., 5(1):493–518, 2017.
1358
+ [18] C. Schillings and Ch. Schwab.
1359
+ Sparsity in Bayesian inversion of parametric
1360
+ operator equations. Inverse Problems, 30(6):065007, 30, 2014.
1361
+ [19] A. M. Stuart. Inverse problems: a Bayesian perspective. Acta Numer., 19:451–559,
1362
+ 2010.
1363
+ [20] R. Verfürth. A posteriori error estimation techniques for finite element methods.
1364
+ Numerical Mathematics and Scientific Computation. Oxford University Press,
1365
+ Oxford, 2013.
1366
+ [21] T. P. Wihler. Weighted L2-norm a posteriori error estimation of FEM in polygons.
1367
+ Int. J. Numer. Anal. Model., 4(1):100–115, 2007.
1368
+ 17
1369
+
AtE1T4oBgHgl3EQfpAWX/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
AtE2T4oBgHgl3EQf8QmS/content/2301.04217v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:99efd703f9c26f33fcb4ab9183451bf34491e311fc4464ad6b4afe550b872231
3
+ size 189717
AtE2T4oBgHgl3EQf8QmS/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:62d4ebfa2800bf1b71eddebe452d1e3547774319e658dd7ba9639bb29438c4e1
3
+ size 65947
AtE4T4oBgHgl3EQf5A4w/content/tmp_files/2301.05318v1.pdf.txt ADDED
@@ -0,0 +1,589 @@
1
+ Language-Informed Transfer Learning for Embodied Household Activities
2
+ Yuqian Jiang 1*, Qiaozi Gao 2, Govind Thattai 2, Gaurav Sukhatme 2,3
3
+ 1 The University of Texas at Austin, 2 Amazon Alexa AI, 3 University of Southern California
4
+ [email protected], {qzgao, thattg}@amazon.com, [email protected]
5
+ Abstract
6
+ For service robots to become general-purpose in everyday
7
+ household environments, they need not only a large library
8
+ of primitive skills, but also the ability to quickly learn novel
9
+ tasks specified by users. Fine-tuning neural networks on a va-
10
+ riety of downstream tasks has been successful in many vision
11
+ and language domains, but research is still limited on trans-
12
+ fer learning between diverse long-horizon tasks. We propose
13
+ that, compared to reinforcement learning for a new household
14
+ activity from scratch, home robots can benefit from transfer-
15
+ ring the value and policy networks trained for similar tasks.
16
+ We evaluate this idea in the BEHAVIOR simulation bench-
17
+ mark which includes a large number of household activities
18
+ and a set of action primitives. For easy mapping between state
19
+ spaces of different tasks, we provide a text-based representa-
20
+ tion and leverage language models to produce a common em-
21
+ bedding space. The results show that the selection of similar
22
+ source activities can be informed by the semantic similarity
23
+ of state and goal descriptions with the target task. We further
24
+ analyze the results and discuss ways to overcome the problem
25
+ of catastrophic forgetting.
26
+ Introduction
27
+ Domestic service robots have been envisioned to help in a
28
+ variety of household activities. Imagine a single robot that
29
+ is versatile enough to do everything from tidying up the rooms to play-
+ ing with kids. Such a robot not only requires the sensing,
31
+ navigation, and manipulation capabilities, but also needs to
32
+ intelligently combine these skills to perform each activity as
33
+ requested by the users.
34
+ Since every home is different, a simple library of pre-
35
+ programmed tasks will hardly serve the purpose. For ex-
36
+ ample, when a user wants to clean the kitchen cupboard,
37
+ the specific goal conditions they would like to achieve will
38
+ depend on their personal preferences and constraints of the
39
+ environment. Does the robot re-arrange the dishes in a cer-
40
+ tain pattern? Does the robot dust the outside of the cup-
41
+ board? The reality is that there could be an infinite number
42
+ of combinations of goals, and a robot will most likely have
43
+ to learn to solve new goals after it is deployed in the individ-
44
+ ual homes.
45
+ In this paper, we study the problem of learning novel
46
+ user-specified household activities for a service robot that
47
+ *Work completed during an internship with Amazon Alexa AI.
48
+ is shipped with pre-trained policies for a set of standard ac-
49
+ tivities. We propose to learn the new activity by transferring
50
+ from the policy of a similar activity. Our hypothesis is that
51
+ the transfer can be more efficient than learning the new activ-
52
+ ity from scratch if their initial state and goal conditions are
53
+ similar. Intuitively, a robot should be able to learn putting
54
+ away cleaned dishes efficiently if it has a good policy for
55
+ cleaning kitchen cupboard. Further, we can measure activity
56
+ similarities by leveraging language models to embed their
57
+ state and goal descriptions.
58
+ We test our hypothesis using the BEHAVIOR bench-
59
+ mark (Srivastava et al. 2021). BEHAVIOR simulates a large
60
+ number of household activities for an embodied AI to learn.
61
+ We first present a reinforcement learning (RL) approach to
62
+ solve a subset of activities from scratch. The approach lever-
63
+ ages text descriptions of the agent’s current state and goal to
64
+ allow the policies to operate in a common state space. We
65
+ then initialize the learner with each of the pretrained policies
66
+ when training it on a new activity, and evaluate the hypothe-
67
+ sis that the transfer performance corresponds to the semantic
68
+ similarity between the activity text descriptions. We present
69
+ some initial results to show the potential of this approach for
70
+ enabling versatile and adaptive home robots.
71
+ Related Work
72
+ Transfer learning leverages the knowledge learned in a
73
+ source domain to improve the performance of a learner on
74
+ the target domain. Transfer learning in reinforcement learn-
75
+ ing has been studied to transfer knowledge between different
76
+ Markov Decision Processes (MDPs) (Zhu, Lin, and Zhou
77
+ 2021; Taylor and Stone 2009). While many approaches are
78
+ evaluated in tasks with the same high-level goal and only
79
+ different configurations in Mujoco, navigation, and Atari do-
80
+ mains (Barreto et al. 2017; Schaul et al. 2015), a few re-
81
+ cent transfer learning approaches have demonstrated posi-
82
+ tive transfer between distinct Atari games (Rusu et al. 2016;
83
+ Fernando et al. 2017). Soemers et al. (2021) introduce an approach
+ that transfers policy and value networks between distinct
+ board games that have different action spaces.
+ Encouraged by these successes, we propose to trans-
87
+ fer RL policies among distinct embodied household activi-
88
+ ties which require high-level long-horizon reasoning about a
89
+ large variety of goal conditions. Further, this work proposes
90
+ to use language models on activity descriptions to inform the
91
+ arXiv:2301.05318v1 [cs.RO] 12 Jan 2023
92
+
93
+ selection of source domains.
94
+ BEHAVIOR is a benchmark where embodied AI so-
95
+ lutions are evaluated on household activities in a realis-
96
+ tic physics simulation. The activities are selected from the
97
+ American Time Use Survey to reflect the real distribution of
98
+ household chores. There has been very little success using
99
+ RL to solve BEHAVIOR in its original setting (Srivastava
100
+ et al. 2021). In this paper, the method of providing the text-
101
+ based, fully observable state representation is most similar
102
+ to the work done by Shridhar et al. for the ALFRED bench-
103
+ mark (Shridhar et al. 2021).
104
+ Approach
105
+ Our approach consists of two steps. In the first part, we in-
106
+ troduce a text-based state representation for a RL agent to
107
+ efficiently learn a set of diverse BEHAVIOR activities from
108
+ scratch. The state representation is also in a common embed-
109
+ ding space to allow easy knowledge transfer to other activi-
110
+ ties. In the second part, we introduce how these pre-trained
111
+ policies are re-used for learning new activities, and test our
112
+ hypothesis that the semantic similarity between activity de-
113
+ scriptions can be used to predict transfer performances.
114
+ Learning Single Activities
115
+ We introduce a different RL formulation from the original
116
+ one in the BEHAVIOR benchmark, in order to speed up
117
+ learning these activities using standard RL algorithms.
118
+ Text-Based State and Goal Representation
119
+ Given the
120
+ low RL performance in the original setting of BEHAVIOR,
121
+ we take a similar approach to ALFWORLD (Shridhar et al.
122
+ 2021) by providing full observability of the logical state
123
+ in the form of language. The simulator backbone of BE-
124
+ HAVIOR extracts logical predicates that describe the current
125
+ states and relations of all objects in the world. We filter the
126
+ logical predicates to the ones relevant to the activity, and use
127
+ a template to generate text descriptions of the logical state.
128
+ Similarly, the goal conditions are represented with text de-
129
+ scriptions. Figure 1 shows the initial state for one instance
130
+ of the cleaning kitchen cupboard activity. Figure 2 shows
131
+ the goal definition of the cleaning kitchen cupboard activity.
132
+ There are two goals: 1) dust every cabinet and 2) move all
133
+ cups to one cabinet and all bowls to the other. For the exam-
134
+ ple initial state, there are two ways to ground the second goal
135
+ based on how the cups and bowls are assigned to cabinets,
136
+ and each grounding leads to a distinct set of subgoals.
137
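+ As an illustration of the templating step above (the predicate names and phrasings below are assumptions for the sketch, not BEHAVIOR's actual predicate set):

```python
def describe_state(predicates):
    """Render logical predicates as text, one sentence per predicate."""
    templates = {
        "dusty": "{0} is dusty.",
        "inside": "{0} is inside {1}.",
        "ontop": "{0} is on top {1}.",
        "nextto": "{0} is next to {1}.",
    }
    # Each predicate is a tuple: (predicate name, object arguments...).
    return " ".join(templates[name].format(*args) for name, *args in predicates)
```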
+ Action Primitives
138
+ The action space includes a set of dis-
139
+ crete action primitives implemented in BEHAVIOR: GRASP,
140
+ TOGGLE ON, TOGGLE OFF, OPEN, CLOSE, PLACE INSIDE,
141
+ PLACE ON TOP. Each action primitive takes a parameter that
142
+ refers to an object. For example, PLACE INSIDE(cabinet 0)
143
+ means the robot will put the object currently in its gripper
144
+ into the cabinet.
145
+ Problem Formulation
146
+ We formulate a BEHAVIOR ac-
147
+ tivity as a Markov Decision Process denoted by the tuple
148
+ M = (S, A, P, R). S is the space that consists of tok-
149
+ enized state and goal descriptions. A is the space of action
150
+ top cabinet 47 is dusty. top cabinet 47 is next to cup 1. bot-
151
+ tom cabinet 41 is dusty. bottom cabinet 41 is on top cup 0.
152
+ bottom cabinet 41 is next to cup 0. bottom cabinet 41 is
153
+ next to bowl 1. countertop 26 is under bath towel 0. coun-
154
+ tertop 26 is in reach of robot. countertop 26 is in same room
155
+ as robot. bath towel 0 is on top countertop 26. bath towel 0
156
+ is in reach of robot. soap 0 is on top countertop 26.
157
+ soap 0 is in reach of robot. bowl 0 is on top counter-
158
+ top 26. bowl 0 is in reach of robot. bowl 1 is inside
159
+ bottom cabinet 41. bowl 1 is next to bottom cabinet 41.
160
+ cup 0 is inside bottom cabinet 41. cup 0 is next to bot-
161
+ tom cabinet 41. cup 1 is inside top cabinet 47. cup 1 is next
162
+ to top cabinet 47. room floor kitchen 0 is in reach of robot.
163
+ room floor kitchen 0 is in field of view of robot.
164
+ Figure 1: An example initial state of cleaning kitchen cup-
165
+ board
166
+ For every cabinet, the following is NOT true:
167
+ the cabinet is dusty.
168
+ For at least one cabinet, for every bowl, the bowl is inside
169
+ the cabinet, and the following is NOT true:
170
+ cup1 is inside the cabinet.
171
+ For at least one cabinet, for every cup, the cup is inside the
172
+ cabinet, and the following is NOT true:
173
+ bowl1 is inside the cabinet.
174
+ Figure 2: An example goal definition of cleaning kitchen
175
+ cupboard
176
+ primitives, parameterized by the objects relevant to the ac-
177
+ tivity. P(·|s, a) is the unknown stochastic transition prob-
178
+ abilities. R : S × A × S → R is the reward function.
179
+ Given the grounded subgoals of the activity, R is defined
180
+ as follows: if a is not executable at s, R(s, a, s′) = −1; oth-
181
+ erwise, let g(s) be the number of subgoals satisfied in the
182
+ state s, R(s, a, s′) = ((g(s′) − g(s)) / total number of subgoals) · c, where c
185
+ is a large constant. The reward function penalizes choosing
186
+ action primitives that are not executable, such as TOGGLE
187
+ OFF(cup 0), and generously rewards achieving new sub-
188
+ goals. The objective is to learn a policy π : S → A that
189
+ maximizes the expected total reward.
190
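+ As a minimal sketch (the function name and signature are illustrative, not from the benchmark code), the reward rule above can be written as:

```python
def reward(subgoals_before, subgoals_after, executable, total_subgoals, c=200.0):
    """Reward as described above: -1 for an inexecutable primitive, otherwise
    the fraction of newly satisfied subgoals scaled by a large constant c."""
    if not executable:
        return -1.0
    return (subgoals_after - subgoals_before) / total_subgoals * c
```

+ With c = 200 and all subgoals achieved without penalties, the per-episode total matches the upper bound of 200 used in the experiments.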
+ Actor-Critic Policy
191
+ The policy can be trained by pol-
192
+ icy gradient methods such as PPO (Schulman et al. 2017).
193
+ Figure 3 shows the actor-critic architecture. We use a pre-
194
+ trained DistilBert model (Sanh et al. 2020) to tokenize and
195
+ encode the input text. The actor network outputs a tuple of
196
+ the action primitive index and the object index.
197
+ Transfer Learning
198
+ Since the aim of this work is not to achieve top performances
199
+ on BEHAVIOR, but rather to explore the connection be-
200
+ tween transfer performance and activity similarity, we adopt
201
+ a straightforward method to re-use pre-trained policies and
202
+ compare the learning curves.
203
+
204
+ Figure 3: Actor-critic network architecture for learning one
205
+ BEHAVIOR activity.
206
+ State and Action Mappings
207
+ Since S is a space of tok-
208
+ enized state and goal descriptions, the state space is common
209
+ for all activities. However, the action primitives are param-
210
+ eterized by the objects in the scene, so the action space can
211
+ have different sizes. To re-use a policy for a new activity, we
212
+ copy all the weights in the network (Figure 3) except for the
213
+ actor output layer. Then we resize the actor output layer to
214
+ match the new action space and randomly initialize it before
215
+ training.
216
+ Semantic Similarity
217
+ Given a new activity with an initial
218
+ state and a set of goal conditions, the text-based state and
219
+ goal representation constructed for the MDP formulation is
220
+ also a unique description of this activity. We use the pre-
221
+ trained SimCSE model (Gao, Yao, and Chen 2022) to embed
222
+ activity descriptions, and compute the cosine similarity be-
223
+ tween the embeddings of any pair of activities.
224
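+ The comparison itself reduces to cosine similarity between embedding vectors; a minimal sketch (in practice the vectors would come from the pre-trained SimCSE encoder):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```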
+ Transfer Metric
225
+ We evaluate the transfer performance of
226
+ each pair of activities by the transfer ratio (or transfer score)
227
+ metric (Taylor and Stone 2009; Rusu et al. 2016). The trans-
228
+ fer ratio measures the ratio of the total reward given to the
229
+ transfer learner and the total reward given to the non-transfer
230
+ learner after a certain number of training steps. It can be
231
+ computed by the ratio of the area under the transfer learning
232
+ curve over the area under the non-transfer learning curve.
233
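+ A minimal sketch of the transfer-ratio computation, approximating each area under the learning curve by the sum of per-episode total rewards (the function name is illustrative):

```python
def transfer_ratio(transfer_rewards, scratch_rewards):
    """Ratio of the area under the transfer learning curve to the area under
    the from-scratch learning curve, over the same number of episodes."""
    return sum(transfer_rewards) / sum(scratch_rewards)
```

+ A ratio above 1 indicates positive transfer, i.e. the transfer learner accumulated more reward over the same training budget.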
+ Experiments
234
+ We choose to study 7 activities from BEHAVIOR: storing
235
+ food, cleaning kitchen cupboard, putting away Halloween
236
+ decorations, collect misplaced items, putting away cleaned
237
+ dishes, locking every window, cleaning microwave oven.
238
+ The policies are trained with the PPO algorithm as imple-
239
+ mented in the stable-baselines3 library (Raffin et al. 2021).
240
+ An episode terminates when all the subgoals are achieved
241
+ or the maximum number of steps (64) has been taken. The
242
+ hyperparameter c in the reward function is set to 200. As
243
+ a result, the highest total reward of an episode is 200, i.e.
244
+ achieving all subgoals without any penalty. The lowest total
245
+ Figure 4: Semantic similarities between source and target
246
+ activities.
247
+ reward is -64, i.e. always executing invalid actions.
248
+ Training from Scratch
249
+ To obtain a policy for each activ-
250
+ ity, we train for 512 episodes and take the top performing
251
+ policy out of 3 runs. Table 1 shows the mean reward per
252
+ episode achieved at the end of training by the top policy for
253
+ each activity. Note that there is a wide gap between how
254
+ well these activities are solved by our policies. The policies
255
+ for locking every window and cleaning microwave oven are
256
+ near optimal, whereas the policy for cleaning kitchen cup-
257
+ board never manages to achieve all subgoals during training.
258
+ This difference is due to the solution length and the stochas-
259
+ ticity of executing the action primitives. Some activities re-
260
+ quire executing more than 10 actions in the correct order,
261
+ and some actions (e.g. grasp) have a low success rate in pro-
262
+ ducing the desired effects. The uncertain action effects re-
263
+ flect the challenge for real robots, since the task-level policy
264
+ should know how to recover when there are failures during
265
+ execution.
266
+ Since it is much faster to learn window and microwave
267
+ than the other activities, they are only used as source tasks
268
+ but not target tasks in the transfer experiments below.
269
+ Semantic Similarity
270
+ Figure 4 summarizes the semantic
271
+ similarity in a matrix. Each row is a source activity and
272
+ each column is a target activity. A high number (or warm
273
+ color) means the descriptions of the two activities are close
274
+ in the embedding space, whereas a low number (or cool
275
+ color) indicates that the embeddings are distant. It may not
276
+ be intuitive why some activities are more similar than oth-
277
+ ers based on their abbreviated names. For example, stor-
278
+ ing food, cleaning kitchen cupboard, putting away dishes,
279
+ putting away Halloween decorations all involve moving ob-
280
+
281
+ [Figure 3 image: actor and critic layer stacks sharing a DistilBert feature extractor over tokenized text observations]
+ [Figure 4 image: matrix of semantic similarity scores between source activities (rows) and target activities (columns)]
+ food: -8.5; cupboard: -34.5; halloween: 1.1; misplaced: 4.0; dishes: -7.0; window: 196.0; microwave: 189.0
361
+ Table 1: Mean reward per episode achieved at the end of training.
362
+ Figure 5: Transfer ratios of the first 80 episodes.
363
+ jects into cabinets, so their similarity scores are high when
364
+ taking into account the full descriptions.
365
+ Transfer Ratios
366
+ Figure 5 presents the transfer ratio ma-
367
+ trix after 80 episodes (or about 5000 steps). A ratio above
368
+ 1 indicates positive transfer, i.e. the transfer learner receives
369
+ higher total reward during training. Comparing with the sim-
370
+ ilarity score matrix, we can make two observations. First, a
371
+ high-quality source policy can lead to positive transfer, even
372
+ if the activity is not similar. The activities storing food and
373
+ putting away Halloween decorations (two difficult tasks) are
374
+ not similar to locking every window or cleaning microwave
375
+ oven (two easy tasks), but we see high transfer ratios in the
376
+ first two rows of their columns. Second, for each target ac-
377
+ tivity, higher semantic similarity has a higher chance of pos-
378
+ itive transfer. Cleaning kitchen cupboard and putting away
379
+ cleaned dishes have a high semantic similarity (0.43). The
380
+ only positive transfer to cupboard was from dishes and vice
381
+ versa. On the other hand, collecting misplaced items is se-
382
+ mantically very different from all other activities, and gets
383
+ some of the worst transfer ratios.
384
+ Catastrophic Forgetting
385
+ While there are clear signs that
386
+ re-using policies can jump start learning a new activity, the
387
+ benefits of transfer quickly disappear as catastrophic forget-
388
+ ting takes place. Figure 6 shows the transfer ratios after 160
389
+ episodes (or about 10,000 steps). The general observations
390
+ in Figure 5 still hold, but the ratios are getting lower and
391
+ Figure 6: Transfer ratios of the first 160 episodes.
392
+ there are fewer cases of positive transfer.
393
+ For future studies, one of the ideas to transfer knowl-
394
+ edge without suffering from the conflicting goals is by de-
395
+ coupling the task-independent knowledge from the task-
396
+ dependent knowledge. In the case of household activities,
397
+ there is a lot of shared knowledge across activities, espe-
398
+ cially the preconditions and effects of actions. For example,
399
+ TOGGLE OFF(cup 0) is an invalid action in any activity. To
400
+ this end, successor features (Barreto et al. 2017) and uni-
401
+ versal value function approximation (Schaul et al. 2015) are
402
+ both methods to learn representations that decouple the dy-
403
+ namics from the rewards so they will generalize over differ-
404
+ ent goals. Meanwhile, there are neural representations de-
405
+ signed to avoid catastrophic forgetting. Progressive neural
406
+ nets (Rusu et al. 2016) add a new column of network while
407
+ preserving the weights learned in previous tasks.
408
+ Conclusion
409
+ We propose that home robots can efficiently learn novel
410
+ household tasks from similar but distinct activities, and
411
+ present our analysis in the BEHAVIOR benchmark. Our ex-
412
+ periments show encouraging results: activity similarity mea-
413
+ sured by language embeddings can be used as a predictor for
414
+ transfer performance, and a high-quality source policy of an
415
+ easy but different activity can sometimes lead to a jump-
416
+ start. We also observe the problem of catastrophic forgetting
417
+ and suggest future research in this direction.
418
+
419
+ [Figure 5 image: matrix of transfer ratios after 80 episodes, source activities (rows) by target activities (columns)]
+ [Figure 6 image: matrix of transfer ratios after 160 episodes, source activities (rows) by target activities (columns)]
+ References
529
+ 0.00References
530
+ Barreto, A.; Dabney, W.; Munos, R.; Hunt, J. J.; Schaul, T.;
531
+ van Hasselt, H. P.; and Silver, D. 2017. Successor Features
532
+ for Transfer in Reinforcement Learning.
533
+ In Advances in
534
+ Neural Information Processing Systems, volume 30. Curran
535
+ Associates, Inc.
536
+ Fernando, C.; Banarse, D.; Blundell, C.; Zwols, Y.; Ha, D.;
537
+ Rusu, A. A.; Pritzel, A.; and Wierstra, D. 2017. Pathnet:
538
+ Evolution Channels Gradient Descent in Super Neural Net-
539
+ works. arXiv preprint arXiv:1701.08734.
540
+ Gao, T.; Yao, X.; and Chen, D. 2022.
541
+ SimCSE:
542
+ Simple Contrastive Learning of Sentence Embeddings.
543
+ arXiv:2104.08821.
544
+ Raffin, A.; Hill, A.; Gleave, A.; Kanervisto, A.; Ernestus,
545
+ M.; and Dormann, N. 2021. Stable-Baselines3: Reliable Re-
546
+ inforcement Learning Implementations. Journal of Machine
547
+ Learning Research, 22(268): 1–8.
548
+ Rusu, A. A.; Rabinowitz, N. C.; Desjardins, G.; Soyer, H.;
549
+ Kirkpatrick, J.; Kavukcuoglu, K.; Pascanu, R.; and Hadsell,
550
+ R. 2016. Progressive Neural Networks. arXiv:1606.04671.
551
+ Sanh, V.; Debut, L.; Chaumond, J.; and Wolf, T. 2020.
552
+ DistilBERT, a Distilled Version of BERT: Smaller, Faster,
553
+ Cheaper and Lighter. arXiv:1910.01108.
554
+ Schaul, T.; Horgan, D.; Gregor, K.; and Silver, D. 2015. Uni-
555
+ versal value function approximators. In International con-
556
+ ference on machine learning, 1312–1320. PMLR.
557
+ Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and
558
+ Klimov, O. 2017. Proximal Policy Optimization Algorithms.
559
+ arXiv preprint arXiv:1707.06347.
560
+ Shridhar, M.; Yuan, X.; Côté, M.-A.; Bisk, Y.; Trischler,
561
+ A.; and Hausknecht, M. 2021.
562
+ ALFWorld: Aligning
563
+ Text and Embodied Environments for Interactive Learning.
564
+ arXiv:2010.03768.
565
+ Soemers, D. J. N. J.; Mella, V.; Piette, E.; Stephenson, M.;
566
+ Browne, C.; and Teytaud, O. 2021. Transfer of Fully Convo-
567
+ lutional Policy-Value Networks Between Games and Game
568
+ Variants. arXiv:2102.12375.
569
+ Srivastava, S.; Li, C.; Lingelbach, M.; Mart´ın-Mart´ın, R.;
570
+ Xia, F.; Vainio, K.; Lian, Z.; Gokmen, C.; Buch, S.; Liu,
571
+ C. K.; Savarese, S.; Gweon, H.; Wu, J.; and Fei-Fei, L. 2021.
572
+ BEHAVIOR: Benchmark for Everyday Household Activ-
573
+ ities in Virtual, Interactive, and Ecological Environments.
574
+ arXiv:2108.03332.
575
+ Taylor, M. E.; and Stone, P. 2009. Transfer Learning for Re-
576
+ inforcement Learning Domains: A Survey. Journal of Ma-
577
+ chine Learning Research, 10(7).
578
+ Zhu, Z.; Lin, K.; and Zhou, J. 2021. Transfer Learning in Deep
+ Reinforcement Learning: A Survey. arXiv:2009.07888 [cs, stat].
589
+
AtE4T4oBgHgl3EQf5A4w/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,505 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf,len=504

Language-Informed Transfer Learning for Embodied Household Activities

Yuqian Jiang 1*, Qiaozi Gao 2, Govind Thattai 2, Gaurav Sukhatme 2,3
1 The University of Texas at Austin, 2 Amazon Alexa AI, 3 University of Southern California
jiangyuqian@utexas.edu, {qzgao, thattg}@amazon.com, gaurav@usc.edu

Abstract

For service robots to become general-purpose in everyday household environments, they need not only a large library of primitive skills, but also the ability to quickly learn novel tasks specified by users. Fine-tuning neural networks on a variety of downstream tasks has been successful in many vision and language domains, but research is still limited on transfer learning between diverse long-horizon tasks. We propose that, compared to reinforcement learning for a new household activity from scratch, home robots can benefit from transferring the value and policy networks trained for similar tasks. We evaluate this idea in the BEHAVIOR simulation benchmark, which includes a large number of household activities and a set of action primitives. For easy mapping between state spaces of different tasks, we provide a text-based representation and leverage language models to produce a common embedding space. The results show that the selection of similar source activities can be informed by the semantic similarity of state and goal descriptions with the target task. We further analyze the results and discuss ways to overcome the problem of catastrophic forgetting.
Introduction

Domestic service robots have been envisioned to help in a variety of household activities. Imagine a single robot that is versatile enough to go from tidying up the rooms to playing with kids. Such a robot not only requires sensing, navigation, and manipulation capabilities, but also needs to intelligently combine these skills to perform each activity as requested by the users.

Since every home is different, a simple library of pre-programmed tasks will hardly serve the purpose. For example, when a user wants to clean the kitchen cupboard, the specific goal conditions they would like to achieve will depend on their personal preferences and constraints of the environment. Does the robot re-arrange the dishes in a certain pattern? Does the robot dust the outside of the cupboard? The reality is that there could be an infinite number of combinations of goals, and a robot will most likely have to learn to solve new goals after it is deployed in individual homes.

In this paper, we study the problem of learning novel user-specified household activities for a service robot that is shipped with pre-trained policies for a set of standard activities. We propose to learn the new activity by transferring from the policy of a similar activity. Our hypothesis is that the transfer can be more efficient than learning the new activity from scratch if their initial state and goal conditions are similar. Intuitively, a robot should be able to learn putting away cleaned dishes efficiently if it has a good policy for cleaning the kitchen cupboard. Further, we can measure activity similarities by leveraging language models to embed their state and goal descriptions.

We test our hypothesis using the BEHAVIOR benchmark (Srivastava et al. 2021). BEHAVIOR simulates a large number of household activities for an embodied AI to learn. We first present a reinforcement learning (RL) approach to solve a subset of activities from scratch. The approach leverages text descriptions of the agent's current state and goal to allow the policies to operate in a common state space. We then initialize the learner with each of the pretrained policies when training it on a new activity, and evaluate the hypothesis that the transfer performance corresponds to the semantic similarity between the activity text descriptions. We present some initial results to show the potential of this approach for enabling versatile and adaptive home robots.

* Work completed during an internship with Amazon Alexa AI.
Related Work

Transfer learning leverages the knowledge learned in a source domain to improve the performance of a learner on the target domain. Transfer learning in reinforcement learning has been studied to transfer knowledge between different Markov Decision Processes (MDPs) (Zhu, Lin, and Zhou 2021; Taylor and Stone 2009). While many approaches are evaluated on tasks with the same high-level goal and only different configurations in MuJoCo, navigation, and Atari domains (Barreto et al. 2017; Schaul et al. 2015), a few recent transfer learning approaches have demonstrated positive transfer between distinct Atari games (Rusu et al. 2016; Fernando et al. 2017). Soemers et al. introduce an approach that transfers policy and value networks between distinct board games that have different action spaces (Soemers et al. 2021). Encouraged by these successes, we propose to transfer RL policies among distinct embodied household activities, which require high-level long-horizon reasoning about a large variety of goal conditions. Further, this work proposes to use language models on activity descriptions to inform the selection of source domains.

arXiv:2301.05318v1 [cs.RO] 12 Jan 2023

BEHAVIOR is a benchmark where embodied AI solutions are evaluated on household activities in a realistic physics simulation. The activities are selected from the American Time Use Survey to reflect the real distribution of household chores. There has been very little success using RL to solve BEHAVIOR in its original setting (Srivastava et al. 2021). In this paper, the method of providing the text-based, fully observable state representation is most similar to the work done by Shridhar et al. for the ALFRED benchmark (Shridhar et al. 2021).
Approach

Our approach consists of two steps. In the first part, we introduce a text-based state representation for an RL agent to efficiently learn a set of diverse BEHAVIOR activities from scratch. The state representation is also in a common embedding space to allow easy knowledge transfer to other activities. In the second part, we introduce how these pre-trained policies are re-used for learning new activities, and test our hypothesis that the semantic similarity between activity descriptions can be used to predict transfer performance.
Learning Single Activities

We introduce a different RL formulation from the original one in the BEHAVIOR benchmark, in order to speed up learning these activities using standard RL algorithms.
Text-Based State and Goal Representation

Given the low RL performance in the original setting of BEHAVIOR, we take a similar approach to ALFWORLD (Shridhar et al. 2021) by providing full observability of the logical state in the form of language. The simulator backbone of BEHAVIOR extracts logical predicates that describe the current states and relations of all objects in the world. We filter the logical predicates to the ones relevant to the activity, and use a template to generate text descriptions of the logical state. Similarly, the goal conditions are represented with text descriptions. Figure 1 shows the initial state for one instance of the cleaning kitchen cupboard activity. Figure 2 shows the goal definition of the cleaning kitchen cupboard activity. There are two goals: 1) dust every cabinet, and 2) move all cups to one cabinet and all bowls to the other. For the example initial state, there are two ways to ground the second goal based on how the cups and bowls are assigned to cabinets, and each grounding leads to a distinct set of subgoals.
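The templating step can be sketched as follows. This is a minimal illustration, not the BEHAVIOR API: the predicate names, object names, and the filtering rule are hypothetical stand-ins for the simulator's extracted logical state.

```python
# Hypothetical sketch: render filtered logical predicates as a text
# description, one templated sentence per predicate.
TEMPLATES = {
    "dusty":  "{0} is dusty.",
    "inside": "{0} is inside {1}.",
    "nextto": "{0} is next to {1}.",
    "ontop":  "{0} is on top {1}.",
}

def describe_state(predicates, relevant_objects):
    """Keep only predicates whose arguments are activity-relevant,
    then join the templated sentences into one description."""
    sentences = []
    for name, *args in predicates:
        if all(a in relevant_objects for a in args):
            sentences.append(TEMPLATES[name].format(*args))
    return " ".join(sentences)

state = [
    ("dusty", "top cabinet 47"),
    ("inside", "cup 1", "top cabinet 47"),
    ("ontop", "soap 0", "countertop 26"),  # filtered out below
]
relevant = {"top cabinet 47", "cup 1"}
print(describe_state(state, relevant))
# top cabinet 47 is dusty. cup 1 is inside top cabinet 47.
```

Filtering to activity-relevant objects keeps the description short enough to fit a language-model encoder's input window.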
Action Primitives

The action space includes a set of discrete action primitives implemented in BEHAVIOR: GRASP, TOGGLE ON, TOGGLE OFF, OPEN, CLOSE, PLACE INSIDE, PLACE ON TOP. Each action primitive takes a parameter that refers to an object. For example, PLACE INSIDE(cabinet 0) means the robot will put the object currently in its gripper into the cabinet.
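Since each primitive is parameterized by an object, the discrete action set is the cross product of primitives and activity-relevant objects. A minimal sketch (object names are illustrative):

```python
# Build the parameterized discrete action space: every
# (primitive, object) pair is one action.
PRIMITIVES = ["GRASP", "TOGGLE_ON", "TOGGLE_OFF", "OPEN", "CLOSE",
              "PLACE_INSIDE", "PLACE_ON_TOP"]

def build_action_space(objects):
    return [(p, o) for p in PRIMITIVES for o in objects]

actions = build_action_space(["cabinet_0", "cup_0"])
print(len(actions))   # 7 primitives x 2 objects = 14
print(actions[10])    # ('PLACE_INSIDE', 'cabinet_0')
```

This is why the action space size differs between activities: it grows with the number of objects in the scene.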
Problem Formulation

We formulate a BEHAVIOR activity as a Markov Decision Process denoted by the tuple M = (S, A, P, R). S is the space that consists of tokenized state and goal descriptions. A is the space of action primitives, parameterized by the objects relevant to the activity. P(·|s, a) denotes the unknown stochastic transition probabilities. R : S × A × S → R is the reward function. Given the grounded subgoals of the activity, R is defined as follows: if a is not executable at s, R(s, a, s') = -1; otherwise, let g(s) be the number of subgoals satisfied in the state s, and

    R(s, a, s') = (g(s') - g(s)) / (total number of subgoals) · c,

where c is a large constant. The reward function penalizes choosing action primitives that are not executable, such as TOGGLE OFF(cup 0), and generously rewards achieving new subgoals. The objective is to learn a policy π : S → A that maximizes the expected total reward.

Figure 1: An example initial state of cleaning kitchen cupboard.
"top cabinet 47 is dusty. top cabinet 47 is next to cup 1. bottom cabinet 41 is dusty. bottom cabinet 41 is on top cup 0. bottom cabinet 41 is next to cup 0. bottom cabinet 41 is next to bowl 1. countertop 26 is under bath towel 0. countertop 26 is in reach of robot. countertop 26 is in same room as robot. bath towel 0 is on top countertop 26. bath towel 0 is in reach of robot. soap 0 is on top countertop 26. soap 0 is in reach of robot. bowl 0 is on top countertop 26. bowl 0 is in reach of robot. bowl 1 is inside bottom cabinet 41. bowl 1 is next to bottom cabinet 41. cup 0 is inside bottom cabinet 41. cup 0 is next to bottom cabinet 41. cup 1 is inside top cabinet 47. cup 1 is next to top cabinet 47. room floor kitchen 0 is in reach of robot. room floor kitchen 0 is in field of view of robot."

Figure 2: An example goal definition of cleaning kitchen cupboard.
"For every cabinet, the following is NOT true: the cabinet is dusty. For at least one cabinet, for every bowl, the bowl is inside the cabinet, and the following is NOT true: cup1 is inside the cabinet. For at least one cabinet, for every cup, the cup is inside the cabinet, and the following is NOT true: bowl1 is inside the cabinet."
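The reward definition above can be sketched directly in code. This is a toy illustration: the `subgoals` predicates, the `executable` check, and the set-valued states are hypothetical stand-ins for the simulator's subgoal checker.

```python
# Sketch of the reward: -1 for inexecutable actions, otherwise the
# fraction of newly satisfied subgoals scaled by the constant c
# (set to 200 in the experiments).
def reward(s, a, s_next, subgoals, executable, c=200):
    if not executable(s, a):
        return -1
    g = lambda state: sum(1 for goal in subgoals if goal(state))
    return (g(s_next) - g(s)) / len(subgoals) * c

# Toy example with two subgoals over a set-valued state.
subgoals = [lambda st: "cabinet clean" in st,
            lambda st: "cup stored" in st]
executable = lambda s, a: a != "TOGGLE_OFF(cup 0)"
r = reward({"cabinet clean"}, "PLACE_INSIDE(cabinet_0)",
           {"cabinet clean", "cup stored"}, subgoals, executable)
print(r)  # (2 - 1) / 2 * 200 = 100.0
```

Note that the shaping is potential-like: an action that satisfies no new subgoal earns zero, and an episode that achieves every subgoal without penalties accumulates exactly c in total.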
Actor-Critic Policy

The policy can be trained by policy gradient methods such as PPO (Schulman et al. 2017). Figure 3 shows the actor-critic architecture. We use a pre-trained DistilBERT model (Sanh et al. 2020) to tokenize and encode the input text. The actor network outputs a tuple of the action primitive index and the object index.
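The factored actor output can be sketched as greedy decoding over two logit vectors, one per primitive and one per object. This assumes independent heads, which mirrors the description but is not necessarily the authors' exact implementation; the logit values are toy numbers.

```python
# Decode the factored actor head into one (primitive index, object index)
# tuple by taking the argmax of each logit vector.
def decode_action(primitive_logits, object_logits):
    argmax = lambda xs: max(range(len(xs)), key=xs.__getitem__)
    return argmax(primitive_logits), argmax(object_logits)

prim_logits = [0.1, 2.3, -0.5]        # 3 primitives
obj_logits = [-1.0, 0.2, 0.9, 0.0]    # 4 objects
print(decode_action(prim_logits, obj_logits))  # (1, 2)
```

Factoring the output keeps the head size at (number of primitives + number of objects) rather than their product.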
Transfer Learning

Since the aim of this work is not to achieve top performance on BEHAVIOR, but rather to explore the connection between transfer performance and activity similarity, we adopt a straightforward method to re-use pre-trained policies and compare the learning curves.

Figure 3: Actor-critic network architecture for learning one BEHAVIOR activity.
State and Action Mappings

Since S is a space of tokenized state and goal descriptions, the state space is common to all activities. However, the action primitives are parameterized by the objects in the scene, so the action space can have different sizes. To re-use a policy for a new activity, we copy all the weights in the network (Figure 3) except for the actor output layer. Then we resize the actor output layer to match the new action space and randomly initialize it before training.
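The transfer step can be sketched over a name-to-tensor parameter dictionary; plain nested lists stand in for tensors here, and the layer names (`actor_out.*`) are hypothetical, not the authors' actual module names.

```python
import random

# Copy every parameter except the actor output layer, then create a
# freshly initialized output layer sized for the new action space.
def transfer_weights(source_params, new_action_dim, hidden_dim):
    target = {name: w for name, w in source_params.items()
              if not name.startswith("actor_out")}
    target["actor_out.weight"] = [
        [random.gauss(0, 0.01) for _ in range(hidden_dim)]
        for _ in range(new_action_dim)
    ]
    target["actor_out.bias"] = [0.0] * new_action_dim
    return target

source = {"encoder.weight": [[1.0, 2.0]], "critic.weight": [[0.5, 0.5]],
          "actor_out.weight": [[0.1, 0.2]], "actor_out.bias": [0.0]}
new = transfer_weights(source, new_action_dim=3, hidden_dim=2)
print(new["encoder.weight"])         # copied unchanged
print(len(new["actor_out.weight"]))  # resized to 3 rows
```

The encoder and critic weights carry over the transferred knowledge, while the new output layer learns the mapping to the target activity's action space from scratch.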
Semantic Similarity

Given a new activity with an initial state and a set of goal conditions, the text-based state and goal representation constructed for the MDP formulation is also a unique description of this activity. We use the pre-trained SimCSE model (Gao, Yao, and Chen 2022) to embed activity descriptions, and compute the cosine similarity between the embeddings of any pair of activities.
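The similarity computation itself is a plain cosine over embedding vectors. In the sketch below the vectors are toy stand-ins; in the actual pipeline they would come from the pre-trained SimCSE encoder.

```python
import math

# Cosine similarity between two activity-description embeddings.
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm

emb_a = [1.0, 0.0, 1.0]  # toy embedding of one activity description
emb_b = [1.0, 1.0, 0.0]  # toy embedding of another
print(round(cosine_similarity(emb_a, emb_b), 3))  # 0.5
```

Ranking candidate source activities by this score against the target description is what drives the source-selection hypothesis.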
Transfer Metric

We evaluate the transfer performance of each pair of activities by the transfer ratio (or transfer score) metric (Taylor and Stone 2009; Rusu et al. 2016). The transfer ratio measures the ratio of the total reward given to the transfer learner to the total reward given to the non-transfer learner after a certain number of training steps. It can be computed as the ratio of the area under the transfer learning curve to the area under the non-transfer learning curve.
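The area-under-curve computation can be sketched with the trapezoidal rule over per-step mean rewards; the curves below are toy numbers, not results from the paper.

```python
# Transfer ratio: area under the transfer learning curve divided by
# area under the from-scratch learning curve.
def area_under_curve(rewards):
    """Trapezoidal area under a curve sampled at unit steps."""
    return sum((a + b) / 2 for a, b in zip(rewards, rewards[1:]))

def transfer_ratio(transfer_curve, scratch_curve):
    return area_under_curve(transfer_curve) / area_under_curve(scratch_curve)

transfer = [0, 50, 100, 150]  # toy: transfer learner warms up faster
scratch = [0, 10, 40, 100]
print(transfer_ratio(transfer, scratch))  # 2.25
```

A ratio above 1 indicates positive transfer; below 1, the pre-trained initialization actually slowed learning down.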
Experiments

We choose to study 7 activities from BEHAVIOR: storing food, cleaning kitchen cupboard, putting away Halloween decorations, collect misplaced items, putting away cleaned dishes, locking every window, and cleaning microwave oven. The policies are trained with the PPO algorithm as implemented in the stable-baselines3 library (Raffin et al. 2021). An episode terminates when all the subgoals are achieved or the maximum number of steps (64) has been taken. The hyperparameter c in the reward function is set to 200. As a result, the highest total reward of an episode is 200, i.e. achieving all subgoals without any penalty. The lowest total reward is -64, i.e. always executing invalid actions.

Figure 4: Semantic similarities between source and target activities.
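The stated episode reward bounds follow directly from the reward definition and the step cap, as a quick check:

```python
# With c = 200, a perfect episode earns the full subgoal reward once all
# subgoals are achieved; with a 64-step cap, the worst episode takes an
# inexecutable action (-1) on every step.
c = 200
max_steps = 64
best = c * 1.0           # fraction of subgoals achieved sums to 1
worst = -1 * max_steps   # every step penalized
print(best, worst)       # 200.0 -64
```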
Training from Scratch

To obtain a policy for each activity, we train for 512 episodes and take the top performing policy out of 3 runs. Table 1 shows the mean reward per episode achieved at the end of training by the top policy for each activity. Note that there is a wide gap in how well these activities are solved by our policies. The policies for locking every window and cleaning microwave oven are near optimal, whereas the policy for cleaning kitchen cupboard never manages to achieve all subgoals during training. This difference is due to the solution length and the stochasticity of executing the action primitives. Some activities require executing more than 10 actions in the correct order, and some actions (e.g. grasp) have a low success rate in producing the desired effects. The uncertain action effects reflect the challenge for real robots, since the task-level policy should know how to recover when there are failures during execution. Since it is much faster to learn window and microwave than the other activities, they are only used as source tasks but not target tasks in the transfer experiments below.
150
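The selection of the top policy across independent runs can be sketched in a few lines. This is a minimal illustration, not the paper's actual training code; the policy labels and reward values are hypothetical.

```python
def best_policy(run_results):
    """Pick the top-performing policy across independent training runs.

    run_results: list of (policy, mean_reward_per_episode) pairs,
    one per run (here, 3 runs per activity as in the experiments).
    """
    return max(run_results, key=lambda result: result[1])[0]

# Hypothetical example: three runs on the same activity.
runs = [("run_a", 7.0), ("run_b", 34.5), ("run_c", 12.2)]
top = best_policy(runs)
```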
Semantic Similarity
Figure 4 summarizes the semantic similarity in a matrix. Each row is a source activity and each column is a target activity. A high number (warm color) means the descriptions of the two activities are close in the embedding space, whereas a low number (cool color) indicates that the embeddings are distant. It may not be intuitive why some activities are more similar than others based on their abbreviated names. For example, storing food, cleaning kitchen cupboard, putting away dishes, and putting away Halloween decorations all involve moving objects into cabinets, so their similarity scores are high when taking into account the full descriptions.

[Figure: actor-critic network architecture — an action output layer and a value output layer over three actor and three critic layers, with a features extractor (DistilBERT encoder) over tokenized text observations such as "Grasp cup_0".]

[Figure 4: semantic similarity matrix over the activities food, cupboard, halloween, misplaced, dishes, window, and microwave.]

Table 1: Mean reward per episode achieved at the end of training.

  food  cupboard  halloween  misplaced  dishes  window  microwave
  8.5   34.5      1.1        4.0        7.0     196.0   189.0

Figure 5: Transfer ratios of the first 80 episodes.
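The similarity matrix above is built from pairwise comparisons of activity-description embeddings. The embedding model itself is not restated in this excerpt; the sketch below assumes generic fixed-length embedding vectors and shows how a pairwise matrix can be assembled with cosine similarity (the toy vectors are hypothetical, not real scores from the paper).

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two description embeddings.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_matrix(embeddings: dict) -> dict:
    # Pairwise (source, target) similarity scores, one per off-diagonal cell.
    names = list(embeddings)
    return {
        (src, tgt): cosine_similarity(embeddings[src], embeddings[tgt])
        for src in names
        for tgt in names
        if src != tgt
    }

# Hypothetical toy embeddings; real scores would come from a text encoder.
toy = {
    "cupboard": np.array([1.0, 0.9, 0.0]),
    "dishes": np.array([0.9, 1.0, 0.1]),
    "misplaced": np.array([0.0, 0.1, 1.0]),
}
scores = similarity_matrix(toy)
```

The matrix is symmetric here because cosine similarity does not distinguish source from target; only the role assigned to each activity differs.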
Transfer Ratios
Figure 5 presents the transfer ratio matrix after 80 episodes (about 5,000 steps). A ratio above 1 indicates positive transfer, i.e. the transfer learner receives higher total reward during training. Comparing with the similarity score matrix, we can make two observations. First, a high-quality source policy can lead to positive transfer, even if the activity is not similar. The activities storing food and putting away Halloween decorations (two difficult tasks) are not similar to locking every window or cleaning microwave oven (two easy tasks), but we see high transfer ratios in the first two rows of their columns. Second, for each target activity, higher semantic similarity has a higher chance of positive transfer. Cleaning kitchen cupboard and putting away cleaned dishes have a high semantic similarity (0.43). The only positive transfer to cupboard was from dishes and vice versa. On the other hand, collecting misplaced items is semantically very different from all other activities, and gets some of the worst transfer ratios.
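The transfer ratio described above can be written down directly: total reward collected by the transfer learner divided by that of the from-scratch learner over the same early training window. A minimal sketch (the reward lists are hypothetical placeholders):

```python
def transfer_ratio(rewards_transfer, rewards_scratch):
    """Ratio of total reward of the transfer learner to that of the
    from-scratch learner over the same window (e.g. the first 80
    episodes). A ratio above 1 indicates positive transfer."""
    return sum(rewards_transfer) / sum(rewards_scratch)

# Hypothetical per-episode rewards over a short window.
ratio = transfer_ratio([2.0, 3.0, 3.0], [1.0, 1.0, 2.0])
```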
Catastrophic Forgetting
While there are clear signs that re-using policies can jump-start learning a new activity, the benefits of transfer quickly disappear as catastrophic forgetting takes place. Figure 6 shows the transfer ratios after 160 episodes (about 10,000 steps). The general observations in Figure 5 still hold, but the ratios are getting lower and there are fewer cases of positive transfer.

Figure 6: Transfer ratios of the first 160 episodes.
For future studies, one idea for transferring knowledge without suffering from conflicting goals is to decouple task-independent knowledge from task-dependent knowledge. In the case of household activities, there is a lot of shared knowledge across activities, especially the preconditions and effects of actions. For example, TOGGLE OFF(cup 0) is an invalid action in any activity. To this end, successor features (Barreto et al. 2017) and universal value function approximation (Schaul et al. 2015) are both methods to learn representations that decouple the dynamics from the rewards so that they generalize over different goals. Meanwhile, there are neural representations designed to avoid catastrophic forgetting. Progressive neural nets (Rusu et al. 2016) add a new column of network parameters for each new task while preserving the weights learned in previous tasks.
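The decoupling idea behind successor features can be sketched in a few lines: reward is assumed linear in features, r(s, a) = φ(s, a) · w, and the successor features ψ accumulate expected discounted φ, so Q(s, a) = ψ(s, a) · w. Swapping in a new task's reward weights w reuses the same ψ. This is a schematic illustration with made-up numbers, not the paper's implementation.

```python
import numpy as np

def q_values(psi: np.ndarray, w: np.ndarray) -> np.ndarray:
    # psi: (num_actions, feature_dim) successor features for one state;
    # w: (feature_dim,) task-specific reward weights.
    # Q(s, a) = psi(s, a) . w — dynamics (psi) are decoupled from reward (w).
    return psi @ w

# Same successor features reused under two different task reward weights.
psi = np.array([[1.0, 0.0],
                [0.5, 0.5]])
q_task1 = q_values(psi, np.array([1.0, 0.0]))  # task 1 rewards feature 0
q_task2 = q_values(psi, np.array([0.0, 1.0]))  # task 2 rewards feature 1
```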
Conclusion
We propose that home robots can efficiently learn novel household tasks from similar but distinct activities, and present our analysis in the BEHAVIOR benchmark. Our experiments show encouraging results: activity similarity measured by language embeddings can be used as a predictor for transfer performance, and a high-quality source policy of an easy but different activity can sometimes lead to a jump-start. We also observe the problem of catastrophic forgetting and suggest future research in this direction.
[Data for Figures 5 and 6: transfer ratio matrices, with source activities (window, microwave, food, cupboard, halloween, misplaced, dishes) as rows and target activities (food, cupboard, halloween, misplaced, dishes) as columns.]
+ page_content='87 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
306
+ page_content='00References Barreto, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
307
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
308
+ page_content=' Dabney, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
309
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
310
+ page_content=' Munos, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
311
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
312
+ page_content=' Hunt, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
313
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
314
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
315
+ page_content=' Schaul, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
316
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
317
+ page_content=' van Hasselt, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
318
+ page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
319
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
320
+ page_content=' and Silver, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
321
+ page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
322
+ page_content=' Successor Features for Transfer in Reinforcement Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
323
+ page_content=' In Advances in Neural Information Processing Systems, volume 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
324
+ page_content=' Curran Associates, Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
325
+ page_content=' Fernando, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
326
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
327
+ page_content=' Banarse, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
328
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
329
+ page_content=' Blundell, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
330
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
331
+ page_content=' Zwols, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
332
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
333
+ page_content=' Ha, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
334
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
335
+ page_content=' Rusu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
336
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
337
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
338
+ page_content=' Pritzel, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
339
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
340
+ page_content=' and Wierstra, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
341
+ page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
342
+ page_content=' Pathnet: Evolution Channels Gradient Descent in Super Neural Net- works.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
343
+ page_content=' arXiv preprint arXiv:1701.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
344
+ page_content='08734.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
345
+ page_content=' Gao, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
346
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
347
+ page_content=' Yao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
348
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
349
+ page_content=' and Chen, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
350
+ page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
351
+ page_content=' SimCSE: Simple Contrastive Learning of Sentence Embeddings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
352
+ page_content=' arXiv:2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
353
+ page_content='08821.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
354
+ page_content=' Raffin, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
355
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
356
+ page_content=' Hill, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
357
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
358
+ page_content=' Gleave, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
359
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
360
+ page_content=' Kanervisto, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
361
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
362
+ page_content=' Ernestus, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
363
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
364
+ page_content=' and Dormann, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
365
+ page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
366
+ page_content=' Stable-Baselines3: Reliable Re- inforcement Learning Implementations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
367
+ page_content=' Journal of Machine Learning Research, 22(268): 1–8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
368
+ page_content=' Rusu, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
369
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
370
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
371
+ page_content=' Rabinowitz, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
372
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
373
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
374
+ page_content=' Desjardins, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
375
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
376
+ page_content=' Soyer, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
377
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
378
+ page_content=' Kirkpatrick, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
379
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
380
+ page_content=' Kavukcuoglu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
381
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
382
+ page_content=' Pascanu, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
383
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
384
+ page_content=' and Hadsell, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
385
+ page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
386
+ page_content=' Progressive Neural Networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
387
+ page_content=' arXiv:1606.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
388
+ page_content='04671.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
389
+ page_content=' Sanh, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
390
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
391
+ page_content=' Debut, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
392
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
393
+ page_content=' Chaumond, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
394
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
395
+ page_content=' and Wolf, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
396
+ page_content=' 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
397
+ page_content=' DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
398
+ page_content=' arXiv:1910.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
399
+ page_content='01108.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
400
+ page_content=' Schaul, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
401
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
402
+ page_content=' Horgan, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
403
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
404
+ page_content=' Gregor, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
405
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
406
+ page_content=' and Silver, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
407
+ page_content=' 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
408
+ page_content=' Uni- versal value function approximators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
409
+ page_content=' In International con- ference on machine learning, 1312–1320.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
410
+ page_content=' PMLR.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
411
+ page_content=' Schulman, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
412
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
413
+ page_content=' Wolski, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
414
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
415
+ page_content=' Dhariwal, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
416
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
417
+ page_content=' Radford, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
418
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
419
+ page_content=' and Klimov, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
420
+ page_content=' 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
421
+ page_content=' Proximal Policy Optimization Algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
422
+ page_content=' arXiv preprint arXiv:1707.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
423
+ page_content='06347.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
424
+ page_content=' Shridhar, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
425
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
426
+ page_content=' Yuan, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
427
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
428
+ page_content=' Cˆot´e, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
429
+ page_content='-A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
430
+ page_content=' ;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
431
+ page_content=' Bisk, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
432
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
433
+ page_content=' Trischler, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
434
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
435
+ page_content=' and Hausknecht, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
436
+ page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
437
+ page_content=' ALFWorld: Aligning Text and Embodied Environments for Interactive Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
438
+ page_content=' arXiv:2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
439
+ page_content='03768.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
440
+ page_content=' Soemers, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
441
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
442
+ page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
443
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
444
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
445
+ page_content=' Mella, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
446
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
447
+ page_content=' Piette, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
448
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
449
+ page_content=' Stephenson, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
450
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
451
+ page_content=' Browne, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
452
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
453
+ page_content=' and Teytaud, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
454
+ page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
455
+ page_content=' Transfer of Fully Convo- lutional Policy-Value Networks Between Games and Game Variants.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
456
+ page_content=' arXiv:2102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
457
+ page_content='12375.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
458
+ page_content=' Srivastava, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
459
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
460
+ page_content=' Li, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
461
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
462
+ page_content=' Lingelbach, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
463
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
464
+ page_content=' Mart´ın-Mart´ın, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
465
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
466
+ page_content=' Xia, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
467
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
468
+ page_content=' Vainio, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
469
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
470
+ page_content=' Lian, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
471
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
472
+ page_content=' Gokmen, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
473
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
474
+ page_content=' Buch, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
475
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
476
+ page_content=' Liu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
477
+ page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
478
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
479
+ page_content=' Savarese, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
480
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
481
+ page_content=' Gweon, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
482
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
483
+ page_content=' Wu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
484
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
485
+ page_content=' and Fei-Fei, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
486
+ page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
487
+ page_content=' BEHAVIOR: Benchmark for Everyday Household Activ- ities in Virtual, Interactive, and Ecological Environments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
488
+ page_content=' arXiv:2108.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
489
+ page_content='03332.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
490
+ page_content=' Taylor, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
491
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
492
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
493
+ page_content=' and Stone, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
494
+ page_content=' 2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
495
+ page_content=' Transfer Learning for Re- inforcement Learning Domains: A Survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
496
+ page_content=' Journal of Ma- chine Learning Research, 10(7).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
497
+ page_content=' Zhu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
498
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
499
+ page_content=' Lin, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
500
+ page_content=';' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
501
+ page_content=' and Zhou, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
502
+ page_content=' 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
503
+ page_content=' Transfer Learning in Deep Reinforcement Learning: A Survey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
504
+ page_content=' arXiv:2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
505
+ page_content='07888 [cs, stat].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtE4T4oBgHgl3EQf5A4w/content/2301.05318v1.pdf'}
BdFQT4oBgHgl3EQf9zeq/content/tmp_files/2301.13452v1.pdf.txt ADDED
@@ -0,0 +1,2126 @@
+ Distribution of the number of pivots needed using Gaussian elimination with partial pivoting on random matrices∗
+ John Peca-Medlin†
+ Abstract. Gaussian elimination with partial pivoting (GEPP) remains the most common method to solve dense linear systems. Each GEPP step uses a row transposition pivot movement if needed to ensure the leading pivot entry is maximal in magnitude for the leading column of the remaining untriangularized subsystem. We will use theoretical and numerical approaches to study how often this pivot movement is needed. We provide full distributional descriptions for the number of pivot movements needed using GEPP with particular Haar random ensembles, as well as compare these models to other common transformations from randomized numerical linear algebra. Additionally, we introduce new random ensembles with fixed pivot movement counts and fixed sparsity, α. Experiments estimating the empirical spectral density (ESD) of these random ensembles lead to a new conjecture on a universality class of random matrices with fixed sparsity whose scaled ESD converges to a measure on the complex unit disk that depends on α and is an interpolation of the uniform measure on the unit disk and the Dirac measure at the origin.
+ Key words. Gaussian elimination, partial pivoting, butterfly matrices, Stirling numbers of the first kind, numerical linear algebra, universality
+ AMS subject classifications. 60B20, 15A23, 65F99
+ 1. Introduction and background. Gaussian elimination (GE) is the most used method to solve linear systems
+ (1.1) Ax = b
+ for A ∈ R^{n×n}, and remains a staple of introductory linear algebra courses. If no leading principal minors of A vanish, then GE iteratively transforms (1.1) into two equivalent triangular systems, resulting in the factorization A = LU for L unipotent lower triangular and U upper triangular matrices using (2/3)n^3(1 + o(1)) FLOPs. If A is nonsingular but does have a vanishing principal minor, then GE would need to be combined with a selected pivoting strategy to ensure GE can be continued at each intermediate step without encountering a zero pivot. The row and column movements used to ensure nonzero pivots would then result in additional permutation matrices P, Q for a modified GE factorization PAQ = LU. Even when pivoting is not necessary, it remains desirable for the added computational stability of certain pivoting strategies. This includes GE with partial pivoting (GEPP), the most prominent pivoting strategy for dense linear systems, which is the default strategy used by MATLAB with its built-in lu function. GEPP uses only row permutations and so results in a final PA = LU factorization. (See Subsection 1.3 for a relevant description of GE, while Section 3 provides further background for GEPP.)
+ With high performance computing, choosing a desired pivoting strategy with GE often
40
+ becomes a balancing act that takes into account the total computation time (viz., total FLOP
41
+ ∗Submitted to the editors February 1, 2023.
42
+ †Department of Mathematics, University of Arizona, Tucson, AZ ([email protected]).
43
+ 1
44
+ arXiv:2301.13452v1 [math.NA] 31 Jan 2023
45
+
46
+ 2
47
+ J. PECA-MEDLIN
48
+ count) rather than just accuracy (viz., the numerical stability of computed solutions). The
49
+ cost of moving large amounts of data can be very expensive on high performance machines.
50
+ For example, numerical experiments on a hybrid CPU/GPU setup have shown GEPP used
51
+ with moderately sized random matrices have pivoting account for over 20 percent of the total
52
+ computation time [1]. Hence, limiting pivot movements using GEPP is desirable to save time.
53
Parker introduced a preconditioning method through the use of random butterfly matrices to remove the need for pivoting altogether for any nonsingular linear system [16]. Butterfly matrices are a recursively defined class of orthogonal matrices (see Section 4 for a full definition of random butterfly matrices) for which the matrix-vector multiplication Ax is computed using 3n log2 n FLOPs rather than the O(n^2) FLOPs needed using a general dense matrix. Parker established that for U, V iid random butterfly matrices, Ax = b can be transformed into the equivalent system

(1.2) UAV∗y = Ub and x = V∗y

for which GE without pivoting (GENP) can be carried out with probability near 1. The above computation shows that transforming (1.1) into (1.2) can be performed using O(n^2 log2 n) FLOPs, and hence does not impact the leading-order complexity of GE.
68
In [17], Peca-Medlin and Trogdon further explored the numerical stability of GE under a variety of pivoting strategies in addition to randomized preconditioning methods, which included random butterfly and Haar orthogonal transformations. One finding was that certain preconditioners have the impact that running the preconditioner followed by GE with another pivoting strategy can "upgrade" the pivoting strategy in terms of the numerical accuracy of the computed solution. For instance, even though GENP often leads to accuracy far from that achieved using GEPP or GE with complete pivoting (GECP), a preconditioned matrix using GEPP can lead to accuracy on par with using GECP. Adding one step of iterative refinement further seals this alignment in accuracy.
78
A natural direction that arose out of this previous analysis in [17] was to better understand how many actual pivot movements are needed with these different pivoting strategies on the preconditioned linear systems. The goal of this paper is to provide a clearer answer to this question with respect to GEPP. GEPP can use a pivot movement at each GE step, for up to n − 1 total pivot movements. So applying GEPP to a random linear system results in the number of pivot movements being a random variable with support in {0, 1, . . . , n − 1}. We will study the question of how much movement one should expect when using GEPP in conjunction with randomized preconditioning methods.1
86
Our results include both theoretical and numerical approaches, focusing on applying GEPP to several random matrix ensembles. The theoretical results rely on input matrices from Haar orthogonal and butterfly ensembles (see Subsection 1.2 for a review of Haar measures). Our numerical studies further examine these models in relation to other common transformations from randomized numerical linear algebra, which expand studies in [17].
91
1.1. Outline of paper. This paper is structured to explore the question of how much actual pivot movement is necessary when using GEPP with a variety of randomized preconditioning methods on particular nonsingular linear systems. The remainder of Section 1 establishes notation and preliminary background on GE (Subsection 1.3) and the Stirling-1 distribution (Subsection 1.4). This includes connections to previous statistical results on distributions using Stirling numbers of the first kind, as well as formally establishing standard statistics for this distribution that will be used for comparison in the later numerical results in Section 6.

1This is a simpler focus than the more general study of the distribution of the GEPP permutation matrix factor.

DISTRIBUTION OF THE NUMBER OF PIVOTS NEEDED USING GEPP
104
Section 2 provides a statement of the main theoretical result, Theorem 2.1, which gives the full distribution for the number of GEPP pivot movements needed for particular random ensembles. This includes the distribution of pivot movements when
I. the resulting GEPP permutation matrix factor P is uniformly distributed among the permutation matrices, which uses the Stirling-1 distribution, Υn, that satisfies

(1.3) P(Υn = k) = |s(n, k)|/n!

for k = 1, 2, . . . , n, where s(n, k) is the Stirling number of the first kind; and
II. when GEPP is applied to a Haar-butterfly matrix, which results in a scaled Bernoulli distribution.
The remainder of Section 2 provides results and implications of (I) in connection with QR factorizations and other standard Haar unitary models. The proof of Theorem 2.1 is postponed to Sections 3 and 4.
118
Section 3 provides the necessary background to establish a proof of part (I) of Theorem 2.1. This includes introducing explicit decompositions of permutations in Sn, the group of permutations of n objects, that connect explicitly to GEPP permutation matrix factors as well as to uniform sampling of Sn. Section 4 provides more background for P_N^{(B)}, the butterfly permutation matrices, and yields a proof of part (II) of Theorem 2.1. Additionally, explicit structural configurations of the exact pivot movement locations of Haar-butterfly matrices are established, yielding a distribution on the pivot location configurations.
126
Section 5 builds on top of Theorem 2.1 to introduce a new ensemble of random matrices that aligns with uniformly sampling from GLn(R)/Un(R), the left cosets of the group of nonsingular upper triangular matrices in the general linear group. This ensemble can be used to sample from random ensembles with fixed pivot movement distributions. General random ensembles are introduced with fixed sparsity conditions. A new conjecture is provided for the asymptotic empirical spectral distribution of this generalized random ensemble with fixed sparsity, which connects to and subsumes the famous circular law in random matrix theory.
133
Section 6 uses numerical experiments to further explore the distribution of pivot movements needed using other transformations from randomized numerical linear algebra. These experiments focus on three initial models, two that need minimal GEPP pivot movements and one that requires the maximal number of GEPP pivot movements. These approaches build on top of the numerical experiments used in [17], as well as connect to other random models used in earlier sections.
139
1.2. Notation and preliminaries. For convenience, N will be reserved for powers of 2, with N = 2^n. For A ∈ F^{n×m} where F = R or C, Aij denotes the entry in the ith row and jth column of A, while A_{α,β} denotes the submatrix of A with row indices α ⊂ [n] := {1, 2, . . . , n} and column indices β ⊂ [m]. Let ei denote the standard basis elements of F^n and Eij = ei ej^T the standard basis elements of F^{n×m}. I denotes the identity matrix and 0 the zero matrix or vector (with the dimensions implicit from context if not stated explicitly). If A ∈ F^{n×n} is nonsingular, then A^0 := I. Let D = {z ∈ C : |z| < 1} denote the unit complex disk, with ∂D denoting the unit complex circle. We write S^{n−1} = {x ∈ F^n : ∥x∥2 = 1}, where ∥·∥2 denotes the standard ℓ2-norm.
152
Let Sn denote the symmetric group on n elements. Recall every permutation σ ∈ Sn can be written in cycle notation, with σ = τ1τ2 · · · τj where τi = (a_{i1} a_{i2} · · · a_{ik}) is a k-cycle such that τi(a_{im}) = a_{i,m+1} for m < k and τi(a_{ik}) = a_{i1}. Moreover, recall every permutation can be written as a product of disjoint cycles. For σ ∈ Sn, let Pσ denote the orthogonal permutation matrix such that Pσ ei = e_{σ(i)}. For example, for (1 2) ∈ S2,

(1.4) P_{(1 2)} = [ 0 1 ; 1 0 ].

Let Pn denote the n × n permutation matrices, i.e., the left regular representation of the action of Sn on [n].
167
Let ∥·∥max denote the element-wise max norm of a matrix, defined by ∥A∥max = max_{i,j} |Aij|. Define A ⊕ B ∈ F^{(n1+n2)×(m1+m2)} to be the block diagonal matrix with blocks A ∈ F^{n1×m1} and B ∈ F^{n2×m2}. Define A ⊗ B ∈ F^{n1n2×m1m2} to be the Kronecker product of A ∈ F^{n1×m1} and B ∈ F^{n2×m2}, given by

(1.5) A ⊗ B = [ A_{11}B · · · A_{1,m1}B ; ⋮ ⋱ ⋮ ; A_{n1,1}B · · · A_{n1,m1}B ].

Recall Kronecker products satisfy the mixed-product property: if all matrix sizes are compatible for the necessary matrix multiplications, then

(1.6) (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD),

i.e., the product of Kronecker products is the Kronecker product of the products. As a result, Kronecker products inherit certain shared properties of their input matrices. For example, if A and B are both orthogonal or unitary matrices, then so is A ⊗ B. Similarly, if A ∈ Pn and B ∈ Pm, then A ⊗ B ∈ Pnm.
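The mixed-product property (1.6) is easy to check numerically; below is a minimal NumPy sketch (the matrix sizes are chosen arbitrarily for illustration, subject only to the compatibility requirement):

```python
import numpy as np

rng = np.random.default_rng(0)

# Compatible sizes: (AC) needs cols(A) = rows(C); (BD) needs cols(B) = rows(D).
A = rng.standard_normal((2, 3))
C = rng.standard_normal((3, 4))
B = rng.standard_normal((5, 2))
D = rng.standard_normal((2, 6))

lhs = np.kron(A, B) @ np.kron(C, D)   # product of Kronecker products
rhs = np.kron(A @ C, B @ D)           # Kronecker product of the products

assert np.allclose(lhs, rhs)          # the mixed-product property (1.6)
```

The same check with orthogonal inputs confirms that A ⊗ B inherits orthogonality.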
194
Let GLn(F) denote the group of nonsingular matrices with entries in F. Let Un(F) denote the subgroup of nonsingular upper triangular matrices and Ln(F) denote the subgroup of unipotent (i.e., with all diagonal entries equal to 1) lower triangular matrices. O(n) and U(n) denote the orthogonal and unitary groups of n × n matrices, and SO(n), SU(n) denote the respective special orthogonal and special unitary subgroups; note O(n) will be used for the orthogonal matrices while O(·) also denotes the classical "big-oh" notation, with the meaning clear from context. Recall if H is a subgroup of G, then G/H = {xH : x ∈ G} denotes the set of left cosets of H in G and H\G = {Hx : x ∈ G} the set of right cosets of H in G.
202
We write X ∼ Y if X and Y are equal in distribution. Standard distributions that will be used in this document include: X ∼ N(0, 1) to denote a standard Gaussian random variable (with probability density (2π)^{−1/2} e^{−x²/2}); X ∼ NC(0, 1) to denote a standard complex Gaussian random variable (with X ∼ (Z1 + iZ2)/√2 for Z1, Z2 iid N(0, 1)); X ∼ Uniform(A) to denote a uniform random variable with support on a compact set A with probability density (1/|A|) 1_A (for |A| either denoting the cardinality of A if A is finite or the corresponding appropriate Lebesgue measure of A); ξ ∼ Bernoulli(p) to denote a Bernoulli random variable with parameter p ∈ [0, 1] where P(ξ = 1) = p = 1 − P(ξ = 0); and ξ ∼ Rademacher to denote a Rademacher random variable that takes only the values 1 and −1 with equal probability (i.e., ξ ∼ (−1)^{Bernoulli(1/2)}). A random variable is called continuous if its associated probability density is a continuous function. Let Gin(n, m) denote the n × m Ginibre ensemble, consisting of random matrices with independent and identically distributed (iid) standard Gaussian entries; GinC(n, m) will denote the similarly defined complex Ginibre ensemble, whose entries are iid standard complex Gaussian random variables. Let GOE(n) and GUE(n) denote the Gaussian Orthogonal and Gaussian Unitary Ensembles, respectively; recall these can be sampled using the Ginibre ensembles as follows: if G ∼ Gin(n, n) and H ∼ GinC(n, n), then (G + G^T)/√2 ∼ GOE(n) and (H + H^∗)/√2 ∼ GUE(n).
231
Let ϵmachine denote the machine epsilon, which is the minimal positive number such that fl(1 + ϵmachine) ≠ 1 using floating-point arithmetic.2 If using t-bit mantissa precision, then ϵmachine = 2^{−t}. Our later experiments in Section 6 will use double precision in MATLAB, which uses a 52-bit mantissa.
235
Standard models from randomized numerical linear algebra will be used for comparison in Section 6. These include the Walsh transformation and Discrete Cosine Transformation (DCT), which were previously used in [17]. Sampling for the following experiments will use native (deterministic) MATLAB functions (viz., the Fast Walsh-Hadamard transform fwht and the default Type II discrete cosine transform dct) applied after an independent row sign transformation chosen uniformly from {±1}^N. See [20, 24] for an overview of numerical properties of the Walsh and DCT transforms, and [13] for a thorough survey that provides proper context for the use of these transforms and other tools from randomized numerical linear algebra.
244
Additionally, we will utilize left and right invariance properties of the Haar measure on locally compact Hausdorff topological groups, first established by Weil [25]. For a compact group G, this measure can be normalized to yield a probability measure Haar(G), which inherits the invariance and regularity properties of the original measure and yields a means to uniformly sample from compact groups, such as O(n) and SO(N). Recall every nonsingular matrix A ∈ F^{n×n} has a QR factorization, with A = QR for R upper triangular with positive diagonal entries and Q ∈ O(n) if F = R or Q ∈ U(n) if F = C. Stewart provided an outline to sample from Haar(O(n)) using Gin(n, n) through the QR factorization: if A ∼ Gin(n, n) and A = QR is the QR decomposition of A where R has positive diagonal entries, then Q ∼ Haar(O(n)) [19]. Similarly, Haar(U(n)) can be sampled using GinC(n, n). Our experiments will employ efficient sampling methods for Haar(O(n)) that use Gaussian Householder reflectors, in line with the QR factorization of Gin(n, n) (see [15] for an outline of this method).
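Stewart's construction can be sketched directly; the following is a naive QR-based sampler (not the faster Householder approach referenced above), where the sign correction makes the diagonal of R positive so that Q is Haar distributed:

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Sample Q ~ Haar(O(n)) from a Ginibre matrix via QR (Stewart's method)."""
    G = rng.standard_normal((n, n))     # A ~ Gin(n, n)
    Q, R = np.linalg.qr(G)
    # Scale column j by sign(R_jj) so that R has positive diagonal entries.
    Q = Q * np.sign(np.diag(R))
    return Q

rng = np.random.default_rng(1)
Q = haar_orthogonal(5, rng)
assert np.allclose(Q.T @ Q, np.eye(5))  # Q is orthogonal
```

Without the sign correction, the distribution of Q depends on the QR routine's sign conventions and is not invariant.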
257
2We will use the IEEE standard model for floating-point arithmetic.
261
1.3. Gaussian elimination and growth factors. GENP iteratively works through the bottom right untriangularized (n − k + 1)-dimensional submatrices of the GE transformed matrix A^{(k)} to result in the factorization A = LU for L a unipotent lower triangular matrix and U an upper triangular matrix. A^{(k)} represents the resulting transformed matrix of A at the kth GE step, which is zero below the first k − 1 diagonals, and

(1.7) Lij = A^{(j)}_{ij} / A^{(j)}_{jj}

for i > j, with A^{(1)} = A and A^{(n)} = U. When GENP can be completed (viz., when all leading principal minors are nonzero), the final factorization A = LU can be reused with different input b to solve the computationally simpler triangular systems

(1.8) Ly = b and Ux = y.

Moreover, if A has nonvanishing principal minors, then the resulting LU factorization is unique. See standard references, such as [10], for an explicit outline of GE.
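The factorization above can be sketched in a few lines; this is a minimal GENP implementation (assuming nonvanishing leading principal minors, so no pivoting is performed) that records the multipliers (1.7) in L:

```python
import numpy as np

def genp_lu(A):
    """GE without pivoting: return L (unipotent lower) and U with A = L U."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        # Multipliers L[i, k] = A^{(k)}_{ik} / A^{(k)}_{kk} for i > k, as in (1.7).
        L[k+1:, k] = A[k+1:, k] / A[k, k]
        # Eliminate below the pivot in the trailing submatrix.
        A[k+1:, k:] -= np.outer(L[k+1:, k], A[k, k:])
    return L, np.triu(A)

# Example matrix with nonzero leading principal minors.
A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, U = genp_lu(A)
assert np.allclose(L @ U, A)
```

Once L and U are stored, each new right-hand side b costs only two triangular solves (1.8).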
281
If GENP cannot be completed, then a pivoting strategy can be applied so that GE can continue at each step, which can involve row or column movements that ensure the leading diagonal entry (i.e., the pivot) of the untriangularized subsystem is nonzero. Different pivoting strategies then result in the modified GE factorization PAQ = LU for P, Q permutation matrices. GEPP remains the most popular pivoting strategy, which uses only row permutations to ensure the leading pivot at the kth GE step is maximal in magnitude among the lower entries in its column. By construction, the L from the resulting GEPP factorization PA = LU satisfies ∥L∥max = 1. If there is ever a "tie" during an intermediate GEPP pivot search, which occurs when |A^{(j)}_{ij}| = |A^{(j)}_{jj}| and would result in |Lij| = 1 for some i > j, then the L and U factors are not unique with respect to row transformed linear systems; i.e., if A has the GEPP factorization PA = LU and B = QA for Q a permutation matrix, then we do not necessarily have the GEPP factorization (PQ^T)B = LU. When ties are avoided, GEPP results in unique L and U factors.
296
Theorem 1.1 ([17]). Let A be a nonsingular square matrix. Then the L and U factors in the GEPP factorization PA = LU are invariant under row permutations on A iff |Lij| < 1 for all i > j.

Moreover, when no ties are encountered with a nonsingular A and B = QA defined as above, then GEPP does necessarily result in the factorization (PQ^T)B = LU.
302
Even when pivoting is not necessary, pivoting can remain desirable for its numerical stability properties when using floating-point arithmetic. Wilkinson first established the backward stability of GEPP by showing the growth factor,

(1.9) ρ(A) = max_k ∥A^{(k)}∥max / ∥A∥max,

satisfies the upper exponential bound 1 ≤ ρ(A) ≤ 2^{n−1} for all matrices A [26]. The growth factor controls the backward relative error for computed solutions x̂ using GE, as Wilkinson further established through

(1.10) ∥x̂ − x∥∞ / ∥x∥∞ ≤ 4n²κ∞(A)ρ(A)ϵmachine

for κ∞(A) = ∥A∥∞∥A^{−1}∥∞ the ℓ∞-condition number. Section 6 will consider particular linear models that maximize the GEPP growth factors.
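Wilkinson's upper bound 2^{n−1} is attained by the classic worst-case matrix with ones on the diagonal and in the last column and −1 below the diagonal; a sketch computing ρ(A) as in (1.9) under GEPP (ties are broken by keeping the current pivot row, matching argmax's first-occurrence rule):

```python
import numpy as np

def gepp_growth(A):
    """Run GEPP and return the growth factor rho(A) from (1.9)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    base = np.abs(A).max()
    growth = base
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))  # first max; ties keep row k
        A[[k, p]] = A[[p, k]]
        A[k+1:, k:] -= np.outer(A[k+1:, k] / A[k, k], A[k, k:])
        A[k+1:, k] = 0.0
        growth = max(growth, np.abs(A).max())
    return growth / base

n = 8
W = -np.tril(np.ones((n, n)), -1) + np.eye(n)  # 1 on diagonal, -1 below
W[:, -1] = 1.0                                 # ones in the last column
assert gepp_growth(W) == 2.0 ** (n - 1)        # bound 2^{n-1} is attained
```

Each elimination step doubles the entries of the last column, so the final pivot is exactly 2^{n−1} (exact in floating point, since only powers of 2 appear).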
321
In practice, GEPP implementations result in computed solutions with accuracy far better than the worst-case exponential behavior Wilkinson's analysis first highlighted. Understanding this behavior remains an important question in numerical analysis. It was partially answered by Huang and Tikhomirov through average-case analysis of GEPP using A ∼ Gin(n, n): they showed that, with probability near 1, the number of bits of precision needed to solve Ax = b to m bits of accuracy is m + O(log n), and moreover the computed and exact GEPP permutation matrix factors align [11].
328
1.4. Stirling-1 distribution. This section will delve further into some properties of the Stirling-1 distribution, Υn, with probability mass function given by (1.3). Recall the Stirling numbers of the first kind, s(n, k), arise as the coefficients in the generating function

(1.11) (x)_n = Σ_{k=0}^{n} s(n, k) x^k

for (x)_n = x(x − 1) · · · (x − n + 1), where s(n, k) = 0 if not 1 ≤ k ≤ n, except s(0, 0) = 1.3 The absolute Stirling numbers of the first kind, |s(n, k)|, can similarly be generated using (1.11) along with |s(n, k)| = (−1)^{n+k} s(n, k); alternatively, |s(n, k)| are determined by the generating function

(1.12) ⟨x⟩_n = Σ_{k=0}^{n} |s(n, k)| x^k

for ⟨x⟩_n = x(x + 1) · · · (x + n − 1). (1.12) further establishes the relation

(1.13) |s(n, k)| = s_{n−k}(1, 2, . . . , n − 1)

where

(1.14) s_j(a_1, a_2, . . . , a_m) = Σ_{i_1<···<i_j} Π_{ℓ=1}^{j} a_{i_ℓ}

denotes the elementary symmetric polynomials. This relationship can be used to establish the recurrence

(1.15) |s(n, k)| = |s(n − 1, k − 1)| + (n − 1)|s(n − 1, k)|

for k > 0.

3The notation for Stirling numbers is inconsistent throughout the literature. We adopt the convention used in [5].
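The recurrence (1.15), together with the conventions |s(0, 0)| = 1 and |s(n, k)| = 0 outside 1 ≤ k ≤ n, gives a direct way to tabulate the absolute Stirling numbers; a small sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def abs_stirling1(n, k):
    """|s(n, k)| via the recurrence |s(n,k)| = |s(n-1,k-1)| + (n-1)|s(n-1,k)|."""
    if n == 0 and k == 0:
        return 1
    if k < 1 or k > n:
        return 0
    return abs_stirling1(n - 1, k - 1) + (n - 1) * abs_stirling1(n - 1, k)

assert abs_stirling1(4, 1) == 6   # the (n-1)! n-cycles in S_4
assert abs_stirling1(4, 4) == 1   # the identity permutation
assert sum(abs_stirling1(4, k) for k in range(1, 5)) == 24  # row sum is 4!
```

The row-sum check is exactly the identity that makes (1.3) a probability distribution.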
368
Plugging x = 1 into (1.12) establishes the identity

(1.16) n! = Σ_{k=1}^{n} |s(n, k)|,

which shows that (1.3) defines a valid probability distribution. An alternative justification for (1.16) follows from the standard interpretation that |s(n, k)| counts the number of permutations σ ∈ Sn that have exactly k cycles in their disjoint cycle decomposition, where fixed points are counted as 1-cycles.4 This interpretation using Sn can be used to provide a combinatorial proof of (1.16): the left-hand side of (1.16) is the number of elements of Sn, and the right-hand side is the sum over each subset of permutations with a fixed number of k cycles, for k ranging from 1 (viz., the n-cycles, of which there are |s(n, 1)| = (n − 1)!) to n (viz., the identity permutation, in which each object comprises its own cycle of length 1, so that |s(n, n)| = 1).5
383
Stirling numbers have appeared in connection with statistical problems dating back to their original formulation by Stirling in the 1730s (cf. [4]). Probabilistic tools have been used to establish and analyze properties of Stirling numbers in the mid- to late-20th century [3, 4, 9]. Υn has appeared as a variant of a more general ensemble of Stirling distributions but has not been studied extensively in the literature. For instance, the mean and variance have been computed for Υn (cf. [3]), but general higher moment computations have not been touched. Applying successive derivatives in x to (1.12) and then plugging in x = 1 yields6

(1.17) EΥn = Hn and Var Υn = Hn − H_n^{(2)},

where

(1.18) H_n^{(m)} = Σ_{j=1}^{n} 1/j^m

are the generalized Harmonic numbers and Hn = H_n^{(1)} the standard Harmonic numbers. Well-known asymptotic results as n → ∞ using Harmonic numbers include Hn − log n → γ ≈ 0.5772156649, the Euler–Mascheroni constant, and H_n^{(m)} → ζ(m) for m > 1, where

(1.19) ζ(m) = Σ_{n≥1} 1/n^m

is the Riemann zeta function, with ζ(2) = π²/6.

4This correspondence is justified by noting that the number of permutations σ ∈ Sn with k disjoint cycles in their disjoint cycle decomposition satisfies both the initial conditions and the recurrence (1.15) (a combinatorial argument yields the analogous recurrence by separating whether 1 comprises its own cycle, which aligns with |s(n − 1, k − 1)|, or whether 1 is contained in a larger cycle, which aligns with (n − 1)|s(n − 1, k)| since there are then n − 1 places to insert 1 into an existing cycle).

5Similarly, the Stirling numbers of the second kind, S(n, k), can be defined as the number of partitions of n objects into k nonempty sets, which can be connected to Sn. S(n, k) and s(n, k) are further related as they yield the coordinates n, k of lower triangular n × n matrices that are mutual inverses (cf. pg. 144 in [5]).

6For example, using d/dx ⟨x⟩_n = ⟨x⟩_n · Σ_{j=0}^{n−1} 1/(x + j) = Σ_{k=1}^{n} |s(n, k)| k x^{k−1} and then plugging in x = 1 yields n! Hn = Σ_{k=1}^{n} k |s(n, k)| = n! EΥn.
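The formulas (1.17) can be checked exactly from the probability mass function (1.3); a sketch using exact rational arithmetic, with the rows |s(n, k)| tabulated by the recurrence (1.15):

```python
from fractions import Fraction

def abs_stirling_row(n):
    """Row (|s(n, 0)|, ..., |s(n, n)|) built from the recurrence (1.15)."""
    row = [1]                                    # n = 0: |s(0, 0)| = 1
    for m in range(1, n + 1):
        prev = row + [0]                         # prev[k] = |s(m-1, k)|
        row = [(prev[k - 1] if k > 0 else 0) + (m - 1) * prev[k]
               for k in range(m + 1)]
    return row

n = 6
row = abs_stirling_row(n)
total = sum(row)                                 # equals n! by (1.16)
mean = sum(Fraction(k * row[k]) for k in range(n + 1)) / Fraction(total)
second = sum(Fraction(k * k * row[k]) for k in range(n + 1)) / Fraction(total)
H = sum(Fraction(1, j) for j in range(1, n + 1))        # H_n
H2 = sum(Fraction(1, j * j) for j in range(1, n + 1))   # H_n^{(2)}
assert mean == H                                 # E[Upsilon_n] = H_n
assert second - mean**2 == H - H2                # Var(Upsilon_n) = H_n - H_n^{(2)}
```

Because Fractions are exact, both identities in (1.17) hold with equality rather than to rounding error.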
451
Continuing with higher moment computations using (1.12) then yields that EΥn^m is a function of H_n^{(j)} for j = 1, 2, . . . , m. Moreover, after computing also the third and fourth moments of Υn, gathering all resulting terms that use only Hn = H_n^{(1)} results in a Bell polynomial of order m with inputs Hn in each component, i.e.,

(1.20) EΥn^m mod (H_n^{(2)}, H_n^{(3)}, . . . , H_n^{(n)}) = B_m(Hn, . . . , Hn).

This aligns with the above formulas for EΥn = Hn = B_1(Hn) and EΥn² = Var(Υn) + (EΥn)² = Hn² + Hn − H_n^{(2)} = B_2(Hn, Hn) − H_n^{(2)}. A future research area can explore finding closed forms for the higher moments of Υn, including establishing (1.20) for higher orders, as well as for other general Stirling distributions.
474
1.5. Butterfly matrices. This section will define and introduce relevant background for butterfly matrices and random butterfly matrices, including the Haar-butterfly matrices, Bs(N, ΣS). See [17, 23] for a fuller discussion of additional numerical and spectral properties of butterfly matrices and random butterfly matrices.

The butterfly matrices of order N are an ensemble of special orthogonal matrices defined recursively as follows: let {1} be the order-1 butterfly matrices; for each order-N butterfly matrix B with N > 1, there exist order-N/2 butterfly matrices A1, A2 and order-N/2 symmetric matrices C, S such that CS = SC and C² + S² = I_{N/2}, where

(1.21) B = [ C S ; −S C ] [ A1 0 ; 0 A2 ] = [ CA1 SA2 ; −SA1 CA2 ].
500
The simple butterfly matrices are formed such that A1 = A2 at each recursive step. The scalar butterfly matrices, B(N), are formed using (C, S) = (cos θ I, sin θ I) for some angle θ at each recursive step, where Bs(N) then denotes the simple scalar butterfly matrices. Note then each B ∈ B(N) is of the form

(1.22) B = (B(θ) ⊗ I_{N/2})(A1 ⊕ A2)

for

(1.23) B(θ) = [ cos θ sin θ ; −sin θ cos θ ]

the (counter-clockwise) rotation matrix, while each B ∈ Bs(N) can then be written in the form

(1.24) B = B(θ) = ⊗_{j=1}^{n} B(θ_{n−j+1})

for θ ∈ [0, 2π)^n. Note Bs(N) ⊂ B(N) ⊂ SO(N), with equality when N ≤ 2. While B(N) is not multiplicatively closed, Bs(N) forms a closed subgroup of SO(N) with

(1.25) B(θ)B(ψ) = B(θ + ψ) and B(θ)^{−1} = B(−θ)

for B(θ), B(ψ) ∈ Bs(N).
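A minimal sketch of the simple scalar butterfly construction (1.24) as a Kronecker product of rotations, checking membership in SO(N) and the group law (1.25) (the angle vectors below are arbitrary illustrative choices):

```python
import numpy as np
from functools import reduce

def rotation(theta):
    """The 2x2 counter-clockwise rotation matrix B(theta) of (1.23)."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

def simple_scalar_butterfly(thetas):
    """B(theta) = kron_{j=1}^n B(theta_{n-j+1}) as in (1.24); order N = 2^n."""
    return reduce(np.kron, [rotation(t) for t in reversed(thetas)])

theta = np.array([0.3, 1.1, 2.5])
psi = np.array([0.7, 0.2, 4.0])
B = simple_scalar_butterfly(theta)
N = 2 ** len(theta)
assert B.shape == (N, N)
assert np.allclose(B.T @ B, np.eye(N))                    # B is orthogonal
assert np.allclose(B @ simple_scalar_butterfly(psi),
                   simple_scalar_butterfly(theta + psi))  # group law (1.25)
```

The group law falls out of the mixed-product property (1.6) applied factor by factor, since the rotation angles add in each Kronecker component.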
532
Let Σ be a collection of pairs (C_k, S_k) of dimension-2^k random symmetric matrices with C_k S_k = S_k C_k and C_k² + S_k² = I_{2^k} for k ≥ 1. We write B(N, Σ) and Bs(N, Σ) to denote the ensembles of random butterfly matrices and random simple butterfly matrices formed by independently sampling (C, S) from Σ at each recursive step. Let

(1.26) ΣS = {(cos θ^{(k)} I_{2^{k−1}}, sin θ^{(k)} I_{2^{k−1}}) : θ^{(k)} iid Uniform([0, 2π)), k ≥ 1}

and

(1.27) ΣD = { ⊕_{j=1}^{2^{k−1}} (cos θ_j^{(k)}, sin θ_j^{(k)}) : θ_j^{(k)} iid Uniform([0, 2π)), k ≥ 1}.

A large focus for the remainder of this paper is on the Haar-butterfly matrices, Bs(N, ΣS), while the numerical experiments in Section 6 will also use the other random scalar butterfly ensemble, B(N, ΣS), along with the random diagonal butterfly ensembles, B(N, ΣD) and Bs(N, ΣD). Since Bs(N) is a compact abelian group, it has a Haar measure that enables uniform sampling of its elements. The name Haar-butterfly matrices for Bs(N, ΣS) is used precisely because this construction aligns exactly with the Haar measure on Bs(N).
557
Proposition 1.2 ([23]). Bs(N, ΣS) ∼ Haar(Bs(N)).

Using the mixed-product property, matrix factorizations of each Kronecker component lead to a matrix factorization of Kronecker products. In particular, this holds for the LU factorizations of Bs(N) using GENP and GEPP (see Proposition 4.1). Notably, the permutation matrix factors from the GEPP factorization of B ∈ Bs(N) come from the butterfly permutation matrices, P_N^{(B)}, which are defined recursively as P_2^{(B)} = {I2, P_{(1 2)}} and

(1.28) P_N^{(B)} = { ⊗_{j=1}^{n} P_{(1 2)}^{e_j} : e_j ∈ {0, 1} } = P_2^{(B)} ⊗ P_{N/2}^{(B)}

for N > 2. These resulting permutations are explicit examples of perfect shuffles. See [6] for a thorough overview of perfect shuffles and some of their inherent applications and properties. The mixed-product property further yields that P_N^{(B)} comprises a subgroup of permutation matrices that is isomorphic to (Z/2Z)^n. Moreover, if B ∼ Bs(N, ΣS), then P ∼ Haar(P_N^{(B)}) for PB = LU the GEPP factorization of B (see Corollary 4.2).
595
2. Distribution of GEPP pivot movements for particular random matrices. We first state the main theoretical result on the distribution of the number of GEPP pivot movements used, which applies to particular input random matrix models. These make use of Υn, the Stirling-1 distribution (see Subsection 1.4), and P_N^{(B)}, the butterfly permutation matrices (see Subsection 1.5).

Theorem 2.1. (I) If A is an n × n random matrix with independent columns whose first n − 1 columns have continuous iid entries, then P ∼ Uniform(Pn) for PA = LU the GEPP factorization of A for n ≥ 2. Moreover, the number of GEPP pivot movements needed for A is equal in distribution to n − Υn.
(II) If B ∼ Bs(N, ΣS), then P ∼ Uniform(P_N^{(B)}) for PB = LU the GEPP factorization. Moreover, the number of GEPP pivot movements needed for B is equal in distribution to (N/2) Bernoulli(1 − 1/N).
618
The proof of Theorem 2.1 is postponed until Section 3 for (I) and Section 4 for (II). Part (I) gives the most general result that yields pivot movements following the Stirling-1 distribution. This includes iid random matrices when the entry distribution is continuous.

Corollary 2.2. If A is an n × n matrix with continuous iid entries, then the number of GEPP pivot movements needed for A is equal in distribution to n − Υn.
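Corollary 2.2 can be probed numerically: count the GEPP row swaps for iid Gaussian matrices and compare the sample mean to E[n − Υn] = n − Hn, using (1.17). A seeded Monte Carlo sketch (the size, trial count, and tolerance are arbitrary choices, with the tolerance loose relative to the sampling error):

```python
import numpy as np

def gepp_pivot_count(A):
    """Number of GEPP steps that require an actual row transposition."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    swaps = 0
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))  # partial-pivot search
        if p != k:
            A[[k, p]] = A[[p, k]]
            swaps += 1
        A[k+1:, k:] -= np.outer(A[k+1:, k] / A[k, k], A[k, k:])
    return swaps

rng = np.random.default_rng(2)
n, trials = 8, 4000
counts = [gepp_pivot_count(rng.standard_normal((n, n))) for _ in range(trials)]
H_n = sum(1.0 / j for j in range(1, n + 1))
assert abs(np.mean(counts) - (n - H_n)) < 0.1   # E = n - H_n by (1.17)
```

For n = 8 the expected count n − H_8 is roughly 5.28, so on average only a couple of GE steps avoid a swap.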
623
If A is an iid matrix with entries taken from a distribution ξ with atoms (i.e., P(ξ = c) > 0 for some c), then GEPP yields a different distribution on the number of pivot movements.

Example 2.3. If A is 2 × 2 where the Aij are iid Bernoulli(p), then GEPP yields the permutation matrix factor P = P_{(1 2)}^ζ where ζ ∼ Bernoulli(p(1 − p)). This follows since a pivot movement is needed only if A11 = 0 and A21 = 1. A pivot is needed with probability p(1 − p) ≤ 1/4, so the number of GEPP pivot movements is equal in distribution to ζ.

Other continuous random models that do not fit the conditions in (I) yield different distributions on the resulting GEPP permutation matrix factors.
633
Example 2.4. Consider G ∼ GOE(2), where G11, G22 ∼ N(0, 2) and G21 = G12 ∼ N(0, 1) are independent. The resulting number of GEPP pivot movements for G is equal in distribution to Bernoulli(p) for

p = P(|G21| > |G11|) = P(|Z1| > √2 |Z2|) = P(Z1^2/Z2^2 > 2)
  = P(F_{1,1} > 2) = (1/π) ∫_2^∞ dx/(√x (1 + x)) = (2/π) arctan(1/√2) ≈ 0.391826552

using Zi ∼ N(0, 1) iid, where Fµ,ν denotes the F-distribution with µ > 0 numerator degrees of freedom (d.f.) and ν > 0 denominator d.f.
Example 2.5. Consider now G ∼ GUE(2), where G11, G22 ∼ N(0, 1) while G12 = G21 ∼ NC(0, 1); then the resulting number of GEPP pivot movements would similarly be equal in distribution to Bernoulli(q) for

q = P(|G21| > |G11|) = P(√((Z1^2 + Z2^2)/2) > |Z3|) = P((Z1^2 + Z2^2)/2 > Z3^2)
  = P(F_{2,1} > 1) = ∫_1^∞ dx/(1 + 2x)^{3/2} = 1/√3 ≈ 0.577350269.
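Both closed-form probabilities from Examples 2.4 and 2.5 can be checked numerically. The sketch below (sample size and names are arbitrary choices, not from the paper) estimates p and q by direct simulation of the underlying normal variables and compares against the stated closed forms.

```python
import numpy as np

# Monte Carlo sketch: p = P(|Z1| > sqrt(2)|Z2|) from Example 2.4 and
# q = P((Z1^2 + Z2^2)/2 > Z3^2) from Example 2.5, for iid standard normals.
rng = np.random.default_rng(1)
m = 500_000
Z = rng.standard_normal((3, m))
p_hat = np.mean(np.abs(Z[0]) > np.sqrt(2) * np.abs(Z[1]))
q_hat = np.mean((Z[0]**2 + Z[1]**2) / 2 > Z[2]**2)
p_exact = 2 / np.pi * np.arctan(1 / np.sqrt(2))  # ≈ 0.391826552
q_exact = 1 / np.sqrt(3)                         # ≈ 0.577350269
print(p_hat, p_exact)
print(q_hat, q_exact)
```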
Remark 2.6. In comparison to Examples 2.3 to 2.5, Corollary 2.2 yields that if G is a continuous iid 2 × 2 matrix, then the number of GEPP pivots needed would be equal in distribution to 2 − Υ2 ∼ Bernoulli(1/2), where we note P(Υ2 = 1) = |s(2, 1)|/2! = 1/2 = |s(2, 2)|/2! = P(Υ2 = 2).
J. PECA-MEDLIN
A further result from Theorem 2.1 and Corollary 2.2 involves the relationship of the LU factorization of an iid matrix to its QR factorization. If A has the GEPP factorization PA = LU, then its QR factorization A = QR yields PQ = AR−1 = L(UR−1).7 In particular, the resulting permutation matrix, and hence the pivot movements that would be needed using GEPP on Q and A, are identical when no ties are encountered by Theorem 1.1. This observation then can be combined with Stewart's realization of Haar orthogonal and Haar unitary sampling using Ginibre ensembles (cf. Subsection 1.2) to yield:

Corollary 2.7. If A ∼ Haar(O(n)) or A ∼ Haar(U(n)), then the number of GEPP pivot movements needed for A is equal in distribution to n − Υn.
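Corollary 2.7 lends itself to a quick numerical check. The sketch below is not from the paper: the helper names `gepp_pivot_count` and `haar_orthogonal` are my own, and the size and trial count are arbitrary. It samples Haar(O(n)) via QR of a Ginibre matrix with the standard sign correction and compares the mean pivot count against E[n − Υn] = n − Hn, using that a uniform permutation has Hn cycles on average.

```python
import numpy as np

def gepp_pivot_count(A):
    """Naive GEPP on a copy of A; count steps whose pivot search picks a row
    strictly below the diagonal (i.e., i_k > k)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    count = 0
    for k in range(n - 1):
        i = k + int(np.argmax(np.abs(A[k:, k])))
        if i != k:
            A[[k, i]] = A[[i, k]]
            count += 1
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return count

def haar_orthogonal(n, rng):
    # QR of a Ginibre matrix; rescaling columns by sign(diag(R)) makes Q Haar.
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(2)
n, trials = 8, 2000
mean_pivots = np.mean([gepp_pivot_count(haar_orthogonal(n, rng))
                       for _ in range(trials)])
harmonic = sum(1 / k for k in range(1, n + 1))
print(mean_pivots, n - harmonic)  # empirical mean vs. n - H_n ≈ 5.28
```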
A similar approach can then yield information about the pivot movements needed on Haar(SO(n)) and Haar(SU(n)). Note Stewart's sampling method for Haar(O(n)) can be indirectly rephrased in terms of the Subgroup algorithm, which was formally established by Diaconis and Shahshahani [7]. The Subgroup algorithm enables a uniform sampling method for a compact group by using a subgroup and its associated cosets:

Theorem 2.8 (Subgroup algorithm, [7]). If G is a compact group, H is a closed subgroup of G, and G/H is the set of left cosets of H, then if x ∼ Uniform(G/H) and y ∼ Haar(H), then xy ∼ Haar(G).

Theorem 2.8 can analogously be stated using right cosets.
+ Stewart’s approach for sampling Haar orthogonal matrices can be realized in light of The-
708
+ orem 2.8 by iteratively using Householder reflectors as coset representatives of the subgroup
709
+ of orthogonal matrices whose first row and column have 1 in the leading diagonal and are
710
+ zero elsewhere.8 More directly, one can then realize Haar(SO(n)) by using the coset repre-
711
+ sentatives of O(n)/ SO(n) of D(x) = diag(x, 1, . . . , 1) for x = ±1: if A ∼ Haar(SO(n)) and
712
+ x ∼ Uniform(±1), then D(x)A ∼ Haar(O(n)).9 Moreover, group homomorphisms can be used
713
+ to yield uniform measures on cosets as push forward measures of Haar measures. In particular,
714
+ since SO(n) is a normal subgroup of O(n) (since it is the kernel of the group homomorphism
715
+ det : O(n) → R), then the natural quotient map C2 = {±1} ∼= O(n)/ SO(n) ∼= O(n)\ SO(n)
716
+ yields C2 × SO(n) ∼= O(n). This yields uniform sampling from both C2 and SO(n) using
717
+ push forwards of the Haar sampling on O(n) along with the natural projection maps to each
718
+ component, which similarly holds using instead U(n) and SU(n):
719
+ Corollary 2.9. x ∼ Uniform(±1) and A ∼ Haar(SO(n)) iff AD(x) ∼ Haar(O(n)), while
720
+ y ∼ Uniform(T) and A ∼ Haar(SU(n)) iff AD(y) ∼ Haar(U(n)).
721
Hence, mimicking the result aligning the GEPP permutation matrix factors between A and the Q from the A = QR factorization, we similarly have identical P factors for A ∼ Haar(O(n)) and B ∼ Haar(SO(n)) where A = BD(x).10
Corollary 2.10. If A ∼ Haar(SO(n)) or A ∼ Haar(SU(n)), then the number of GEPP pivot movements needed for A is equal in distribution to n − Υn.

7 Note UR−1 ∈ U(F, n) when U, R ∈ U(F, n) since U(F, n) is a group.
8 The Householder reflectors then can be uniformly sampled by choosing x ∈ S^{n−1} uniformly.
9 Similarly Haar(U(n)) and Haar(SU(n)) can be related using the coset representatives D(x) for x ∈ T.
10 The argument from the paragraph before Corollary 2.7 is identical after replacing R ∈ U(F, n) with D(x) ∈ U(F, n).
In [16, 17], the authors studied particular random preconditioning transformations of the form UAV∗ for iid U, V so that the resulting output almost surely (a.s.) has an LU factorization. In [17], the authors further studied the numerical properties of GE pivoting strategies, including GEPP, after applying these random preconditioning transformations. Relating this to our current topic, one could ask: how many pivot movements are needed using GEPP after these random transformations?

For the Haar orthogonal case (as well as the unitary, special orthogonal and special unitary cases), the result is immediate: Let A be a nonsingular real matrix. Suppose U, V are iid Haar(O(n)) and let B = AV∗, which is independent of U. Let B = QR be the QR factorization of B. Then UAV∗ = UB = U(QR) = (UQ)R. The permutation matrix factor resulting from GEPP for UAV∗ is then identical (if no ties are encountered, which holds a.s.) to the permutation matrix factor needed for UQ. Since Haar(O(n)) is right invariant, UQ ∼ U. This then yields that the resulting number of pivots under this two-sided transformation is equal in distribution to the (I) case.

Corollary 2.11. If A is an n × n nonsingular matrix, and U, V are iid from the Haar measure on O(n), U(n), SO(n) or SU(n), then the number of GEPP pivot movements needed for UAV∗ is equal in distribution to n − Υn.
Remark 2.12. Extensions of (II) from Theorem 2.1 are less direct, since (II) relies heavily on explicit structural properties of Haar-butterfly matrices, Bs(N, ΣS). Analogous results can be established if the focus is restricted to matrices in ⊗^n F^{2×2}.
3. Permutations from GEPP and uniform sampling of Sn. Recall how the permutation matrix factor is formed when applying GEPP to a matrix. First a search for a largest magnitude element is performed on the first column of A = A(1). After that value is found, say at index i1 (for i1 ≥ 1), then the corresponding row permutation for the transposition σ1 = (1 i1) is applied to A (using P(1) = Pσ1), after which the standard GE step follows to iteratively eliminate each element under the first diagonal value to form A(2) using the pivot element and the resulting lower triangular GE elimination factor L̃(1,1), so that

(3.1) A(2) = (L(1,1))−1 P(1) A(1).

Then GE continues with a search for the largest magnitude element on the leading column of A(2)_{2:n,2:n}, which results in a transposition of the form σ2 = (2 i2) for i2 ≥ 2. The corresponding row permutation is performed (using P(2)) followed by the standard GE elimination step using the second pivot (using L̃(2,2)), with which one groups the lower triangular and permutation matrix factors using the relation L(j,k) = P(j) L(j−1,k) P(j),11 so that

(3.2) A(3) = (L(2,2))−1 P(2) A(2) = (L(2,1) L(2,2))−1 P(2) P(1) A(1).

The process then continues, moving one column at a time, which results in the final GEPP factorization PA = LU for P = P(n−1) · · · P(2) P(1), L = L(n−1,n−1) · · · L(n−1,2) L(n−1,1), and U = A(n).

11 Note P(j) = (P(j))−1 since these permutation matrices correspond to transpositions, which have order 2.
Hence, the resulting permutation matrix factor is built up step by step using σk = (k ik), resulting in

(3.3) P = P(n−1) · · · P(2) P(1) = Pσn−1 · · · Pσ2 Pσ1 = P_{σn−1···σ2σ1} = Pσ

for

(3.4) σ = (n − 1 in−1) · · · (2 i2)(1 i1),

where j ≤ ij ≤ n for each j.
Remark 3.1. If ik = k, then (3.4) can abstain from including the trivial permutation (k ik). In particular, (3.4) can trivially be expanded to σ = (n in)σ where necessarily in = n.

(3.4) is useful because every permutation can be put in this form.
Lemma 3.2. Every permutation in Sn can be written in the form (3.4).

In particular, every permutation is realizable as the GEPP permutation matrix factor for some nonsingular input matrix.

Proof. By counting, it is clear n! inputs can be used to form σ (n choices for i1, n − 1 choices for i2, and so on). Moreover, we see this correspondence is one-to-one: suppose σ and σ′ are formed using distinct inputs, and let k be the minimal index such that ik ≠ i′k. Without loss of generality, assume k = 1. Let ρ = σ(1 i1) and ρ′ = σ′(1 i1). Then ρ(1) = σ(i1) = 1 while ρ′(1) = σ′(i1) > 1; it follows σ ≠ σ′, which yields this mapping is an injection and hence a bijection since |Sn| is finite.

Alternatively, this can be realized through induction: this clearly holds for n = 2 since S2 = {1, (1 2)}. Now assume it holds for n − 1; then one can realize (using the inductive hypothesis) Sn−1 ≅ {(n − 1 in−1) · · · (2 i2) : j ≤ ij ≤ n} as a subgroup of Sn, by recognizing that the set of permutations in Sn that fix 1 is isomorphic to Sn−1. Moreover, adopting the crude labeling of Sn−1 for this subgroup of Sn, [Sn : Sn−1] = n!/(n − 1)! = n, and every (right) coset of Sn−1 is of the form Sn−1(1 i1), so that Sn = ⋃_{j=1}^{n} Sn−1(1 j).
The coset decomposition structure for Sn used in the alternative proof of Lemma 3.2 was utilized by Diaconis and Shahshahani through an application of the Subgroup algorithm for generating a Haar measure on Sn [7]. Their result yields a means to sample uniformly from Sn as the convolution of the Haar measure on Sn−1 and the uniform measure on {(1 j) : 1 ≤ j ≤ n}. Explicitly, one can generate a uniform permutation σ ∈ Sn by first uniformly (and independently) sampling both ρ ∈ Sn−1 and σ1 ∈ {(1 j) : 1 ≤ j ≤ n}, and then forming σ = ρσ1. Iteratively applying this, one can uniformly sample a permutation by uniformly sampling each ik ∈ {k, k + 1, . . . , n}, which yields a permutation in the form (3.4). This establishes:

Corollary 3.3. If ik ∼ Uniform{k, k + 1, . . . , n} for each k = 1, . . . , n − 1, then

(3.5) (n − 1 in−1) · · · (2 i2)(1 i1) ∼ Uniform(Sn).
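Corollary 3.3 is straightforward to implement. The sketch below (function name and sample size are my own choices) builds σ by left-multiplying the transpositions as in (3.4) and tallies the outcomes over S3 to check uniformity.

```python
import numpy as np
from collections import Counter

def sample_sigma(n, rng):
    """Sample sigma = (n-1 i_{n-1})...(1 i_1) with i_k ~ Uniform{k,...,n},
    here 0-indexed; left-multiplying by (k i) swaps the two values k and i."""
    sigma = np.arange(n)
    for k in range(n - 1):
        i = int(rng.integers(k, n))           # i_k ~ Uniform{k, ..., n-1}
        a = int(np.where(sigma == k)[0][0])   # position currently mapped to k
        b = int(np.where(sigma == i)[0][0])   # position currently mapped to i
        sigma[a], sigma[b] = sigma[b], sigma[a]
    return tuple(int(x) for x in sigma)

rng = np.random.default_rng(3)
counts = Counter(sample_sigma(3, rng) for _ in range(60_000))
print(dict(counts))  # all 6 permutations of S3, each with frequency ≈ 1/6
```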
Moreover, the steps where pivot movements occur can be read off directly from σ when it is written in the form (3.4). If Pσ is the resulting permutation matrix factor from applying GEPP to A, then we explicitly know that at step k in GE, the pivot search on A(k) resulted in the transposition (k ik). This translates directly to how many pivot movements are needed through a particular implementation of GEPP by counting how many of these transpositions have ik > k. (So no pivot movement is needed if k = ik, since this translates to the leading pivot entry being maximal in its respective column.)
It follows then, explicitly, if A is a random matrix such that the resulting GEPP permutation matrix factor P satisfies P ∼ Uniform(Pn) = Haar(Pn),12 then X, the number of pivot movements needed during this implementation of GEPP on A, necessarily satisfies

(3.6) P(X = k) = #{σ ∈ Sn : j = ij for n − k indices j} / |Sn|.
Furthermore, for σ of the form (3.4), the number of times j = ij then corresponds precisely to the number of disjoint cycles in the representation of σ: if ik = k, then k is contained in a cycle with elements no larger than k (each successive row transposition will fix k), so k is the maximal element in its cycle. (Recall if k is a fixed point of a permutation, then k comprises its own 1-cycle.) If ik > k, then k belongs to a cycle with a larger element. So counting the number of times ik = k is then equivalent to counting the number of disjoint cycles in the disjoint cycle decomposition of σ, since these can be enumerated by using their maximal elements. As established in Subsection 1.4, these then align exactly with the absolute Stirling numbers of the first kind, |s(n, k)|.
Example 3.4. Consider σ = (5 6)(3 4)(2 4)(1 3) ∈ S6. By default in = n, so i6 = 6 will be considered as not needing a pivot movement on the nth GE step (this is always vacuously true since GE completes after n − 1 steps). Here, we have ik = k only for k = 4 and k = 6, so we should expect σ to consist of precisely 2 cycles in its disjoint cycle decomposition, with 4 and 6 the maximal elements in those two cycles. This is verified by finding the final disjoint cycle form of σ = (1 4 2 3)(5 6), for which 4 and 6 are the maximal elements of each of the disjoint cycles.
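The bookkeeping in Example 3.4 can be checked mechanically. The helpers below are my own (plain Python, hypothetical names): `compose` applies the listed 1-indexed transpositions rightmost first, and `cycle_count` counts disjoint cycles.

```python
def compose(*transpositions):
    """Compose 1-indexed transpositions; the rightmost factor acts first."""
    def sigma(x):
        for a, b in reversed(transpositions):
            if x == a:
                x = b
            elif x == b:
                x = a
        return x
    return sigma

def cycle_count(images):
    """Number of disjoint cycles of the permutation x -> images[x-1]."""
    seen, cycles = set(), 0
    for start in range(1, len(images) + 1):
        if start not in seen:
            cycles += 1
            x = start
            while x not in seen:
                seen.add(x)
                x = images[x - 1]
    return cycles

sigma = compose((5, 6), (3, 4), (2, 4), (1, 3))
images = [sigma(x) for x in range(1, 7)]
print(images, cycle_count(images))  # [4, 3, 1, 2, 6, 5], i.e. (1 4 2 3)(5 6); 2 cycles
```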
Remark 3.5. Connecting the above conversation with the form (3.4), one can form an n-cycle in Sn by requiring ik > k for all k up to n − 1. Moreover, connecting this further to Corollary 3.3, one can uniformly sample n-cycles by sampling ik ∼ Uniform{k + 1, . . . , n} for k = 1, 2, . . . , n − 1. Let

(3.7) P_n^{n-cycles} = {Pσ : σ is an n-cycle in Sn}

denote the subset of permutation matrices in Pn that correspond to the n-cycles. If the corresponding GEPP permutation matrix factor is in P_n^{n-cycles} for an n × n matrix A, then every GE step required a pivot movement for A. Moreover, if ik ∼ Uniform{k + 1, . . . , n} for each k is used to generate the corresponding n-cycle σ = (n − 1 in−1) · · · (1 i1), then Pσ ∼ Uniform(P_n^{n-cycles}).
3.1. Proof of (I) of Theorem 2.1. Now we have the sufficient tools to establish:

Theorem 2.1: (I) If A is an n × n random matrix with independent columns whose first n − 1 columns have continuous iid entries, then P ∼ Uniform(Pn) for PA = LU the GEPP factorization of A for n ≥ 2. Moreover, the number of GEPP pivot movements needed for A is equal in distribution to n − Υn.

12 This is equivalent to the statement P = Pσ for σ ∼ Uniform(Sn).
Proof. Suppose A satisfies the (I) hypothesis, and let P be the associated GEPP permutation matrix factor for A. We will prove P ∼ Uniform(Pn) using induction on n ≥ 2. Suppose n = 2, so the first column of A has continuous iid entries A11 and A21. Using GEPP, a pivot will be needed only if |A21| > |A11|, but P(|A11| > |A21|) = 1/2 since A11 ∼ A21 (and P(|A11| = |A21|) = 0 since these are continuous random variables). Hence, P = P_(1 2)^ζ ∼ Haar(P2) for ζ ∼ Bernoulli(1/2).

Now assume the result holds for any random matrix of dimension n − 1 with independent columns whose first n − 2 columns have continuous iid entries. Let A be the dimension n matrix satisfying the statement. Using GEPP on A, for the first pivot search using the leading column of A = A(1), we have

(3.8) P(max(|A11|, |A21|, . . . , |An1|) = |A11|) = P(max(|A11|, |A21|, . . . , |An1|) = |Ak1|)

for each k = 1, 2, . . . , n since A11 ∼ Ak1 are continuous iid. It follows the first GEPP row transposition is of the form P(1) = Pσ1 for σ1 = (1 i1) with i1 ∼ Uniform{1, 2, . . . , n}, where i1 = argmax_k |Ak1|. Now applying the GE elimination factor L(1,1) results in A(2) = (L(1,1))−1 P(1) A(1). Moreover, Ã(1) = P(1) A(1) still satisfies the (I) hypothesis since row permutations preserve each of the column independence and iid properties of A(1). A(2) then has entries of the form

(3.9) A(2)_ij = Ãij − Ã1j · L(1)_i1 = Ãij − Ã1j · (Ãi1/Ã11)

for i, j ≥ 2. In particular, since the columns are independent and have continuous iid entries, then L(1)_i1 is independent of Ãij for each i when j ≥ 2, while Ã1j · L(1)_i1 is independent of Ãij for each i ≥ 2, so that B = A(2)_{2:n,2:n} also satisfies the (I) hypothesis. Now we can apply the inductive hypothesis to B to yield a GEPP factorization PB = LU where P ∼ Uniform(Pn−1). Embedding Pn−1 into Pn using the map Q ↦ 1 ⊕ Q then yields the resulting GEPP permutation matrix factor

(3.10) Pσ = (1 ⊕ P) P(1)

for A. Moreover, since P ∼ Uniform(Pn−1), then 1 ⊕ P = Pρ for ρ ∼ Uniform(Sn−1)13 while P(1) = Pσ1 for σ1 ∼ Uniform{(1 j) : j = 1, 2, . . . , n}, so that by the Subgroup algorithm we have σ = ρσ1 ∼ Uniform(Sn). It follows Pσ ∼ Uniform(Pn). This establishes the first statement of (I) from Theorem 2.1.

Now suppose X is the number of pivot movements needed using GEPP on A. The prior conversation up to (3.6) establishes the correspondence

P(X = n − k) = #{σ ∈ Sn : j = ij for k indices j}/n!
             = #{σ ∈ Sn : σ has k cycles in its disjoint cycle decomposition}/n!
             = |s(n, k)|/n! = P(Υn = k)

for k = 1, 2, . . . , n, which yields the desired result n − X ∼ Υn.

13 Now associating Sn−1 with the isomorphic subgroup of Sn that fixes 1.
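The distributional statement n − X ∼ Υn can be probed by Monte Carlo. The sketch below (helper names and parameters are arbitrary choices, not from the paper) runs naive GEPP on Gaussian iid matrices and compares the empirical law of n − X with |s(n, k)|/n! from the standard Stirling-number recurrence.

```python
import numpy as np
from math import factorial

def gepp_pivot_count(A):
    # Naive GEPP; count steps whose pivot search selects a row below the diagonal.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    count = 0
    for k in range(n - 1):
        i = k + int(np.argmax(np.abs(A[k:, k])))
        if i != k:
            A[[k, i]] = A[[i, k]]
            count += 1
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return count

def stirling1_unsigned(n):
    # |s(n, k)| via |s(m, k)| = |s(m-1, k-1)| + (m-1)|s(m-1, k)|.
    s = [1]
    for m in range(1, n + 1):
        s = [(s[k - 1] if k >= 1 else 0) + (m - 1) * (s[k] if k < len(s) else 0)
             for k in range(m + 1)]
    return s

rng = np.random.default_rng(4)
n, trials = 5, 20_000
counts = np.zeros(n + 1)
for _ in range(trials):
    X = gepp_pivot_count(rng.standard_normal((n, n)))
    counts[n - X] += 1              # tally n - X, predicted to follow Υn
empirical = counts / trials
theory = np.array(stirling1_unsigned(n), dtype=float) / factorial(n)
print(np.round(empirical, 3))
print(np.round(theory, 3))          # |s(n, k)| / n! for k = 0, ..., n
```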
4. Butterfly permutations. Using the Kronecker product factorization for B(θ) ∈ Bs(N) (where N = 2^n) along with the mixed-product property, matrix factorizations of each Kronecker component yield the matrix factorization of the resulting butterfly matrix. This was used in [17] to yield both the eigenvalue (Schur) decomposition as well as the LU factorizations of scalar simple butterfly matrices, Bs(N). For completeness, we will restate this latter result:

Proposition 4.1 ([17]). Let B = B(θ) ∈ Bs(N).

(I) If cos θi ≠ 0 for all i, then B has the GENP factorization B = Lθ Uθ where Lθ = ⊗_{j=1}^n L_{θ_{n−j+1}} and Uθ = ⊗_{j=1}^n U_{θ_{n−j+1}} for

(4.1) Lθ = [1, 0; −tan θ, 1] and Uθ = [cos θ, sin θ; 0, sec θ].
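The 2 × 2 identity behind (4.1) can be checked directly. The sketch below assumes the 2 × 2 butterfly factor is the rotation B(θ) = [cos θ, sin θ; −sin θ, cos θ], as implied by the product Lθ Uθ; the angle is an arbitrary choice.

```python
import numpy as np

# Check of (4.1) at an arbitrary angle with cos(theta) != 0: the 2x2 rotation
# factors as L_theta @ U_theta with no pivoting (GENP).
theta = 0.4
B = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
L = np.array([[1.0, 0.0],
              [-np.tan(theta), 1.0]])
U = np.array([[np.cos(theta), np.sin(theta)],
              [0.0, 1.0 / np.cos(theta)]])
print(np.allclose(L @ U, B))  # True
```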
(II) Using θ ∈ [0, 2π)^n, let

(4.2) ej = 1 if |tan θj| ≤ 1, and ej = 0 if |tan θj| > 1,

for each j. Let θ′ ∈ [0, 2π)^n be such that θ′_j = (π/2)(1 − ej) + (−1)^{1−ej} θj = θj ej + (π/2 − θj)(1 − ej) for each j. If |tan θj| ≠ 1 for each j, then the GEPP factorization of B is PB = LU where P = Pθ = ⊗_{j=1}^n P_{θ_{n−j+1}}, L = L_{θ′}, and U = U_{θ′} Dθ for

(4.3) P_{θj} = P_{(1 2)}^{1−ej} and D_{θj} = (−1)^{1−ej} ⊕ 1.

Moreover, (PB)^{(k)} = B(θ′)^{(k)} Dθ for all k where B(θ′) ∈ Bs(N).

In particular, P ∈ P(B)_N (cf. (1.28)) for PB = LU the GEPP factorization of B ∈ Bs(N).
Note if θ ∼ Uniform([0, 2π)) then P(|tan θ| ≤ 1) = 1/2, so the resulting GEPP permutation matrix factor Pθ for B(θ) ∼ Bs(N, ΣS), a Haar-butterfly matrix, then satisfies

(4.4) P(Pθ = Q) = 2^{−n} = 1/N = 1/|P(B)_N|

for each Q ∈ P(B)_N (using also Propositions 1.2 and 4.1). This establishes the first part of (II) from Theorem 2.1:
Corollary 4.2. If B ∼ Bs(N, ΣS), then P ∼ Uniform(P(B)_N) for the GEPP factorization PB = LU.
Next, we can connect the resulting GEPP permutation matrix factors for Haar-butterfly matrices to the corresponding number of total pivot movements needed. This can be accomplished by finding the associated permutation σ ∈ SN, written in the form (3.4), for the corresponding butterfly permutation.
Proposition 4.3. If n = 1, P(B)_2 = P2. For n > 1, then Pσ ∈ P(B)_{2N} where σ ∈ S_{2N} written in the form (3.4) is one of four options: either σ = 1 if Pσ = I2 ⊗ IN; or σ ≠ 1 is the product of N disjoint transpositions, where σ = (N 2N) · · · (2 N + 2)(1 N + 1) if Pσ = P(1 2) ⊗ IN, or

(4.5) σ = (a1 + N a2 + N) · · · (aN−1 + N aN + N)(a1 a2) · · · (aN−1 aN)

if Pσ = I2 ⊗ Pρ, or

σ = (a2 a1 + N)(a1 a2 + N) · · · (aN aN−1 + N)(aN−1 aN + N)

if Pσ = P(1 2) ⊗ Pρ, where Pρ ∈ P(B)_N with ρ = (a1 a2) · · · (aN−1 aN) ∈ SN in the form (3.4) such that a2k−1 < a2k for each k unless ρ = 1, and a2k−1 > a2k+1 for each k when n > 2.
Proof. We will use induction on n = log2 N along with the fact P(B)_{2N} = P(B)_2 ⊗ P(B)_N. For n = 2, starting with P(B)_2 = P2 = {I2, P(1 2)} we have

(4.6) P(B)_4 = P2 ⊗ P2 = {I2 ⊗ I2, P(1 2) ⊗ I2, I2 ⊗ P(1 2), P(1 2) ⊗ P(1 2)}
(4.7)        = {I4, P(2 4)(1 3), P(3 4)(1 2), P(2 3)(1 4)}.

The corresponding permutations as written above then all satisfy the form (3.4), where we abstain from including the trivial permutations (k ik) when ik = k. Moreover, each associated non-trivial permutation can be written as the product of N = 2 disjoint transpositions of the form (a1 a2)(a3 a4), which then have a1 > a3 and a2k−1 < a2k for k = 1, 2. (Note the (distinct) transpositions used must necessarily be disjoint since necessarily Pσ^2 = P_{σ^2} = I.)

Assume the result holds for n. The n + 1 case follows by just reading off the corresponding permutations I2 ⊗ Pρ and P(1 2) ⊗ Pρ for ρ = (a1 a2) · · · (aN−1 aN) such that Pρ ∈ P(B)_N (for ρ already in form (3.4)). For ρ = 1, then P1 = IN yields I2 ⊗ IN = I2N and P(1 2) ⊗ IN = P_{(N 2N)···(2 N+2)(1 N+1)}; if Pρ ∈ P(B)_N for ρ = (a1 a2) · · · (aN−1 aN) ∈ SN where a2k+1 < a2k−1 < a2k for each k, then

(4.8) Pσ = I2 ⊗ Pρ = P_{(a1+N a2+N)···(aN−1+N aN+N)(a1 a2)···(aN−1 aN)}

and

Pσ = P(1 2) ⊗ Pρ = P_{(a2 a1+N)(a1 a2+N)···(aN aN−1+N)(aN−1 aN+N)}.

Moreover, each associated permutation σ in the form (3.4) is then either the trivial permutation or the product of N disjoint transpositions and can be written in the form (b1 b2) · · · (b2N−1 b2N) where b2k+1 < b2k−1 < b2k for each k. This holds directly by construction for the associated permutations for I2 ⊗ IN, P(1 2) ⊗ IN and I2 ⊗ Pρ, which follows since ak ≤ N along with a2k+1 < a2k−1 < a2k for all k. It remains to show σ can be written in this form when Pσ = P(1 2) ⊗ Pρ.

By (4.8), σ = (a2 a1 + N)(a1 a2 + N) · · · (aN aN−1 + N)(aN−1 aN + N). Note this is the product of N disjoint transpositions again since ak ≤ N and ai ≠ aj for all i ≠ j. Moreover, since the aj are distinct, we can label b2k−1 = a(N−k+1) for k = 1, 2, . . . , N using the order statistics subscript (so a(1) < a(2) < · · · < a(N)), with then b2k = ρ(a(N−k+1)) + N. We can then further write σ = (b1 b2) · · · (b2N−1 b2N) since disjoint cycles commute, for which it then follows also b2k+1 < b2k−1 < b2k for each k.
Example 4.4. The corresponding permutations are {1, (1 2)} = S2 for P(B)_2, {1, (2 4)(1 3), (3 4)(1 2), (2 3)(1 4)} for P(B)_4, and

(4.9) { 1, (4 8)(3 7)(2 6)(1 5),
        (6 8)(5 7)(2 4)(1 3), (4 6)(3 5)(2 8)(1 7),
        (7 8)(5 6)(3 4)(1 2), (4 7)(3 8)(2 5)(1 6),
        (6 7)(5 8)(2 3)(1 4), (4 5)(3 6)(2 7)(1 8) }

for P(B)_8.
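The sets in Example 4.4 can be regenerated directly from the Kronecker structure P(B)_{2N} = P(B)_2 ⊗ P(B)_N. A sketch (function name mine; permutations reported as 1-indexed image tuples):

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=int)
T2 = np.array([[0, 1], [1, 0]])  # P_(1 2)

def butterfly_perms(n):
    """All N = 2^n butterfly permutations of P(B)_N as 1-indexed image tuples."""
    perms = set()
    for factors in product([I2, T2], repeat=n):
        P = np.array([[1]])
        for Q in factors:
            P = np.kron(Q, P)   # iterate the Kronecker structure
        perms.add(tuple(int(np.argmax(row)) + 1 for row in P))
    return perms

print(sorted(butterfly_perms(2)))  # [(1,2,3,4), (2,1,4,3), (3,4,1,2), (4,3,2,1)]
print(len(butterfly_perms(3)))     # 8 butterfly permutations for N = 8
```

Note the four N = 4 tuples are exactly 1, (3 4)(1 2), (2 4)(1 3), and (2 3)(1 4), matching Example 4.4.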
Remark 4.5. If no pivot movements are needed on the first GE step, then no pivot movements will be needed at any GE step (using (3.4) and Proposition 4.3). Furthermore, it follows inductively that half the time all of the pivot movements occur in the first N/2 steps; a quarter of the time all of the pivot movements occur at the first N/4 steps of each N/2 partitioning; an eighth of the time all of the pivots occur at the first N/8 steps of each N/4 partitioning; and so on, where 2^{−k} of the time the pivots occur at each step in the first half of each N2^{1−k} partitioning of the GE steps, terminating with exactly one permutation (i.e., 2^{−n} = 1/N of the time) for which pivoting occurs at precisely every other GE step. The remaining permutation accounts for no pivoting ever occurring, when Pθ = I. This yields a Cantor set-like decomposition of [N] into the possible configurations of locations where the pivots can occur using GEPP. Which configuration is used for a particular scalar simple butterfly matrix (i.e., the exact pivoting locations one will encounter) is determined by the first step at which k = ik.
For example, using the associated permutations from P(B)_8 from Example 4.4, we see 4 permutations have pivot movements only at the first half of the total GE steps (i.e., (4 8)(3 7)(2 6)(1 5), (4 6)(3 5)(2 8)(1 7), (4 7)(3 8)(2 5)(1 6), and (4 5)(3 6)(2 7)(1 8) each yield pivot movements at GE steps 1 through 4); 2 permutations have pivot movements only at the first half of each half partitioning of the total GE steps (i.e., (6 8)(5 7)(2 4)(1 3) and (6 7)(5 8)(2 3)(1 4) yield pivot movements only at steps 1, 2 and 5, 6); 1 permutation has pivot movements at every other GE step (i.e., (7 8)(5 6)(3 4)(1 2) yields pivot movements only at steps 1, 3, 5, 7); with only one permutation (the trivial permutation) having no GE pivot movements.
Moreover, applying GEPP to Haar-butterfly matrices, where each butterfly permutation occurs with equal probability, induces a probability measure on these configurations. Figure 1 shows the possible GEPP pivot movement locations associated with Haar-butterfly permutations of size N = 2^8, along with the probability of each particular configuration.
4.1. Proof of (II) of Theorem 2.1. Now we have the sufficient tools to establish:

Theorem 2.1: (II) If B ∼ Bs(N, ΣS), then P ∼ Uniform(P(B)_N) for PB = LU the GEPP factorization. Moreover, the number of GEPP pivot movements needed for B is equal in distribution to (N/2) Bernoulli(1 − 1/N).
Figure 1: GEPP pivot movement configurations for Haar-butterfly permutations for N = 2^8 and their associated probabilities, pk, with the exact pivot movement locations indicated in blue (the configuration probabilities shown are 2^{−8}, 2^{−8}, 2^{−7}, . . . , 2^{−1}).
Proof. Corollary 4.2 yields directly that applying GEPP to a Haar-butterfly matrix results in a uniform butterfly permutation matrix factor, which is the first part of (II). Moreover, using the associated permutations from GEPP in the form (3.4) to read off explicitly which GE steps need pivot movements, we have by Proposition 4.3 that ik > k for precisely N/2 indices k in each non-trivial case and for precisely 0 indices in the trivial case. It follows that if Y is the number of pivot movements needed when using GEPP on B(θ) ∼ Bs(N, ΣS), then since Pθ ∼ Uniform(P(B)_N) we have

(4.10) P(Y = 0) = P(Pθ = IN) = 2^{−n} = 1/N, and
(4.11) P(Y = N/2) = P(Pθ ≠ IN) = 1 − P(Pθ = IN) = 1 − 1/N.

Hence, Y ∼ (N/2) Bernoulli(1 − 1/N).
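A Monte Carlo sketch of (II): the helper names, size, and trial count below are my own choices, and the 2 × 2 butterfly factor is again taken to be a rotation with iid Uniform[0, 2π) angles. The pivot count should be supported on {0, N/2} with P(Y = 0) ≈ 1/N.

```python
import numpy as np

def gepp_pivot_count(A):
    # Naive GEPP; count steps whose pivot search selects a row below the diagonal.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    count = 0
    for k in range(n - 1):
        i = k + int(np.argmax(np.abs(A[k:, k])))
        if i != k:
            A[[k, i]] = A[[i, k]]
            count += 1
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return count

def haar_butterfly(n, rng):
    # B(theta) as an n-fold Kronecker product of 2x2 rotations with
    # theta_j iid Uniform[0, 2*pi) (the scalar simple butterfly model).
    B = np.array([[1.0]])
    for t in rng.uniform(0.0, 2.0 * np.pi, size=n):
        R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
        B = np.kron(R, B)
    return B

rng = np.random.default_rng(5)
n = 3
N = 2 ** n
Y = np.array([gepp_pivot_count(haar_butterfly(n, rng)) for _ in range(4000)])
print(sorted(set(Y.tolist())), np.mean(Y == 0))  # support {0, N/2}; P(Y=0) ≈ 1/N
```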
5. Random ensembles PL_n^max(ξ), PL_n(ξ), PL_n^max(ξ, α) and PL_n(ξ, α). Using the prior discussions, we can build a random ensemble of matrices that require a maximal number of GEPP pivot movements. As mentioned in Remark 3.5, a maximal number of GEPP pivot movements occurs when the corresponding permutation matrix factor P ∈ P_n^{n-cycles}. Using this, we can define

(5.1) PL_n^max(ξ) = {PL : P ∼ Uniform(P_n^{n-cycles}), L ∈ Ln independent of P, Lij ∼ ξ iid for i > j}.

Recall, by construction, the corresponding GEPP lower triangular matrix factor L satisfies ∥L∥max = 1. In particular, |Lij| ≤ 1 for all i > j. Moreover, Theorem 1.1 yields that the L factor is invariant under row permutations if |Lij| < 1 for all i > j. Hence, if A = PLU where U ∈ Un and PL ∼ PL_n^max(ξ) for any distribution ξ with |ξ| < 1, then A will always require n − 1 GEPP pivot movements. Similar ensembles can be constructed where the P factor is restricted to particular pivot configurations.
We will also study the more general model

(5.2) PL_n(ξ) = {PL : P ∼ Uniform(Pn), L ∈ Ln independent of P, Lij ∼ ξ iid for i > j}.

When |ξ| < 1, then A = PLU for PL ∼ PL_n(ξ) and U ∈ Un corresponds to the GEPP LU factorization of A.
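Sampling from PL_n(ξ) is direct; a minimal sketch with ξ ∼ Uniform([−1, 1]) (function name mine). Since det P = ±1 and det L = 1, any draw satisfies |det(PL)| = 1, which gives a cheap sanity check.

```python
import numpy as np

def sample_PL(n, rng):
    """Draw PL ~ PL_n(xi) with xi ~ Uniform([-1, 1]): P uniform over P_n,
    L unit lower triangular with iid xi entries strictly below the diagonal."""
    L = np.eye(n)
    rows, cols = np.tril_indices(n, k=-1)
    L[rows, cols] = rng.uniform(-1.0, 1.0, size=rows.size)
    P = np.eye(n)[rng.permutation(n)]
    return P @ L

rng = np.random.default_rng(6)
A = sample_PL(6, rng)
print(abs(np.linalg.det(A)))  # |det(PL)| = |det P| * |det L| = 1
```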
Remark 5.1. Two natural distributions to consider are ξ ∼ Uniform([−1, 1]) and ξ ∼ Uniform(D). Using GEPP on GLn(F), the left-coset representatives of Un(F) in GLn(F), i.e., of GLn(F)/Un(F), are precisely of the form PL for P ∈ Pn and L ∈ Ln where |Lij| ≤ 1 for all i > j. Hence, PL_n(ξ) then corresponds to uniformly sampled representatives of GLn(R)/Un(R) when ξ ∼ Uniform([−1, 1]) and uniformly sampled representatives of GLn(C)/Un(C) when ξ ∼ Uniform(D).
5.1. Eigenvalues of PL_n^max(ξ) and PL_n(ξ). Suppose PL ∼ PL_n^max(ξ). Since L ∈ Ln, its eigenvalues are exactly 1 with multiplicity n. Since P ∈ P_n^{n-cycles}, its eigenvalues are exactly the nth roots of unity, e^{2πik/n} for k = 0, 1, . . . , n − 1. The spectral pictures for each of P and L separately fall exactly on ∂D, and these are deterministic despite each matrix being random.

So a natural next question is: what does the spectral picture look like for their product, PL? The eigenvalues no longer stay on ∂D, but they appear to remain asymptotically concentrated inside D when scaled by √(nσ²/2), where Eξ = 0 (i.e., ξ is centered) and σ² = E|ξ|² is the variance of ξ. Figure 2 shows the (computed) eigenvalue locations for PL_n^max(ξ) scaled by √(nσ²/2) using n = 2^14 = 16,384 and ξ sampled from Uniform([−1, 1]), Uniform(D), Rademacher and N(0, 1). Noticeably, Figure 2 further suggests a universality result for this limiting behavior.
Recall the empirical spectral distribution (ESD) of an n × n matrix A is the probability measure

(5.3)    μ_A = (1/n) Σ_{k=1}^n δ_{λ_k(A)},

which gives equal weight to the location of each eigenvalue of A. Note if A is a random matrix, then μ_A is a random probability measure.
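The ESD in (5.3) is straightforward to realize numerically. The sketch below (our own illustrative code, not the paper's) returns the atoms and weights of μ_A for one scaled sample PL/√(nσ²/2) with ξ ∼ Uniform([−1, 1]):

```python
import numpy as np

rng = np.random.default_rng(1)

def esd_atoms(A):
    # mu_A places mass 1/n at each eigenvalue lambda_k(A), as in (5.3).
    eigs = np.linalg.eigvals(A)
    return eigs, np.full(len(eigs), 1.0 / len(eigs))

n = 300
P = np.eye(n)[rng.permutation(n)]
L = np.eye(n) + np.tril(rng.uniform(-1, 1, (n, n)), -1)
sigma2 = 1.0 / 3.0                      # variance of Uniform([-1, 1])
A = (P @ L) / np.sqrt(n * sigma2 / 2)   # the scaling used for Figure 2
eigs, weights = esd_atoms(A)
```

Plotting `eigs` in the complex plane for large n reproduces pictures like Figure 2; the weights sum to 1 by construction.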
Empirically, Figure 2 then suggests μ_A is converging to a probability measure on D that is an interpolation between the uniform measure on D and the Dirac measure at the origin when A = PL/√(nσ²/2) for PL ∼ PLn(ξ) with ξ having 0 mean and finite variance. Although the eigenvalues of A have a higher density near the origin, they can never include the origin since PL is nonsingular.
J. PECA-MEDLIN
Figure 2: Computed eigenvalues (in blue) for PL/√(nσ²/2) where n = 2^14 = 16,384 and PL ∼ PLmax_n(ξ) for (a) ξ ∼ Uniform([−1, 1]) where σ² = 1/3, (b) ξ ∼ Uniform(D) where σ² = 1/2, (c) ξ ∼ Rademacher where σ² = 1, and (d) ξ ∼ N(0, 1) where σ² = 1, mapped against the unit complex circle ∂D (in red)
The pictures are indistinguishable from Figure 2 when instead using PLn(ξ), where P ∼ Uniform(Pn) instead of P ∼ Uniform(P_n^(n-cycles)). In both cases, the ESD of P limits to the uniform measure on ∂D in probability in the weak-star sense [8]. However, replacing P with another random matrix from an ensemble whose ESDs similarly limit to the uniform measure on ∂D (e.g., using Haar sampling for O(n), U(n), or Bs(n) [8, 23]) yields different spectral pictures that extend beyond D when still using the scaling √(nσ²/2). This points to the significance of the divisor of 2 in the above scaling, which corresponds to the density of the L factor of 1/2; for example, UL has full density when U ∼ Haar(O(n)) or U ∼ Bs(N, ΣS).

DISTRIBUTION OF THE NUMBER OF PIVOTS NEEDED USING GEPP
Since multiplication by permutation matrices preserves density, one might expect the same scaling when uniformly sampling permutation matrices versus corresponding n-cycle permutation matrices. Both should adequately mix up the rows of L so that the eigenvalues move away from 1. However, preserving density is not sufficient on its own, as can be seen when using a diagonal matrix D, since then DL will only have eigenvalues located at the diagonal entries of D (since L is unipotent lower triangular). A future area of research is to study a more general random ensemble that replaces P in PL ∼ PLn(ξ) with another random matrix that sufficiently randomizes the rows of L without changing the sparsity of L.
5.2. Fixed sparsity random ensembles PLmax_n(ξ, α) and PLn(ξ, α). We can similarly study random ensembles with (approximately) fixed sparsity. Let the sparsity of a matrix A be determined by α = α(A), the ratio of the number of nonzero entries of A to the total number of entries of A. If α = 1, then A has full density, and if α = 0 then A is the zero matrix. A full lower triangular matrix has density given by α = binom(n+1, 2)/n² = (1/2)(1 + 1/n) ≈ 1/2 for large n. We can then construct a matrix with fixed (approximate) sparsity by zeroing out all entries above a specific diagonal. For an n × n matrix that has zero entries only above a set diagonal k ∈ [−n, n] with entries at indices (i, i + k) (where k = 0 is the main diagonal, k > 0 are the super diagonals, and k < 0 are the sub diagonals), one can determine the sparsity by computing:
(5.4)    g_n(k) = (1/n²) [ (1/2)(n + k)(n + k + 1) − k(k + 1) · 1(k > 0) ].

This is the ratio of nonzero entries to the total number of matrix entries for a matrix that is zero above and full at or below the kth diagonal; the triangular numbers naturally show up in the numerator. Note g_n(k) is a quadratic polynomial in k for fixed n, so g_n(k) can be extended to include non-integral k. In particular, one could uniquely solve for

(5.5)    k_α ∈ [−n, n]    such that    g_n(k_α) = α.
Using this, we can introduce another random ensemble

(5.6)    PLn(ξ, α) = { PL : P ∼ Uniform(Pn), L independent of P, Lij = 0 if i + ⌊k_α⌋ ≤ j, Lij ∼ ξ iid if i + ⌊k_α⌋ > j }.

We can similarly define PLmax_n(ξ, α) by requiring P ∼ Uniform(P_n^(n-cycles)) (as well as other ensembles with fixed pivot configurations). Note if PL ∼ PLn(ξ, 1/2) then P(L + In) ∼ PLn(ξ).
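The pieces (5.4)-(5.6) can be sketched in a few lines. In the code below (our own illustrative names; bisection is one reasonable way to solve (5.5) for α bounded away from 1, and ξ ∼ Uniform([−1, 1]) is one of the Remark 5.1 choices):

```python
import numpy as np
from math import floor

def g(n, k):
    # Density g_n(k) from (5.4) of a matrix that is full at or below
    # the kth diagonal and zero above it; (k > 0) is the indicator 1(k > 0).
    return (0.5 * (n + k) * (n + k + 1) - k * (k + 1) * (k > 0)) / n**2

def k_alpha(n, alpha):
    # Solve g_n(k) = alpha as in (5.5) by bisection; g_n is increasing in k
    # away from the right endpoint.
    lo, hi = -float(n), float(n)
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(n, mid) < alpha else (lo, mid)
    return (lo + hi) / 2

def sample_PL_alpha(n, alpha, rng):
    # L_ij ~ xi iid where i + floor(k_alpha) > j and 0 otherwise
    # (1-indexed, matching (5.6)); P ~ Uniform(P_n).
    k = floor(k_alpha(n, alpha))
    i, j = np.indices((n, n)) + 1
    L = np.where(i + k > j, rng.uniform(-1, 1, (n, n)), 0.0)
    P = np.eye(n)[rng.permutation(n)]
    return P @ L

rng = np.random.default_rng(3)
PL = sample_PL_alpha(200, 0.25, rng)
density = np.count_nonzero(PL) / 200**2   # should be close to alpha = 0.25
```

The empirical density lands slightly below α because of the floor in the diagonal cutoff; the gap vanishes as n grows.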
Known results for the asymptotic limiting behavior of ESDs to probability measures on D include the famous Circular law, introduced by Bai in [2] and proven with a fourth moment condition by Tao and Vu [21].

Theorem 5.2 (Circular law [21]). Let A be an n × n matrix with iid entries sampled from ξ where Eξ = 0, σ² = E|ξ|² and E|ξ|⁴ < ∞. Then μ_{A/√(nσ²)} converges weakly to the uniform measure on D in probability and almost surely.
Theorem 5.2 yields the asymptotic spectral picture for PLn(ξ, 1) when ξ is centered with finite fourth moment. Figure 3a shows the map of the computed eigenvalues of PLn(N(0, 1), 1) ∼ Gin(n, n) for n = 2^14, while Figure 3b plots n random sample points from Uniform(D). Comparing this to the Ginibre picture (i.e., Figure 3a) also exemplifies how the asymptotic result does not hold for fixed n. The Ginibre picture has repulsions between its eigenvalues that lead to points being similarly spaced, while the asymptotic endpoint (i.e., Figure 3b) has Poisson clumping.
Figure 3: Computed eigenvalues (in blue) for PL/√(αnσ²) where n = 2^14 = 16,384 and PL ∼ PLn(ξ, α) for ξ ∼ N(0, 1) (where σ² = 1) and (a) α = 1, (c) α = 3/4, and (d) α = 1/4, along with (b) n iid samples from Uniform(D), mapped against the unit complex circle ∂D (in red)
Using A ∼ PLn(ξ, α) for fixed α ∈ (0, 1] and ξ ∼ N(0, 1), empirical data suggest a similar asymptotic result for the ESD of A/√(αnσ²). Note the scaling matches that of Theorem 5.2 where α = 1 as well as that seen in Figure 2 where α = 1/2. In particular, following the trajectory for α = 1 in Figure 3a, α = 3/4 in Figure 3c, α = 1/2 in Figure 2 (recall P ∼ Uniform(Pn) and P ∼ Uniform(P_n^(n-cycles)) empirically result in indistinguishable spectral pictures), and α = 1/4 in Figure 3d, suggests the scaling by √(αnσ²) has the corresponding ESDs limit to ν_α, a fixed probability measure with support on D that depends on α (and not on ξ), which further converges to ν, the uniform measure on D, as α → 1 and converges to δ0, the Dirac measure at the origin, as α → 0. So the limiting measure is an interpolation between ν and δ0.
Together, these suggest different universality classes than those included by the Circular law. Previous studies of sparsity and the Circular law have studied iid ensembles whose sparsity α = αn converges to 0 slowly enough that their ESDs still limit to ν [14, 18]. Other studies have similarly explored the impact of sparsity on the extreme eigenvalues in the Hermitian case, which has ESDs limiting to the semicircular law [12]. Results in the literature for fixed sparsity random ensembles remain sparse. The above discussion provides supporting evidence for the following conjecture:
Conjecture 5.3. Fix α ∈ (0, 1]. Let A = PLQ be the n × n matrix, where P and Q are iid uniformly chosen permutation matrices and L is an n × n random matrix independent of P and Q whose nonzero entries are iid from ξ with Eξ = 0, E|ξ|² = σ² and E|ξ|⁴ < ∞, where Lij = 0 if i + ⌊k_α⌋ < j. Then there exists a probability measure, ν_α, on D that is an interpolation between the uniform measure on D, ν, and the Dirac measure at the origin, δ0, such that μ_{An/√(αnσ²)} converges weakly to ν_α in probability and almost surely. Furthermore, ν_α → ν as α → 1 and ν_α → δ0 as α → 0, with both convergences holding uniformly with respect to the total variation distance.
Remark 5.4. For α = 1, this is the circular law.

Remark 5.5. Note the right permutation matrix Q term is irrelevant, since A is similar to QPL, and QP ∼ P since the uniform measure on Sn is left- and right-invariant (it is the Haar measure on Pn). So the study of the above ensembles reduces to the ensembles of the form PLn(ξ, α) for centered ξ with finite fourth moments.

Remark 5.6. Additional random ensembles that can be studied in light of Conjecture 5.3 include perturbations of PLn(ξ, α) by deterministic matrices (similar to those studied in [14, 21]), as well as P(L + In) for PL ∼ PLn(ξ, α). This latter model is interesting when α ≤ 1/2 since the corresponding L factor has eigenvalues of 0 with multiplicity n; in particular, L and hence PL does not have full rank. In experiments for several fixed α < 1/2, the nullity of PL ∼ PLn(N(0, 1), α) appears to be approximately (1 − √(2α))n, while 0 is an eigenvalue of multiplicity approximately (1 − 2α)n; when α ≥ 1/2, both are 0 (a.s.). Conversely, now considering A = P(L + In), then 0 is never an eigenvalue of A and so A is always full rank for all α.
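A small NumPy experiment (our own construction; the choices n = 50, α = 1/8, and Gaussian ξ are illustrative) is consistent with the nullity estimate in Remark 5.6 and with P(L + In) having full rank:

```python
import numpy as np

rng = np.random.default_rng(4)
n, alpha = 50, 0.125

# On the k < 0 branch of (5.4), g_n(k) = (n + k)(n + k + 1)/(2 n^2), so
# g_n(k) = alpha solves in closed form to n + k = (-1 + sqrt(1 + 8 alpha n^2))/2.
k = int(np.floor((-1 + np.sqrt(1 + 8 * alpha * n**2)) / 2 - n))
i, j = np.indices((n, n)) + 1
L = np.where(i + k > j, rng.standard_normal((n, n)), 0.0)   # strictly banded, no unit diagonal
P = np.eye(n)[rng.permutation(n)]

nullity = n - np.linalg.matrix_rank(P @ L)
predicted = (1 - np.sqrt(2 * alpha)) * n                    # Remark 5.6's empirical estimate
shifted_rank = np.linalg.matrix_rank(P @ (L + np.eye(n)))   # P(L + I_n) is always nonsingular
```

Here the observed nullity sits within a couple of units of (1 − √(2α))n = 25, while the shifted model P(L + In) has full rank since L + In is unit lower triangular.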
6. Numerical experiments. The final section will focus on a set of experiments that study the number of GEPP pivot movements needed on particular random linear systems. These experiments expand on results first presented in [17], which studied the impact on the growth factors and relative error computations when using common random transformations from numerical linear algebra on particular fixed linear systems, Ax = b. Both initial models represent scenarios when no GEPP pivot movements are needed, so we will refer to both here as min-movement models. Carrying forward the naming scheme from [17], the two linear systems studied include:
1. the min-movement (naïve) model14, with A = In, where ρ(In) = 1 is minimized, and
2. the min-movement (worst-case) model15, with A = An, a particular linear model that maximizes the growth factor ρ(An) = 2^(n−1).
In the naïve model, the authors studied the 1-sided random transformation ΩI = Ω, which provides a means to directly study the corresponding random matrix, Ω, itself. The worst-case model considered the 2-sided random transformations, UAnV∗, where U, V were independently sampled random matrices; this 2-sided transformation follows the construction used by Parker that would remove the need for pivoting in GEPP (with high probability), so that UAnV∗ = LU has a GE factorization [16]. The matrix An is of the form

(6.1)    An = In − Σ_{i>j} Eij + Σ_{i=1}^{n−1} Ein.

Wilkinson introduced An to establish that the growth factor bound ρ(A) ≤ 2^(n−1) is sharp [26]. By construction, no GEPP pivoting would be needed at any intermediate GE step when using An, so the final GENP and GEPP factorizations of An both align, with An = LnUn for Ln = In − Σ_{i>j} Eij and Un = In − Enn + Σ_{k=1}^{n} 2^(k−1) Ekn. It follows ρ(An) = |Unn| = 2^(n−1). For example,
(6.2)

A4 = [  1   0   0   1 ]   [  1   0   0   0 ] [ 1  0  0  1 ]
     [ −1   1   0   1 ] = [ −1   1   0   0 ] [ 0  1  0  2 ] = L4U4
     [ −1  −1   1   1 ]   [ −1  −1   1   0 ] [ 0  0  1  4 ]
     [ −1  −1  −1   1 ]   [ −1  −1  −1   1 ] [ 0  0  0  8 ]

has ρ(A4) = 8 = 2³.
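The construction (6.1) and its growth factor are easy to verify numerically. The sketch below (helper names ours) builds An, runs textbook GEPP, and confirms that no rows move and that ρ(An) = max|U| / max|A| = 2^(n−1); all of the arithmetic here is exact in floating point since every intermediate value is an integer:

```python
import numpy as np

def wilkinson(n):
    # A_n from (6.1): 1 on the diagonal and in the last column, -1 below
    # the diagonal.
    A = np.eye(n) - np.tril(np.ones((n, n)), -1)
    A[:-1, -1] = 1.0
    return A

def gepp(A):
    # Textbook GEPP returning the pivot order, the U factor, and the
    # growth factor max|U| / max|A|.
    A = A.astype(float).copy()
    n = A.shape[0]
    perm = np.arange(n)
    denom = np.abs(A).max()
    for k in range(n - 1):
        i = k + np.argmax(np.abs(A[k:, k]))
        if i != k:
            A[[k, i]] = A[[i, k]]
            perm[[k, i]] = perm[[i, k]]
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    U = np.triu(A)
    return perm, U, np.abs(U).max() / denom

perm4, U4, rho4 = gepp(wilkinson(4))
_, _, rho10 = gepp(wilkinson(10))
```

Running this gives `perm4 == [0, 1, 2, 3]` (no pivot movements), `U4[3, 3] == 8`, and `rho10 == 2**9`, matching (6.2) and the sharp bound.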
With respect to GEPP, both the naïve and worst-case models use input matrices that need 0 total pivot movements. We will study both of these min-movement models in addition to a third model that looks at the other extreme in terms of the number of GEPP pivot movements:
3. the max-movement model, where A ∼ PLmax_n(ξ) for ξ ∼ Uniform([−1, 1]).
Note if A = PL ∼ PLmax_n(ξ) with |ξ| < 1 a.s., then ρ(A) = ρ(L) = 1.
We will consider only the 1-sided transformation case for the naïve model (so that we can study the random matrices themselves) and only the 2-sided transformation cases for the worst-case and max-movement models. Together, these three models will allow us to study the impact of random transformations on two different systems where no GEPP pivot movements are needed as well as one (independently sampled random) system where a maximal number of GEPP pivot movements is needed.

14 The naïve model was named to reflect that using any method is unnecessary to solve the trivial linear system Ix = x = b.
15 The worst-case moniker was chosen to reflect the numerical stability of computed solutions using GEPP, which is controlled by the growth factors: solving Anx = b sees relative errors of order O(1) starting when n ≈ 60.
For each set of experiments, we will consider the following random transformations for fixed N = 2^n:
• Bs(N, ΣS)
• B(N, ΣS)
• Bs(N, ΣD)
• B(N, ΣD)
• Walsh transform
• Haar(O(N))
• Discrete Cosine Transform (DCT II)
To ease the following discussion, we choose N = 2^4 = 16 and N = 2^8 = 256 to focus on, as we feel they are representative of the behavior we saw for other choices of N. For the naïve model, which will study the pivot movements for each of the associated random matrices themselves (using the 1-sided preconditioning with A = I), our experiments will additionally use:
• GOE(N)
• GUE(N)
• Bernoulli(1/2)
These models were touched on for N = 2 in Examples 2.3 to 2.5.
Each of the butterfly models is sampled using custom MATLAB recursive functions with iid uniformly chosen angles16 in line with methods outlined in [17, 23]. See Subsection 1.2 for more information and sampling techniques of the Walsh, Haar orthogonal, DCT, GOE(N) and GUE(N) transformations. The Bernoulli ensemble uses iid Bernoulli(1/2) entries17. Each set of experiments (using N = 2^4 and N = 2^8 for all three models) will use 10,000 trials using MATLAB in double precision, where ϵ = 2^(−52) (ϵ ≈ 2.220446 · 10^(−16)).
6.1. Min-movement (naïve) model. For the naïve model, our goal is to study the number of GEPP pivot movements needed for 10 different random ensembles. These allow us to gauge the impact of (1-sided) random transformations on the simplest linear system, Ix = b, in terms of the number of GEPP pivot movements needed. Starting with I, no pivot movements are needed, so the transformed system ΩI = Ω then allows us to study how many new pivot movements the transformation introduces. This model also then enables us to directly study the number of pivot movements needed for each random matrix, Ω.

Table 1 shows the sample medians, means (x̄), and standard deviations (s) for the 10,000 trials each for N = 2^4 and N = 2^8, while Figure 4 summarizes the total number of GEPP pivot movements encountered for each sampled random matrix across each set of trials. Note the axes in Figure 4 show each possible step output from 0 to N − 1.
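The quantity tallied in these experiments can be sketched directly. In the code below (ours, not the paper's MATLAB; we read a "pivot movement" as a GEPP step whose chosen pivot row differs from the current row, consistent with the 0 count for I and the N − 1 upper bound in the text):

```python
import numpy as np

rng = np.random.default_rng(5)

def gepp_pivot_movements(A):
    # Count the GEPP steps that actually move a row: at step k, the pivot
    # is the largest-magnitude entry in column k on or below the diagonal.
    A = A.astype(float).copy()
    n = A.shape[0]
    moves = 0
    for k in range(n - 1):
        i = k + np.argmax(np.abs(A[k:, k]))
        if i != k:
            A[[k, i]] = A[[i, k]]
            moves += 1
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return moves

moves_identity = gepp_pivot_movements(np.eye(8))            # naive model: 0 moves
G = rng.standard_normal((16, 16))
moves_goe = gepp_pivot_movements((G + G.T) / np.sqrt(2))    # one GOE(16) draw
```

Repeating the GOE draw over many trials and histogramming `moves_goe` reproduces the kind of summary shown in Table 1 and Figure 4.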
16 The number of angles used depends on whether the cosine and sine matrices are scalar and whether the butterfly matrix is simple. For Bs(N, ΣS), one uniform angle is sampled at each recursive step, for n = log2 N total uniform angles, while B(N, ΣS) and Bs(N, ΣD) both sample a total of N − 1 uniform angles, and B(N, ΣD) uses (1/2)Nn total uniform angles. These compare to Haar(O(N)), which (using Givens rotations to find the QR factorization of Gin(N, N)) can be sampled using binom(N, 2) = (1/2)N(N − 1) uniform angles. The above ordering reflects this ordering of complexity.
                 |        N = 16          |         N = 256
                 | Median    x̄       s    | Median     x̄        s
Bs(N, ΣS)        |    8     7.492   1.951 |   128    127.501    7.978
B(N, ΣS)         |   11    10.934   2.622 |   232    230.392   14.418
Bs(N, ΣD)        |   12    11.287   2.459 |   245    241.915   10.308
B(N, ΣD)         |   13    12.535   1.430 |   250    249.901    2.112
Walsh            |    6     6       -     |   120    120        -
Haar(O(N))       |   13    12.624   1.345 |   250    249.884    2.123
DCT II           |   13    13      -      |   249    249        -
GOE(N)           |   11    10.954   1.780 |   249    248.696    2.359
GUE(N)           |   11    11.132   1.761 |   249    248.867    2.348
Bernoulli        |   11    10.509   1.774 |   248    247.783    2.467

Table 1: Pivot counts for numerical experiments for GEPP with 10,000 trials, for random matrices of orders N = 2^4 and N = 2^8
Figure 4: Histogram of 10^4 samples of pivot movement counts for random matrices of order N = 2^4 and N = 2^8.
17 Bernoulli matrices are sampled using the native MATLAB call round(rand(n)).
6.1.1. Discussion. For each set of experiments, the Haar-butterfly and Walsh transforms introduce the least amount of additional movement from the initial minimal GEPP movement setup, each with total pivot movements at most N/2, while the remaining models had total pivot movements closer to the upper bound of N − 1.
By construction, both the Walsh and DCT models have deterministic output. This follows since the transformations are of the form WD, where W is a deterministic associated matrix (the default fast Walsh-Hadamard matrix or the DCT matrix used by the native MATLAB functions for the associated multiplication operators) while D is a random row sign matrix sampled uniformly from {±1}^N. Hence, if the GEPP factorization of W is PW = LU, then the GEPP factorization of WD is PWD = L(UD). So the permutation matrix factor is independent of D for these models. This is reflected in Figure 4 and Table 1 (e.g., both sample standard deviations are 0).
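This determinism is quick to check: scaling columns by ±1 changes neither the magnitudes GEPP compares nor the multipliers, so every pivot decision is identical for W and WD. A sketch (our own helpers; the Kronecker recursion stands in for MATLAB's fast Walsh-Hadamard operator):

```python
import numpy as np

rng = np.random.default_rng(6)

def hadamard(N):
    # Walsh-Hadamard matrix via the Kronecker recursion (N a power of 2).
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)
    return H

def gepp_perm(A):
    # Row order chosen by GEPP with partial pivoting.
    A = A.astype(float).copy()
    n = A.shape[0]
    perm = np.arange(n)
    for k in range(n - 1):
        i = k + np.argmax(np.abs(A[k:, k]))
        if i != k:
            A[[k, i]] = A[[i, k]]
            perm[[k, i]] = perm[[i, k]]
        A[k + 1:, k] /= A[k, k]
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
    return perm

N = 16
W = hadamard(N)
D = np.diag(rng.choice([-1.0, 1.0], size=N))   # random sign matrix
perm_W, perm_WD = gepp_perm(W), gepp_perm(W @ D)
```

Any draw of D yields `perm_W == perm_WD`, so the pivot count for the Walsh model is a constant, matching the zero sample standard deviations in Table 1.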
The two Haar random ensembles studied (viz., Bs(N, ΣS) and Haar(O(N))) have the full distribution of the number of GEPP pivot movements determined by Theorem 2.1, using also Corollary 2.7. From Figure 4, these two models also appear to represent relative extreme models among the random ensembles considered in these trials, with the resulting uniform GEPP permutation matrix factor yielding the most pivot movements.
For Haar-butterfly matrices, we can directly compare the sample statistics against the exact distribution statistics for YN ∼ (N/2) · Bernoulli(1 − 1/N). We can compute exactly

(6.3)    E YN = (N/2)(1 − 1/N)

and

(6.4)    σ_YN = (N/2) √((1 − 1/N) · (1/N)).
This yields EY16 = 7.5 and σY16 ≈ 1.93649167, which align (as expected) with the associated sample mean of 7.492 and sample standard deviation of 1.951 from Table 1 for N = 16. Similarly, the exact values EY256 = 127.5 and σY256 ≈ 7.98435971 align with the sample statistics x̄ = 127.501 and s = 7.978 for N = 256. Moreover, as can be seen in Figure 4, the trials resulted only in total GEPP movements of 0 or N/2, as should be expected for a scaled Bernoulli distribution. This agrees with the result from [11] that the computed GEPP permutation matrix factors using floating-point arithmetic and exact arithmetic align with very high probability for Gin(n, n). Other standard sample statistic comparisons for the Haar-butterfly matrices similarly align, as expected18.
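The moments (6.3)-(6.4) can be reproduced in a few lines (the function name is our own):

```python
from math import sqrt

def butterfly_pivot_stats(N):
    # E Y_N and sigma_{Y_N} for Y_N ~ (N/2) * Bernoulli(1 - 1/N),
    # following (6.3) and (6.4).
    p = 1 - 1 / N
    return (N / 2) * p, (N / 2) * sqrt(p * (1 - p))
```

For N = 16 this returns (7.5, ≈1.93649167) and for N = 256 it returns (127.5, ≈7.98435971), the exact values quoted above.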
Similarly, we can compare the output for the Haar(O(N)) trials, which have XN, the total GEPP pivot movements needed, equal in distribution to N − ΥN. We can compute exactly

(6.5)    E XN = N − E ΥN = N − HN

and

(6.6)    σ_XN = σ_ΥN = √(HN − HN^(2)).
18 For example, the sample medians exactly match the exact medians of N/2. Also, we can compare the sample proportion p̂N to the population success parameter pN = 1 − 1/N. These again compare very favorably, where 1 − p̂16 = 0.0635 aligns with 1 − p16 = 1/16 = 0.0625 and 1 − p̂256 = 0.0039 aligns with 1 − p256 = 1/256 = 0.00390625.
This yields EX16 ≈ 12.619271006771006 and σX16 ≈ 1.340291930806123, which align with the associated sample mean of 12.624 and sample standard deviation of 1.345 from Table 1 for N = 16. Similarly, the exact values EX256 ≈ 249.8756550371827 and σX256 ≈ 2.117382706670809 align with the sample statistics x̄ = 249.884 and s = 2.123 for N = 256.
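The exact values used here follow directly from the harmonic numbers in (6.5)-(6.6) (function name ours):

```python
from math import sqrt

def haar_orthogonal_pivot_stats(N):
    # E X_N = N - H_N and sigma_{X_N} = sqrt(H_N - H_N^(2)) from (6.5)-(6.6),
    # with H_N and H_N^(2) the first- and second-order harmonic numbers.
    H1 = sum(1 / k for k in range(1, N + 1))
    H2 = sum(1 / k**2 for k in range(1, N + 1))
    return N - H1, sqrt(H1 - H2)
```

Evaluating at N = 16 and N = 256 recovers the constants quoted in the text to double precision.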
Figure 4 shows the butterfly models' pivot movements lie strictly between the pivot movements for Haar-butterfly matrices and Haar(O(N)), with the increase in the associated number of uniform angles needed for the butterfly models leading to the sample distributions progressively moving toward the Haar(O(N)) model for both N = 16 and N = 256. While B(N, ΣD) results in pivot movements very close to those modeled by the uniform GEPP permutation matrix factors, B(N, ΣS) and Bs(N, ΣD) lie strictly in between both the Haar-butterfly and Haar orthogonal pivot movements. Moreover, the remaining random models for GOE(N), GUE(N) and Bernoulli have pivot movement distributions staying to the left of the Haar orthogonal model, which move closer to the Haar orthogonal model distribution as N increases. This suggests that as N increases for these remaining models, the resulting random GEPP permutation matrix moves closer to a uniform permutation matrix.
Remark 6.1. For both the Haar-butterfly and Haar orthogonal models, the 1-sided naïve model is equivalent to the 2-sided naïve model since UINV∗ = UV∗ ∼ U by the right-invariance of the Haar measure. This does not hold, however, for any other random matrices in the naïve experiments.

Remark 6.2. For the Bernoulli model, it is possible to generate a singular random matrix, which occurs with probability 2^(−N)(1 + o(1)) [22]. Of the 10,000 trials, this occurred 48 times for N = 16 and 0 times for N = 256. Table 1 and Figure 4 show the summary statistics and overall GEPP pivot counts for the remaining 9,952 nonsingular Bernoulli matrices when N = 16.
6.2. Min-movement (worst-case) model. For the worst-case model, we want to study the number of GEPP pivot movements needed when starting with a fixed linear system, AN, that again requires no pivot movements. This provides a means to measure how many new GEPP pivot movements are generated by these random transformations. For this model, we will consider only the 2-sided transformation, UANV∗, where U and V are iid samples from each random model used in the experiments.

Analogously to the naïve model, Table 2 shows the sample medians, means (x̄), and standard deviations (s) for the 10,000 trials again for N = 2^4 and N = 2^8, while Figure 5 summarizes the total number of GEPP pivot movements encountered for each sampled UANV∗ across each set of trials.
6.2.1. Discussion. Only the Haar(O(N)) model for UANV∗ is sampled from a distribution determined in Theorem 2.1, since Corollary 2.11 yields that the resulting GEPP permutation matrix factor is a uniform permutation matrix, so that XN, the number of GEPP pivot movements, is equal in distribution to N − ΥN. Since AN does not preserve the Kronecker product structure, the Haar-butterfly pivot movement distribution is not preserved. Hence, the number of GEPP pivot movements is no longer a scaled Bernoulli distribution and now has full support on 0, 1, . . . , N − 1.

Again, both the Haar-butterfly and Haar orthogonal models provide representatives for
                 |        N = 16          |         N = 256
                 | Median    x̄       s    | Median     x̄        s
Bs(N, ΣS)        |   11    10.563   2.380 |   179    181.784   27.493
B(N, ΣS)         |   12    12.217   1.760 |   249    247.936    4.322
Bs(N, ΣD)        |   12    11.967   1.995 |   249    247.493    5.521
B(N, ΣD)         |   13    12.558   1.406 |   250    249.876    2.090
Walsh            |   12    12.245   1.617 |   250    249.654    2.253
Haar(O(N))       |   13    12.636   1.335 |   250    249.900    2.129
DCT II           |   12    11.820   1.719 |   250    249.447    2.313

Table 2: Pivot counts for numerical experiments for GEPP with 10,000 trials, for 2-sided transformation of the worst-case model of orders N = 2^4 and N = 2^8
Figure 5: Histogram of 10^4 samples of pivot movement counts for 2-sided random transformations of order N = 2^4 and N = 2^8 worst-case model, UANV∗.
the extremes in the number of pivot movements introduced by these random transformations. As in the naïve model, the Haar-butterfly transformation introduced the least amount of new GEPP pivot movements for the initial minimal GEPP pivot movement model AN.
Of the remaining models, only B(N, ΣS) and Bs(N, ΣD) have resulting distributions that do not appear to align with the Haar orthogonal model, although they both appear much closer to the Haar orthogonal than Haar-butterfly models. The remaining models' alignment with the Haar orthogonal models manifests even for the small N = 16 experiments for
B(N, ΣD) and the Walsh transform: the exact values EX16 ≈ 12.619271006771006 and σX16 ≈ 1.340291930806123 compare to the respective sample means of 12.558 and 12.245 and sample standard deviations of 1.406 and 1.617 for the B(N, ΣD) and Walsh models. This alignment is even more pronounced for N = 256: the exact values EX256 ≈ 249.8756550371827 and σX256 ≈ 2.117382706670809 line up very well for the B(N, ΣD), Walsh, and DCT II models, whose sample means range from 249.447 to 249.876 and whose sample standard deviations range from 2.090 to 2.313. Moreover, the remaining models have sample medians each of 250 that exactly match that of the Haar orthogonal model for N = 256, while the sample medians match or are smaller by one than the true Haar orthogonal sample median of 13 for N = 16. Again, these suggest performance for the non-butterfly models moving toward uniform row permutations as N increases.
6.3. Max-movement model. While the min-movement models studied the impact of random transformations on the number of pivot movements introduced to initial models that require no GEPP pivot movements, the max-movement model will instead study the impact of the random transformations on a model that has maximal GEPP pivot movements, PL ∼ PLmax_N(ξ) for ξ ∼ Uniform([−1, 1]). (Unlike the min-movement models, the input matrix PL is random.) This provides a means to measure how much GEPP pivot movement can be removed by these random transformations. As in the worst-case model, we will consider only the 2-sided transformation, UPLV∗, where U and V are iid samples from each random model.

Table 3 shows the sample medians, means (x̄), and standard deviations (s) for the 10,000 trials each for N = 2^4 and N = 2^8, while Figure 6 summarizes the total number of GEPP pivot movements encountered for each sampled matrix UPLV∗.
                 |        N = 16          |         N = 256
                 | Median    x̄       s    | Median     x̄        s
Bs(N, ΣS)        |   13    12.580   1.348 |   250    249.864    2.126
B(N, ΣS)         |   13    12.594   1.369 |   250    249.899    2.090
Bs(N, ΣD)        |   13    12.613   1.357 |   250    249.901    2.120
B(N, ΣD)         |   13    12.626   1.322 |   250    249.887    2.121
Walsh            |   13    12.630   1.332 |   250    249.879    2.123
Haar(O(N))       |   13    12.625   1.339 |   250    249.833    2.130
DCT II           |   13    12.573   1.344 |   250    249.923    2.116

Table 3: Pivot counts for numerical experiments for GEPP with 10,000 trials, for 2-sided transformation of the max-movement model of orders N = 2^4 and N = 2^8
6.3.1. Discussion. As in the worst-case model, only the Haar orthogonal transformed model UPLV∗ has its distribution determined by Theorem 2.1, where Corollary 2.11 again yields that XN, the number of GEPP pivot movements, corresponds to uniform row permutations, so XN ∼ N − ΥN. Unlike both min-movement models, all of the resulting experiments align strongly with this uniform row permutation model. All of the sample means are within 0.05 of
Figure 6: Histogram of 10^4 samples of pivot movement counts for 2-sided random transformations of order N = 2^4 and N = 2^8 maximal movement real model, UPLV∗.
+ the exact means EXN and all of the sample standard deviations are within 0.02 of the exact
2036
+ standard deviations σXN for both N = 16 and N = 256 (see Table 3). Moreover, every set
2037
+ of experiments exactly matched the true medians for the uniform row permutation models of
2038
+ 13 for N = 16 and 250 for N = 256. Hence, this suggests every random transformation had
2039
+ essentially equivalent dampening impacts on the total GEPP pivot movements when starting
2040
+ with a maximal pivot movement model.
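The exact reference values quoted above can be reproduced directly. A minimal sketch (assuming, as the uniform row permutation model suggests, that ΥN is distributed as the cycle count of a uniform permutation of N elements, so that EXN = N − HN and Var XN = HN − H(2)N; the function name is ours):

```python
import math

def pivot_movement_stats(N):
    """Mean and standard deviation of X_N = N - Upsilon_N when Upsilon_N is
    the number of cycles of a uniform permutation: E[Upsilon_N] = H_N and
    Var(Upsilon_N) = H_N - H_N^(2) (harmonic and second-order harmonic sums)."""
    H1 = sum(1.0 / k for k in range(1, N + 1))        # H_N
    H2 = sum(1.0 / k ** 2 for k in range(1, N + 1))   # H_N^(2)
    return N - H1, math.sqrt(H1 - H2)

for N in (16, 256):
    mean, std = pivot_movement_stats(N)
    print(f"N = {N}: mean = {mean:.3f}, std = {std:.3f}")
```

For N = 16 this gives mean ≈ 12.619 and std ≈ 1.340, and for N = 256 mean ≈ 249.876 and std ≈ 2.117, matching the sample statistics in Table 3 to the tolerances stated in the text.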
2041
+ 6.4. Conclusions. The Haar orthogonal model, which had GEPP pivot movements Xn ∼
2042
+ n − Υn, remains a strong comparison point for each random transformation across each min-
2043
+ and max-movement model. In each case, Xn represents both an upper bound for overall per-
2044
+ formance in terms of numbers of pivot movements, as well as a limiting class for most random
2045
+ transformations, which further suggests a universality result in terms of GEPP pivot move-
2046
+ ments. Since the Haar orthogonal model results in uniform GEPP permutation matrix factors,
2047
+ this suggests most random transformation classes have sufficient random mixing properties
2048
+ using both minimal and maximal GEPP movement input models. Undesirably, however, this
2049
+ model is asymptotically concentrated near the upper bound of n − 1 in terms of total pivot
2050
+ movements, with average n − Hn = (n − 1)(1 + o(1)).
2051
+ The Haar-butterfly model introduced the least amount of additional pivot movements
2052
+ among the min-movement models, while the remaining butterfly models introduced increasing
2053
+ pivot movements as they increased in randomness (i.e., from B(N, ΣS) and BS(N, ΣD) to
2054
+
+ J. PECA-MEDLIN
+ J. PECA-MEDLIN
2057
+ B(N, ΣD)). However, only the Haar-butterfly models remained far from the upper bound
2058
+ for the min-movement models. In [17], a proposed future direction was to explore the impact
+ of combining random transformations (that remove the need for GEPP pivoting) with GEPP
+ on the total number of pivot movements. To address this question, these experiments suggest
+ the butterfly models do the least damage in terms of introducing new GEPP pivot
+ movements when starting with a linear system that requires few GEPP pivot movements.
2063
+ However, no models had strong dampening performance when starting with a max-movement
2064
+ input system.
2065
+ 7. Acknowledgements. The author would like to thank a referee on a previous paper who
2066
+ had asked about the number of movements still needed after using a random transformation
2067
+ on a linear system, which led to the particular direction pursued here. Additionally, the author
2068
+ thanks Tom Trogdon and Nick Ercolani for many helpful thoughts and insights during the
2069
+ project.
2070
+ REFERENCES
2071
+ [1] M. Baboulin, X. S. Li, and F.-H. Rouet, Using random butterfly transformations to avoid pivoting
2072
+ in sparse direct methods, In: Proc. of Int. Con. on Vector and Par. Proc., (2014), https://doi.org/10.
2073
+ 1007/978-3-319-17353-5 12.
2074
+ [2] Z. Bai, Circular law, Ann. Probab., 1 (1997), pp. 494–529, https://doi.org/10.2307/3214948.
2075
+ [3] L. Bellavista, On the Stirling numbers of the first kind arising from probabilistic and statistical problems,
2076
+ Rend. Circ. Mat. Palermo, 32 (1983), pp. 19–26, https://doi.org/10.1007/BF02851099.
2077
+ [4] C. Charalambides and J. Singh, A review of the Stirling numbers, their generalization and statisti-
2078
+ cal applications, Comm. in Stat.-Theory Meth., 17 (1988), pp. 2533–2593, https://doi.org/10.1080/
2079
+ 03610928808829760.
2080
+ [5] L. Comtet, Advanced Combinatorics, D. Reidel, Dordrecht, Holland, 1974.
2081
+ [6] P. Diaconis, R. L. Graham, and W. M. Kantor, The mathematics of perfect shuffles, Adv. in App.
2082
+ Math., 4 (1983), pp. 175–196, https://doi.org/10.1016/0196-8858(83)90009-X.
2083
+ [7] P. Diaconis and M. Shahshahani, The subgroup algorithm for generating uniform random variables,
2084
+ Prob. in the Eng. and Info. Sci., 1 (1987), pp. 15–32, https://doi.org/10.1017/S0269964800000255.
2085
+ [8] P. Diaconis and M. Shahshahani, On the eigenvalues of random matrices, J. of App. Prob., 31 (1994),
2086
+ pp. 49–62, https://doi.org/10.2307/3214948.
2087
+ [9] Y. Hajime, A probabilistic approach to Stirling numbers of the first kind, Comm. in Stat.-Theory Meth.,
2088
+ 19 (1990), pp. 3915–3923, https://doi.org/10.1080/03610929008830421.
2089
+ [10] N. J. Higham, Accuracy and Stability of Numerical Algorithms, Second Edition, SIAM, Philadelphia,
2090
+ PA, 2002.
2091
+ [11] H. Huang and K. Tikhomirov, Average-case analysis of the Gaussian elimination with partial pivoting,
2092
+ 2022, https://arxiv.org/abs/arXiv:2206.01726.
2093
+ [12] P. Y. Konstantin Tikhomirov, Outliers in spectrum of sparse Wigner matrices, Rand. Struct. & Alg.,
2094
+ 58 (2021), pp. 517–605, https://doi.org/10.1002/rsa.20982.
2095
+ [13] P.-G. Martinsson and J. A. Tropp, Randomized numerical linear algebra: Foundations and algorithms,
2096
+ Acta Numerica, 29 (2020), pp. 403–572, https://doi.org/10.1017/S0962492920000021.
2097
+ [14] P. Matchett Wood, Universality and the circular law for sparse random matrices, Ann. of Appl. Prob.,
2098
+ 22 (2012), pp. 1266–1300.
2099
+ [15] F. Mezzadri, How to generate random matrices from the classical compact groups, Notices of the Amer-
2100
+ ican Mathematical Society, 54 (2007), pp. 592 – 604.
2101
+ [16] D. S. Parker, Random butterfly transformations with applications in computational linear algebra, Tech.
2102
+ rep., UCLA, (1995).
2103
+ [17] J. Peca-Medlin and T. Trogdon, Growth factors of random butterfly matrices and the stability of
+ removing pivoting, 2022, https://arxiv.org/abs/arXiv:2203.15921.
2108
+ [18] M. Rudelson and K. Tikhomirov, The sparse circular law under minimal assumptions, Geom. Funct.
2109
+ Anal., 29 (2019), pp. 561–637, https://doi.org/10.1007/s00039-019-00492-6.
2110
+ [19] G. W. Stewart, The efficient generation of random orthogonal matrices with an application to condition
2111
+ estimators, SIAM J. Numer. Anal., 17 (1980), pp. 403–409, https://doi.org/10.1137/0717034.
2112
+ [20] G. Strang, The discrete cosine transform, SIAM Review, 41 (1999), pp. 135–147, https://doi.org/10.
2113
+ 1137/S0036144598336745.
2114
+ [21] T. Tao and V. Vu, Random matrices: Universality of ESDs and the circular law, Ann. Probab., 38
2115
+ (2010), pp. 2023–2065, https://doi.org/10.1214/10-AOP534.
2116
+ [22] K. Tikhomirov, Singularity of random Bernoulli matrices, Ann. Math, 191 (2020), pp. 593–634, https:
2117
+ //doi.org/10.4007/annals.2020.191.2.6.
2118
+ [23] T. Trogdon, On spectral and numerical properties of random butterfly matrices, Applied Math. Letters,
2119
+ 95 (2019), pp. 48–58, https://doi.org/10.1016/j.aml.2019.03.024.
2120
+ [24] J. A. Tropp, Improved analysis of the subsampled randomized Hadamard transform, Adv. Adapt. Data
2121
+ Anal., 3 (2011), pp. 115–126, https://doi.org/10.1142/S1793536911000787.
2122
+ [25] A. Weil, L’int´egration dans les groupes topologiques et ses applications, Actualit´es Scientifiques et In-
2123
+ dustrielles, vol. 869, Paris: Hermann, 1940.
2124
+ [26] J. Wilkinson, Error analysis of direct methods of matrix inversion, J. Assoc. Comput. Mach., 8 (1961),
2125
+ pp. 281–330, https://doi.org/10.1145/321075.321076.
2126
+
BdFQT4oBgHgl3EQf9zeq/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
BtE1T4oBgHgl3EQfpgUG/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f8b74d644e43c8ed94d24a606ba030d1194cbd323a46bf60c94d6cdbaa2ef324
3
+ size 2490413
GNAyT4oBgHgl3EQfrfkX/content/tmp_files/2301.00560v1.pdf.txt ADDED
@@ -0,0 +1,851 @@
1
+ arXiv:2301.00560v1 [quant-ph] 2 Jan 2023
2
+ PauliComposer: Compute Tensor Products of Pauli Matrices Efficiently
3
+ Sebastián V. Romero1∗ and Juan Santos-Suárez2†
+ 1TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain
+ 2Instituto Galego de Física de Altas Enerxías (IGFAE),
+ Universidade de Santiago de Compostela, 15705 Santiago de Compostela, Spain
9
+ (Dated: January 3, 2023)
10
+ We introduce a simple algorithm that efficiently computes tensor products of Pauli matrices. This
+ is done by tailoring the calculations to this specific case, which makes it possible to avoid unneces-
+ sary calculations. The strength of this strategy is benchmarked against state-of-the-art techniques,
+ showing a remarkable acceleration. As a by-product, we provide an optimized method for one key
+ calculation in quantum simulations: the Pauli basis decomposition of Hamiltonians.
15
+ I.
16
+ INTRODUCTION
17
+ Pauli matrices [1] are one of the most important and
18
+ well-known sets of matrices within the field of quantum
19
+ physics. They are particularly important both in physics
20
+ and chemistry when used to describe Hamiltonians of
21
+ many-body spin glasses [2–7] or for quantum simula-
22
+ tions [8–13].
23
+ The vast majority of these systems are
24
+ out of analytic control so that non-equilibrium states
25
+ are usually studied through exact diagonalization which
26
+ requires their Hamiltonians to be written in its matrix
27
+ form. While this task may be regarded as a trivial mat-
28
+ ter in a mathematical sense, it involves the calculation of
29
+ an exponentially growing number of operations.
30
+ In this work, we present the PauliComposer (PC) al-
31
+ gorithm which significantly expedites this calculation. It
32
+ exploits the fact that any Pauli word only has one ele-
33
+ ment different from zero per row and column, so a num-
34
+ ber of calculations can be avoided.
35
+ Additionally, each
36
+ matrix entry can be computed without performing any
37
+ multiplications. This algorithm can be used to boost in-
38
+ ner calculations where several tensor products involving
39
+ Pauli matrices appear. In particular, those that appear
40
+ while building Hamiltonians as weighted sums of Pauli
41
+ strings or decomposing an operator in the Pauli basis.
42
+ The PC algorithm could be implemented in compu-
43
+ tational frameworks in which this sort of operations are
44
+ crucial, such as the Python modules Qiskit [14], Penny-
45
+ Lane [15], OpenFermion [16] and Cirq [17]. It can also
46
+ potentially be used in many other applications, such as
47
+ the Pauli basis decomposition of the Fock space [18] and
48
+ conventional computation of Ising model Hamiltonians
49
+ to solve optimization problems [19–22], among others.
50
+ The rest of the article is organized as follows: in Sec-
51
+ tion II we describe the algorithm formulation in depth,
52
+ showing a pseudocode-written routine for its computa-
53
+ tion. In Section III, a set of benchmark tests is performed
54
+ to show that a remarkable speed-up can be achieved
55
56
57
+ when compared to state-of-the-art techniques.
58
+ In Sec-
59
+ tion IV, we show how this Pauli Composer algorithm
60
+ can be used to solve relevant problems. Finally, the con-
61
+ clusions drawn from the presented results are given in
62
+ Section V. We provide proofs for several statements and
63
+ details of the algorithm in the appendices.
64
+ II.
65
+ ALGORITHM FORMULATION
66
+ In this section we discuss the PC algorithm formulation
67
+ in detail. Pauli matrices are hermitian, involutory and
68
+ unitary matrices that together with the identity form the
69
+ set σ{0,1,2,3} = {I, X, Y, Z}. Given an input string x =
70
+ xn−1 . . . x0 ∈ {0, 1, 2, 3}n, the PC algorithm constructs
71
+ P(x) := σxn−1 ⊗ σxn−2 ⊗ · · · ⊗ σx0.
72
+ (1)
73
+ Let us denote its matrix elements as Pj,k(x) with
74
+ j, k = 0, . . . , 2n − 1. It is important to remark that for
75
+ each row j, there will be a single column k(j) such that
76
+ Pj,k(j) ̸= 0 (see Appendix A). The solution amounts to a
77
+ map from the initial Pauli string to the positions and val-
78
+ ues of the 2n nonzero elements. This calculation will be
79
+ done sequentially, hence the complexity of the algorithm
80
+ will be bounded from below by this number.
81
+ As a first step, it is worth noting that Pauli string
82
+ matrices are either real (all elements are ±1) or purely
83
+ imaginary (all are ±i). This depends on nY , the number
84
+ of Y operators in P(x). We can redefine ˜Y := iY , so that
85
+ ˜σ{0,1,2,3} = {I, X, ˜Y , Z} and
86
+ ˜P(x) := ˜σxn−1 ⊗ · · · ⊗ ˜σx0.
87
+ As a result, every entry in ˜P(x) will be ±1. This implies
88
+ that there is no need to compute any multiplication: the
89
+ problem reduces to locating the nonzero entries in ˜P(x)
90
+ and tracking sign changes. The original P(x) can be re-
91
+ covered as P(x) = (−i)^{nY mod 4} ˜P(x).
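This phase bookkeeping can be checked numerically; a small sketch (the three-factor example string is ours):

```python
import numpy as np

# Pauli matrices and the phase-free substitute \tilde{Y} := iY (a real matrix)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
Yt = 1j * Y

# Example: P = Y ⊗ Z ⊗ Y has nY = 2 factors of Y
P = np.kron(Y, np.kron(Z, Y))
Pt = np.kron(Yt, np.kron(Z, Yt))      # phase-free version, all entries ±1

assert np.allclose(Pt.imag, 0)                  # \tilde{P}(x) is real
assert np.allclose(P, (-1j) ** (2 % 4) * Pt)    # P(x) = (-i)^(nY mod 4) \tilde{P}(x)
```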
92
+ We will now present an iterative procedure to compute
93
+ ˜P by finding for each row j the nonzero column number
94
+ k(j) and its corresponding value ˜Pj,k(j). For the first row,
95
+ j = 0, the nonzero element ˜P0,k(0), can be found at
96
+ k(0) = [y(x_{n−1}) . . . y(x_0)]_{10},   (2)
98
+ where [an−1 . . . a0]10 is the decimal representation of the
99
+ bit string a = a_{n−1} 2^{n−1} + · · · + a_0 2^0 and y(xi) tracks the
100
+
102
+ diagonality of σxi, being equal to 0 if xi ∈ {0, 3} and 1
103
+ otherwise. The value of this entry is
104
+ ˜P_{0,k(0)} = +1 =⇒ P_{0,k(0)} = (−i)^{nY mod 4}.   (3)
106
+ The following entries can be computed iteratively. At
107
+ the end of stage l, with l = 0, · · · , n − 1, all nonzero
108
+ elements in the first 2l+1 rows of Pj,k(j) will have been
109
+ computed using the information given by the substring
110
+ xl . . . x0. At the next step, l + 1, the following 2l rows
111
+ are filled using the ones that had already been computed,
112
+ where the row-column relation k(j) is given by
113
+ k(j + 2^l) = k(j) + (−1)^{y(x_l)} 2^l,   j = 0, . . . , 2^l − 1.   (4)
116
+ The second term of the RHS of this relation takes into ac-
117
+ count the way that the blocks of zeros returned at stage
118
+ l affect the new relative location of the nonzero blocks
119
+ within the new 2l+1 × 2l+1 subcomposition. Its corre-
120
+ sponding values are obtained from the previous ones, up
121
+ to a possible change of sign given by
122
+ P_{j+2^l, k(j+2^l)} = ǫ_l P_{j,k(j)},   (5)
124
+ with ǫl equal to 1 if xl ∈ {0, 1} and −1 otherwise. This ǫl
125
+ is nothing but a parameter that takes into account if σxl
126
+ introduces a sign flip. In Alg. 1 a pseudocode that sum-
127
+ marises the presented algorithm using (2)-(5), is shown.
128
+ For the particular case of diagonal Pauli strings (only
129
+ I and Z matrices), there is no need to compute the row-
130
+ column relation k(j), just the sign assignment is enough.
131
+ Even if this is also the case for anti-diagonal matrices, we
132
+ focus on the diagonal case due to its relevance in combi-
133
+ natorial problems [19–22]. See Alg. 2 for the pseudocode
134
+ of this case (PDC stands for Pauli Diagonal Composer).
135
+ The PC algorithm is able to circumvent the calculation
136
+ of a significant amount of operations. When generic Kro-
137
+ necker product routines (see Appendix B) are used for
138
+ the same task, the amount of multiplications needed for
139
+ computing a Pauli string is O[n22n] and O[n2n] for dense
140
+ and sparse matrices, respectively. In contrast, the PC al-
141
+ gorithm, considering the worst-case scenarios, needs
142
+ • {I, Z}⊗n: O[2n] changes of sign.
143
+ • Otherwise: O[2n] sums and O[2n] changes of sign.
144
+ In all cases this novel algorithm can significantly out-
145
+ perform those that are not specifically designed for Pauli
146
+ matrices.
147
+ On top of that, this method is also advantageous
148
+ for computing weighted Pauli strings.
149
+ Following (3),
150
+ W := ωP, with arbitrary ω, can be computed by defining
151
+ W_{0,k(0)} = ω(−i)^{nY mod 4}, which avoids having to do any
+ extra multiplication. This change is reflected in Alg. 1 by
+ changing line 6 to m(0) ← ω(−i)^{nY mod 4} and line 4 to
+ m(0) ← ω in Alg. 2. This is especially important as it can
+ be used to compute Hamiltonians written as a weighted
+ sum of Pauli strings, where H = Σ_x ω_x P(x).
158
+ Algorithm 1: PC: compose n Pauli matrices
+ input : xn−1xn−2 . . . x0 ← string with xi ∈ {0, 1, 2, 3}
+ 1 n ← len(x)
+ 2 nY ← number of Y matrices in x
+ 3 j ← range(0, 2^n − 1)                  // rows
+ 4 k, m ← empty 2^n-array                 // columns/entries
+ 5 k(0) ← y(xn−1) . . . y(x0) in base 10
+ 6 m(0) ← (−i)^{nY mod 4}
+ 7 for l ∈ range(0, n − 1) do
+ 8   k(2^l : 2^{l+1} − 1) ← k(0 : 2^l − 1) + (−1)^{y(xl)} 2^l
+ 9   if xl ∈ {0, 1} then                  // ǫl = 1
+ 10    m(2^l : 2^{l+1} − 1) ← m(0 : 2^l − 1)
+ 11  else                                 // ǫl = −1
+ 12    m(2^l : 2^{l+1} − 1) ← −m(0 : 2^l − 1)
+ output: P(x) as a sparse matrix stacking (j, k, m)
182
+ Algorithm 2: PDC: compose n diagonal Pauli matrices
+ input : xn−1xn−2 . . . x0 ← string with xi ∈ {0, 3}
+ 1 n ← len(x)
+ 2 j, k ← range(0, 2^n − 1)               // rows/columns
+ 3 m ← empty 2^n-array                    // entries
+ 4 m(0) ← 1
+ 5 for l ∈ range(0, n − 1) do
+ 6   if xl = 0 then                       // ǫl = 1
+ 7     m(2^l : 2^{l+1} − 1) ← m(0 : 2^l − 1)
+ 8   else                                 // ǫl = −1
+ 9     m(2^l : 2^{l+1} − 1) ← −m(0 : 2^l − 1)
+ output: P(x) as a sparse matrix stacking (j, k, m)
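Algorithms 1–2 can be prototyped compactly. A minimal Python sketch of the PC routine following (2)–(5) (the naming is ours; sparse output via SciPy):

```python
import numpy as np
from scipy import sparse

def pauli_composer(x):
    """Sketch of the PC algorithm: build P(x) = sigma_{x_{n-1}} ⊗ ... ⊗ sigma_{x_0}
    as a sparse matrix, where x is a string over {0,1,2,3} = {I,X,Y,Z}."""
    n = len(x)
    xs = [int(c) for c in x]              # x = x_{n-1} ... x_0, left to right
    ny = xs.count(2)                      # number of Y factors
    dim = 2 ** n
    # y(x_i) = 0 for diagonal factors (I, Z), 1 for anti-diagonal (X, Y)
    y = [1 if v in (1, 2) else 0 for v in xs]
    k = np.empty(dim, dtype=np.int64)     # column of the nonzero entry in row j
    m = np.empty(dim, dtype=complex)      # value of that entry
    # First row, Eqs. (2)-(3)
    k[0] = int("".join(str(b) for b in y), 2)
    m[0] = (-1j) ** (ny % 4)
    # Iterative filling, Eqs. (4)-(5); x_l is the l-th factor from the right
    for l in range(n):
        xl = xs[n - 1 - l]
        p = 2 ** l
        k[p:2 * p] = k[:p] + (-1) ** y[n - 1 - l] * p
        m[p:2 * p] = m[:p] if xl in (0, 1) else -m[:p]
    rows = np.arange(dim)
    return sparse.csr_matrix((m, (rows, k)), shape=(dim, dim))
```

For example, `pauli_composer("231")` builds σ2 ⊗ σ3 ⊗ σ1 = Y ⊗ Z ⊗ X without performing any multiplications.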
202
+ III.
203
+ BENCHMARKING
204
+ In this section we analyse the improvement that the
205
+ PC strategy introduces against the methods presented in
206
+ Appendix B in two figures of merit: memory storage and
207
+ execution times. For this purpose, we use MATLAB [23]
208
+ (which incorporates optimized routines of the well-known
209
+ BLAS and LAPACK libraries [24–28]) and, only for the
210
+ PC, also Python [29] since many quantum computing
211
+ libraries are written in this language [14–17]. See Tab. I
212
+ for a full description of the computational resources used.
213
+ Concerning memory needs, with this algorithm only 2^n
+ nonzero elements out of 2^{2n} are stored. This is exactly
215
+ the same as using sparse matrices, thus, no major im-
216
+ provement is to be expected. As for the computational
217
+ time, we compare how different algorithms behave as the
218
+ length n of the Pauli string increases. In Fig. 1 execu-
219
+ Table I. Computer and software specifications.
220
+ Processor: Intel® Core™ i7-11850H (16×2.50 GHz)
+ RAM: 32.0 GB (DDR4)
+ OS: Ubuntu 22.04.1 LTS (×64)
+ MATLAB [23]: 9.12.0.1884302 (R2022a)
+ Python [29]: 3.9.12
+ NumPy [30]: 1.23.2
+ SciPy [31]: 1.9.0
+ Qiskit [14]: 0.38.0
+ PennyLane [15]: 0.23.1
238
+
239
+ [Figure 1: log-scale plot of execution times (s) vs. n; curves: Naive, Mixed, Alg993 [32], Tree, PC/PDC (M), PC/PDC (P)]
259
+ Figure 1. Execution times for computing general (solid line)
260
+ and diagonal n-Pauli strings (dashed line) using different
261
+ methods. Here, M stands for MATLAB and P for Python.
262
+ tion times for general and diagonal Pauli strings (solid
263
+ and dashed lines, respectively) are shown. For the Pauli
264
+ Composer methods, we use the PC routine (Alg. 1) for
265
+ the general case and the PDC routine (Alg. 2) for the di-
266
+ agonal one. In accordance with our theoretical analysis, the
267
+ PC algorithm proves to be the best performing routine.
268
+ On a more technical note, when using the PC rou-
269
+ tine, matrices with complex values (nY odd) take twice as
270
+ much time as real valued ones (nY even). Consequently,
271
+ we compute their execution times separately and then
272
+ average them. Moreover, it is convenient to choose when
273
+ to use PC or PDC as the latter can be up to 10 times faster.
274
+ IV.
275
+ REAL USE CASES OF THE PAULI
276
+ COMPOSER ALGORITHM
277
+ The PC algorithm can be used to perform useful cal-
278
+ culations in physics. In this section, the Pauli basis de-
279
+ composition of a Hamiltonian and the construction of a
280
+ Hamiltonian as a sum of weighted Pauli strings are dis-
281
+ cussed in detail. Another scenario worth mentioning is
282
+ the digital implementation of the complex exponential of
283
+ a Pauli string, i.e. e^{−iθP(x)} = cos(θ)I − i sin(θ)P(x).
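Since P(x)² = I, this exponential needs no matrix exponentiation; a small dense sketch (np.kron for illustration only, names ours):

```python
import numpy as np

# Pauli matrices, indexed as in the text: 0=I, 1=X, 2=Y, 3=Z
SIGMA = [np.eye(2), np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

def pauli_string(x):
    """Dense P(x) built directly with np.kron (illustration, not the PC routine)."""
    P = np.eye(1)
    for xi in x:                          # x = x_{n-1} ... x_0
        P = np.kron(P, SIGMA[xi])
    return P

def exp_pauli(theta, x):
    """exp(-i*theta*P(x)) = cos(theta) I - i sin(theta) P(x), valid since P(x)^2 = I."""
    dim = 2 ** len(x)
    return np.cos(theta) * np.eye(dim) - 1j * np.sin(theta) * pauli_string(x)
```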
284
+ A.
285
+ Pauli basis decomposition of a Hamiltonian
286
+ The decomposition of a Hamiltonian written as a 2^n ×
+ 2^n matrix into the Pauli basis is a common problem in
288
+ quantum computing. Given a general Hamiltonian H,
289
+ this decomposition can be written as
290
+ H =
291
+
292
+ x ωx
293
+
294
+ σxn−1 ⊗ · · · ⊗ σx0
295
+
296
+ =
297
+
298
+ x ωxP(x),
299
+ (6)
300
+ with x = xn−1 . . . x0 and P(x) as in (1). The coefficients
301
+ ωx are obtained from the orthogonal projection as
302
+ ωx = 1
303
+ 2n tr[P(x)H] = 1
304
+ 2n
305
+ 2n−1
306
+
307
+ j=0
308
+ Pj,k(j)(x)Hk(j),j.
309
+ (7)
310
+ Following the discussion in Section II, the double sum
311
+ collapses to a single one in (7) since there is only one
312
+ nonzero element per row and column.
313
+ Additionally, in some special cases, it can be known in
314
+ advance if some set of ωx will vanish:
315
+ • If H is symmetric, strings with an odd number of
316
+ Y matrices can be avoided (2^{n−1}(2^n + 1) terms).
317
+ • If H is diagonal, only strings composed by I and Z
318
+ will contribute (2^n terms).
319
+ The amount of operations made by this Pauli Decom-
320
+ poser (PD) is given by the following list
321
+ • If H is diagonal (O[2^n] strings): O[2^{2n}] operations.
+ • Otherwise (O[2^{2n}] strings): O[2^{3n}] operations.
323
+ This PD algorithm checks if the input matrix satisfies
324
+ one of the special cases defined above, discards all van-
325
+ ishing Pauli strings and computes the coefficients of the
326
+ remaining ones using the PC routine and (7). This work-
327
+ flow considerably enhances our results, especially for di-
328
+ agonal matrices.
329
+ In Tab. II, we tested the most extended methods for de-
330
+ composing matrices into weighted sums of Pauli strings
331
+ against PD using Python [29] to compare their perfor-
332
+ mance. In particular, we used the SparsePauliOp class
333
+ from Qiskit [14] and the decompose hamiltonian func-
334
+ tion from PennyLane [15] (only works with hermitian
335
+ Hamiltonians). Four types of random 2n × 2n matrices
336
+ were generated, namely non-hermitian HNH, hermitian
337
+ HH, symmetric HS and diagonal HD matrices. The PD
338
+ vastly outperforms Qiskit and PennyLane routines, spe-
339
+ cially for the symmetric and diagonal cases.
340
+ B.
341
+ Building of a Hamiltonian as a sum of weighted
342
+ Pauli strings
343
+ Many Hamiltonians are written in terms of weighted
344
+ Pauli strings. As mentioned, our method can compute
345
+ weighted Pauli strings directly without performing extra
346
+ computations.
347
+ In Fig. 2 we show a performance com-
348
+ parison of the presented methods for computing Hamil-
349
+ tonians written as sums of weighted Pauli strings. The
350
+ Hamiltonian used is similar to the one proposed in [21],
351
+ H = Σ_{i=0}^{n−1} α_i σ^i_3 + Σ_{i<j}^{n−1} β_{i,j} σ^i_3 σ^j_3,   (8)
364
+
366
+ Table II. Execution times (in seconds) for decomposing an arbitrary 2^n × 2^n matrix. In brackets we see the number of threads
+ used by each routine. Here, PC and PDC run under Python code as well as Qiskit [14] and PennyLane [15].
368
+ n                2       3       4       5       6       7      8      9     10
+ Non-hermitian matrix HNH
+ PC (×1)          0.0005  0.0021  0.012   0.078   0.55    4.06   31.2   254   2008
+ Qiskit (×16)     0.0015  0.0050  0.020   0.14    1.16    8.78   92.38  1398  26938
+ Hermitian matrix HH
+ PC (×1)          0.0004  0.0021  0.012   0.078   0.56    4.24   32.86  261   2007
+ Qiskit (×16)     0.0010  0.0035  0.018   0.10    1.47    12.02  108.3  1295  26848
+ PennyLane (×16)  0.0013  0.0060  0.030   0.15    2.23    10.66  97.6   2019  35014
+ Symmetric matrix HS
+ PC (×1)          0.0003  0.0010  0.0058  0.036   0.24    1.78   14.05  108   794
+ Qiskit (×16)     0.0010  0.0036  0.018   0.10    1.45    11.07  104.6  1320  26399
+ PennyLane (×16)  0.0011  0.0054  0.027   0.13    1.36    9.22   91.52  1477  31583
+ Diagonal matrix HD
+ PDC (×1)         0.0001  0.0002  0.0006  0.0018  0.0068  0.025  0.094  0.37  1.49
+ Qiskit (×16)     0.0010  0.0035  0.018   0.10    1.46    11.0   103.3  1270  25977
+ PennyLane (×16)  0.0010  0.0047  0.023   0.11    1.20    8.29   86.17  1370  30941
492
+ being the corresponding weights ⃗α = [α0, . . . , αn−1] and
493
+ ⃗β = [β0,1, . . . , β0,n−1, β1,2, . . . , βn−2,n−1] arbitrary and σ^i_3
495
+ as defined in (B1) ∀i, j. This Hamiltonian is computed
496
+ using Alg. 3, which uses the PDC routine (see Alg. 2)
497
+ with two inputs: the string x ∈ {0, 3}n to compute and
498
+ the weights to consider. In the PDC case, we use two
499
+ strategies: compute each weighted term of (8) directly
500
+ and compute each Pauli string and then multiply it by
501
+ its corresponding weight (solid and dashed lines in Fig. 2,
502
+ respectively). This is done by changing lines 6 to H ←
503
+ H + αiPDC(str1) and 10 to H ← H + βi,jPDC(str2)
504
+ in Alg. 3 for the second one.
505
+ There is no remarkable
506
+ difference between both methods.
507
+ [Figure 2: log-scale plot of execution times (s) vs. n; curves: Naive, Tree, PDC (M), PDC (P)]
527
+ Figure 2. Execution times for computing (8) using Alg. 3
+ (solid line) and computing the Pauli string first and then
+ multiplying it by its corresponding weight (dashed line).
531
+ Algorithm 3: Ising model Hamiltonian computation
+ input : ⃗α, ⃗β ← lists of weights
+ 1 n ← len(⃗α)
+ 2 H ← 2^n × 2^n sparse matrix of zeros
+ 3 for i ∈ range(0, n − 1) do
+ 4   str1 ← string of n zeros             // n identities
+ 5   str1(i) ← 3                          // Z in the i-th position
+ 6   H ← H + PDC(str1, αi)
+ 7   for j ∈ range(i + 1, n − 1) do
+ 8     str2 ← copy(str1)
+ 9     str2(j) ← 3                        // Z in the j-th position
+ 10    H ← H + PDC(str2, βi,j)
+ output: Hamiltonian H as a sparse matrix
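A minimal Python sketch of this construction (our own naming; it exploits that {I, Z} strings are diagonal with ±1 entries, mirroring the PDC sign-doubling of Alg. 2):

```python
import numpy as np
from scipy import sparse

def z_string_diag(x):
    """Diagonal of a {I,Z} Pauli string, x = [x_{n-1}, ..., x_0] with values
    in {0, 3}: double the sign pattern once per factor, as in Eq. (5)."""
    m = np.ones(1)
    for xl in reversed(x):                # process x_0 first
        m = np.concatenate([m, m if xl == 0 else -m])
    return m

def ising_hamiltonian(alpha, beta):
    """Sketch of Alg. 3: H = sum_i alpha_i Z_i + sum_{i<j} beta_{ij} Z_i Z_j,
    assembled as a sparse diagonal matrix. Here beta is a dict {(i, j): weight}
    and position i of the string is the i-th tensor factor from the left."""
    n = len(alpha)
    diag = np.zeros(2 ** n)
    for i, a in enumerate(alpha):
        x = [0] * n
        x[i] = 3                          # Z in the i-th position
        diag += a * z_string_diag(x)
    for (i, j), b in beta.items():
        x = [0] * n
        x[i] = x[j] = 3
        diag += b * z_string_diag(x)
    return sparse.diags(diag)
```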
554
+ V.
555
+ CONCLUSIONS
556
+ The fast and reliable computation of tensor products
557
+ of Pauli matrices is crucial in the field of quantum me-
558
+ chanics and, in particular, of quantum computing.
559
+ In
560
+ this article we propose a novel algorithm with proven
561
+ theoretical and experimental enhancements over similar
562
+ methods for this key yet computationally tedious task.
563
+ This is achieved by taking advantage of the properties of
564
+ Pauli matrices and the tensor product definition, which
565
+ implies that one can avoid trivial operations such as mul-
+ tiplying constants by one, and avoid wasting time on ele-
+ ments whose zero value could be known in advance.
568
+ Concerning memory resources, it is convenient to store
569
+ the obtained results as sparse matrices since only 2^n out
+ of 2^{2n} entries will not be zero for a Pauli string of length
+ n, i.e. the density of the resultant matrix will be 2^{−n}
572
+ (see Appendix A).
573
+
575
+ Our benchmark tests suggest that the Pauli Composer
576
+ algorithm and its variants can achieve a remarkable accel-
577
+ eration when compared to the most well-known methods
578
+ for the same purpose both for single Pauli strings and
579
+ real use cases. In particular, the most considerable out-
580
+ performance can be seen in Tab. II for the symmetric and
581
+ diagonal matrix decomposition over the Pauli basis.
582
+ Finally, its simple implementation (Alg. 1-2) can po-
583
+ tentially allow to integrate the PC routines into quantum
584
+ simulation packages to enhance inner calculations.
585
+ ACKNOWLEDGMENTS
586
+ We would like to thank Javier Mas Solé, Yue Ban
+ and Mikel García de Andoin for the helpful discus-
588
+ sions that led to the present article.
589
+ This research is
590
+ funded by the QUANTEK project (ELKARTEK pro-
591
+ gram from the Basque Government, expedient no. KK-
592
+ 2021/00070) and the project “BRTA QUANTUM: Hacia
593
+ una especialización armonizada en tecnologías cuánticas
594
+ en BRTA” (expedient no. KK-2022/00041). The work
595
+ of JSS has received support from Xunta de Galicia
596
+ (Centro singular de investigación de Galicia accreditation
597
+ 2019-2022) by European Union ERDF, from the Span-
598
+ ish Research State Agency (grant PID2020-114157GB-
599
+ 100) and from MICIN with funding from the European
600
+ Union NextGenerationEU (PRTR-C17.I1) and the Gali-
601
+ cian Regional Government with own funding through
602
+ the “Planes Complementarios de I+D+I con las Comu-
603
+ nidades Autónomas” in Quantum Communication.
604
+ Data and code availability statement.
605
+ The data
606
+ and code used in the current study are available upon
607
+ reasonable request from the corresponding authors.
608
+ Appendix A: Some proofs regarding Pauli strings
609
+ In this section we prove two key properties of Pauli
610
+ strings on which our algorithm is based.
611
+ Theorem A.1. A Pauli string P(x) of length n given
612
+ by (1) has only 2^n nonzero entries.
613
+ Proof. With the help of Fig. 3, we can compute the num-
614
+ ber of zeros in the resulting matrix as
615
+ n0(n) = 2 (2^{n−1} × 2^{n−1}) + 4 (2^{n−2} × 2^{n−2}) + 8 (2^{n−3} × 2^{n−3})
+ + · · · + 2^n (1 × 1) = Σ_{k=n}^{2n−1} 2^k = 2^n (2^n − 1).   (A1)
631
+ In other words, P(x) will have only 2^n nonzero terms.
632
+ We can prove (A1) by induction easily: n0(n = 1) is true
633
+ [Figure 3 content: the tensor product ⊗_{i=0}^{n−1} σ_{x_{n−i−1}} drawn as a block matrix with its zero blocks marked]
663
+ Figure 3. Scheme for computing the number of zeros of an
664
+ arbitrary composition of n Pauli matrices.
665
+ since n0(1) = 21(21 − 1) = 2 and if we assume that n0(n)
666
+ holds, we can see that
667
+ n0(n + 1) =
668
+ 2(n+1)−1
669
+
670
+ k=n+1
671
+ 2k = 2n+1 �
672
+ 2n+1 − 1
673
+
674
+ also holds true.
675
From this result and the unitarity of P(x), we can infer another important property.

Corollary A.1.1. A Pauli string P(x) of length n given by (1) has exactly one nonzero entry per row and column.

Proof. Since the tensor product of unitary matrices is also unitary, |det P(x)| = 1. From Th. A.1, only 2^n entries of the resulting 2^n × 2^n matrix are nonzero. The only way to place them without leaving a row or a column entirely zero, which would force the determinant to vanish, is to have exactly one nonzero entry in each row and each column.
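Both statements can be checked numerically. The following sketch (ours, not part of the paper's code) builds P(x) with np.kron and verifies Theorem A.1, Eq. (A1) and Corollary A.1.1 for random strings:

```python
import numpy as np

# The four Pauli matrices, indexed as in the paper: 0=I, 1=X, 2=Y, 3=Z.
PAULIS = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]]),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def pauli_string(x):
    """P(x) = sigma_{x_{n-1}} (x) ... (x) sigma_{x_0}; x is given leftmost-first."""
    out = PAULIS[x[0]]
    for i in x[1:]:
        out = np.kron(out, PAULIS[i])
    return out

rng = np.random.default_rng(0)
for n in range(1, 8):
    x = list(rng.integers(0, 4, size=n))
    P = pauli_string(x)
    nnz = np.count_nonzero(P)
    assert nnz == 2**n                               # Theorem A.1
    assert P.size - nnz == 2**n * (2**n - 1)         # Eq. (A1)
    assert (np.count_nonzero(P, axis=0) == 1).all()  # Corollary A.1.1 (columns)
    assert (np.count_nonzero(P, axis=1) == 1).all()  # Corollary A.1.1 (rows)
```

The dense construction here is only for verification; it is exactly the exponentially costly step the PC algorithm avoids.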
686
Appendix B: Standard methods for computing tensor products

For the sake of completeness, in this appendix we briefly review the well-established algorithms that were used in the benchmark [32–34]. First, one can consider what we call the Naive algorithm, which consists of performing the calculations directly. It is clearly highly inefficient, as it scales in the number of operations as O[n2^n] for sparse Pauli matrices. Second, the Mixed algorithm uses the mixed-product property

    ⊗_{i=0}^{n-1} σ_{x_{n-i-1}} = ∏_{i=0}^{n-1} σ^{(i)}_{x_{n-i-1}},

with

    σ^{(i)}_{x_i} := { I^{⊗(n-1)} ⊗ σ_{x_0}               if i = 0,
                     { I^{⊗(n-i-1)} ⊗ σ_{x_i} ⊗ I^{⊗i}    if 0 < i < n − 1,    (B1)
                     { σ_{x_{n-1}} ⊗ I^{⊗(n-1)}           if i = n − 1,

to simplify the calculation into a simple product of block-diagonal matrices. Based on this procedure, Algorithm 993 is presented in [32]. It can be shown that this method performs O[n2^n] operations. Besides that, as Fig. 1 suggests, the fact that it requires transposing and reshaping several matrices has a non-negligible effect that fatally increases its computation time. Finally, the Tree routine starts by storing pairs of tensor products as

    { σ_{x_{n-2i-1}} ⊗ σ_{x_{n-2i-2}} }_{i=0}^{n/2-1}                        if n is even,
    { σ_{x_{n-1}} } ∪ { σ_{x_{n-2i-2}} ⊗ σ_{x_{n-2i-3}} }_{i=0}^{⌊n/2⌋-1}    otherwise,

and proceeds with the resulting matrices following the same logic, which allows one to compute (1) by iteratively grouping its terms in pairs. For better results, this method can be parallelized.
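As an illustration of the two routines just described, here is a minimal sketch (written by us from the descriptions above; the benchmarked MATLAB code may differ) of the Naive and Tree strategies using scipy.sparse, checked for consistency:

```python
import numpy as np
from scipy.sparse import csr_matrix, identity, kron

# Sparse Pauli factors, indexed 0=I, 1=X, 2=Y, 3=Z.
PAULIS = {
    0: identity(2, format="csr", dtype=complex),
    1: csr_matrix(np.array([[0, 1], [1, 0]], dtype=complex)),
    2: csr_matrix(np.array([[0, -1j], [1j, 0]])),
    3: csr_matrix(np.array([[1, 0], [0, -1]], dtype=complex)),
}

def naive(x):
    """Naive routine: accumulate the Kronecker product left to right."""
    out = PAULIS[x[0]]
    for i in x[1:]:
        out = kron(out, PAULIS[i], format="csr")
    return out

def tree(x):
    """Tree routine: pair neighbouring factors and iterate on the results."""
    mats = [PAULIS[i] for i in x]          # leftmost factor first
    while len(mats) > 1:
        if len(mats) % 2:                  # odd: keep the leading factor alone
            head, rest = mats[:1], mats[1:]
        else:
            head, rest = [], mats
        mats = head + [kron(rest[i], rest[i + 1], format="csr")
                       for i in range(0, len(rest), 2)]
    return mats[0]

x = [2, 0, 1, 3, 1]                        # Y (x) I (x) X (x) Z (x) X
assert np.allclose(naive(x).toarray(), tree(x).toarray())
```

The Tree version performs the same Kronecker products but on balanced pairs, which is what makes it amenable to parallelization.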
750
[1] W. Pauli, Zur Quantenmechanik des magnetischen Elektrons, Zeitschrift für Physik 43, 601 (1927).
[2] W. Heisenberg, Zur Theorie des Ferromagnetismus, Zeitschrift für Physik 49, 619 (1928).
[3] H. Bethe, Zur Theorie der Metalle, Zeitschrift für Physik 71, 205 (1931).
[4] D. Sherrington and S. Kirkpatrick, Solvable Model of a Spin-Glass, Phys. Rev. Lett. 35, 1792 (1975).
[5] D. Panchenko, The Sherrington-Kirkpatrick Model: An Overview, Journal of Statistical Physics 149, 362 (2012).
[6] J. Hubbard and B. H. Flowers, Electron Correlations in Narrow Energy Bands, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 276, 238 (1963).
[7] A. Altland and B. Simons, Second Quantization, in Condensed Matter Field Theory (Cambridge University Press, 2006) pp. 39–93.
[8] P. Jordan and E. Wigner, Über das Paulische Äquivalenzverbot, Zeitschrift für Physik 47, 631 (1928).
[9] S. B. Bravyi and A. Y. Kitaev, Fermionic Quantum Computation, Annals of Physics 298, 210 (2002).
[10] J. T. Seeley, M. J. Richard, and P. J. Love, The Bravyi-Kitaev Transformation for Quantum Computation of Electronic Structure, The Journal of Chemical Physics 137, 224109 (2012).
[11] A. Tranter, S. Sofia, J. Seeley, M. Kaicher, J. McClean, R. Babbush, P. V. Coveney, F. Mintert, F. Wilhelm, and P. J. Love, The Bravyi-Kitaev Transformation: Properties and Applications, International Journal of Quantum Chemistry 115, 1431 (2015).
[12] A. Tranter, P. J. Love, F. Mintert, and P. V. Coveney, A Comparison of the Bravyi-Kitaev and Jordan-Wigner Transformations for the Quantum Simulation of Quantum Chemistry, Journal of Chemical Theory and Computation 14, 5617 (2018).
[13] M. Steudtner and S. Wehner, Fermion-to-Qubit Mappings with Varying Resource Requirements for Quantum Simulation, New Journal of Physics 20, 063010 (2018).
[14] Qiskit Community, Qiskit: An Open-source Framework for Quantum Computing (2021).
[15] PennyLane Community, PennyLane: Automatic Differentiation of Hybrid Quantum-Classical Computations (2018).
[16] OpenFermion Developers, OpenFermion: The Electronic Structure Package for Quantum Computers (2017).
[17] Cirq Developers, Cirq (2022).
[18] R. Liu, S. V. Romero, I. Oregi, E. Osaba, E. Villar-Rodriguez, and Y. Ban, Digital Quantum Simulation and Circuit Learning for the Generation of Coherent States, Entropy 24, 1529 (2022).
[19] A. Lucas, Ising Formulations of Many NP Problems, Frontiers in Physics 2, 5 (2014).
[20] E. Osaba, E. Villar-Rodriguez, and I. Oregi, A Systematic Literature Review of Quantum Computing for Routing Problems, IEEE Access 10, 55805 (2022).
[21] M. G. de Andoin, E. Osaba, I. Oregi, E. Villar-Rodriguez, and M. Sanz, Hybrid Quantum-Classical Heuristic for the Bin Packing Problem, in Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '22 (Association for Computing Machinery, New York, NY, USA, 2022) pp. 2214–2222.
[22] M. G. de Andoin, I. Oregi, E. Villar-Rodriguez, E. Osaba, and M. Sanz, Comparative Benchmark of a Quantum Algorithm for the Bin Packing Problem (2022).
[23] MATLAB version 9.12.0.1884302 (R2022a), The MathWorks, Inc., Natick, Massachusetts (2022).
[24] C. L. Lawson, R. J. Hanson, D. R. Kincaid, and F. T. Krogh, Basic Linear Algebra Subprograms for Fortran Usage, ACM Trans. Math. Softw. 5, 308 (1979).
[25] J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, An Extended Set of FORTRAN Basic Linear Algebra Subprograms, ACM Trans. Math. Softw. 14, 1 (1988).
[26] J. J. Dongarra, J. Du Croz, S. Hammarling, and I. S. Duff, A Set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Softw. 16, 1 (1990).
[27] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, LAPACK Users' Guide, 3rd ed., Software, Environments, Tools (Society for Industrial and Applied Mathematics, Philadelphia, PA, 1999).
[28] K. Goto and R. Van De Geijn, High-Performance Implementation of the Level-3 BLAS, ACM Trans. Math. Softw. 35, 1 (2008).
[29] Python Core Team, Python: A Dynamic, Open Source Programming Language, Python Software Foundation (2022), Python version 3.9.12.
[30] C. R. Harris, K. J. Millman, et al., Array Programming with NumPy, Nature 585, 357 (2020).
[31] SciPy Community, SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python, Nature Methods 17, 261 (2020).
[32] P. L. Fackler, Algorithm 993: Efficient Computation with Kronecker Products, ACM Trans. Math. Softw. 45, 1 (2019).
[33] R. A. Horn and C. R. Johnson, Matrix Equations and the Kronecker Product, in Topics in Matrix Analysis (Cambridge University Press, 1991) pp. 239–297.
[34] Implementing Kronecker Products Efficiently, in Automatic Generation of Prime Length FFT Programs (OpenStax CNX, 2009) pp. 23–28.
GNAyT4oBgHgl3EQfrfkX/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,494 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf,len=493
2
+ page_content='arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
3
+ page_content='00560v1 [quant-ph] 2 Jan 2023 PauliComposer: Compute Tensor Products of Pauli Matrices Efficiently Sebasti´an V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
4
+ page_content=' Romero 1∗ and Juan Santos-Su´arez 2† 1TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain 2Instituto Galego de F´ısica de Altas Enerx´ıas (IGFAE), Universidade de Santiago de Compostela, 15705 Santiago de Compostela, Spain (Dated: January 3, 2023) We introduce a simple algorithm that efficiently computes tensor products of Pauli matrices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
5
+ page_content=' This is done by tailoring the calculations to this specific case, which allows to avoid unnecessary calcu- lations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
6
+ page_content=' The strength of this strategy is benchmarked against state-of-the-art techniques, showing a remarkable acceleration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
7
+ page_content=' As a side product, we provide an optimized method for one key calculus in quantum simulations: the Pauli basis decomposition of Hamiltonians.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
8
+ page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
9
+ page_content=' INTRODUCTION Pauli matrices [1] are one of the most important and well-known set of matrices within the field of quantum physics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
10
+ page_content=' They are particularly important both in physics and chemistry when used to describe Hamiltonians of many-body spin glasses [2–7] or for quantum simula- tions [8–13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
11
+ page_content=' The vast majority of these systems are out of analytic control so that non-equilibrium states are usually studied through exact diagonalization which requires their Hamiltonians to be written in its matrix form.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
12
+ page_content=' While this task may be regarded as a trivial mat- ter in a mathematical sense, it involves the calculation of an exponentially growing number of operations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
13
+ page_content=' In this work, we present the PauliComposer (PC) al- gorithm which significantly expedites this calculation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
14
+ page_content=' It exploits the fact that any Pauli word only has one ele- ment different from zero per row and column, so a num- ber of calculations can be avoided.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
15
+ page_content=' Additionally, each matrix entry can be computed without performing any multiplications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
16
+ page_content=' This algorithm can be used to boost in- ner calculations where several tensor products involving Pauli matrices appear.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
17
+ page_content=' In particular, those that appear while building Hamiltonians as weighted sums of Pauli strings or decomposing an operator in the Pauli basis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
18
+ page_content=' The PC algorithm could be implemented in compu- tational frameworks in which this sort of operations are crucial, such as the Python modules Qiskit [14], Penny- Lane [15], OpenFermion [16] and Cirq [17].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
19
+ page_content=' It can also potentially be used in many other applications, such as the Pauli basis decomposition of the Fock space [18] and conventional computation of Ising model Hamiltonians to solve optimization problems [19–22], among others.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
20
+ page_content=' The rest of the article is organized as follows: in Sec- tion II we describe the algorithm formulation in depth, showing a pseudocode-written routine for its computa- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
21
+ page_content=' In Section III, a set of benchmark tests is performed to show that a remarkable speed-up can be achieved ∗ sebastian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
22
+ page_content='vidal@tecnalia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
23
+ page_content='com † juansantos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
24
+ page_content='suarez@usc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
25
+ page_content='es when compared to state-of-the-art techniques.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
26
+ page_content=' In Sec- tion IV, we show how this Pauli Composer algorithm can be used to solve relevant problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
27
+ page_content=' Finally, the con- clusions drawn from the presented results are given in Section V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
28
+ page_content=' We provide proofs for several statements and details of the algorithm in the appendices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
29
+ page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
30
+ page_content=' ALGORITHM FORMULATION In this section we discuss the PC algorithm formulation in detail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
31
+ page_content=' Pauli matrices are hermitian, involutory and unitary matrices that together with the identity form the set σ{0,1,2,3} = {I, X, Y, Z}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
32
+ page_content=' Given an input string x = xn−1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
33
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
34
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
35
+ page_content=' x0 ∈ {0, 1, 2, 3}n, the PC algorithm constructs P(x) := σxn−1 ⊗ σxn−2 ⊗ · · · ⊗ σx0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
36
+ page_content=' (1) Let us denote its matrix elements as Pj,k(x) with j, k = 0, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
37
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
38
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
39
+ page_content=' , 2n − 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
40
+ page_content=' It is important to remark that for each row j, there will be a single column k(j) such that Pj,k(j) ̸= 0 (see Appendix A).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
41
+ page_content=' The solution amounts to a map from the initial Pauli string to the positions and val- ues of the 2n nonzero elements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
42
+ page_content=' This calculation will be done sequentially, hence the complexity of the algorithm will be bounded from below by this number.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
43
+ page_content=' As a first step, it is worth noting that Pauli string matrices are either real (all elements are ±1) or purely imaginary (all are ±i).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
44
+ page_content=' This depends on nY , the number of Y operators in P(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
45
+ page_content=' We can redefine ˜Y := iY , so that ˜σ{0,1,2,3} = {I, X, ˜Y , Z} and ˜P(x) := ˜σxn−1 ⊗ · · · ⊗ ˜σx0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
46
+ page_content=' As a result, every entry in ˜P(x) will be ±1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
47
+ page_content=' This implies that there is no need to compute any multiplication: the problem reduces to locating the nonzero entries in ˜P(x) and tracking sign changes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
48
+ page_content=' The original P(x) can be re- covered as P(x) = (−i)nY mod 4 ˜P(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
49
+ page_content=' We will now present an iterative procedure to compute ˜P by finding for each row j the nonzero column number k(j) and its corresponding value ˜Pj,k(j).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
50
+ page_content=' For the first row, j = 0, the nonzero element ˜P0,k(0), can be found at k(0) = [y(xn−1) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
51
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
52
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
53
+ page_content=' y(x0)]10, (2) where [an−1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
54
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
55
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
56
+ page_content=' a0]10 is the decimal representation of the bit string a = an−12n−1 + · · ·+ a020 and y(xi) tracks the 2 diagonality of σxi, being equal to 0 if xi ∈ {0, 3} and 1 otherwise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
57
+ page_content=' The value of this entry is ˜P0,k(0) = +1 =⇒ P0,k(0) = (−i)nY mod 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
58
+ page_content=' (3) The following entries can be computed iteratively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
59
+ page_content=' At the end of stage l, with l = 0, · · · , n − 1, all nonzero elements in the first 2l+1 rows of Pj,k(j) will have been computed using the information given by the substring xl .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
60
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
61
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
62
+ page_content=' x0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
63
+ page_content=' At the next step, l + 1, the following 2l rows are filled using the ones that had already been computed, where the row-column relation k(j) is given by k(j + 2l) = k(j) + (−1)y(xl)2l, j = 0, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
64
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
65
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
66
+ page_content=' , 2l − 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
67
+ page_content=' (4) The second term of the RHS of this relation takes into ac- count the way that the blocks of zeros returned at stage l affect the new relative location of the nonzero blocks within the new 2l+1 × 2l+1 subcomposition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
68
+ page_content=' Its corre- sponding values are obtained from the previous ones, up to a possible change of sign given by Pj+2l,k(j+2l) = ǫlPj,k(j), (5) with ǫl equal to 1 if xl ∈ {0, 1} and −1 otherwise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
69
+ page_content=' This ǫl is nothing but a parameter that takes into account if σxl introduces a sign flip.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
70
+ page_content=' In Alg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
71
+ page_content=' 1 a pseudocode that sum- marises the presented algorithm using (2)-(5), is shown.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
72
+ page_content=' For the particular case of diagonal Pauli strings (only I and Z matrices), there is no need to compute the row- column relation k(j), just the sign assignment is enough.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
73
+ page_content=' Even if this is also the case for anti-diagonal matrices, we focus on the diagonal case due to its relevance in combi- natorial problems [19–22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
74
+ page_content=' See Alg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
75
+ page_content=' 2 for the pseudocode of this case (PDC stands for Pauli Diagonal Composer).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
76
+ page_content=' The PC algorithm is able to circumvent the calculation of a significant amount of operations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
77
+ page_content=' When generic Kro- necker product routines (see Appendix B) are used for the same task, the amount of multiplications needed for computing a Pauli string is O[n22n] and O[n2n] for dense and sparse matrices, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
78
+ page_content=' In contrast, the PC al- gorithm, considering the worst-case scenarios, needs {I, Z}⊗n: O[2n] changes of sign.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
79
+ page_content=' Otherwise: O[2n] sums and O[2n] changes of sign.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
80
+ page_content=' In all cases this novel algorithm can significantly out- perform those that are not specifically designed for Pauli matrices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
81
+ page_content=' On top of that, this method is also advantageous for computing weighted Pauli strings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
82
+ page_content=' Following (3), W := ωP, with arbitrary ω, can be computed by defining W0,k(0) = ω(−i)nY mod 4 which avoids having to do any extra multiplication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
83
+ page_content=' This change is reflected in Alg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
84
+ page_content=' 1 by changing line 6 to m(0) ← ω(−i)nY mod 4 and line 4 to m(0) ← ω in Alg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
85
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
86
+ page_content=' This is specially important as it can be used to compute Hamiltonians written as a weighted sum of Pauli strings, where H = � x ωxP(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
87
+ page_content=' Algorithm 1: PC: compose n Pauli matrices input : xn−1xn−2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
88
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
89
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
90
+ page_content=' x0 ← string with xi ∈ {0, 1, 2, 3} 1 n ← len(x) 2 nY ← number of Y matrices in x 3 j ← range(0, 2n − 1) // rows 4 k, m ← empty 2n-array // columns/entries 5 k(0) ← y(xn−1) .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
91
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
92
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
93
+ page_content=' y(x0) in base 10 6 m(0) ← (−i)nY mod 4 7 for l ∈ range(0, n − 1) do 8 k(2l : 2l+1 − 1) ← k(0 : 2l − 1) + (−1)y(xl)2l 9 if xl ∈ {0, 1} then // ǫl = 1 10 m(2l : 2l+1 − 1) ← m(0 : 2l − 1) 11 else // ǫl = −1 12 m(2l : 2l+1 − 1) ← −m(0 : 2l − 1) output: P(x) as a sparse matrix stacking (j, k, m) Algorithm 2: PDC: compose n diagonal Pauli matrices input : xn−1xn−2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
94
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
95
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
96
+ page_content=' x0 ← string with xi ∈ {0, 3} 1 n ← len(x) 2 j, k ← range(0, 2n − 1) // rows/columns 3 m ← empty 2n-array // entries 4 m(0) ← 1 5 for l ∈ range(0, n − 1) do 6 if xl = 0 then // ǫl = 1 7 m(2l : 2l+1 − 1) ← m(0 : 2l − 1) 8 else // ǫl = −1 9 m(2l : 2l+1 − 1) ← −m(0 : 2l − 1) output: P(x) as a sparse matrix stacking (j, k, m) III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
97
III. BENCHMARKING

In this section we analyse the improvement that the PC strategy introduces over the methods presented in Appendix B in two figures of merit: memory storage and execution times. For this purpose, we use MATLAB [23] (which incorporates optimized routines of the well-known BLAS and LAPACK libraries [24–28]) and, for the PC only, also Python [29], since many quantum computing libraries are written in this language [14–17]. See Tab. I for a full description of the computational resources used.

Concerning memory needs, with this algorithm only the 2^n nonzero elements out of 2^{2n} are stored. This is exactly the same as using sparse matrices, so no major improvement is to be expected. As for the computational time, we compare how different algorithms behave as the length n of the Pauli string increases.
In Fig. 1, execution times for general and diagonal Pauli strings (solid and dashed lines, respectively) are shown.

Table I. Computer and software specifications.

Processor        Intel® Core™ i7-11850H (16 × 2.50 GHz)
RAM              32.0 GB (DDR4)
OS               Ubuntu 22.04.1 LTS (×64)
MATLAB [23]      9.12.0.1884302 (R2022a)
Python [29]      3.9.12
NumPy [30]       1.23.2
SciPy [31]       1.9.0
Qiskit [14]      0.38.0
PennyLane [15]   0.23.1

Figure 1. Execution times for computing general (solid line) and diagonal n-Pauli strings (dashed line) using different methods (Naive, Mixed, Alg993 [32], Tree, PC/PDC). Here, M stands for MATLAB and P for Python.
For the Pauli Composer methods, we use the PC routine (Alg. 1) for the general case and the PDC routine (Alg. 2) for the diagonal one. In accordance with our theoretical analysis, the PC algorithm proves to be the best-performing routine. On a more technical note, when using the PC routine, matrices with complex values (n_Y odd) take twice as much time as real-valued ones (n_Y even). Consequently, we compute their execution times separately and then average them. Moreover, it is convenient to choose carefully between PC and PDC, as the latter can be up to 10 times faster.
IV. REAL USE CASES OF THE PAULI COMPOSER ALGORITHM

The PC algorithm can be used to perform useful calculations in physics. In this section, the Pauli basis decomposition of a Hamiltonian and the construction of a Hamiltonian as a sum of weighted Pauli strings are discussed in detail. Another scenario worth mentioning is the digital implementation of the complex exponential of a Pauli string, i.e. e^{−iθP(x)} = cos(θ)I − i sin(θ)P(x).
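Since P(x)² = I, the closed form above can be checked numerically against a generic matrix exponential. A small self-contained check (illustrative only; the spectral decomposition stands in for a general-purpose `expm`):

```python
import numpy as np

# P(x) = X (x) Z, a 2-qubit Pauli string (hermitian, P^2 = I)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
P = np.kron(X, Z)
theta = 0.7

# closed form from the text: e^{-i theta P} = cos(theta) I - i sin(theta) P
closed = np.cos(theta) * np.eye(4) - 1j * np.sin(theta) * P

# generic exponential via the spectral decomposition of the hermitian P
lam, V = np.linalg.eigh(P)
generic = (V * np.exp(-1j * theta * lam)) @ V.conj().T

assert np.allclose(closed, generic)
```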
A. Pauli basis decomposition of a Hamiltonian

The decomposition of a Hamiltonian written as a 2^n × 2^n matrix into the Pauli basis is a common problem in quantum computing. Given a general Hamiltonian H, this decomposition can be written as

    H = Σ_x ω_x (σ_{x_{n−1}} ⊗ · · · ⊗ σ_{x_0}) = Σ_x ω_x P(x),   (6)

with x = x_{n−1} … x_0 and P(x) as in (1). The coefficients ω_x are obtained from the orthogonal projection as

    ω_x = (1/2^n) tr[P(x)H] = (1/2^n) Σ_{j=0}^{2^n−1} P_{j,k(j)}(x) H_{k(j),j}.   (7)

Following the discussion in Section II, the double sum collapses to a single one in (7), since there is only one nonzero element per row and column.
Additionally, in some special cases it can be known in advance that some set of ω_x will vanish:

- If H is symmetric, strings with an odd number of Y matrices can be avoided (leaving 2^{n−1}(2^n + 1) terms).
- If H is diagonal, only strings composed of I and Z will contribute (2^n terms).

The number of operations performed by this Pauli Decomposer (PD) is the following:

- If H is diagonal (O[2^n] strings): O[2^{2n}] operations.
- Otherwise (O[2^{2n}] strings): O[2^{3n}] operations.

This PD algorithm checks whether the input matrix satisfies one of the special cases defined above, discards all vanishing Pauli strings and computes the coefficients of the remaining ones using the PC routine and (7). This workflow considerably enhances our results, especially for diagonal matrices.
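A dense, brute-force sketch of the projection (7) for small n may help fix ideas; it is not the sparse single-sum implementation benchmarked below, and `pauli_string`/`pauli_decompose` are illustrative names:

```python
import itertools
import numpy as np

# single-qubit Paulis, indexed as in the text: 0 = I, 1 = X, 2 = Y, 3 = Z
SIGMA = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.diag([1.0 + 0j, -1.0])]

def pauli_string(x):
    """P(x) = sigma_{x_{n-1}} (x) ... (x) sigma_{x_0} via plain np.kron."""
    P = np.array([[1.0 + 0j]])
    for xl in x:                       # x given as x_{n-1} ... x_0
        P = np.kron(P, SIGMA[xl])
    return P

def pauli_decompose(H):
    """Coefficients w_x = 2^{-n} tr[P(x) H] of eqs. (6)-(7), dense version."""
    n = int(np.log2(H.shape[0]))
    return {x: np.trace(pauli_string(x) @ H) / 2 ** n
            for x in itertools.product(range(4), repeat=n)}

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
w = pauli_decompose(H)
recon = sum(c * pauli_string(x) for x, c in w.items())
assert np.allclose(recon, H)           # the 16 weighted strings rebuild H
```

This dense version costs O[2^{3n}] per string; exploiting the one-nonzero-per-row structure of P(x) is what reduces the trace in (7) to a single sum.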
In Tab. II, we tested the most widespread methods for decomposing matrices into weighted sums of Pauli strings against the PD using Python [29] to compare their performance. In particular, we used the SparsePauliOp class from Qiskit [14] and the decompose_hamiltonian function from PennyLane [15] (which only works with hermitian Hamiltonians). Four types of random 2^n × 2^n matrices were generated, namely non-hermitian H_NH, hermitian H_H, symmetric H_S and diagonal H_D matrices. The PD vastly outperforms the Qiskit and PennyLane routines, especially for the symmetric and diagonal cases.
B. Building a Hamiltonian as a sum of weighted Pauli strings

Many Hamiltonians are written in terms of weighted Pauli strings. As mentioned, our method can compute weighted Pauli strings directly, without performing extra computations. In Fig. 2 we show a performance comparison of the presented methods for computing Hamiltonians written as sums of weighted Pauli strings.
The Hamiltonian used is similar to the one proposed in [21],

    H = Σ_{i=0}^{n−1} α_i σ^i_3 + Σ_{i<j}^{n−1} β_{i,j} σ^i_3 σ^j_3,   (8)

with arbitrary weights ⃗α = [α_0, …, α_{n−1}] and ⃗β = [β_{0,1}, …, β_{0,n−1}, β_{1,2}, …, β_{n−2,n−1}], and σ^i_3 as defined in (B1) ∀i, j.

Table II. Execution times (in seconds) for decomposing an arbitrary 2^n × 2^n matrix. In brackets, the number of threads used by each routine. Here, PC and PDC run under Python code, as do Qiskit [14] and PennyLane [15].

n                  2       3       4       5       6       7       8       9       10
Non-hermitian matrix H_NH
  PC (×1)          0.0005  0.0021  0.012   0.078   0.55    4.06    31.2    254     2008
  Qiskit (×16)     0.0015  0.0050  0.020   0.14    1.16    8.78    92.38   1398    26938
Hermitian matrix H_H
  PC (×1)          0.0004  0.0021  0.012   0.078   0.56    4.24    32.86   261     2007
  Qiskit (×16)     0.0010  0.0035  0.018   0.10    1.47    12.02   108.3   1295    26848
  PennyLane (×16)  0.0013  0.0060  0.030   0.15    2.23    10.66   97.6    2019    35014
Symmetric matrix H_S
  PC (×1)          0.0003  0.0010  0.0058  0.036   0.24    1.78    14.05   108     794
  Qiskit (×16)     0.0010  0.0036  0.018   0.10    1.45    11.07   104.6   1320    26399
  PennyLane (×16)  0.0011  0.0054  0.027   0.13    1.36    9.22    91.52   1477    31583
Diagonal matrix H_D
  PDC (×1)         0.0001  0.0002  0.0006  0.0018  0.0068  0.025   0.094   0.37    1.49
  Qiskit (×16)     0.0010  0.0035  0.018   0.10    1.46    11.0    103.3   1270    25977
  PennyLane (×16)  0.0010  0.0047  0.023   0.11    1.20    8.29    86.17   1370    30941
This Hamiltonian is computed using Alg. 3, which uses the PDC routine (see Alg. 2) with two inputs: the string x ∈ {0, 3}^n to compute and the weights to consider. In the PDC case, we use two strategies: computing each weighted term of (8) directly, and computing each Pauli string first and then multiplying it by its corresponding weight (solid and dashed lines in Fig. 2, respectively). The second strategy is obtained by changing line 6 of Alg. 3 to H ← H + α_i PDC(str1) and line 10 to H ← H + β_{i,j} PDC(str2). There is no remarkable difference between the two methods.
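Since every term of (8) is diagonal, the whole Hamiltonian reduces to bookkeeping on a length-2^n diagonal. A minimal NumPy sketch, under the convention that qubit i is the i-th Kronecker factor from the right (function names are illustrative, not the paper's code):

```python
import numpy as np

def z_string_diag(n, positions):
    """Diagonal of the n-qubit string with Z at `positions`, I elsewhere."""
    idx = np.arange(2 ** n)
    d = np.ones(2 ** n)
    for q in positions:
        d *= 1 - 2 * ((idx >> q) & 1)   # +1 if bit q of the index is 0, else -1
    return d

def ising_diag(alpha, beta):
    """Diagonal of H = sum_i alpha_i Z_i + sum_{i<j} beta_{ij} Z_i Z_j, eq. (8)."""
    n = len(alpha)
    d = np.zeros(2 ** n)
    for i in range(n):
        d += alpha[i] * z_string_diag(n, [i])
        for j in range(i + 1, n):
            d += beta[(i, j)] * z_string_diag(n, [i, j])
    return d

# n = 2 check against the explicit 4 x 4 case:
# diag(H) = [a0 + a1 + b, -a0 + a1 - b, a0 - a1 - b, -a0 - a1 + b]
H = ising_diag([1.0, 2.0], {(0, 1): 3.0})
assert H.tolist() == [6.0, -2.0, -4.0, 0.0]
```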
Figure 2. Execution times for computing (8) using Alg. 3 (solid line) and computing the Pauli strings first and then multiplying them by their corresponding weights (dashed line). The methods compared are Naive, Tree, PDC (M) and PDC (P).

Algorithm 3: Ising model Hamiltonian computation
input : ⃗α, ⃗β ← lists of weights
1   n ← len(⃗α)
2   H ← 2^n × 2^n sparse matrix of zeros
3   for i ∈ range(0, n − 1) do
4       str1 ← string of n zeros                    // n identities
5       str1(i) ← 3                                 // Z in the i-th position
6       H ← H + PDC(str1, α_i)
7       for j ∈ range(i + 1, n − 1) do
8           str2 ← copy(str1)
9           str2(j) ← 3                             // Z in the j-th position
10          H ← H + PDC(str2, β_{i,j})
output: Hamiltonian H as a sparse matrix

V. CONCLUSIONS

The fast and reliable computation of tensor products of Pauli matrices is crucial in the field of quantum mechanics and, in particular, of quantum computing.
271
+ page_content=' In this article we propose a novel algorithm with proven theoretical and experimental enhancements over similar methods of this key yet computationally tedious task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
272
+ page_content=' This is achieved by taking advantage of the properties of Pauli matrices and the tensor product definition, which implies that one can avoid trivial operations such as mul- tiplying constants by one and waste time computing ele- ments with value zero that could be known in advance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
273
+ page_content=' Concerning memory resources, it is convenient to store the obtained results as sparse matrices since only 2n out of 22n entries will not be zero for a Pauli string of length n, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
274
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
275
+ page_content=' the density of the resultant matrix will be 2−n (see Appendix A).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
276
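The 2^{-n} density claim is easy to check numerically. The sketch below, written for illustration with plain NumPy (dense matrices, so keep n small), builds a random Pauli string by direct Kronecker products and counts its nonzero entries.

```python
import numpy as np

# The four Pauli matrices, indexed as in the paper: 0=I, 1=X, 2=Y, 3=Z.
PAULI = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def pauli_string(indices):
    """Dense tensor product of the Pauli matrices named by `indices`."""
    out = np.array([[1.0 + 0j]])
    for k in indices:
        out = np.kron(out, PAULI[k])
    return out

n = 6
rng = np.random.default_rng(0)
P = pauli_string(rng.integers(0, 4, size=n))
# Theorem A.1: exactly 2**n of the 2**(2n) entries are nonzero,
# so the density is 2**(-n).
assert np.count_nonzero(P) == 2**n
assert np.count_nonzero(P) / P.size == 2.0**(-n)
```

The count holds because each Pauli factor has exactly two nonzero entries and the Kronecker product multiplies nonzero counts, giving 2^n in total.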
Our benchmark tests suggest that the Pauli Composer algorithm and its variants can achieve a remarkable acceleration when compared to the most well-known methods for the same purpose, both for single Pauli strings and for real use cases. In particular, the most considerable outperformance can be seen in Tab. II for the symmetric and diagonal matrix decompositions over the Pauli basis. Finally, its simple implementation (Alg. 1-2) can potentially allow the PC routines to be integrated into quantum simulation packages to enhance inner calculations.
ACKNOWLEDGMENTS

We would like to thank Javier Mas Solé, Yue Ban and Mikel García de Andoin for the helpful discussions that led to the present article. This research is funded by the QUANTEK project (ELKARTEK program from the Basque Government, expedient no. KK-2021/00070) and the project "BRTA QUANTUM: Hacia una especialización armonizada en tecnologías cuánticas en BRTA" (expedient no. KK-2022/00041). The work of JSS has received support from Xunta de Galicia (Centro singular de investigación de Galicia accreditation 2019-2022) by European Union ERDF, from the Spanish Research State Agency (grant PID2020-114157GB-100) and from MICIN with funding from the European Union NextGenerationEU (PRTR-C17.I1) and the Galician Regional Government with its own funding through the "Planes Complementarios de I+D+I con las Comunidades Autónomas" in Quantum Communication.
Data and code availability statement. The data and code used in the current study are available upon reasonable request from the corresponding authors.
Appendix A: Some proofs regarding Pauli strings

In this section we prove two key properties of Pauli strings on which our algorithm is based.

Theorem A.1. A Pauli string P(x) of length n given by (1) has only 2^n nonzero entries.

Proof. With the help of Fig. 3, we can compute the number of zeros in the resulting matrix as

n_0(n) = 2 (2^{n-1} \times 2^{n-1}) + 4 (2^{n-2} \times 2^{n-2}) + 8 (2^{n-3} \times 2^{n-3}) + \cdots + 2^n (1 \times 1) = \sum_{k=n}^{2n-1} 2^k = 2^n (2^n - 1).   (A1)

In other words, P(x) will have only 2^n nonzero terms. We can prove (A1) easily by induction: the base case holds, since n_0(1) = 2^1 (2^1 - 1) = 2, and if we assume that n_0(n) holds, then

n_0(n+1) = \sum_{k=n+1}^{2(n+1)-1} 2^k = 2^{n+1} (2^{n+1} - 1)

also holds.

Figure 3. Scheme for computing the number of zeros of an arbitrary composition of n Pauli matrices, \bigotimes_{i=0}^{n-1} \sigma_{x_{n-i-1}}. [Block-structure sketch of the matrix omitted.]
From this result and the unitarity of P(x), we can infer another important property.

Corollary A.1.1. A Pauli string P(x) of length n given by (1) has only one nonzero entry per row and column.

Proof. Since the tensor product of unitary matrices is also unitary, |det P(x)| = 1. From Th. A.1, only 2^n entries of the resulting 2^n × 2^n matrix are nonzero. The only way to place them without leaving a row or a column full of zeros (which would yield a zero determinant) is for each row and each column to contain exactly one nonzero entry.
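Corollary A.1.1 says that every Pauli string is, up to phases on its entries, a permutation matrix. The sketch below checks this row/column structure directly for random strings; it is an illustrative check in plain NumPy, with hypothetical helper names.

```python
import numpy as np

# Pauli matrices indexed as in the paper: 0=I, 1=X, 2=Y, 3=Z.
PAULI = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def pauli_string(indices):
    out = np.array([[1.0 + 0j]])
    for k in indices:
        out = np.kron(out, PAULI[k])
    return out

def is_generalized_permutation(P):
    """True if P has exactly one nonzero entry in every row and column."""
    mask = P != 0
    return bool((mask.sum(axis=0) == 1).all() and (mask.sum(axis=1) == 1).all())

for trial in range(20):
    rng = np.random.default_rng(trial)
    x = rng.integers(0, 4, size=5)
    assert is_generalized_permutation(pauli_string(x))
```

This structure is what lets the Pauli Composer locate every nonzero entry directly instead of multiplying matrices.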
Appendix B: Standard methods for computing tensor products

For the sake of completeness, in this appendix we briefly review the well-established algorithms that were used in the benchmark [32-34]. First, one can consider what we call the Naive algorithm, which consists of performing the calculations directly. It is clearly highly inefficient, as its number of operations scales as O[n 2^n] for sparse Pauli matrices. Second, the Mixed algorithm uses the mixed-product property

\bigotimes_{i=0}^{n-1} \sigma_{x_{n-i-1}} = \prod_{i=0}^{n-1} \sigma^i_{x_{n-i-1}}, with \sigma^i_{x_i} := \begin{cases} I^{\otimes (n-1)} \otimes \sigma_{x_0} & \text{if } i = 0 \\ I^{\otimes (n-i-1)} \otimes \sigma_{x_i} \otimes I^{\otimes i} & \text{if } 0 < i < n-1 \\ \sigma_{x_{n-1}} \otimes I^{\otimes (n-1)} & \text{if } i = n-1 \end{cases}   (B1)

to simplify the calculation into a simple product of block-diagonal matrices. Based on this procedure, Algorithm 993 is presented in [32]. It can be shown that this method performs over O[n 2^n] operations. Besides that, as Fig. 1 suggests, the fact that it requires transposing and reshaping several matrices has a non-negligible effect that fatally increases its computation time.
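The mixed-product identity (B1) is easy to verify numerically: the full Kronecker product equals the ordinary matrix product of the padded single-site factors. A minimal NumPy check, with site 0 taken as the rightmost tensor factor:

```python
import numpy as np
from functools import reduce

# Pauli matrices indexed as in the paper: 0=I, 1=X, 2=Y, 3=Z.
PAULI = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

def kron_all(mats):
    """Direct n-fold Kronecker product, left to right."""
    return reduce(np.kron, mats, np.array([[1.0 + 0j]]))

def padded_factor(sigma, i, n):
    """I^(n-i-1) tensor sigma tensor I^(i): sigma acting on site i."""
    return np.kron(np.kron(np.eye(2**(n - i - 1)), sigma), np.eye(2**i))

n = 4
rng = np.random.default_rng(1)
x = rng.integers(0, 4, size=n)          # x[i] labels the Pauli on site i
direct = kron_all([PAULI[x[k]] for k in reversed(range(n))])
mixed = np.eye(2**n, dtype=complex)
for i in range(n):
    mixed = mixed @ padded_factor(PAULI[x[i]], i, n)
assert np.allclose(direct, mixed)
```

The identity holds because the padded factors act on disjoint tensor slots and therefore commute, so their product assembles the full tensor product.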
Finally, the Tree routine starts by storing pairs of tensor products,

\{\sigma_{x_{n-2i-1}} \otimes \sigma_{x_{n-2i-2}}\}_{i=0}^{n/2-1} if n is even, and \{\sigma_{x_{n-1}}\} \cup \{\sigma_{x_{n-2i-1}} \otimes \sigma_{x_{n-2i-2}}\}_{i=0}^{\lfloor n/2 \rfloor} otherwise,

and proceeds with the resulting matrices following the same logic, which allows one to compute (1) by iteratively grouping its terms in pairs. For better results, this method can be parallelized.
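The pairwise grouping can be sketched as follows. This is an illustrative variant, not the paper's code: here an odd leftover factor is carried at the end of the list rather than at the front, which gives the same result since ordering of the factors is preserved.

```python
import numpy as np

def tree_kron(mats):
    """Tensor product of a list of matrices by iterative pairwise grouping.

    Each pass replaces adjacent pairs (m0 kron m1, m2 kron m3, ...) with
    their Kronecker products; an odd leftover is carried to the next pass,
    so only about log2(n) passes are needed in total.
    """
    mats = list(mats)
    while len(mats) > 1:
        nxt = [np.kron(mats[i], mats[i + 1])
               for i in range(0, len(mats) - 1, 2)]
        if len(mats) % 2 == 1:
            nxt.append(mats[-1])
        mats = nxt
    return mats[0]
```

Each pass is embarrassingly parallel (the pair products are independent), which is why the Tree routine parallelizes well.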
[1] W. Pauli, Zur Quantenmechanik des magnetischen Elektrons, Zeitschrift für Physik 43, 601 (1927).
[2] W. Heisenberg, Zur Theorie des Ferromagnetismus, Zeitschrift für Physik 49, 619 (1928).
[3] H. Bethe, Zur Theorie der Metalle, Zeitschrift für Physik 71, 205 (1931).
[4] D. Sherrington and S. Kirkpatrick, Solvable Model of a Spin-Glass, Phys. Rev. Lett. 35, 1792 (1975).
[5] D. Panchenko, The Sherrington-Kirkpatrick Model: An Overview, Journal of Statistical Physics 149, 362 (2012).
[6] J. Hubbard and B. H. Flowers, Electron Correlations in Narrow Energy Bands, Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences 276, 238 (1963).
[7] A. Altland and B. Simons, Second Quantization, in Condensed Matter Field Theory (Cambridge University Press, 2006) pp. 39-93.
[8] P. Jordan and E. Wigner, Über das Paulische Äquivalenzverbot, Zeitschrift für Physik 47, 631 (1928).
[9] S. B. Bravyi and A. Y. Kitaev, Fermionic Quantum Computation, Annals of Physics 298, 210 (2002).
[10] J. T. Seeley, M. J. Richard, and P. J. Love, The Bravyi-Kitaev Transformation for Quantum Computation of Electronic Structure, The Journal of Chemical Physics 137, 224109 (2012).
[11] A. Tranter, S. Sofia, J. Seeley, M. Kaicher, J. McClean, R. Babbush, P. V. Coveney, F. Mintert, F. Wilhelm, and P. J. Love, The Bravyi-Kitaev Transformation: Properties and Applications, International Journal of Quantum Chemistry 115, 1431 (2015).
[12] A. Tranter, P. J. Love, F. Mintert, and P. V. Coveney, A Comparison of the Bravyi-Kitaev and Jordan-Wigner Transformations for the Quantum Simulation of Quantum Chemistry, Journal of Chemical Theory and Computation 14, 5617 (2018).
[13] M. Steudtner and S. Wehner, Fermion-to-Qubit Mappings with Varying Resource Requirements for Quantum Simulation, New Journal of Physics 20, 063010 (2018).
[14] Qiskit Community, Qiskit: An Open-source Framework for Quantum Computing (2021).
[15] PennyLane Community, PennyLane: Automatic Differentiation of Hybrid Quantum-Classical Computations (2018).
[16] OpenFermion Developers, OpenFermion: The Electronic Structure Package for Quantum Computers (2017).
[17] Cirq Developers, Cirq (2022).
[18] R. Liu, S. V. Romero, I. Oregi, E. Osaba, E. Villar-Rodriguez, and Y. Ban, Digital Quantum Simulation and Circuit Learning for the Generation of Coherent States, Entropy 24, 1529 (2022).
[19] A. Lucas, Ising Formulations of Many NP Problems, Frontiers in Physics 2, 5 (2014).
[20] E. Osaba, E. Villar-Rodriguez, and I. Oregi, A Systematic Literature Review of Quantum Computing for Routing Problems, IEEE Access 10, 55805 (2022).
[21] M. G. de Andoin, E. Osaba, I. Oregi, E. Villar-Rodriguez, and M. Sanz, Hybrid Quantum-Classical Heuristic for the Bin Packing Problem, in Proceedings of the Genetic and Evolutionary Computation Conference Companion, GECCO '22 (Association for Computing Machinery, New York, NY, USA, 2022) pp. 2214-2222.
[22] M. G. de Andoin, I. Oregi, E. Villar-Rodriguez, E. Osaba, and M. Sanz, Comparative Benchmark of a Quantum Algorithm for the Bin Packing Problem (2022).
[23] MATLAB version 9.12.0.1884302 (R2022a), The Mathworks, Inc., Natick, Massachusetts (2022).
[24] C. L. Lawson, R. J. Hanson, D. R. Kincaid, and F. T. Krogh, Basic Linear Algebra Subprograms for Fortran Usage, ACM Trans. Math. Softw. 5, 308 (1979).
[25] J. J. Dongarra, J. Du Croz, S. Hammarling, and R. J. Hanson, An Extended Set of FORTRAN Basic Linear Algebra Subprograms, ACM Trans. Math. Softw. 14, 1 (1988).
[26] J. J. Dongarra, J. Du Croz, S. Hammarling, and I. S. Duff, A Set of Level 3 Basic Linear Algebra Subprograms, ACM Trans. Math. Softw. 16, 1 (1990).
[27] E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J.
459
+ page_content=' Dongarra, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
460
+ page_content=' Du Croz, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
461
+ page_content=' Greenbaum, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
462
+ page_content=' Hammarling, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
463
+ page_content=' McKenney, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
464
+ page_content=' Sorensen, LAPACK users’ guide, 3rd ed.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
465
+ page_content=', Software, environments, tools (Society for Indus- trial and Applied Mathematics, Philadelphia, PA, 1999).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
466
+ page_content=' [28] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
467
+ page_content=' Goto and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
468
+ page_content=' Van De Geijn, High-Performance Im- plementation of the Level-3 BLAS, ACM Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
469
+ page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
470
+ page_content=' Softw.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
471
+ page_content=' 35, 1 (2008).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
472
+ page_content=' [29] Python Core Team, Python: A Dynamic, Open Source Programming Language, Python Software Foundation (2022), Python Version 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
473
+ page_content='9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
474
+ page_content='12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
475
+ page_content=' [30] Charles R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
476
+ page_content=' Harris and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
477
+ page_content=' Jarrod Millman et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
478
+ page_content=', Array Programming with NumPy, Nature 585, 357 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
479
+ page_content=' [31] SciPy Community, SciPy 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
480
+ page_content='0: Fundamental Algorithms for Scientific Computing in Python, Nature Methods 17, 261 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
481
+ page_content=' [32] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
482
+ page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
483
+ page_content=' Fackler, Algorithm 993: Efficient Computation with Kronecker Products, ACM Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
484
+ page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
485
+ page_content=' Softw.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
486
+ page_content=' 45, 1 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
487
+ page_content=' [33] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
488
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
489
+ page_content=' Horn and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
490
+ page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
491
+ page_content=' Johnson, Matrix Equations and the Kronecker Product, in Topics in Matrix Analysis (Cam- bridge University Press, 1991) p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
492
+ page_content=' 239–297.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
493
+ page_content=' [34] Implementing Kronecker Products Efficiently, in Au- tomatic Generation of Prime Length FFT Programs (OpenStax CNX, 2009) pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
494
+ page_content=' 23–28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/GNAyT4oBgHgl3EQfrfkX/content/2301.00560v1.pdf'}
GdAyT4oBgHgl3EQfrflY/content/tmp_files/2301.00561v1.pdf.txt ADDED
@@ -0,0 +1,1425 @@
+ arXiv:2301.00561v1 [cs.LG] 2 Jan 2023
+ Local Differential Privacy for Sequential Decision Making in a Changing Environment
+ Pratik Gajane
+ Eindhoven University of Technology
+ Abstract
+ We study the problem of preserving privacy while still providing high utility in sequential decision making scenarios in a changing environment. We consider an abruptly changing environment: the environment remains constant during periods and changes at unknown time instants. To formulate this problem, we propose a variant of multi-armed bandits called non-stationary stochastic corrupt bandits. We construct an algorithm called SW-KLUCB-CF and prove an upper bound on its utility using the performance measure of regret. The proven regret upper bound for SW-KLUCB-CF is near-optimal in the number of time steps and matches the best known bound for analogous problems in terms of the number of time steps and the number of changes. Moreover, we present a provably optimal mechanism which can guarantee the desired level of local differential privacy while providing high utility.
+ Introduction
+ Several practically relevant applications, including recommender systems and Internet advertising, have been formulated as sequential decision making problems using the framework of multi-armed bandits. The importance of privacy in such sequential decision making problems has been extensively discussed in the literature (see for example, Thakurta and Smith (2013); Mishra and Thakurta (2015); Tossou and Dimitrakakis (2016)).
+ Differential privacy, introduced by Dwork et al. (2006), is one of the popular approaches to address such privacy concerns. In sequential decision making problems, algorithms providing differential privacy preserve data privacy by adding appropriate statistical noise to the data. Duchi, Jordan, and Wainwright (2014) extend this notion to local differential privacy, in which data remains private even from the algorithm. The main difference between global and local differential privacy is whether privacy is to be maintained from the algorithm or from the (possibly unintended) recipient of the output of the algorithm. In global differential privacy, noise is added by the algorithm so that the output does not reveal private information about the input. In local differential privacy, noise is added to the input of the algorithm so that privacy is maintained even from the algorithm.
+ Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+ To understand the motivation for local differential privacy, let us consider the practical application of Internet advertising.1 An advertising system receives, as input, feedback from the users which may reveal private information about them. The advertising system employs a suitable learning algorithm and selects ads for the users tailored to the feedback given by them. These selected ads are then given to the advertisers as output. While using global differential privacy, privacy is maintained from the advertisers by ensuring that the output of the learning algorithm does not reveal information about the input (i.e., user information). Typically, advertising systems are established by leading social media networks, web browsers and other popular websites. Korolova (2010) and Kosinski, Stillwell, and Graepel (2013) show that it is possible to accurately predict a range of highly sensitive personal attributes, including age, sexual orientation, relationship status, and political and religious affiliation, using the feedback available to the advertising systems. Such a possible breach of privacy necessitates protecting personal user information not only from the advertisers but also from the advertising systems. Local differential privacy is able to achieve this objective, unlike global differential privacy.
+ In this article, we propose to use a low privacy regime using local differential privacy. In the low privacy regime, the noise added to the data is small and the aim of the privacy mechanism is to send as much information about the data as allowed, but no more (Kairouz, Oh, and Viswanath 2014). This is in alignment with our dual goal of using privacy in recommendation systems, Internet advertising, and other similar applications: provide useful recommendations/ads to the users while respecting their privacy as much as possible.
+ We measure the utility of our proposed algorithm using regret, which is a measure of the total mistake cost (precise definitions will follow in the next section). When rewards are bounded (as assumed in most works in the literature), the regret of any algorithm is trivially bounded linearly in the number of time steps T. An algorithm is said to be learning if its regret is bounded sub-linearly in T.
+ Main Contributions
+ 1. We propose non-stationary stochastic corrupt bandits, a novel formulation which aims to preserve local differential privacy while still providing high utility for sequential decision making in a non-stationary environment.
+ 2. We construct an algorithm called SW-KLUCB-CF for the considered problem.
+ 3. We prove an upper bound on the utility of SW-KLUCB-CF in terms of its regret. This upper bound is near-optimal in terms of the number of time steps and matches the best known bound for analogous problems in terms of the number of time steps and the number of changes.
+ 4. We provide an optimal mechanism to achieve a desired level of local differential privacy while achieving high utility.
+ This work is an extension of Gajane, Urvoy, and Kaufmann (2018) to non-stationary environments and reuses some of the concepts used there. However, it should be noted that the algorithms proposed in Gajane, Urvoy, and Kaufmann (2018) will not be able to solve the problem considered in this article. In fact, it is easy to construct non-stationary environments for which the algorithms proposed in Gajane, Urvoy, and Kaufmann (2018) (and all other differentially private algorithms designed for stationary environments) will suffer regret linear in the number of time steps T. On the other hand, the algorithm proposed in this article can guarantee regret sub-linear in T in such scenarios. Furthermore, due to the changing environment and the use of a sliding window, the regret analysis in our article presents challenges not encountered in stationary settings.
+ Our extension to non-stationary environments is practically relevant as the assumption of stationarity is sometimes unrealistic in real-world applications. Such an extension providing local differential privacy in non-stationary environments for the problem of data collection is given by Joseph et al. (2018). Our problem is different from Joseph et al. (2018) as we study learning to make optimal sequential decisions in a non-stationary environment while providing local differential privacy. Note that a naive strategy of restarting an algorithm (designed for a stationary environment) after each change is not possible in the problem considered here as the time instants at which the changes occur are unknown.
+ 1 We consider a simplistic scenario for illustrative purposes.
+ Related Work
+ In the context of sequential decision making, global differential privacy has been studied in various settings including stochastic bandits (Mishra and Thakurta 2015; Tossou and Dimitrakakis 2016), adversarial bandits (Thakurta and Smith 2013; Tossou and Dimitrakakis 2017) and collaborative bandits (Wang et al. 2020). In the context of sequential decision making, local differential privacy has been considered in the stochastic bandit setting (Gajane, Urvoy, and Kaufmann 2018; Tao et al. 2022), contextual bandits (Zheng et al. 2020), collaborative bandits (Wang et al. 2020) and Markov decision processes (Chowdhury and Zhou 2022; Garcelon et al. 2020). For a comprehensive overview of differential privacy and its application to other problems, see Dwork and Roth (2014).
+ The notion of using a sliding window mechanism (as we do in our proposed algorithm) to deal with a non-stationary environment has been employed in classical bandits (Garivier and Moulines 2011) as well as Markov decision processes (Gajane, Ortner, and Auer 2018).
+ Non-Stationary Stochastic Corrupt Bandits
+ A non-stationary stochastic corrupt bandits problem is formally characterized by a set of arms A = {1, . . . , K} on which are indexed a list of unknown sub-Gaussian reward distributions {νa(1)}a∈A, . . . , {νa(LT)}a∈A, a list of unknown sub-Gaussian feedback distributions {ςa(1)}a∈A, . . . , {ςa(LT)}a∈A, and a list of known mean-corruption functions {ga}a∈A. Here, the total number of time steps (i.e., the horizon) is indicated as T. The environment undergoes LT abrupt changes at unknown time steps called breakpoints, and it remains constant in the intervals between two successive breakpoints.
+ For notational convenience, we assume that the first breakpoint occurs at t = 1. From the ith breakpoint until the subsequent breakpoint (or the horizon, in case of the last breakpoint), if the learner pulls an arm a ∈ A at time t, they receive a (hidden) reward Rt drawn from the distribution νa(i) with mean µa(i) and observe a feedback Ft drawn from the distribution ςa(i) with mean λa(i). We assume that, for each arm, there exists a loose link between the reward and the feedback through a known corruption function ga which maps the mean of the reward distribution to the mean of the feedback distribution: ga(µa(i)) = λa(i), ∀a ∈ A and 1 ≤ i ≤ LT. Our proposed algorithm and the proven regret bound also work if the corruption function for an arm changes across time, as long as the current corruption function is known.
+ Note that these ga functions may be completely different from one arm to another. For Bernoulli distributions, the reward distributions and the feedback distributions are in [0, 1] for all a ∈ A and we assume all the corruption functions {ga}a∈A to be continuous in this interval. We also assume the corruption functions {ga}a∈A to be strictly monotonic and denote the corresponding inverse functions by ga^{-1}. The assumption of monotonicity is required for efficient learning, as proved in Gajane, Urvoy, and Kaufmann (2018).
+ Another way to define the link between the reward and the feedback is to provide a corruption scheme operator g̃a which maps the rewards into feedback distributions.
+ Randomized Response
+ Randomized response (a privacy protection technique introduced by Warner (1965)) can also be simulated by a Bernoulli corrupt bandits problem, and the corresponding corruption scheme g̃a is encoded as (rows indexed by feedback y ∈ {0, 1}, columns by reward x ∈ {0, 1}):
+           x = 0          x = 1
+ Ma :=  y = 0:  p00(a)        1 − p11(a)
+        y = 1:  1 − p00(a)    p11(a)                (1)
+ Each item in Ma denotes the probability of observing a particular feedback for a particular reward, i.e., Ma(y, x) := P[Feedback from arm a = y | Reward from arm a = x]. The corresponding corruption function is ga(x) = 1 − p00(a) + [p00(a) + p11(a) − 1] · x.
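The corruption scheme of Eq. (1) can be illustrated with a small simulation (our own sketch; the parameter values below are hypothetical, not from the paper). It draws Bernoulli rewards for one arm, flips them through the randomized-response matrix, and checks empirically that the mean feedback approaches ga(µa) = 1 − p00(a) + (p00(a) + p11(a) − 1) · µa.

```python
import random

def corrupt(reward, p00, p11):
    """Randomized response: keep a 0-reward as 0 w.p. p00, a 1-reward as 1 w.p. p11."""
    if reward == 0:
        return 0 if random.random() < p00 else 1
    return 1 if random.random() < p11 else 0

def g(mu, p00, p11):
    """Mean-corruption function g_a mapping mean reward to mean feedback."""
    return 1 - p00 + (p00 + p11 - 1) * mu

random.seed(0)
mu, p00, p11 = 0.7, 0.8, 0.9   # hypothetical arm mean and corruption parameters
n = 200_000
feedback_sum = 0
for _ in range(n):
    reward = 1 if random.random() < mu else 0
    feedback_sum += corrupt(reward, p00, p11)
empirical = feedback_sum / n
print(empirical, g(mu, p00, p11))  # empirical mean feedback vs. g_a(mu_a)
```

The learner only ever sees the corrupted feedback; knowing ga lets it invert the observed feedback mean back to an estimate of the reward mean.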
+ To measure the utility of an algorithm for this problem, we define the notion of regret in the following. Let us denote the mean reward of arm a at time step t as µa,t. The objective of an algorithm, which chooses the arm ât at time t based only on the previously observed feedback F1, . . . , Ft−1, is to maximize the expected sum of rewards, i.e., to achieve high utility. This is equivalent to minimizing the regret,
+ Regret(T) := Σ_{t=1}^{T} µ∗,t − E[ Σ_{t=1}^{T} µât,t ],
+ where µ∗,t := max_{a∈A} µa,t. Regret measures the performance of the algorithm against an omniscient policy that at each time step chooses the arm with the maximal mean reward. Thus, low regret translates to achieving high utility.
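The regret definition above can be sketched concretely for a fixed sequence of pulls (a toy example with hypothetical means; in the definition the expectation is over the algorithm's choices, here we just evaluate one realized sequence):

```python
def regret(mean_rewards, chosen):
    """Regret of a pull sequence: sum over t of (max_a mu_{a,t}) - mu_{chosen_t, t}.
    mean_rewards[t][a] is the mean reward of arm a at time step t."""
    total = 0.0
    for t, a in enumerate(chosen):
        total += max(mean_rewards[t]) - mean_rewards[t][a]
    return total

# Toy two-arm instance with one breakpoint: arm 0 is best, then arm 1.
means = [[0.9, 0.5]] * 3 + [[0.2, 0.8]] * 3
print(regret(means, [0, 0, 0, 1, 1, 1]))  # always optimal -> 0.0
print(regret(means, [1, 1, 1, 0, 0, 0]))  # always suboptimal
```

Note that the comparator µ∗,t changes at the breakpoint, which is exactly why a stationary algorithm that locks onto arm 0 suffers linear regret here.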
+ The Proposed Algorithm
+ To solve the problem at hand, we propose SW-KLUCB-CF, an adaptation of the kl-UCB algorithm of Cappé et al. (2013). The algorithm takes as input: the window size w, a non-decreasing function f, the horizon T and the corruption functions g1, . . . , gK. We assume that the horizon T is known; an unknown T can be handled using the doubling trick (Besson and Kaufmann 2018). We use d(x, y) to denote the Kullback–Leibler divergence between two Bernoulli distributions with means x and y. We also use the shorthand x ∧ y to denote min(x, y).
+ At each time step t, the algorithm computes an Indexa(t), which is an upper-confidence bound on µa,t built from a confidence interval on λa,t based on the KL-divergence. The quantity Na(t, w) denotes the number of times arm a was chosen in the last w time steps until time t. Correspondingly, λ̂a(t, w) denotes the empirical mean of the feedback observed from arm a in the last w time steps until time t:
+ λ̂a(t, w) := (1 / Na(t, w)) · Σ_{s=max{1, t−w+1}}^{t} Fs · 1(âs = a).
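The windowed statistics Na(t, w) and λ̂a(t, w) can be maintained incrementally; a minimal sketch (class and variable names are ours, not from the paper):

```python
from collections import deque

class SlidingWindowStats:
    """Keeps the last w (arm, feedback) pairs and exposes
    N_a(t, w) and the windowed empirical feedback mean."""
    def __init__(self, w):
        self.w = w
        self.history = deque()  # (arm, feedback), at most w entries

    def update(self, arm, feedback):
        self.history.append((arm, feedback))
        if len(self.history) > self.w:
            self.history.popleft()  # drop the observation that left the window

    def count(self, arm):
        """N_a(t, w): pulls of `arm` within the window."""
        return sum(1 for a, _ in self.history if a == arm)

    def mean(self, arm):
        """Windowed empirical feedback mean of `arm` (0.0 if unseen)."""
        n = self.count(arm)
        return sum(f for a, f in self.history if a == arm) / n if n else 0.0

stats = SlidingWindowStats(w=3)
for arm, f in [(0, 1), (0, 0), (1, 1), (0, 1)]:
    stats.update(arm, f)
# window now holds only the last 3 pulls: (0, 0), (1, 1), (0, 1)
print(stats.count(0), stats.mean(0))  # 2 0.5
```

Discarding observations that fall out of the window is what lets the estimates track the current segment's feedback means after a breakpoint.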
+ Theorem 1 gives an upper bound on the regret of SW-KLUCB-CF. A more explicit bound is proved in the Appendix.
+ Theorem 1 The regret of SW-KLUCB-CF using f(x) := log(x) + 3 log(log(x)) and w = √(4eT / (LT + 4)) on a Bernoulli non-stationary stochastic corrupt bandits problem with strictly monotonic and continuous corruption functions {ga}a∈A at time T is upper-bounded by2
+ Õ( Σ_{a∈A} √(LT · T) + Σ_{i=1}^{LT} Σ_{a≠a∗(i)} log(√(T/LT)) / d(λa(i), ga(µ∗(i))) ),
+ where a∗(i) and µ∗(i) are the optimal arm and the corresponding optimal mean, respectively, after the ith change and before the subsequent change.
+ The lower bound on regret in terms of T for classical non-stationary stochastic bandits is Ω(√T) (Garivier and Moulines 2011). Theorem 1 matches the lower bound up to logarithmic factors, so SW-KLUCB-CF has near-optimal regret guarantees in terms of the time horizon T. The best known regret upper bounds for classical non-stationary stochastic bandits (e.g., Auer, Gajane, and Ortner (2019)) also feature logarithmic terms besides the lower bound, hence our regret bound is in line with the best known results
+ 2 Õ ignores logarithmic factors and constants.
+ Algorithm 1: Sliding Window KLUCB for Non-Stationary Stochastic Corrupt Bandits (SW-KLUCB-CF)
+ Input: window size w, a non-decreasing function f : N → R, horizon T, monotonic and continuous corruption functions g1, . . . , gK, and d(x, y) := KL(B(x), B(y)).
+ 1. Initialization: Pull each arm once.
+ 2. for time t = K, . . . , T − 1 do
+    (a) Compute for each arm a ∈ A the quantity
+        Indexa(t) := max{ q : Na(t, w) · d(λ̂a(t, w), ga(q)) ≤ f(t ∧ w) }
+    (b) Pull arm ât+1 := argmax_{a∈A} Indexa(t) and observe the feedback Ft+1.
+    end for
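Step (a) of the algorithm box can be sketched for Bernoulli feedback and an increasing corruption function: first find the KL-based upper confidence bound on the feedback mean by bisection, then map it back through ga^{-1}. This is our own illustrative code (function names, the bisection loop, and the numeric values are ours, not from the paper):

```python
import math

def kl_bernoulli(x, y):
    """KL divergence d(x, y) between Bernoulli(x) and Bernoulli(y)."""
    eps = 1e-12
    x = min(max(x, eps), 1 - eps)
    y = min(max(y, eps), 1 - eps)
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

def upper_conf(lam_hat, n, threshold):
    """Largest q >= lam_hat with n * d(lam_hat, q) <= threshold, by bisection
    (d(lam_hat, q) is increasing in q on [lam_hat, 1])."""
    lo, hi = lam_hat, 1.0
    for _ in range(50):
        mid = (lo + hi) / 2
        if n * kl_bernoulli(lam_hat, mid) <= threshold:
            lo = mid
        else:
            hi = mid
    return lo

def index(lam_hat, n, t, w, g_inverse):
    """Index_a(t) = g_a^{-1}(u_a(t)) for an increasing corruption function g_a,
    with exploration function f(x) = log(x) + 3 log(log(x))."""
    f = lambda x: math.log(x) + 3 * math.log(math.log(x))
    return g_inverse(upper_conf(lam_hat, n, f(min(t, w))))

# Example: identity corruption, 10 windowed pulls with feedback mean 0.4.
print(index(0.4, 10, t=100, w=50, g_inverse=lambda y: y))
```

More windowed pulls of an arm shrink its confidence interval, so the index of a frequently played arm decays toward its windowed feedback mean, which drives the exploration-exploitation trade-off.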
+ for analogous problems. Moreover, the bound in Theorem 1 also matches the best known regret bound in terms of LT for classical non-stationary stochastic bandits, which is O(√LT).
+ We can use SW-KLUCB-CF on non-stationary stochastic corrupt bandits where the corruption is done via randomized response. The following corollary bounds the resulting regret.
+ Corollary 1 The regret of SW-KLUCB-CF on a Bernoulli non-stationary stochastic corrupt bandits problem with randomized response using corruption matrices {Ma}a∈A at time T is upper-bounded by
+ Õ( Σ_{a∈A} √(LT · T) + Σ_{i=1}^{LT} Σ_{a≠a∗(i)} log(√(T/LT)) / (p00(a) + p11(a) − 1)² ).
+ This corollary follows from Theorem 1 and Pinsker’s inequality: d(x, y) ≥ 2(x − y)². The term (p00(a) + p11(a) − 1) can be understood as the slope of the corruption function ga.
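The Pinsker step used above can be checked numerically; a small sketch (ours) evaluates d(x, y) ≥ 2(x − y)² over a grid of Bernoulli means:

```python
import math

def kl_bernoulli(x, y):
    """d(x, y): KL divergence between Bernoulli(x) and Bernoulli(y)."""
    eps = 1e-12
    x = min(max(x, eps), 1 - eps)
    y = min(max(y, eps), 1 - eps)
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

# Pinsker's inequality d(x, y) >= 2 (x - y)^2 on a grid of Bernoulli means.
grid = [i / 20 for i in range(1, 20)]
ok = all(kl_bernoulli(x, y) >= 2 * (x - y) ** 2 - 1e-12
         for x in grid for y in grid)
print(ok)  # True
```

Replacing d(λa(i), ga(µ∗(i))) by the smaller quantity 2(λa(i) − ga(µ∗(i)))² can only enlarge the bound, which is how the slope term enters Corollary 1.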
+ Corruption Mechanism to Preserve Local Privacy in a Non-Stationary Environment
+ First, let us formally define local differential privacy.
+ Definition 1 (Locally differentially private mechanism) Any randomized mechanism M is ǫ-locally differentially private for ǫ ≥ 0, if for all d1, d2 ∈ Domain(M) and for all S ⊂ Range(M),
+ P[M(d1) ∈ S] ≤ e^ǫ · P[M(d2) ∈ S].
+ As done in Gajane, Urvoy, and Kaufmann (2018), a straightforward approach to achieve local differential privacy using corrupt bandits is to employ a corruption scheme on the user feedback. This is similar to how randomized response is used in data collection by Wang, Wu, and Hu (2016).
+ Definition 2 (ǫ-locally differentially private bandit feedback corruption scheme) A bandit feedback corruption scheme g̃ is ǫ-locally differentially private for ǫ ≥ 0, if for all reward sequences Rt1, . . . , Rt2 and R′t1, . . . , R′t2, and for all S ⊂ Range(g̃),
+ P[g̃(Rt1, . . . , Rt2) ∈ S] ≤ e^ǫ · P[g̃(R′t1, . . . , R′t2) ∈ S].
+ When corruption is done by randomized response, local differential privacy requires that max_{1≤a≤K} { p00(a) / (1 − p11(a)), p11(a) / (1 − p00(a)) } ≤ e^ǫ. From Corollary 1, we can see that to achieve lower regret, p00(a) + p11(a) is to be maximized for all a ∈ A. Using Wang, Wu, and Hu (2016, Result 1), we can state that, in order to achieve ǫ-local differential privacy while maximizing p00(a) + p11(a),
+           x = 0               x = 1
+ Ma =  y = 0:  e^ǫ/(1 + e^ǫ)      1/(1 + e^ǫ)
+       y = 1:  1/(1 + e^ǫ)        e^ǫ/(1 + e^ǫ)            (2)
+ (2)
428
+ As it turns out, this is equivalent to the staircase mechanism
429
+ for local privacy which is the optimal local differential pri-
430
+ vacy mechanism for low privacy regime (Kairouz, Oh, and
431
+ Viswanath 2016, Theorem 14). The trade-off between utility
432
+ and privacy is controlled by ǫ.
433
+ Using the corruption parameters from Eq. (2) with Corol-
434
+ lary 1, we arrive at the following upper bound.
435
+ Corollary 2 At time T , the regret of
436
+ SW-KLUCB-
437
+ CF
438
+ with
439
+ ǫ-locally
440
+ differentially
441
+ private
442
+ bandit
443
+ feedback
444
+ corruption
445
+ scheme
446
+ given
447
+ by
448
+ Eq.
449
+ (2)
450
+ is
451
+ ˜O
452
+
453
+
454
+ a∈A
455
+ √LTT + �LT
456
+ i=1
457
+
458
+ a̸=a∗(i)
459
+ log
460
+ ��
461
+ T
462
+ LT
463
+
464
+ ( eǫ−1
465
+ eǫ+1)
466
+ 2
467
+
468
+ .
469
+ The term
470
+ � eǫ−1
471
+ eǫ+1
472
+ �2 in the above expression conveys the rela-
473
+ tionship of the regret with the level of local differential pri-
474
+ vacy symbolized by ǫ. For low values of ǫ,
475
+ � eǫ−1
476
+ eǫ+1
477
+
478
+ ≈ ǫ/2.
479
+ This is in line with other bandit algorithms providing differ-
480
+ ential privacy (e.g., Mishra and Thakurta (2015)).
481
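The small-ǫ approximation quoted above can be illustrated numerically (a sketch of ours; the factor is just tanh(ǫ/2)):

```python
import math

def privacy_factor(eps):
    """(e^eps - 1) / (e^eps + 1), which equals tanh(eps / 2)."""
    return (math.exp(eps) - 1) / (math.exp(eps) + 1)

for eps in [0.01, 0.1, 0.5]:
    print(eps, privacy_factor(eps), eps / 2)
# For small eps the factor is close to eps / 2, so the privacy-dependent
# part of the Corollary 2 bound scales like 1 / eps^2 as eps -> 0.
```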
+ Elements of Mathematical Analysis
482
+ Here, we provide a proof outline for Theorem 1. Please refer
483
+ to the Appendix for the complete proof.
484
+ We start by bounding the expected number of times a sub-
485
+ optimal arm (i.e., an arm other than the optimal arm at the
486
+ time of selection) is pulled by the algorithm till horizon T .
487
+ Recall that, at any time step t, SW-KLUCB-CF pulls an
488
+ arm maximizing an index defined as
489
+ Indexa(t)
490
+ := max
491
+
492
+ q : Na(t, w) · d
493
+
494
+ ˆλa(t, w), ga(q)
495
+
496
+ ≤ f (t ∧ w)
497
+
498
+ = max g−1
499
+ a
500
+ ��
501
+ q : Na(t, w) · d
502
+
503
+ ˆλa(t, w), q
504
+
505
+ ≤ f (t ∧ w)
506
+ ��
507
+ .
508
+ We further decompose the computation of index as follows,
509
+ Indexa(t) :=
510
+ �g−1
511
+ a (ℓa(t))
512
+ if ga is decreasing,
513
+ g−1
514
+ a (ua(t))
515
+ if ga is increasing
516
+ where,
517
+ ℓa(t) := min
518
+
519
+ q : Na(t, w) · d
520
+
521
+ ˆλa(t, w), q
522
+
523
+ ≤ f (t ∧ w)
524
+
525
+ ,
526
+ ua(t) := max
527
+
528
+ q : Na(t, w) · d
529
+
530
+ ˆλa(t, w), q
531
+
532
+ ≤ f (t ∧ w)
533
+
534
+ .
535
+ The interval [ℓa(t), ua(t)] is a KL-based confidence in-
536
+ terval on the mean feedback λa,t of arm a at time t. This
537
+ is in contrast to kl-UCB (Capp´e et al. 2013) where a con-
538
+ fidence interval is placed on the mean reward. Furthermore,
539
+ This differs from kl-UCB-CF (Gajane, Urvoy, and Kauf-
540
+ mann 2018) where the mean feedback of an arm remains the
541
+ same for all the time steps and f does not feature w.
In our analysis, we use the fact that when an arm a is picked at time t + 1 by SW-KLUCB-CF, one of the following is true. Either the mean feedback of the optimal arm a*,t with mean reward µ*,t is outside its confidence interval (i.e., g_{a*,t}(µ*,t) < ℓ_{a*,t}(t) or g_{a*,t}(µ*,t) > u_{a*,t}(t)), which is unlikely. Or the mean feedback of the optimal arm is where it should be, and then the fact that arm a is selected indicates that the confidence interval on λ_a cannot be too small, as either u_a(t) ≥ g_a(µ*,t) or ℓ_a(t) ≤ g_a(µ*,t). The previous statement follows from considering various cases depending on whether the corruption functions g_a and g_{a*,t} are increasing or decreasing. We then need to control the two terms in the decomposition of the expected number of draws of arm a. The term regarding the "unlikely" event is bounded using the same technique as in the kl-UCB analysis, however with some added challenges due to the use of a sliding window. In particular, the analysis of a typical upper confidence bound algorithm for bandits relies on the fact that the confidence interval for any arm is always non-increasing; however, this is not true while using a sliding window. To control the second term, depending on the monotonicity of the corruption functions g_a and g_{a*,t}, we need to meticulously adapt the arguments in Cappé et al. (2013) to control the number of draws of a suboptimal arm, as can be seen in the Appendix.
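The sliding-window statistics N_a(t, w) and λ̂_a(t, w) aggregate only the feedback observed in the last w time steps, which is precisely what breaks the usual monotone-shrinkage of confidence intervals. One simple way to maintain them per arm (an illustrative sketch, not the paper's implementation) is a deque of timestamped observations:

```python
from collections import deque

class SlidingWindowStats:
    """Per-arm count N_a(t, w) and empirical mean feedback over the
    last `window` global time steps."""

    def __init__(self, window: int):
        self.window = window
        self.obs = deque()  # (time_step, feedback) pairs for this arm

    def update(self, t: int, feedback=None):
        # feedback is None on steps where this arm was not pulled
        if feedback is not None:
            self.obs.append((t, feedback))
        # drop observations that fell out of the window ending at t
        while self.obs and self.obs[0][0] <= t - self.window:
            self.obs.popleft()

    def count(self) -> int:
        return len(self.obs)

    def mean(self) -> float:
        return sum(f for _, f in self.obs) / max(1, len(self.obs))
```

Because old observations are evicted, the count N_a(t, w) can decrease over time, so the corresponding confidence interval can widen again — the non-monotonicity the analysis has to work around.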
Concluding Remarks

In this work, we proposed the setting of non-stationary stochastic corrupt bandits for preserving privacy while still maintaining high utility in sequential decision making in a changing environment. We devised an algorithm called SW-KLUCB-CF and proved its regret upper bound, which is near-optimal in the number of time steps and matches the best known bound for analogous problems in terms of the number of time steps and the number of changes. Moreover, we provided an optimal corruption scheme to be used with our algorithm in order to attain the dual goal of achieving high utility while maintaining the desired level of privacy.

Interesting directions for future work include:
1. Complete an empirical evaluation of the proposed algorithm on simulated as well as real-life data.
2. Characterize the changes in the environment by a variation budget (as done in Besbes, Gur, and Zeevi (2014) for classical bandits) instead of the number of changes.
3. Incorporate contextual information in the learning process.
4. Propose a Bayesian algorithm for non-stationary stochastic corrupt bandits.
5. Propose a (near-)optimal differentially private algorithm which does not need to know the number of changes.
References

Auer, P.; Gajane, P.; and Ortner, R. 2019. Adaptively Tracking the Best Bandit Arm with an Unknown Number of Distribution Changes. In Beygelzimer, A.; and Hsu, D., eds., Proceedings of the Thirty-Second Conference on Learning Theory, volume 99 of Proceedings of Machine Learning Research, 138–158. PMLR.

Besbes, O.; Gur, Y.; and Zeevi, A. 2014. Stochastic Multi-Armed-Bandit Problem with Non-stationary Rewards. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N.; and Weinberger, K., eds., Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc.

Besson, L.; and Kaufmann, E. 2018. What Doubling Tricks Can and Can't Do for Multi-Armed Bandits. Working paper or preprint.

Cappé, O.; Garivier, A.; Maillard, O.-A.; Munos, R.; and Stoltz, G. 2013. Kullback-Leibler upper confidence bounds for optimal sequential allocation. Annals of Statistics, 41(3): 1516–1541.

Chowdhury, S. R.; and Zhou, X. 2022. Differentially Private Regret Minimization in Episodic Markov Decision Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6): 6375–6383.

Corless, R. M.; Gonnet, G. H.; Hare, D. E. G.; Jeffrey, D. J.; and Knuth, D. E. 1996. On the Lambert W function. Advances in Computational Mathematics, 5(1): 329–359.

Duchi, J. C.; Jordan, M. I.; and Wainwright, M. J. 2014. Privacy Aware Learning. J. ACM, 61(6): 38:1–38:57.

Dwork, C.; McSherry, F.; Nissim, K.; and Smith, A. 2006. Calibrating noise to sensitivity in private data analysis. In Proceedings of the 3rd Theory of Cryptography Conference, 265–284. Springer.

Dwork, C.; and Roth, A. 2014. The Algorithmic Foundations of Differential Privacy. Found. Trends Theor. Comput. Sci., 9: 211–407.

Gajane, P.; Ortner, R.; and Auer, P. 2018. A Sliding-Window Approach for Reinforcement Learning in MDPs with Arbitrarily Changing Rewards and Transitions. In the 2nd workshop for Lifelong Learning: A Reinforcement Learning Approach (LLARLA).

Gajane, P.; Urvoy, T.; and Kaufmann, E. 2018. Corrupt Bandits for Preserving Local Privacy. In Janoos, F.; Mohri, M.; and Sridharan, K., eds., Proceedings of Algorithmic Learning Theory, volume 83 of Proceedings of Machine Learning Research, 387–412. PMLR.

Garcelon, E.; Perchet, V.; Pike-Burke, C.; and Pirotta, M. 2020. Local Differentially Private Regret Minimization in Reinforcement Learning. CoRR, abs/2010.07778.

Garivier, A.; and Moulines, E. 2011. On Upper-Confidence Bound Policies for Switching Bandit Problems. In Kivinen, J.; Szepesvári, C.; Ukkonen, E.; and Zeugmann, T., eds., Algorithmic Learning Theory, 174–188. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-24412-4.

Joseph, M.; Roth, A.; Ullman, J.; and Waggoner, B. 2018. Local Differential Privacy for Evolving Data. In Bengio, S.; Wallach, H.; Larochelle, H.; Grauman, K.; Cesa-Bianchi, N.; and Garnett, R., eds., Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.

Kairouz, P.; Oh, S.; and Viswanath, P. 2014. Extremal Mechanisms for Local Differential Privacy. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 27, 2879–2887. Curran Associates, Inc.

Kairouz, P.; Oh, S.; and Viswanath, P. 2016. Extremal Mechanisms for Local Differential Privacy. Journal of Machine Learning Research, 17(17): 1–51.

Korolova, A. 2010. Privacy Violations Using Microtargeted Ads: A Case Study. In ICDMW 2010, The 10th IEEE International Conference on Data Mining Workshops, Sydney, Australia, 13 December 2010, 474–482.

Kosinski, M.; Stillwell, D.; and Graepel, T. 2013. Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15): 5802–5805.

Mishra, N.; and Thakurta, A. 2015. (Nearly) Optimal Differentially Private Stochastic Multi-Arm Bandits. In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, UAI 2015, July 12-16, 2015, Amsterdam, The Netherlands, 592–601.

Tao, Y.; Wu, Y.; Zhao, P.; and Wang, D. 2022. Optimal Rates of (Locally) Differentially Private Heavy-tailed Multi-Armed Bandits. In Camps-Valls, G.; Ruiz, F. J. R.; and Valera, I., eds., Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, 1546–1574. PMLR.

Thakurta, A. G.; and Smith, A. D. 2013. (Nearly) Optimal Algorithms for Private Online Learning in Full-information and Bandit Settings. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, 2733–2741.

Tossou, A. C. Y.; and Dimitrakakis, C. 2016. Algorithms for Differentially Private Multi-Armed Bandits. In 13th International Conference on Artificial Intelligence (AAAI 2016).

Tossou, A. C. Y.; and Dimitrakakis, C. 2017. Achieving privacy in the adversarial multi-armed bandit. In 14th International Conference on Artificial Intelligence (AAAI 2017).

Wang, H.; Zhao, Q.; Wu, Q.; Chopra, S.; Khaitan, A.; and Wang, H. 2020. Global and Local Differential Privacy for Collaborative Bandits. In Proceedings of the 14th ACM Conference on Recommender Systems, RecSys '20, 150–159. New York, NY, USA: Association for Computing Machinery. ISBN 9781450375832.

Wang, Y.; Wu, X.; and Hu, D. 2016. Using Randomized Response for Differential Privacy Preserving Data Collection. In Proceedings of the Workshops of the EDBT/ICDT 2016 Joint Conference, EDBT/ICDT Workshops 2016, Bordeaux, France, March 15, 2016.

Warner, S. L. 1965. Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias. Journal of the American Statistical Association, 60(309): 63+.

Zheng, K.; Cai, T.; Huang, W.; Li, Z.; and Wang, L. 2020. Locally Differentially Private (Contextual) Bandits Learning. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems, volume 33, 12300–12310. Curran Associates, Inc.
Proof of Theorem 1

Proof. The proof follows along the lines of the proof for Theorem 2 from Gajane, Urvoy, and Kaufmann (2018).

The index used by SW-KLUCB-CF is defined by

Index_a(t) := max { q : N_a(t, w) · d(λ̂_a(t, w), g_a(q)) ≤ f(t ∧ w) }
            = max g_a^{-1}( { q : N_a(t, w) · d(λ̂_a(t, w), q) ≤ f(t ∧ w) } ).

For the purpose of this proof, we further decompose the computation of the index as follows,

Index_a(t) := g_a^{-1}(ℓ_a(t)) if g_a is decreasing,
              g_a^{-1}(u_a(t)) if g_a is increasing,

where

ℓ_a(t) := min { q : N_a(t, w) · d(λ̂_a(t, w), q) ≤ f(t ∧ w) } and
u_a(t) := max { q : N_a(t, w) · d(λ̂_a(t, w), q) ≤ f(t ∧ w) }.

Note that the optimal arm at time t is denoted as a*,t and µ*,t is the corresponding optimal mean. Along the same lines, let ℓ*(t) := ℓ_{a*,t}(t) and u*(t) := u_{a*,t}(t).
Let N_a(t) be the number of times arm a has been pulled till time t. To get an upper bound on the regret of our algorithm, we first bound E[N_a(t)] for all the non-optimal arms a (i.e., a ≠ a*,t at time t). Recall that µ_{i,t} is the mean reward of arm i at time step t. Let us define T(w) as the set of indices t ∈ {K + 1, . . . , T} such that µ_{i,s} = µ_{i,t} for all i ∈ {1, . . . , K} and all t − w < s ≤ t. That is to say, T(w) is the set of all time steps t ∈ {K + 1, . . . , T} for which there was no change in the previous w time steps. Recall that â_t is the arm chosen by the algorithm at time step t. Then,

E(N_a(T)) = 1 + Σ_{t=K}^{T−1} P(â_{t+1} = a)
          ≤ 1 + L_T·w + Σ_{K≤t≤T−1, t∈T(w)} P(â_{t+1} = a).
Depending upon whether g_{a*,t} and g_a are increasing or decreasing, there are four possible sub-cases:

• Both g_{a*,t} and g_a are increasing.

(â_{t+1} = a) ⊆ (u*(t) < g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, u*(t) ≥ g_{a*,t}(µ*,t))
  = (u*(t) < g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, g_{a*,t}^{-1}(u*(t)) ≥ µ*,t)   since g_{a*,t} is increasing
  ⊆ (u*(t) < g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, g_a^{-1}(u_a(t)) ≥ µ*,t)   since Index_a ≥ Index_{a*,t}
  = (u*(t) < g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, u_a(t) ≥ g_a(µ*,t))   since g_a is increasing.

∴ E(N_a(T)) ≤ 1 + L_T·w + Σ_{K≤t≤T−1, t∈T(w)} P(u*(t) < g_{a*,t}(µ*,t))
  + Σ_{K≤t≤T−1, t∈T(w)} P(â_{t+1} = a, u_a(t) ≥ g_a(µ*,t)).  (3)

• g_{a*,t} is decreasing and g_a is increasing.

(â_{t+1} = a) ⊆ (ℓ*(t) > g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, ℓ*(t) ≤ g_{a*,t}(µ*,t))
  = (ℓ*(t) > g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, g_{a*,t}^{-1}(ℓ*(t)) ≥ µ*,t)   since g_{a*,t} is decreasing
  ⊆ (ℓ*(t) > g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, g_a^{-1}(u_a(t)) ≥ µ*,t)   since Index_a ≥ Index_{a*,t}
  = (ℓ*(t) > g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, u_a(t) ≥ g_a(µ*,t))   since g_a is increasing.

∴ E(N_a(T)) ≤ 1 + L_T·w + Σ_{K≤t≤T−1, t∈T(w)} P(ℓ*(t) > g_{a*,t}(µ*,t))
  + Σ_{K≤t≤T−1, t∈T(w)} P(â_{t+1} = a, u_a(t) ≥ g_a(µ*,t)).  (4)

• g_{a*,t} is increasing and g_a is decreasing.

(â_{t+1} = a) ⊆ (u*(t) < g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, u*(t) ≥ g_{a*,t}(µ*,t))
  = (u*(t) < g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, g_{a*,t}^{-1}(u*(t)) ≥ µ*,t)   since g_{a*,t} is increasing
  ⊆ (u*(t) < g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, g_a^{-1}(ℓ_a(t)) ≥ µ*,t)   since Index_a ≥ Index_{a*,t}
  = (u*(t) < g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, ℓ_a(t) ≤ g_a(µ*,t))   since g_a is decreasing.

∴ E(N_a(T)) ≤ 1 + L_T·w + Σ_{K≤t≤T−1, t∈T(w)} P(u*(t) < g_{a*,t}(µ*,t))
  + Σ_{K≤t≤T−1, t∈T(w)} P(â_{t+1} = a, ℓ_a(t) ≤ g_a(µ*,t)).  (5)

• Both g_{a*,t} and g_a are decreasing.

(â_{t+1} = a) ⊆ (ℓ*(t) > g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, ℓ*(t) ≤ g_{a*,t}(µ*,t))
  = (ℓ*(t) > g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, g_{a*,t}^{-1}(ℓ*(t)) ≥ µ*,t)   since g_{a*,t} is decreasing
  ⊆ (ℓ*(t) > g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, g_a^{-1}(ℓ_a(t)) ≥ µ*,t)   since Index_a ≥ Index_{a*,t}
  = (ℓ*(t) > g_{a*,t}(µ*,t)) ∪ (â_{t+1} = a, ℓ_a(t) ≤ g_a(µ*,t))   since g_a is decreasing.

∴ E(N_a(T)) ≤ 1 + L_T·w + Σ_{K≤t≤T−1, t∈T(w)} P(ℓ*(t) > g_{a*,t}(µ*,t))
  + Σ_{K≤t≤T−1, t∈T(w)} P(â_{t+1} = a, ℓ_a(t) ≤ g_a(µ*,t)).  (6)
We first upper bound the two sums

Σ_{K≤t≤T−1, t∈T(w)} P(u*(t) < g_{a*,t}(µ*,t))  and  Σ_{K≤t≤T−1, t∈T(w)} P(ℓ*(t) > g_{a*,t}(µ*,t))  (7)

using that ℓ*(t) and u*(t) are respectively lower and upper confidence bounds on g_{a*,t}(µ*,t). Recall that min{t, w} is denoted as t ∧ w.

P(u*(t) < g_{a*,t}(µ*,t))
  ≤ P( g_{a*,t}(µ*,t) > λ̂_{a*,t}(t, w) and N_{a*,t}(t, w) · d(λ̂_{a*,t}(t, w), g_{a*,t}(µ*,t)) ≥ f(t ∧ w) )
  ≤ P( ∃s ∈ {1, . . . , (t ∧ w)} : g_{a*,t}(µ*,t) > λ̂_{a*,t,s} and s · d(λ̂_{a*,t,s}, g_{a*,t}(µ*,t)) ≥ f(t ∧ w) )
  ≤ min{ 1, e ⌈f(t ∧ w) log t⌉ e^{−f(t∧w)} },  (8)

where the upper bound follows from Lemma 2 in Cappé et al. (2013), and the fact that λ̂_{a*,t,s} is the empirical mean of s Bernoulli samples with mean g_{a*,t}(µ*,t). Similarly, one has

P(ℓ*(t) > g_{a*,t}(µ*,t)) ≤ min{ 1, e ⌈f(t ∧ w) log t⌉ e^{−f(t∧w)} }.  (9)

As f(x) := log x + 3 log log x, for x ≥ 3,

e ⌈f(x) log x⌉ ≤ 4e log² x.

Then, using Eq. (8) and Eq. (9), the two quantities in Eq. (7) can be upper bounded by

1 + Σ_{t=3}^{T−1} e ⌈f(t ∧ w) log t⌉ e^{−f(t∧w)}
  ≤ 1 + Σ_{t=3}^{T−1} 4e · log²(t ∧ w) · e^{−f(t∧w)}
  = 1 + 4e Σ_{t=3}^{T−1} 1 / ((t ∧ w) · log(t ∧ w))
  = 1 + 4e Σ_{t=3}^{w} 1 / ((t ∧ w) · log(t ∧ w)) + 4e Σ_{t=w+1}^{T} 1 / ((t ∧ w) · log(t ∧ w))
  ≤ 1 + 4e Σ_{t=3}^{w} 1 / (3 log 3) + 4e Σ_{t=w+1}^{T} 1 / (w log w)
  ≤ 1 + 4ew / (3 log 3) + 4eT / (w log w).

This proves that

Σ_{K≤t≤T−1, t∈T(w)} P(u*(t) < g_{a*,t}(µ*,t)) ≤ 1 + 4ew / (3 log 3) + 4eT / (w log w)  and,  (10)

Σ_{K≤t≤T−1, t∈T(w)} P(ℓ*(t) > g_{a*,t}(µ*,t)) ≤ 1 + 4ew / (3 log 3) + 4eT / (w log w).  (11)
We now turn our attention to the other two sums involved in the upper bound we gave for E(N_a(T)). Let the unknown time step at which the ith change occurs be denoted as t_i. For notational convenience, we assume that the first change occurs at t = 1, so t_1 = 1, and change L + 1 takes place at t = T + 1, where T is the horizon. We introduce the notation d⁺(x, y) = d(x, y) · 1(x < y) and d⁻(x, y) = d(x, y) · 1(x > y). So we can write, when g_a is increasing,

Σ_{K≤t≤T−1, t∈T(w)} P(â_{t+1} = a, u_a(t) ≥ g_a(µ*,t))
  ≤ Σ_{i=1}^{L} Σ_{t_i≤t<t_{i+1}−1, t∈T(w)} P(â_{t+1} = a, u_a(t) ≥ g_a(µ*,t))
  = E[ Σ_{i=1}^{L} Σ_{t_i≤t<t_{i+1}−1, t∈T(w)} 1{â_{t+1} = a} · 1{N_a(t,w) · d⁺(λ̂_{a,N_a(t,w)}, g_a(µ*,t)) ≤ f(t∧w)} ]
  ≤ E[ Σ_{i=1}^{L} Σ_{t_i≤t<t_{i+1}−1, t∈T(w)} Σ_{s=1}^{t∧w} 1{â_{t+1} = a} · 1{N_a(t,w) = s} · 1{s · d⁺(λ̂_{a,s}, g_a(µ*,t)) ≤ f(t∧w)} ]
  ≤ E[ Σ_{i=1}^{L} Σ_{t_i≤t<t_{i+1}−1, t∈T(w)} Σ_{s=1}^{t∧w} 1{â_{t+1} = a} · 1{N_a(t) = s} · 1{s · d⁺(λ̂_{a,s}, g_a(µ*,t)) ≤ f(t∧w)} ]
  ≤ E[ Σ_{i=1}^{L} Σ_{s=1}^{t∧w} 1{s · d⁺(λ̂_{a,s}, g_a(µ*,t)) ≤ f(t∧w)} · ( Σ_{t_i≤t<t_{i+1}−1, t∈T(w)} 1{â_{t+1} = a} · 1{N_a(t) = s} ) ],

where the inner sum in parentheses is at most 1. In the above, the penultimate step follows from the fact that the event N_a(t, w) = s is subsumed by the event N_a(t) = s. So, one obtains, when g_a is increasing,

Σ_{K≤t≤T−1, t∈T(w)} P(â_{t+1} = a, u_a(t) ≥ g_a(µ*,t)) ≤ E[ Σ_{l=1}^{L} Σ_{s=1}^{t∧w} 1{s · d⁺(λ̂_{a,s}, g_a(µ*,t)) ≤ f(t∧w)} ].  (12)

Using similar arguments, one can show that when g_a is decreasing,

Σ_{K≤t≤T−1, t∈T(w)} P(â_{t+1} = a, ℓ_a(t) ≤ g_a(µ*,t)) ≤ E[ Σ_{l=1}^{L} Σ_{s=1}^{t∧w} 1{s · d⁻(λ̂_{a,s}, g_a(µ*,t)) ≤ f(t∧w)} ].  (13)
Recall that µ_a(i) is the mean reward of arm a after the ith change and before the subsequent change. Correspondingly, let λ_a(i) be the mean feedback of arm a after the ith change and before the subsequent change. Furthermore, let µ*(i) be the optimal mean after the ith change and before the subsequent change.

Using Appendix A.2 of (Cappé et al. 2013), the quantity on the right-hand side of (12) can be upper-bounded by

Σ_{i=1}^{L} f(w) / d(λ_a(i), g_a(µ*(i)))
  + Σ_{i=1}^{L} √( d′(λ_a(i), g_a(µ*(i)))² / d(λ_a(i), g_a(µ*(i)))³ ) · √f(w)
  + Σ_{i=1}^{L} 2 · ( d′(λ_a(i), g_a(µ*(i))) / d(λ_a(i), g_a(µ*(i))) )² + 1.  (14)

For (13), noting that d⁻(x, y) = d⁺(1 − x, 1 − y), one has

P( s · d⁻(λ̂_{a,s}, g_a(µ*,t)) ≤ f(t ∧ w) )
  = P( s · d⁺(1 − λ̂_{a,s}, 1 − g_a(µ*,t)) ≤ f(t ∧ w) )
  = P( s · d⁺(µ̂_{a,s}, 1 − g_a(µ*,t)) ≤ f(t ∧ w) ),

where µ̂_{a,s} := 1 − λ̂_{a,s} is the empirical mean of s observations of a Bernoulli random variable with mean 1 − λ_a < 1 − g_a(µ*,t). Hence, the analysis of (Cappé et al. 2013) can be applied, and using that d(1 − x, 1 − y) = d(x, y) and d′(1 − x, 1 − y) = −d′(x, y), the right-hand side of (13) can also be upper bounded by (14).
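The identities invoked here, d(1 − x, 1 − y) = d(x, y) and d⁻(x, y) = d⁺(1 − x, 1 − y), follow from the symmetry of the Bernoulli KL divergence under relabelling 0 ↔ 1, and are easy to sanity-check numerically (an illustrative check with our own helper names):

```python
import math

def kl(p, q):
    """Bernoulli KL divergence d(p, q), for p, q strictly inside (0, 1)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_plus(p, q):
    """d+(p, q) = d(p, q) * 1(p < q)."""
    return kl(p, q) if p < q else 0.0

def kl_minus(p, q):
    """d-(p, q) = d(p, q) * 1(p > q)."""
    return kl(p, q) if p > q else 0.0

# symmetry under the 0 <-> 1 relabelling
for p, q in [(0.2, 0.7), (0.9, 0.4), (0.5, 0.5001)]:
    assert abs(kl(p, q) - kl(1 - p, 1 - q)) < 1e-12
    assert abs(kl_minus(p, q) - kl_plus(1 - p, 1 - q)) < 1e-12
```

The derivative identity d′(1 − x, 1 − y) = −d′(x, y) (derivative in the first argument) follows the same way by the chain rule.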
Combining inequalities (10), (11) and (12), (13), (14) with the initial decomposition of E[N_a(T)], and substituting f(x) := log(x) + 3 log log(x), yields in all cases

E[N_a(T)] ≤ L_T·w + 4ew / (3 log 3) + 4eT / (w log w)
  + Σ_{i=1}^{L_T} f(w) / d(λ_a(i), g_a(µ*(i)))
  + Σ_{i=1}^{L_T} √( d′(λ_a(i), g_a(µ*(i)))² / d(λ_a(i), g_a(µ*(i)))³ ) · √f(w)
  + Σ_{i=1}^{L_T} 2 · ( d′(λ_a(i), g_a(µ*(i))) / d(λ_a(i), g_a(µ*(i))) )² + 5

≤ (L_T + 4) · w + 4eT / (w log w)
  + Σ_{i=1}^{L_T} (log(w) + 3 log log(w)) / d(λ_a(i), g_a(µ*(i)))
  + Σ_{i=1}^{L_T} √( d′(λ_a(i), g_a(µ*(i)))² / d(λ_a(i), g_a(µ*(i)))³ ) · √(log(w) + 3 log log(w))
  + Σ_{i=1}^{L_T} 2 · ( d′(λ_a(i), g_a(µ*(i))) / d(λ_a(i), g_a(µ*(i))) )² + 5.  (15)

Minimizing the leading terms on the right-hand side of Eq. (15) by taking the first derivative with respect to w and equating it to 0 leads to solving for w in

w² log² w / (log w + 1) = 4eT / (L_T + 4),  which is approximated by  w² log(w²) = 8eT / (L_T + 4).

Here, w must be positive for the log to exist, so we can write w² = e^u for some u, and the equation becomes

u e^u = 8eT / (L_T + 4).

This equation has no solution in an elementary expression, although it can be expressed in terms of the Lambert W function (Corless et al. 1996). Opting for an elementary expression for w, we can choose w = √(4eT / (L_T + 4)), which leads to the following bound,
E[N_a(T)] ≤ √(4e(L_T + 4)T) + √(4e(L_T + 4)T) / log(√(4eT / (L_T + 4)))
  + Σ_{i=1}^{L_T} [ log(√(4eT / (L_T + 4))) + 3 log log(√(4eT / (L_T + 4))) ] / d(λ_a(i), g_a(µ*(i)))
  + Σ_{i=1}^{L_T} √( d′(λ_a(i), g_a(µ*(i)))² / d(λ_a(i), g_a(µ*(i)))³ ) · √( log(√(4eT / (L_T + 4))) + 3 log log(√(4eT / (L_T + 4))) )
  + Σ_{i=1}^{L_T} 2 · ( d′(λ_a(i), g_a(µ*(i))) / d(λ_a(i), g_a(µ*(i))) )² + 5.

Since the rewards are bounded in [0, 1] for Bernoulli non-stationary stochastic bandits, the regret is upper-bounded by

Õ( Σ_{a∈A} √(L_T·T) + Σ_{i=1}^{L_T} Σ_{a≠a*(i)} log(√(T/L_T)) / d(λ_a(i), g_a(µ*(i))) ).

Assuming that L_T = O(T^β) for some β ∈ [0, 1), the expected regret is upper bounded as Õ(T^{(1+β)/2}). In particular, if β = 0, i.e., the number of breakpoints is upper-bounded by L independently of T, then with w = √(4eT / (L + 4)), the upper bound is Õ(√(L·T)).
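The transcendental equation u e^u = 8eT/(L_T + 4) above defines u = W(8eT/(L_T + 4)) via the Lambert W function; a short Newton iteration recovers u numerically and lets one compare the exact minimizer w = e^{u/2} with the elementary choice w = √(4eT/(L_T + 4)). An illustrative computation (the parameter values are ours, not from the paper):

```python
import math

def lambert_w(c: float, iters: int = 60) -> float:
    """Solve u * e^u = c for u >= 0 (c > 0) by Newton's method."""
    u = math.log(1.0 + c)  # starting point above the root for c > 0
    for _ in range(iters):
        f = u * math.exp(u) - c
        fp = (u + 1.0) * math.exp(u)  # derivative of u * e^u
        u -= f / fp
    return u

T, L_T = 1_000_000, 10
c = 8 * math.e * T / (L_T + 4)
u = lambert_w(c)
w_exact = math.exp(u / 2)  # solves w^2 * log(w^2) = c exactly
w_elementary = math.sqrt(4 * math.e * T / (L_T + 4))
```

Both choices are of order √(T/L_T); the elementary one merely drops the logarithmic correction that the Lambert W solution carries.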
GdAyT4oBgHgl3EQfrflY/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

INAzT4oBgHgl3EQfHvu0/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dfb86326d58ff238b966ea1ec8b02ec6d8e4930c46a387f3c7528bf432b68f21
+size 1966125

J9FLT4oBgHgl3EQfKi8f/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fffa82521077281395774171eae4dd80ef192207c009b7aa5842983abf0f2509
+size 7602221
K9E1T4oBgHgl3EQfswUE/content/tmp_files/2301.03368v1.pdf.txt ADDED
@@ -0,0 +1,1404 @@
DRL-GAN: A Hybrid Approach for Binary and Multiclass Network Intrusion Detection

Caroline Strickland, Chandrika Saha, Muhammad Zakar, Sareh Soltani Nejad, Noshin Tasnim, Daniel Lizotte, Anwar Haque
Department of Computer Science, The University of Western Ontario, London, Canada
{cstrick4, csaha, mzakar, ssolta7, ntasnim3, dlizotte, ahaque32}@uwo.ca

Abstract—Our increasingly connected world continues to face an ever-growing number of network-based attacks. Intrusion detection systems (IDS) are an essential security technology for detecting these attacks. Although numerous machine learning-based IDS have been proposed for the detection of malicious network traffic, the majority have difficulty properly detecting and classifying the more uncommon attack types. In this paper, we implement a novel hybrid technique using synthetic data produced by a Generative Adversarial Network (GAN) to use as input for training a Deep Reinforcement Learning (DRL) model. Our GAN model is trained with the NSL-KDD dataset for four attack categories as well as normal network flow. Ultimately, our findings demonstrate that training the DRL on specific synthetic datasets can result in better performance in correctly classifying minority classes over training on the true imbalanced dataset.

Index Terms—Network Security, Network Intrusion Detection System, Deep Reinforcement Learning, Generative Adversarial Networks, NSL-KDD, Machine Learning, Artificial Intelligence.

I. INTRODUCTION

The increasing volume and sophistication of network-based attacks motivate the development of effective techniques and tools to prevent service disruption, unauthorized access, and the disclosure of sensitive information [1]. An Intrusion Detection System (IDS) is an important defence tool against sophisticated and increasing network attacks, but these systems, especially Machine Learning (ML) based systems, require large, reliable, and valid network traffic datasets to be effective. Although the majority of recently available datasets cover a range of network attack types and traffic patterns and include information about the attacking infrastructure, modern networks are increasingly diversified, such that existing datasets are often not enough to develop effective classification mechanisms. These datasets often suffer from a lack of traffic diversity and volume, or fail to cover the full scope of known attack types. To cope with these changes, we require a more dynamic dataset that will improve the ability of an IDS to detect intrusions. Using deep learning techniques such as Generative Adversarial Networks (GANs), we can fabricate additional data using existing datasets to increase the classification accuracy of an IDS, especially for rare attack categories.

Two methods of IDS are Signature-based Intrusion Detection Systems (SNIDS) and Anomaly-based Intrusion Detection Systems (ANIDS). The SNIDS approach is effective for known threats, as it looks for specific patterns (or 'signatures'), such as byte sequences in network traffic or known malicious instruction sequences used by malware [1]. Conversely, the ANIDS approach uses ML algorithms to analyze and monitor the network traffic in order to detect any suspicious activity, thus being an effective method for catching unknown attacks [2].

The emergence of deep learning and its integration with Reinforcement Learning (RL) has created a class of Deep Reinforcement Learning (DRL) methods that are able to detect the most recent and sophisticated types of network attacks. DRL combines artificial neural networks with a framework of RL that helps software agents (or 'learning entities') learn how to reach their goals. DRL combines function approximation and target optimization, mapping states and actions to the rewards they lead to [3]. This results in a 'policy' that our learning agents can follow to make the best decisions given the current state. To detect network attacks, DRL is used to train an agent such that, given a 'state' represented as a collection of feature values, it will take the best 'action' (which, in our case, acts as a classification of attack type) in order to recognize an attack.
75
Each network is different in that its behaviours and patterns evolve gradually. Naturally, vulnerabilities also evolve. IDS classification accuracy suffers as existing datasets gradually become out of date, invalid, and unreliable. Moreover, reliable data often cannot be shared due to privacy concerns. Existing publicly available datasets do not include all of the existing network attack types, let alone unknown vulnerabilities and attacks. To resolve this, we need more diverse and up-to-date datasets that properly reflect the characteristics of network intrusions in order to increase the performance of the IDS. Knowing this, we propose a SNIDS using DRL techniques. We use a collection of GAN models to generate varied datasets, then use DRL to implement an IDS, train the model on the GAN-generated datasets, and compare our results.
We use the open-source dataset NSL-KDD [4]. NSL-KDD is imbalanced, with significantly fewer attack samples than normal traffic (especially for Probe, U2R, and R2L attacks). Thus, we used GANs to generate synthetic data so that there is a more even class balance. We then trained the DRL model on both the untouched NSL-KDD dataset and the GAN-generated data from each of our unique models, for both binary and multiclass classification. Finally, we assess how training the DRL models on synthetic datasets compares in terms of IDS performance as well as individual class F1-scores.
arXiv:2301.03368v1 [cs.CR] 5 Jan 2023
Overall, the primary contributions of this paper include:
1) Using both conditional and unconditional CTGAN and CopulaGAN models to generate tabular data. This is useful for increasing the minority class samples in imbalanced datasets, as well as for providing large datasets for training ML models.
2) Combining GAN and DRL techniques for the purpose of network intrusion detection and increasing the precision and recall for classifying underrepresented class data. We propose a framework that trains a GAN model to produce synthetic data, and then passes that data to a DRL model that acts as an IDS and either alerts a user to an attack or classifies the network traffic as benign.
The remainder of this paper is organized as follows: Section II surveys related work on network intrusion detection and presents the motivation and novelty behind this work. Section III discusses methodology and details necessary for the implementation of our models. Section IV provides a comprehensive evaluation of the obtained results. Section V presents an interpretation of our findings. Finally, Section VI discusses directions for future work.
II. RELATED WORK
Hsu and Matsuoka [1] propose a DRL model for anomaly-based network intrusion detection. This approach treats the network traffic data as the RL environment state variables and the outcome of intrusion detection as the action. The correctness of the intrusion recognition result is used to determine the reward. The novelty of this work is that the DRL agent dynamically alternates between 'detection mode' and 'learning mode' based on whether the current performance of the system is below a predefined threshold. In learning mode, the performance is evaluated through the reward and the model is updated with the new traffic data to improve detection performance. In detection mode, a dummy reward is used to maintain operation and the true reward of the label is not calculated. The system was evaluated on pre-established benchmark datasets, NSL-KDD [4] and UNSW-NB15 [5], and consistently achieved over 90% in accuracy, recall, and precision performance metrics.
Alavizadeh et al. [6] also propose a DRL-based, continuously updating, self-learning NIDS. Their proposed Deep Q-Learning (DQL) model combines Q-learning based RL with a deep feed-forward neural network to detect network intrusions. The model uses an ongoing trial-and-error auto-learning approach to improve its detection capabilities for different types of network intrusions. The model was evaluated on the NSL-KDD [4] dataset and outperformed some other ML techniques with 78% classification accuracy for the intrusion classes. Like the work in [1], this work is promising due to its adaptive-learning capabilities, making it better suited for securing networks against the increasingly sophisticated cyber-attacks seen today.
Benaddi et al. [7] developed a DRL-based IDS (DRL-IDS) for Wireless Sensor Networks (WSNs) [8] and the Internet of Things (IoT) [9]. Networking architectures like WSNs and IoT are seeing increasing adoption in many areas such as healthcare, business, and smart cities, and cyber-threats are the primary challenge for these networks [10]. They highlight that these networks are vulnerable to intrusions due to security flaws commonly found in IoT and WSN devices, zero-day vulnerabilities, and the openness of these networks to a large number of users. The DRL-IDS model improves intrusion detection performance while monitoring real-time network traffic. The model was evaluated against standard RL and K-Nearest Neighbours (KNN) based approaches using the NSL-KDD [4] dataset and performed better in terms of accuracy and detection rate, with fewer false negatives.
Lin et al. [11] propose IDSGAN, a framework that uses GANs to generate adversarial malicious network traffic to deceive IDS. Their goal was to leverage GANs to improve IDS by exposing them to new, more combative and adversarial attack methods and types. This system modeled the black-box analogy of IDS from the perspective of an attacker, who would generally not know the internal details of the detection system. A generator transformed known malicious traffic records into adversarial ones, and a discriminator classified the records to learn about the originally unknown detection system. The authors demonstrated the validity of their system by only modifying the nonfunctional features of the records, such that the modified records would still classify as an intrusion and not junk traffic. They evaluated their system using the standard NSL-KDD [4] dataset on multiple different detection models, including Naive Bayes, Random Forest, and multilayer perceptron classifiers. IDSGAN achieved excellent results: the detection rate of the DoS attack type dropped from approximately 80% with normal records to less than 1% with modified, adversarial records.
Ring et al. [12] used GANs to generate realistic flow-based network traffic data. They highlight that the ability of GANs to only process continuous attributes is a key challenge in using GANs to generate network traffic, since network traffic data ultimately contains categorical features like IP addresses and ports. They propose three preprocessing techniques for converting categorical values in flow-based network traffic data into continuous values: (1) simply treat features such as IP addresses and ports as numerical values; (2) create binary features from the categorical features; (3) use IP2Vec [13] to represent the categorical features as vectors. The authors evaluated these techniques on the CIDDS-001 [14] dataset and found that techniques (2) and (3) are effective at generating high-quality flow-based network traffic data. Technique (1), however, is not well suited for this task, meaning that straightforward numeric interpretation of categorical features should be avoided with GANs.
Overall, there have been a handful of studies focused on using DRL to classify network traffic as normal or intrusion, as well as several that have used GANs to generate network traffic data. However, no study has combined these two ML approaches and evaluated the viability and effectiveness of this combination, both in detecting and classifying network traffic and in increasing the precision and recall performance for classifying previously underrepresented classes. Our proposed solution bridges this gap and improves the current state of knowledge in this field.
TABLE I
NSL-KDD DATASET FEATURES

F#    Feature               F#    Feature
F1    Duration              F22   Is guest login
F2    Protocol type         F23   Count
F3    Service               F24   Srv count
F4    Flag                  F25   Serror rate
F5    Src bytes             F26   Srv serror rate
F6    Dst bytes             F27   Rerror rate
F7    Land                  F28   Srv rerror rate
F8    Wrong fragment        F29   Same srv rate
F9    Urgent                F30   Diff srv rate
F10   Hot                   F31   Srv diff host rate
F11   Num failed logins     F32   Dst host count
F12   Logged in             F33   Dst host srv count
F13   Num compromised       F34   Dst host same srv rate
F14   Root shell            F35   Dst host diff srv rate
F15   Su attempted          F36   Dst host same src port rate
F16   Num root              F37   Dst host srv diff host rate
F17   Num file creations    F38   Dst host serror rate
F18   Num shells            F39   Dst host srv serror rate
F19   Num access files      F40   Dst host rerror rate
F20*  Num outbound cmds     F41   Dst host srv rerror rate
F21   Is host login         F42   Class label

* Removed during data preprocessing.
III. METHODOLOGY

A. NSL-KDD Dataset

NSL-KDD is an updated version of the KDD'99 dataset [4]. Basic processing has been done, such as the removal of redundant records, preventing classifiers from becoming biased towards more frequent records. The NSL-KDD dataset has been very popular in studies on IDS, in a sense becoming the de facto standard. It contains information which can help to build host-based and network-based intrusion detection models to ensure network security in a variety of systems.
The training and test sets contain 125 973 and 22 544 records, respectively. This includes 42 features; however, we remove 'Num outbound cmds' as all records contain 0, so we are left with 41 features: 9 basic features, 12 content features for the connection, 9 temporal features calculated over two-second time windows, 10 statistical network traffic features, and the class label. Table I lists the features present in the dataset. The training set contains 22 attack types and the test set contains 37 attack types. The 15 attack types not included in the training set make this dataset excellent for modelling unknown attacks. We opt to use the common 5-class classification of network traffic records: normal, DoS, Probe, R2L, and U2R. Table II describes these 5 classes in further detail. The class ID refers to the numerical mapping used by the DRL and GAN models.
B. Machine Learning Performance Evaluation

We used the accuracy and F1-score (which combines precision and recall) metrics to evaluate the performance of our DRL model and other ML algorithms. While the accuracy score only measures the percentage of correctly classified samples, this selection of performance metrics also allows us to evaluate the percentage of samples that were incorrectly classified.
TABLE II
NSL-KDD DATASET RECORD CLASSES

ID  Class   Symbol  # of Records  Definition
0   Normal  N       77 054        Normal network traffic record
1   DoS     D       53 387        Denial of Service attack to prevent requests from intended users from being fulfilled
2   Probe   P       14 077        Probing attack to gather information such as vulnerabilities about the target machine or network
3   R2L     R       3880          An attacker tries to gain local access by sending packets to a remote machine
4   U2R     U       119           An attacker with normal access tries to gain access to the root by exploiting system vulnerabilities
Fig. 1. Confusion matrix for NIDS performance evaluation.

This is especially important for NIDS, as the accuracy metric alone is not enough to evaluate imbalanced datasets such as network traffic data, which generally include significantly more normal traffic. These performance metrics are derived from the True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) values. Fig. 1 presents the confusion matrix used by our evaluation method.
1) Accuracy: Accuracy measures the number of correct predictions out of the total predictions made by the model. In this case, accuracy measures the model's ability to correctly identify normal and attack traffic records. Equation 1 formalizes the accuracy performance metric.

Accuracy = (TP + TN) / (TP + FP + TN + FN)    (1)
2) Precision: Precision measures the number of correct positive predictions out of the total number of positive predictions. In this case, precision measures the model's degree of correctness in predicting attack records over the total number of attacks predicted [1], [15], [16]. Equation 2 formalizes the precision performance metric.

Precision = TP / (TP + FP)    (2)
3) Recall: Recall measures the number of correct positive predictions out of the total number of positive instances in the dataset. In this case, recall measures the model's ability to correctly identify attack traffic records. From this definition, recall is also referred to as the true positive rate, detection rate, or sensitivity. Equation 3 formalizes the recall performance metric.

Recall = TP / (TP + FN)    (3)
4) F1-Score: F1-score is the harmonic mean of the precision and recall values, essentially a combined measure of the two performance metrics. F1-score quantifies how discriminative the model is [17] and acts as a good indicator of performance, since a decrease in either precision or recall results in a significant decrease in the F1-score. In addition, for multiclass classification we present both the unweighted and weighted F1-scores. The weighted F1-score accounts for label imbalance by considering the number of instances of each label when calculating the average F1-score. Equation 4 shows how the F1-score is calculated.

F1-Score = 2 · (Precision · Recall) / (Precision + Recall) = TP / (TP + (FP + FN) / 2)    (4)
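The four metrics above follow directly from the confusion-matrix counts. A minimal sketch in plain Python (function names are illustrative, not part of our implementation) mirroring Equations 1 to 4:

```python
# Equations 1-4 computed from confusion-matrix counts (TP, TN, FP, FN).
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall, simplified as in Equation 4.
    return tp / (tp + 0.5 * (fp + fn))
```

For example, TP = 80, TN = 90, FP = 10, FN = 20 gives accuracy 0.85 and an F1-score of about 0.842.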
C. Statistical Evaluation of Synthetic Data

To evaluate the synthetic data generated by the GAN models against the real data they were trained on, we used statistical metrics that compare the columns of the synthetic tables against those in the real tables. These statistical metrics are as follows:

1) CSTest: The CSTest compares columns with discrete values using the Chi-squared test to compare their distributions. The output of the test is an average of the CSTest p-values for each of the columns, which ultimately quantifies the probability that the compared columns were sampled from the same distribution.
2) KSTest: The KSTest compares columns with continuous values using the two-sample Kolmogorov-Smirnov test on the empirical Cumulative Distribution Function (CDF) to compare their distributions. The output of the test is an average of 1 minus the KSTest D statistic for each of the columns, where the D statistic is the maximum distance between the expected and observed CDF values.

3) KSTestExtended: The KSTestExtended is an extension of the KSTest that converts all columns to numerical values using a hyper transformer and then applies the regular KSTest.
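A per-column sketch of the two underlying tests, using SciPy (this mirrors the idea behind the CSTest and KSTest described above; the function names are illustrative, and the chi-squared sketch assumes every synthetic category also appears in the real column):

```python
from collections import Counter
from scipy import stats

def cstest(real_col, synth_col):
    """Chi-squared p-value comparing the distributions of two discrete columns."""
    categories = sorted(set(real_col) | set(synth_col))
    real_counts = Counter(real_col)
    synth_counts = Counter(synth_col)
    f_obs = [synth_counts[c] for c in categories]
    # Expected counts: real-column frequencies scaled to the synthetic sample size.
    n = len(synth_col)
    f_exp = [real_counts[c] / len(real_col) * n for c in categories]
    return stats.chisquare(f_obs, f_exp).pvalue

def kstest_score(real_col, synth_col):
    """1 minus the D statistic of the two-sample Kolmogorov-Smirnov test."""
    d, _ = stats.ks_2samp(real_col, synth_col)
    return 1.0 - d
```

Identical distributions give a p-value near 1 for the CSTest and a score near 1 for the KSTest; the table-level metrics average these per-column results.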
D. Detection-based Evaluation of Synthetic Data

Detection metrics use ML models to determine how distinguishable the synthetic data is from the real data. To achieve this, both the synthetic and real tables are shuffled and a flag indicating whether each record is synthetic or not is added. Next, cross-validation is used with a selected ML model that predicts the flag, outputting 1 minus the average ROC AUC over all the cross-validation splits. Because the ROC AUC measures the separability of the classes from the model, a high detection metric score means that the model is unable to easily distinguish the synthetic records from the real ones.
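The procedure above can be sketched with scikit-learn (an assumed re-implementation for illustration, not the exact library code we used):

```python
# Detection metric sketch: flag record origin, cross-validate a classifier
# that predicts the flag, and report 1 - mean ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def detection_score(real_X, synth_X, cv=3):
    X = np.vstack([real_X, synth_X])
    y = np.r_[np.zeros(len(real_X)), np.ones(len(synth_X))]  # 1 = synthetic
    aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                           scoring="roc_auc", cv=cv)
    return 1.0 - aucs.mean()  # closer to 1 => harder to tell apart
```

Synthetic data drawn from the same distribution as the real data scores near 0.5, while easily separable data scores near 0.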
Fig. 2. Architecture of Generative Adversarial Networks.
E. Generative Adversarial Network Models

Goodfellow et al. [18] first proposed the idea of a GAN in 2014 as an unsupervised learning method that generates synthetic data from an input of real data. GANs are used to generate realistic synthetic data, usually because obtaining more real data can be difficult, time-consuming, and costly. GANs use two independent models, a generator and a discriminator. By detecting patterns or similarities in the given input data, the generator processes input data and produces more data. The discriminator is a classifier which determines the difference between the real data and the generated data. It produces a probability between 0 and 1 to define whether an instance belongs to the real data (closer to 1) or to the generated data (closer to 0). Fig. 2 highlights the overall workflow of GANs.
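The adversarial objective behind this workflow can be made concrete with a small numeric sketch (an illustration of the standard GAN losses, not our training code): the discriminator output D is the probability described above, and the two losses pull it in opposite directions.

```python
# Standard (non-saturating) GAN losses in terms of discriminator outputs.
import math

def d_loss(d_real, d_fake):
    # Discriminator wants d_real -> 1 (real scored real)
    # and d_fake -> 0 (generated scored fake).
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    # Generator wants the discriminator to score its samples as real.
    return -math.log(d_fake)
```

A confident, correct discriminator (e.g. d_real = 0.9, d_fake = 0.1) has a lower loss than an undecided one, while the generator's loss drops as it fools the discriminator.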
F. Deep Reinforcement Learning Model

DRL is a subfield of ML that combines both RL and deep learning. RL considers the problem of an agent learning to make decisions through trial and error, while DRL incorporates deep learning, allowing agents to make decisions from unstructured input data without manual engineering of the state space.

RL problems involve an agent learning how to map situations to actions in order to maximize a numerical reward signal. It employs five key concepts:
• Environment: The physical world that the agent operates within.
• State: The agent's belief of a configuration of the environment.
• Reward: Numerical feedback from the environment.
• Policy: A mapping from the agent's state to actions.
• Value: Expected future reward an agent would receive by taking an action in a certain state.

Simply put, RL is the process of running an agent through sequences of state-action pairs, observing the rewards that result, and using those rewards to formulate an optimal policy over time.
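This state-action-reward loop can be sketched with tabular Q-learning on a toy two-state environment (the environment and all names here are illustrative, not NSL-KDD): the agent stores one value per state-action pair and updates it from observed rewards.

```python
# Toy tabular Q-learning: two states, two actions; only action 1 taken in
# state 1 earns a reward, so the optimal policy is "always take action 1".
import random

def q_learning(steps=5000, alpha=0.5, gamma=0.9, eps=0.1):
    random.seed(0)  # deterministic toy run
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    s = 0
    for _ in range(steps):
        # Epsilon-greedy action selection from the Q-table.
        if random.random() < eps:
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda x: Q[(s, x)])
        # Toy dynamics: the chosen action becomes the next state.
        r = 1.0 if (s, a) == (1, 1) else 0.0
        s_next = a
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s_next, 0)], Q[(s_next, 1)]) - Q[(s, a)])
        s = s_next
    return Q
```

After training, the table prefers action 1 in both states, which is exactly the optimal policy for this toy environment.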
For RL problems with small discrete state-action spaces, the state-action mapping can be stored in a table to approximate the mapping within a reasonable error value. However, for problems with large state-action spaces, it is difficult to store such large amounts of data and, therefore, traditional RL methods suffer in terms of memory and performance. To overcome this, we can incorporate DRL, which is a combination of RL and deep neural networks. A neural network can be used to approximate a value or policy function. Essentially, neural networks learn to map states to values rather than using a lookup table. Thus, a DRL model can independently learn to establish a successful function for gaining maximum long-term rewards in RL problems with large state-action spaces.

Fig. 3. Architecture of Deep Reinforcement Learning.
We have defined some characteristics within our DRL model in order for it to act as both a binary and multiclass classifier. For binary classification, we have defined our action space as follows:
• 0 : No Alert (benign)
• 1 : Alert (attack)

And the rewards for this model are defined by:
• +1 if the agent correctly alerts to an attack.
• 0 if the agent does not raise an alert when one is not needed.
• -1 if the agent does not raise an alert when there is an attack.
• -1 if the agent raises an alert when one is not needed.
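The binary reward scheme above can be written as a small function (a sketch; labels and action codes follow the mapping in the list, with 0 = benign and 1 = attack):

```python
# Binary reward: action 0 = no alert, action 1 = alert; label uses the
# same encoding (0 = benign traffic, 1 = attack).
def binary_reward(action, label):
    if action == 1 and label == 1:
        return 1    # correctly alerted to an attack
    if action == 0 and label == 0:
        return 0    # correctly stayed quiet on benign traffic
    return -1       # missed attack or false alarm
```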
For multiclass classification, we have defined our action space, also seen in Fig. 5, as follows:
• 0 : No Alert (benign)
• 1 : DoS
• 2 : Probe
• 3 : R2L
• 4 : U2R

And the rewards for this model are defined by:
• +1 if the agent correctly alerts to the correct type of attack.
• 0 if the agent does not raise an alert when one is not needed.
• -1 if the agent does not raise an alert when there is an attack.
• -1 if the agent raises an alert when one is not needed.
• -1 if the agent raises an alert to the incorrect type of attack.

In terms of network security, alerting on benign network traffic is typically safer than failing to alert on an actual attack. Thus, we might consider that the reward for the latter two cases in the above enumeration should be greater than -1. However, this reward function was selected because identifying the wrong type of attack would lead to misdirection of resources, which we want to avoid.
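The multiclass reward collapses to a compact rule (a sketch using the class IDs defined above, 0 = benign and 1 through 4 = DoS, Probe, R2L, U2R):

```python
# Multiclass reward: every mismatch (missed attack, false alarm, or wrong
# attack type) is penalized equally at -1, per the enumeration above.
def multiclass_reward(action, label):
    if action == label:
        return 0 if label == 0 else 1  # quiet on benign vs. correct alert
    return -1
```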
Finally, the state space for both binary and multiclass classification is a collection of 41 features, both numerical and nominal, existing within the NSL-KDD dataset. Thus, we have a fairly complex and detailed state space. A visual of this environment can be seen in Fig. 5.

Fig. 4. Distribution of NSL-KDD dataset by record classes (Train: Normal 67 343, DoS 45 927, Probe 11 656, R2L 995, U2R 52; Test: Normal 9711, DoS 7460, Probe 2885, R2L 2421, U2R 67).

Fig. 5. The action and state spaces of the proposed deep reinforcement learning model.
In addition, we have assigned two distinct conditions for terminating an episode. An episode will be terminated if 1) it reaches a set timestep threshold, or 2) an attack is issued and no alert has been made.
IV. RESULTS

The following subsections describe the experimental results from our proposed GAN and DRL models, followed by a comparative analysis of our proposed model against other state-of-the-art ML methods.
A. GAN Models

For our experiments, we trained two GAN models, CTGAN [19] and CopulaGAN [20], using the implementations provided by the SDV open-source library [21], following work by [22], which showed promising results for generating network traffic data using models from this library. These models were trained on the NSL-KDD training data for 100 epochs with a batch size of 500. Both models used 5 discriminator steps, matching WGAN [23], an extended version of the vanilla GAN.
TABLE III
STATISTICAL METRICS FOR SYNTHETIC DATA

Synthetic Data            CSTest   KSTest   KSTestExtended
CTGAN                     0.9971   0.9156   0.9181
CTGAN (Conditional)       0.7468   0.8655   0.8571
CopulaGAN                 0.9988   0.9550   0.9574
CopulaGAN (Conditional)   0.6982   0.9000   0.8864
TABLE IV
DISCERNMENT RESULTS FOR SYNTHETIC DATA USING LOGISTIC REGRESSION

Synthetic Data            Discernment Metric
CTGAN                     0.7579
CTGAN (Conditional)       0.4139
CopulaGAN                 0.6862
CopulaGAN (Conditional)   0.3948
For the other hyperparameters, we opted to use the defaults provided by the SDV library.
The trained CTGAN and CopulaGAN models were each used to generate two synthetic datasets:
1) A dataset containing 200 000 records generated through regular sampling without conditions. As expected, these datasets contained records that closely matched the imbalanced class distribution of the original NSL-KDD dataset.
2) A dataset containing 20 000 records per class, generated using conditional sampling through rejection. These datasets were used to explore the efficacy of using GANs to generate a balanced distribution from a highly imbalanced training distribution.
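Conditional sampling through rejection can be sketched generically (an illustration of the idea, not the SDV implementation; the sampler interface and the "class" key are assumed): keep drawing unconditional batches and retain only rows of the requested class until the quota is met.

```python
# Rejection-based conditional sampling from an unconditional row sampler.
def sample_class(sampler, target_class, n_needed, batch=1000):
    kept = []
    while len(kept) < n_needed:
        kept.extend(row for row in sampler(batch)
                    if row["class"] == target_class)
    return kept[:n_needed]
```

Note that for rare classes this is expensive: most drawn rows are discarded, which is one reason generating 20 000 U2R records is much slower than 20 000 normal records.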
The statistical metric results showcased in Table III indicate that both CTGAN and CopulaGAN model the discrete and continuous features of the NSL-KDD dataset effectively. As indicated by the KSTest and KSTestExtended, CopulaGAN models continuous features better than CTGAN, and it maintains parity for discrete features as indicated by the CSTest. Table IV highlights the results for a logistic regression classifier used to evaluate detection performance on the synthetic data. Altogether, the classifier found it challenging to distinguish the synthetic records from the real ones, which indicates that the GANs are able to capture aspects of the true dataset. Table V and Table VI showcase the performance of ML models when trained on various real and synthetic datasets. Across the board, there is comparable performance between the original real NSL-KDD dataset and the CTGAN and CopulaGAN synthetic datasets. Thus, there is promise in using synthetic data in place of real data.
B. DRL Model

The DRL models were implemented using both OpenAI Gym [24] and TensorFlow [25]. Training of the model was done in two distinct stages to investigate the variation in performance: binary classification and multiclass classification.
Fig. 6. Results measuring the accuracy of binary classification after training the DRL model on both the original NSL-KDD dataset and each of the synthetic GAN datasets.
1) Binary Classification: We begin with binary classification, using an action space of two ('alert' or 'no alert'). While binary classification offers the user less knowledge about the specific attack type, it should perform the basic task of an IDS: alerting the user to an attack with high accuracy.
Initially, we trained the DRL model on the NSL-KDD training set, described in detail above. We did this to create a baseline against which to compare how well our synthetic GAN-generated data performed. Prior to training our model, we converted all class labels using a binary mapping: if the class was originally 'normal', we assigned it a value of '0'; otherwise, it was assigned a value of '1', implying that the data point was an attack of some sort.
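This mapping is a one-liner (a sketch; the label strings follow NSL-KDD conventions, where every non-'normal' class label denotes an attack):

```python
# Collapse the NSL-KDD class label to a binary attack flag.
def to_binary_label(class_label):
    return 0 if class_label == "normal" else 1
```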
For each model, we execute proximal policy optimization (PPO2), a policy-gradient algorithm that directly optimizes the expected reward by estimating the gradient of the policy from the trajectories taken by the agent. We applied a custom multi-layer perceptron, a class of feedforward neural network [26], with three layers of sizes 128, 64, and 32. In addition, each model used a rectified linear unit (ReLU) activation function.
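The forward pass of this policy network can be sketched in NumPy (an illustration of the 128-64-32 ReLU architecture only; the weights here are random placeholders, not trained PPO2 parameters):

```python
# Untrained MLP forward pass: 41-feature state -> action logits through
# hidden layers of sizes 128, 64, 32 with ReLU activations.
import numpy as np

def mlp_policy_logits(state, n_actions=2, seed=0):
    rng = np.random.default_rng(seed)
    sizes = [len(state), 128, 64, 32, n_actions]
    x = np.asarray(state, dtype=float)
    for i, (m, n) in enumerate(zip(sizes[:-1], sizes[1:])):
        W = rng.normal(scale=1.0 / np.sqrt(m), size=(m, n))
        x = x @ W
        if i < len(sizes) - 2:       # ReLU on hidden layers only
            x = np.maximum(x, 0.0)
    return x
```

For multiclass classification the same architecture is used with n_actions=5 instead of 2.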
Training on the NSL-KDD training dataset for 100 000 timesteps resulted in an average accuracy of 89.5% and an F1-score of 0.906 on the test dataset. We then proceeded to train the DRL model on each of the GAN-generated datasets one-by-one and evaluate them individually on the NSL-KDD test dataset. The detailed results of these experiments can be seen in Table V, and viewed in terms of progressive performance for average accuracy in Fig. 6.
Training on CTGAN synthetic data performs the best after the NSL-KDD-trained model, with 85.7% accuracy and a 0.869 F1-score. Training on CopulaGAN synthetic data trails close behind with 82.9% accuracy and a 0.838 F1-score. The conditional variations of CopulaGAN and CTGAN perform significantly worse than the three other datasets, reaching their peaks of 70% and 66% respectively almost immediately and then dropping to just below 50%.
2) Multiclass Classification: We then trained the DRL models to perform multiclass classification. As in binary classification, we are still detecting whether there is an attack
TABLE V
MACHINE LEARNING PERFORMANCE FOR BINARY CLASSIFICATION

                          Decision Tree     AdaBoost Classifier  Logistic Regression Classifier  MLP Classifier    Proposed DRL
Training Data             Accuracy  F1      Accuracy  F1         Accuracy  F1                    Accuracy  F1      Accuracy  F1
NSL-KDD                   0.8407    0.8414  0.8221    0.8270     0.8700    0.8802                0.8054    0.8080  0.8951    0.9064
CTGAN                     0.8074    0.8112  0.8404    0.8486     0.8610    0.8710                0.8461    0.8545  0.8572    0.8687
CTGAN (Conditional)       0.8801    0.8927  0.9086    0.9226     0.8740    0.8853                0.9077    0.9220  0.4662    0.1172
CopulaGAN                 0.7735    0.7607  0.8259    0.8246     0.8163    0.8201                0.7918    0.7831  0.8294    0.8375
CopulaGAN (Conditional)   0.8287    0.8333  0.8743    0.8881     0.8256    0.8311                0.8947    0.9074  0.4901    0.1893
TABLE VI
MACHINE LEARNING PERFORMANCE FOR MULTI-LABEL CLASSIFICATION
Training Data            | Decision Tree                       | MLP Classifier                      | Proposed DRL
                         | Accuracy  F1      F1 (weighted)     | Accuracy  F1      F1 (weighted)     | Accuracy  F1      F1 (weighted)
NSL-KDD                  | 0.7685    0.5585  0.7338            | 0.7856    0.6302  0.7556            | 0.7300    0.4880  0.6891
CTGAN                    | 0.7475    0.5297  0.7336            | 0.7765    0.6467  0.7572            | 0.4247    0.3033  0.4503
CTGAN (Conditional)      | 0.6200    0.4475  0.6525            | 0.7442    0.5643  0.7791            | 0.5520    0.3938  0.4533
CopulaGAN                | 0.7031    0.4165  0.6618            | 0.7374    0.4606  0.6863            | 0.7023    0.3967  0.6345
CopulaGAN (Conditional)  | 0.6116    0.3810  0.6215            | 0.7088    0.4401  0.6883            | 0.4839    0.2716  0.4049
TABLE VII
CLASS-BASED F1 SCORES FOR MULTI-LABEL CLASSIFICATION
Dataset                  | Normal  | DoS     | Probe   | R2L     | U2R
NSL-KDD                  | 0.7785  | 0.8072  | 0.4752  | 0.1490  | 0.0
CTGAN                    | 0.5670  | 0.4618  | 0.3858  | 0.0831  | 0.0192
CTGAN (Conditional)      | 0.7662  | 0.0     | 0.4589  | 0.5725  | 0.1716
CopulaGAN                | 0.8139  | 0.7101  | 0.4593  | 0.0     | 0.0
CopulaGAN (Conditional)  | 0.8039  | 0.0     | 0.2201  | 0.2097  | 0.0512
or not; however, we now attempt to classify which type of attack is taking place. Instead of ‘0’ or ‘1’, our action space consists of 0, 1, 2, 3, and 4. As stated previously, 0 maps to ‘benign’, whereas 1, 2, 3, and 4 map to DoS, Probe, R2L, and U2R respectively. As our action space has grown compared to binary classification, the problem becomes significantly larger and more challenging.
As in binary classification, we used a ReLU activation function; however, for the conditional versions of both CTGAN and CopulaGAN we used a Sigmoid activation function, as we found that this significantly increases performance on test data. For each model, we again used a custom multi-layer perceptron of three layers with sizes 128, 64, and 32.
Again, we first analyzed the performance of our model after training on the real NSL-KDD dataset in order to create a benchmark. As seen in Table VI, our DRL model achieved 73% accuracy and a 68.9% weighted F1-score.
We then trained the DRL model on the four GAN-generated synthetic datasets discussed previously. The most promising results came from training the model on CopulaGAN data: the model reaches an accuracy of 70.2% and a weighted F1-score of 63%, just a 2.7% drop in accuracy from training on the true NSL-KDD data. Training the DRL model on the remaining three synthetic datasets underperforms compared to both the decision tree and the MLP classifier.
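To make the multiclass action space concrete, the sketch below is an illustrative stand-in for the policy network (random, hypothetical weights, not the trained PPO2 agent): a 41-input MLP with the layer sizes used here (128, 64, 32), ReLU hidden activations, and five output logits whose argmax is the predicted class.

```python
import numpy as np

# Action-to-class mapping described in the text.
CLASSES = {0: "benign", 1: "DoS", 2: "Probe", 3: "R2L", 4: "U2R"}

rng = np.random.default_rng(0)
# Hypothetical weights: 41 NSL-KDD features -> 128 -> 64 -> 32 -> 5 actions.
sizes = [41, 128, 64, 32, 5]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]

def act(obs):
    """Forward pass with ReLU hidden layers; returns the greedy action."""
    h = obs
    for w in weights[:-1]:
        h = np.maximum(h @ w, 0.0)  # ReLU
    logits = h @ weights[-1]        # linear output layer
    return int(np.argmax(logits))

obs = rng.normal(size=41)           # placeholder observation
a = act(obs)
print(a, CLASSES[a])
```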
As discussed previously, a high F1-score requires both precision and recall to be high. When we train on imbalanced datasets, the F1-scores on minority classes are typically quite low, as the ML model does a poor job of recognizing and properly classifying that test data. Table VII lists the F1-scores for each individual class for each of our training sets. Since NSL-KDD contains extremely few records for both R2L and U2R, the F1-scores for these classes are also quite low, at 0.1490 and 0.0 respectively.
One of the major goals of our work was to determine whether, by generating synthetic GAN data, we could raise the F1-scores (more specifically, the precision and recall) of the minority classes in our imbalanced dataset. In Table VII, we can see that training our DRL model with data generated from conditional CTGAN and conditional CopulaGAN improved the F1-scores for both R2L and U2R, just as we would expect if the true dataset naturally contained more records of these two class types. Training the DRL model on synthetic data from conditional CTGAN raised the F1-scores for R2L and U2R to 0.573 and 0.172 respectively, and training on synthetic data from conditional CopulaGAN raised them to 0.210 and 0.051 respectively. This demonstrates that using GAN models to generate synthetic data for a minority class, artificially inflating the training set to achieve better performance on underrepresented classes, is a viable option.
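The per-class scores in Table VII correspond to computing F1 separately for each label rather than averaging. A minimal sketch with placeholder labels (not the actual predictions) using scikit-learn:

```python
# Placeholder labels only; 0=Normal, 1=DoS, 2=Probe, 3=R2L, 4=U2R.
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 2, 3, 4, 0, 1, 2]
y_pred = [0, 0, 1, 2, 2, 0, 4, 0, 1, 2]

# One F1 value per class; zero_division=0 handles classes never predicted.
per_class = f1_score(y_true, y_pred, labels=[0, 1, 2, 3, 4],
                     average=None, zero_division=0)
weighted = f1_score(y_true, y_pred, average="weighted")  # weighted by support
for label, score in zip(["Normal", "DoS", "Probe", "R2L", "U2R"], per_class):
    print(f"{label}: {score:.3f}")
print(f"weighted F1: {weighted:.3f}")
```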
Fig. 7. Results measuring the F1-scores of multiclass classification (panels a-e) as well as the averages (panel f) after training the DRL model for 2 million timesteps on both NSL-KDD and each synthetic dataset. [Figure: panels (a) NSL-KDD, (b) CTGAN, (c) CTGAN (Conditional), (d) CopulaGAN, and (e) CopulaGAN (Conditional) each plot per-class F1-scores (Normal, DoS, Probe, R2L, U2R) against training timesteps (0 to 2e6); panel (f) plots the average accuracy for all five training datasets.]
V. CONCLUSION
In this paper, we have proposed an SNIDS that is able to perform binary and multiclass classification on network traffic data. We used DRL to implement this IDS. The model was trained using the NSL-KDD dataset, allowing it to detect a range of attack types on a network. To enhance the learning capabilities of our proposed model, GANs were used to fabricate training data. Our results demonstrate that this system is able to interact with the network and identify attack classes with competitive accuracy. We also show that generating synthetic data for underrepresented classes can improve the precision and recall within these classes, thus acting as a solution for imbalanced datasets.
For binary classification, we obtained an 89.5% accuracy after training on the NSL-KDD dataset. We consider this our baseline model. When trained on the four synthetic datasets, data generated from unconditional CTGAN produced an accuracy of 85.7%, the closest competition to the baseline model. For multiclass classification, we obtained a 73.0% accuracy after training on the NSL-KDD dataset. When trained on the four synthetic datasets, data generated from CopulaGAN produced an accuracy of 70.2%, the closest competition to the baseline model. Thus, our GAN models clearly generate data realistic enough to build a competitive IDS.
Further, both Table VII and Fig. 7 demonstrate an increase in F1-scores for minority classes on the IDS trained using GAN-generated data. Thus, while our overall accuracy decreased, we obtain better precision and recall for the classes without sufficient data in the NSL-KDD dataset. This points to a solution for other ML models trying to learn from imbalanced datasets.
VI. FUTURE WORK
While our work demonstrated competitive classifiers and an increase in individual F1-scores for minority classes, there is still room for improvement.
When training our GAN models, we used 41 features from the NSL-KDD dataset as input. There are two major changes that we aim to implement in future work. First, we will pass our input dataset through a pipeline of feature analysis methods, including (but not limited to) Pearson correlation, recursive feature elimination, and Lasso, with the aim of reducing our feature space. This has the potential to increase the quality of our generated dataset, thus increasing the evaluation metric scores for our DRL model. Second, we will supplement the NSL-KDD dataset with data from under-represented classes in order to balance the dataset. Our work demonstrates a notable increase in F1-score when a class has a significant amount of data given as input to the GAN model. We plan to explore this idea and probe the limits of our performance when the GAN is trained on significant sample sizes from each class, rather than just a small subset.
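As a rough sketch of this planned feature-analysis pipeline (hypothetical thresholds and synthetic stand-in data, since the real 41-feature NSL-KDD input is not reproduced here), a Pearson correlation filter and Lasso can each shrink the feature space:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 41))  # stand-in for the 41 NSL-KDD features
# Synthetic target that truly depends only on features 0, 3, and 7.
y = X[:, 0] * 2.0 + X[:, 3] - X[:, 7] + rng.normal(scale=0.1, size=500)

# 1) Pearson correlation filter: keep features whose |corr(x_i, y)| exceeds
#    a (hypothetical) threshold.
corr = np.array([np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])])
kept_by_corr = np.flatnonzero(np.abs(corr) > 0.2)

# 2) Lasso: features with nonzero coefficients survive the L1 penalty.
lasso = Lasso(alpha=0.05).fit(X, y)
kept_by_lasso = np.flatnonzero(lasso.coef_ != 0)

print("correlation keeps:", kept_by_corr)
print("lasso keeps:", kept_by_lasso)
```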
We also plan to explore GAN models that are trained only on the minority classes of our true dataset. This generated data could then be merged with the true dataset to allow for heightened overall performance of the IDS, as we are synthetically creating balance.
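Mechanically, merging GAN-generated minority-class rows into the real training set is straightforward; a hedged pandas sketch with placeholder columns and values:

```python
import pandas as pd

# Placeholder frames: `real` stands in for NSL-KDD, `synthetic` for
# GAN-generated rows sharing the same schema.
real = pd.DataFrame({"duration": [0, 3, 1], "label": ["benign", "DoS", "R2L"]})
synthetic = pd.DataFrame({"duration": [2, 5], "label": ["R2L", "U2R"]})

# Append only the minority-class rows, then shuffle before training.
minority = synthetic[synthetic["label"].isin(["R2L", "U2R"])]
balanced = (
    pd.concat([real, minority], ignore_index=True)
    .sample(frac=1.0, random_state=0)
    .reset_index(drop=True)
)
print(balanced["label"].value_counts())
```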
Finally, we plan to explore the performance of training both DQN, a value-iteration based method [27], and A3C [28], a combined value-iteration and policy-gradient method, on GAN-generated data to see how they compare with our PPO2 model. Both DQN and A3C are common DRL approaches and have the potential to surpass the performance of our current model.
REFERENCES
[1] H. Ying-Feng and M. Morito, “A deep reinforcement learning approach for anomaly network intrusion detection system,” in 2020 IEEE 9th International Conference on Cloud Networking (CloudNet). IEEE, Nov. 2020.
[2] M. H. Bhuyan, D. K. Bhattacharyya, and J. K. Kalita, “Network anomaly detection: methods, systems and tools,” IEEE Communications Surveys & Tutorials, vol. 16, no. 1, pp. 303–336, 2013.
[3] Y. Li, “Deep reinforcement learning: An overview,” arXiv preprint arXiv:1701.07274, 2017.
[4] M. Tavallaee, E. Bagheri, W. Lu, and A. A. Ghorbani, “A detailed analysis of the KDD CUP 99 data set,” in 2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications, 2009, pp. 1–6.
[5] N. Moustafa and J. Slay, “The evaluation of network anomaly detection systems: Statistical analysis of the UNSW-NB15 data set and the comparison with the KDD99 data set,” pp. 1–14, Jan. 2016.
[6] H. Alavizadeh, H. Alavizadeh, and J. Jang-Jaccard, “Deep Q-learning based reinforcement learning approach for network intrusion detection,” Computers, vol. 11, no. 3, p. 41, 2022.
[7] H. Benaddi, K. Ibrahimi, A. Benslimane, and J. Qadir, “A deep reinforcement learning based intrusion detection system (DRL-IDS) for securing wireless sensor networks and internet of things,” in Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Cham: Springer International Publishing, 2020, pp. 73–87.
[8] H. K. Patil and T. M. Chen, “Chapter 18 - Wireless sensor network security: The internet of things,” in Computer and Information Security Handbook (Third Edition), J. R. Vacca, Ed. Boston: Morgan Kaufmann, 2017, pp. 317–337. [Online]. Available: https://www.sciencedirect.com/science/article/pii/B9780128038437000181
[9] S. Kumar, P. Tiwari, and M. Zymbler, “Internet of things is a revolutionary approach for future technology enhancement: a review,” J. Big Data, vol. 6, no. 1, Dec. 2019.
[10] A. Ghosh, O. Khalid, R. N. B. Rais, A. Rehman, S. U. R. Malik, and I. A. Khan, “Data offloading in IoT environments: modeling, analysis, and verification,” EURASIP Journal on Wireless Communications and Networking, vol. 2019, no. 1, p. 53, Mar. 2019. [Online]. Available: https://doi.org/10.1186/s13638-019-1358-8
[11] Z. Lin, Y. Shi, and Z. Xue, “IDSGAN: Generative adversarial networks for attack generation against intrusion detection,” Sep. 2018.
[12] M. Ring, D. Schlör, D. Landes, and A. Hotho, “Flow-based network traffic generation using generative adversarial networks,” Sep. 2018.
[13] M. Ring, A. Dallmann, D. Landes, and A. Hotho, “IP2Vec: Learning similarities between IP addresses,” in 2017 IEEE International Conference on Data Mining Workshops (ICDMW), 2017, pp. 657–666.
[14] M. Ring, S. Wunderlich, D. Grüdl, D. Landes, and A. Hotho, “Flow-based benchmark data sets for intrusion detection,” 2017.
[15] F. Alghayadh and D. Debnath, “A hybrid intrusion detection system for smart home security,” in 2020 IEEE International Conference on Electro Information Technology (EIT). IEEE, Jul. 2020.
[16] ——, “Performance evaluation of machine learning for prediction of network traffic in a smart home,” in 2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON). IEEE, Oct. 2020.
[17] ——, “A hybrid intrusion detection system for smart home security based on machine learning and user behavior,” Adv. Internet Things, vol. 11, no. 01, pp. 10–25, 2021.
[18] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, and others, “Generative adversarial networks,” 2014.
[19] L. Xu, M. Skoularidou, A. Cuesta-Infante, and K. Veeramachaneni, “Modeling tabular data using conditional GAN,” 2019. [Online]. Available: https://arxiv.org/abs/1907.00503
[20] SDV, “CopulaGAN Model.” [Online]. Available: https://sdv.dev/SDV/user_guides/single_table/copulagan.html
[21] A. Montanez et al., “SDV: an open source library for synthetic data generation,” Ph.D. dissertation, Massachusetts Institute of Technology, 2018.
[22] S. Bourou, A. El Saer, T.-H. Velivassaki, A. Voulkidis, and T. Zahariadis, “A review of tabular data synthesis using GANs on an IDS dataset,” Information, vol. 12, no. 9, 2021. [Online]. Available: https://www.mdpi.com/2078-2489/12/9/375
[23] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” 2017. [Online]. Available: https://arxiv.org/abs/1701.07875
[24] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, “OpenAI Gym,” 2016.
[25] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: Large-scale machine learning on heterogeneous distributed systems,” Mar. 2016.
[26] L. Noriega, “Multilayer perceptron tutorial,” School of Computing, Staffordshire University, 2005.
[27] S. Yoon and K.-J. Kim, “Deep Q networks for visual fighting game AI,” in 2017 IEEE Conference on Computational Intelligence and Games (CIG). IEEE, 2017, pp. 306–308.
[28] M. Babaeizadeh, I. Frosio, S. Tyree, J. Clemons, and J. Kautz, “GA3C: GPU-based A3C for deep reinforcement learning,” CoRR abs/1611.06256, 2016.
Caroline Strickland received a B.Sc. in 2017 and an M.Sc. in 2019 from Memorial University of Newfoundland. She is currently in the third year of a Ph.D. in Computer Science at the University of Western Ontario. Her past research has focused on using reinforcement learning for pattern formation within swarm systems, and her current research interests involve the intersection of hierarchical reinforcement learning with healthcare.
Chandrika Saha received a B.Sc. in Computer Science and Engineering from the University of Barishal in 2019. She is currently pursuing an M.Sc. in Computer Science at Western University, London, Ontario, Canada. Her research interest is machine learning, more specifically deep learning and its application to network security.
Muhammad Zakar received a B.Sc. in Computer Science from Western University, London, Canada, in 2021. He is currently pursuing an M.Sc. in Computer Science at Western University. His current research interests are in the areas of autonomous drones and vehicles, distributed systems, next-generation networks, and machine learning.
Sareh Soltani Nejad received a B.Sc. in Computer Engineering from Amirkabir University of Technology (AUT), Tehran, Iran, in 2019. She is currently pursuing an M.Sc. in Computer Science at the University of Western Ontario, Canada. Her research interests broadly focus on machine learning and Internet of Things applications in smart cities, smart homes, and healthcare.
Noshin Tasnim received a B.Sc. in Computer Science and Engineering from BRAC University, Bangladesh, in 2019. She is currently pursuing an M.Sc. in Computer Science with the Department of Computer Science, Western University, London, Canada. Her current research interests are in the areas of network security and machine learning.
Daniel Lizotte is currently an Associate Professor in the Department of Computer Science and in the Department of Epidemiology and Biostatistics, University of Western Ontario, London, ON, Canada. His research group, in collaboration with community partners, investigates different aspects of data-driven decision support in public health and health care. This work aligns with methodological research in the areas of machine learning, epidemiology, and biostatistics. He has received funding from the Natural Sciences and Engineering Research Council of Canada, the Canadian Institutes of Health Research, and the Social Sciences and Humanities Research Council of Canada, and he has served in various capacities on committees for the Machine Learning for Health Care, International Conference on Machine Learning, and NeurIPS conferences.
Anwar Haque is an Assistant Professor in the Department of Computer Science at the University of Western Ontario, Canada. Before joining Western, he was an Associate Director at Bell Canada. He is a leading international expert on next-generation communication network resource and performance management, cyber security, and smart city applications. Dr. Haque has authored or co-authored over 80 peer-reviewed research publications in leading journals and conferences, authored many industry technical papers, and holds a number of patents and licenses. He has been awarded several national and provincial research grants, including NSERC, MITACS, OCE, and SOSCIP. Dr. Haque's collaborative research grants are valued at more than $15 million. He is serving on the inaugural advisory committee for the newly established Bell-Western 5G research centre, and he established an industry consortium to promote and support smart systems and digital services research at Western. Dr. Haque is the director of the Western Information & Networking Group (WING) Lab at Western.
+
K9E1T4oBgHgl3EQfswUE/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
KNE3T4oBgHgl3EQfvQuV/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:536c5cc4a59036ae8a9e431f433e36d6d410622d4bc3c7c4617fbf43276fa20b
3
+ size 31064109
L9E4T4oBgHgl3EQfKAwg/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1a3fb92bdb329c83b9f1181cce9d4e8466ca85924ad0d9a4b00a03959f1dc305
3
+ size 10747949
LdFRT4oBgHgl3EQf1zij/content/2301.13658v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:310f46a57235df3d1b924860570cc249b67c6d5fdb15e2cb84a3e53a0a692193
3
+ size 2427174
LdFRT4oBgHgl3EQf1zij/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bfa8d740d3e90bcbee983ff91b4b37c7e4d8e191225e4a237e718fbf41c873ad
3
+ size 155804