Influence of illumination on the quantum lifetime in selectively doped single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers

A. A. Bykov, D. V. Nomokonov, A. V. Goran, I. S. Strygin, I. V. Marchishin, A. K. Bakarov

Rzhanov Institute of Semiconductor Physics, Russian Academy of Sciences, Siberian Branch, Novosibirsk, 630090, Russia

The influence of illumination on a high-mobility two-dimensional electron gas with a high concentration of charge carriers is studied in selectively doped single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers at a temperature T = 4.2 K in magnetic fields B < 2 T. It is shown that illumination at low temperatures in the studied heterostructures leads to an increase in the concentration, mobility, and quantum lifetime of electrons. The increase in the quantum lifetime upon illumination of single GaAs quantum wells with modulated superlattice doping is explained by a decrease in the effective concentration of remote ionized donors.

Introduction

Persistent photoconductivity (PPC), which occurs in selectively doped GaAs/AlGaAs heterostructures at low temperatures (T) as a result of illumination with visible light, is widely used as a method for changing the concentration (ne), mobility (μ), and quantum lifetime (τq) of electrons in such two-dimensional (2D) systems [1-5]. PPC is also used in one-dimensional lateral superlattices based on high-mobility selectively doped GaAs/AlGaAs heterostructures [6, 7]. One of the causes of PPC is the change in the charge state of DX centers in doped AlGaAs layers under illumination [8, 9]. PPC is undesirable in high-mobility heterostructures intended for the manufacturing of field-effect transistors, as it introduces instability into their performance. One way to suppress PPC is to use short-period AlAs/GaAs superlattices as barriers to single GaAs quantum wells [10]. In this case, the sources of free charge carriers are thin δ-doped GaAs layers located in the short-period superlattice barriers, in which DX centers do not appear.

Another motivation for remote superlattice doping of single GaAs quantum wells is the fabrication of 2D electron systems with simultaneously high ne and μ. In selectively doped GaAs/AlGaAs heterostructures, to suppress the scattering of the 2D electron gas on the random potential of ionized donors, the charge transfer region is separated from the doping region by an undoped AlGaAs layer (spacer) [4]. High μ in such a system is achieved due to a "thick" spacer (dS > 50 nm) with a relatively low concentration ne ~ 3×10^15 m^-2. To implement high-mobility 2D electron systems with a "thin" spacer (dS < 50 nm) and high ne, it was proposed in [11] to use short-period AlAs/GaAs superlattices as barriers to single GaAs quantum wells (Fig. 1). In this case, the suppression of scattering by ionized Si donors is achieved not only by separation of the doping and transport regions, but also by the screening effect of X-electrons localized in the AlAs layers [11-13].
36
Fig. 1. (a) Schematic view of a single GaAs quantum well with side barriers of short-period AlAs/GaAs superlattices. (b) An enlarged view of a portion of the δ-doped layer in a narrow GaAs quantum well with adjacent AlAs layers. Ellipses show compact dipoles formed by positively charged Si donors in the δ-doped layer and X-electrons in AlAs layers [13].
155
Superlattice doping of single GaAs quantum wells is used not only to implement high-mobility 2D electron systems with a thin spacer [11, 12], but also to achieve ultrahigh μ in 2D electron systems with a thick spacer [14-16]. In GaAs/AlAs heterostructures with modulated superlattice doping, PPC due to a change in the charge states of DX centers should not arise [10]. However, it has been found that in selectively doped single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers and a thin spacer, illumination increases ne and μ [17-19], and with a thick spacer, it increases τq [20]. The increase in τq was explained by the redistribution of X-electrons in AlAs layers adjacent to thin δ-doped GaAs layers. However, the effect of illumination on τq in single GaAs quantum wells with a thin spacer and superlattice doping remains unexplored.
165
One of the features of GaAs/AlAs heterostructures with a thin spacer and superlattice doping grown by molecular beam epitaxy on (001) GaAs substrates is the anisotropy of μ [21]. In such structures, μy in the [-110] crystallographic direction can exceed μx in the [110] direction by several times [22]. The anisotropy of μ is due to scattering on the heterointerface roughness oriented along the [-110] direction and arising during the growth of the heterostructures [23, 24]. This work is devoted to studying the effect of illumination on a 2D electron gas with an anisotropic μ in single GaAs quantum wells with a thin spacer and superlattice doping. It has been established that illumination increases ne, μ, and τq in the heterostructures under study. It is shown that the increase in τq after illumination is due to a decrease in the effective concentration of remote ionized donors.
290
Quantum lifetime

The traditional method of measuring τq in a 2D electron gas is based on studying the dependence of the amplitude of the Shubnikov – de Haas (SdH) oscillations on the magnetic field (B) [25-30]. In 2D electron systems with isotropic μ, low-field SdH oscillations are described by the following relation [28]:

ΔρSdH = 4ρ0 X(T) exp(−π/ωcτq) cos(2πεF/ħωc − π),   (1)

where ΔρSdH is the oscillating component of the dependence ρxx(B), ρ0 = ρxx(B = 0) is the Drude resistance, X(T) = (2π^2 kBT/ħωc)/sinh(2π^2 kBT/ħωc), ωc = eB/m*, and εF is the Fermi energy. Using the results of [26], it is easy to generalize (1) to a 2D system with anisotropic mobility μd. In this case, the normalized amplitude of the SdH oscillations is determined by the following expression [31]:

AdSdH = ΔρdSdH/(ρ0d X(T)) = A0dSdH exp(−π/ωcτqd),   (2)

where the index d corresponds to the main mutually perpendicular directions x and y, and A0dSdH = 4.

The value of τq in single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers is determined mainly by small-angle scattering [11, 12]. In this case, τq can be expressed by the relation [32-34]:

τq ≈ τqR = (2m*/πħ)(kF dR)/nReff,   (3)

where τqR is the quantum lifetime for scattering on the random potential of remote impurities, kF = (2πne)^0.5, dR = dS + dSQW/2, dSQW is the thickness of the single GaAs quantum well, and nReff is the effective concentration of remote ionized donors. The value of nReff takes into account the change in the scattering potential of remote donors when they are bound to X-electrons (Fig. 1b) [13]. The dependence of nReff on ne in the heterostructures under study is described by the following phenomenological relation [35]:

nReff = nReff0/{exp[(ne − a)/b] + 1} ≡ nReff0 fab(ne),   (4)

where nReff0, a, and b are fitting parameters, and fab is the fraction of ionized remote donors not bound with X-electrons into compact dipoles.
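As a quick numerical illustration of formulas (3) and (4), the sketch below evaluates τq for structure no. 2 (dSQW = 10 nm, dS = 10.8 nm, Table 1) at the electron concentrations before and after illumination, using the fitting parameters nReff0 = 1.26×10^16 m^-2, a = 1.37×10^16 m^-2, b = 0.082×10^16 m^-2 quoted in the Fig. 4 caption. The GaAs effective mass m* = 0.067me is an assumption not stated explicitly in the text.

```python
import math

hbar = 1.054571817e-34            # J s
m_eff = 0.067 * 9.1093837015e-31  # kg, assumed GaAs effective mass

# Structure no. 2 geometry (Table 1)
d_SQW = 10e-9            # quantum well thickness, m
d_S = 10.8e-9            # average spacer thickness, m
d_R = d_S + d_SQW / 2    # distance to the remote donor layer, formula (3)

# Fitting parameters of formula (4), from the Fig. 4 caption
n_R0 = 1.26e16           # m^-2
a, b = 1.37e16, 0.082e16  # m^-2

def n_R_eff(n_e):
    """Effective concentration of remote ionized donors, formula (4)."""
    return n_R0 / (math.exp((n_e - a) / b) + 1.0)

def tau_q(n_e):
    """Quantum lifetime for remote-donor small-angle scattering, formula (3)."""
    k_F = math.sqrt(2.0 * math.pi * n_e)  # Fermi wave vector, m^-1
    return (2.0 * m_eff / (math.pi * hbar)) * k_F * d_R / n_R_eff(n_e)

for label, n_e in [("before illumination", 11.5e15), ("after illumination", 14.5e15)]:
    print(f"{label}: n_e = {n_e:.3g} m^-2, tau_q = {tau_q(n_e) * 1e12:.2f} ps")
```

With these inputs τq roughly quadruples after illumination, driven almost entirely by the drop of nReff (from about 1.2×10^16 to about 0.34×10^16 m^-2) rather than by the modest growth of kF.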
329
Samples under study and details of the experiment

The GaAs/AlAs heterostructures under study were grown by molecular beam epitaxy on semi-insulating GaAs (100) substrates. They were single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers [11, 12]. Two Si δ-doping layers, located at distances dS1 and dS2 from the upper and lower heterointerfaces of the GaAs quantum well, served as the sources of electrons. L-shaped bridges oriented along the [110] and [-110] directions were fabricated from the grown heterostructures by optical lithography and wet etching. The bridges were 100 µm long and 50 µm wide. The bridge resistance was measured at an alternating current Iac < 1 μA with a frequency fac ~ 0.5 kHz at a temperature T = 4.2 K in magnetic fields B < 2 T. A red LED was used for illumination.
454
Table 1. Heterostructure parameters: dSQW is the quantum well thickness; dS = (dS1 + dS2)/2 is the average spacer thickness; nSi is the total concentration of remote Si donors in the thin δ-doped GaAs layers; ne is the electron concentration; μx is the mobility in the [110] direction; μy is the mobility in the [-110] direction; μy/μx is the mobility ratio. The asterisk marks values obtained after illumination.

Structure | dSQW (nm) | dS (nm) | nSi (10^16 m^-2) | ne (10^15 m^-2) | μy (m^2/V s) | μx (m^2/V s) | μy/μx
1         | 13        | 29.4    | 3.2              | 7.48 / 8.42*    | 124 / 206*   | 80.5 / 103*  | 1.54 / 2*
2         | 10        | 10.8    | 5                | 11.5 / 14.5*    | 14.7 / 27.2* | 9.33 / 18.6* | 1.58 / 1.46*
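For orientation, the concentrations in Table 1 translate into Fermi wave vectors and Fermi energies via kF = (2πne)^0.5 and εF = πħ^2 ne/m* for a 2D gas; a minimal sketch, assuming the standard GaAs effective mass m* = 0.067me (not stated in the text):

```python
import math

hbar = 1.054571817e-34            # J s
m_eff = 0.067 * 9.1093837015e-31  # kg, assumed GaAs effective mass
eV = 1.602176634e-19              # J

# ne values from Table 1, m^-2 ("after" = after illumination)
samples = {"no. 1, before": 7.48e15, "no. 1, after": 8.42e15,
           "no. 2, before": 11.5e15, "no. 2, after": 14.5e15}

fermi = {}
for label, n_e in samples.items():
    k_F = math.sqrt(2.0 * math.pi * n_e)                   # Fermi wave vector, m^-1
    E_F_meV = math.pi * hbar**2 * n_e / m_eff / eV * 1e3   # 2D Fermi energy, meV
    fermi[label] = (k_F, E_F_meV)
    print(f"{label}: k_F = {k_F:.3g} 1/m, E_F = {E_F_meV:.1f} meV")
```

The Fermi energies (tens of meV) are far above kBT at 4.2 K (~0.36 meV), consistent with the strongly degenerate 2D gas assumed throughout.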
498
Experimental results and discussion

Fig. 2a shows the experimental dependences ρd(B) at T = 4.2 K for heterostructure no. 1 before illumination (curves 1 and 2) and after illumination (curves 3 and 4). In the region B > 0.5 T, SdH oscillations are observed, whose period in the inverse magnetic field decreased after illumination, indicating an increase in ne. After illumination, the values of ρ0d also decreased, which is due not only to the increase in ne but also to an increase in μd. The illumination also led to an increase in the positive magnetoresistance (MR) of the 2D electron gas, which indicates an increase in the quantum lifetime [36, 37]. The dependences of AdSdH on 1/B for structure no. 1 are shown in Fig. 2b. In accordance with formula (2), the slope of the dependences AdSdH(1/B) on a semilogarithmic scale is determined by the value of τqd. The decrease in slope after illumination indicates an increase in τqd. At the same time, the values of τqd measured in the [110] and [-110] directions are equal to within 5%.
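The slope analysis just described can be sketched numerically: generate synthetic SdH amplitudes from formula (2) with a known τq (here 1.44 ps, the pre-illumination τqx of structure no. 1 from the Fig. 2 fit) and recover τq from a linear fit of ln(AdSdH) versus 1/B. The effective mass m* = 0.067me is assumed.

```python
import math
import numpy as np

hbar = 1.054571817e-34
e = 1.602176634e-19
m_eff = 0.067 * 9.1093837015e-31  # kg, assumed GaAs effective mass

A0 = 5.02          # A0dSdH from the Fig. 2 fit (direction x)
tau_q = 1.44e-12   # quantum lifetime, s

inv_B = np.linspace(0.5, 2.0, 20)  # 1/B, 1/T
# Formula (2): A = A0 * exp(-pi / (omega_c * tau_q)), with omega_c = e B / m*
A = A0 * np.exp(-math.pi * m_eff / (e * tau_q) * inv_B)

# Dingle plot: the slope of ln(A) vs 1/B equals -pi m* / (e tau_q)
slope, intercept = np.polyfit(inv_B, np.log(A), 1)
tau_fit = -math.pi * m_eff / (e * slope)
print(f"recovered tau_q = {tau_fit * 1e12:.2f} ps, A0 = {math.exp(intercept):.2f}")
```

On real data the fit is restricted to low fields where the oscillations are small; here the recovery is exact because the synthetic amplitudes follow formula (2) identically.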
512
Fig. 2. (a) Experimental dependences of ρd on B measured on an L-shaped bridge at T = 4.2 K before illumination (1, 2) and after illumination (3, 4) (structure no. 1): 1, 3 – ρxx(B); 2, 4 – ρyy(B). The inset shows the geometry of the L-shaped bridge. (b) Dependences of AdSdH on 1/B before illumination (1, 2) and after illumination (3, 4). Symbols are experimental data. Solid lines – calculation by formula (2): 1 – A0xSdH = 5.02, τqx = 1.44 ps; 2 – A0ySdH = 4.57, τqy = 1.38 ps; 3 – A0xSdH = 6.29, τqx = 2.72 ps; 4 – A0ySdH = 4.66, τqy = 3.01 ps.
638
Fig. 3a shows the experimental dependences ρd(B) at T = 4.2 K for heterostructure no. 2 before illumination (curves 1 and 2) and after illumination (curves 3 and 4). For this structure, as for structure no. 1, short-term illumination at low temperature leads to an increase in ne and μd. However, for structure no. 2, in contrast to no. 1, the dependences ρxx(B) show no quantum positive MR, while a classical negative MR is observed [38], which decreases significantly after illumination. The dependences τtd(ne) are presented in Fig. 3b. These dependences are not described by the theory [32], which takes into account only the change in kF with increasing ne; the discrepancy is due to the change in nReff after illumination. A similar dependence of τtd on ne is also observed when the concentration of the 2D electron gas is varied using a Schottky gate [12, 35].
649
Fig. 3. (a) Dependences of ρxx(B) and ρyy(B) measured on the L-shaped bridge at T = 4.2 K (structure no. 2): 1, 2 – before illumination; 3, 4 – after short-term illumination by a red LED. (b) Dependences of τtx(ne) and τty(ne). Squares and circles – experimental data: 1 – τtx; 2 – τty. Solid lines – calculation according to the formula τtd ∝ ne^1.5: 1 – τtx; 2 – τty.
769
The experimental dependences τqd(ne) for structure no. 2 (Fig. 4a) show that the values of τqd for the two crystallographic directions are equal to within 5%, in agreement with [31]. The experimental data are well described by formula (3) with the effective concentration of positively charged Si donors calculated by formula (4). The agreement between the experimental dependences τqd(ne) and the calculated one indicates that the increase in the quantum lifetime of electrons in a single GaAs quantum well after low-temperature illumination is due to a decrease in nReff.
778
+
779
+ 0.6
780
+ 口1
781
+ 2
782
+ 0.4
783
+ (sd)
784
+ 0.2
785
+ (a)
786
+ 0.0
787
+ (b)
788
+ 1.2
789
+ 0.8
790
+ d
791
+
792
+ 1
793
+ 0.4
794
+ 0
795
+ 2
796
+ 0.0
797
+ 1.0
798
+ 1.2
799
+ 1.4
800
+ 1.6
801
+ ne (1016 m*2)(a)
802
+ AlAs/GaAs
803
+ Si-S-doping
804
+ SPSL
805
+ dsi
806
+ GaAs SQW
807
+ SQW
808
+ AlAs/GaAs
809
+ ↓ Si-8-doping
810
+ SPSL
811
+ (b)
812
+ AlAs
813
+ GaAs
814
+ Si-
815
+ +
816
+ +
817
+ +
818
+ AIAs18
819
+ Pyy
820
+ 12
821
+ 1
822
+ 3
823
+ 2
824
+ 6
825
+ 4
826
+ (a)
827
+ 0
828
+ 0.0
829
+ 0.2
830
+ 0.4
831
+ 0.6
832
+ 0.8
833
+ 1.0
834
+ B (T)
835
+ 6
836
+ (b)
837
+ 3
838
+ 5
839
+ 4
840
+ 4
841
+ 3
842
+ 1
843
+
844
+ 1
845
+ A
846
+ 2
847
+ 2
848
+ 2
849
+
850
+ 3
851
+ 4
852
+ 0.0
853
+ 0.4
854
+ 0.8
855
+ 1.2
856
+ 1.6
857
+ 2.0
858
+ 1/B (1/T)60
859
+ 1
860
+ 40
861
+ 2
862
+ 3
863
+ 20
864
+ 4
865
+ (a)
866
+ 0
867
+ 0.0
868
+ 0.6
869
+ 1.2
870
+ 1.8
871
+ B (T)
872
+ 12
873
+ (b)
874
+ 2
875
+ 0
876
+ 0
877
+ 8
878
+
879
+ (sd)
880
+ 1
881
+
882
+ 4.
883
+
884
+ 1
885
+ 2
886
+ 0
887
+ 1.1
888
+ 1.2
889
+ 1.3
890
+ 1.4
891
+ 1.5
892
+ ne (1016 m*2)Fig. 4. (a) Dependences of qd(ne): squares are the experimental values of qy; circles –
893
+ experimental values of qx; the solid line is the calculation for neff
894
+ R = neff
895
+ R0fab. (b) Dependences of
896
+ neff
897
+ R and neff
898
+ R0fab on ne: squares and circles are the values of neff
899
+ R calculated from the experimental
900
+ values of qx and qy; solid line – neff
901
+ R0fab for neff
902
+ R0 = 1.261016 m-2, a = 1.371016 m-2 and b =
903
+ 0.0821016 m-2.
904
Conclusion

The influence of illumination on low-temperature transport in a 2D electron gas with anisotropic mobility in selectively doped single GaAs quantum wells with short-period AlAs/GaAs superlattice barriers was studied in classically strong magnetic fields. It has been shown that, in the heterostructures under study, illumination by a red LED at low temperatures leads to an increase in the concentration, mobility, and quantum lifetime of electrons. The increase in the quantum lifetime of electrons in single GaAs quantum wells with modulated superlattice doping after illumination is explained by a decrease in the effective concentration of remote ionized donors.
913
Funding

This work was supported by the Russian Foundation for Basic Research (project no. 20-02-00309).
1031
References

[1] H. Stormer, R. Dingle, A. Gossard, W. Wiegmann, and M. Sturge, Solid State Commun. 29, 705 (1979).
[2] E. F. Schubert, J. Knecht, and K. Ploog, J. Phys. C: Solid State Phys. 18, L215 (1985).
[3] R. G. Mani and J. R. Anderson, Phys. Rev. B 37, 4299(R) (1988).
[4] L. Pfeiffer, K. W. West, H. L. Stormer, and K. W. Baldwin, Appl. Phys. Lett. 55, 1888 (1989).
[5] M. Hayne, A. Usher, J. J. Harris, V. V. Moshchalkov, and C. T. Foxon, Phys. Rev. B 57, 14813 (1998).
[6] D. Weiss, K. v. Klitzing, K. Ploog, and G. Weimann, Europhys. Lett. 8, 179 (1989).
[7] C. Hnatovsky, M. A. Zudov, G. D. Austing, A. Bogan, S. J. Mihailov, M. Hilke, K. W. West, L. N. Pfeiffer, and S. A. Studenikin, J. Appl. Phys. 132, 044301 (2022).
[8] R. J. Nelson, Appl. Phys. Lett. 31, 351 (1977).
[9] D. V. Lang, R. A. Logan, and M. Jaros, Phys. Rev. B 19, 1015 (1979).
[10] T. Baba, T. Mizutani, and M. Ogawa, Jpn. J. Appl. Phys. 22, L627 (1983).
[11] K. J. Friedland, R. Hey, H. Kostial, R. Klann, and K. Ploog, Phys. Rev. Lett. 77, 4616 (1996).
[12] D. V. Dmitriev, I. S. Strygin, A. A. Bykov, S. Dietrich, and S. A. Vitkalov, JETP Lett. 95, 420 (2012).
[13] M. Sammon, M. A. Zudov, and B. I. Shklovskii, Phys. Rev. Materials 2, 064604 (2018).
[14] V. Umansky, M. Heiblum, Y. Levinson, J. Smet, J. Nübler, and M. Dolev, J. Cryst. Growth 311, 1658 (2009).
[15] G. C. Gardner, S. Fallahi, J. D. Watson, and M. J. Manfra, J. Cryst. Growth 441, 71 (2016).
[16] Y. J. Chung, K. A. Villegas Rosales, K. W. Baldwin, K. W. West, M. Shayegan, and L. N. Pfeiffer, Phys. Rev. Materials 4, 044003 (2020).
[17] A. A. Bykov, I. V. Marchishin, A. K. Bakarov, J.-Q. Zhang, and S. A. Vitkalov, JETP Lett. 85, 63 (2007).
[18] A. A. Bykov, I. S. Strygin, I. V. Marchishin, and A. V. Goran, JETP Lett. 99, 303 (2014).
[19] A. A. Bykov, I. S. Strygin, E. E. Rodyakina, W. Mayer, and S. A. Vitkalov, JETP Lett. 101, 703 (2015).
[20] X. Fu, A. Riedl, M. Borisov, M. A. Zudov, J. D. Watson, G. Gardner, M. J. Manfra, K. W. Baldwin, L. N. Pfeiffer, and K. W. West, Phys. Rev. B 98, 195403 (2018).
[21] A. A. Bykov, A. K. Bakarov, A. V. Goran, A. V. Latyshev, and A. I. Toropov, JETP Lett. 74, 164 (2001).
1065
[22] K.-J. Friedland, R. Hey, O. Bierwagen, H. Kostial, Y. Hirayama, and K. H. Ploog, Physica E 13, 642 (2002).
[23] Y. Tokura, T. Saku, S. Tarucha, and Y. Horikoshi, Phys. Rev. B 46, 15558 (1992).
[24] M. D. Johnson, C. Orme, A. W. Hunt, D. Graff, J. Sudijono, L. M. Sander, and B. G. Orr, Phys. Rev. Lett. 72, 116 (1994).
[25] I. M. Lifshits and A. M. Kosevich, Zh. Eksp. Teor. Fiz. 29, 730 (1955) [Sov. Phys. JETP 2, 636 (1956)].
[26] A. Isihara and L. Smrcka, J. Phys. C: Solid State Phys. 19, 6777 (1986).
[27] P. T. Coleridge, R. Stoner, and R. Fletcher, Phys. Rev. B 39, 1120 (1989).
[28] P. T. Coleridge, Phys. Rev. B 44, 3793 (1991).
[29] S. D. Bystrov, A. M. Kreshchuk, S. V. Novikov, T. A. Polyanskaya, and I. G. Savel'ev, Fiz. Tekh. Poluprov. 27, 645 (1993) [Semiconductors 27, 358 (1993)].
[30] S. D. Bystrov, A. M. Kreshchuk, L. Taun, S. V. Novikov, T. A. Polyanskaya, I. G. Savel'ev, and A. Ya. Shik, Fiz. Tekh. Poluprov. 28, 91 (1994) [Semiconductors 28, 55 (1994)].
[31] D. V. Nomokonov, A. K. Bakarov, and A. A. Bykov, in press.
[32] A. Gold, Phys. Rev. B 38, 10798 (1988).
[33] J. H. Davies, The Physics of Low-Dimensional Semiconductors (Cambridge Univ. Press, New York, 1998).
[34] I. A. Dmitriev, A. D. Mirlin, D. G. Polyakov, and M. A. Zudov, Rev. Mod. Phys. 84, 1709 (2012).
[35] A. A. Bykov, I. S. Strygin, A. V. Goran, D. V. Nomokonov, and A. K. Bakarov, JETP Lett. 112, 437 (2020).
[36] M. G. Vavilov and I. L. Aleiner, Phys. Rev. B 69, 035303 (2004).
[37] S. Dietrich, S. Vitkalov, D. V. Dmitriev, and A. A. Bykov, Phys. Rev. B 85, 115312 (2012).
[38] A. A. Bykov, A. K. Bakarov, A. V. Goran, N. D. Aksenova, A. V. Popova, and A. I. Toropov, JETP Lett. 78, 134 (2003).
1207
-9E2T4oBgHgl3EQfQgbg/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
-tAzT4oBgHgl3EQfhPwa/content/tmp_files/2301.01480v1.pdf.txt ADDED
@@ -0,0 +1,1864 @@
A new over-dispersed count model
Anupama Nandi, Subrata Chakraborty, Aniket Biswas
Dibrugarh University
January 5, 2023

Abstract

A new two-parameter discrete distribution, namely the PoiG distribution, is derived by the convolution of a Poisson variate and an independently distributed geometric random variable. This distribution generalizes both the Poisson and geometric distributions and can be used for modelling over-dispersed as well as equi-dispersed count data. A number of important statistical properties of the proposed count model are derived, such as the probability generating function, the moment generating function, the moments, the survival function and the hazard rate function. Monotonic properties such as log-concavity, along with the stochastic ordering of the proposed model, are also investigated in detail. Method of moment and maximum likelihood estimators of the parameters of the proposed model are presented. It is envisaged that the proposed distribution may prove to be useful to practitioners for modelling over-dispersed count data, compared to its closest competitors.

Keywords: Geometric distribution; Poisson distribution; Conway-Maxwell Poisson distribution; BerG distribution; BerPoi distribution; Incomplete gamma function.
MSC 2010: 60E05, 62E15.
arXiv:2301.01480v1 [stat.ME] 4 Jan 2023

1 Introduction
The phenomenon of the variance of count data exceeding its mean is commonly termed over-dispersion in the literature. Over-dispersion is relevant in many modelling applications and is encountered more often than under-dispersion and equi-dispersion. A number of count models are available in the literature for over-dispersed data. However, the addition of a simple yet adequate model is of importance, given the ongoing research interest in this direction ([37], [25], [32], [35], [30], [29], [9], [19], [26], [34], [5], [2] and [36]). The simplest and most common count data model is the Poisson distribution. Its equi-dispersion characteristic is well-known. This is a limitation of the Poisson model, and to overcome this issue several alternatives have been developed and used for their obvious advantage over the classical Poisson model. Notable among these distributions are the hyper-Poisson (HP) of Bardwell and Crow [6], the generalized Poisson distribution of Jain and Consul [20], the double-Poisson of Efron [16], the weighted Poisson of Castillo and Pérez-Casany [15], the weighted generalized Poisson distribution of Chakraborty [10], the Mittag-Leffler function distribution of Chakraborty and Ong [13] and the popular COM-Poisson distribution of Shmueli et al. [31]. COM-Poisson generalizes the binomial and the negative binomial distribution. The classical geometric and negative binomial models are also used for over-dispersed count datasets. The gamma mixture of the Poisson distribution generates the negative binomial distribution [17]. Thus, unlike the Poisson distribution, these two count models possess the over-dispersion characteristic. Consequently, several extensions of the geometric distribution have been introduced in the literature for over-dispersed count data modelling ([11], [12], [18], [20], [22], [27], [28] and [33], among others). The two most widely used distributions for over-dispersed data are, of course, the negative binomial and the COM-Poisson. As pointed out earlier, there is still plenty of opportunity for developing new discrete distributions with a simple structure and explicit interpretation, appropriate for over-dispersed data.

Recently, Bourguignon et al. introduced the BerG distribution [8] by using the convolution of a Bernoulli random variable and a geometric random variable. In a very recent publication, Bourguignon et al. introduced the BerPoi distribution from a similar motivation [7]. This is a convolution of a Bernoulli random variable and a Poisson random variable. The first is capable of modelling over-dispersed, under-dispersed and equi-dispersed data, whereas the second is efficient for modelling under-dispersed data. This approach is simple and has enormous potential. Here we use this idea to develop a novel over-dispersed count model.

In this article, we propose a new discrete distribution derived from the convolution of two independent count random variables. The random variables are Poisson and geometric. Hence we identify the proposed model as PoiG. This two-parameter distribution has many advantages. Structural simplicity is one of them. It is easy to comprehend, unlike the COM-Poisson distribution, which involves a difficult normalising constant in its probability mass function. A model with closed-form expressions for the mean and the variance is well-suited for regression modelling. Unlike those of the COM-Poisson distribution, the mean and variance of the proposed distribution can be written in closed form. The proposed distribution extends both the Poisson and geometric distributions.

The rest of the article is organized as follows. In Section 2, we present the PoiG distribution. In Section 3, we describe its important statistical properties such as the recurrence relation, generating functions, moments, dispersion index, mode, reliability properties, monotonic properties and stochastic ordering. In Section 4, we present the moment and maximum likelihood methods of parameter estimation. We conclude the article with a few limitations and future scopes of the current study.
2 The PoiG distribution

In this section, we introduce a novel discrete distribution by considering two independent discrete random variables Y1 and Y2. Let us denote the set of non-negative integers {0, 1, 2, ...} by N0. Also let Y1 and Y2 follow the Poisson distribution with mean λ > 0 and the geometric distribution with parameter 0 < θ < 1, respectively. Both Y1 and Y2 have the same support N0. For convenience, we write Y1 ∼ P(λ) and Y2 ∼ G(θ). Consider Y = Y1 + Y2. Then,

Pr(Y = y) = Σ_{i=0}^{y} Pr(Y1 = i) Pr(Y2 = y − i)
          = Σ_{i=0}^{y} [e^{−λ} λ^i / i!] θ(1 − θ)^{y−i}
          = θ(1 − θ)^y e^{−λ} Σ_{i=0}^{y} (1/i!) [λ/(1 − θ)]^i,   y = 0, 1, 2, ... .   (1)

The distribution in (1), being the convolution of the Poisson and the geometric, is named the PoiG distribution and we write Y ∼ PoiG(λ, θ). Thus, the probability mass function (pmf) of PoiG(λ, θ) can be written as

pY(y) = [θ(1 − θ)^y / Γ(y + 1)] exp[λθ/(1 − θ)] Γ(y + 1, λ/(1 − θ)),   y = 0, 1, 2, ... .   (2)

Figure 1 exhibits the nature of the pmf for different choices of (λ, θ). The cumulative distribution function (cdf) of the PoiG distribution is

FY(y) = Pr(Y1 + Y2 ≤ y)
      = Σ_{y1=0}^{y} Σ_{y2=0}^{y−y1} pY1(y1) pY2(y2)
      = Σ_{y1=0}^{y} FG(y − y1) pY1(y1)
      = Σ_{y1=0}^{y} [1 − (1 − θ)^{y−y1+1}] pY1(y1)
      = Σ_{y1=0}^{y} e^{−λ} λ^{y1}/y1! − (1 − θ)^{y+1} e^{−λ} Σ_{y1=0}^{y} (1/y1!) [λ/(1 − θ)]^{y1}.   (3)

An explicit expression of (3) is given by

FY(y) = Γ(y + 1, λ)/Γ(y + 1) − [(1 − θ)^{y+1}/Γ(y + 1)] exp[λθ/(1 − θ)] Γ(y + 1, λ/(1 − θ)),   y = 0, 1, 2, ... .   (4)

Figure 2 exhibits the nature of the cdf for different choices of (λ, θ). The mean and variance of the PoiG(λ, θ) distribution are given as follows.

E(Y) = µ = λ + (1 − θ)/θ   and   V(Y) = σ² = λ + (1 − θ)/θ²   (5)
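The convolution sum in (1) and the moment expressions in (5) can be checked numerically. The following is a minimal pure-Python sketch; the function name `poig_pmf` and the truncation point are ours, not from the paper.

```python
import math

def poig_pmf(y, lam, theta):
    """PoiG(lam, theta) mass at y via the convolution sum in (1):
    theta (1-theta)^y e^{-lam} * sum_{i=0}^{y} (lam/(1-theta))^i / i!."""
    x = lam / (1 - theta)
    term, acc = 1.0, 1.0          # running term x^i / i! and partial sum
    for i in range(1, y + 1):
        term *= x / i
        acc += term
    return theta * (1 - theta) ** y * math.exp(-lam) * acc

lam, theta = 2.0, 0.4
probs = [poig_pmf(y, lam, theta) for y in range(200)]
mean = sum(y * p for y, p in enumerate(probs))
var = sum(y * y * p for y, p in enumerate(probs)) - mean ** 2

assert abs(sum(probs) - 1) < 1e-10                         # a valid pmf
assert abs(mean - (lam + (1 - theta) / theta)) < 1e-8      # E(Y) in (5)
assert abs(var - (lam + (1 - theta) / theta ** 2)) < 1e-8  # V(Y) in (5)
```

Truncating the support at 200 is harmless here, since the tail decays geometrically at rate 1 − θ.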
Special cases

• For λ → 0, PoiG(λ, θ) behaves like G(θ).
• For θ → 1, PoiG(λ, θ) behaves like P(λ).

Remark 1

• The incomplete gamma function [1] is defined as Γ(n, x) = ∫_x^∞ t^{n−1} e^{−t} dt and, for positive integer n and any value of x, it can also be rewritten as Γ(n, x) = (n − 1)! Σ_{k=0}^{n−1} e^{−x} x^k/k!. Thus the incomplete gamma function in (2) can be rewritten as
Γ(y + 1, λ/(1 − θ)) = Γ(y + 1) Σ_{i=0}^{y} [1/Γ(i + 1)] exp[−λ/(1 − θ)] [λ/(1 − θ)]^i,
where Γ(y + 1) = y! and Γ(i + 1) = i!.
• FY(0) = pY(0) = θe^{−λ}. Thus, the proportion of zeros in case of the PoiG distribution tends to θ as λ → 0 and to zero as λ → ∞.
3 Properties of the PoiG distribution

In this section, we explore several important statistical properties of the proposed PoiG(λ, θ) distribution. Some of the distributional properties studied here are the recurrence relation, the probability generating function (pgf), the moment generating function (mgf), the characteristic function (cf), the cumulant generating function (cgf), the moments, and the coefficients of skewness and kurtosis. We also study reliability properties such as the survival function and the hazard rate function. Log-concavity and stochastic ordering of the proposed model are also investigated.

3.1 Recurrence relation

A probability recurrence relation helps in finding the subsequent term from the preceding term. It usually proves to be advantageous in computing the masses at different values. Note that

pY(y) = [θ(1 − θ)^y/Γ(y + 1)] exp[λθ/(1 − θ)] Γ(y + 1, λ/(1 − θ))
      = θ(1 − θ)^y e^{−λ} Σ_{i=0}^{y} [1/Γ(i + 1)] [λ/(1 − θ)]^i
      = θ(1 − θ)^y e^{−λ} s_y.
Figure 1: Probability mass function of PoiG(λ, θ) for λ ∈ {0, 0.5, 5, 10} and θ ∈ {0.2, 0.4, 0.6, 0.8}. The (i, j)th plot corresponds to the ith value of λ and the jth value of θ for i, j = 1, 2, 3, 4.

[4 × 4 grid of pmf plots; axis-tick residue from the PDF extraction omitted.]
Figure 2: Cumulative distribution function of PoiG(λ, θ) for λ ∈ {0, 0.5, 5, 10} and θ ∈ {0.2, 0.4, 0.6, 0.8}. The (i, j)th plot corresponds to the ith value of λ and the jth value of θ for i, j = 1, 2, 3, 4.

[4 × 4 grid of cdf plots; axis-tick residue from the PDF extraction omitted.]
+ 25
617
+ 30Where,
618
+ sy =
619
+ y
620
+
621
+ i=0
622
+ 1
623
+ Γ(i + 1)
624
+
625
+ λ
626
+ 1 − θ
627
+ �i
628
+ and
629
+ sy+1 = sy +
630
+ 1
631
+ Γ(y + 2)
632
+
633
+ λ
634
+ 1 − θ
635
+ �y+1
636
+ .
637
+ Now,
638
+ pY (y + 1) = θ(1 − θ)y+1e−λsy+1
639
+ = θ(1 − θ)y+1e−λ
640
+
641
+ sy +
642
+ 1
643
+ Γ(y + 2)
644
+
645
+ λ
646
+ 1 − θ
647
+ �y+1�
648
+ = (1 − θ)pY (y) + θe−λ
649
+ λy+1
650
+ Γ(y + 2).
651
+ (6)
652
+ This is the recurrence formula of the PoiG distribution. It is easy to check that
653
+ sy+1
654
+ sy
655
+ = 1 +
656
+ 1
657
+ syΓ(y + 2)
658
+
659
+ λ
660
+ 1 − θ
661
+ �y+1
662
+ = 1
663
+ as y −→ ∞,
664
+ and
665
+ pY (y + 1)
666
+ pY (y)
667
+ = (1 − θ) + θe−λ
668
+ pY (y)
669
+ λy+1
670
+ Γ(y + 2) = 1 − θ
671
+ as y −→ ∞.
672
+ (7)
673
+ From (7), it is clear that the behaviour of the tail of the distribution depends on θ. When
674
+ θ −→ 0, the tail of the distribution decays relatively slowly, which implies long tail. when
675
+ θ −→ 1, the tail of the distribution decays fast, which implies short tail. This can easily
676
+ be verified from Figure 1.
677
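The recurrence (6) gives a cheap way to tabulate the pmf without re-evaluating the inner sum, and it also lets us observe the tail behaviour in (7) numerically. A minimal pure-Python sketch (function names are ours, not from the paper):

```python
import math

def poig_pmf_direct(y, lam, theta):
    # convolution form of the pmf, eq. (1), with an iteratively built inner sum
    x = lam / (1 - theta)
    term, acc = 1.0, 1.0
    for i in range(1, y + 1):
        term *= x / i
        acc += term
    return theta * (1 - theta) ** y * math.exp(-lam) * acc

def poig_pmf_recursive(n, lam, theta):
    """Masses p(0), ..., p(n) via recurrence (6):
    p(y+1) = (1 - theta) p(y) + theta e^{-lam} lam^{y+1} / (y+1)!."""
    p = [theta * math.exp(-lam)]          # p(0) = theta e^{-lam}
    for y in range(n):
        p.append((1 - theta) * p[y]
                 + theta * math.exp(-lam) * lam ** (y + 1) / math.factorial(y + 1))
    return p

lam, theta = 1.5, 0.3
p = poig_pmf_recursive(20, lam, theta)
assert all(abs(p[y] - poig_pmf_direct(y, lam, theta)) < 1e-12 for y in range(21))
# tail ratio p(y+1)/p(y) approaches 1 - theta, as claimed in (7)
assert abs(p[20] / p[19] - (1 - theta)) < 1e-3
```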
3.2 Generating functions

We use the notation H to denote a pgf and use the notation of the corresponding random variable in the subscript. For Y1 ∼ P(λ) and Y2 ∼ G(θ),
HY1(s) = e^{λ(s−1)}   and   HY2(s) = θ/[1 − (1 − θ)s].
Now, by using the convolution property of the probability generating function, we obtain the pgf of PoiG(λ, θ) as
HY(s) = θ e^{λ(s−1)}/(1 − s + θs).   (8)
Similar methods are used to obtain the other generating functions, including the mgf MY(t), the cf φY(t) and the cgf KY(t). These are given below.
MY(t) = θ e^{λ(e^t − 1)}/[1 − (1 − θ)e^t]   (9)
φY(t) = θ e^{λ(e^{it} − 1)}/[1 − (1 − θ)e^{it}]   (10)
KY(t) = λ(e^t − 1) + log{θ/[1 − (1 − θ)e^t]}   (11)
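The closed-form pgf (8) can be checked against the power series Σ pY(y) s^y. A minimal pure-Python sketch (function names and the truncation point are ours):

```python
import math

def poig_pgf(s, lam, theta):
    """Closed-form pgf (8): H_Y(s) = theta e^{lam(s-1)} / (1 - s + theta s)."""
    return theta * math.exp(lam * (s - 1)) / (1 - s + theta * s)

def poig_pmf(y, lam, theta):
    # convolution form of the pmf, eq. (1), with an iteratively built inner sum
    x = lam / (1 - theta)
    term, acc = 1.0, 1.0
    for i in range(1, y + 1):
        term *= x / i
        acc += term
    return theta * (1 - theta) ** y * math.exp(-lam) * acc

lam, theta, s = 2.0, 0.5, 0.7
series = sum(poig_pmf(y, lam, theta) * s ** y for y in range(200))
assert abs(series - poig_pgf(s, lam, theta)) < 1e-10
assert abs(poig_pgf(1.0, lam, theta) - 1.0) < 1e-12   # H_Y(1) = 1
```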
Let us discuss some useful definitions and notations for Result 1 given below. The notation G(θ) has already been introduced in Section 2. Let R be the number of failures preceding the first success in a sequence of independent Bernoulli trials. If the probability of success is θ ∈ (0, 1), then R is said to follow G(θ). Suppose we wait for the rth success. Then the number of failures is a negative binomial random variable with index r and parameter θ. Let NB(r, θ) denote this distribution. Suppose R_i ∼ G(θ) for i = 1, 2, ..., r independently and S ∼ NB(r, θ). Then S = R1 + R2 + ... + Rr. Thus, it is clear that G(θ) is a particular case of NB(r, θ) with r = 1. Similar to the genesis of the PoiG model, if we add one Poisson random variable and an independently distributed negative binomial random variable, it is possible to obtain a generalization of the PoiG model. An appropriate notation for this distribution would have been PoiNB. The objective of the current work is not to study this three-parameter distribution in detail. However, the following result establishes that the generalization from the geometric distribution to the negative binomial distribution translates similarly to the PoiG-PoiNB case. This may prove to be a motivation for generalizing the proposed model to PoiNB in future.

Result 1 The distribution of the sum of n independent PoiG random variables is a PoiNB random variable for fixed θ. Mathematically, if Y_i ∼ PoiG(λ_i, θ) for each i = 1, 2, ..., n, then
Σ_{i=1}^{n} Y_i ∼ PoiNB(Σ_{i=1}^{n} λ_i, n, θ).

Proof of Result 1 From (8), the pgf of Y_i ∼ PoiG(λ_i, θ) is
HY_i(s) = θ e^{λ_i(s−1)}/(1 − s + θs)
for i = 1, 2, ..., n. We can derive the pgf of the sum of n independent PoiG(λ_i, θ) variates based on the convolution property of the pgf. Let Z = Y1 + Y2 + ... + Yn. Then
HZ(s) = Π_{i=1}^{n} HY_i(s) = [θ^n/(1 − s + θs)^n] e^{Σ_{i=1}^{n} λ_i(s−1)}.   (12)
The term θ^n/(1 − s + θs)^n in (12) is the pgf of NB(n, θ), which is a generalisation of the geometric distribution, and e^{Σ_{i=1}^{n} λ_i(s−1)} is the pgf of P(Σ_{i=1}^{n} λ_i). Thus Σ_{i=1}^{n} Y_i ∼ PoiNB(Σ_{i=1}^{n} λ_i, n, θ).
3.3 Moments and related concepts

The rth order raw moment of Y ∼ PoiG(λ, θ) can be obtained using the general expressions of the raw moments of Y1 ∼ P(λ) and Y2 ∼ G(θ) as follows.
E(Y^r) = E[(Y1 + Y2)^r] = E[Σ_{j=0}^{r} C(r, j) Y1^j Y2^{r−j}] = Σ_{j=0}^{r} C(r, j) E(Y1^j) E(Y2^{r−j}),
where C(r, j) denotes the binomial coefficient. Note that
E(Y1^j) = Σ_{y1=0}^{∞} y1^j e^{−λ} λ^{y1}/y1! = Σ_{y1=0}^{∞} S(j, y1) λ^{y1} = φ_j(λ).
Here, S(j, y1) is the Stirling number of the second kind [1] and φ_j(λ) is the Bell polynomial [24]. Again,
E(Y2^{r−j}) = Σ_{y2=0}^{∞} y2^{r−j} θ(1 − θ)^{y2} = θ Li_{−(r−j)}(1 − θ),
where Li_{−(r−j)}(1 − θ) is the polylogarithm of negative integer order [14]. Hence
E(Y^r) = Σ_{j=0}^{r} C(r, j) φ_j(λ) θ Li_{−(r−j)}(1 − θ).   (13)
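Formula (13) can be verified numerically, with φ_j(λ) built from the Stirling-number recurrence and the negative-order polylogarithm evaluated by series truncation. A pure-Python sketch (function names and truncation choices are ours, not from the paper):

```python
import math

def stirling2(j, k):
    """Stirling numbers of the second kind: S(j,k) = k S(j-1,k) + S(j-1,k-1)."""
    if j == k:
        return 1
    if j == 0 or k == 0 or k > j:
        return 0
    return k * stirling2(j - 1, k) + stirling2(j - 1, k - 1)

def bell_poly(j, lam):
    # phi_j(lam) = sum_k S(j,k) lam^k = E(Y1^j) for Y1 ~ Poisson(lam)
    return sum(stirling2(j, k) * lam ** k for k in range(j + 1))

def geom_moment(m, theta, terms=2000):
    # E(Y2^m) = theta * Li_{-m}(1-theta) = theta * sum_{k>=1} k^m (1-theta)^k, m >= 1
    if m == 0:
        return 1.0
    return theta * sum(k ** m * (1 - theta) ** k for k in range(1, terms))

def poig_raw_moment(r, lam, theta):
    """Raw moment via (13): sum_j C(r,j) phi_j(lam) theta Li_{-(r-j)}(1-theta)."""
    return sum(math.comb(r, j) * bell_poly(j, lam) * geom_moment(r - j, theta)
               for j in range(r + 1))

lam, theta = 1.0, 0.5
assert abs(poig_raw_moment(1, lam, theta) - (lam + (1 - theta) / theta)) < 1e-9
m2_closed = (theta**2 * (lam**2 - lam + 1) + theta * (2*lam - 3) + 2) / theta**2
assert abs(poig_raw_moment(2, lam, theta) - m2_closed) < 1e-9
```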
The rth order raw moment can also be calculated by differentiating the mgf in (9) r times with respect to t and putting t = 0. That is,
E(Y^r) = MY^{(r)}(0) = [d^r MY(t)/dt^r]_{t=0}.
Explicit expressions of the first four moments are listed below.
E(Y) = λ + (1 − θ)/θ   (14)
E(Y²) = [θ²(λ² − λ + 1) + θ(2λ − 3) + 2]/θ²   (15)
E(Y³) = [θ³(λ³ + λ − 1) + θ²(3λ² − 6λ + 7) + θ(6λ − 12) + 6]/θ³   (16)
E(Y⁴) = [θ⁴(λ⁴ + 2λ³ + λ² − λ + 1) + θ³(4λ³ − 6λ² + 14λ − 15) + 2θ²(6λ² − 18λ + 25) + 12θ(2λ − 5) + 24]/θ⁴   (17)

Using the above, explicit expressions of the first four central moments are given as follows.
µ1 = 0   (18)
µ2 = λ + (1 − θ)/θ²   (19)
µ3 = (θ³λ + θ² − 3θ + 2)/θ³   (20)
µ4 = [θ⁴λ(3λ + 1) − θ³(6λ + 1) + 2θ²(3λ + 5) − 18θ + 9]/θ⁴   (21)

The first raw and second central moments are the mean and variance of the PoiG(λ, θ) distribution, respectively. Let γ1 and γ2 denote the coefficients of skewness and kurtosis, respectively. Using the central moments, these coefficients can be derived in closed forms as follows.
β1 = µ3²/µ2³ = (θ³λ + θ² − 3θ + 2)²/(θ²λ − θ + 1)³
γ1 = √β1 = √[(θ³λ + θ² − 3θ + 2)²/(θ²λ − θ + 1)³]
β2 = µ4/µ2² = [θ⁴λ(3λ + 1) − θ³(6λ + 1) + 2θ²(3λ + 5) − 18θ + 9]/(θ²λ − θ + 1)²
γ2 = β2 − 3 = [θ⁴λ(3λ + 1) − θ³(6λ + 1) + 2θ²(3λ + 5) − 18θ + 9]/(θ²λ − θ + 1)² − 3

Remark 3

• As θ → 1, β1 → 1/λ and as θ → 0, β1 → 4.
• As θ → 1, β2 → 3 + 1/λ and as θ → 0, β2 → 9.

The statements made in Remark 3 can easily be realized visually from Figure 3 and Figure 4, respectively. Clearly, as λ → ∞, the distribution tends to attain a normal shape with β1 → 0 and β2 → 3.
3.4 Dispersion index and coefficient of variation

The dispersion index determines whether a distribution is suitable for modelling an over-, under- or equi-dispersed dataset. Let IY denote the dispersion index of the distribution of the random variable Y. When IY is more or less than one, the distribution of Y can accommodate over-dispersion or under-dispersion, respectively. The notion of equi-dispersion is indicated when IY = 1. The dispersion index is given by
IY = σ²/µ = 1 + (1 − θ)²/[θ(1 + λθ − θ)].

Figure 3: Skewness of PoiG(λ, θ) for θ ∈ {0.2, 0.4, 0.6, 0.8}. The ith plot corresponds to the ith value of θ, for different values of λ on the x-axis.

Figure 4: Kurtosis of PoiG(λ, θ) for θ ∈ {0.2, 0.4, 0.6, 0.8}. The ith plot corresponds to the ith value of θ, for different values of λ on the x-axis.

From the expression of IY above, it follows that the PoiG distribution is equi-dispersed when θ = 1 and over-dispersed for all 0 < θ < 1. From Figure 5, it can be observed that IY increases with decreasing λ and θ.
The coefficient of variation (CV) is an indicator of data variability. A higher value of the CV indicates the capability of a distribution to model data with higher variability. Note that
CV(Y) = [√(λθ² − θ + 1)/(λθ − θ + 1)] × 100%.
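Both quantities can be coded directly from their closed forms and checked against the mean and variance in (5). A pure-Python sketch (function names are ours, not from the paper):

```python
import math

def poig_dispersion_index(lam, theta):
    """I_Y = 1 + (1-theta)^2 / (theta (1 + lam*theta - theta))."""
    return 1 + (1 - theta) ** 2 / (theta * (1 + lam * theta - theta))

def poig_cv(lam, theta):
    # CV = sqrt(lam theta^2 - theta + 1) / (lam theta - theta + 1), as a percentage
    return math.sqrt(lam * theta ** 2 - theta + 1) / (lam * theta - theta + 1) * 100

lam, theta = 2.0, 0.4
mean = lam + (1 - theta) / theta
var = lam + (1 - theta) / theta ** 2
assert abs(poig_dispersion_index(lam, theta) - var / mean) < 1e-12
assert poig_dispersion_index(lam, theta) > 1              # over-dispersed for 0 < theta < 1
assert abs(poig_dispersion_index(lam, 1.0) - 1) < 1e-12   # equi-dispersed at theta = 1
assert abs(poig_cv(lam, theta) - math.sqrt(var) / mean * 100) < 1e-9
```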
3.5 Mode

In Section 3.7, we show that PoiG(λ, θ) is unimodal. Note that
pY(1) ≤ pY(0)
⟹ (1 + λ − θ)θe^{−λ} ≤ θe^{−λ}
⟹ λθe^{−λ} − θ²e^{−λ} ≤ 0
⟹ λ − θ ≤ 0
⟹ λ ≤ θ.
The converse is trivially true. Thus, the distribution has its mode at zero for λ ≤ θ. Figure 1 clearly shows that the mode is zero for λ = 0 and λ = 0.5 with θ > 0 and θ > 0.5, respectively. For the equality case,

[Plots of Figures 3 and 4 lost in PDF extraction; axis-tick residue omitted.]

Figure 5: Dispersion index of PoiG(λ, θ).

Figure 6: Probability mass function of PoiG(λ, θ) for λ = θ ∈ {0.2, 0.4, 0.6, 0.8}.

that is λ = θ, the masses at zero and at unity are the same. Figure 6 clearly exhibits this fact. However, for the λ > θ case, the distribution has a non-zero mode. Unfortunately, an explicit expression for this non-zero mode is difficult to find, if not impossible.
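Although the non-zero mode has no closed form for λ > θ, it is easy to locate numerically from the pmf, here tabulated through the recurrence (6). A pure-Python sketch (function names and the search bound are ours, not from the paper):

```python
import math

def poig_pmf_sequence(lam, theta, ymax):
    # masses p(0..ymax) via recurrence (6): p(y+1) = (1-theta) p(y) + theta * Pois(lam, y+1)
    p = [theta * math.exp(-lam)]
    pois = math.exp(-lam)            # Poisson(lam) mass at 0, updated iteratively
    for y in range(ymax):
        pois *= lam / (y + 1)        # Poisson mass at y + 1
        p.append((1 - theta) * p[y] + theta * pois)
    return p

def poig_mode(lam, theta, ymax=200):
    # argmax of the (unimodal) pmf over 0..ymax
    p = poig_pmf_sequence(lam, theta, ymax)
    return max(range(ymax + 1), key=lambda y: p[y])

assert poig_mode(0.3, 0.6) == 0   # lam <= theta: mode at zero, as shown above
assert poig_mode(5.0, 0.8) > 0    # lam > theta: the mode moves away from zero
```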
3.6 Reliability properties

The reliability function of a discrete random variable Y at y is defined as the probability of Y assuming values greater than or equal to y. The reliability function is also termed the survival function. The survival function of Y ∼ PoiG(λ, θ) is

SY(y) = P(Y ≥ y) = 1 − Γ(y, λ)/Γ(y) + [(1 − θ)^y/Γ(y)] exp[λθ/(1 − θ)] Γ(y, λ/(1 − θ)).   (22)

The hazard rate or failure rate of a discrete random variable T at time point t is defined as the conditional probability of failure at t, given that the survival time is at least t. The hazard rate function (hrf) of Y ∼ PoiG(λ, θ) can be obtained by using (1) and (4)
[Plot residue from the PDF extraction omitted: legend labels include θ = 0.75; λ = 0, 1, 5, 50; θ = 0.15, 0.30, 0.45, 0.60.]
+ 8as follows.
h_Y(y) = P(Y = y) / P(Y ≥ y)

       = [θ(1 − θ)^y / Γ(y + 1)] exp(λθ/(1 − θ)) Γ(y + 1, λ/(1 − θ)) / {1 − Γ(y, λ)/Γ(y) + [(1 − θ)^y / Γ(y)] exp(λθ/(1 − θ)) Γ(y, λ/(1 − θ))}

       = θ(1 − θ)^y exp(λθ/(1 − θ)) Γ(y + 1, λ/(1 − θ)) / {Γ(y + 1) − y Γ(y, λ) + y (1 − θ)^y exp(λθ/(1 − θ)) Γ(y, λ/(1 − θ))}.    (23)

The hrf for different choices of the parameters is exhibited in Figure 7. The PoiG distribution exhibits a constant failure rate when λ is very small, and it exhibits an increasing failure rate, up to a specific time period, when λ increases.

In reliability studies, the mean residual life is the expected additional lifetime given that a component has survived until a fixed time. If the random variable Y ∼ PoiG(λ, θ) represents the life of a component, then the mean residual life at k is

μ_Y(k) = E(Y − k | Y ≥ k) = Σ_{y=k}^{∞} (y − k) P(Y = y) / P(Y ≥ k) = Σ_{y=k}^{∞} F̄(y) / F̄(k − 1)

       = Σ_{y=k}^{∞} {1 − Γ(y, λ)/Γ(y) + [(1 − θ)^y / Γ(y)] exp(λθ/(1 − θ)) Γ(y, λ/(1 − θ))} / {1 − Γ(k − 1, λ)/Γ(k − 1) + [(1 − θ)^{k−1} / Γ(k − 1)] exp(λθ/(1 − θ)) Γ(k − 1, λ/(1 − θ))}.    (24)
3.7 Monotonic Properties

Y ∼ PoiG(λ, θ) is log-concave if the following holds for all y ≥ 1:

p_Y^2(y) ≥ p_Y(y − 1) p_Y(y + 1).

A log-concave distribution possesses several desirable properties. Some of the notable examples of log-concave distributions are the Bernoulli, binomial, Poisson, geometric, and negative binomial. The convolution of two independent log-concave distributions is also a log-concave distribution [21]. Being the convolution of Poisson and geometric distributions, the proposed PoiG distribution is log-concave. Consequently, the following statements hold good for the PoiG distribution ([23] and [3]).

• Strongly unimodal.
• At most one exponential tail.
• All the moments exist.
• Log-concave survival function.
• Monotonically increasing hazard rate function (see Figure 7).
• Monotonically decreasing mean residual life function.

Figure 7: Hazard rate function of PoiG(λ, θ) for λ ∈ {0, 0.5, 5, 10} row-wise and θ ∈ {0.2, 0.4, 0.6, 0.8} column-wise. The (i, j)th plot corresponds to the ith value of λ and jth value of θ for i, j = 1, 2, 3, 4.
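The log-concavity claim can be spot-checked numerically from the convolution form of the pmf. This small sketch (ours, not the paper's) verifies p(y)² ≥ p(y − 1) p(y + 1) on a grid of support points; it is a numerical check under a floating-point tolerance, not a proof:

```python
import math

def poig_pmf(y, lam, theta):
    # pmf of PoiG as the convolution of Poisson(lam) and geometric(theta)
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               * theta * (1.0 - theta)**(y - k)
               for k in range(y + 1))

def is_log_concave(lam, theta, upper=60):
    """Check p(y)^2 >= p(y-1) * p(y+1) at every interior support point,
    with a tiny slack for floating-point rounding."""
    p = [poig_pmf(y, lam, theta) for y in range(upper)]
    return all(p[y]**2 >= p[y - 1] * p[y + 1] * (1 - 1e-12)
               for y in range(1, upper - 1))
```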
3.8 Stochastic ordering
Stochastic order is an important statistical property used to compare the behaviour of different random variables [4]. We have considered here the likelihood ratio order ≥_lr. Let X ∼ PoiG(λ1, θ) and Y ∼ PoiG(λ2, θ). Then Y is said to be smaller than X in the usual likelihood ratio order, that is Y ≤_lr X, if L(x) = p_X(x)/p_Y(x) is an increasing function in x, that is L(x) ≤ L(x + 1), for all 0 < θ < 1 and λ2 < λ1. Note that,

p_X(x) = θ(1 − θ)^x e^{−λ1} Σ_{i=0}^{x} [1/Γ(i + 1)] [λ1/(1 − θ)]^i,    x = 0, 1, 2, ...

p_Y(x) = θ(1 − θ)^x e^{−λ2} Σ_{i=0}^{x} [1/Γ(i + 1)] [λ2/(1 − θ)]^i,    x = 0, 1, 2, ...

L(x) = exp[−(λ1 − λ2)] Σ_{i=0}^{x} [1/Γ(i + 1)] [λ1/(1 − θ)]^i / Σ_{i=0}^{x} [1/Γ(i + 1)] [λ2/(1 − θ)]^i,    x = 0, 1, 2, ...

It is easy to see that L(x) ≤ L(x + 1) for all 0 < θ < 1 and λ2 < λ1.
Let Y ≤_st X denote P(Y ≥ x) ≤ P(X ≥ x) for all x. This is the notion of stochastic ordering. Similarly, the hazard rate order Y ≤_hr X implies

p_X(x)/P(X ≥ x) ≤ p_Y(x)/P(Y ≥ x)

for all x. The reversed hazard rate order Y ≤_rh X implies

p_Y(x)/P(Y ≤ x) ≤ p_X(x)/P(X ≤ x)

for all x. From the likelihood ratio order of X and Y, the following statements are immediate [4].

• Stochastic order: Y ≤_st X.
• Hazard rate order: Y ≤_hr X.
• Reversed hazard rate order: Y ≤_rh X.
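The monotonicity of L(x) that underlies these orderings is easy to verify numerically. Below is a small sketch of ours evaluating L(x) from the partial sums above for one choice of λ1 > λ2; the function names are illustrative:

```python
import math

def partial_sum(x, lam, theta):
    # sum_{i=0}^{x} (lam / (1 - theta))^i / Gamma(i + 1)
    return sum((lam / (1.0 - theta))**i / math.factorial(i)
               for i in range(x + 1))

def likelihood_ratio(x, lam1, lam2, theta):
    # L(x) = exp(-(lam1 - lam2)) * S1(x) / S2(x)
    return (math.exp(-(lam1 - lam2))
            * partial_sum(x, lam1, theta) / partial_sum(x, lam2, theta))
```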
4 Estimation

Let Y = (Y1, Y2, ..., Yn) be a random sample of size n from the PoiG(λ, θ) distribution and y = (y1, y2, ..., yn) be a realization of Y. The objective of this section is to estimate the parameters λ and θ based on the available data y. We present two different methods of estimation. We also find asymptotic confidence intervals for both parameters based on the maximum likelihood estimates.
4.1 Method of moments

Using the expressions in (14) and (19), the mean and the variance of Y ∼ PoiG(λ, θ) are as follows:

μ′₁ = λ + (1 − θ)/θ    and    μ₂ = λ + (1 − θ)/θ².

Now, by subtracting μ₂ from μ′₁,

μ′₁ − μ₂ = (1 − θ)/θ − (1 − θ)/θ²
⇒ μ′₁ − μ₂ = [(1 − θ)/θ] [1 − 1/θ]
⇒ μ₂ − μ′₁ = [(1 − θ)/θ]²
⇒ (1 − θ)/θ = √(μ₂ − μ′₁)
⇒ θ = 1 / [1 + √(μ₂ − μ′₁)].    (25)

By putting θ from (25) in μ′₁, we obtain

λ = μ′₁ − √(μ₂ − μ′₁).    (26)

This method involves equating sample moments with theoretical moments. Thus, by equating the first sample moment about the origin m′₁ = Σ_{i=1}^{n} yᵢ/n to μ′₁ and the second sample moment about the mean m₂ = Σ_{i=1}^{n} (yᵢ − ȳ)²/n to μ₂ in equations (25) and (26), we obtain the following estimators for λ and θ:

λ̂_MM = m′₁ − √(m₂ − m′₁)    (27)

θ̂_MM = 1 / [1 + √(m₂ − m′₁)]    (28)
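Estimators (27)-(28) translate directly into code. The sketch below is our illustration; the guard for m₂ < m′₁ is our addition, since the square root is undefined for under-dispersed samples:

```python
import math

def poig_method_of_moments(sample):
    """Moment estimators (27)-(28): lam_hat = m1 - sqrt(m2 - m1) and
    theta_hat = 1 / (1 + sqrt(m2 - m1)), where m1 is the sample mean and
    m2 the second sample moment about the mean."""
    n = len(sample)
    m1 = sum(sample) / n
    m2 = sum((y - m1)**2 for y in sample) / n
    root = math.sqrt(max(m2 - m1, 0.0))   # guard: under-dispersed samples
    return m1 - root, 1.0 / (1.0 + root)

# toy over-dispersed count data (illustrative only)
data = [0, 1, 1, 2, 3, 3, 4, 6, 8, 12]
lam_hat, theta_hat = poig_method_of_moments(data)
```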
4.2 Maximum likelihood method

Using the pmf of Y ∼ PoiG(λ, θ) in (1), the log-likelihood function of the parameters λ and θ can easily be found as

l(λ, θ; y) = n log θ + n ȳ log(1 − θ) + nλθ/(1 − θ) + Σ_{i=1}^{n} log[ Γ(yᵢ + 1, λ/(1 − θ)) / Γ(yᵢ + 1) ].    (29)

Let us define

β = λ/(1 − θ)    and, for j = 1, 2, 3, ...,

α_j(yᵢ) = [e^{−β} / Γ(yᵢ + 1, β)] (1 − θ)^{−j}.

Differentiating (29) with respect to the parameters λ and θ, we get the score functions as

∂l(λ, θ; y)/∂λ = nθ/(1 − θ) − Σ_{i=1}^{n} α₁(yᵢ) β^{yᵢ}    (30)

∂l(λ, θ; y)/∂θ = n/θ + n(λ − ȳ)/(1 − θ) + nλθ/(1 − θ)² − Σ_{i=1}^{n} λ α₂(yᵢ) β^{yᵢ}.    (31)
Ideally, the explicit maximum likelihood estimators are obtained by simultaneously solving the two equations obtained by setting the right-hand sides of (30) and (31) equal to zero. Unfortunately, explicit expressions for the maximum likelihood estimators could not be obtained in this case due to the structural complexity. Thus, we directly optimize the log-likelihood function with respect to the parameters using an appropriate numerical technique. Let λ̂_ML and θ̂_ML denote the maximum likelihood estimates (MLE) of λ and θ, respectively.

Now, our objective is to obtain asymptotic confidence intervals for both parameters. For this purpose, we require the information matrix. The second-order partial derivatives of the log-likelihood are given below.

∂²l(λ, θ; y)/∂λ² = Σ_{i=1}^{n} [ (β^{yᵢ} − yᵢ β^{yᵢ−1}) α₂(yᵢ) − β^{2yᵢ} α₁(yᵢ)² ]

∂²l(λ, θ; y)/∂λ∂θ = n/(1 − θ)² + Σ_{i=1}^{n} [ λ(β^{yᵢ} − yᵢ β^{yᵢ−1}) α₃(yᵢ) − β^{yᵢ} α₂(yᵢ) − λ(1 − θ) β^{2yᵢ} α₁(yᵢ)² ]

∂²l(λ, θ; y)/∂θ² = [2nλ − n ȳ (1 − θ)]/(1 − θ)³ − n/θ² + Σ_{i=1}^{n} [ ((λ² − 2λ(1 − θ)) β^{yᵢ} − λ² yᵢ β^{yᵢ−1}) α₄(yᵢ) − λ² β^{2yᵢ} α₂(yᵢ)² ]

The Fisher information matrix for (λ, θ) is

I = [ −E(∂²l(λ, θ; y)/∂λ²)    −E(∂²l(λ, θ; y)/∂λ∂θ)
      −E(∂²l(λ, θ; y)/∂λ∂θ)   −E(∂²l(λ, θ; y)/∂θ²) ].

This can be approximated by

Î = [ −∂²l(λ, θ; y)/∂λ²    −∂²l(λ, θ; y)/∂λ∂θ
      −∂²l(λ, θ; y)/∂λ∂θ   −∂²l(λ, θ; y)/∂θ² ]  evaluated at (λ, θ) = (λ̂_ML, θ̂_ML).

Under some general regularity conditions, for large n, √n(λ̂_ML − λ, θ̂_ML − θ) is bivariate normal with the mean vector (0, 0) and the dispersion matrix

Î⁻¹ = [1/(I₁₁I₂₂ − I₁₂I₂₁)] [ I₂₂  −I₁₂
                              −I₂₁   I₁₁ ] = [ J₁₁  −J₁₂
                                               −J₂₁   J₂₂ ].

Thus, the asymptotic (1 − α) × 100% confidence intervals for λ and θ are given respectively by

( λ̂_ML − Z_{α/2} √J₁₁ , λ̂_ML + Z_{α/2} √J₁₁ )    and    ( θ̂_ML − Z_{α/2} √J₂₂ , θ̂_ML + Z_{α/2} √J₂₂ ).
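Because the score equations have no closed-form solution, the log-likelihood must be maximized numerically. The sketch below (ours, stdlib only) writes (29) via the pmf and uses a crude shrinking grid search as a stand-in for a proper numerical optimizer; the search bounds and refinement schedule are our assumptions:

```python
import math

def poig_loglik(lam, theta, ys):
    """Log-likelihood (29), written via the pmf
    p(y) = theta (1-theta)^y e^{-lam} sum_{i<=y} (lam/(1-theta))^i / i!."""
    beta = lam / (1.0 - theta)
    ll = 0.0
    for y in ys:
        s = sum(beta**i / math.factorial(i) for i in range(y + 1))
        ll += math.log(theta) + y * math.log(1.0 - theta) - lam + math.log(s)
    return ll

def poig_mle(ys, grid=40, rounds=4):
    """Crude two-parameter grid search, refined over a few rounds, in
    place of the numerical optimizer the text calls for."""
    lo_l, hi_l = 1e-3, 3.0 * (sum(ys) / len(ys) + 1)
    lo_t, hi_t = 1e-3, 1.0 - 1e-3
    best = None
    for _ in range(rounds):
        for i in range(grid + 1):
            lam = lo_l + (hi_l - lo_l) * i / grid
            for j in range(grid + 1):
                theta = lo_t + (hi_t - lo_t) * j / grid
                ll = poig_loglik(lam, theta, ys)
                if best is None or ll > best[0]:
                    best = (ll, lam, theta)
        _, lam, theta = best
        # shrink the search box around the current best point
        dl, dt = (hi_l - lo_l) / grid, (hi_t - lo_t) / grid
        lo_l, hi_l = max(1e-3, lam - 2 * dl), lam + 2 * dl
        lo_t, hi_t = max(1e-3, theta - 2 * dt), min(1.0 - 1e-3, theta + 2 * dt)
    return best[1], best[2]
```

In practice one would use a quasi-Newton routine instead of a grid search; the sketch only illustrates that the likelihood surface is easy to evaluate.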
5 Discussion

In this article, a new two-parameter distribution is proposed and extensively studied. While the core of this work is theoretical development, its applied aspect is also important. From the application point of view, the proposed model is easy to use for modeling over-dispersed data. Despite the availability of several other over-dispersed count models, the proposed model may find wide applications due to the interpretability of its parameters. The parameter λ controls the tail of the distribution, while the parameter θ adjusts for the over-dispersion present in a given dataset. Their combined effect gives flexibility to the shape of the distribution. When θ dominates λ, the distribution keeps a J-shaped mass function, and for large λ, a bell-shaped mass function. Consequently, the hump, or the concentration of the observations, is well accommodated. A simulation experiment to investigate the performance of the point and asymptotic interval estimators, together with a comparative real-life data analysis, will be reported in the complete version of the article.
1755
+ References
1756
+ [1] Abramowitz, M., and Stegun, I. A. Handbook of mathematical functions with
1757
+ formulas, graphs, and mathematical tables, vol. 55. US Government printing office,
1758
+ 1964.
1759
+ [2] Altun, E. A new generalization of geometric distribution with properties and ap-
1760
+ plications. Communications in Statistics-Simulation and Computation 49, 3 (2020),
1761
+ 793–807.
1762
+ [3] Bagnoli, M., and Bergstrom, T. Log-concave probability and its applications.
1763
+ In Rationality and Equilibrium. Springer, 2006, pp. 217–241.
1764
+ [4] Bakouch, H. S., Jazi, M. A., and Nadarajah, S. A new discrete distribution.
1765
+ Statistics 48, 1 (2014), 200–240.
1766
+ [5] Bar-Lev, S. K., and Ridder, A. Exponential dispersion models for overdispersed
1767
+ zero-inflated count data. Communications in Statistics-Simulation and Computation
1768
+ (2021), 1–19.
1769
+ [6] Bardwell, G., and Crow, E. A two parameter family of hyper-Poisson distribu-
1770
+ tions. Journal of the American Statistical Association 59 (1964), 133–141.
1771
+ [7] Bourguignon, M., Gallardo, D. I., and Medeiros, R. M. A simple and
1772
+ useful regression model for underdispersed count data based on Bernoulli–Poisson
1773
+ convolution. Statistical Papers 63, 3 (2022), 821–848.
1774
+ [8] Bourguignon, M., and Weiß, C. H. An INAR (1) process for modeling count
1775
+ time series with equidispersion, underdispersion and overdispersion. Test 26, 4 (2017),
1776
+ 847–868.
1777
+ [9] Campbell, N. L., Young, L. J., and Capuano, G. A. Analyzing over-dispersed
1778
+ count data in two-way cross-classification problems using generalized linear models.
1779
+ Journal of Statistical Computation and Simulation 63, 3 (1999), 263–281.
1780
+ [10] Chakraborty, S.
1781
+ On some distributional properties of the family of weighted
1782
+ generalized poisson distribution. Communications in Statistics—Theory and Methods
1783
+ 39, 15 (2010), 2767–2788.
1784
+ [11] Chakraborty, S., and Bhati, D. Transmuted geometric distribution with ap-
1785
+ plications in modeling and regression analysis of count data. SORT-Statistics and
1786
+ Operations Research Transactions (2016), 153–176.
1787
+ [12] Chakraborty, S., and Gupta, R. D. Exponentiated geometric distribution: an-
1788
+ other generalization of geometric distribution. Communications in Statistics-Theory
1789
+ and Methods 44, 6 (2015), 1143–1157.
1790
+ [13] Chakraborty, S., and Ong, S. Mittag-leffler function distribution-a new gen-
1791
+ eralization of hyper-Poisson distribution.
1792
+ Journal of Statistical distributions and
1793
+ applications 4, 1 (2017), 1–17.
1794
+ [14] Cvijović, D. New integral representations of the polylogarithm function. Proceed-
1795
+ ings of the Royal Society A: Mathematical, Physical and Engineering Sciences 463,
1796
+ 2080 (2007), 897–905.
1797
+ 19
1798
+
1799
+ [15] Del Castillo, J., and Pérez-Casany, M. Weighted poisson distributions for
1800
+ overdispersion and underdispersion situations. Annals of the Institute of Statistical
1801
+ Mathematics 50, 3 (1998), 567–585.
1802
+ [16] Efron, B. Double exponential-families and their use in generalized linear-regression.
1803
+ Journal of the American Statistical Association 81 (1986), 709–721.
1804
+ [17] Fisher, R. A., Corbet, A. S., and Williams, C. B. The relation between the
1805
+ number of species and the number of individuals in a random sample of an animal
1806
+ population. The Journal of Animal Ecology (1943), 42–58.
1807
+ [18] Gómez-Déniz, E. Another generalization of the geometric distribution. Test 19, 2
1808
+ (2010), 399–415.
1809
+ [19] Hassanzadeh, F., and Kazemi, I. Analysis of over-dispersed count data with
1810
+ extra zeros using the Poisson log-skew-normal distribution. Journal of Statistical
1811
+ Computation and Simulation 86, 13 (2016), 2644–2662.
1812
+ [20] Jain, G., and Consul, P. A generalized negative binomial distribution. SIAM
1813
+ Journal on Applied Mathematics 21, 4 (1971), 501–513.
1814
+ [21] Johnson, O.
1815
+ Log-concavity and the maximum entropy property of the poisson
1816
+ distribution. Stochastic Processes and their Applications 117, 6 (2007), 791–802.
1817
+ [22] Makcutek, J. A generalization of the geometric distribution and its application in
1818
+ quantitative linguistics. Romanian Reports in Physics 60, 3 (2008), 501–509.
1819
+ [23] Mark, Y. A. Log-concave probability distributions: Theory and statistical testing.
1820
+ Duke University Dept of Economics Working Paper, 95-03 (1997).
1821
+ [24] Mihoubi, M. Bell polynomials and binomial type sequences. Discrete Mathematics
1822
+ 308, 12 (2008), 2450–2459.
1823
+ [25] Moghimbeigi, A., Eshraghian, M. R., Mohammad, K., and Mcardle,
1824
+ B. Multilevel zero-inflated negative binomial regression modeling for over-dispersed
1825
+ count data with extra zeros. Journal of Applied Statistics 35, 10 (2008), 1193–1202.
1826
+ [26] Moqaddasi Amiri, M., Tapak, L., and Faradmal, J. A mixed-effects least
1827
+ square support vector regression model for three-level count data. Journal of Statis-
1828
+ tical Computation and Simulation 89, 15 (2019), 2801–2812.
1829
+ [27] Nekoukhou, V., Alamatsaz, M., and Bidram, H. A discrete analogue of the
1830
+ generalized exponential distribution.
1831
+ Communications in Statistics - Theory and
1832
+ Methods 41, 11 (2012), 2000–2013.
1833
+ [28] Philippou, A., Georghiou, C., and Philippou, G. A generalized geometric
1834
+ distribution and some of its properties. Statistics and Probability Letters 1, 4 (1983),
1835
+ 171–175.
1836
+ [29] Rodrigues-Motta, M., Pinheiro, H. P., Martins, E. G., Araújo, M. S.,
1837
+ and dos Reis, S. F. Multivariate models for correlated count data. Journal of
1838
+ Applied Statistics 40, 7 (2013), 1586–1596.
1839
+ [30] Sarvi, F., Moghimbeigi, A., and Mahjub, H. GEE-based zero-inflated gener-
1840
+ alized Poisson model for clustered over or under-dispersed count data. Journal of
1841
+ Statistical Computation and Simulation 89, 14 (2019), 2711–2732.
1842
+ 20
1843
+
1844
+ [31] Sellers, K. F., and Shmueli, G. A flexible regression model for count data. The
1845
+ Annals of Applied Statistics (2010), 943–961.
1846
+ [32] Tapak, L., Hamidi, O., Amini, P., and Verbeke, G.
1847
+ Random effect
1848
+ exponentiated-exponential geometric model for clustered/longitudinal zero-inflated
1849
+ count data. Journal of Applied Statistics 47, 12 (2020), 2272–2288.
1850
+ [33] Tripathi, R., Gupta, R., and White, T. Some generalizations of the geometric
1851
+ distribution. Sankhya Ser. B 49, 3 (1987), 218–223.
1852
+ [34] Tüzen, F., Erbaş, S., and Olmuş, H. A simulation study for count data models
1853
+ under varying degrees of outliers and zeros. Communications in Statistics-Simulation
1854
+ and Computation 49, 4 (2020), 1078–1088.
1855
+ [35] Wang, S., Cadigan, N., and Benoît, H. Inference about regression parameters
1856
+ using highly stratified survey count data with over-dispersion and repeated measure-
1857
+ ments. Journal of Applied Statistics 44, 6 (2017), 1013–1030.
1858
+ [36] Wang, Y., Young, L. J., and Johnson, D. E. A UMPU test for comparing means
1859
+ of two negative binomial distributions. Communications in Statistics-Simulation and
1860
+ Computation 30, 4 (2001), 1053–1075.
1861
+ [37] Wongrin, W., and Bodhisuwan, W. Generalized Poisson–Lindley linear model
1862
+ for count data. Journal of Applied Statistics 44, 15 (2017), 2659–2671.
1863
+ 21
1864
+
-tAzT4oBgHgl3EQfhPwa/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
.gitattributes CHANGED
@@ -6901,3 +6901,46 @@ QdFQT4oBgHgl3EQfZTb8/content/2301.13316v1.pdf filter=lfs diff=lfs merge=lfs -tex
6901
  ytE4T4oBgHgl3EQfyA1G/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6902
  5tE2T4oBgHgl3EQf6wj3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6903
  QdFQT4oBgHgl3EQfZTb8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6904
+ NtE2T4oBgHgl3EQfqwjT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6905
+ PNE1T4oBgHgl3EQfaQRP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6906
+ kdE2T4oBgHgl3EQfIgZc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6907
+ X9FLT4oBgHgl3EQfUi8y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6908
+ NdFIT4oBgHgl3EQfdCtI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6909
+ FtFJT4oBgHgl3EQfDSx3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6910
+ 1NFKT4oBgHgl3EQfOi14/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6911
+ j9FIT4oBgHgl3EQfqivf/content/2301.11328v1.pdf filter=lfs diff=lfs merge=lfs -text
6912
+ DNAzT4oBgHgl3EQfGfv5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6913
+ j9FIT4oBgHgl3EQfqivf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6914
+ 9dE4T4oBgHgl3EQfdwwo/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6915
+ edFQT4oBgHgl3EQfkDbq/content/2301.13357v1.pdf filter=lfs diff=lfs merge=lfs -text
6916
+ FtAzT4oBgHgl3EQfUfxu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6917
+ ONE3T4oBgHgl3EQfCAkU/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6918
+ cNAyT4oBgHgl3EQfwfkz/content/2301.00648v1.pdf filter=lfs diff=lfs merge=lfs -text
6919
+ cdAzT4oBgHgl3EQfZvzP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6920
+ ANFIT4oBgHgl3EQf-iyR/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6921
+ WdE0T4oBgHgl3EQfVwBf/content/2301.02268v1.pdf filter=lfs diff=lfs merge=lfs -text
6922
+ i9AyT4oBgHgl3EQf-_q2/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6923
+ FtFJT4oBgHgl3EQfDSx3/content/2301.11433v1.pdf filter=lfs diff=lfs merge=lfs -text
6924
+ 2NE3T4oBgHgl3EQfnwo5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6925
+ S9A0T4oBgHgl3EQfD__i/content/2301.02013v1.pdf filter=lfs diff=lfs merge=lfs -text
6926
+ gtE3T4oBgHgl3EQf3wvW/content/2301.04767v1.pdf filter=lfs diff=lfs merge=lfs -text
6927
+ 1NFAT4oBgHgl3EQfjx2F/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6928
+ T9AyT4oBgHgl3EQfhPid/content/2301.00374v1.pdf filter=lfs diff=lfs merge=lfs -text
6929
+ CdAzT4oBgHgl3EQfGftE/content/2301.01028v1.pdf filter=lfs diff=lfs merge=lfs -text
6930
+ l9FKT4oBgHgl3EQfDy1k/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6931
+ 9dE4T4oBgHgl3EQfdwwo/content/2301.05093v1.pdf filter=lfs diff=lfs merge=lfs -text
6932
+ B9E4T4oBgHgl3EQfFQzj/content/2301.04885v1.pdf filter=lfs diff=lfs merge=lfs -text
6933
+ G9E1T4oBgHgl3EQf_QZd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6934
+ 6tE3T4oBgHgl3EQfpwpy/content/2301.04645v1.pdf filter=lfs diff=lfs merge=lfs -text
6935
+ adAyT4oBgHgl3EQfv_mu/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6936
+ X9FLT4oBgHgl3EQfUi8y/content/2301.12049v1.pdf filter=lfs diff=lfs merge=lfs -text
6937
+ g9E3T4oBgHgl3EQfIQkV/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6938
+ ftE4T4oBgHgl3EQfRQzA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6939
+ jtE1T4oBgHgl3EQf0AX0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6940
+ KNE0T4oBgHgl3EQfigEg/content/2301.02445v1.pdf filter=lfs diff=lfs merge=lfs -text
6941
+ ctFQT4oBgHgl3EQfijaX/content/2301.13350v1.pdf filter=lfs diff=lfs merge=lfs -text
6942
+ wdE0T4oBgHgl3EQfcACj/content/2301.02357v1.pdf filter=lfs diff=lfs merge=lfs -text
6943
+ odAzT4oBgHgl3EQfqv0U/content/2301.01632v1.pdf filter=lfs diff=lfs merge=lfs -text
6944
+ xtAyT4oBgHgl3EQfnvhT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6945
+ 6tE3T4oBgHgl3EQfpwpy/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
6946
+ 6tE0T4oBgHgl3EQffABm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
0NE1T4oBgHgl3EQfkwRo/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
0tFLT4oBgHgl3EQfpC-v/content/tmp_files/2301.12134v1.pdf.txt ADDED
@@ -0,0 +1,553 @@
Underwater Robotics Semantic Parser Assistant

Jake Imyak
Parth Parekh
Cedric McGuire

Abstract

Semantic parsing is a means of taking natural language and putting it in a form that a computer can understand. There have been a multitude of approaches that take natural language utterances and form them into lambda calculus expressions - mathematical functions to describe logic. Here, we experiment with a sequence-to-sequence model to take natural language utterances and convert them to lambda calculus expressions, which can then be parsed and placed in an XML format that can be used by a finite state machine. Experimental results show that we can build a high-accuracy model such that we can bridge the gap between technical and non-technical individuals in the robotics field.
1 Credits

Jake Imyak was responsible for the creation of the 1250 dataset terms and finding the RNN encoder/decoder model. This took 48 hours. Cedric McGuire was responsible for the handling of the output logical form via the implementation of the Tokenizer and Parser. This took 44 hours. Parth Parekh assembled the Python structure for the behavior tree as well as created the actions on the robot. This took 40 hours. All group members were responsible for the research, weekly meetings, presentation preparation, and the paper. In the paper, each group member was responsible for explaining their respective responsibilities, with a collaborative effort on the abstract, credits, introduction, discussion, and references. A huge thanks to our professor, Dr. Huan Sun, for being such a great guide through the world of Natural Language Processing.
2 Introduction

Robotics is a hard field to master. It is one of the few fields which is truly interdisciplinary. This leads to engineers with many different backgrounds working on one product. There are domains within this product that engineers within one subfield may not be able to work with. This leads to some engineers not being able to interact with the product properly without supervision.

As already mentioned, we aim to create an interface for those engineers on the Underwater Robotics Team (UWRT). Some members of UWRT specialize in other fields that are not software engineering. They are not able to create logic for the robot on their own. This leads to members of the team being required to be around when pool testing the robot. This project aims to reduce or remove that component of creating logic for the robot. This project can also be applied to other robots very easily, as all of the main concepts are generalized and only require the robots to implement the actions that are used to train the project.
3 Robotics Background

3.1 Usage of Natural Language in Robotics

Robots are difficult to produce logic for. One big problem that most robotics teams have is enabling non-technical members to produce logical forms for the robot to understand. Those who do not code are not able to manually create logic quickly.
3.2 Finite State Machines

One logical form that is common in the robotics space is a Finite State Machine (FSM). FSMs are popular because they allow a representation to be completely general while encoding the logic directly into the logical form. This means things such as control flow, fallback states, and sequences can be directly encoded into the logical form itself. As illustrated in Figure 1, we can easily encode logic into this representation. Since it is easily generified, an FSM can be used across any robot which implements the commands that are contained within it.

arXiv:2301.12134v1 [cs.CL] 28 Jan 2023

Figure 1: A FSM represented in BehaviorTree.CPP (Fanconti, 2020)
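To make the sequence/fallback idea concrete, here is a tiny illustrative sketch in Python (our illustration, not the BehaviorTree.CPP API): a sequence node runs its children in order and stops at the first failure.

```python
# Illustrative sketch only -- not the BehaviorTree.CPP API. A Sequence
# node runs its children in order and stops at the first failure.
def sequence(*children):
    def run(state):
        for child in children:
            if not child(state):
                return False   # fallback behaviour: stop on first failure
        return True
    return run

def action(name):
    def run(state):
        state.append(name)     # record that this action executed
        return True
    return run

mission = sequence(action("dive"), action("move_forward"), action("surface"))
log = []
succeeded = mission(log)
```

The action names here are hypothetical; a real tree would dispatch to the robot's implemented commands.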
3.3 Underwater Robotics Team Robot

Since 2016, the Underwater Robotics Team (UWRT) at The Ohio State University has iterated on the foundations of a single Autonomous Underwater Vehicle (AUV) each year to compete at the RoboSub competition. Breaking from tradition, the team decided to take the 2019-2021 school years to design and build a new vehicle to compete in the 2021 competition. Featuring an entirely new hull design, refactored software, and an improved electrical system, UWRT has created its brand-new vehicle, Tempest (Parekh, 2021).

3.3.1 Vehicle

Tempest is a 6 Degree of Freedom (DOF) AUV with vectored thrusters for linear axis motion and direct drive heave thrusters. This allows the robot to achieve any orientation in all 6 degrees of freedom [X, Y, Z, Roll, Pitch, Yaw].

Figure 2: A render of Tempest
116
+ 3.3.2
117
+ Vehicle Experience
118
With this vehicle, the team has focused on creating a fully fleshed out experience. This includes commanding and controlling the vehicle. One big focus of the team was to make sure that any member, technical or non-technical, was able to manage and operate the robot successfully.
3.3.3 Task Code System
A step to fulfill this focus was to change the vehicle's task code system to use the FSM representation. This is done through the library BehaviorTree.CPP (Faconti, 2020). This generic FSM representation allows Tempest to use generified logical forms that can be applied to ANY robotic plant as long as that plant implements those commands. This library also creates and maintains a Graphical User Interface (GUI) which allows for visual tracking and creation of FSM trees. Any tree created by the GUI is stored within an XML file to preserve the tree structure. The structure of the XML output syntax is explained within the parser section.
4 Data
A dataset had to be created in order to map natural language utterances to lambda calculus expressions that a parser would be able to recognize and convert to a finite state machine. For reference, the following datasets were considered: the Geoquery set (Zettlemoyer, 2012) and the General Purpose Service Robotics commands set (Walker, 2019). The Geoquery dataset provided a foundation for a grammar to follow for the lambda calculus expressions such that consistency would hold for our parser. Moreover, the GPSR dataset provided an ample number of examples and different general purpose robotics commands that could be extended within the dataset we curated.

The dataset takes the following form: a natural language utterance followed by a tab, then a lambda calculus expression. The lambda calculus expression is of the form ( seq ( action0 ( $0 ( parameter ) ) ) ... ( actionN ( $N ( parameter ) ) ). The power of this expression is that it can be extended to N actions in a given sequence, meaning that a user can hypothetically type in a very complex string of actions and an expression will be constructed for that sequence. Moreover, the format of our dataset allows it to be extended to any type of robotics
command that a user may have. They just need to include examples in the train set with said action and the model will consider it.
The formal grammar is:

< seq > : ( seq ( action ) [ ( action ) ] )
< action > : actionName [ ( parameter ) ]
< parameter > : paramName λ ( $n ( n ) )
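As a concrete illustration of this grammar, the helper below assembles a well-formed < seq > expression from (actionName, parameter) pairs; the action names and values are made-up examples, not entries from our dataset:

```python
def make_seq(actions):
    """Build a ( seq ... ) expression, numbering the lambda variables
    $0..$N in the order the actions appear."""
    parts = [f"( {name} ( ${i} ( {value} ) ) )"
             for i, (name, value) in enumerate(actions)]
    return "( seq " + " ".join(parts) + " )"

# An utterance like "say hello then move forward one meter" might map to:
expression = make_seq([("say", "hello"), ("move", "1")])
```

Because the sequence rule repeats, the same construction scales to arbitrarily long action chains.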
The dataset we created had 1000 entries in the training dataset and 250 entries in the test dataset. The size of the vocabulary is |V| = 171 for the input text and |V| = 46 for the output text, which is similar in vocabulary size to the GeoQuery dataset. The expressions currently increase in complexity in terms of the number of actions within the sequence. A way to extend the complexity of the expressions would be to make the < seq > tag a nonterminal to chain together nested sequences. The actions within our dataset currently are as follows: move (params: x, y, z, roll, pitch, yaw), flatten (params: num), say (params: words), clean (params: obj), bring (params: val), find (params: val), goal, and gate. The most complex sequence is a string of seven subsequent actions.
5 Model

5.1 Seq2Seq Model
We decided to use the model presented in "Language to Logical Form with Neural Attention" (Dong and Lapata, 2016). There was an implementation on GitHub (AvikDelta, 2018) utilizing Google's TensorFlow library to handle all implementation details of the model. The part of the paper that we used was the Sequence to Sequence model with an attention mechanism.
Figure 3: Process of how an input natural language utterance is encoded and decoded via recurrent neural networks and an attention mechanism to find the utterance's respective logical form. (Dong and Lapata, 2016)
The model interprets both the input and output of the network as sequences of information. This process is represented in Figure 3: input is passed to the encoder, then through the decoder, and by using the attention mechanism we get an output that is a lambda calculus expression. Both of these sequences can be represented as L-layer recurrent neural networks with long short-term memory (LSTM) units that take in the tokens from the sentences and expressions we have. The model creates 200 units (a number that can be changed to increase or decrease the size of the network) of both LSTM cells and GRU cells. The GRU cells are used to help compensate for the vanishing gradient problem. These LSTM and GRU cells are used in the input sequence to encode x1, ..., xq into vectors. These vectors then form the hidden state at the beginning of the sequence in the decoder. In the decoder, the topmost LSTM cell predicts the t-th output token by taking the softmax of the parameter matrix and the vector from the LSTM cell multiplied by a one-hot vector, used to compute the probability of the output from the probability distribution. The softmax used here is sampled softmax, which only takes into account a subset of our vocabulary V rather than everything, to help alleviate the difficulty of computing the softmax over a large vocabulary.
5.2 Attention Mechanism
The model also implemented an attention mechanism to help with the predicted values. The motivation behind the attention mechanism is to use the input sequence in the decoding process, since it is relevant information for the prediction of the output token. To achieve this, a context vector is created which is the weighted sum of the hidden vectors in the encoder. Then this context vector is used as context to find the probability of generating a given output.
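The weighted sum can be sketched in a few lines of NumPy. Dot-product scoring is a simplification here (the paper learns the scoring function); the point is only how the softmax weights mix the encoder's hidden vectors into a context vector:

```python
import numpy as np

def context_vector(decoder_state, encoder_states):
    """Weighted sum of encoder hidden vectors, weighted by a softmax
    over similarity scores with the current decoder state."""
    scores = encoder_states @ decoder_state        # one score per input position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax -> attention weights
    return weights @ encoder_states                # the context vector

h = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # toy encoder hidden states
s = np.array([1.0, 0.0])                             # toy decoder state
context = context_vector(s, h)
```

With a zero decoder state the weights are uniform and the context vector reduces to the mean of the encoder states, which makes the weighting easy to sanity-check.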
5.3 Training
To train the model, the objective is to maximize the likelihood of predicting the correct logical form given some natural language expression. Hence, the goal is to minimize the negative log probability of predicting the logical form given the natural language utterance q, summed over all training pairs. The model used the RMSProp algorithm, which is an extension of the Adagrad optimizer but utilizes learning rate adaptation. Dropout is also used for regularization, which helps prevent overfitting on smaller datasets. We trained for 90 epochs.
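In symbols (notation assumed here: q is the utterance, l its logical form, D the set of training pairs, and θ the model parameters), the objective described above is:

```latex
\min_{\theta} \; -\sum_{(q,\, l) \in D} \log p\left(l \mid q;\, \theta\right)
```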
5.4 Inference
To perform inference, we find the argmax of the probability of a candidate output given the natural
language utterance. Since it is not possible to find the probability of all possible outputs, the probability is put in a form such that a beam search can be employed to generate each individual token of the lambda calculus expression to get the appropriate output.
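A minimal beam search looks like the sketch below. A real decoder conditions each step's distribution on the prefix generated so far; this toy version takes the per-step distributions as given and only illustrates the top-k bookkeeping:

```python
import math

def beam_search(step_probs, beam_width=2):
    """Decode over a precomputed table of per-step token probabilities:
    step_probs[t] maps token -> P(token at step t). Keeps only the
    beam_width most probable partial sequences at every step."""
    beams = [([], 0.0)]  # (token sequence, log probability)
    for dist in step_probs:
        candidates = []
        for seq, logp in beams:
            for tok, p in dist.items():
                candidates.append((seq + [tok], logp + math.log(p)))
        # prune to the most probable partial sequences
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

steps = [{"(": 0.9, "seq": 0.1}, {"seq": 0.8, "(": 0.2}]
best = beam_search(steps)
```

With `beam_width=2`, low-probability prefixes are pruned at every step instead of enumerating all possible outputs.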
6 Results
With the default parameters set, the Sequence to Sequence model achieved 86.7% accuracy for exact matches on the test dataset. This is consistent with the model's performance on the Geoquery dataset, where it achieved 83.9% accuracy. The test dataset contained 250 entries of utterances similar to the train dataset, of various complexities ranging anywhere from one to six actions being performed. There are other methods of evaluation we would like to look into in the future, such as computing an F1 score rather than solely relying on exact logical form matching.

This accuracy for exact logical forms is very important when using the parser. It allows the FSM representation to be easily and quickly built. We were able to build the XML representation and run basic commands on the robot, with the model maintaining the order we said them in.
7 Logical Form Parser
The logical form output of our model is sent to a custom parser. The goal of this parser is to translate the output form into BehaviorTree XML files, which the robot is able to read in as a finite state machine.
7.1 Tokenizer
The Tokenizer comprises the initial framework of the parser. It accepts the raw logical form as a String object and outputs a set of tokens in a Python List. These tokens are obtained by looking for separator characters (in our case, a space) present in the logical form and splitting on them into an array-like structure. The Tokenizer permits custom action, parameter, and variable names from the logical form input, thus allowing ease of scalability in implementing new robot actions. By its nature, our model's output cannot contain syntactically incorrect logical forms; thus our implementation does not check for invalid tokens and assumes all input is correct. The Tokenizer is stored in a static Singleton class such that it can be accessed anywhere in the program once initialized. It keeps track of the current token (using getToken()) and has an implementation to move forward to the next token (skipToken()). This functionality is important for the object-oriented approach of the parser, discussed in the next section.
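A condensed sketch of that interface is shown below (the real implementation differs in details; only getToken() and skipToken() are named above):

```python
class Tokenizer:
    """Splits a raw logical form on a separator and walks the token stream.
    A sketch of the interface described above, not the project's exact code."""
    _instance = None  # stored as a singleton once initialized

    def __init__(self, logical_form, separator=" "):
        self.tokens = logical_form.split(separator)
        self.position = 0
        Tokenizer._instance = self

    def getToken(self):
        """Return the current token without consuming it."""
        return self.tokens[self.position]

    def skipToken(self):
        """Advance to the next token."""
        self.position += 1

tok = Tokenizer("( seq ( say ( $0 ( hello ) ) ) )")
first = tok.getToken()   # the opening parenthesis
tok.skipToken()
second = tok.getToken()  # the "seq" keyword
```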
7.2 Parsing Lambda Calculus Expressions
The output tokens from the Tokenizer must be interpreted into a proper Python form before they are staged to be turned into XML-formatted robot-ready trees. This is the function of the middle step of the parser, in which a tree of Python objects is built. The parser utilizes an object-oriented approach. As such, we include three objects: Sequence, Action, and Parameter, with each corresponding to an individual member of our custom grammar. The objects orient themselves into a short 3-deep tree, consisting of a Sequence root, Action children, and Parameter grand-children. Each object has its own parse() method that will advance the tokenizer, validate the input structure, and assemble itself into a Python structure to be staged into an XML file. The validations are enforced through our grammar definitions in Section 4.
7.2.1 Sequence Object
The Sequence object is the first object initialized by the parser, as well as the root of our action tree. Each Sequence is composed of a list of 0 or more child actions to be executed in the order they appear. The parseSequence() method will parse each individual action using parseAction(), all the while assembling a list of child actions for this Sequence object. As of now, Sequence objects are unable to be their own children (i.e., nesting Sequences is not permitted). However, if required, the Sequence object's parseSequence() method can be modified to recognize a nested action sequence and recursively parse it.
7.2.2 Action Object
Action objects define the title of the action being performed. Similar to Sequence, Action objects have an internally stored list, but with Parameter objects as children. There may be any number of parameters, including none. When the parseAction() method is called, the program validates the tokens and calls parseParameter() on each Parameter child identified by the action.
7.2.3 Parameter Object
The Parameter object is a simple object that stores a parameter's name and value. The parser does not check what the name of the parameter is, nor does it have any restrictions on what the value can be. parseParameter() searches through the tokens for these two items and stores them as attributes of the Parameter object. This implementation of Parameter is scalable with robot parameters and allows any new configuration of parameter to pass through without any changes to the parser as a whole. If a new parameter is needed for the robot, it only has to be trained into the Seq2Seq model on the frontend and into the robot itself on the backend; the Parameter object should take care of it all the same.
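Taken together, the three objects amount to a small recursive descent over the token stream. The sketch below condenses the idea for the Section 4 grammar; the class and method bodies are simplified stand-ins for the project's actual code (token handling is inlined instead of going through the Tokenizer singleton):

```python
class Parameter:
    def __init__(self, name, value):
        self.name, self.value = name, value

class Action:
    def __init__(self, name, parameters):
        self.name, self.parameters = name, parameters

class Sequence:
    def __init__(self, actions):
        self.actions = actions

def parse_sequence(tokens):
    """Parse '( seq ( action ... ) ... )' into a Sequence tree."""
    assert tokens.pop(0) == "(" and tokens.pop(0) == "seq"
    actions = []
    while tokens[0] == "(":
        actions.append(parse_action(tokens))
    assert tokens.pop(0) == ")"
    return Sequence(actions)

def parse_action(tokens):
    assert tokens.pop(0) == "("
    name = tokens.pop(0)
    parameters = []
    while tokens[0] == "(":
        parameters.append(parse_parameter(tokens))
    assert tokens.pop(0) == ")"
    return Action(name, parameters)

def parse_parameter(tokens):
    # '( $n ( value ) )' -- the lambda variable, then its value
    assert tokens.pop(0) == "("
    name = tokens.pop(0)
    assert tokens.pop(0) == "("
    value = tokens.pop(0)
    assert tokens.pop(0) == ")" and tokens.pop(0) == ")"
    return Parameter(name, value)

tree = parse_sequence(
    "( seq ( say ( $0 ( hello ) ) ) ( move ( $1 ( 1 ) ) ) )".split())
```

The assertions play the role of the grammar validations: a token in the wrong place halts the parse immediately.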
7.3 BehaviorTree Output
In the end, the parser outputs an XML file which can be read into BehaviorTree.CPP (Faconti, 2020). An example of this file structure is shown in Figure 4.

Figure 4: A FSM that was generated from test input through our RNN

This file structure is useful because it encodes the sequence of actions within it. The leaves of the sequence are always in order. The tree can also encode subtrees into the sequence, which we have not implemented yet.
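The last step can be done with the standard library's ElementTree. The sketch below emits the same shape of document as Figure 4; the exact attribute names the robot expects (ID, words, x, ...) are assumptions for illustration:

```python
import xml.etree.ElementTree as ET

def to_behavior_tree_xml(actions, tree_id="test"):
    """Emit a BehaviorTree.CPP-style XML document for a flat action
    sequence. `actions` is a list of (action_name, {param: value}) pairs."""
    root = ET.Element("root", main_tree_to_execute=tree_id)
    bt = ET.SubElement(root, "BehaviorTree", ID=tree_id)
    seq = ET.SubElement(bt, "Sequence", name="root_seq")
    for name, params in actions:
        ET.SubElement(seq, "Action", ID=name, **params)
    return ET.tostring(root, encoding="unicode")

xml = to_behavior_tree_xml([("say", {"words": "hello"}), ("move", {"x": "1"})])
```

Writing this string to a file yields a tree that preserves the spoken order of the actions as the order of the Sequence's leaves.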
8 Discussion

8.1 Summary
We learned that semantic parsing is an excellent tool for bridging the gap between technical and non-technical individuals. The power of semantic parsing with robotics is that any human can automate any task just by using their words. Our dataset is written in such a way that, just by extending the entries with another robot's tasks, that robot's actions can be automated as well, provided the robot uses a behavior tree to perform actions.
8.2 Future Plans
Future plans with this project would be to expand the logical flow that can be implemented with BehaviorTree.CPP. As an FSM library, BehaviorTree.CPP implements many more helper functions to create more complicated FSMs. These include things like if statements, fallback nodes, and subtrees. This would be a valid expansion of our RNN's logical output, and with more time we could support the full range of features from BehaviorTree.CPP.

We would also like to implement a front-end user interface to make this service more accessible to anyone who is not technical. Right now, the only means of running our program is through the command line, which is not suitable for individuals who are nontechnical. Moreover, including a speech-to-text component in this project would elevate it, since an individual would be able to directly tell a robot what commands to do, similar to a human.
8.3 Source Code

You can view the source code here: https://github.com/jrimyak/parse_seq2seq
References

Vinyals, O., Kaiser, L., Koo, T., Petrov, S., Sutskever, I. & Hinton, G. Grammar as a Foreign Language. (2015).

Dong, L. & Lapata, M. Language to Logical Form with Neural Attention. (2016).

Yao, Z., Tang, Y., Yih, W., Sun, H. & Su, Y. An Imitation Game for Learning Semantic Parsers from User Interaction. Proceedings Of The 2020 Conference On Empirical Methods In Natural Language Processing (EMNLP). (2020).

Yao, Z., Su, Y., Sun, H. & Yih, W. Model-based Interactive Semantic Parsing: A Unified Framework and A Text-to-SQL Case Study. Proceedings Of The 2019 Conference On Empirical Methods In Natural Language Processing And The 9th International Joint Conference On Natural Language Processing (EMNLP-IJCNLP). pp. 5450-5461 (2019).

Walker, N., Peng, Y. & Cakmak, M. Neural Semantic Parsing with Anonymization for Command Understanding in General-Purpose Service Robots. Lecture Notes In Computer Science. pp. 337-350 (2019).

Dukes, K. Supervised Semantic Parsing of Robotic Spatial Commands. SemEval-2014 Task 6. (2014).

Walker, N. GPSR Commands Dataset. (Zenodo, 2019), https://zenodo.org/record/3244800.
[Figure 4, test.xml:
<root main_tree_to_execute="test">
  <BehaviorTree ID="test">
    <Sequence name="root_seq">
      <Action ID="say" words="s0"/>
      <Action ID="move" X="s1"/>
    </Sequence>
  </BehaviorTree>
</root>]
Avikdelta. parse_seq2seq. GitHub Repository. (2018), https://github.com/avikdelta/parse_seq2seq.

Faconti, D. BehaviorTree - Groot. GitHub Repository. (2020), https://github.com/BehaviorTree/Groot.

Faconti, D. BehaviorTree.CPP. GitHub Repository. (2020), https://github.com/BehaviorTree/BehaviorTree.CPP.

Hwang, W., Yim, J., Park, S. & Seo, M. A Comprehensive Exploration on WikiSQL with Table-Aware Word Contextualization. (2019).

OSU-UWRT. Riptide Autonomy. GitHub Repository. (2021), https://github.com/osu-uwrt/riptide_autonomy.

Parekh, P., et al. The Ohio State University Underwater Robotics Tempest AUV Design and Implementation. (2021), https://robonation.org/app/uploads/sites/4/2021/07/RoboSub_2021_The-Ohio-State-U_TDR-compressed.pdf.

Zettlemoyer, L. & Collins, M. Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars. (2012).
+
0tFLT4oBgHgl3EQfpC-v/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,257 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf,len=256
2
+ page_content='Underwater Robotics Semantic Parser Assistant Jake Imyak imyak.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
3
+ page_content='1@osu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
4
+ page_content='edu Parth Parekh parekh.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
5
+ page_content='86@osu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
6
+ page_content='edu Cedric McGuire mcguire.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
7
+ page_content='389@osu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
8
+ page_content='edu Abstract Semantic parsing is a means of taking natu- ral language and putting it in a form that a computer can understand.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
9
+ page_content=' There has been a multitude of approaches that take natural lan- guage utterances and form them into lambda calculus expressions - mathematical functions to describe logic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
10
+ page_content=' Here, we experiment with a sequence to sequence model to take natural language utterances, convert those to lambda calculus expressions, when can then be parsed, and place them in an XML format that can be used by a finite state machine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
11
+ page_content=' Experimental results show that we can have a high accuracy model such that we can bridge the gap between technical and nontechnical individuals in the robotics field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
12
+ page_content=' 1 Credits Jake Imyak was responsible for the creation of the 1250 dataset terms and finding the RNN en- coder/decoder model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
13
+ page_content=' This took 48 Hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
14
+ page_content=' Cedric McGuire was responsible for the handling of the output logical form via the implementation of the Tokenizer and Parser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
15
+ page_content=' This took 44 Hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
16
+ page_content=' Parth Parekh assembled the Python structure for behavior tree as well as created the actions on the robot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
17
+ page_content=' This took 40 Hours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
18
+ page_content=' All group members were responsi- ble for the research, weekly meetings, presentation preparation, and the paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
19
+ page_content=' In the paper, each group member was responsible for explaining their re- spective responsibilities with a collaborative effort on the abstract, credits, introduction, discussion, and references.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
20
+ page_content=' A huge thanks to our Professor Dr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
21
+ page_content=' Huan Sun for being such a great guide through the world of Natural Language Processing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
22
+ page_content=' 2 Introduction Robotics is a hard field to master.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
23
+ page_content=' Its one of the few fields which is truly interdisciplinary.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
24
+ page_content=' This leads to engineers with many different backgrounds work- ing on one product.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
25
+ page_content=' There are domains within this product that engineers within one subfield may not be able to work with.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
26
+ page_content=' This leads to some engineers not being able to interact with the product properly without supervision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
27
+ page_content=' As already mentioned, we aim to create an interface for those engineers on the Underwa- ter Robotics Team (UWRT).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
28
+ page_content=' Some members on UWRT specialize in other fields that are not soft- ware engineering.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
29
+ page_content=' They are not able to create logic for the robot on their own.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
30
+ page_content=' This leads to members of the team that are required to be around when pool testing the robot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
31
+ page_content=' This project wants to reduce or remove that component of creating logic for the robot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
32
+ page_content=' This project can also be applied to other robots very easily as all of the main concepts are generalized and only require the robots to imple- ment the actions that are used to train the project.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
33
+ page_content=' 3 Robotics Background 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
34
+ page_content='1 Usage of Natural Language in Robotics Robots are difficult to produce logic for.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
35
+ page_content=' One big problem that most robotics teams have is having non-technical members produce logical forms for the robot to understand.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
36
+ page_content=' Those who do not code are not able to manually create logic quickly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
37
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
38
+ page_content='2 Finite State Machines One logical form that is common in the robotics space is a Finite State Machine (FSM).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
39
+ page_content=' FSMs are popular because they allow a representation to be completely general while encoding the logic di- rectly into the logical form.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
40
+ page_content=' This means things such as control flow, fallback states, and sequences to be directly encoded into the logical form itself.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
41
+ page_content=' As illustrated in Figure 1, we can easily encode logic into this representation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
42
+ page_content=' Since it easily generi- fied, FSM’s can be used across any robot which im- arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
43
+ page_content='12134v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
44
+ page_content='CL] 28 Jan 2023 Figure 1: A FSM represented in Behaviortree.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
45
+ page_content='CPP (Fanconti, 2020) (Fanconti, 2020) plements the commands that are contained within it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
46
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
47
+ page_content='3 Underwater Robotics Team Robot Since 2016, The Underwater Robotics Team (UWRT) at The Ohio State University has iterated on the foundations of a single Autonomous Under- water Vehicle (AUV) each year to compete at the RoboSub competition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
48
+ page_content=' Breaking from tradition, the team decided to take the 2019-2021 school years to design and build a new vehicle to compete in the 2021 competition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
49
+ page_content=' Featuring an entirely new hull design, refactored software, and an improved electrical system, UWRT has created its brand-new vehicle, Tempest.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
50
+ page_content=' (Parekh, 2021) 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
51
+ page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
52
+ page_content='1 Vehicle Tempest is a 6 Degree of Freedom (DOF) AUV with vectored thrusters for linear axis motion and direct drive heave thrusters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
53
+ page_content=' This allows the robot to achieve any orientation in all 6 Degrees of freedom [X, Y , Z, Roll, Pitch, Yaw].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFLT4oBgHgl3EQfpC-v/content/2301.12134v1.pdf'}
54
Figure 2: A render of Tempest.

3.3.2 Vehicle Experience

With this vehicle, the team has focused on creating a fully fleshed-out experience, including commanding and controlling the vehicle. One major focus of the team was to make sure that any member, technical or non-technical, was able to manage and operate the robot successfully.
3.3.3 Task Code System

A step toward fulfilling this focus was to change the vehicle's task-code system to use an FSM representation. This is done through the library BehaviorTree.CPP (Fanconti, 2020). This generic FSM representation allows Tempest to use generified logical forms that can be applied to any robotic plant, as long as that plant implements those commands. The library also creates and maintains a Graphical User Interface (GUI) which allows for visual tracking and creation of FSM trees. Any tree created by the GUI is stored within an XML file to preserve the tree structure. The structure of the XML output syntax is explained in the parser section.
4 Data

A dataset was created in order to map natural language utterances to lambda calculus expressions that a parser can recognize and convert to a finite state machine. For reference, the following datasets were considered: the Geoquery set (Zettlemoyer, 2012) and the General Purpose Service Robotics (gpsr) commands set (Walker, 2019). The Geoquery dataset provided a foundation for a grammar for the lambda calculus expressions, so that consistency would hold for our parser. Moreover, the gpsr dataset provided an ample number of examples of different general-purpose robotics commands that could be extended within the dataset we curated.

The dataset takes the following form: a natural language utterance, followed by a tab, then a lambda calculus expression. The lambda calculus expression is of the form ( seq ( action0 ( $0 ( parameter ) ) ) ... ( actionN ( $N ( parameter ) ) ) ). The power of this expression is that it can be extended to N actions in a given sequence, meaning that a user can hypothetically type in a very complex string of actions and an expression will be constructed for that sequence. Moreover, the format of our dataset allows it to be extended to any type of robotics command a user may have: they need only include examples with that action in the train set, and the model will consider it.
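For illustration, two hypothetical entries in this format might look like the following (the utterances and parameter values are invented for illustration, not drawn from the actual dataset; the column separator is a tab):

```
move to the gate	( seq ( move ( $0 ( gate ) ) ) )
say hello then find the buoy	( seq ( say ( $0 ( hello ) ) ) ( find ( $1 ( buoy ) ) ) )
```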
The formal grammar is:

< seq > : ( seq ( action ) [ ( action ) ] )
< action > : actionName [ ( parameter ) ]
< parameter > : paramName λ ( $n ( n ) )

The dataset we created has 1000 entries in the training set and 250 entries in the test set. The vocabulary size is |V| = 171 for the input text and |V| = 46 for the output text, which is similar in vocabulary size to the Geoquery dataset. The expressions currently increase in complexity in terms of the number of actions within the sequence; a way to extend their complexity would be to make the < seq > tag a nonterminal, chaining together nested sequences. The actions within our dataset are currently as follows: move (params: x, y, z, roll, pitch, yaw), flatten (params: num), say (params: words), clean (params: obj), bring (params: val), find (params: val), goal, and gate. The most complex sequence is a string of seven subsequent actions.
5 Model

5.1 Seq2Seq Model

We decided to use the model presented in "Language to Logical Form with Neural Attention" (Dong, 2016). There is an implementation on GitHub (AvikDelta, 2018) utilizing Google's TensorFlow library to handle the implementation details of the model. The part of the paper that was implemented is the sequence-to-sequence model with an attention mechanism.

Figure 3: Process of how input natural language is encoded and decoded via recurrent neural networks and an attention mechanism to find the utterance's respective logical form. (Dong and Lapata, 2016)

The model interprets both the input and output of the network as sequences of information. This process is represented in Figure 3: input is passed to the encoder, then through the decoder, and, using the attention mechanism, we obtain an output that is a lambda calculus expression. Both of these sequences can be represented as L-layer recurrent neural networks with long short-term memory (LSTM) that take in the tokens from the sentences and the expressions we have. The model creates 200 units (configurable to increase or decrease the size of the network) of both LSTM cells and GRU cells; the GRU cells help compensate for the vanishing-gradient problem. These LSTM and GRU cells are used in the input sequence to encode x1, ..., xq into vectors, and these vectors form the initial hidden state of the decoder. In the decoder, the topmost LSTM cell predicts the t-th output token by taking the softmax of the parameter matrix and the vector from the LSTM cell, multiplied by a one-hot vector, to compute the probability of the output from the probability distribution. The softmax used here is sampled softmax, which only takes into account a subset of our vocabulary V rather than the whole vocabulary, to alleviate the difficulty of computing a softmax over a large vocabulary.
5.2 Attention Mechanism

The model also implements an attention mechanism to help with the predicted values. The motivation behind the attention mechanism is to use the input sequence in the decoding process, since it carries relevant information for the prediction of the output token. To achieve this, a context vector is created as the weighted sum of the hidden vectors in the encoder. This context vector is then used as context to find the probability of generating a given output.
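As a minimal sketch of this step (illustrative only; the actual model uses TensorFlow and learned alignment parameters rather than raw dot products), the context vector can be computed as a softmax-weighted sum of encoder hidden states:

```python
import numpy as np

def attention_context(encoder_states: np.ndarray, decoder_state: np.ndarray) -> np.ndarray:
    """Compute a context vector as the weighted sum of encoder hidden states.

    encoder_states: (q, d) array, one hidden vector per input token.
    decoder_state:  (d,) current decoder hidden vector.
    """
    # Dot-product alignment scores between the decoder state and each encoder state
    scores = encoder_states @ decoder_state            # shape (q,)
    # Softmax normalization (shifted by the max for numerical stability)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: attention-weighted sum of the encoder states
    return weights @ encoder_states                    # shape (d,)

# Toy example: 4 input tokens, hidden size 3
H = np.random.default_rng(0).normal(size=(4, 3))
context = attention_context(H, H[2])
print(context.shape)  # (3,)
```

In the full model this context vector is concatenated with the decoder state before predicting the next token.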
5.3 Training

To train the model, the objective is to maximize the likelihood of predicting the correct logical form given a natural language expression. Hence, the goal is to minimize the sum of the negative log probability of predicting logical form a given natural language utterance q, summed over all training pairs. The model uses the RMSProp algorithm, an extension of the Adagrad optimizer that utilizes learning-rate adaptation. Dropout is also used for regularization, which helps prevent overfitting on a smaller dataset. We performed 90 epochs.
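Concretely, writing q for a natural language utterance and a for its logical form, this objective can be stated as minimizing the negative log-likelihood over the training pairs, with the form decomposed token by token (following Dong and Lapata, 2016):

```latex
\min_{\theta} \; -\sum_{(q,a)} \log p(a \mid q; \theta),
\qquad
p(a \mid q) = \prod_{t=1}^{|a|} p(a_t \mid a_{<t}, q)
```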
5.4 Inference

To perform inference, we find the argmax of the probability of a candidate output given the natural language utterance. Since it is not possible to find the probability of all possible outputs, the probability is put in a form such that a beam search can be employed to generate each individual token of the lambda calculus expression to obtain the appropriate output.
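A minimal sketch of beam search over token probabilities (illustrative; the real decoder scores tokens with the LSTM rather than a fixed lookup) is:

```python
import math
from typing import Callable, Dict, List, Tuple

def beam_search(step_probs: Callable[[Tuple[str, ...]], Dict[str, float]],
                beam_width: int, max_len: int, eos: str = "</s>") -> List[str]:
    """Return the highest-scoring token sequence under a per-step distribution.

    step_probs(prefix) -> {token: probability} for the next token given a prefix.
    Hypothesis scores are summed log-probabilities.
    """
    beams: List[Tuple[float, Tuple[str, ...]]] = [(0.0, ())]
    for _ in range(max_len):
        candidates = []
        for score, prefix in beams:
            if prefix and prefix[-1] == eos:       # finished hypotheses carry over
                candidates.append((score, prefix))
                continue
            for token, p in step_probs(prefix).items():
                candidates.append((score + math.log(p), prefix + (token,)))
        # Keep only the beam_width best partial hypotheses
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
    return list(max(beams, key=lambda c: c[0])[1])

# Toy distribution that prefers opening a sequence, then stops
def toy(prefix):
    if not prefix:
        return {"(": 0.9, "seq": 0.1}
    if prefix[-1] == "(":
        return {"seq": 0.8, "move": 0.2}
    return {"</s>": 1.0}

print(beam_search(toy, beam_width=2, max_len=4))  # ['(', 'seq', '</s>']
```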
6 Results

With the default parameters set, the sequence-to-sequence model achieved 86.7% accuracy for exact matches on the test dataset. This is consistent with the model's performance on the Geoquery dataset, where it achieved 83.9% accuracy. The test dataset contained 250 entries of utterances similar to those in the train dataset, with complexities ranging anywhere from one to six actions being performed. There are other evaluation methods we would like to look into in the future, such as computing an F1 score rather than relying solely on exact logical-form matching. High accuracy on exact logical forms is important when using the parser: it allows the FSM representation to be built easily and quickly. We were able to build the XML representation and run basic commands on the robot, with the model maintaining the order in which we said them.
7 Logical Form Parser

The logical form output of our model is sent to a custom parser. The goal of this parser is to translate the output form into BehaviorTree XML files, which the robot is able to read in as a finite state machine.
7.1 Tokenizer

The Tokenizer comprises the initial stage of the parser. It accepts the raw logical form as a String object and outputs a set of tokens in a Python List. These tokens are obtained by looking for separator characters (in our case, a space) in the logical form and splitting it into an array-like structure. The Tokenizer permits custom action, parameter, and variable names in the logical form input, thus allowing ease of scalability in implementing new robot actions. By its nature, our model's output cannot be a syntactically incorrect logical form, so our implementation does not check for invalid tokens and assumes all input is correct. The Tokenizer is stored in a static Singleton class so that it can be accessed anywhere in the program once initialized. It keeps track of the current token (via getToken()) and provides a method to advance to the next token, skipToken(). This functionality is important for the object-oriented approach of the parser, discussed in the next section.
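A minimal sketch of such a tokenizer (getToken() and skipToken() follow the text; the Singleton wiring here is our own illustration of the described design) might look like:

```python
class Tokenizer:
    """Space-delimited tokenizer for logical forms, held as a single shared instance."""
    _instance = None

    def __new__(cls, logical_form: str = ""):
        # Singleton: always hand back the one shared instance
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self, logical_form: str = ""):
        if logical_form:
            # Split on the separator character (a space); no validity checking,
            # since the model does not emit syntactically incorrect forms
            self.tokens = logical_form.split(" ")
            self.pos = 0

    def getToken(self) -> str:
        return self.tokens[self.pos] if self.pos < len(self.tokens) else ""

    def skipToken(self) -> None:
        self.pos += 1

tok = Tokenizer("( seq ( move ( $0 ( 5 ) ) ) )")
print(tok.getToken())   # (
tok.skipToken()
print(tok.getToken())   # seq
```

Because the instance is shared, each parse method can consume tokens without the list being threaded through every call.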
7.2 Parsing Lambda Calculus Expressions

The output tokens from the Tokenizer must be interpreted into a proper Python form before they are staged to be turned into XML-formatted, robot-ready trees. This is the function of the middle step of the parser, in which a tree of Python objects is built. The parser takes an object-oriented approach. As such, we include three objects: Sequence, Action, and Parameter, each corresponding to an individual member of our custom grammar. The objects orient themselves into a shallow three-deep tree, consisting of a Sequence root, Action children, and Parameter grandchildren. Each object has its own parse() method that advances the tokenizer, validates the input structure, and assembles the objects into a Python structure to be staged into an XML file. The validations are enforced through our grammar definitions in Section 4.
7.2.1 Sequence Object

The Sequence object is the first object initialized by the parser, and is the root of our action tree. Each Sequence is composed of a list of zero or more child Actions to be executed in the order they appear. The parseSequence() method parses each individual action using parseAction(), all the while assembling a list of child Actions for the Sequence object. As of now, Sequence objects are unable to be their own children (i.e., nesting Sequences is not permitted). However, if required, the Sequence object's parseSequence() method can be modified to recognize a nested action sequence and recursively parse it.
7.2.2 Action Object

Action objects define the title of the action being performed. Similar to Sequence, Action objects have an internally stored list, but with Parameter objects as children. There may be any number of parameters, including none. When the parseAction() method is called, the program validates the tokens and calls parseParameter() on each Parameter child identified by the action.
7.2.3 Parameter Object

The Parameter object is a simple object that stores a parameter's name and value. The parser does not check what the name of the parameter is, nor does it restrict what the value can be. parseParameter() searches through the tokens for these two items and stores them as attributes of the Parameter object. This implementation of Parameter is scalable with robot parameters and allows any new configuration of parameter to pass through without any changes to the parser as a whole. If a new parameter is needed for the robot, it only has to be trained into the Seq2Seq model on the frontend and into the robot itself on the backend; the Parameter object should handle it all the same.
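Under the grammar of Section 4, the three parse steps can be sketched as a small recursive-descent parser (a simplified, self-contained illustration that walks a plain token list rather than the shared Tokenizer singleton):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Parameter:
    name: str      # e.g. '$0'
    value: str

@dataclass
class Action:
    name: str
    parameters: List[Parameter] = field(default_factory=list)

@dataclass
class Sequence:
    actions: List[Action] = field(default_factory=list)

def parse_sequence(tokens: List[str]) -> Sequence:
    """Parse '( seq ( action ( $n ( value ) ) ) ... )' into a Sequence tree."""
    pos = 0

    def expect(tok: str):
        nonlocal pos
        assert tokens[pos] == tok, f"expected {tok!r}, got {tokens[pos]!r}"
        pos += 1

    def parse_action() -> Action:
        nonlocal pos
        expect("(")
        action = Action(tokens[pos]); pos += 1
        while tokens[pos] == "(":          # zero or more parameters
            expect("(")
            name = tokens[pos]; pos += 1
            expect("(")
            value = tokens[pos]; pos += 1
            expect(")")
            expect(")")
            action.parameters.append(Parameter(name, value))
        expect(")")
        return action

    expect("("); expect("seq")
    seq = Sequence()
    while tokens[pos] == "(":              # one action per parenthesized group
        seq.actions.append(parse_action())
    expect(")")
    return seq

tree = parse_sequence("( seq ( move ( $0 ( 5 ) ) ) ( say ( $1 ( hello ) ) ) )".split())
print([a.name for a in tree.actions])  # ['move', 'say']
```

The resulting object tree mirrors the Sequence/Action/Parameter hierarchy described above and can be walked directly to emit XML.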
7.3 BehaviorTree Output

In the end, the parser outputs an XML file which can be read in by BehaviorTree.CPP (Fanconti, 2020). An example of this file structure is shown in Figure 4.

Figure 4: An FSM that was generated from test input through our RNN.

This file structure is useful because it encodes the sequence of actions within it; the leaves of the sequence are always in order. The tree can also encode subtrees into the sequence, which we have not implemented yet.
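For illustration, a two-action sequence might serialize into a file of roughly this shape (the node names Move and Say and their ports are hypothetical; the exact tags depend on how the robot registers its actions with BehaviorTree.CPP):

```xml
<root main_tree_to_execute="MainTree">
  <BehaviorTree ID="MainTree">
    <Sequence name="root_sequence">
      <Move x="1.0" y="0.0" z="-0.5" roll="0" pitch="0" yaw="90"/>
      <Say words="hello"/>
    </Sequence>
  </BehaviorTree>
</root>
```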
8 Discussion

8.1 Summary

We learned that semantic parsing is an excellent tool for bridging the gap between technical and non-technical individuals. The power of semantic parsing with robotics is that any human can automate any task just by using their words. Our dataset is written in such a way that, by simply extending its entries with another robot's tasks, any robot that uses a behavior tree to perform actions can be automated as well.
8.2 Future Plans

Future plans for this project are to expand the logical flow that can be implemented with BehaviorTree.CPP. As an FSM library, BehaviorTree.CPP implements many more helper functions to create more complicated FSMs, including if statements, fallback nodes, and subtrees. This would be a valid expansion of our RNN's logical output, and with more time we could support the full range of features from BehaviorTree.CPP. We would also like to implement a front-end user interface to make this service more accessible to anyone who is not technical: right now, the only means of running our program is through the command line, which is not suitable for non-technical individuals. Moreover, adding a speech-to-text component would elevate the project, since an individual would be able to directly tell the robot what commands to perform, as they would a human.
8.3 Source Code

You can view the source code here: https://github.com/jrimyak/parse_seq2seq

Listing (test.xml):

<root main_tree_to_execute="test">
    <BehaviorTree ID="test">
        <Sequence name="root_seq">
            <Action ID="say" words="so"/>
            <Action ID="move" X="s1"/>
        </Sequence>
    </BehaviorTree>
</root>

References

Vinyals, O., Kaiser, L., Koo, T., Petrov, S., Sutskever, I. & Hinton, G. Grammar as a Foreign Language. (2015)

Dong, L. & Lapata, M. Language to Logical Form with Neural Attention. (2016)

Yao, Z., Tang, Y., Yih, W., Sun, H. & Su, Y. An Imitation Game for Learning Semantic Parsers from User Interaction. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). (2020)

Yao, Z., Su, Y., Sun, H. & Yih, W. Model-based Interactive Semantic Parsing: A Unified Framework and A Text-to-SQL Case Study. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 5450-5461 (2019)

Walker, N., Peng, Y. & Cakmak, M. Neural Semantic Parsing with Anonymization for Command Understanding in General-Purpose Service Robots. Lecture Notes in Computer Science. pp. 337-350 (2019)

Dukes, K. Supervised Semantic Parsing of Robotic Spatial Commands. SemEval-2014 Task 6. (2014)

Walker, N. GPSR Commands Dataset. (Zenodo, 2019), https://zenodo.org/record/3244800

avikdelta. parse_seq2seq. GitHub Repository. (2018), https://github.com/avikdelta/parse_seq2seq

Faconti, D. BehaviorTree - Groot. GitHub Repository. (2020), https://github.com/BehaviorTree/Groot

Faconti, D. BehaviorTree.CPP. GitHub Repository. (2020), https://github.com/BehaviorTree/BehaviorTree.CPP

Hwang, W., Yim, J., Park, S. & Seo, M. A Comprehensive Exploration on WikiSQL with Table-Aware Word Contextualization. (2019)

OSU-UWRT. Riptide Autonomy. GitHub Repository. (2021), https://github.com/osu-uwrt/riptide_autonomy

Parekh, P., et al. The Ohio State University Underwater Robotics Tempest AUV Design and Implementation. (2021), https://robonation.org/app/uploads/sites/4/2021/07/RoboSub_2021_The-Ohio-State-U_TDR-compressed.pdf

Zettlemoyer, L. & Collins, M. Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars. (2012)
1NFAT4oBgHgl3EQfjx2F/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3bd5cc73774d3a9794bb686b1252c14981e7646f255741c42411abd127d8a1ba
+size 983085
1NFKT4oBgHgl3EQfOi14/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4db162a1b372b12f6d7e18b533b21750c15690dcf1b2a7ddbdf9c4f3bfc88e5e
+size 2949165
2NE3T4oBgHgl3EQfnwo5/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49bec3d77ebab617b8b56672133d0b1d577d1c9daed027f7da3c146a572ba126
+size 655405
39E1T4oBgHgl3EQfAgJ2/content/tmp_files/2301.02840v1.pdf.txt ADDED
@@ -0,0 +1,1755 @@
arXiv:2301.02840v1 [cs.NI] 7 Jan 2023

Network Slicing: Market Mechanism and Competitive Equilibria

Panagiotis Promponas and Leandros Tassiulas
Department of Electrical Engineering and Institute for Network Science, Yale University, USA
{panagiotis.promponas, leandros.tassiulas}@yale.edu

Abstract—Towards addressing spectral scarcity and enhancing resource utilization in 5G networks, network slicing is a promising technology to establish end-to-end virtual networks without requiring additional infrastructure investments. By leveraging Software Defined Networks (SDN) and Network Function Virtualization (NFV), we can realize slices completely isolated and dedicated to satisfying the users' diverse Quality of Service (QoS) prerequisites and Service Level Agreements (SLAs). This paper focuses on the technical and economic challenges that emerge from the application of the network slicing architecture to real-world scenarios. We consider a market where multiple Network Providers (NPs) own the physical infrastructure and offer their resources to multiple Service Providers (SPs). Then, the SPs offer those resources as slices to their associated users. We propose a holistic iterative model for the network slicing market along with a clock auction that converges to a robust ε-competitive equilibrium. At the end of each cycle of the market, the slices are reconfigured and the SPs aim to learn the private parameters of their users. Numerical results are provided that validate and evaluate the convergence of the clock auction and the capability of the proposed market architecture to express the incentives of the different entities of the system.

Index Terms—Network Slicing, Mechanism Design, Network Economics, Bayesian Inference
I. INTRODUCTION

The ascending trend of data traffic volume, as well as the vast number of connected devices, puts pressure on the industry to enhance resource utilization in 5G wireless networks. With the advent of 5G networks and the Internet of Things (IoT), researchers aim at a technological transformation that simultaneously improves throughput, extends network coverage, and augments the users' quality of service without wasting valuable resources. Despite the significant advances brought by the enhanced network architectures and technologies, spectral scarcity will still impede the realization of the full potential of 5G technology.

In future 5G networks, verticals need distinct network services, as they may differ in their Quality of Service (QoS) requirements, Service Level Agreements (SLAs), and key performance indicators (KPIs). Such a need highlights the inefficiency of previous architectures, which were based on a "one network fits all" philosophy. In this direction, network slicing is a promising technology that enables the transition from a one-size-fits-all to a one-size-per-service abstraction [1], customized for the distinct use cases in a contemporary 5G network model.

Using Software Defined Networks (SDN) and Network Function Virtualization (NFV), those slices are associated with completely isolated resources that can be tailored on demand to satisfy the diverse QoS prerequisites and SLAs. Resource allocation in network slicing plays a pivotal role in load balancing, resource utilization, and networking performance [2]. Nevertheless, such a resource allocation model faces various challenges in terms of isolation, customization, and end-to-end coordination, which involves both the core and the Radio Access Network (RAN) [3].

In a typical network slicing scenario, multiple Network Providers (NPs) own the physical infrastructure and offer their resources to multiple Service Providers (SPs). Possible services of the SPs include e-commerce, video, gaming, virtual reality, wearable smart devices, and other IoT devices. The SPs offer their resources as completely isolated slices to their associated users. Thereby, such a system contains three types of actors that interact with each other and compete for the same resources, either monetary or networking. This paper focuses on the technical and economic challenges that emerge from the application of this architecture to real-world scenarios.

[Footnote] This paper appeared in INFOCOM 2023. The research work was supported by the Office of Naval Research under project numbers N00014-19-1-2566, N00173-21-1-G006 and by the National Science Foundation under the project number CNS-2128530.
A. Related Work

User Satisfaction & Sigmoid Functions: Network applications can be separated into elastic (e.g., email, text file transfer) and inelastic (e.g., audio/video phone, video conference, tele-medicine) [4]. Utilities for elastic applications are modeled as concave functions that increase with the resources with diminishing returns [4]. On the other hand, the utility function for inelastic traffic is modeled as non-concave, usually a sigmoid function. Such non-concavities impose challenges for the optimization of a network but are suitable for the 5G era, where services may differ in their QoS requirements [5]. In that direction, multiple works in the literature employ sigmoid utility functions for the network users [5]–[13]. Nevertheless, all of these works consider either a single SP and model the interaction between the users, or multiple SPs that compete for a fixed amount of resources (e.g., bandwidth).

Network Slicing in 5G Networks: Network slicing introduces various challenges to resource allocation in 5G networks in terms of isolation, customization, elasticity, and end-to-end coordination [2]. Most surveys on network slicing investigate its multiple business models motivated by 5G, the fundamental architecture of a slice, and the state-of-the-art algorithms of network slicing [2], [14], [15]. Microeconomic theories such as non-cooperative games and/or mechanism design arise as perfect tools to model the trading of network infrastructure and radio resources that takes place in network slicing [9], [16]–[18].

Mechanism Design in Network Slicing: Multiple auction mechanisms have been used to identify the business model of a network slicing market (see a survey in [16]). Contrary to our work, the majority of the literature considers a single-sided auction, a model that assumes a single NP owns the whole infrastructure of the market [9], [18]–[22]. For example, [9] considers a Vickrey–Clarke–Groves (VCG) auction-based model where the NP plays the role of an auctioneer and distributes discrete physical resource blocks. We find [3] and [17] to be closer to our work, since the authors employ the double-sided auction introduced by [23] to maximize the social welfare of a system with multiple NPs. Contrary to our work, the auction proposed in [23] assumes concave utility functions for the different actors and requires the computation of their gradients for its convergence. The aforementioned assumptions might lead to an over-simplification of a more complex networking architecture (e.g., that of the network slicing model), where the utility function for a user with inelastic traffic is expressed as a sigmoid function [9] and that of an SP as an optimization problem [3].
B. Contributions

Our work develops an iterative market model for the network slicing architecture, where multiple NPs with heterogeneous Radio Access Technologies (RATs) own the physical infrastructure and offer their resources to multiple SPs. The latter offer the resources as slices to their associated users. Specifically, we propose a five-step iterative model for the network slicing market that converges to a robust ε-competitive equilibrium even when the utility functions of the different actors are non-concave. In every cycle of the proposed model, the slices are reconfigured and the SPs learn the private parameters of their associated end-users to make the equilibrium of the next cycle more efficient. The introduced market model can be seen as a framework that suits various networking problems in which three types of actors are involved: those who own the physical infrastructure, those who lease part of it to sell services, and those who enjoy the services (e.g., data offloading [23]).

For the interaction between the SPs and the NPs, and for the convergence of the market to an equilibrium, we propose an iterative clock auction. Such dynamic auctions are used in the literature to auction divisible goods [24], [25]. The key differentiating aspects of the proposed auction are (i) the relaxation of the common assumptions that the utility functions are concave and that their gradients can be analytically computed, (ii) the highly usable price discovery it provides, and (iii) its double-sided nature, which makes it appropriate for a market with multiple NPs. Numerical results are provided that validate and evaluate the convergence of the clock auction and the capability of the proposed market architecture to express the incentives of the different entities of the system.
II. MARKET MODEL & INCENTIVES

In this section we describe the different entities of the network slicing market and their conflicting incentives.

A. Market Model

A typical slicing system model [2], [3], [14], [15] consists of multiple SPs, represented by M = {1, 2, ..., M}, and multiple NPs that own RANs of possibly different RATs, represented by a set K = {1, 2, ..., K}. Each SP owns a slice with a predetermined amount of isolated resources (e.g., bandwidth) and is associated with a set of users, U_m, that it serves through its slices. For the rest of the paper, and without loss of generality, we assume that each NP owns exactly one RAN, so we use the terms RAN and NP interchangeably.

1) Network Providers: The multiple NPs of the system can quantify their radio resources as the performance level of the same network metric (e.g., downlink throughput) [3]. Let x_(m,k) denote the amount of resources NP k allocates to SP m, and let the vector x_m := (x_(m,k))_{k∈K} denote the amount of resources m gets from every NP. Without loss of generality [3], capacity C_k limits the amount of resources that can be offered by NP k, i.e., ∑_{m=1}^{M} x_(m,k) ≤ C_k. Let C = (C_k)_{k∈K}. For the rest of the paper, we assume that there is a constant cost related to operation and management overheads induced to the NP. The main goal of every NP k is to maximize its profits by adjusting its price per unit of resources, denoted by c_k.

2) Service Providers & Associated Users: The main goal of an SP is to purchase resources from a single or multiple NPs in order to maximize its profit, which depends on its associated users' satisfaction. The connectivity of a user i ∈ U_m is denoted by a vector β_i = (β_(k,i))_{k∈K}, where β_(k,i) is a non-negative number representing factors such as the link quality, i.e., numbers in (0, 1] that depend on the path loss. Moreover, each user i of SP m is associated with a service class, c(i), depending on its preferences. We denote the set of possible service classes of SP m as C_m = {C^m_1, ..., C^m_{c_m}}, and thus c(i) ∈ C_m, ∀ i ∈ U_m. Each SP m tries to distribute the resources purchased from the NPs, i.e., x_m, to maximize its profit. This process, referred to as intra-slice resource allocation, is described in detail in Section II-B.

Throughout the paper, we assume that the number of users of every SP m, i.e., |U_m|, is much greater than the number of SPs, which in turn is much greater than the number of NPs in the market. This assumption is made often in the mechanism design literature and is sufficient to ensure that the end-users and the SPs have limited information about the market [23], [26]; the latter lets us consider them price-takers. In the following section, we describe in detail the intra-slice resource allocation problem from the perspective of an SP that tries to maximize the satisfaction of its associated users.
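The quantities above are easy to sanity-check numerically. The following minimal Python sketch (all numbers are hypothetical, not taken from the paper's experiments) verifies the per-NP capacity constraint ∑_m x_(m,k) ≤ C_k and computes one user's aggregated resources z_i = β_i^T r_i from its connectivity vector:

```python
# Minimal numerical sketch of the market-model quantities; all values hypothetical.
M, K = 3, 2                      # number of SPs and of NPs (one RAN each)

# x[m][k]: resources that NP k allocates to SP m
x = [[2.0, 1.0],
     [1.5, 0.5],
     [0.5, 1.0]]
C = [5.0, 3.0]                   # capacities C_k of the two NPs

# Per-NP capacity constraint: sum_m x_(m,k) <= C_k for every k
feasible = all(sum(x[m][k] for m in range(M)) <= C[k] for k in range(K))

# One user's aggregated resources: z_i = beta_i^T r_i, where beta_i holds
# link qualities in (0, 1] and r_i is the user's per-RAN share of resources
beta_i = [0.8, 0.4]
r_i = [1.0, 0.5]
z_i = sum(b * r for b, r in zip(beta_i, r_i))

print(feasible, z_i)             # -> True 1.0
```

Note that z_i discounts the nominal allocation r_i by the link quality of each RAN, which is why two users given identical r_i can end up with different effective resources.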
B. Intra-Slice Resource Allocation

The problem of intra-slice resource allocation concerns the distribution of the resources, x_m, from the SP m to its associated users. Specifically, every SP m allocates a portion of x_(m,k) to its associated user i, denoted as r_(k,i). Let r_i := (r_(k,i))_{k∈K} and r_m := (r_i)_{i∈U_m}. For ease of notation, the resources, r_i, of a user i ∈ U_m, as well as the connectivities, β_i, are not indexed by m, because i is assumed to be a unique identifier for the user. Although every user i is assigned r_(k,i) resources from RAN k, because of its connectivity β_i the aggregated amount of resources it gets is z_i := β_i^T r_i. Moreover, let z_m := (z_i)_{i∈U_m}. In a feasible intra-slice allocation it should hold that x_m ⪰ ∑_{i∈U_m} r_i for each SP m.

Every SP should distribute the obtained resources among its users so as to maximize their satisfaction. To provide intuition behind the employment of sigmoidal functions in the literature to model user satisfaction (e.g., see [5]–[12]), note that by making the same assumption as logistic regression, we model the logit^1 of the probability that a user is satisfied as a linear function of the resources. Hence, the probability that user i is satisfied with the amount of resources z_i, say P[QoS sat_i], satisfies

log( P[QoS sat_i] / (1 − P[QoS sat_i]) ) = t^z_{c(i)} (z_i − k_{c(i)}),

and thus:

P[QoS sat_i] = e^{t^z_{c(i)} (z_i − k_{c(i)})} / (1 + e^{t^z_{c(i)} (z_i − k_{c(i)})}),    (1)

where k_{c(i)} ≥ 0 denotes the prerequisite amount of resources of user i and t^z_{c(i)} ≥ 0 expresses how "tight" this prerequisite is. Note that the probability of a user being satisfied is, with respect to the value of z_i, a sigmoid function with inflection point k_{c(i)}. We assume that the user's service class fully determines its private parameters, hence every user i ∈ c(i) has QoS prerequisite k_{c(i)} and sensitivity parameter t^z_{c(i)}. These parameters are unknown to the users, so the SP's goal to eventually learn them is challenging (Section III-C).

^1 The logit function is defined as logit(p) = log(p / (1 − p)).
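Equation (1) can be rewritten equivalently as 1 / (1 + e^{−t(z−k)}), which avoids overflow for large arguments. A minimal Python sketch, with hypothetical class parameters chosen only for illustration:

```python
import math

def qos_satisfaction(z, k_c, t_c):
    # P[QoS sat_i] from Eq. (1): e^{t(z-k)} / (1 + e^{t(z-k)}),
    # computed as 1 / (1 + e^{-t(z-k)}) for numerical stability.
    return 1.0 / (1.0 + math.exp(-t_c * (z - k_c)))

# Hypothetical service class: prerequisite k_c = 2.0 resource units,
# sensitivity t_c = 3.0 (how "tight" the prerequisite is).
k_c, t_c = 2.0, 3.0

print(qos_satisfaction(2.0, k_c, t_c))   # at the inflection point z = k_c -> 0.5
print(qos_satisfaction(4.0, k_c, t_c))   # well above the prerequisite -> near 1
print(qos_satisfaction(0.0, k_c, t_c))   # well below the prerequisite -> near 0
```

A larger t_c makes the transition around k_c sharper, approaching a step function, i.e., a hard QoS requirement.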
+ Given the previous analysis, the aggregated satisfaction of
276
+ the users of the SP m is um(rm) := �
277
+ i∈Um ui(ri) ( [10],
278
+ [7]), where
279
+ ui(ri) :=
280
+ etz
281
+ c(i)(βT
282
+ i ri−kc(i))
283
+ 1 + etz
284
+ c(i)(βT
285
+ i ri−kc(i)) .
286
+ (2)
287
+ Note that the function ui(·) can be expressed as a function of
288
+ zi as well. With a slight abuse of notation, we switch between
289
+ the two by changing the input variable. We can write the final
290
+ optimization problem for the intra-slice allocation of SP m as:
291
+ (IN-SL):
292
+ max
293
+ rm
294
+ um(rm)
295
+ s.t.
296
+ ri ⪰ 0,
297
+ ∀i ∈ Um
298
+ xm ⪰
299
+
300
+ i∈Um
301
+ ri
302
In case the amount of resources obtained from every NP, $x_m$, is not given, SP $m$ can optimize it together with the intra-slice resource allocation. Hence, SP $m$ can solve the following problem:

$$\text{(P):} \quad \max_{r_m, x_m} \; \Psi_m(r_m, x_m) := u_m(r_m) - c^T x_m \quad \text{s.t.} \quad r_i \succeq 0 \;\; \forall i \in U_m, \qquad x_m \succeq \textstyle\sum_{i \in U_m} r_i$$

$^1$The logit function is defined as $\mathrm{logit}(p) = \log(\tfrac{p}{1-p})$.
Recall that $c_k$ denotes the price per unit of resources announced by every NP $k$. In Problem P, the objective function $\Psi_m$ can be thought of as the profit of SP $m$. Let the solution of the above problem be $\psi^*_m$.

Problems IN-SL and P are maximizations of a sum of sigmoid functions over a set of linear constraints. In [27], the problem of maximizing a sum of sigmoid functions over a convex constraint set is addressed. That work shows that the problem is generally NP-hard and proposes an approximation algorithm, based on a branch-and-bound method, to find an approximate solution to the sigmoid programming problem.

In the rest of the section, we study three variations of Problem P. Specifically, in Section II-B1 we study the case where the end-users are charged for the resources they obtain from the SPs, and in Sections II-B2 and II-B3 we regularize and concavify P respectively, which will facilitate the analysis in the rest of the paper.
1) Price Mechanism in P: In this subsection we argue that Problem P is expressive enough to capture the case where every user $i$ is charged for its assigned resources. Let $p_i$ be the amount of money that user $i$ should pay to receive the $z_i$ resources. In that case, the SPs should modify Problems IN-SL and P accordingly. First, note that user $i$'s satisfaction may also depend on $p_i$. As in the previous section, we can express the satisfaction of user $i$ with respect to the price $p_i$ using a sigmoid function as

$$P[\text{price sat}_i] = \frac{1}{1 + e^{t^p_{c(i)}(p_i - b_{c(i)})}},$$

where $b_{c(i)} \ge 0$ is the budget of user $i$ for the prerequisite resources $k_{c(i)}$, and $t^p_{c(i)} \ge 0$ expresses how "tight" this budget is. We can now model the acceptance probability function [7] as $P[\text{sat}_i] = P[\text{price sat}_i] \, P[\text{QoS sat}_i]$, and hence the expected total revenue, or the new utility of SP $m$, $u'_m$, is modeled as

$$u'_m(r_m, p_m) := \sum_{i \in U_m} P[\text{sat}_i] \, p_i. \qquad (3)$$

From Eq. (3), SP $m$ can immediately determine the optimal price $\hat{p}_i$ to ask from any user $i \in U_m$. This follows from the fact that for positive $p_i$ the function admits a unique critical point, $\hat{p}_i$. Therefore, by just adding proper coefficients to the terms of Problems IN-SL and P, we can embed a pricing mechanism for the end-users in the model. For the rest of the paper, without loss of generality in our model, we assume that the end-users are not charged for the obtained resources.
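Since the per-user revenue $p_i \, P[\text{price sat}_i]$ has a unique positive critical point, $\hat{p}_i$ can be located numerically. The following sketch (helper names are ours; $t = 0.2$ and $b = 100$ are illustrative, and the QoS factor is held fixed) bisects on the sign of the revenue derivative:

```python
import math

def revenue(p, t, b):
    """Expected per-user revenue p * P[price sat] (QoS factor held fixed)."""
    return p / (1.0 + math.exp(t * (p - b)))

def optimal_price(t, b, tol=1e-8):
    """Locate the unique positive critical point p_hat of the revenue
    by bisection on the sign of its derivative (hypothetical helper)."""
    def dnum(p):  # numerator of d(revenue)/dp; shares sign with the derivative
        u = math.exp(t * (p - b))
        return 1.0 + u - p * t * u
    lo, hi = 0.0, b + 50.0 / t   # dnum(lo) > 0 and dnum(hi) < 0 here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dnum(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_hat = optimal_price(t=0.2, b=100.0)
```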
2) Regularization of P: We can regularize Problem P with a small positive $\lambda_m$. In that manner, we encourage dense solutions and hence avoid situations where a problem in one RAN completely disrupts the operation of the SP.

$$(\bar{P}): \quad \max_{r_m, x_m} \; \Psi_m(r_m, x_m) - \lambda_m \|x_m\|_2^2 \quad \text{s.t.} \quad r_i \succeq 0 \;\; \forall i \in U_m, \qquad x_m \succeq \textstyle\sum_{i \in U_m} r_i$$
In the regularized problem $\bar{P}$, note that larger values of $\lambda_m$ penalize vectors $x_m$ with greater $L_2$ norms. Let the solution of Problem $\bar{P}$ be $\bar{\psi}^*_m$. The lemma below shows that for small $\lambda_m$, the optimal values $\bar{\psi}^*_m$ and $\psi^*_m$ are close. Its proof is simple and thus omitted for brevity.

Lemma 1. Let $(r^*_m, x^*_m)$ and $(\bar{r}^*_m, \bar{x}^*_m)$ be solutions of Problems P and $\bar{P}$ respectively. Then,

$$\psi^*_m - \lambda_m \|x^*_m\|_2^2 \le \bar{\psi}^*_m \le \psi^*_m - \lambda_m \|\bar{x}^*_m\|_2^2.$$

Lemma 1 proves that the regularization of P is (almost) without loss of optimality. In the next section, we proceed by concavifying Problem $\bar{P}$. The new concavified problem will be a fundamental building block of the auction analysis in Section III-A.
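Lemma 1's sandwich bound can be checked on a toy instance. The sketch below is entirely illustrative: one SP, one NP, a single user with $\beta_i = 1$, price $c = 0.002$ and penalty $\lambda = 10^{-5}$ (all made-up numbers), with grid search standing in for the exact solvers of P and $\bar{P}$:

```python
import math

def sigmoid_utility(z, t=0.2, k=100.0):
    return 1.0 / (1.0 + math.exp(-t * (z - k)))

c, lam = 0.002, 1e-5                         # illustrative price and penalty
grid = [0.05 * j for j in range(6001)]       # x in [0, 300]

def profit(x):                               # Psi(x) = u(x) - c*x (beta = 1)
    return sigmoid_utility(x) - c * x

x_star  = max(grid, key=profit)                                   # solves P
x_barst = max(grid, key=lambda x: profit(x) - lam * x * x)        # solves P-bar
psi_star  = profit(x_star)
psi_barst = profit(x_barst) - lam * x_barst ** 2

# Lemma 1: psi* - lam*||x*||^2 <= psi-bar* <= psi* - lam*||x-bar*||^2
assert psi_star - lam * x_star ** 2 <= psi_barst + 1e-12
assert psi_barst <= psi_star - lam * x_barst ** 2 + 1e-12
```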
3) Concavification of $\bar{P}$: To concavify $\bar{P}$, we replace every summand of $u_m$ with its tightest concave envelope, i.e., the pointwise infimum over all concave functions that are greater than or equal to it. For the sigmoid function $u_i(z_i)$, the concave envelope $\hat{u}_i(z_i)$ has the closed form

$$\hat{u}_i(z_i) = \begin{cases} u_i(0) + \frac{u_i(w) - u_i(0)}{w} z_i, & 0 \le z_i \le w \\ u_i(z_i), & w \le z_i, \end{cases}$$

for some $w > k_{c(i)}$ which can be found easily by bisection [27]. Fig. 1 depicts the concavification of the aforementioned sigmoid functions for $k_{c(\cdot)} = 100$ and three different values of $t^z_{c(\cdot)}$. Note that for the lowest $t^z_{c(\cdot)}$ (elastic traffic) we get the best approximation, whilst for the largest (inelastic traffic/tight QoS prerequisites) we get the worst.
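The breakpoint $w$ of the envelope is the tangency point of the chord from $(0, u_i(0))$, i.e., it satisfies $u_i'(w)\,w = u_i(w) - u_i(0)$, and bisection on this condition finds it. A sketch (helper names are ours; $t = 0.2$, $k = 100$ as in panel (b) of Fig. 1):

```python
import math

def u(z, t=0.2, k=100.0):
    """Sigmoid utility u_i(z) with tightness t and prerequisite k."""
    return 1.0 / (1.0 + math.exp(-t * (z - k)))

def du(z, t=0.2, k=100.0):
    s = u(z, t, k)
    return t * s * (1.0 - s)

def envelope_breakpoint(t=0.2, k=100.0, tol=1e-10):
    """Bisection for the tangency point w > k where the chord from
    (0, u(0)) touches u, i.e. u'(w) * w = u(w) - u(0)."""
    g = lambda w: du(w, t, k) * w - (u(w, t, k) - u(0.0, t, k))
    lo, hi = k, 10.0 * k          # g(lo) > 0, g(hi) < 0 for these parameters
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def u_hat(z, w, t=0.2, k=100.0):
    """Concave envelope of u: chord on [0, w], u itself beyond w."""
    if z <= w:
        return u(0.0, t, k) + (u(w, t, k) - u(0.0, t, k)) / w * z
    return u(z, t, k)

w = envelope_breakpoint()
```

By construction the envelope dominates the sigmoid everywhere and coincides with it beyond $w$.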
To exploit the closed form of the envelope $\hat{u}_i(z_i)$, instead of Problem $\bar{P}$ we will concavify the equivalent problem:

$$(\tilde{P}): \quad \max_{r_m, x_m, z_m} \; \sum_{i \in U_m} f_i(r_i, z_i) - c^T x_m - \lambda_m \|x_m\|_2^2 \quad \text{s.t.} \quad (r_i, z_i) \in S_i \;\; \forall i \in U_m, \qquad x_m \succeq \textstyle\sum_{i \in U_m} r_i$$
where $S_i := \{(r_i, z_i) : r_i \succeq 0, \; z_i = \beta_i^T r_i\}$ and $f_i(r_i, z_i) := u_i(z_i)$ with domain $S_i$. The following lemma uses the concave envelope of the sigmoid function $u_i(z_i)$ to compute the concave envelope of $f_i(r_i, z_i)$, and hence the concavification of Problem $\tilde{P}$. Its proof is based on the definition of the concave envelope and is omitted for brevity.

Lemma 2. The concave envelope of the function $f_i(r_i, z_i) := \frac{e^{t^z_{c(i)}(z_i - k_{c(i)})}}{1 + e^{t^z_{c(i)}(z_i - k_{c(i)})}}$ with domain $S_i$, $\hat{f}_i(r_i, z_i)$, has the following closed form (with domain $S_i$):

$$\hat{f}_i(r_i, z_i) = \hat{u}_i(z_i), \quad \forall (r_i, z_i) \in S_i.$$
Therefore, SP $m$ can concavify $\tilde{P}$ as follows:

$$(\hat{P}): \quad \max_{r_m, x_m, z_m} \; \sum_{i \in U_m} \hat{f}_i(r_i, z_i) - c^T x_m - \lambda_m \|x_m\|_2^2 \quad \text{s.t.} \quad (r_i, z_i) \in S_i \;\; \forall i \in U_m, \qquad x_m \succeq \textstyle\sum_{i \in U_m} r_i$$
Note that $\hat{P}$ is strongly concave and thus admits a unique maximizer. Let the solution and the optimal point of Problem $\hat{P}$ be $\hat{\psi}^*_m$ and $(\hat{x}^*_m, \hat{r}^*_m)$ respectively. Ultimately, we would like to compare the solution of the concavified $\hat{P}$ with that of the original Problem P. Towards that direction, we first define the nonconcavity of a function as follows [28]:

Definition 1 (Nonconcavity of a function). We define the nonconcavity $\rho(f)$ of a function $f: S \to \mathbb{R}$ with domain $S$ to be

$$\rho(f) = \sup_x \big(\hat{f}(x) - f(x)\big).$$
Let $F$ denote a set of possibly non-concave functions. Then define $\rho^{[j]}(F)$ to be the $j$th largest of the nonconcavities of the functions in $F$. The theorem below summarizes the main result of this section: every SP can solve the concavified $\hat{P}$ instead of the original P, since the former provides a constant-bound approximation of the latter. Recall that $\Psi_m(\hat{r}^*_m, \hat{x}^*_m)$ is the profit of SP $m$ evaluated at the solution of $\hat{P}$, and that $K$ is the number of the NPs.

Theorem 1. Let $(r^*_m, x^*_m)$ and $(\bar{r}^*_m, \bar{x}^*_m)$ be solutions of Problems P and $\bar{P}$ respectively. Moreover, let $\hat{F} := \{u_i\}_{i \in U_m}$. Then,

$$\psi^*_m - \epsilon - \delta_1(\lambda_m) \le \Psi_m(\hat{r}^*_m, \hat{x}^*_m) \le \psi^*_m + \delta_2(\lambda_m),$$

where $\delta_1(\lambda_m) := \lambda_m(\|x^*_m\|_2^2 - \|\hat{x}^*_m\|_2^2)$, $\delta_2(\lambda_m) := \lambda_m(\|\hat{x}^*_m\|_2^2 - \|\bar{x}^*_m\|_2^2)$ and $\epsilon = \sum_{j=1}^K \rho^{[j]}(\hat{F})$.
Proof: Note that $\bar{\psi}^*_m$ is also given by solving $\tilde{P}$, and that $(\hat{r}^*_m, \hat{x}^*_m)$, with the corresponding optimal value $\hat{\psi}^*_m$, is given by solving $\hat{P}$. Therefore, from [28, Th. 1], we have that

$$\bar{\psi}^*_m - \sum_{j=1}^K \rho^{[j]}(\hat{F}) \le u_m(\hat{r}^*_m) - c^T \hat{x}^*_m - \lambda_m \|\hat{x}^*_m\|_2^2 \le \bar{\psi}^*_m.$$

The result follows from Lemma 1.
Remark 1. The values of $\delta_1$ and $\delta_2$ decrease as $\lambda_m$ decreases, and hence for small regularization penalties they can get arbitrarily close to zero.

Remark 2. The approximation error $\epsilon$ depends on the $K$ greatest nonconcavities of the set $\{u_i\}_{i \in U_m}$. Two conditions ensure a negligible approximation error, i.e., $\epsilon \ll \psi^*_m$: i) the end-users have concave utility functions (in that case $\epsilon \to 0$), or ii) the market is profitable enough for every SP $m$, and hence $\psi^*_m \gg K$. Condition ii) makes the error negligible since $\epsilon \le K$, and it can be satisfied, for example, when the supply of the market, $C$, is sufficiently large.
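The nonconcavities driving $\epsilon$ can be estimated numerically; consistent with Fig. 1, the envelope gap is small for elastic traffic and large for inelastic traffic. A sketch (our own grid-based estimator, reusing the envelope construction with $k = 100$; tightness values 0.02 and 2 mirror panels (a) and (c) of Fig. 1):

```python
import math

def sat(z, t, k=100.0):
    return 1.0 / (1.0 + math.exp(-t * (z - k)))

def nonconcavity(t, k=100.0, z_max=1000.0, n=20001):
    """Estimate rho(u) = sup_z (u_hat(z) - u(z)) on a grid, where the
    envelope u_hat is the chord from (0, u(0)) to the tangency point w."""
    def g(w):                          # tangency condition u'(w)w = u(w)-u(0)
        s = sat(w, t, k)
        return t * s * (1.0 - s) * w - (s - sat(0.0, t, k))
    lo, hi = k, 100.0 * k
    for _ in range(200):               # bisection for w > k
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    w = 0.5 * (lo + hi)
    u0, uw = sat(0.0, t, k), sat(w, t, k)
    gap = 0.0
    for j in range(n):
        z = z_max * j / (n - 1)
        u_hat = u0 + (uw - u0) / w * z if z <= w else sat(z, t, k)
        gap = max(gap, u_hat - sat(z, t, k))
    return gap

rho_elastic, rho_inelastic = nonconcavity(0.02), nonconcavity(2.0)
```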
Fig. 1: Concave envelopes of sigmoid utility functions with $k_{c(\cdot)} = 100$ and (a) $t^z_{c(\cdot)} = 0.02$, (b) $t^z_{c(\cdot)} = 0.2$ and (c) $t^z_{c(\cdot)} = 2$.
Theorem 1 implies that every SP can solve Problem $\hat{P}$, which is a concave program with a unique solution, to find an approximate solution to P. This observation fosters the convergence analysis of the proposed auction in Section III-A.
III. NETWORK SLICING MARKET CYCLE

In this section, we study the evolution of the network slicing market using an iterative model that consists of 5-step cycles. We refer to the following sequence of steps as a market cycle:

S1. $|U_m|$ prospective users appear at every SP $m$.
S2. The vector $x_m$, i.e., the distribution of the resources from the NPs to SP $m$, is determined for every $m$. To achieve that in a distributed fashion, an auction between the SPs and the NPs is realized.
S3. Given $x_m$, each SP $m$ determines the vectors $r_i$, and hence the amount of resources $z_i$, for every user $i \in U_m$ (intra-slice resource allocation).
S4. After receiving the resources, each user $i$ determines and reports to the SP whether the QoS received was enough to complete its application.
S5. The SPs exploit the responses of their users to estimate their private parameters, and hence to distribute the resources more efficiently in the next cycle.

It is important for the vector $x_m$ to be determined before the intra-slice resource allocation, since the former serves as the capacity on the resources available to SP $m$. In the following, we expand upon each (non-trivial) step of the market cycle.
A. Step S2 - Clock Auction for the Network Slicing Market

In this section, we develop and analyze a clock auction between the SPs and the NPs that converges to a market equilibrium. Specifically, we describe the goal (Section III-A1), the steps (Section III-A2), and the convergence (Section III-A3) of the auction.
1) Auction Goal: Note that the solutions of Problems P and $\hat{P}$ are functions of the prices $c_1, \ldots, c_K$. Let the demand of SP $m$, given the price vector $c$, be denoted by $x^*_m(c)$ or $\hat{x}^*_m(c)$, depending on whether SP $m$ uses Problem P or $\hat{P}$ to ask for resources. Let also $r^*_m(c)$ and $\hat{r}^*_m(c)$ be the corresponding optimal intra-slice resource allocation vectors. Hence, $(r^*_m(c), x^*_m(c))$ and $(\hat{r}^*_m(c), \hat{x}^*_m(c))$ are maximizers of P and $\hat{P}$ respectively (given $c$). Since Problem P may admit multiple solutions, let the set $D_m(c)$ be defined as

$$D_m(c) := \big\{ x^*_m : \exists r^*_m : \Psi_m(r^*_m, x^*_m) = \psi^*_m \text{ given } c \big\}.$$
We define a competitive equilibrium as follows:

Definition 2 (Competitive equilibrium). A competitive equilibrium of the network slicing market is defined to be any price vector $c^\dagger$ and allocation of the resources of the NPs $x^\dagger$ such that:
i. $x^\dagger_m \in D_m(c^\dagger)$ for every SP $m$, and
ii. $C = \sum_{m \in M} x^\dagger_m$ (the demand equals the supply).

Note that in a competitive equilibrium, every SP $m$ gets resources that maximize its profit given the price vector. Because a competitive equilibrium balances the interests of all participants, it appears as the settling point of markets in economic analysis [26], [29]. Nevertheless, since the SPs' demands are expressed by solving a non-concave program, we define an $\epsilon$-competitive equilibrium, which will be the ultimate goal of the proposed clock auction.

Definition 3 ($\epsilon$-Competitive equilibrium). An $\epsilon$-competitive equilibrium of the network slicing market is defined to be any price vector $\hat{c}^\dagger$ and allocation of the resources of the NPs $\hat{x}^\dagger$ such that:
i. For every SP $m$, there exist an $\epsilon \ge 0$ and a feasible intra-slice resource allocation vector $\hat{r}^\dagger_m$ (given $\hat{x}^\dagger_m$) such that $\psi^*_m - \epsilon \le \Psi_m(\hat{r}^\dagger_m, \hat{x}^\dagger_m) \le \psi^*_m + \epsilon$, and
ii. $C = \sum_{m \in M} \hat{x}^\dagger_m$ (the demand equals the supply).
Observe that the first condition of the above definition ensures that every SP is satisfied (up to a constant) with the obtained resources, in the sense that it operates close to its maximum possible profit. From Theorem 1, note that if there exists a price vector $\hat{c}^\dagger$ such that $C = \sum_{m \in M} \hat{x}^*_m(\hat{c}^\dagger)$, then the prices in $\hat{c}^\dagger$ with the allocation $\hat{x}^\dagger := \hat{x}^*(\hat{c}^\dagger)$ form an $\epsilon$-competitive equilibrium. Finding such a price vector is the motivation of the proposed clock auction. For the rest of the paper we make the following assumption:

Assumption 1. The SPs calculate their demand and intra-slice resource allocation by solving Problem $\hat{P}$.

This is a reasonable assumption, since in Theorem 1 and the corresponding Remarks 1 and 2 we proved that by solving a (strictly) concave problem, every SP can operate near its optimal profit. Therefore, for the rest of the paper, we call $\hat{x}^*_m(c)$ the demand of SP $m$ given the prices $c$.
2) Auction Description: We propose the following clock auction, which converges to an $\epsilon$-competitive equilibrium of the network slicing market (Theorem 2). As we will prove in Theorem 3, this equilibrium is robust, since the convergent price vector is the unique one that clears the market, i.e., makes the demand equal the supply.

i. An auctioneer announces a price vector $c$, each component of which corresponds to the price at which an NP sells a unit of its resources.
ii. The bidders (SPs) report their demands.
iii. If the aggregated demand received by an NP is greater than its available supply, the price of that NP is increased, and vice versa. In other words, the auctioneer adjusts the price vector according to Walrasian tatonnement.
iv. The process repeats until the price vector converges.

Note that the components of the price vector change simultaneously and independently. Hence different brokers can cooperate to jointly clear the market efficiently in a decentralized fashion [23]. Let the excess demand, $Z(c)$, be the difference between the aggregate demand and the supply: $Z(c) = -C + \sum_{m \in M} \hat{x}^*_m(c)$. In Walrasian tatonnement, the price vector adjusts in continuous time according to the excess demand as $\dot{c} = f(Z(c(t)))$, where $f$ is a continuous, sign-preserving transformation [24]. For the rest of the paper, we set $f$ to be the identity function and thus $\dot{c} = Z(c(t))$. In auctions based on Walrasian tatonnement, the payments are only valid after the convergence of the mechanism [30].
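The tatonnement in step iii can be simulated on a toy market. The sketch below is entirely illustrative: 2 SPs with quadratic surrogate utilities stand in for the concavified $\hat{P}$, so that each demand has the closed form $\max(0, (a_m - c)/b)$, and the discretized update $c \leftarrow c + \kappa Z(c)$ drives the excess demand to zero (the numbers $a_m$, $b$, $C$ are made up):

```python
import numpy as np

# Toy market: 2 SPs, 2 NPs, quadratic surrogate utilities (illustrative
# stand-in for the concavified problem; a, b and C are assumptions).
a = np.array([[3.0, 2.0], [2.0, 3.0]])   # marginal-utility intercepts
b = 1.0                                   # curvature of each SP's utility
C = np.array([2.0, 2.0])                  # NP capacities (supply)

def demand(m, c):
    """SP m's profit-maximizing demand given prices c (closed form here)."""
    return np.maximum(0.0, (a[m] - c) / b)

c = np.array([0.5, 0.5])                  # initial price vector c_init
kappa = 0.1                               # discretization step
for _ in range(500):
    Z = sum(demand(m, c) for m in range(2)) - C   # excess demand Z(c)
    c = c + kappa * Z                     # Walrasian tatonnement step

Z_final = sum(demand(m, c) for m in range(2)) - C
```

For this instance the clearing prices are $c^\dagger = (1.5, 1.5)$, and the iteration contracts toward them for small enough $\kappa$.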
3) Auction Convergence: Towards proving the convergence of the auction, we provide the lemma below, which shows that the concavified version of the intra-slice resource allocation problem IN-SL can be treated as a concave function. The proof is omitted as a direct extension of [3] and [31].

Lemma 3. The function $U_m(x_m)$ shown below is concave.

$$U_m(x_m) := \max_{r_m, z_m} \; \sum_{i \in U_m} \hat{f}_i(r_i, z_i) \quad \text{s.t.} \quad (r_i, z_i) \in S_i \;\; \forall i \in U_m, \qquad x_m \succeq \textstyle\sum_{i \in U_m} r_i \qquad (4)$$

Using the function $U_m$, we can rewrite Problem $\hat{P}$ as

$$\max_{x_m \succeq 0} \; U_m(x_m) - \lambda_m \|x_m\|_2^2 - c^T x_m.$$
The following theorem studies the convergence of the auction.

Theorem 2. Starting from any price vector $c_{init}$, the proposed clock auction converges to an $\epsilon$-competitive equilibrium.

Proof: The proof relies on a global stability argument, similarly to [24], [29]. Let $V_m(\cdot)$ denote SP $m$'s net indirect utility function:

$$V_m(c) = \max_{x_m \succeq 0} \{ U_m(x_m) - \lambda_m \|x_m\|_2^2 - c^T x_m \}.$$

Let a candidate Lyapunov function be $V(c) := c^T C + \sum_{m \in M} V_m(c)$. To study the convergence of the auction, we compute the time derivative of this Lyapunov function:

$$\dot{V}(c) = \dot{c} \cdot \Big( C^T + \sum_{m \in M} \frac{d}{dc} \big[ \max_{x_m \succeq 0} \{ U_m(x_m) - \lambda_m \|x_m\|_2^2 - c^T x_m \} \big] \Big).$$

Hence, we deduce that:

$$\dot{V}(c) = \Big( C^T + \sum_{m \in M} \{ -\hat{x}^{*T}_m(c) \} \Big) \cdot \dot{c} = -Z^T(c(t)) \cdot Z(c(t)).$$

The above holds true since the function $h(x_m) := U_m(x_m) - \lambda_m \|x_m\|_2^2$ has as concave conjugate the function (see [31])

$$h^*(c) = \max_{x_m \succeq 0} \{ h(x_m) - c^T x_m \},$$

and hence $\nabla h^*(c) = -\arg\max_{x_m \succeq 0} \{ U_m(x_m) - \lambda_m \|x_m\|_2^2 - c^T x_m \} = -\hat{x}^*_m(c)$. Therefore, $V(\cdot)$ is a decreasing function of time and converges to its minimum. Note that at the convergent point the supply equals the demand for every NP.
The market might admit multiple $\epsilon$-competitive equilibria. Nevertheless, the equilibrium point to which the clock auction converges is robust in the following sense: given Assumption 1, the price vector that clears the market is unique. Therefore, in Theorem 2 we essentially proved that the proposed clock auction converges to that unique price vector. This is formally stated by the following theorem.

Theorem 3. There exists a unique price vector $c^\dagger$ such that $\sum_{m \in M} \hat{x}^*_m(c^\dagger) = C$.
Towards proving Theorem 3, we provide Lemmata 4 and 5. First, we show that if a component of the price vector changes, the demand of an SP who used to obtain resources from the corresponding NP must change as well.

Lemma 4. For two distinct price vectors $c, \bar{c}$ with $\exists k: c_k \neq \bar{c}_k$, it holds that

$$\hat{x}^*_m(c) = \hat{x}^*_m(\bar{c}) \;\Rightarrow\; \hat{x}^*_{(m,k)}(c) = \hat{x}^*_{(m,k)}(\bar{c}) = 0.$$

Proof: Take such price vectors $\bar{c}$ and $c$, with $c_k \neq \bar{c}_k$. Since $\hat{x}^*_m(c)$ is the optimal point of Problem $\hat{P}$ given $c$, applying the KKT conditions gives:

$$\hat{x}^*_{(m,k)}(c) = 0 \quad \text{or} \quad \frac{\partial \{ U_m(x_m) - \lambda_m \|x_m\|_2^2 \}}{\partial x_{(m,k)}} \bigg|_{\hat{x}^*_m(c)} = c_k. \qquad (5)$$

However, $\hat{x}^*_m(\bar{c})$ is optimal for $\hat{P}$ given $\bar{c}$. Employing an equation similar to (5) proves that if $\hat{x}^*_m(c) = \hat{x}^*_m(\bar{c})$, then it can only hold that $\hat{x}^*_{(m,k)}(c) = \hat{x}^*_{(m,k)}(\bar{c}) = 0$.
Definition 4 (WARP property). The aggregate demand function satisfies the Weak Axiom of Revealed Preferences (WARP) if, for different price vectors $c$ and $\bar{c}$, it holds that:

$$c^T \cdot \sum_{m \in M} \hat{x}^*_m(\bar{c}) \le c^T \cdot \sum_{m \in M} \hat{x}^*_m(c) \;\Rightarrow\; \bar{c}^T \cdot \sum_{m \in M} \hat{x}^*_m(\bar{c}) < \bar{c}^T \cdot \sum_{m \in M} \hat{x}^*_m(c).$$

Lemma 5. The aggregate demand function satisfies the WARP for distinct price vectors $c, \bar{c}$ such that $\sum_{m \in M} \hat{x}^*_m(c) \succ 0$ and $\sum_{m \in M} \hat{x}^*_m(\bar{c}) \succ 0$.
Proof: Since $c \neq \bar{c}$, $\exists k \in K : c_k \neq \bar{c}_k$. Furthermore, we have that $\sum_{m \in M} \hat{x}^*_m(c) \succ 0$, and hence $\exists m_1 \in M$ such that $\hat{x}^*_{m_1,k}(c) > 0$. Using Lemma 4, we conclude that $\hat{x}^*_{m_1}(c) \neq \hat{x}^*_{m_1}(\bar{c})$. Hence, since Problem $\hat{P}$ admits a unique global maximum, we have that:

$$\sum_{m \in M} \Big[ U_m(\hat{x}^*_m(c)) - \lambda_m \|\hat{x}^*_m(c)\|_2^2 - c^T \hat{x}^*_m(c) \Big] > \sum_{m \in M} \Big[ U_m(\hat{x}^*_m(\bar{c})) - \lambda_m \|\hat{x}^*_m(\bar{c})\|_2^2 - c^T \hat{x}^*_m(\bar{c}) \Big].$$

Now, the above combined with the WARP hypothesis,

$$\sum_{m \in M} c^T \hat{x}^*_m(\bar{c}) \le \sum_{m \in M} c^T \hat{x}^*_m(c),$$

gives:

$$\sum_{m \in M} \Big[ U_m(\hat{x}^*_m(c)) - \lambda_m \|\hat{x}^*_m(c)\|_2^2 \Big] > \sum_{m \in M} \Big[ U_m(\hat{x}^*_m(\bar{c})) - \lambda_m \|\hat{x}^*_m(\bar{c})\|_2^2 \Big]. \qquad (6)$$

The result follows by switching the roles of $c$ and $\bar{c}$ and combining the two inequalities.

We can now prove Theorem 3 as follows.

Proof of Theorem 3: Towards a contradiction, assume that there exist two distinct (non-zero) price vectors $c$ and $\bar{c}$ that satisfy $\sum_{m \in M} \hat{x}^*_m(\bar{c}) = \sum_{m \in M} \hat{x}^*_m(c) = C$, and thus

$$c^T \Big( \sum_{m \in M} \hat{x}^*_m(\bar{c}) - \sum_{m \in M} \hat{x}^*_m(c) \Big) = 0. \qquad (7)$$

Therefore, from Lemma 5 we know that:

$$\bar{c}^T \sum_{m \in M} \hat{x}^*_m(\bar{c}) < \bar{c}^T \sum_{m \in M} \hat{x}^*_m(c), \qquad (8)$$

which is a contradiction, since by hypothesis both aggregate demands equal $C$.
Remark 3. Theorems 2 and 3, together with Remarks 1 and 2, imply that if the users' traffic is elastic, or the total capacity $C$ of the NPs is sufficiently large, the clock auction converges monotonically to the unique competitive equilibrium of the market.

At the end of step S2, the final price vector $\hat{c}^\dagger$ and the final demands of each SP $m$, $\hat{x}^*_m$, have been determined.
B. Intra-Slice Resource Allocation & Feedback (Steps S3, S4)

At the beginning of step S3, every SP $m$ is aware of the convergent point $\hat{x}^*_m$ and hence can allocate the resources either by solving the sigmoid program IN-SL, or by using the convergent approximate solution $\hat{r}^*_m$. At that step, an SP can also determine whether it will overbook network resources. Overbooking is a common practice in the airline and hotel industries and is now being used in the network slicing problem [32], [33]. This management model allocates the same resources to multiple users of the network, expecting that not everyone uses their booked capacity. In that case, SP $m$ solves Problem IN-SL while setting increased obtained resources, $x^{ov}_m = \hat{x}^*_m + \alpha\% \circ \hat{x}^*_m$, for a relatively small positive $\alpha$. Here, $\circ$ denotes the component-wise multiplication operator.
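The overbooked capacity $x^{ov}_m$ is just a component-wise inflation of the equilibrium demand. A minimal sketch (the numbers are illustrative; $\alpha = 5$ matches the 'oSPP(5%)' method of Section V):

```python
import numpy as np

def overbook(x_star, alpha):
    """x_ov = x* + alpha% o x*: inflate the equilibrium demand
    component-wise before re-solving IN-SL (alpha in percent)."""
    x_star = np.asarray(x_star, dtype=float)
    return x_star * (1.0 + alpha / 100.0)

x_ov = overbook([850.0, 750.0], alpha=5.0)   # 5% overbooking
```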
During step S4 of the cycle, each user $i$ receives its resources $r_i$ and provides feedback on whether it was satisfied or not. In the next step, the SPs can use these responses to learn the private parameters of the different service classes.
C. Learning the Parameters (Step S5)

At the final step of the cycle, the SPs exploit the data they obtained to learn the private parameters of their users. In that fashion, the market "learns" its equilibrium. For the rest of the paper, for generality, we assume the pricing mechanism introduced in Section II-B1. Therefore, for every user $i$, the SPs get to know whether it is satisfied by the pair of resources and price $(z_i, p_i)$. A Bayesian inference model needs the data, a model for the private parameters, and a prior distribution.

Model: The observed data are the outcomes of the Bernoulli variables $\text{sat}_i \,|\, \theta_{c(i)} \sim \text{Bernoulli}(P[\text{sat}_i])$ for every user $i$, where $\theta_{c(i)} = (t^p_{c(i)}, b_{c(i)}, t^z_{c(i)}, k_{c(i)})$ is the tuple of private parameters that we want to infer. Prior: Let the prior distributions of the parameters of $\theta_{c(i)}$ have probability density functions $\pi_{t^p_{c(i)}}(\cdot)$, $\pi_{b_{c(i)}}(\cdot)$, $\pi_{t^z_{c(i)}}(\cdot)$ and $\pi_{k_{c(i)}}(\cdot)$ respectively. The SPs infer the private parameters $\theta_{c(i)}$ for each service class separately, using the Bayes rule: $p(\theta_{c(i)} \,|\, \text{data}) \propto L_n(\text{data} \,|\, \theta_{c(i)}) \, \pi(\theta_{c(i)})$, where $p(\theta_{c(i)} \,|\, \text{data})$ is the posterior distribution of $\theta_{c(i)}$, $L_n(\text{data} \,|\, \theta_{c(i)})$ is the likelihood of the data given our model, and $\pi(\theta_{c(i)})$ is the prior distribution. Assuming independent private parameters, $\pi(\theta_{c(i)})$ is the product of the distinct prior distributions, and for each class $c$ we have that:

$$L_n(\text{data} \,|\, \theta_{c(i)}) = \prod_{i \in C^m_c} P[\text{sat}_i]^{f_i} (1 - P[\text{sat}_i])^{1 - f_i},$$

where $f_i$ is 1 when user $i$ is satisfied and 0 otherwise. The SPs can use Markov Chain Monte Carlo (MCMC) with Metropolis sampling to find the posterior distribution after each market cycle. As the market evolves, the SPs exploit the previous posterior distributions to find better priors for the next cycle.
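Step S5 can be prototyped with a random-walk Metropolis sampler. The sketch below is a simplified, illustrative version with only the QoS parameters $(t^z, k)$ of one class, flat priors, and synthetic feedback generated with assumed true values $t = 0.2$, $k = 100$; all numbers and helper names are our assumptions:

```python
import math
import random

random.seed(0)

# Synthetic satisfied/unsatisfied flags for one service class.
t_true, k_true = 0.2, 100.0
data = []
for _ in range(400):
    z = random.uniform(40.0, 160.0)
    p_sat = 1.0 / (1.0 + math.exp(-t_true * (z - k_true)))
    data.append((z, 1 if random.random() < p_sat else 0))

def log_post(t, k):
    """Log posterior with flat priors on t in (0, 5) and k in (0, 300)."""
    if not (0.0 < t < 5.0 and 0.0 < k < 300.0):
        return -math.inf
    lp = 0.0
    for z, f in data:
        p = 1.0 / (1.0 + math.exp(-t * (z - k)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)       # guard the logs
        lp += f * math.log(p) + (1 - f) * math.log(1.0 - p)
    return lp

# Random-walk Metropolis over theta = (t, k).
t, k = 1.0, 150.0                       # deliberately bad starting point
lp = log_post(t, k)
samples = []
for i in range(4000):
    t_new, k_new = t + random.gauss(0, 0.05), k + random.gauss(0, 4.0)
    lp_new = log_post(t_new, k_new)
    if math.log(random.random()) < lp_new - lp:   # Metropolis accept rule
        t, k, lp = t_new, k_new, lp_new
    if i >= 2000:                        # discard burn-in
        samples.append((t, k))

k_mean = sum(s[1] for s in samples) / len(samples)
t_mean = sum(s[0] for s in samples) / len(samples)
```

With a few hundred feedback flags, the posterior means should land near the true class parameters, mirroring how an SP refines its priors from cycle to cycle.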
IV. CENTRALIZED SOLUTION

In case there exists a centralized entity that knows the utility function of every SP, it can optimize the social welfare, i.e., the sum of the utility functions of the service and network providers. This centralized problem can be formulated as follows:

$$\text{(SWM):} \quad \max_{r_m} \; \sum_{m \in M} u_m(r_m) \quad \text{s.t.} \quad r_i \succeq 0 \;\; \forall i \in U_m, \qquad \sum_{m \in M} \sum_{i \in U_m} r_i \preceq C$$

The SWM problem can be solved with any chosen positive approximation error using the framework of sigmoidal programming [27].
V. NUMERICAL RESULTS

A. Auction Convergence & Parameter Tuning

In this section, we study the convergence of the clock auction, as well as the impact that the various parameters have on its behavior. For this simulation, we assume a small market with 3 NPs with capacities $C_1 = 850$, $C_2 = 750$, $C_3 = 755$, and 5 SPs, each with 6 users and 3 distinct service classes. The users' private parameters are set as follows: for a user $i$ in the first class, $t^z_{c(i)} = t^p_{c(i)} = 0.2$ and $k_{c(i)} = b_{c(i)} = 100$; for the second class, $t^z_{c(i)} = t^p_{c(i)} = 2$ and $k_{c(i)} = b_{c(i)} = 120$; and for the third class, $t^z_{c(i)} = t^p_{c(i)} = 20$ and $k_{c(i)} = b_{c(i)} = 150$. Such values indicate that the users wish to pay a unit of monetary value for a unit of offered resources.

To discretize the auction, we change the cost vector according to a step value $\kappa$, as $c_{t+1} = c_t + \kappa Z(c_t)$. Fig. 2 depicts the $L_2$ norm of the excess demand vector throughout the clock auction for different cost vector initializations $c_{init}$ (Fig. 2a) and for different step values $\kappa$ (Fig. 2b). By simulating the clock auction, we deduce that the clearing price vector is $c^{\dagger T} = [0.6116, 0.6273, 0.5811]$. In Fig. 2a, note that the closer the initialization cost vector is to $c^\dagger$, the faster the convergence becomes. Fig. 2b connotes the need for a proper choice of the step value $\kappa$. Clearly, $\kappa = 10^{-4}$ gives the fastest convergence, and as we decrease the step value the convergence becomes slower. Nevertheless, since Theorem 2 is proved for the continuous case, large values of $\kappa$ cannot guarantee the convergence of the auction to an equilibrium. In Fig. 3, observe that the convergence of the auction does not depend on the initialization of the cost vector (Theorem 2).

Fig. 2: $L_2$ norm of the excess demand vector throughout the clock auction (a) for $\kappa = 10^{-4}$ and various initialization price vectors $c_{init}$, and (b) for $c_{init}^T = [0.62, 0.64, 0.58]$ and different values of $\kappa$.

Fig. 3: Illustrating Theorem 2. Starting from any price vector $c_{init}$, the clock auction converges to the market clearing prices $c^\dagger$.
Fig. 4: Total amount of resources obtained by every SP $m$ from every NP $k$ in the market, $x_{(m,k)}$, under the methods Auction, SPP, oSPP(5%) and SWM.
+ every NP k in the market, x(m,k).
1293
+ B. Visualization of the Resource Allocation
1294
+ In this section, we get insights on the allocation of
1295
+ the resources in the market. We assume 2 NPs with
1296
+ C1 = C2 = 1400 and 2 SPs with 10 users each and one shared
1297
+ service class with tz
1298
+ c(i) = tp
1299
+ c(i) = 0.2 and kc(i) = bc(i) = 100
1300
+ for all i. The first SP (SP1) is near the first NP (NP1)
1301
+ and far from NP2 and hence, we set [β(1,1), . . . , β(1,10)] =
1302
+ [0.99, 0.96, 0.87, 0.85, 0.82, 0.81, 0.80, 0.80, 0.70, 0.70]
1303
+ and
1304
+ β(2,i) = 0.2, ∀i ∈ U1. Moreover, for the users of SP2 we set
1305
+ β1,i = β2,i = 0.8, ∀i ∈ U2.
1306
+ We compare the resource allocation of four different
1307
+ methods. First, ’Auction’ refers to the resource allocation that
1308
+ results immediately after the auction. ’SPP’ takes ˆx∗
1309
+ m from the
1310
+ equilibrium but performs the intra-slice of every SP by solving
1311
+ IN − SL. We also study the method ’oSPP(5%)’, which
1312
+ mimics the SPP method but with 5% overbooked resources.
1313
+ Finally, ’SWM’ refers to the solution of the Problem SW M.
1314
+ Fig. 4 shows the amount of resources the two SPs obtain from
+ each NP. All methods allocate the majority of the resources of
+ NP1 to SP1, since its users have greater connectivity with it.
+ Although the users of SP2 have equally high connectivity with
+ both NPs, all four methods were flexible enough to allocate the
+ resources of NP2 to SP2. Note that none of the methods gives
+ resources from NP2 to SP1.
+ Fig. 5 depicts the intra-slice resource allocations. In Fig.
+ 5a, observe that the greater a user's connectivity is, the
+ fewer resources it gets. That is because users with good
+ connectivity factors meet their prerequisite QoS using fewer
+ resources, and hence SP1 can maximize its expected profit by
+ giving them less. Note that 'SPP' gives no resources to the
+ user with the worst connectivity, whereas with the overbooking,
+ SP1 gets enough resources to make attractive offers to every
+ user. Therefore, 'SPP' might produce an unfair allocation:
+ when the resources are not enough, it neglects the users with
+ bad connectivity. In Fig. 5c, note that the homogeneity in the
+ connectivities of the users of SP2 forces every method to
+ divide the resources fairly among them.
+ Fig. 6a shows the expected value of the total revenue, i.e.,
+ the social welfare. 'SWM' gives the greatest revenue among the
+ methods that do not overbook. Nevertheless, although 'SPP' is a
+ completely distributed solution and was not designed to
+ maximize the total revenue, it performs very close to 'SWM'.
+ Moreover, a 5% overbooking leads to greater revenues.
+ [Figure: three bar charts of per-user allocations for user IDs
+ 1-10 under the Auction, SPP, oSPP(5%), and SWM methods.]
+ Fig. 5: The solution of the intra-slice resource allocation
+ problem from the perspective of the two different SPs of the
+ market. Specifically, how (a) SP1 distributed the resources of
+ NP1, i.e., r(1,i) for every i in U1, (b) SP2 distributed the
+ resources of NP1, i.e., r(1,i) for every i in U2, and (c) SP2
+ distributed the resources of NP2, i.e., r(2,i) for every i in U2.
+ [Figure: bar charts of expected revenue per method. (a) Total:
+ Auction 1575.31, SPP 1598.16, oSPP(5%) 1677.81, SWM 1611.54.
+ (b) SP1: Auction 746.85, SPP 769.65, oSPP(5%) 827.42, SWM
+ 806.49. (c) SP2: Auction 828.46, SPP 828.51, oSPP(5%) 850.39,
+ SWM 805.06.]
+ Fig. 6: Illustrating the expected revenue (given by Eq. (3))
+ for the four different resource allocation methods. Fig. (a)
+ shows the aggregated expected revenue, Fig. (b) shows the
+ expected revenue of SP1, and Fig. (c) shows the expected
+ revenue of SP2.
+ C. Impact of Bayesian Inference
+ The previous results are extracted after a sufficient number of
+ cycles, when the SPs have learned the parameters of the
+ end-users. In this section, we consider an SP with 10 users and
+ one service class that employs Bayesian inference to learn the
+ private parameter t^z_c(i) for every i. We set the true value
+ of the parameter to t^z_c(i) = 2. The other parameters are set
+ to t^p_c(i) = 2, k_c(i) = b_c(i) = 120 and β(1,i) = 0.9,
+ ∀i ∈ U1. We assume one more SP with a unique service class with
+ t^p_c(i) = t^z_c(i) = 0.2, k_c(i) = b_c(i) = 100 and
+ β(2,i) = 0.9, ∀i ∈ U2. Finally, there are 2 NPs with
+ C1 = C2 = 1200.
+ In this example, SP1 sets as prior distribution the normal
+ N(0.02, 2) and hence assumes elastic traffic. At the end of
+ each market cycle, the SP makes an estimate, t̂^z_c(i), by
+ calculating the mean of the posterior distribution. Fig. 7
+ depicts the histogram of the posterior distribution for the
+ first two market cycles. Observe that already by the third
+ market cycle, SP1 can estimate the actual value of the
+ parameter with high accuracy. In Table I, note that the
+ perceived revenue, i.e., the expected revenue calculated using
+ the estimate, differs from the actual revenue in the cycles
+ where t̂^z_c(i) differs from t^z_c(i). Hence, it is impossible
+ for the SPs to maximize their expected profits when they do not
+ know the actual values of the parameters. Indeed, observe that
+ the bad estimate t̂^z_c(i) = 0.02 gives poor expected revenue
+ compared to the last two cycles.
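The inference loop above can be sketched as a conjugate normal-normal update (an illustrative assumption: Gaussian observations with known noise variance, which may differ from the paper's actual observation model; the prior N(0.02, 2) is read here as mean 0.02, variance 2):

```python
import random

def posterior_update(mu0, var0, obs, noise_var=1.0):
    """Conjugate normal-normal update: fold one batch of observations
    into the posterior mean/variance of the unknown parameter."""
    mu, var = mu0, var0
    for y in obs:
        k = var / (var + noise_var)   # Kalman-style gain
        mu = mu + k * (y - mu)
        var = (1.0 - k) * var
    return mu, var

random.seed(0)
true_tz = 2.0                         # true private parameter t^z_c(i)
mu, var = 0.02, 2.0                   # prior N(0.02, 2): elastic traffic
for cycle in range(1, 4):
    # Hypothetical per-cycle observations of the parameter, noise std 1.
    obs = [random.gauss(true_tz, 1.0) for _ in range(20)]
    mu, var = posterior_update(mu, var, obs)
    print(f"cycle {cycle}: posterior mean = {mu:.2f}")
```

As in Table I, the posterior mean moves from the badly mis-specified prior toward the true value within a few market cycles, while the posterior variance shrinks.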
+ VI. CONCLUDING REMARKS
+ In this paper we focus on the technical and economic challenges
+ that emerge from the application of the network slicing
+ architecture to real-world scenarios. Taking into
+ [Figure: two posterior histograms.]
+ Fig. 7: Posterior distribution of the unknown private parameter
+ t^z_c(i) in (a) the first Market Cycle, and (b) the second
+ Market Cycle.
+ Cycle | t̂^z_c(i) | Acquired Resources | Perceived Revenue | Actual Revenue
+   1   |   0.02   |        1087        |      530.26       |     699
+   2   |   1.68   |        1370        |     1160.77       |    1163.48
+   3   |   2.01   |        1365        |     1161.42       |    1161.42
+ TABLE I: Bayesian inference in different market cycles.
+ consideration the heterogeneity of the users' service classes,
+ we introduce an iterative market model along with a clock
+ auction that converges to a robust ε-competitive equilibrium.
+ Finally, we propose a Bayesian inference model for the SPs to
+ learn the private parameters of their users and make the next
+ equilibria more efficient. Numerical results validate the
+ convergence of the clock auction and the capability of the
+ proposed framework to capture the different incentives.
39E1T4oBgHgl3EQfAgJ2/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
4NAzT4oBgHgl3EQf9f64/content/tmp_files/2301.01921v1.pdf.txt ADDED
+ Control over Berry Curvature Dipole with Electric Field in WTe2
+ Xing-Guo Ye,1,* Huiying Liu,2,* Peng-Fei Zhu,1,* Wen-Zheng Xu,1,*
+ Shengyuan A. Yang,2 Nianze Shang,1 Kaihui Liu,1 and Zhi-Min Liao1,†
+ 1State Key Laboratory for Mesoscopic Physics and Frontiers Science
+ Center for Nano-optoelectronics, School of Physics, Peking
+ University, Beijing 100871, China
+ 2Research Laboratory for Quantum Materials, Singapore University
+ of Technology and Design, Singapore 487372, Singapore
+ Berry curvature dipole plays an important role in various
+ nonlinear quantum phenomena. However, the maximum symmetry
+ allowed for a nonzero Berry curvature dipole in the transport
+ plane is a single mirror line, which strongly limits its effects
+ in materials. Here, via probing the nonlinear Hall effect, we
+ demonstrate the generation of a Berry curvature dipole by an
+ applied dc electric field in WTe2, which is used to break the
+ symmetry constraint. A linear dependence between the dipole
+ moment of the Berry curvature and the dc electric field is
+ observed. The polarization direction of the Berry curvature is
+ controlled by the relative orientation of the electric field and
+ the crystal axis, and can be further reversed by changing the
+ polarity of the dc field. Our Letter provides a route to
+ generate and control Berry curvature dipole in broad material
+ systems and to facilitate the development of nonlinear quantum
+ devices.
+ Berry curvature is an important geometrical property of Bloch
+ bands, which can lead to a transverse velocity of Bloch
+ electrons moving under an external electric field [1–6]. Hence,
+ it is often regarded as a kind of magnetic field in momentum
+ space, leading to various exotic transport phenomena, such as
+ the anomalous Hall effect (AHE) [1], the anomalous Nernst effect
+ [7], and an extra phase shift in quantum oscillations [8]. The
+ integral of the Berry curvature over the Brillouin zone for
+ fully occupied bands gives rise to the Chern number [5], which
+ is one of the central concepts of topological physics.
+ Recently, Sodemann and Fu [9] proposed that the dipole moment of
+ the Berry curvature over the occupied states, known as the Berry
+ curvature dipole (BCD), plays an important role in the
+ second-order nonlinear AHE in time-reversal-invariant materials.
+ For transport in the x-y plane, which is typical in experiments,
+ the relevant BCD components form an in-plane pseudovector with
+ D_α = ∫_k f0 (∂_α Ω_z) [9], where D_α is the BCD component along
+ direction α, k is the wave vector, the integral is over the
+ Brillouin zone with summation over the band index, f0 is the
+ Fermi distribution (in the absence of external field), Ω_z is
+ the out-of-plane Berry curvature, and ∂_α = ∂/∂k_α. It results
+ in a second-harmonic Hall voltage in response to a longitudinal
+ ac probe current, which could find useful applications in
+ high-frequency rectifiers, wireless charging, energy harvesting,
+ infrared detection, etc. The BCD and its associated nonlinear
+ AHE have been predicted in several material systems [9–11] and
+ experimentally detected in systems such as two-dimensional (2D)
+ monolayer or few-layer WTe2 [12–15], the Weyl semimetal TaIrTe4
+ [16], 2D MoS2 and WSe2 [17–20], corrugated bilayer graphene
+ [21], and a few topological materials [22–25]. However, a severe
+ limitation is that the BCD obeys a rather stringent symmetry
+ constraint. In the transport plane, the maximum symmetry allowed
+ for D_α is a single mirror line [9]. In several previous Letters
+ [17–21], one needs to perform additional material engineering,
+ such as lattice strain or interlayer twisting, to generate a
+ sizable BCD. This constraint limits the available material
+ platforms with nonzero BCD, unfavorable for the in-depth
+ exploration of BCD-related physics and practical applications.
+ Recent works suggested an alternative route to obtain a nonzero
+ BCD, that is, utilizing the Berry connection polarizability to
+ achieve a field-induced BCD, where additional lattice
+ engineering is unnecessary [26,27]. The Berry connection
+ polarizability is also a band geometric quantity, related to
+ the field-induced positional shift of Bloch electrons [28]. It
+ is a second-rank tensor, defined as G_ab(k) = ∂A^(1)_a(k)/∂E_b,
+ where A^(1) is the field-induced Berry connection, E is the
+ applied electric field [28], and the superscript "(1)"
+ indicates that the quantity is first order in the electric
+ field. Then, the E-field-induced Berry curvature is given by
+ Ω^(1) = ∇_k × (G⃡E) [27], where the double arrow indicates a
+ second-rank tensor. This field-induced Berry curvature will
+ lead to a field-induced BCD D^(1)_α. Considering transport in
+ the x-y plane with the applied dc E field also in the plane, we
+ have D^(1)_α = ∫_k f0 (∂_α Ω^(1)_z) =
+ ε_zγμ ∫_k f0 [∂_α(∂_γ G_μν)] E_ν, where α, γ, μ, ν = x, y, and
+ ε_zγμ is the Levi-Civita symbol. In systems where the original
+ BCD is forbidden by the crystal symmetry, the BCD induced by an
+ external E field could generally be nonzero and become the
+ dominant contribution. In such a case, the symmetry is lowered
+ by the applied E field, the induced BCD should be linear in E,
+ and its direction should also be controllable by the E field.
+ So far, this BCD caused by the Berry connection polarizability
+ and its field control have not been experimentally
+ demonstrated, and the nonlinear Hall effect derived from this
+ mechanism has not been observed.
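The linearity of the field-induced dipole, D^(1) ∝ E, can be illustrated with a toy discretization. The response kernel χ(k) below is an arbitrary assumption standing in for the Berry-connection-polarizability term; the point is only that D^(1) scales linearly with E and reverses sign with the field polarity.

```python
import math

def induced_dipole(E, dk=0.05, n=81):
    """Toy field-induced BCD: assume Ω⁽¹⁾(k) = χ(k)·E with an
    illustrative kernel χ; return D⁽¹⁾_x = Σ f0 (∂Ω⁽¹⁾/∂k_x) dk²."""
    ks = [-2.0 + i * dk for i in range(n)]
    chi = [[kx * math.exp(-(kx * kx + ky * ky)) for ky in ks] for kx in ks]
    D = 0.0
    for i in range(1, n - 1):
        for j in range(n):
            if ks[i] ** 2 + ks[j] ** 2 >= 1.0:   # f0 = 0 outside Fermi sea
                continue
            D += E * (chi[i + 1][j] - chi[i - 1][j]) / (2 * dk) * dk * dk
    return D

for E in (-3.0, 1.5, 3.0):
    print(f"E = {E:+.1f}  ->  D1_x = {induced_dipole(E):+.4f}")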
+ In this Letter, we report the manipulation of an electric-field
+ induced BCD due to the Berry connection polarizability.
+ Utilizing a dc electric field Edc to produce a BCD in bulk WTe2
+ (for which the inherent BCD is symmetry forbidden), the
+ second-harmonic Hall voltage V^2ω_H is measured as a response
+ to an applied ac current Iω. Both the orientation and the
+ magnitude of the induced BCD are highly tunable by the applied
+ Edc. Our Letter provides a general route to extend BCD to
+ abundant material platforms with high tunability, promising for
+ practical applications.
+ The WTe2 devices were fabricated with circular disc electrodes
+ (device S1) or Hall-bar shaped electrodes (device S2). The WTe2
+ flakes were exfoliated from a bulk crystal and then transferred
+ onto the prefabricated electrodes (Supplemental Material, Note
+ 1 [29]). The WTe2 thickness of device S1 is 8.4 nm
+ (Supplemental Material, Fig. S1 [29]), corresponding to
+ 12-layer WTe2, and we present the results from device S1 in the
+ main text. The crystal orientations of the WTe2 devices were
+ identified by their long, straight edges [12] and further
+ confirmed by both polarized Raman spectroscopy (Supplemental
+ Material, Note 2 [29]) and angle-dependent transport
+ measurements (Supplemental Material, Note 3 [29]). The electron
+ mobility of device S1 is ~4974 cm²/(V s) at 5 K (Supplemental
+ Material, Note 4 [29]).
+ In our experiments, we use thick Td-WTe2 samples (thickness
+ ~8.4 nm), which have an effective inversion symmetry in the x-y
+ plane (the transport plane). It is formed by the combination of
+ the mirror symmetry Ma and the glide mirror symmetry M̃b, as
+ indicated in Fig. 1(c). The in-plane inversion leads to the
+ absence of an inherent in-plane BCD, and hence of the nonlinear
+ Hall effect, in the bulk (see Supplemental Material, Note 5
+ [29] for a detailed symmetry analysis). Because M̃b involves a
+ half-cell translation along the c axis and hence is broken on
+ the sample surface, a small but nonzero intrinsic BCD may exist
+ on the surface. In fact, such a BCD due to surface symmetry
+ breaking has already been reported [13], and it is also
+ observed in our samples, although the signal is much weaker in
+ thicker samples (see Supplemental Material, Fig. S9 [29]).
+ To induce a BCD in bulk WTe2 through the Berry connection
+ polarizability, a dc electric field Edc is applied in the x-y
+ plane. As shown in Figs. 1(a) and 1(b), the field-induced Berry
+ curvature shows a dipolelike distribution with nonzero BCD
+ (theoretical calculations; see Supplemental Material, Note 6
+ [29]). The induced BCD can be controlled by the dc E field and
+ should satisfy the following symmetry requirements. Because the
+ presence of a mirror symmetry would force the BCD to be
+ perpendicular to the mirror plane [9], the induced BCD D^(1)
+ must be perpendicular to Edc when Edc is along the a or b axis.
+ Control experiments were carried out in device S1 to confirm
+ the above expectations. The measurement configuration is shown
+ in Fig. 1(d) (see Supplemental Material, Fig. S2 [29], for the
+ circuit schematic). The probe ac current with ac field Eω and
+ frequency ω was applied approximately along the −a axis,
+ satisfying Eω ≪ Edc, and the second-harmonic Hall
+ FIG. 1. (a) and (b) The field-induced Berry curvature
+ Ω^(1)_c(k) in the kz = 0 plane for a dc electric field
+ Edc = 3 kV/m applied along the (a) a or (b) b axis,
+ respectively. The unit of Ω^(1)_c(k) is Å². The green arrows
+ indicate the direction of Edc. The gray lines depict the Fermi
+ surface. (c) The a-b plane of monolayer Td-WTe2. (d) The
+ optical image of device S1, where the angle θ is defined. (e)
+ and (f) The second-harmonic Hall voltage V^2ω_H with Edc (e)
+ along the b axis (θ = 0°) and (f) along the −a axis (θ = 90°),
+ at 5 K. The Eω is applied along the −a axis, as schematized
+ in (d).
+ H was measured to reveal the nonlinear Hall
179
+ effect. The Edc that is used to produce BCD was applied
180
+ along the direction characterized by the angle θ, which is
181
+ the angle between the direction of Edc and the baseline of a
182
+ pair of electrodes [white line in Fig. 1(d)] that is approx-
183
+ imately along the b axis. Then Edc along θ ¼ 0° (b axis)
184
+ and θ ¼ 90° (−a axis) correspond to the induced Dð1Þ along
185
+ the a axis and b axis, respectively. Because the nonlinear
186
+ Hall voltage V2ω
187
+ H
188
+ is proportional to Dð1Þ · Eω [9], the
189
+ nonlinear Hall effect should be observed for EωkDð1Þ
190
+ and be vanishing for Eω⊥Dð1Þ.
191
+ As shown in Fig. 1(e), when Edc along θ ¼ 0°, nonlinear
192
+ Hall voltage V2ω
193
+ H is indeed observed as expected. The Edc
194
+ along the b axis induces BCD along the a axis, leading to
195
+ nonzero V2ω
196
+ H since Eω is applied along the −a axis. The
197
+ second-order nature is verified by both the second-
198
+ harmonic signal and parabolic I-V characteristics. It is
199
+ found that the nonlinear Hall voltage is highly tunable by
200
+ the magnitude of Edc. The sign reverses when Edc is
201
+ reversed. Moreover, the nonlinear Hall voltage is linearly
202
+ proportional to Edc (Supplemental Material [29] Fig. S11),
203
+ as we expected. As for Edc along θ ¼ 90°, as shown in
204
+ Fig. 1(f), the V2ω
205
+ H is much suppressed, which is at least one
206
+ order of magnitude smaller than the V2ω
207
+ H
208
+ in Fig. 1(e).
209
+ Because in this case the Edc along the a axis induces BCD
210
+ along the b axis, Eω is almost perpendicular to BCD,
211
+ leading to negligible nonlinear Hall effect. Similar results
212
+ are also reproduced in device S2 (Supplemental Material
213
+ [29], Fig. S12). Such control experiments are well con-
214
+ sistent with our theoretical expectation and confirm the
215
+ validity of field-induced BCD.
216
+ Besides the crystalline axis (θ ¼ 0° and 90°), we also
217
+ study the case when Edc is applied along arbitrary θ
218
+ directions to obtain the complete angle dependence of
219
+ field-induced BCD. Here, Eω is applied along the −a or b
220
+ axis, to detect the BCD component along the a or b axis,
221
+ i.e., Dð1Þ ¼ ½Dð1Þ
222
+ a ðθÞ; Dð1Þ
223
+ b ðθÞ�, where Dð1Þ
224
+ a
225
+ and Dð1Þ
226
+ b
227
+ are the
228
+ BCD components along the a and b axis, respectively. The
229
+ measurement configurations are shown in Figs. 2(a) and
230
+ 2(d). Figures 2(b) and 2(e) show the second-order Hall
231
+ voltage as a function of θ, with the magnitude of Edc fixed
232
+ at 3 kV=m. The second-order Hall response ½E2ω
233
+ H =ðEωÞ2� is
234
+ calculated by E2ω
235
+ H ¼ ðV2ω
236
+ H =WÞ and Eω ¼ ðIωRk=LÞ, where
237
+ W is the channel width, Rk is the longitudinal resistance,
238
+ and L is the channel length. As shown in Figs. 2(c) and 2(f),
239
+ ½E2ω
240
+ H =ðEωÞ2� demonstrates a strong anisotropy, closely
241
+ related to the inherent symmetry of WTe2. First of all, it
242
+ is worth noting that the second-order Hall signal is
243
+ negligible at Edc ¼ 0. This is consistent with our previous
244
+ analysis that the inherent bulk in-plane BCD is symmetry
245
+ forbidden [26,27]. Second, ½E2ω
246
+ H =ðEωÞ2� almost vanishes
247
+ when EdckEω along a or b axis. This is constrained by the
248
+ mirror symmetries Ma or
249
+ ˜Mb, forcing the BCD to be
250
+ perpendicular to the mirror plane in such configurations.
251
FIG. 2. (a),(d) Measurement configuration for the second-order AHE with (a) E^ω ∥ -a axis and (d) E^ω ∥ b axis, respectively. The E_dc, satisfying E_dc ≫ E^ω, is rotated along various directions. (b),(e) The second-order Hall voltage V_H^2ω as a function of I^ω at fixed E_dc = 3 kV/m but along various directions and at 5 K with (b) E^ω ∥ -a axis and (e) E^ω ∥ b axis, respectively. (c),(f) The second-order Hall signal [E_H^2ω/(E^ω)^2] as a function of θ at 5 K with (c) E^ω ∥ -a axis and (f) E^ω ∥ b axis, respectively.
266
Thus, when E_dc ∥ E^ω along the a or b axis, the induced BCD is perpendicular to E_dc and E^ω, satisfying D^(1) · E^ω = 0, which leads to almost vanishing second-order Hall signals. Moreover, [E_H^2ω/(E^ω)^2] exhibits a sensitive dependence on the angle θ, indicating the BCD is highly tunable by the orientation of E_dc. A local minimum of [E_H^2ω/(E^ω)^2] is found at an intermediate angle around θ = 30° when E^ω ∥ -a axis in Fig. 2(c). This is because [E_H^2ω/(E^ω)^2] depends not only on D^(1) · Ê^ω, i.e., the projection of the pseudovector D^(1) onto the direction of E^ω, but also on the anisotropy of the conductivity in WTe2. The two terms show different dependences on the angle θ, leading to a local minimum around θ = 30°.
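As a toy illustration of the projection argument discussed above, the angular factor D^(1) · Ê^ω can be sketched as follows; the numbers here are hypothetical and are not fitted to the measured data.

```python
import numpy as np

# Hedged sketch: the measured second-order Hall response contains the
# projection of the induced BCD pseudovector onto the a.c. field direction.
# Here the BCD is taken (illustratively) to point along the E_dc direction θ.
def bcd_projection(theta_deg, e_ac_dir):
    """Unit-magnitude BCD induced along (cos θ, sin θ), projected on E^ω."""
    t = np.radians(theta_deg)
    d = np.array([np.cos(t), np.sin(t)])   # induced BCD direction (toy model)
    return float(d @ e_ac_dir)

e_ac = np.array([-1.0, 0.0])               # E^ω along the -a axis
# The projection vanishes when the BCD is perpendicular to E^ω (θ = 90°),
# and is maximal in magnitude when they are collinear (θ = 0°).
print(bcd_projection(90.0, e_ac), bcd_projection(0.0, e_ac))
```

In the experiment this projection is further weighted by the anisotropic conductivity of WTe2, which is why the measured signal is not a pure cosine in θ.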
283
Through control experiments and symmetry analysis, extrinsic effects, such as the diode effect, thermal effect, and thermoelectric effect, could be safely ruled out as the main origin of the observed second-order nonlinear AHE (see Supplemental Material, Note 9 [29]). To further investigate this effect, the temperature dependence and scaling law of the second-order nonlinear Hall signal are studied. By changing the temperature, V_H^2ω and the longitudinal conductivity σ_xx were collected, with the magnitude of E_dc fixed at 3 kV/m. Figures 3(a) and 3(c) show the V_H^2ω at different temperatures with E^ω ∥ -a axis, θ = 0° and E^ω ∥ b axis, θ = 90°, respectively. A relatively small but nonzero second-order Hall signal is observed at 286 K. The scaling law, that is, the second-order Hall signal [E_H^2ω/(E^ω)^2] versus σ_xx, is presented and analyzed in Figs. 3(b) and 3(d) for different angles θ. The σ_xx was calculated by σ_xx = (1/R_∥)(L/Wd), where d is the thickness of WTe2, and was varied by changing the temperature. According to Ref. [42], the scaling law between [E_H^2ω/(E^ω)^2] and σ_xx satisfies [E_H^2ω/(E^ω)^2] = C0 + C1 σ_xx + C2 σ_xx^2. The coefficients C2 and C1 involve the mixed contributions from various skew scattering processes [42-45], such as impurity scattering, phonon scattering, and mixed scattering from both phonons and impurities [42]. C0 is mainly contributed by the intrinsic mechanism, i.e., the field-induced BCD here. As shown in Figs. 3(b) and 3(d), the scaling law is well fitted for all angles θ.
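The quadratic scaling analysis described above amounts to a polynomial fit in σ_xx. A minimal sketch with synthetic data (the coefficient values are placeholders, not the fitted ones):

```python
import numpy as np

# Hedged sketch of the scaling-law fit
#   E_H^2ω/(E^ω)^2 = C0 + C1*σxx + C2*σxx^2,
# where C0 is read off as the intrinsic (field-induced BCD) part.
rng = np.random.default_rng(0)
sigma_xx = np.linspace(1.0, 5.0, 20)               # arbitrary units
c0_true, c1_true, c2_true = 0.3, -0.05, 0.02       # placeholder coefficients
signal = c0_true + c1_true * sigma_xx + c2_true * sigma_xx**2
signal += rng.normal(scale=1e-4, size=sigma_xx.size)  # small synthetic noise

# np.polyfit returns the highest-order coefficient first: [C2, C1, C0]
c2, c1, c0 = np.polyfit(sigma_xx, signal, deg=2)
print(c0, c1, c2)
```

The intercept c0 then plays the role of the intrinsic coefficient C0 used in the BCD estimate.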
315
It is found that C0 shows strong anisotropy (Supplemental Material [29], Fig. S18), indicating the field-induced BCD is also strongly dependent on the angle θ. The value of the field-induced BCD can be estimated through D = (2ℏ^2 n/m*e)[E_H^2ω/(E^ω)^2] [12], where ℏ is the reduced Planck constant, e is the electron charge, m* = 0.3 m_e is the effective electron mass, and n is the carrier density. Here, we replace [E_H^2ω/(E^ω)^2] by the coefficient C0 from the scaling-law fitting. The two components of the BCD along the a and b axes, denoted as D_a^(1) and D_b^(1), are calculated from the fitting curves with the magnitude of E_dc fixed at 3 kV/m under the E^ω ∥ -a axis and the E^ω ∥ b axis, respectively. As shown in Figs. 4(a) and 4(b), it is found that D_a^(1) shows a cos θ dependence on θ, whereas D_b^(1)
344
FIG. 3. (a),(c) The second-harmonic Hall voltage at various temperatures with the magnitude of E_dc fixed at 3 kV/m (a) under E^ω ∥ -a axis, θ = 0° and (c) under E^ω ∥ b axis, θ = 90°. (b),(d) Second-order Hall signal [E_H^2ω/(E^ω)^2] as a function of σ_xx (b) under E^ω ∥ -a axis and (d) under E^ω ∥ b axis at various θ with the magnitude of E_dc fixed at 3 kV/m. The temperature range for the scaling law in (b) and (d) is 50-286 K.
357
FIG. 4. The induced Berry curvature dipole as a function of θ with the magnitude of E_dc fixed at 3 kV/m for (a) the component along the a axis, D_a^(1), and (b) the component along the b axis, D_b^(1). (c) The relationship between the field-induced Berry curvature dipole D^(1) and the applied E_dc = 3 kV/m along different directions. The scale bar of D^(1) is 0.2 nm.
368
shows a sin θ dependence. Such angle dependence is well consistent with the theoretical predictions (see Supplemental Material [29], Note 6). According to the two components D_a^(1) and D_b^(1), the field-induced BCD vector D^(1) is synthesized for E_dc along various directions, as presented in Fig. 4(c). It is found that both the magnitude and orientation of the field-induced BCD are highly tunable by the dc field.
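The estimate D = (2ℏ^2 n/m*e) C0 and the vector synthesis from the cos θ and sin θ components can be sketched numerically; the carrier density and C0 below are placeholder values, not the measured ones.

```python
import numpy as np

# Physical constants (CODATA values) and the effective mass from the text
hbar = 1.054571817e-34      # reduced Planck constant, J*s
e = 1.602176634e-19         # elementary charge, C
m_e = 9.1093837015e-31      # electron mass, kg
m_eff = 0.3 * m_e           # effective mass used in the main text

n = 1e25                    # carrier density, m^-3 (placeholder)
C0 = 1e-12                  # intrinsic scaling coefficient, m/V (placeholder)

# BCD magnitude estimate, D = (2*hbar^2*n / (m*·e)) * C0, in meters
D = 2 * hbar**2 * n / (m_eff * e) * C0

# Vector synthesis: D_a ∝ cosθ and D_b ∝ sinθ mean the synthesized vector
# rotates with E_dc while keeping a θ-independent magnitude in this model.
theta = np.radians(40.0)
D_vec = np.array([D * np.cos(theta), D * np.sin(theta)])
print(D, D_vec)
```

With the measured C0(θ) values inserted, the same construction reproduces the polar plot of Fig. 4(c).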
379
In summary, we have demonstrated the generation, modulation, and detection of the induced BCD due to the Berry connection polarizability in WTe2. It is found that the direction of the generated BCD is controlled by the relative orientation between the applied E_dc direction and the crystal axes, and its magnitude is proportional to the intensity of E_dc. Using independent control of the two applied fields, our Letter demonstrates an efficient approach to probe the nonlinear transport tensor symmetry, which is also helpful for full characterization of nonlinear transport coefficients. Moreover, the manipulation of the BCD up to room temperature by electric means, without additional symmetry breaking, will greatly extend BCD-related physics [46,47] to more general materials and should be valuable for developing devices utilizing the geometric properties of Bloch electrons.
395
This work was supported by the National Key Research and Development Program of China (No. 2018YFA0703703), the National Natural Science Foundation of China (Grants No. 91964201 and No. 61825401), and Singapore MOE AcRF Tier 2 (MOE-T2EP50220-0011). We are grateful to Dr. Yanfeng Ge at SUTD for inspired discussions.
401
*These authors contributed equally to this work.
[1] N. Nagaosa, J. Sinova, S. Onoda, A. H. MacDonald, and N. P. Ong, Anomalous Hall effect, Rev. Mod. Phys. 82, 1539 (2010).
[2] J. Sinova, S. O. Valenzuela, J. Wunderlich, C. H. Back, and T. Jungwirth, Spin Hall effects, Rev. Mod. Phys. 87, 1213 (2015).
[3] D. Xiao, W. Yao, and Q. Niu, Valley-Contrasting Physics in Graphene: Magnetic Moment and Topological Transport, Phys. Rev. Lett. 99, 236809 (2007).
[4] L. Šmejkal, Y. Mokrousov, B. Yan, and A. H. MacDonald, Topological antiferromagnetic spintronics, Nat. Phys. 14, 242 (2018).
[5] D. Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. Mod. Phys. 82, 1959 (2010).
[6] M.-C. Chang and Q. Niu, Berry phase, hyperorbits, and the Hofstadter spectrum: Semiclassical dynamics in magnetic Bloch bands, Phys. Rev. B 53, 7010 (1996).
[7] M. T. Dau, C. Vergnaud, A. Marty, C. Beigné, S. Gambarelli, V. Maurel, T. Journot, B. Hyot, T. Guillet, B. Grévin et al., The valley Nernst effect in WSe2, Nat. Commun. 10, 5796 (2019).
[8] B.-C. Lin, S. Wang, S. Wiedmann, J.-M. Lu, W.-Z. Zheng, D. Yu, and Z.-M. Liao, Observation of an Odd-Integer Quantum Hall Effect from Topological Surface States in Cd3As2, Phys. Rev. Lett. 122, 036602 (2019).
[9] I. Sodemann and L. Fu, Quantum Nonlinear Hall Effect Induced by Berry Curvature Dipole in Time-Reversal Invariant Materials, Phys. Rev. Lett. 115, 216806 (2015).
[10] J.-S. You, S. Fang, S.-Y. Xu, E. Kaxiras, and T. Low, Berry curvature dipole current in the transition metal dichalcogenides family, Phys. Rev. B 98, 121109(R) (2018).
[11] Y. Zhang, J. van den Brink, C. Felser, and B. Yan, Electrically tunable nonlinear anomalous Hall effect in two-dimensional transition-metal dichalcogenides WTe2 and MoTe2, 2D Mater. 5, 044001 (2018).
[12] Q. Ma, S.-Y. Xu, H. Shen, D. MacNeill, V. Fatemi, T.-R. Chang, A. M. M. Valdivia, S. Wu, Z. Du, C.-H. Hsu et al., Observation of the nonlinear Hall effect under time-reversal-symmetric conditions, Nature (London) 565, 337 (2019).
[13] K. Kang, T. Li, E. Sohn, J. Shan, and K. F. Mak, Nonlinear anomalous Hall effect in few-layer WTe2, Nat. Mater. 18, 324 (2019).
[14] S.-Y. Xu, Q. Ma, H. Shen, V. Fatemi, S. Wu, T.-R. Chang, G. Chang, A. M. M. Valdivia, C.-K. Chan, Q. D. Gibson, J. Zhou, Z. Liu, K. Watanabe, T. Taniguchi, H. Lin, R. J. Cava, L. Fu, N. Gedik, and P. Jarillo-Herrero, Electrically switchable Berry curvature dipole in the monolayer topological insulator WTe2, Nat. Phys. 14, 900 (2018).
[15] J. Xiao, Y. Wang, H. Wang, C. D. Pemmaraju, S. Wang, P. Muscher, E. J. Sie, C. M. Nyby, T. P. Devereaux, X. Qian et al., Berry curvature memory through electrically driven stacking transitions, Nat. Phys. 16, 1028 (2020).
[16] D. Kumar, C.-H. Hsu, R. Sharma, T.-R. Chang, P. Yu, J. Wang, G. Eda, G. Liang, and H. Yang, Room-temperature nonlinear Hall effect and wireless radiofrequency rectification in Weyl semimetal TaIrTe4, Nat. Nanotechnol. 16, 421 (2021).
[17] J. Lee, Z. Wang, H. Xie, K. F. Mak, and J. Shan, Valley magnetoelectricity in single-layer MoS2, Nat. Mater. 16, 887 (2017).
[18] J. Son, K.-H. Kim, Y. H. Ahn, H.-W. Lee, and J. Lee, Strain Engineering of the Berry Curvature Dipole and Valley Magnetization in Monolayer MoS2, Phys. Rev. Lett. 123, 036806 (2019).
[19] M.-S. Qin, P.-F. Zhu, X.-G. Ye, W.-Z. Xu, Z.-H. Song, J. Liang, K. Liu, and Z.-M. Liao, Strain tunable Berry curvature dipole, orbital magnetization and nonlinear Hall effect in WSe2 monolayer, Chin. Phys. Lett. 38, 017301 (2021).
[20] M. Huang, Z. Wu, J. Hu, X. Cai, E. Li, L. An, X. Feng, Z. Ye, N. Lin, K. T. Law et al., Giant nonlinear Hall effect in twisted WSe2, Natl. Sci. Rev. nwac232 (2022).
[21] S.-C. Ho, C.-H. Chang, Y.-C. Hsieh, S.-T. Lo, B. Huang, T.-H.-Y. Vu, C. Ortix, and T.-M. Chen, Hall effects in artificially corrugated bilayer graphene without breaking time-reversal symmetry, Nat. Electron. 4, 116 (2021).
[22] P. He, H. Isobe, D. Zhu, C.-H. Hsu, L. Fu, and H. Yang, Quantum frequency doubling in the topological insulator Bi2Se3, Nat. Commun. 12, 698 (2021).
[23] O. O. Shvetsov, V. D. Esin, A. V. Timonina, N. N. Kolesnikov, and E. V. Deviatov, Non-linear Hall effect in three-dimensional Weyl and Dirac semimetals, JETP Lett. 109, 715 (2019).
[24] S. Dzsaber, X. Yan, M. Taupin, G. Eguchi, A. Prokofiev, T. Shiroka, P. Blaha, O. Rubel, S. E. Grefe, H.-H. Lai et al., Giant spontaneous Hall effect in a nonmagnetic Weyl-Kondo semimetal, Proc. Natl. Acad. Sci. U.S.A. 118, e2013386118 (2021).
[25] A. Tiwari, F. Chen, S. Zhong, E. Drueke, J. Koo, A. Kaczmarek, C. Xiao, J. Gao, X. Luo, Q. Niu et al., Giant c-axis nonlinear anomalous Hall effect in Td-MoTe2 and WTe2, Nat. Commun. 12, 2049 (2021).
[26] S. Lai, H. Liu, Z. Zhang, J. Zhao, X. Feng, N. Wang, C. Tang, Y. Liu, K. S. Novoselov, S. A. Yang et al., Third-order nonlinear Hall effect induced by the Berry-connection polarizability tensor, Nat. Nanotechnol. 16, 869 (2021).
[27] H. Liu, J. Zhao, Y.-X. Huang, X. Feng, C. Xiao, W. Wu, S. Lai, W.-b. Gao, and S. A. Yang, Berry connection polarizability tensor and third-order Hall effect, Phys. Rev. B 105, 045118 (2022).
[28] Y. Gao, S. A. Yang, and Q. Niu, Field Induced Positional Shift of Bloch Electrons and Its Dynamical Implications, Phys. Rev. Lett. 112, 166601 (2014).
[29] See Supplemental Material at http://link.aps.org/supplemental/10.1103/PhysRevLett.130.016301 for device fabrication, electrical measurements, calculation details, polarized Raman spectroscopy of few-layer WTe2, transport properties of the devices, angle-dependent third-order anomalous Hall effect, symmetry analysis of WTe2, theory analysis of the field-induced Berry curvature dipole, control experiments in device S2, extrinsic effects that may induce nonlinear transport, and anisotropy of the scaling parameters, which includes Refs. [30-41].
[30] L. Wang, I. Meric, P. Y. Huang, Q. Gao, Y. Gao, H. Tran, T. Taniguchi, K. Watanabe, L. M. Campos, D. A. Muller et al., One-dimensional electrical contact to a two-dimensional material, Science 342, 614 (2013).
[31] G. Kresse and J. Hafner, Ab initio molecular dynamics for open-shell transition metals, Phys. Rev. B 48, 13115 (1993).
[32] G. Kresse and J. Furthmüller, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54, 11169 (1996).
[33] P. E. Blöchl, Projector augmented-wave method, Phys. Rev. B 50, 17953 (1994).
[34] J. P. Perdew, K. Burke, and M. Ernzerhof, Generalized Gradient Approximation Made Simple, Phys. Rev. Lett. 77, 3865 (1996).
[35] G. Pizzi et al., Wannier90 as a community code: New features and applications, J. Phys. Condens. Matter 32, 165902 (2020).
[36] M. Kim, S. Han, J. H. Kim, J.-U. Lee, Z. Lee, and H. Cheong, Determination of the thickness and orientation of few-layer tungsten ditelluride using polarized Raman spectroscopy, 2D Mater. 3, 034004 (2016).
[37] M. N. Ali, J. Xiong, S. Flynn, J. Tao, Q. D. Gibson, L. M. Schoop, T. Liang, N. Haldolaarachchige, M. Hirschberger, N. P. Ong et al., Large, non-saturating magnetoresistance in WTe2, Nature (London) 514, 205 (2014).
[38] V. Fatemi, Q. D. Gibson, K. Watanabe, T. Taniguchi, R. J. Cava, and P. Jarillo-Herrero, Magnetoresistance and quantum oscillations of an electrostatically tuned semimetal-to-metal transition in ultrathin WTe2, Phys. Rev. B 95, 041410(R) (2017).
[39] X. Zhang, V. Kakani, J. M. Woods, J. J. Cha, and X. Shi, Thickness dependence of magnetotransport properties of tungsten ditelluride, Phys. Rev. B 104, 165126 (2021).
[40] T. Akamatsu et al., A van der Waals interface that creates in-plane polarization and a spontaneous photovoltaic effect, Science 372, 68 (2021).
[41] C. Dames and G. Chen, 1ω, 2ω, and 3ω methods for measurements of thermal properties, Rev. Sci. Instrum. 76, 124902 (2005).
[42] Z. Z. Du, C. M. Wang, S. Li, H.-Z. Lu, and X. C. Xie, Disorder-induced nonlinear Hall effect with time-reversal symmetry, Nat. Commun. 10, 3047 (2019).
[43] Y. Tian, L. Ye, and X. Jin, Proper Scaling of the Anomalous Hall Effect, Phys. Rev. Lett. 103, 087206 (2009).
[44] L. Ye, M. Kang, J. Liu, F. von Cube, C. R. Wicker, T. Suzuki, C. Jozwiak, A. Bostwick, E. Rotenberg, D. C. Bell et al., Massive Dirac fermions in a ferromagnetic kagome metal, Nature (London) 555, 638 (2018).
[45] H. Isobe, S.-Y. Xu, and L. Fu, High-frequency rectification via chiral Bloch electrons, Sci. Adv. 6, eaay2497 (2020).
[46] X.-G. Ye, P.-F. Zhu, W.-Z. Xu, N. Shang, K. Liu, and Z.-M. Liao, Orbit-transfer torque driven field-free switching of perpendicular magnetization, Chin. Phys. Lett. 39, 037303 (2022).
[47] S. Sinha, P. C. Adak, A. Chakraborty, K. Das, K. Debnath, L. D. V. Sangani, K. Watanabe, T. Taniguchi, U. V. Waghmare, A. Agarwal, and M. M. Deshmukh, Berry curvature dipole senses topological transition in a moiré superlattice, Nat. Phys. 18, 765 (2022).
603
+ Supplemental Material for
604
+ Control over Berry curvature dipole with electric field in WTe2
605
+ Xing-Guo Ye1,+, Huiying Liu2,+, Peng-Fei Zhu1,+, Wen-Zheng Xu1,+, Shengyuan A.
606
+ Yang2, Nianze Shang1, Kaihui Liu1, and Zhi-Min Liao1,*
607
+ 1 State Key Laboratory for Mesoscopic Physics and Frontiers Science Center for
608
+ Nano-optoelectronics, School of Physics, Peking University, Beijing 100871, China.
609
+ 2 Research Laboratory for Quantum Materials, Singapore University of Technology
610
+ and Design, Singapore, 487372, Singapore.
611
+ + These authors contributed equally.
612
+ * Email: [email protected]
613
+ This file contains supplemental Figures S1-S18 and Notes 1-10.
614
+ Note 1: Device fabrication, experimental and calculation methods.
615
+ Note 2: Polarized Raman spectroscopy of WTe2.
616
+ Note 3: Angle-dependent longitudinal resistance and third-order nonlinear Hall effect.
617
+ Note 4: Magnetotransport properties of WTe2.
618
+ Note 5: Symmetry analysis of WTe2.
619
+ Note 6: Theoretical analysis and calculations of field-induced Berry curvature dipole.
620
+ Note 7: Electric field dependence of second-order Hall signals.
621
+ Note 8: Control experiments in device S2.
622
+ Note 9: Discussions of other possible origins of the second order AHE.
623
+ Note 10: Angle dependence of parameter C0 obtained from the fittings of scaling law.
624
+
625
+
626
+
627
+ 2
628
+
629
+ Supplemental Note 1: Device fabrication, experimental and calculation methods.
630
+ 1) Device fabrication
631
+ The WTe2 flakes were exfoliated from bulk crystal by scotch tape and then
632
+ transferred onto the polydimethylsiloxane (PDMS). The PDMS was then covered onto
633
+ a Si substrate with 285 nm-thick SiO2, where the Si substrate was precleaned by air
634
+ plasma, and further heated for about 1 minute at 90℃ to transfer the WTe2 flakes onto
635
+ Si substrate. Disk and Hall bar-shaped Ti/Au electrodes (around 10 nm thick) were
636
+ prefabricated on individual SiO2/Si substrates with e-beam lithography, metal
637
+ deposition and lift-off. Exfoliated BN (around 20 nm thick) and WTe2 flakes (around
638
+ 5-20 nm thick) were sequentially picked up and then transferred onto the Ti/Au
639
+ electrodes using a polymer-based dry transfer technique [30]. The atomic force
640
+ microscope image of device S1 is shown in Fig. S1. The thickness of this sample is 8.4
641
+ nm, corresponding to a 12-layer WTe2. The whole exfoliation and transfer processes
642
+ were done in an argon-filled glove box with O2 and H2O content below 0.01 parts per
643
+ million to avoid sample degeneration.
644
+
645
+ Figure S1: (a) The atomic force microscope image of device S1. (b) The line profile
646
+ shows the thickness of the WTe2 sample is 8.4 nm.
647
+
648
+ 0
649
+ 1
650
+ 2
651
+ 3
652
+ 4
653
+ 5
654
+ 0
655
+ 3
656
+ 6
657
+ 9
658
+ Height (nm)
659
+ Line profile (mm)
660
+ 8.4 nm
661
+ 3
662
+ WTe2
663
+ (a)
664
+ (b)
665
+
666
+ 3
667
+
668
+ 2) Electrical transport measurements and circuit schematic
669
+ All the transport measurements were carried out in an Oxford cryostat with a
670
+ variable temperature insert and a superconducting magnet. First-, second- and third-
671
+ harmonic signals were collected by standard lock-in techniques (Stanford Research
672
+ Systems Model SR830) with frequency ω . Frequency  equals 17.777 Hz unless
673
+ otherwise stated.
674
+ The circuit schematic with multiple sources in experiments is depicted in Fig. S2.
675
+ The a.c. and d.c. sources are both effective current sources. The original SR830 a.c.
676
+ source is a voltage source. In experiments, we connected the SR830 voltage source and
677
+ a protective resistor with resistance value 𝑅𝑝 in series (𝑅𝑝 = 100 kΩ for device S1
678
+ and 𝑅𝑝 = 10 kΩ for device S2), as shown in Fig. S2. The resistance of WTe2 channel
679
+ is in the order of 10 Ω, much less than 𝑅𝑝, which makes the SR830 source an effective
680
+ current source with excitation current 𝐼𝜔 ≅ 𝑈𝜔 𝑅𝑝
681
+
682
+ , where 𝑈𝜔 is the source voltage.
683
+ The Keithley 2400 current source is used for the d.c source. As shown in Fig. S2,
684
+ the positive and negative terminals of the Keithley source are connected to a pair of
685
+ diagonal electrodes to form a loop circuit, i.e., a floating loop. The d.c. electric field is
686
+ obtained by 𝐸𝑑𝑐 =
687
+ 𝐼𝑑𝑐𝑅𝜃
688
+ 𝐿 , where 𝐼𝑑𝑐 is the applied d.c. current, 𝑅𝜃 is the resistance
689
+ of WTe2 along direction 𝜃, and 𝐿 is the channel length of WTe2. The impedance of
690
+ the floating Keithley source to ground is measured to be ~60 MΩ. While, the negative
691
+ terminal of SR830 source is directly connected to the ground.
692
+
693
+ 4
694
+
695
+
696
+ Figure S2: Schematic structure of the circuit for measurements in device S1.
697
+
698
+ 3) Spectral purity of lock-in measurements
699
+ For the lock-in measurements, the used integration time is 300 ms and the filter
700
+ roll-off is 24 dB/octave, that is, the cutoff (-3 dB) frequency for the low-pass filter is
701
+ 0.531 Hz and the filter roll-off is 24 dB per octave. For our lock-in measurements, the
702
+ narrow detection bandwidth (±0.531 Hz) effectively avoided the spectral leakage.
703
+ The spectral purity of the lock-in homodyne circuit is verified by the control
704
+ experiments of the lock-in measurements of a resistor. The first-, second- and third-
705
+ harmonic voltages of a resistor with resistance ~100 Ω are measured using the same
706
+ frequency (17.777 Hz), integration time (300 ms) and filter roll-off (24 dB/octave) as
707
+ used in experiments, as shown in Fig. S3. The first-harmonic voltage shows linear
708
+ dependence on the alternating current, consistent with the resistance value ~100 Ω. The
709
+ second- and third-harmonic voltages are four orders of magnitude smaller than the first-
710
+ harmonic voltage, which indicates the high purity of spectrum of the lock-in homodyne
711
+ circuit.
712
+ Keithley 2400
713
+ current source
714
+ SR830 voltage
715
+ source
716
+ SR830 lock-in
717
+ measurement
718
+
719
+ 5
720
+
721
+
722
+ Figure S3: Lock-in measurements for a resistor with resistance ~𝟏𝟎𝟎 𝛀.
723
+ a, The first-harmonic voltage versus the alternating current.
724
+ b, The second- and third-harmonic voltages versus the alternating current.
725
+
726
+ 4) Validity of electrical measurements with the two sources
727
+ In our experiments, the Keithley source is used as the d.c. current source, which
728
+ has an output impedance ~20 MΩ. The a.c. current source is realized by connecting a
729
+ resistor 𝑅𝑝 in series (𝑅𝑝 = 100 kΩ for device S1 and 𝑅𝑝 = 10 kΩ for device S2) in
730
+ series with the SR830 voltage source. Both the a.c. and d.c. current sources have
731
+ effectively large output impedance comparing to the sample resistance ~10 Ω, so that
732
+ they can be considered as independent current sources. These two current sources can
733
+ be applied to the device simultaneously, having well-defined potential differences. To
734
+ further confirm the validity of our electrical measurements with the two current sources,
735
+ we design a test circuit, as shown in Fig. S4(a). The a.c. current flowing through 𝑅2
736
+ was calculated by measuring the first-harmonic voltage 𝑉ω of 𝑅2 and 𝐼𝜔 = 𝑉𝜔/𝑅2.
737
+ The d.c. current is applied by the Keithley current source and is measured by measuring
738
+ the d.c. voltage 𝑉𝑑𝑐 of 𝑅2 and 𝐼𝑑𝑐 = 𝑉𝑑𝑐/𝑅2. As shown in Fig. S4(b), where the a.c.
739
+ voltage of SR830 source is fixed at 1 V, it is found that the 𝐼𝜔 is unchanged when
740
+ 0
741
+ 0.1
742
+ 0.2
743
+ 0.3
744
+ 0.4
745
+ 0.5
746
+ 0
747
+ 10
748
+ 20
749
+ 30
750
+ 40
751
+ 50
752
+ V (mV)
753
+ I (mA)
754
+ 0
755
+ 0.1
756
+ 0.2
757
+ 0.3
758
+ 0.4
759
+ 0.5
760
+ -1
761
+ 0
762
+ 1
763
+ n = 2
764
+ n = 3
765
+ Vn (mV)
766
+ I (mA)
767
+ (a)
768
+ (b)
769
+
770
+ 6
771
+
772
+ varying the d.c. current by Keithley source, while measured 𝐼𝑑𝑐 is almost the same as
773
+ the output current of the Keithley source. In Fig. S4(c), where the d.c. current of
774
+ Keithley source is fixed, it is found that 𝐼𝜔 well satisfies 𝐼𝜔 = 𝑈𝜔/(𝑅1 + 𝑅2 +
775
+ 𝑅3) ≅ 𝑈𝜔 𝑅𝑝
776
+
777
+ with 𝑈𝜔 as the SR830 source voltage and 𝑅𝑝 = 𝑅1 . These results
778
+ clearly confirm the a.c. and d.c. sources are effectively independent with negligible
779
+ current shunt between each other.
780
+
781
+ Figure S4: Validity of the electrical measurements with two sources.
782
+ a, Schematic of the test circuit.
783
+ b, The 𝐼𝜔 and 𝐼𝑑𝑐 as a function of the Keithley source current with SR830 source
784
+ voltage 𝑈𝜔 fixed at 1 V.
785
+ c, The 𝐼𝜔 and 𝐼𝑑𝑐 as a function of the SR830 source voltage 𝑈𝜔 with Keithley
786
+ source current fixed at 1 mA.
787
+
788
+ SR830 voltage source
789
+ Keithley 2400 current source
790
+ SR830 lock-in
791
+ measurement
792
+ (a)
793
+ (b)
794
+ (c)
795
+ 0
796
+ 1
797
+ 2
798
+ 3
799
+ 4
800
+ 5
801
+ 0
802
+ 10
803
+ 20
804
+ 30
805
+ 40
806
+ 50
807
+ I (mA)
808
+ U (V)
809
+ 0.9
810
+ 1
811
+ 1.1
812
+ Keithley source current=1 mA
813
+ Idc (mA)
814
+ -4
815
+ -2
816
+ 0
817
+ 2
818
+ 4
819
+ 8
820
+ 10
821
+ 12
822
+ I (mA)
823
+ Keithley source current (mA)
824
+ U = 1 V
825
+ -4
826
+ -2
827
+ 0
828
+ 2
829
+ 4
830
+ Idc (mA)
831
+
832
+ 7
833
+
834
+ 5) Calculation methods
835
+ First-principles calculations were performed to reveal the properties of the Berry
836
+ connection polarizability tensor and field-induced Berry curvature dipole in WTe2. The
837
+ electronic structures were carried out in the framework of density functional theory as
838
+ implemented in the Vienna ab initio simulation package [31,32] with the projector
839
+ augmented wave method [33] and Perdew, Burke, and Ernzerh of exchange correlation
840
+ functionals [34]. For the convergence of the results, the spin–orbit coupling was
841
+ included self-consistently in the calculations of electronic structures with the kinetic
842
+ energy cutoff of 600 eV and Monkhorst-Pack k mesh of 14 × 8 × 4. We used d orbitals
843
+ of W atom and p orbitals of Te atoms to construct Wannier functions [35]. While
844
+ evaluating the band geometric quantities, we consider the finite temperature effect in
845
+ the distribution function and a lifetime broadening of 𝑘𝐵𝑇 with 𝑇 = 5 K.
846
+
847
+
848
+
849
+
850
+ 8
851
+
852
+ Supplemental Note 2: Polarized Raman spectroscopy of WTe2.
853
+ The crystalline orientation of WTe2 device was determined by the polarized
854
+ Raman spectroscopy in the parallel polarization configuration [36]. Figure S5 shows
855
+ the polarized Raman spectrum of device S2 as an example. The optical image of device
856
+ S2 is displayed in Fig. S5(a). Raman spectroscopy was measured with 514 nm
857
+ excitation wavelengths through a linearly polarized solid-state laser beam. The
858
+ polarization of the excitation laser was controlled by a quarter-wave plate and a
859
+ polarizer. We collected the Raman scattered light with the same polarization as the
860
+ excitation laser. A typical Raman spectroscopy of device S2 is shown in Fig. S5(b),
861
+ where five Raman peaks are identified, belonging to the A1 modes of WTe2 [36]. We
862
+ further measured the polarization dependence of intensities of peaks P2 and P11
863
+ [denoted in Fig. S5(b)] in Figs. S5(c) and S5(d), respectively. Based on previous
864
+ reports [36], the polarization direction with maximum intensity was assigned as the b
865
+ axis. The measured crystalline orientation is further indicated in the optical image [Fig.
866
+ S5(a)], where the applied a.c. current is approximately parallel to a axis.
867
+
868
+ 9
869
+
870
+
871
+ Figure S5: Polarized Raman spectroscopy of WTe2 to determine the crystalline
872
+ orientation.
873
+ a, Optical image of device S2. The crystalline axes, i.e., a axis and b axis, determined
874
+ by the polarized Raman spectroscopy, are denoted by the black arrows. The applied a.c.
875
+ current is also noted by the red arrow, which is approximately aligned with a axis.
876
+ b, A typical Raman spectrum measured with 514 nm excitation wavelengths, where the
877
+ polarization direction is approximately along b axis. Five Raman peaks are observed,
878
+ which belong to the A1 modes of WTe2 [36].
879
+ c,d, Polarization dependence of intensities of peaks (c) P2 and (d) P11. Here the
880
+ polarization angle takes 0° along the b axis, along which maximum intensity is
881
+ observed [36].
882
+
883
+
884
+ 60
885
+ 120
886
+ 180
887
+ 240
888
+ 300
889
+ 0
890
+ 100
891
+ 200
892
+ 300
893
+ Intensity (a.u.)
894
+ Wavenumber (cm-1)
895
+ 0
896
+ 60
897
+ 120
898
+ 180
899
+ 240
900
+ 300
901
+ 0
902
+ 60
903
+ 120
904
+ 180
905
+ 240
906
+ 300
907
+ b
908
+ a
909
+ 10 mm
910
+ (a)
911
+ (b)
912
+ (c)
913
+ (d)
914
+ P2
915
+ P10
916
+ P11
917
+ P2
918
+ P11
919
+ b axis
920
+ b axis
921
+
922
+ 10
923
+
924
+ Supplemental Note 3: Angle-dependent longitudinal resistance and third-order
925
+ nonlinear Hall effect.
926
+ The third-order anomalous Hall effect (AHE) is investigated in device S1, as
927
+ shown in Fig. S6(a). By exploiting the circular disc electrode structure, the angle-
928
+ dependence of the third-order AHE is measured. It shows highly sensitive to the
929
+ crystalline orientation, as shown in Fig. S6(c), which inherits from the intrinsic
930
+ anisotropy of WTe2 [26]. Based on the symmetry of WTe2 [26], the third-order AHE
931
+ shows angle-dependence following the formula
932
+ E𝐻
933
+
934
+ (𝐸𝜔)3 ∝
935
+ cos(θ−θ0)sin(θ−θ0)[(χ22r4−3χ12r2)sin2(θ−θ0)+(3χ21r2−χ11)cos2(θ−θ0)]
936
+ (cos2(θ−θ0)+𝑟sin2(θ−θ0))3
937
+ ,
938
+ where 𝐸𝐻
939
+ 3𝜔 =
940
+ 𝑉𝐻
941
+ 3𝜔
942
+ 𝑊 , 𝐸𝜔 =
943
+ 𝐼𝜔𝑅∥
944
+ 𝐿 , 𝑉𝐻
945
+ 3𝜔 is the third-harmonic Hall voltage, 𝐼𝜔 is the
946
+ applied a.c. current, 𝑅∥ is the longitudinal resistance, 𝑊 and 𝐿 are channel width
947
+ and length, respectively, r is the resistance anisotropy, 𝜒𝑖𝑗 are elements of the third-
948
+ order susceptibility tensor, 𝜃0 is the angle misalignment between 𝜃 = 0° and
949
+ crystalline b axis. The fitting curve for this angle dependence is shown by the red line
950
+ in Fig. S6(c), which yields the misalignment 𝜃0 ~1.5°. In addition to the third-order
951
+ AHE, the longitudinal (𝑅∥) resistance also shows strong anisotropy [13], as shown in
952
+ Fig. S6(b), following
953
+ 𝑅∥(𝜃) = 𝑅𝑏𝑐𝑜𝑠2(𝜃 − 𝜃0) + 𝑅𝑎𝑠𝑖𝑛2(𝜃 − 𝜃0),
954
+ consistent with previous results [13], where 𝑅𝑎 and 𝑅𝑏 are resistance along
955
+ crystalline a and b axis, respectively.
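The cos²/sin² anisotropy formula above can be fitted by linear least squares after rewriting it as a second-harmonic Fourier series; a minimal numpy sketch (the Rb, Ra, θ0 values below are hypothetical stand-ins, not the measured ones):

```python
import numpy as np

def fit_r_parallel(theta_deg, r_meas):
    """Fit R(theta) = Rb*cos^2(theta - theta0) + Ra*sin^2(theta - theta0).

    Using R = c0 + c1*cos(2t) + c2*sin(2t), with c0 = (Ra + Rb)/2 and an
    amplitude (Rb - Ra)/2 at phase 2*theta0, the fit is linear least squares.
    Note: (Ra, Rb, theta0) and (Rb, Ra, theta0 + 90 deg) give the same curve,
    so the two resistances may come back swapped.
    """
    t = np.radians(theta_deg)
    design = np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])
    c0, c1, c2 = np.linalg.lstsq(design, r_meas, rcond=None)[0]
    amp = np.hypot(c1, c2)
    theta0 = 0.5 * np.degrees(np.arctan2(c2, c1))
    return c0 + amp, c0 - amp, theta0  # (larger R, smaller R, angle offset)

# Synthetic check with hypothetical Rb = 16, Ra = 24 (ohms), theta0 = 1.5 deg.
theta = np.arange(0.0, 360.0, 10.0)
t_true = np.radians(theta - 1.5)
r_meas = 16.0 * np.cos(t_true) ** 2 + 24.0 * np.sin(t_true) ** 2
r_hi, r_lo, theta0 = fit_r_parallel(theta, r_meas)
```

The linear form avoids any nonlinear optimizer; only the trigonometric identity cos²t = (1 + cos 2t)/2 is used.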
956
+
957
+ 11
958
+
959
+
960
+ Figure S6: Angle-dependence of third-order nonlinear Hall effect in device S1 at
961
+ 5 K.
962
+ a, The third-harmonic anomalous Hall voltages at various 𝜃. Here 𝜃 is defined as the
963
+ relative angle between the alternating current and the baseline (approximately along b
964
+ axis).
965
+ b,c, (b) Rxx and (c) third-order Hall signal 𝐸𝐻^{3𝜔}/(𝐸𝜔)³ as a function of 𝜃, respectively.
969
+
970
+
971
+
972
+ 0
973
+ 0.5
974
+ 1
975
+ -15
976
+ -10
977
+ -5
978
+ 0
979
+ 5
980
+ 10
981
+ 15
982
+ 0
983
+ 30
984
+ 60
985
+ V3
986
+ H (mV)
987
+ E (kV/m)
988
+ 90
989
+ 120
990
+ 150
991
+ 16
992
+ 24
993
+ Rxx (W)
994
+ 0
995
+ 60
996
+ 120
997
+ 180
998
+ 240
999
+ 300
1000
+ 360
1001
+ -50
1002
+ 0
1003
+ 50
1004
+ V3
1005
+ H /(V)3 (V-2)
1006
+ q ()
1007
+ (a)
1008
+ (b)
1009
+ (c)
1010
+
1011
+ 12
1012
+
1013
+ Supplemental Note 4: Magnetotransport properties of WTe2.
1014
+ The magneto-transport properties of the device S1 were investigated. Figure S7(a)
1015
+ shows the resistivity as a function of temperature. The resistivity decreases upon
1016
+ decreasing temperature with a residual-resistivity at low temperatures, showing typical
1017
+ metallic behaviors. Figure S7(b) shows the magnetoresistance (MR) and Hall
1018
+ resistance as a function of magnetic field. MR is defined as [𝑅𝑥𝑥(𝐵)−𝑅𝑥𝑥(0)]/𝑅𝑥𝑥(0) × 100%. The
1022
+ low residual resistance and large, non-saturated MR indicate the high quality of the
1023
+ WTe2 devices [37,38]. The carrier mobility of device S1 is estimated as high as
1024
+ 4974.4 cm2/(V ⋅ s). Moreover, resistance oscillations due to the formation of Landau
1025
+ levels are also observed, as shown in Fig. S7(c), indicative of the high crystal quality.
1026
+ The oscillation ∆𝑅𝑥𝑥 is obtained by subtracting a parabolic background. The fast
1027
+ Fourier transform (FFT) is performed, as shown in Fig. S7(d). Three frequencies are
1028
+ observed, indicating the multiple Fermi pockets in WTe2, which is consistent with
1029
+ previous work [37-39]. The dominant peak of FFT 𝑓1 is around 44 T.
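The background-subtraction and FFT step described here can be sketched numerically; a numpy-only illustration on synthetic data containing a single 44 T frequency (the trace and background below are hypothetical, not the measured data):

```python
import numpy as np

def sdh_fft(b_field, r_xx, n_interp=2048):
    """Subtract a parabolic background (in 1/B) from Rxx, resample uniformly
    in 1/B, and FFT; Landau-level oscillations are periodic in 1/B, so the
    resulting frequency axis is in tesla."""
    inv_b = 1.0 / np.asarray(b_field)
    background = np.polyval(np.polyfit(inv_b, r_xx, 2), inv_b)
    delta_r = r_xx - background
    order = np.argsort(inv_b)
    # A uniform grid in 1/B is required for a meaningful FFT.
    grid = np.linspace(inv_b.min(), inv_b.max(), n_interp)
    delta_u = np.interp(grid, inv_b[order], delta_r[order])
    spectrum = np.abs(np.fft.rfft(delta_u * np.hanning(n_interp)))
    freqs = np.fft.rfftfreq(n_interp, d=grid[1] - grid[0])
    return freqs, spectrum

# Synthetic Rxx: a smooth background plus one oscillation at f1 = 44 T.
b = np.linspace(5.0, 15.0, 4000)
rxx = 100.0 + 30.0 / b + 2.0 / b ** 2 + 0.5 * np.cos(2 * np.pi * 44.0 / b)
freqs, spectrum = sdh_fft(b, rxx)
f_peak = freqs[1 + np.argmax(spectrum[1:])]  # skip the DC bin
```

The frequency resolution is set by the span covered in 1/B (here about 7.5 T), so the recovered peak lands on the bin nearest 44 T.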
1030
+
1031
+ 13
1032
+
1033
+
1034
+ Figure S7: Transport properties of the device S1.
1035
+ a, The resistivity as a function of temperature.
1036
+ b, Magnetoresistance and Hall resistance at 5 K.
1037
+ c, Oscillations of Rxx at 5 K. The ∆𝑅𝑥𝑥 is obtained by subtracting a parabolic
1038
+ background.
1039
+ d, The FFT analysis of ∆𝑅𝑥𝑥 oscillations, where three peaks are obtained.
1040
+
1041
+
1042
+ 0
1043
+ 50
1044
+ 100 150 200 250 300
1045
+ 0
1046
+ 20
1047
+ 40
1048
+ 60
1049
+ 80
1050
+ 100
1051
+ rxx (cm×mW)
1052
+ T (K)
1053
+ -15 -10
1054
+ -5
1055
+ 0
1056
+ 5
1057
+ 10
1058
+ 15
1059
+ 0
1060
+ 500
1061
+ 1000
1062
+ 1500
1063
+ 2000
1064
+ MR (%)
1065
+ B (T)
1066
+ -60
1067
+ -40
1068
+ -20
1069
+ 0
1070
+ 20
1071
+ 40
1072
+ Rxy (W)
1073
+ 0.05
1074
+ 0.1
1075
+ 0.15
1076
+ 0.2
1077
+ 0.25
1078
+ -6
1079
+ -3
1080
+ 0
1081
+ 3
1082
+ 6
1083
+ DRxx (W)
1084
+ 1/B (T-1)
1085
+ 100
1086
+ 200
1087
+ 300
1088
+ 0
1089
+ 200
1090
+ 400
1091
+ 600
1092
+ 800
1093
+ FFT amplitude (a.u.)
1094
+ Frequency (T)
1095
+ f1
1096
+ f2
1097
+ f3
1098
+ (a)
1099
+ (b)
1100
+ (c)
1101
+ (d)
1102
+
1103
+ 14
1104
+
1105
+ Supplemental Note 5: Symmetry analysis of WTe2.
1106
+ Td-WTe2 has a distorted crystal structure with low symmetry. Here we analyze the
1107
+ thickness dependence of the symmetry in WTe2 in detail. Figure S8(a) shows the b-c
1108
+ plane of monolayer WTe2. Each monolayer consists of a layer of W atoms sandwiched
1109
+ between two layers of Te atoms, denoted as Te1 (denoted in yellow) and Te2 (denoted
1110
+ in red), respectively. The inversion symmetry of the monolayer is approximately
1111
+ satisfied, and Te1 is equivalent to Te2. The presence of inversion symmetry forces Berry
1112
+ curvature dipole (BCD) to be zero. However, as a perpendicular displacement field is
1113
+ applied to break the inversion symmetry, the Te1 is no longer equivalent to Te2. As
1114
+ shown in the bottom of Fig. S8(a), an in-plane electric polarization along b axis can be
1115
+ induced by the out-of-plane displacement field. The electric polarization along b axis
1116
+ plays a similar role as the d.c. electric field in our work, leading to nonzero BCD along
1117
+ a axis.
1118
+ Nonzero BCD in bilayer WTe2 originates from crystal symmetry breaking. The
1119
+ largest symmetry in bilayer WTe2 is a single mirror symmetry 𝑀𝑎 with bc plane as
1120
+ mirror plane. As shown in Fig. S8(b), the stacking between the two layers breaks the
+ inversion symmetry of bilayer WTe2. Under inversion operation, the top and bottom
1122
+ layers are swapped and fail to coincide with each other. As shown in Fig. S8(b),
1123
+ Te1 is not equivalent to Te2 due to the stacking arrangement in bilayer. Therefore, an
1124
+ in-plane electric polarization P along b axis exists, similar to the case in monolayer with
1125
+ an out-of-plane displacement field. The polarization P is able to induce nonzero BCD
1126
+ along the perpendicular crystalline axis, i.e., along a axis.
1127
+
1128
+ 15
1129
+
1130
+ In fact, such in-plane polarization P along b axis in monolayer and bilayer WTe2 is
1131
+ already evidenced by the circular photogalvanic effect [14]. The symmetry breaking
1132
+ induced polarization is also confirmed in various 2D materials, such as WSe2/black
1133
+ phosphorus heterostructures [40].
1134
+ In trilayer and thicker WTe2, as shown in Fig. S8(c), the Te1 and Te2 are equivalent
1135
+ in bulk, leading to vanishing electric polarization. The in-plane inversion symmetry in
1136
+ bulk forbids the presence of in-plane BCD. However, the inversion is broken on surface.
1137
+ Therefore, for trilayer and thicker WTe2, a small but nonzero BCD may occur on surface.
1138
+
1139
+ Figure S8: Crystal structure of Td-WTe2.
1140
+ a, b-c plane of monolayer Td-WTe2.
1141
+ b, b-c plane of bilayer Td-WTe2. The stacking arrangement breaks the inversion
1142
+ symmetry.
1143
+ c, b-c plane of trilayer Td-WTe2.
1144
+
1145
+ Importantly, the surface BCD and its induced second-order AHE in few-layer WTe2
1146
1160
+ is reported in Ref. [13], which is also observed in our device. We measured the second-
1161
+ order AHE without the application of Edc in a WTe2 device, as shown in Fig. S9. This
1162
+ second-order AHE is observable when applying 𝐼𝜔 in the order of 1 mA. By
1163
+ comparison, the second-order AHE induced by d.c. field is observable when applying
1164
+ 𝐼𝜔 smaller than 0.05 mA (Fig. 1 of main text). The calculated BCD along a axis 𝐷𝑎
+ without the application of Edc is ~0.03 nm, which is one order of magnitude smaller
+ than 𝐷𝑎^{(1)} ~0.29 nm measured under Edc = 3 kV/m (Fig. 4 of main text). These results
1168
+ confirm the validity of Edc induced BCD in our work.
1169
+
1170
+ Figure S9: The second-order AHE without external d.c. electric field in WTe2 at
1171
+ 1.8 K.
1172
+
1173
+
1174
+
1175
+ 0
1176
+ 0.5
1177
+ 1
1178
+ 1.5
1179
+ 2
1180
+ 0
1181
+ 10
1182
+ 20
1183
+ 30
1184
+ 40
1185
+ V2
1186
+ H (mV)
1187
+ I (mA)
1188
+
1189
+ 17
1190
+
1191
+ Supplemental Note 6: Theoretical analysis and calculations of field-induced Berry
1192
+ curvature dipole.
1193
+ The electric field-induced Berry curvature depends on the Berry connection
1194
+ polarizability tensor and the applied d.c. field with the relation that
1195
+ 𝛀(1) = 𝛁𝐤 × (𝐆⃡𝐄𝑑𝑐),
1196
+ Ω𝛽^{(1)}(𝑛, 𝒌) = ε𝛽𝛾𝜇 [∂𝛾 𝐺𝜇𝜈(𝑛, 𝒌)] 𝐸𝜈^{dc},
1199
+ with 𝐺𝜇𝜈(𝑛, 𝒌) = 2𝑒 Re ∑_{𝑚≠𝑛} (𝐴𝜇)𝑛𝑚(𝐴𝜈)𝑚𝑛/(ε𝑛 − ε𝑚), where 𝐴𝑚𝑛 is the interband Berry
1204
+ connection and 𝑒 is the electron charge. The superscript “(1)” represents that the
1205
+ physical quantity is the first order term of electric field. Here the Greek letters refer to
1206
+ the spatial directions, 𝑚, 𝑛 refer to the energy band indices, εβγμ is the Levi-Civita
1207
+ symbol, and 𝜕𝛾 is short for 𝜕/𝜕𝑘𝛾 . The Berry connection polarizability tensor of
1208
+ WTe2 is calculated and shown in Figs. S10(a)-(c). From the definition, the field-induced
1209
+ BCD is
1210
+ 𝐷𝛼𝛽^{(1)} = ∫𝑘[𝑑𝒌] 𝑓0 (∂𝛼Ω𝛽^{(1)}) = ε𝛽𝛾𝜇 ∫𝑘[𝑑𝒌] 𝑓0 [∂𝛼(∂𝛾𝐺𝜇𝜈)] 𝐸𝜈^{dc},
1218
+ where ∫𝑘[𝑑𝒌] = ∑𝑛 (1/(2π)³) ∭ 𝑑𝒌 is taken over the first Brillouin zone of the system and
1225
+ summed over all energy bands.
1226
+ In two-dimensional systems, 𝛀^{(1)} is constrained to the out-of-plane direction, and
1227
+ BCD behaves as a pseudo vector in the plane. Here we choose our coordinate frame
1228
+ along the crystal principal axes 𝑎, 𝑏, 𝑐. By applying a d.c. electric field
+ 𝐄dc = (Ea^{dc}, Eb^{dc}) in the 𝑎𝑏 plane, the induced Ω𝑐^{(1)} reads
+ Ω𝑐^{(1)}(𝑛, 𝒌) = (𝜕𝑎𝐺𝑏𝑎 − 𝜕𝑏𝐺𝑎𝑎)Ea^{dc} + (𝜕𝑎𝐺𝑏𝑏 − 𝜕𝑏𝐺𝑎𝑏)Eb^{dc}.
1237
+ 𝐷𝛼^{(1)} defined in a few-layer 2D system can be approximately derived from 𝐷𝛼c(bulk)^{(1)} of
+ the bulk system by 𝐷𝛼^{(1)} = 𝑑𝐷𝛼c(bulk)^{(1)}, where 𝑑 is the thickness of the film. The
+ independent components of 𝐷𝛼^{(1)} are related to the Berry connection polarizability
+ tensor, 𝐄dc and 𝑑. The mirror symmetry 𝑀𝑎 and the glide symmetry 𝑀̃𝑏 in WTe2
+ constrain 𝐷𝛼^{(1)} to be
1253
+ 𝐷𝑎^{(1)} = ∫𝑘[𝑑𝑘] f0 [∂a(∂aGbb) − ∂a(∂bGab)] Eb^{dc} 𝑑,
+ 𝐷𝑏^{(1)} = ∫𝑘[𝑑𝑘] f0 [∂b(𝜕𝑎𝐺𝑏𝑎) − ∂b(𝜕𝑏𝐺𝑎𝑎)] Ea^{dc} 𝑑,
1263
+ where the other terms are prohibited by symmetry. In the experiment, the d.c. electric
1264
+ field is applied along a direction at an angle 𝜃 from the 𝑏 axis, which can be
+ expressed as 𝐄dc = 𝐸dc(−sin 𝜃, cos 𝜃). The induced BCD 𝐃^{(1)}(𝜃) = (𝐷𝑎^{(1)}(𝜃), 𝐷𝑏^{(1)}(𝜃)) hence reads
+ 𝐷𝑎^{(1)}(𝜃) = ∫𝑘[𝑑𝑘] f0 [∂a(∂aGbb) − ∂a(∂bGab)] Edc cos 𝜃 𝑑,
+ 𝐷𝑏^{(1)}(𝜃) = ∫𝑘[𝑑𝑘] f0 [∂b(∂bGaa) − ∂b(∂aGba)] Edc sin 𝜃 𝑑.
1283
+ With the field-induced BCD, the second-order Hall current of an a.c. electric field
1284
+ 𝐄ω is [9]
1285
+ 𝑗𝛼^{2ω} = −ε𝛼𝜇𝛾 [𝑒³𝜏/(2(1 + 𝑖ωτ)ℏ²)] 𝐷𝛽𝜇^{(1)} 𝐸𝛽^{ω}𝐸𝛾^{ω}.
1292
+ In two-dimensional systems, where 𝛀^{(1)} is along the out-of-plane direction and
+ 𝐷𝛼c^{(1)} = ∫𝑘[𝑑𝒌] 𝑓0 (∂𝛼Ω𝑐^{(1)}), it is equivalent to
+ 𝒋^{2ω} = −[𝑒³𝜏/(2(1 + 𝑖ωτ)ℏ²)] (𝒛̂ × 𝐄ω)[𝐃^{(1)}(𝜃) ⋅ 𝐄ω].
1301
+ The magnitude of induced second-order Hall conductivity is determined by
1302
+ D(1)(𝜃) ⋅ 𝐄̂ω, which is the projection of the pseudo vector 𝐃(1) to the direction of 𝐄ω,
1303
+
1304
+ 19
1305
+
1306
+ and the direction of Hall current is perpendicular to 𝐄ω. Consequently, we can measure
1307
+ the 𝐄dc induced BCD 𝐃^{(1)} by detecting its projective component 𝐷𝑎^{(1)}(𝜃) or
+ 𝐷𝑏^{(1)}(𝜃) with an a.c. electric field along the corresponding direction. From the above
+ derivation, when the direction of the d.c. electric field varies in the 𝑎𝑏 plane, the
+ independent components of the induced BCD 𝐷𝑎^{(1)} and 𝐷𝑏^{(1)} change as a cosine and a sine
1315
+ function, respectively. This relation is clearly demonstrated by our experimental results
1316
+ in Fig. 4 of main text.
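The cosine/sine decomposition can be checked numerically; a small sketch in which the angular prefactors are set to the scales quoted later in the text (Da0 ≈ −0.28 nm, Db0 ≈ −0.05 nm), with the symmetry-allowed integrals themselves treated as given constants:

```python
import numpy as np

# D_a^(1)(theta) = Da0*cos(theta), D_b^(1)(theta) = Db0*sin(theta); the
# constants Da0, Db0 stand in for the BCP integrals times E_dc and d.
DA0, DB0 = -0.28, -0.05  # nm, order of the values quoted in the text

def induced_bcd(theta_rad):
    return np.array([DA0 * np.cos(theta_rad), DB0 * np.sin(theta_rad)])

def hall_projection(theta_rad, e_ac_unit):
    """Second-order Hall response scale ~ D^(1)(theta) . e_ac_unit, i.e. the
    projection of the induced dipole onto the a.c. field direction."""
    return induced_bcd(theta_rad) @ np.asarray(e_ac_unit)

theta = np.linspace(0.0, 2.0 * np.pi, 361)
proj_a = np.array([hall_projection(t, (1.0, 0.0)) for t in theta])  # E^w || a
proj_b = np.array([hall_projection(t, (0.0, 1.0)) for t in theta])  # E^w || b
```

Probing with E^ω along a isolates the cosine component and probing along b isolates the sine component, mirroring the measurement scheme of Fig. 4.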
1317
+ With first-principles calculations, we estimate the extreme values of 𝐷𝑎^{(1)}(0°) and
+ 𝐷𝑏^{(1)}(90°), as shown in Fig. S10(d). It is taken that 𝑑 ∼ 8.4 nm and 𝐸dc ∼ 3 kV/m
+ according to the experiment. 𝐷𝑎^{(1)}(0°) and 𝐷𝑏^{(1)}(90°) refer to 𝐷𝑎^{(1)} and 𝐷𝑏^{(1)} with the
+ applied 𝐸dc along the b axis and -a axis, respectively. It is found that 𝐷𝑏^{(1)}(90°)
+ varies from ~−0.14 nm to 0 as the chemical potential is tuned away from 0, and 𝐷𝑎^{(1)}(0°)
+ shows a non-monotonic change between 0.18 and −0.13 nm as the chemical potential is
+ varied. The experimental results of 𝐷𝑏^{(1)}(90°) ~−0.05 nm and 𝐷𝑎^{(1)}(0°) ~−0.28 nm
+ (Fig. 4 in main text) agree well with the calculations on the order of magnitude.
1335
+
1336
+ Figure S10: Calculations of Berry connection polarizability tensor and field-
1337
1386
+ induced Berry curvature dipole in WTe2.
1387
+ a-c, The calculated distribution of Berry connection polarizability tensor elements (a)
1388
+ 𝐺𝑎𝑎, (b) 𝐺𝑏𝑏, (c) 𝐺𝑎𝑏 in the 𝑘𝑧 = 0 plane of the Brillouin Zone for the occupied
1389
+ bands. The unit of BCP is Å2 ⋅ V−1. The grey lines depict the Fermi surface.
1390
+ d, Calculated field-induced BCD 𝐷𝑎^{(1)}(0°) and 𝐷𝑏^{(1)}(90°) with respect to the
1393
+ chemical potential 𝜇 when 𝐸dc = 3 kV/m. In the calculations, the finite temperature
1394
+ effect is considered with a broadening of 𝑘𝐵𝑇 at 5 K.
1395
+
1396
+
1397
+
1398
+ 21
1399
+
1400
+ Supplemental Note 7: Electric field dependence of second-order Hall signals.
1401
+ The second-harmonic I-V characteristics in Fig. 1(e) of main text are converted
1402
+ into 𝑉𝐻^{2𝜔} versus (𝑉𝜔)² in Fig. S11(a), where linear relationships are observed.
+ The 𝐸𝐻^{2𝜔}/(𝐸𝜔)² as a function of the applied 𝐸𝑑𝑐 is further calculated and presented in Fig.
+ S11(b).
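The linearity of V_H^{2ω} against (V^ω)² can be quantified with a one-line polynomial fit; a sketch on hypothetical readings (the 3×10⁴ V⁻¹ coefficient is illustrative only, not a measured value):

```python
import numpy as np

def quadratic_response(v_omega, v_2w):
    """Fit V_H^2w = slope*(V^w)^2 + intercept; a near-zero intercept and a
    good straight-line fit confirm the second-order (quadratic) scaling."""
    slope, intercept = np.polyfit(np.asarray(v_omega) ** 2, v_2w, 1)
    return slope, intercept

v_w = np.linspace(0.0, 1.0e-4, 20)   # hypothetical first-harmonic voltage (V)
v_2w = 3.0e4 * v_w ** 2              # hypothetical V_H^2w, purely quadratic
slope, intercept = quadratic_response(v_w, v_2w)
```

Plotting v_2w against v_w**2, as in Fig. S11(a), turns the quadratic law into a straight line through the origin.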
1409
+
1410
+ Figure S11: Second-order AHE modulated by d.c. electric field at 5 K.
1411
+ a, The second-harmonic Hall voltage 𝑉𝐻^{2𝜔} as a function of (𝑉𝜔)² as 𝐄𝑑𝑐 along b
+ axis and 𝐄𝜔 along -a axis.
+ b, The second-order Hall signal 𝐸𝐻^{2𝜔}/(𝐸𝜔)² as a function of 𝐸𝑑𝑐 at 𝜃 = 0° and 𝜃 = 90°
1418
+ with 𝐄𝜔 ∥ −𝑎 axis.
1419
+
1420
+
1421
+
1422
+ 0
1423
+ 5
1424
+ 10
1425
+ 15
1426
+ 20
1427
+ 25
1428
+ 30
1429
+ -6
1430
+ -4
1431
+ -2
1432
+ 0
1433
+ 2
1434
+ 4
1435
+ 6
1436
+ Edc (kV/m)
1437
+ 3
1438
+ 1.5
1439
+ 0
1440
+ -1.5
1441
+ -3
1442
+ V2
1443
+ H (mV)
1444
+ (V)2 (10-8 V2)
1445
+ q = 0
1446
+ E  -a axis
1447
+ -3
1448
+ -2
1449
+ -1
1450
+ 0
1451
+ 1
1452
+ 2
1453
+ 3
1454
+ -9
1455
+ -6
1456
+ -3
1457
+ 0
1458
+ 3
1459
+ 6
1460
+ 9
1461
+ q
1462
+ 0
1463
+ 90
1464
+ E2
1465
+ H /(E)2 (10-5 m/V)
1466
+ Edc (kV/m)
1467
+ E  -a axis
1468
+ (a)
1469
+ (b)
1470
+
1471
+ 22
1472
+
1473
+ Supplemental Note 8: Control experiments in device S2.
1474
+ To demonstrate the symmetry constraint in WTe2, control experiments were
1475
+ carried out in device S2. As schematically shown in Figs. S12(a), (d), the a.c. and d.c.
1476
+ current sources are applied. The SR830 acts as an effective a.c. current source when a
+ resistor is connected in series, giving an output impedance of 10 kΩ. The d.c. source is the Keithley current
1478
+ source with output impedance ~20 MΩ. For the d.c. field applied along a and b axis,
1479
+ respectively, the first-harmonic Hall voltage shows no obvious dependence on 𝐄𝑑𝑐, as
1480
+ shown in Figs. S12(b) and S12(e), which indicate the independence of the two electric
1481
+ sources. When applying 𝐄𝑑𝑐 ∥ 𝐄𝜔 ∥ 𝑎 axis, no second-order nonlinear Hall effect can
1482
+ be observed in Fig. S12(c). Nevertheless, upon applying 𝐄𝑑𝑐 ⊥ 𝐄𝜔 and 𝐄𝜔 ∥ 𝑎 axis,
1483
+ as shown in Fig. S12(f), nonzero second-order nonlinear Hall effect emerges due to the
1484
+ 𝐄𝑑𝑐 induced Berry curvature dipole along a axis.
1485
+
1486
+ Figure S12: The measurements by applying both d.c. electric field 𝐄𝒅𝒄 and a.c.
1487
+ current in devices S2 at 1.8 K.
1488
+ a, Schematic of the measurement configuration for (b) and (c).
1489
+ 0
1490
+ 0.1
1491
+ 0.2
1492
+ 0.3
1493
+ 0.4
1494
+ 0.5
1495
+ -2
1496
+ -1
1497
+ 0
1498
+ 1
1499
+ 2
1500
+ 3
1501
+ Edc (104 V/m)
1502
+ -10.7
1503
+ -5.2
1504
+ -1.5
1505
+ 0
1506
+ 1.5
1507
+ 5.2
1508
+ 10.7
1509
+ V2
1510
+ ⊥ (mV)
1511
+ I (mA)
1512
+ E ⊥ Edc
1513
+ 0
1514
+ 0.1
1515
+ 0.2
1516
+ 0.3
1517
+ 0.4
1518
+ 0.5
1519
+ -1
1520
+ -0.5
1521
+ 0
1522
+ 0.5
1523
+ 1
1524
+ Edc
1525
+  (104 V/m)
1526
+ 2.5
1527
+ -2.5
1528
+ V2
1529
+ ⊥ (mV)
1530
+ I (mA)
1531
+ E  Edc
1532
+ 0
1533
+ 0.1
1534
+ 0.2
1535
+ 0.3
1536
+ 0.4
1537
+ 0.5
1538
+ 0
1539
+ 0.5
1540
+ 1
1541
+ 1.5
1542
+ Edc (104 V/m)
1543
+ -5.2
1544
+ 5.2
1545
+ V
1546
+ H (mV)
1547
+ I (mA)
1548
+ E ⊥ Edc
1549
+ 0
1550
+ 0.1
1551
+ 0.2
1552
+ 0.3
1553
+ 0.4
1554
+ 0.5
1555
+ 0
1556
+ 0.5
1557
+ 1
1558
+ 1.5
1559
+ Edc (104 V/m)
1560
+ 2.5
1561
+ -2.5
1562
+ V
1563
+ H (mV)
1564
+ I (mA)
1565
+ E  Edc
1566
+ SR830
1567
+ voltage source
1568
+ Keithley 2400
1569
+ current source
1570
+ SR830
1571
+ voltage source
1572
+ Keithley 2400
1573
+ current source
1574
+ b
1575
+ a
1576
+ b
1577
+ a
1578
+ (a)
1579
+ (b)
1580
+ (c)
1581
+ (d)
1582
+ (e)
1583
+ (f)
1584
+
1585
+ 23
1586
+
1587
+ b, First-harmonic Hall voltage 𝑉𝐻^{𝜔} under 𝐄𝑑𝑐 ∥ 𝐄𝜔 ∥ 𝑎 axis.
+ c, There is no clear second-harmonic Hall voltage 𝑉𝐻^{2𝜔} under 𝐄𝑑𝑐 ∥ 𝐄𝜔 ∥ 𝑎 axis.
+ d, Schematic of the measurement configuration for (e) and (f).
+ e, The 𝑉𝐻^{𝜔} under various 𝐄𝑑𝑐 with 𝐄𝑑𝑐 ⊥ 𝐄𝜔 and 𝐄𝜔 ∥ 𝑎 axis.
+ f, The 𝑉𝐻^{2𝜔} under various 𝐄𝑑𝑐 with 𝐄𝑑𝑐 ⊥ 𝐄𝜔 and 𝐄𝜔 ∥ 𝑎 axis.
1596
+
1597
+
1598
+
1599
+ 24
1600
+
1601
+ Supplemental Note 9: Discussions of other possible origins of the second order
1602
+ AHE.
1603
+ 1) Diode effect. An accidental diode due to the contact can lead to a rectification,
1604
+ causing high-order transport, which, however, can be safely ruled out in this work due
1605
+ to the following reasons:
1606
+ (a) Extrinsic signals of this origin should be strongly contact dependent. Thus, the
1607
+ angle-dependence should be also coupled to extrinsic contacts. Nevertheless, the angle-
1608
+ dependence of second-order AHE in Fig. 2 and Fig. S12 is well consistent with the
1609
+ inherent symmetry of WTe2, which excludes the extrinsic origins.
1610
+ (b) The two-terminal d.c. measurements for all the diagonal electrodes show linear
1611
+ I-V characteristics, as shown in Fig. S13(a), excluding the existence of diode effect.
1612
+ Linear fittings are performed for the two-terminal I-V curves. The R-square of the linear
1613
+ fittings is at least larger than 0.99997, indicating perfect linearity. Further, the deviation
1614
+ from linearity is analyzed by subtracting the linear-dependent part, as shown in Fig.
1615
+ S13(b). It is found ∆Vdc, i.e., the deviation part, is four orders of magnitude smaller
1616
+ than the original Vdc, indicating a negligible nonlinearity. Moreover, the ∆Vdc shows
1617
+ no obvious current or angle dependence (Fig. S13(b)), and its magnitude is also much
1618
+ smaller than that of the higher-harmonic Hall voltages (Fig. S13(c)), further indicating
1619
+ that the observed higher-order transport in this work cannot be attributed to a
+ contact-induced diode effect.
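The R-square and ΔVdc analysis described above amounts to a straight-line fit plus residuals; a numpy sketch on a hypothetical ohmic 12 Ω two-terminal contact (values illustrative, not the measured ones):

```python
import numpy as np

def linearity_check(i_dc, v_dc):
    """Diode-effect check: fit V-I with a straight line and report both the
    R-squared of the fit and the residual dV (deviation from linearity)."""
    coeffs = np.polyfit(i_dc, v_dc, 1)
    residual = v_dc - np.polyval(coeffs, i_dc)
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((v_dc - v_dc.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, residual

# Hypothetical 12-ohm linear contact with nanovolt-scale noise.
rng = np.random.default_rng(0)
i = np.linspace(-0.5e-3, 0.5e-3, 101)               # A
v = 12.0 * i + 1e-9 * rng.standard_normal(i.size)   # V
r2, dv = linearity_check(i, v)
```

An R-squared extremely close to 1 together with residuals orders of magnitude below the raw voltage is the signature of a diode-free contact.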
1621
+
1622
+
1623
+ 25
1624
+
1625
+
1626
+ Figure S13: Two-terminal d.c. measurements at 5 K in device S1.
1627
+ a, Current-voltage curves from two-terminal d.c. measurements for all the diagonal
1628
+ electrodes.
1629
+ b, The current dependence of ∆Vdc, that is, the deviations from the linearity of the
1630
+ current-voltage curves in Fig. S13a.
1631
+ c, The comparison of ∆Vdc, 𝑉𝐻^{2𝜔} and 𝑉𝐻^{3𝜔}. For ∆Vdc and 𝑉𝐻^{3𝜔}, the excitation
+ current is applied at 𝜃 = 30°, while for 𝑉𝐻^{2𝜔}, the excitation current is applied along a
1637
+ axis and a d.c. field 3 kV/m is applied at 𝜃 = 30°.
1638
+
1639
+ 2) Capacitive effect. Contact resistance is generally inevitable between the metal
1640
+ electrodes and two-dimensional materials, which would induce an accidental capacitive
1641
+ effect, resulting in higher-order transport effect. Here, the second-order AHE shows a
1642
+ negligible dependence on frequency, as shown in Fig. S14(a), excluding the capacitive
1643
+ effect. The phase of the second-harmonic Hall voltage is also investigated, where the Y
1644
+ signal dominates over the X signal (Fig. S14(b)). The phase of the second-harmonic
1645
+ Hall voltage is approximately ±90°, as shown in Fig. S14(c). These features further
1646
+ exclude the capacitive effect.
1647
+ -0.6
1648
+ 0
1649
+ 0.6
1650
+ -10
1651
+ 0
1652
+ 10
1653
+ Vdc (mV)
1654
+ Idc (mA)
1655
+ 90
1656
+ 120
1657
+ 150
1658
+ 0
1659
+ 30
1660
+ 60
1661
+ (a)
1662
+ (b)
1663
+ (c)
1664
+ -0.6
1665
+ -0.4
1666
+ -0.2
1667
+ 0
1668
+ 0.2
1669
+ 0.4
1670
+ 0.6
1671
+ -0.04
1672
+ -0.02
1673
+ 0
1674
+ 0.02
1675
+ 0.04
1676
+ 0
1677
+ 30
1678
+ 60
1679
+ DVdc (mV)
1680
+ Idc (mA)
1681
+ 90
1682
+ 120
1683
+ 150
1684
+ 0
1685
+ 0.2
1686
+ 0.4
1687
+ -5
1688
+ 0
1689
+ 5
1690
+ 10
1691
+ V (mV)
1692
+ Excitation current (mA)
1693
+ DVdc
1694
+ V2
1695
+ H
1696
+ V3
1697
+ H
1698
+
1699
+ 26
1700
+
1701
+
1702
+ Figure S14: Frequency-dependence and phase of second-order AHE in device S1
1703
+ at 5 K and with 𝐄𝐝𝐜 = 𝟑 𝐤𝐕/𝐦 at 𝜽 = 𝟔𝟎°.
1704
+ a, The second-order Hall signals at different frequencies.
1705
+ b, The X and Y signals of the second-order Hall voltages.
1706
+ c, The absolute value of the phase of the second-order Hall voltages.
1707
+
1708
+ 3) Thermal effect. The thermal effect can also induce a second-order signal [41]. If the
1709
+ observed nonlinear Hall effect originates from the thermal effect, it should respond to both
1710
+ longitudinal and transverse d.c. electric field. However, as shown in Fig. S12, when
1711
+ applying 𝐄𝑑𝑐 ∥ 𝐄𝜔 ∥ 𝑎 axis, no second-order nonlinear Hall effect is observed.
1712
+ Nevertheless, upon applying 𝐄𝜔 ⊥ 𝐄𝑑𝑐 , nonzero second-order nonlinear Hall effect
1713
+ emerges. This observation is clearly inconsistent with the thermal effect. Moreover, the
1714
+ observed second-order nonlinear Hall effect shows strong anisotropy, as shown in Fig.
1715
+ 2 of main text. The angle-dependence of the d.c. field-induced second-order Hall effect
1716
+ is well consistent with the inherent symmetry of WTe2, which cannot be explained
1717
+ by the thermal effect.
1718
+ 4) Thermoelectric effect. Joule heating induced temperature gradient across the
1719
+ sample can drive a thermoelectric voltage, leading to second-order nonlinear Hall effect.
1720
+ This thermoelectric effect can also be excluded due to the following reasons:
1721
+ 0.01
1722
+ 0.02
1723
+ 0.03
1724
+ 0.04
1725
+ 0.05
1726
+ 0
1727
+ 20
1728
+ 40
1729
+ 60
1730
+ 80
1731
+ 100
1732
+ abs(phase) ()
1733
+ I (mA)
1734
+ 0
1735
+ 0.01 0.02 0.03 0.04 0.05
1736
+ -2.5
1737
+ -2
1738
+ -1.5
1739
+ -1
1740
+ -0.5
1741
+ 0
1742
+ X signal
1743
+ Y signal
1744
+ V2
1745
+ H (mV)
1746
+ I (mA)
1747
+ (b)
1748
+ (c)
1749
+ 0
1750
+ 0.01 0.02 0.03 0.04 0.05
1751
+ -2.5
1752
+ -2
1753
+ -1.5
1754
+ -1
1755
+ -0.5
1756
+ 0
1757
+ 17.777 Hz
1758
+ 77.777 Hz
1759
+ 177.77 Hz
1760
+ 777.77 Hz
1761
+ 1777.7 Hz
1762
+ V2
1763
+ H (mV)
1764
+ I (mA)
1765
+ (a)
1766
+
1767
+ 27
1768
+
1769
+ (a) Uniform Joule heating will not induce a temperature gradient and thus no
1770
+ thermoelectric voltage across the sample.
1771
+ (b) To generate thermoelectric voltage, the Joule heating should couple with
1772
+ external asymmetry, such as contact junction or flake shape, which should be unrelated
1773
+ to the inherent symmetry of WTe2. However, the anisotropy of second-order nonlinear
1774
+ Hall effect is well consistent with the inherent symmetry analysis, as shown in Fig. 2
1775
+ of main text.
1776
+ 5) A residue of the first-harmonic Hall response 𝑽𝑯^{𝝎}. The influence of 𝑉𝐻^{𝜔} on
+ 𝑉𝐻^{2𝜔} can be ruled out because the first- and second-harmonic signals show different
+ dependence on the d.c. electric field. As shown in Fig. S15, the first-harmonic Hall
+ signal (𝑉𝐻^{𝜔}) shows that the I-V curves under 𝐸𝑑𝑐 = ±3 kV/m overlap with each other.
+ By comparison, the second-harmonic Hall signal (𝑉𝐻^{2𝜔}) shows an anti-symmetric
+ dependence on 𝐸𝑑𝑐, where the sign of 𝑉𝐻^{2𝜔} is changed upon changing the sign of 𝐸𝑑𝑐.
+ This indicates that the existence of the first order signal 𝑉𝐻^{𝜔} will not affect the
+ measurements of the second order signal 𝑉𝐻^{2𝜔}.
1792
+
1793
+ Figure S15: The first- and second-harmonic signals at 5 K as 𝐄𝒅𝒄 along b axis
1794
+ (𝜽 = 𝟎°) and 𝐄𝝎 along -a axis.
1795
+ a, The first-harmonic Hall voltage 𝑉𝐻^{𝜔} as a function of 𝐼𝜔 at 𝐸𝑑𝑐 = ±3 kV/m.
1797
+ 0
1798
+ 0.01 0.02 0.03 0.04 0.05
1799
+ -6
1800
+ -4
1801
+ -2
1802
+ 0
1803
+ 2
1804
+ 4
1805
+ 6
1806
+ Edc (kV/m)
1807
+ 3
1808
+ -3
1809
+ V2
1810
+ H (mV)
1811
+ I (mA)
1812
+ q = 0
1813
+ (a)
1814
+ (b)
1815
+ 0
1816
+ 0.01 0.02 0.03 0.04 0.05
1817
+ 0
1818
+ 0.01
1819
+ 0.02
1820
+ 0.03
1821
+ 0.04
1822
+ 0.05
1823
+ Edc (kV/m)
1824
+ 3
1825
+ -3
1826
+ V
1827
+ H (mV)
1828
+ I (mA)
1829
+
1830
+ 28
1831
+
1832
+ b, The second-harmonic Hall voltage 𝑉𝐻^{2𝜔}.
1834
+
1835
+ 6) Trivial effect by d.c. source. We measured the first-harmonic longitudinal voltage
1836
+ upon applying Edc = 3 kV/m , as shown in Fig. S16. It is clearly found that when
1837
+ reversing the sign of the d.c. electric field, the I-V curves overlap with each other. The
1838
+ results show that the d.c. source will not affect the a.c. measurements.
1839
+
1840
+ Figure S16: The first-harmonic longitudinal voltage versus current under
1841
+ different d.c. electric fields at 5 K. The 𝐄𝝎 and 𝐄𝒅𝒄 are along a axis.
1842
+
1843
+ 7) Longitudinal nonlinearity originating from a circuit artifact. We have measured
1844
+ both the second-harmonic Hall and longitudinal voltage at all the angles, as shown in
1845
+ Fig. S17. The measurement configuration is shown in the inset of Fig. S17(d) with d.c.
1846
+ field applied at angle 𝜃. It is clearly found that the Hall nonlinearity dominates over the
+ longitudinal one, which guarantees that the observed second-order Hall effect doesn’t
1848
+ originate from the longitudinal nonlinearity induced by a circuit artifact.
1849
+ 0
1850
+ 0.01 0.02 0.03 0.04 0.05
1851
+ 0
1852
+ 0.2
1853
+ 0.4
1854
+ 0.6
1855
+ 0.8
1856
+ V
1857
+ xx (mV)
1858
+ I (mA)
1859
+ Edc (kV/m)
1860
+ 3
1861
+ -3
1862
+
1863
+ 29
1864
+
1865
+
1866
+ Figure S17: The second-harmonic Hall 𝑉𝐻^{2𝜔} and longitudinal voltage 𝑉𝐿^{2𝜔} with
+ 𝐄𝝎 ∥ −𝒂 axis and 𝐄𝒅𝒄 = 1.5 kV/m along different angles at 5 K. The angle 𝜽 is
1870
+ defined in Fig. 1(d) of main text.
1871
+
1872
+
1873
+
1874
+ 0
1875
+ 0.01 0.02 0.03 0.04 0.05
1876
+ 0
1877
+ 0.2
1878
+ 0.4
1879
+ 0.6
1880
+ V2
1881
+ H
1882
+ V2
1883
+ L
1884
+ V2 (mV)
1885
+ I (mA)
1886
+ 0
1887
+ 0.01 0.02 0.03 0.04 0.05
1888
+ -0.5
1889
+ 0
1890
+ 0.5
1891
+ 1
1892
+ 1.5
1893
+ 2
1894
+ 2.5
1895
+ V2
1896
+ H
1897
+ V2
1898
+ L
1899
+ V2 (mV)
1900
+ I (mA)
1901
+ 0
1902
+ 0.01 0.02 0.03 0.04 0.05
1903
+ -0.5
1904
+ 0
1905
+ 0.5
1906
+ 1
1907
+ 1.5
1908
+ 2
1909
+ V2
1910
+ H
1911
+ V2
1912
+ L
1913
+ V2 (mV)
1914
+ I (mA)
1915
+ 0
1916
+ 0.01 0.02 0.03 0.04 0.05
1917
+ 0
1918
+ 0.5
1919
+ 1
1920
+ 1.5
1921
+ 2
1922
+ 2.5
1923
+ V2
1924
+ H
1925
+ V2
1926
+ L
1927
+ V2 (mV)
1928
+ I (mA)
1929
+ 0
1930
+ 0.01 0.02 0.03 0.04 0.05
1931
+ -2.5
1932
+ -2
1933
+ -1.5
1934
+ -1
1935
+ -0.5
1936
+ 0
1937
+ V2
1938
+ H
1939
+ V2
1940
+ L
1941
+ V2 (mV)
1942
+ I (mA)
1943
+ 0
1944
+ 0.01 0.02 0.03 0.04 0.05
1945
+ -1.5
1946
+ -1
1947
+ -0.5
1948
+ 0
1949
+ V2
1950
+ H
1951
+ V2
1952
+ L
1953
+ V2 (mV)
1954
+ I (mA)
1955
+ a
1956
+ b
1957
+ (a)
1958
+ (b)
1959
+ (c)
1960
+ (d)
1961
+ (e)
1962
+ (f)
1963
+
1964
+ 30
1965
+
1966
+ Supplemental Note 10: Angle dependence of parameter C0 obtained from the
1967
+ fittings of scaling law.
1968
+ The second-order Hall signal 𝐸𝐻^{2𝜔}/(𝐸𝜔)² is found to satisfy the scaling law
+ 𝐸𝐻^{2𝜔}/(𝐸𝜔)² = 𝐶0 + 𝐶1𝜎𝑥𝑥 + 𝐶2𝜎𝑥𝑥². For 𝐄𝑑𝑐 = 3 kV/m with a fixed direction (angle 𝜃), a set of curves of
+ 𝑉𝐻^{2𝜔} vs. 𝐼𝜔 is measured at different temperatures as 𝐼𝜔 is applied along -a axis and b
+ axis, respectively. Through varying temperature, the 𝜎𝑥𝑥 is changed accordingly.
+ Therefore, for a fixed angle 𝜃, the relationship between 𝐸𝐻^{2𝜔}/(𝐸𝜔)² and 𝜎𝑥𝑥 is plotted. By
+ fitting the experimental data, the parameter 𝐶0 is then obtained and presented in Fig.
+ S18.
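Extracting C0, C1, C2 from the scaling law is a linear least-squares problem in the coefficients; a sketch with hypothetical numbers (σxx given in units of 10⁶ S/m; the coefficient values are illustrative only):

```python
import numpy as np

def fit_scaling_law(sigma_xx, hall_signal):
    """Fit E_H^2w/(E^w)^2 = C0 + C1*sigma + C2*sigma^2 by linear least
    squares; C0 is the sigma-independent term of the scaling law."""
    design = np.column_stack(
        [np.ones_like(sigma_xx), sigma_xx, np.square(sigma_xx)]
    )
    return np.linalg.lstsq(design, hall_signal, rcond=None)[0]

# Hypothetical temperature sweep: sigma in 1e6 S/m with known coefficients.
sigma = np.linspace(1.0, 3.0, 12)
signal = 2.0e-7 + 1.0e-7 * sigma + 5.0e-8 * sigma ** 2
c0, c1, c2 = fit_scaling_law(sigma, signal)
```

Although the model is quadratic in σxx, it is linear in the unknowns C0, C1, C2, so no iterative fitting is needed.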
1986
+
1987
+ Figure S18: Angle-dependence of the coefficient 𝑪𝟎.
1988
+ a,b, The coefficient 𝐶0 as a function of 𝜃 with the amplitude of 𝐄𝑑𝑐 fixed at 3 kV/m
1989
+ for (a) 𝐄𝜔 ∥ −𝑎 axis and (b) 𝐄𝜔 ∥ 𝑏 axis.
1990
+
1991
+ 0
1992
+ 60
1993
+ 120
1994
+ 180
1995
+ 240
1996
+ 300
1997
+ 360
1998
+ -0.6
1999
+ -0.4
2000
+ -0.2
2001
+ 0
2002
+ 0.2
2003
+ 0.4
2004
+ 0.6
2005
+ C0 (10-7 m/V)
2006
+ q (o)
2007
+ 0
2008
+ 60
2009
+ 120
2010
+ 180
2011
+ 240
2012
+ 300
2013
+ 360
2014
+ -2
2015
+ -1
2016
+ 0
2017
+ 1
2018
+ 2
2019
+ C0 (10-7 m/V)
2020
+ q (o)
2021
+ (a)
2022
+ (b)
2023
+ axis
2024
+ axis
2025
+
4NAzT4oBgHgl3EQf9f64/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
6tE0T4oBgHgl3EQffABm/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:335f0a36ae23f1ee3ded154094f80a2640b9752e45c32b0faaef44668d8d19e2
3
+ size 2293805
6tE3T4oBgHgl3EQfpwpy/content/2301.04645v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c914e43f38b796da326874a8042b66b391cec9dc61d6ad1b1ffa68fb152afad7
3
+ size 142542
6tE3T4oBgHgl3EQfpwpy/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0ea1e08d426c6ee9124641cce4c0d346268698fef43cfe175eee5daeecaeed07
3
+ size 720941
6tE3T4oBgHgl3EQfpwpy/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0379813fc11359e2808844f7774d8762bf6ac05e43dfc36e6bf7a3c57aa247a2
3
+ size 32934
7NE3T4oBgHgl3EQfqApm/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:822da5373ee8d6e9a254350664f96860eec18ffac562e3d8e9cc023e2955c75e
3
+ size 255837
89AyT4oBgHgl3EQfQ_ZE/content/tmp_files/2301.00056v1.pdf.txt ADDED
@@ -0,0 +1,372 @@
1
+ MNRAS 000, 1–4 (2022)
2
+ Preprint 30 December 2022
3
+ Compiled using MNRAS LATEX style file v3.0
4
+ A Bayesian Neural Network Approach to identify Stars and AGNs
5
+ observed by XMM Newton ★
6
+ Sarvesh Gharat,1† and Bhaskar Bose2
7
+ 1 Centre for Machine Intelligence and Data Science, Indian Institute of Technology Bombay, 400076, Mumbai, India
8
+ 2 Smart Mobility Group, Tata Consultancy Services, 560067, Bangalore, India
9
+ Accepted XXX. Received YYY; in original form ZZZ
10
+ ABSTRACT
11
+ In today’s era, a tremendous amount of data is generated by different observatories, and manual classification of this data
+ is practically impossible. Hence, multiple machine and deep learning techniques are used to classify and categorize the
+ objects. However, these predictions are overconfident and cannot identify whether the data actually belongs to one of the
+ trained classes. To solve this major problem of overconfidence, in this study we propose a novel Bayesian Neural Network
+ which randomly samples weights from a distribution, as opposed to the fixed weight vector considered in the frequentist
+ approach. The study involves the classification of Stars and AGNs observed by XMM-Newton. For testing purposes, we also
+ consider CVs, Pulsars, ULXs, and LMXs along with Stars and AGNs; the algorithm refuses to predict most of these, as opposed
+ to the frequentist approaches wherein such objects are confidently predicted as either Stars or AGNs. The proposed algorithm
+ is one of the first instances wherein Bayesian Neural Networks are used in observational astronomy. Additionally, we also
+ apply our algorithm to identify Stars and AGNs in the whole XMM-Newton DR11 catalogue. The algorithm identifies 62807 data
+ points as AGNs and 88107 data points as Stars with enough confidence. In all other cases, the algorithm refuses to make
+ predictions due to high uncertainty and hence reduces the error rate.
23
+ Key words: methods: data analysis – methods: observational – methods: miscellaneous
24
+ 1 INTRODUCTION
25
+ Over the last few decades, a large amount of data has been regularly
+ generated by different observatories and surveys. The classification
+ of this enormous amount of data by professional astronomers is
+ time-consuming as well as practically impossible. To make the
+ process simpler, various citizen science projects (Desjardins et al.
+ 2021) (Cobb 2021) (Allf et al. 2022) (Faherty et al. 2021) have been
+ introduced, which reduce the required time to some extent. However,
+ there are many instances wherein classifying the objects is not
+ simple and may require domain expertise.
34
+ In this modern era, wherein Machine Learning and Neural Networks
+ are widely used in multiple fields, there has been significant
+ development in the use of these algorithms in Astronomy. Though
+ these algorithms are accurate in their predictions, there is certainly
+ some overconfidence (Kristiadi et al. 2020) (Kristiadi et al. 2021)
+ associated with them. Besides that, these algorithms tend to classify
+ every input as one of the trained classes (Beaumont & Haziza 2022),
+ irrespective of whether it actually belongs to those trained classes;
+ e.g., an algorithm trained to classify stars will also predict AGNs
+ as stars. To solve this major issue, in this study we propose a
+ Bayesian Neural Network (Jospin et al. 2022) (Charnock et al. 2022)
45
+ ★ Based on observations obtained with XMM-Newton, an ESA science mis-
46
+ sion with instruments and contributions directly funded by ESA Member
47
+ States and NASA
48
+ † E-mail: [email protected]
49
+ which refuses to make a prediction whenever it is not confident about
+ its predictions. The proposed algorithm is implemented on the data
+ collected by XMM-Newton (Jansen et al. 2001). We perform a binary
+ classification to classify Stars and AGNs (Małek et al. 2013) (Golob
+ et al. 2021). Additionally, to test our algorithm with inputs which
+ do not belong to the trained classes, we consider data observed from
+ CVs, Pulsars, ULXs, and LMXs. Although the algorithm does not
+ refuse to predict all of these objects, the number of objects it
+ predicts for these 4 classes is far smaller than for the trained
+ classes.
58
+ For the trained classes, the algorithm gives predictions for almost
+ 64% of the data points and avoids predicting the output whenever
+ it is not confident about its predictions. The achieved accuracy in
+ this binary classification task, whenever the algorithm gives a
+ prediction, is 98.41%. On the other hand, only 14.6% of the
+ out-of-class data points are predicted as one of the trained classes
+ by the algorithm. The reduction from 100% to 14.6% in the case of
+ such inputs is what distinguishes our model from other frequentist
+ algorithms.
66
+ 2 METHODOLOGY
67
+ In this section, we discuss the methodology used to perform this
68
+ study. This section is divided into the following subsections.
69
+ • Data Collection and Feature Extraction
70
+ • Model Architecture
71
+ • Training and Testing
72
+ © 2022 The Authors
73
+
74
+ 2
75
+ S. Gharat et al.
76
+ Class     Catalogue
+ AGN       VERONCAT (Véron-Cetty & Véron 2010)
+ LMX       NGC3115CXO (Lin et al. 2015)
+           RITTERLMXB (Ritter & Kolb 2003)
+           LMXBCAT (Liu et al. 2007)
+           INTREFCAT (Ebisawa et al. 2003)
+           M31XMMXRAY (Stiele et al. 2008)
+           M31CFCXO (Hofmann et al. 2013)
+           RASS2MASS (Haakonsen & Rutledge 2009)
+ Pulsars   ATNF (Manchester et al. 2005)
+           FERMIL2PSR (Abdo et al. 2013)
+ CV        CVC (Drake et al. 2014)
+ ULX       XSEG (Drake et al. 2014)
+ Stars     CSSC (Skiff 2014)
+ Table 1. Catalogues used to create labeled data
98
+ Class     Training Data    Test Data
+ AGN       8295             2040
+ LMX       0                49
+ Pulsars   0                174
+ CV        0                36
+ ULX       0                261
+ Stars     6649             1628
+ Total     14944            4188
+ Table 2. Data distribution after cross-matching all the data points with catalogues mentioned in Table 1
124
+ 2.1 Data Collection and Feature Extraction
+ In this study, we make use of the data provided in "XMM-DR11 SEDs"
+ Webb et al. (2020). We further cross-match the collected data with
+ different VizieR (Ochsenbein et al. 2000) catalogues. Please refer to
+ Table 1 to view all the catalogues used in this study. As the proposed
+ algorithm is a "supervised Bayesian algorithm", this happens to be
+ one of the important steps for our algorithm to work.
+ The provided data has 336 different features, which can increase
+ computational complexity to a large extent, and it also has a lot of
+ missing data points. Therefore, in this study we consider a set of
+ 18 features corresponding to the observed source. The considered
+ features for all the sources are available in our GitHub repository;
+ more information about them is available on the official webpage 1 of
+ the observatory. After cross-matching and reducing the number of
+ features, we were left with a total of 19136 data points. The data
+ distribution can be seen in Table 2. We further plot the sources
+ (refer Figure 1) based on their RA and Dec to confirm that the sky
+ coverage of the considered sources matches the actual coverage of
+ the telescope.
+ 1 http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html
+ Figure 1. Sky map coverage of considered data points
+ The collected data is further split into train and test sets according
+ to an 80 : 20 splitting condition. The exact numbers of data points
+ are mentioned in Table 2.
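As a toy sketch of this split (not the authors' code), an 80 : 20 division over, e.g., the 10335 labelled AGNs of Table 2 can be written as follows; the class size is the only number taken from the paper, and the seed is arbitrary:

```python
import random

# Illustrative 80:20 train/test split over the 10335 labelled AGNs
# (8295 train + 2040 test in Table 2); indices stand in for catalogue rows.
random.seed(0)  # arbitrary seed, for reproducibility only
indices = list(range(10335))
random.shuffle(indices)
cut = int(0.8 * len(indices))
train_idx, test_idx = indices[:cut], indices[cut:]
# int(0.8 * 10335) = 8268, close to (but not exactly) the paper's 8295.
```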
149
+ 2.2 Model Architecture
+ The proposed model has 1 input, 1 hidden, and 1 output layer (refer
+ Figure 2) with 18, 512, and 2 neurons respectively. The reason for
+ having 18 neurons in the input layer is the number of input features
+ considered in this study. Further, to increase the non-linearity of the
+ output, we make use of "ReLU" (Fukushima 1975) (Agarap 2018) as
+ the activation function for the first 2 layers. On the other hand, the
+ output layer makes use of "Softmax" for the predictions. This is
+ done so that the output of the model is the probability of the input
+ belonging to a particular class (Nwankpa et al. 2018) (Feng & Lu
+ 2019).
+ The "optimizer" and "loss" used in this study are "Adam" (Kingma
+ et al. 2020) and "Trace ELBO" (Wingate & Weber 2013) (Ranganath
+ et al. 2014) respectively. The overall idea of a BNN (Izmailov et al.
+ 2021) (Jospin et al. 2022) (Goan & Fookes 2020) is to have a
+ posterior distribution corresponding to every weight and bias such
+ that the output distribution produced by these posterior distributions
+ is similar to the categorical distribution defined in the training
+ dataset. Hence, convergence in this case can be achieved by
+ minimizing the KL divergence between the output and the categorical
+ distribution, or equivalently by maximizing the ELBO (Wingate &
+ Weber 2013) (Ranganath et al. 2014). We make use of normal
+ distributions, initialized with random mean and variance, as priors
+ (Fortuin et al. 2021), along with the likelihood derived from the
+ data, to construct the posterior distribution.
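The sampling idea above can be sketched numerically. The following is a minimal NumPy illustration, not the authors' Pyro implementation: only the layer shapes (18 → 512 → 2), the ReLU hidden layer, and the softmax output are taken from the paper, while the zero means and 0.1 standard deviations are placeholders standing in for the learned posterior parameters.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible

def softmax(z):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_forward(x, post):
    """One stochastic forward pass: draw fresh weights, ReLU, softmax."""
    w1 = rng.normal(post["w1_mu"], post["w1_sd"])  # (512, 18)
    b1 = rng.normal(post["b1_mu"], post["b1_sd"])  # (512,)
    w2 = rng.normal(post["w2_mu"], post["w2_sd"])  # (2, 512)
    b2 = rng.normal(post["b2_mu"], post["b2_sd"])  # (2,)
    h = np.maximum(w1 @ x + b1, 0.0)               # ReLU hidden layer
    return softmax(w2 @ h + b2)                    # 2-class probabilities

# Placeholder posterior parameters (NOT learned values from the paper).
post = {
    "w1_mu": np.zeros((512, 18)), "w1_sd": np.full((512, 18), 0.1),
    "b1_mu": np.zeros(512),       "b1_sd": np.full(512, 0.1),
    "w2_mu": np.zeros((2, 512)),  "w2_sd": np.full((2, 512), 0.1),
    "b2_mu": np.zeros(2),         "b2_sd": np.full(2, 0.1),
}
x = rng.normal(size=18)      # one source's 18 features
p = sample_forward(x, post)  # each call draws new weights, so p varies
```

Repeated calls to `sample_forward` yield a spread of probabilities; it is this spread that the training and testing procedure turns into an accept/refuse decision.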
174
+ 2.3 Training and Testing
+ The proposed model is constructed using PyTorch (Paszke et al.
+ 2019) and Pyro (Bingham et al. 2019). The training of the model
+ is conducted on Google Colaboratory, making use of an NVIDIA
+ K80 GPU (Carneiro et al. 2018). The model is trained over 2500
+ epochs with a learning rate of 0.01. Both these parameters, i.e. the
+ number of epochs and the learning rate, have to be tuned; this is
+ done by iterating the algorithm multiple times with varying
+ parameter values.
+ The algorithm is further asked to make 100 predictions
+ corresponding to every sample in the test set. Every time it makes a
+ prediction, the corresponding prediction probability varies. This is
+ due to the random sampling of weights and biases from the trained
+ distributions. Further, the algorithm considers the "mean" and
+ "standard deviation" of those probabilities to decide whether to
+ proceed with classification or not.
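The accept/refuse step described above can be sketched as follows. This is a hypothetical reconstruction: the paper does not state its exact uncertainty threshold, so both the stand-in stochastic predictor and the 0.1 cut-off below are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)  # fixed seed for reproducibility

def stochastic_predict(features):
    """Stand-in for one BNN forward pass: a noisy 2-class probability vector.

    In the real model, every call would resample weights and biases from
    the trained posterior distributions."""
    p_agn = float(np.clip(0.9 + rng.normal(0.0, 0.02), 0.0, 1.0))
    return np.array([p_agn, 1.0 - p_agn])

def predict_or_abstain(features, n_draws=100, max_std=0.1):
    """Make 100 predictions; classify only if their spread is small."""
    probs = np.stack([stochastic_predict(features) for _ in range(n_draws)])
    mean, std = probs.mean(axis=0), probs.std(axis=0)
    winner = int(mean.argmax())
    return winner if std[winner] <= max_std else None  # None = refuse

result = predict_or_abstain(features=None)  # class 0 here: spread is small
```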
189
+ BNN Classifier 3
201
+ Figure 2. Model Architecture
202
+           AGN    Stars
+ AGN       1312   6
+ Stars     31     986
+ Table 3. Confusion Matrix for classified data points
+ Class     Precision   Recall   F1 Score
+ AGN       0.99        0.97     0.98
+ Stars     0.97        0.99     0.98
+ Average   0.98        0.98     0.98
+ Table 4. Classification report for classified data points
228
+ 3 RESULTS AND DISCUSSION
229
+ The proposed algorithm is one of the initial attempts to implement
+ "Bayesian Neural Networks" in observational astronomy, and it has
+ shown significant results. The algorithm gives predictions with an
+ accuracy of more than 98% whenever it agrees to make predictions
+ for the trained classes.
234
+ Table 3 represents the confusion matrix of the classified data. To
+ calculate the accuracy, we make use of the following formula:
+ Accuracy = (a_{11} + a_{22}) / (a_{11} + a_{12} + a_{21} + a_{22}) × 100
+ In our case, the calculated accuracy is
+ Accuracy = (1312 + 986) / (1312 + 6 + 31 + 986) × 100 = 98.4%
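The entries of Tables 3 and 4 can be re-derived directly from the confusion matrix. In the following check the rows are read as the predicted class and the columns as the true class, which is the orientation implied by the reported precision and recall values:

```python
# Confusion matrix from Table 3: rows = predicted class, columns = true class.
cm = [[1312, 6],    # predicted AGN : 1312 true AGNs, 6 true Stars
      [31, 986]]    # predicted Star: 31 true AGNs, 986 true Stars

total = cm[0][0] + cm[0][1] + cm[1][0] + cm[1][1]
accuracy = 100.0 * (cm[0][0] + cm[1][1]) / total  # ~98.4%

precision_agn = cm[0][0] / (cm[0][0] + cm[0][1])  # TP / (TP + FP)
recall_agn = cm[0][0] / (cm[0][0] + cm[1][0])     # TP / (TP + FN)
f1_agn = 2 * precision_agn * recall_agn / (precision_agn + recall_agn)
```

Up to rounding conventions, these agree with the AGN row of Table 4 (0.99, 0.97, 0.98).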
244
+ As accuracy is not the only measure to evaluate a classification
+ model, we further calculate the precision, recall and F1 score
+ corresponding to both classes, as shown in Table 4.
+ Although the results obtained from this simpler "BNN" could also
+ be obtained via complex frequentist models, the uniqueness of the
+ algorithm is that it agrees to classify only 14% of the unknown-class
+ samples as one of the trained classes, as opposed to frequentist
+ approaches wherein all those samples are classified as one of these
+ classes. Table 5 shows the percentage of data from untrained classes
+ which are predicted as a Star or an AGN.
+ Class     AGN      Star
+ CV        13.8 %   0 %
+ Pulsars   2.3 %    6.3 %
+ ULX       14.9 %   6.5 %
+ LMX       2 %      26.5 %
+ Total     9.4 %    7.8 %
+ Table 5. Percentage of misidentified data points
273
+ As the algorithm gives significant results on labelled data, we make
+ use of it to identify the possible Stars and AGNs in the raw data 2.
+ The algorithm identifies almost 7.1% of the data as AGNs and
+ 10.04% of the data as Stars. Numerically, these numbers happen to
+ be 62807 and 88107 respectively. Although there is a high
+ probability that more Stars and AGNs exist than these numbers
+ suggest, the algorithm simply refuses to give a prediction whenever
+ it is not confident enough about it.
281
+ 4 CONCLUSIONS
282
+ In this study, we propose a Bayesian approach to identify Stars and
+ AGNs observed by XMM-Newton. The proposed algorithm avoids
+ making predictions whenever it is unsure about them. Implementing
+ such algorithms will help in reducing the number of wrong
+ predictions, which is one of the major drawbacks of algorithms
+ making use of the frequentist approach. This is important to
+ consider, as there always exists a situation wherein the algorithm
+ receives an input on which it was never trained. The proposed
+ algorithm also identifies 62807 AGNs and 88107 Stars in data
+ release 11 by XMM-Newton.
292
+ 5 CONFLICT OF INTEREST
293
+ The authors declare that they have no conflict of interest.
294
+ DATA AVAILABILITY
295
+ The raw data used in this study is publicly available from the
+ XMM-Newton data archive. All the code corresponding to the
+ algorithm, together with the predicted objects and their predictions,
+ will be made publicly available on GitHub and paperswithcode by
+ June 2023.
299
+ REFERENCES
300
+ Abdo A., et al., 2013, The Astrophysical Journal Supplement Series, 208, 17
301
+ Agarap A. F., 2018, arXiv preprint arXiv:1803.08375
302
+ Allf B. C., Cooper C. B., Larson L. R., Dunn R. R., Futch S. E., Sharova M.,
303
+ Cavalier D., 2022, BioScience, 72, 651
304
+ Beaumont J.-F., Haziza D., 2022, Canadian Journal of Statistics
305
+ Bingham E., et al., 2019, The Journal of Machine Learning Research, 20, 973
306
+ 2 http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html
308
313
+ Carneiro T., Da Nóbrega R. V. M., Nepomuceno T., Bian G.-B., De Albu-
314
+ querque V. H. C., Reboucas Filho P. P., 2018, IEEE Access, 6, 61677
315
+ Charnock T., Perreault-Levasseur L., Lanusse F., 2022, in Artificial Intelli-
+ gence for High Energy Physics. World Scientific, pp 663–713
317
+ Cobb B., 2021, in Astronomical Society of the Pacific Conference Series.
318
+ p. 415
319
+ Desjardins R., Pahud D., Doerksen N., Laczko M., 2021, in Astronomical
320
+ Society of the Pacific Conference Series. p. 23
321
+ Drake A., et al., 2014, Monthly Notices of the Royal Astronomical Society,
322
+ 441, 1186
323
+ Ebisawa K., Bourban G., Bodaghee A., Mowlavi N., 2003, Astronomy &
324
+ Astrophysics, 411, L59
325
+ Faherty J. K., et al., 2021, The Astrophysical Journal, 923, 48
326
+ Feng J., Lu S., 2019, in Journal of physics: conference series. p. 022030
327
+ Fortuin V., Garriga-Alonso A., Ober S. W., Wenzel F., Rätsch G., Turner R. E.,
328
+ van der Wilk M., Aitchison L., 2021, arXiv preprint arXiv:2102.06571
329
+ Fukushima K., 1975, Biological cybernetics, 20, 121
330
+ Goan E., Fookes C., 2020, in Case Studies in Applied Bayesian Data Science.
331
+ Springer, pp 45–87
332
+ Golob A., Sawicki M., Goulding A. D., Coupon J., 2021, Monthly Notices of
333
+ the Royal Astronomical Society, 503, 4136
334
+ Haakonsen C. B., Rutledge R. E., 2009, The Astrophysical Journal Supple-
335
+ ment Series, 184, 138
336
+ Hofmann F., Pietsch W., Henze M., Haberl F., Sturm R., Della Valle M.,
337
+ Hartmann D. H., Hatzidimitriou D., 2013, Astronomy & Astrophysics,
338
+ 555, A65
339
+ Izmailov P., Vikram S., Hoffman M. D., Wilson A. G. G., 2021, in Interna-
340
+ tional conference on machine learning. pp 4629–4640
341
+ Jansen F., et al., 2001, Astronomy & Astrophysics, 365, L1
342
+ Jospin L. V., Laga H., Boussaid F., Buntine W., Bennamoun M., 2022, IEEE
343
+ Computational Intelligence Magazine, 17, 29
344
+ Kingma D. P., Ba J. A., Adam J., 2020, arXiv preprint arXiv:1412.6980, 106
345
+ Kristiadi A., Hein M., Hennig P., 2020, in International conference on machine
+ learning. pp 5436–5446
347
+ Kristiadi A., Hein M., Hennig P., 2021, Advances in Neural Information
348
+ Processing Systems, 34, 18789
349
+ Lin D., et al., 2015, The Astrophysical Journal, 808, 19
350
+ Liu Q., Van Paradijs J., Van Den Heuvel E., 2007, Astronomy & Astrophysics,
351
+ 469, 807
352
+ Małek K., et al., 2013, Astronomy & Astrophysics, 557, A16
353
+ Manchester R. N., Hobbs G. B., Teoh A., Hobbs M., 2005, The Astronomical
354
+ Journal, 129, 1993
355
+ Nwankpa C., Ijomah W., Gachagan A., Marshall S., 2018, arXiv preprint
356
+ arXiv:1811.03378
357
+ Ochsenbein F., Bauer P., Marcout J., 2000, Astronomy and Astrophysics
358
+ Supplement Series, 143, 23
359
+ Paszke A., et al., 2019, Advances in neural information processing systems,
360
+ 32
361
+ Ranganath R., Gerrish S., Blei D., 2014, in Artificial intelligence and statistics.
362
+ pp 814–822
363
+ Ritter H., Kolb U., 2003, Astronomy & Astrophysics, 404, 301
364
+ Skiff B., 2014, VizieR Online Data Catalog, pp B–mk
365
+ Stiele H., Pietsch W., Haberl F., Freyberg M., 2008, Astronomy & Astro-
366
+ physics, 480, 599
367
+ Véron-Cetty M.-P., Véron P., 2010, Astronomy & Astrophysics, 518, A10
368
+ Webb N., et al., 2020, Astronomy & Astrophysics, 641, A136
369
+ Wingate D., Weber T., 2013, arXiv preprint arXiv:1301.1299
370
+ This paper has been typeset from a TEX/LATEX file prepared by the author.
89AyT4oBgHgl3EQfQ_ZE/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,327 @@
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf,len=326
2
+ page_content='MNRAS 000, 1–4 (2022) Preprint 30 December 2022 Compiled using MNRAS LATEX style file v3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
3
+ page_content='0 A Bayesian Neural Network Approach to identify Stars and AGNs observed by XMM Newton ★ Sarvesh Gharat,1† and Bhaskar Bose2 1 Centre for Machine Intelligence and Data Science, Indian Institute of Technology Bombay, 400076, Mumbai, India 2 Smart Mobility Group, Tata Consultancy Services, 560067, Bangalore, India Accepted XXX.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
4
+ page_content=' Received YYY;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
5
+ page_content=' in original form ZZZ ABSTRACT In today’s era, a tremendous amount of data is generated by different observatories and manual classification of data is something which is practically impossible.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
6
+ page_content=' Hence, to classify and categorize the objects there are multiple machine and deep learning techniques used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
7
+ page_content=' However, these predictions are overconfident and won’t be able to identify if the data actually belongs to the trained class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
8
+ page_content=' To solve this major problem of overconfidence, in this study we propose a novel Bayesian Neural Network which randomly samples weights from a distribution as opposed to the fixed weight vector considered in the frequentist approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
9
+ page_content=' The study involves the classification of Stars and AGNs observed by XMM Newton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
10
+ page_content=' However, for testing purposes, we consider CV, Pulsars, ULX, and LMX along with Stars and AGNs which the algorithm refuses to predict with higher accuracy as opposed to the frequentist approaches wherein these objects are predicted as either Stars or AGNs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
11
+ page_content=' The proposed algorithm is one of the first instances wherein the use of Bayesian Neural Networks is done in observational astronomy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
12
+ page_content=' Additionally, we also make our algorithm to identify stars and AGNs in the whole XMM-Newton DR11 catalogue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
13
+ page_content=' The algorithm almost identifies 62807 data points as AGNs and 88107 data points as Stars with enough confidence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
14
+ page_content=' In all other cases, the algorithm refuses to make predictions due to high uncertainty and hence reduces the error rate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
15
+ page_content=' Key words: methods: data analysis – methods: observational – methods: miscellaneous 1 INTRODUCTION Since the last few decades, a large amount of data is regularly generated by different observatories and surveys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
16
+ page_content=' The classification of this enormous amount of data by professional astronomers is time-consuming as well as practically impossible.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
17
+ page_content=' To make the process simpler, various citizen science projects (Desjardins et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
18
+ page_content=' 2021) (Cobb 2021) (Allf et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
19
+ page_content=' 2022) (Faherty et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
20
+ page_content=' 2021) are introduced which has been reducing the required time by some extent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
21
+ page_content=' However, there are many instances wherein classifying the objects won’t be simple and may require domain expertise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
22
+ page_content=' In this modern era, wherein Machine Learning and Neural Net- works are widely used in multiple fields, there has been significant development in the use of these algorithms in Astronomy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
23
+ page_content=' Though these algorithms are accurate with their predictions there is certainly some overconfidence (Kristiadi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
24
+ page_content=' 2020) (Kristiadi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
25
+ page_content=' 2021) associated with it.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
26
+ page_content=' Besides that, these algorithms tend to classify every input as one of the trained classes (Beaumont & Haziza 2022) irrespective of whether it actually belongs to those trained classes eg: The algorithm trained to classify stars will also predict AGNs as one of the stars.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
27
+ page_content=' To solve this major issue, in this study we propose a Bayesian Neural Network (Jospin et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
28
+ page_content=' 2022) (Charnock et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
29
+ page_content=' 2022) ★ Based on observations obtained with XMM-Newton, an ESA science mis- sion with instruments and contributions directly funded by ESA Member States and NASA † E-mail: sarveshgharat19@gmail.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
30
+ page_content='com which refuses to make a prediction whenever it isn’t confident about its predictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
31
+ page_content=' The proposed algorithm is implemented on the data collected by XMM-Newton (Jansen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
32
+ page_content=' 2001).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
33
+ page_content=' We do a binary classification to classify Stars and AGNs (Małek et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
34
+ page_content=' 2013) (Golob et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
35
+ page_content=' 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
36
+ page_content=' Additionally to test our algorithm with the inputs which don’t belong to the trained class we consider data observed from CV, Pulsars, ULX, and LMX.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
37
+ page_content=' Although, the algorithm doesn’t refuse to predict all these objects, but the number of objects it predicts for these 4 classes is way smaller than that of trained classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
38
+ page_content=' For the trained classes, the algorithm gives its predictions for al- most 64% of the data points and avoids predicting the output when- ever it is not confident about its predictions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
39
+ page_content=' The achieved accuracy in this binary classification task whenever the algorithm gives its prediction is 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
40
+ page_content='41%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
41
+ page_content=' On the other hand, only 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
42
+ page_content='6% of the incor- rect data points are predicted as one of the classes by the algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
43
+ page_content=' The percentage decrease from 100% to 14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
44
+ page_content='6% in the case of different inputs is what dominates our model over other frequentist algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
45
+ page_content=' 2 METHODOLOGY In this section, we discuss the methodology used to perform this study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
46
+ page_content=' This section is divided into the following subsections.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
47
S. Gharat et al.

Class    Catalogue
AGN      VERONCAT (Véron-Cetty & Véron 2010)
LMX      NGC3115CXO (Lin et al. 2015)
         RITTERLMXB (Ritter & Kolb 2003)
         LMXBCAT (Liu et al. 2007)
         INTREFCAT (Ebisawa et al. 2003)
         M31XMMXRAY (Stiele et al. 2008)
         M31CFCXO (Hofmann et al. 2013)
         RASS2MASS (Haakonsen & Rutledge 2009)
Pulsars  ATNF (Manchester et al. 2005)
         FERMIL2PSR (Abdo et al. 2013)
CV       CVC (Drake et al. 2014)
ULX      XSEG (Drake et al. 2014)
Stars    CSSC (Skiff 2014)

Table 1. Catalogues used to create labeled data

Class    Training Data  Test Data
AGN      8295           2040
LMX      0              49
Pulsars  0              174
CV       0              36
ULX      0              261
Stars    6649           1628
Total    14944          4188

Table 2. Data distribution after cross-matching all the data points with the catalogues mentioned in Table 1
2.1 Data Collection and Feature Extraction

In this study, we make use of the data provided in the "XMM-DR11 SEDs" (Webb et al. 2020). We further cross-match the collected data with different VizieR (Ochsenbein et al. 2000) catalogues; please refer to Table 1 for all the catalogues used in this study. As the proposed algorithm is a supervised Bayesian algorithm, this is one of the essential steps for the algorithm to work. The provided data has 336 different features, which can increase computational complexity to a large extent, and it also has many missing data points. Therefore, in this study we consider a set of 18 features corresponding to each observed source. The considered features for all the sources are available in our GitHub repository, and more information about them is available on the official webpage¹ of the observatory. After cross-matching and reducing the number of features, we were left with a total of 19136 data points. The data distribution can be seen in Table 2. We also plot the sources (refer to Figure 1) based on their RA and Dec to confirm that the sky coverage of the considered sources matches the actual coverage of the telescope.

¹ http://xmmssc.irap.omp.eu/Catalogue/4XMM-DR11/col_unsrc.html

Figure 1. Sky map coverage of considered data points

The collected data is further split into training and test sets according to an 80:20 splitting condition. The exact numbers of data points are given in Table 2.
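The 80:20 split described in Section 2.1 can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline; the feature array and random seed are placeholder assumptions.

```python
import numpy as np

def train_test_split_80_20(features, labels, seed=0):
    """Shuffle the labelled sources and split them 80:20 into
    training and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    cut = int(0.8 * len(features))
    train_idx, test_idx = idx[:cut], idx[cut:]
    return (features[train_idx], labels[train_idx],
            features[test_idx], labels[test_idx])

# Toy stand-in for the 18-feature catalogue (hypothetical values).
X = np.random.rand(100, 18)
y = np.random.randint(0, 2, size=100)
X_tr, y_tr, X_te, y_te = train_test_split_80_20(X, y)
print(len(X_tr), len(X_te))  # 80 20
```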
2.2 Model Architecture

The proposed model has one input, one hidden, and one output layer (refer to Figure 2), with 18, 512, and 2 neurons respectively. The input layer has 18 neurons because that is the number of input features considered in this study. Further, to increase the non-linearity of the output, we make use of ReLU (Fukushima 1975; Agarap 2018) as the activation function for the first two layers. The output layer, on the other hand, makes use of Softmax, so that the output of the model is the probability of a source belonging to a particular class (Nwankpa et al. 2018; Feng & Lu 2019). The optimizer and loss used in this study are Adam (Kingma et al. 2020) and Trace ELBO (Wingate & Weber 2013; Ranganath et al. 2014) respectively. The overall idea of a BNN (Izmailov et al. 2021; Jospin et al. 2022; Goan & Fookes 2020) is to learn a posterior distribution for every weight and bias such that the output distribution produced by these posteriors matches the categorical distribution defined by the training dataset. Hence, convergence can be achieved by minimizing the KL divergence between the output and the categorical distribution, or equivalently by maximizing the ELBO (Wingate & Weber 2013; Ranganath et al. 2014). We use normal distributions initialized with random means and variances as priors (Fortuin et al. 2021), along with the likelihood derived from the data, to construct the posterior distribution.
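The stochastic forward pass of the 18-512-2 architecture can be sketched in plain NumPy, rather than the authors' Pyro implementation. The posterior means and standard deviations below are random placeholders standing in for the learned values, and ReLU is applied at the hidden layer; this is an illustrative sketch, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

# Each weight and bias has a Normal posterior; means/sds are placeholders.
posterior = {
    "W1_mu": rng.normal(size=(18, 512)), "W1_sd": np.full((18, 512), 0.1),
    "b1_mu": np.zeros(512),              "b1_sd": np.full(512, 0.1),
    "W2_mu": rng.normal(size=(512, 2)),  "W2_sd": np.full((512, 2), 0.1),
    "b2_mu": np.zeros(2),                "b2_sd": np.full(2, 0.1),
}

def predict_once(x, p=posterior):
    """One stochastic forward pass: sample every weight and bias from
    its posterior, then run the 18-512-2 network with ReLU + Softmax."""
    W1 = rng.normal(p["W1_mu"], p["W1_sd"])
    b1 = rng.normal(p["b1_mu"], p["b1_sd"])
    W2 = rng.normal(p["W2_mu"], p["W2_sd"])
    b2 = rng.normal(p["b2_mu"], p["b2_sd"])
    h = relu(x @ W1 + b1)
    return softmax(h @ W2 + b2)   # (P(class 0), P(class 1))

probs = predict_once(rng.normal(size=18))
```

Because the weights are resampled on every call, repeated calls on the same input return different probability vectors, which is exactly the behaviour exploited in Section 2.3.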
2.3 Training and Testing

The proposed model is constructed using PyTorch (Paszke et al. 2019) and Pyro (Bingham et al. 2019). The training of the model is conducted on Google Colaboratory, making use of an NVIDIA K80 GPU (Carneiro et al. 2018). The model is trained over 2500 epochs with a learning rate of 0.01. Both of these parameters, i.e. the number of epochs and the learning rate, have to be tuned, which is done by iterating the algorithm multiple times with varying parameter values. The algorithm is then asked to make 100 predictions corresponding to every sample in the test set. Every time it makes a prediction, the corresponding prediction probability varies; this is due to the random sampling of weights and biases from the trained distributions. The algorithm then considers the mean and standard deviation of those probabilities to decide whether or not to proceed with classification.
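The classify-or-abstain rule of Section 2.3 can be sketched as below. The paper does not state the exact confidence cut used, so the threshold value and the simulated probability samples are illustrative assumptions only.

```python
import numpy as np

def classify_or_abstain(sample_probs, std_threshold=0.1):
    """sample_probs: (100, 2) array of class probabilities, one row per
    stochastic forward pass. Classify only when the spread of the
    winning class's probability is small; otherwise abstain.
    The threshold value is a hypothetical choice, not the paper's."""
    mean = sample_probs.mean(axis=0)
    std = sample_probs.std(axis=0)
    winner = int(mean.argmax())
    if std[winner] > std_threshold:
        return None            # model is not confident: refuse to predict
    return winner              # e.g. 0 = AGN, 1 = Star

# Simulated prediction samples (not real model output): one tight
# cluster of probabilities, one widely scattered cluster.
rng = np.random.default_rng(1)
confident = np.clip(rng.normal([0.95, 0.05], 0.02, size=(100, 2)), 0, 1)
uncertain = np.clip(rng.normal([0.55, 0.45], 0.25, size=(100, 2)), 0, 1)
print(classify_or_abstain(confident), classify_or_abstain(uncertain))
```

The abstention on the second case is what limits misclassification of sources from the untrained classes.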
MNRAS 000, 1–4 (2022)

Figure 2. Model Architecture

         AGN   Stars
AGN      1312  6
Stars    31    986

Table 3. Confusion matrix for classified data points

Class    Precision  Recall  F1 Score
AGN      0.99       0.97    0.98
Stars    0.97       0.99    0.98
Average  0.98       0.98    0.98

Table 4. Classification report for classified data points
3 RESULTS AND DISCUSSION

The proposed algorithm is one of the initial attempts to implement Bayesian Neural Networks in observational astronomy, and it has shown significant results. The algorithm gives predictions with an accuracy of more than 98% whenever it agrees to make predictions for the trained classes. Table 3 presents the confusion matrix of the classified data. To calculate accuracy, we make use of the following formula:

Accuracy = (a11 + a22) / (a11 + a12 + a21 + a22) × 100

In our case, the calculated accuracy is

Accuracy = (1312 + 986) / (1312 + 6 + 31 + 986) × 100 = 98.4%

As accuracy is not the only measure to evaluate a classification model, we further calculate the precision, recall, and F1 score corresponding to both classes, as shown in Table 4. Although the results obtained from this simpler BNN could also be obtained via complex frequentist models, the uniqueness of the algorithm is that it agrees to classify only 14% of the samples from unknown classes as one of the trained classes, as opposed to frequentist approaches, wherein all those samples are classified as one of these classes. Table 5 shows the percentages for data from the untrained classes.

Class    AGN     Star
CV       13.8 %  0 %
Pulsars  2.3 %   6.3 %
ULX      14.9 %  6.5 %
LMX      2 %     26.5 %
Total    9.4 %   7.8 %

Table 5. Percentage of misidentified data points which are predicted as a Star or an AGN
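The accuracy above, and per-class metrics like those in Table 4, can be reproduced directly from the confusion matrix in Table 3. Treating rows as the true class and columns as the predicted class is an assumption here (Table 3 does not state its orientation), but it does not affect the accuracy.

```python
# Confusion matrix from Table 3 (assumed: rows = true, cols = predicted).
#            pred AGN  pred Star
cm = [[1312, 6],      # true AGN
      [31,   986]]    # true Star

a11, a12 = cm[0]
a21, a22 = cm[1]

accuracy = (a11 + a22) / (a11 + a12 + a21 + a22) * 100
precision_agn = a11 / (a11 + a21)   # fraction of predicted AGNs that are AGNs
recall_agn = a11 / (a11 + a12)      # fraction of true AGNs recovered
f1_agn = 2 * precision_agn * recall_agn / (precision_agn + recall_agn)
print(round(accuracy, 1))  # 98.4
```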
As the algorithm gives significant results on labelled data, we make use of it to identify possible Stars and AGNs in the raw data². The algorithm identifies almost 7.1% of the data as Stars and 10.04% as AGNs; numerically, these numbers happen to be 62807 and 88107 respectively. Although there is a high probability that more Stars and AGNs exist than these numbers suggest, the algorithm simply refuses to give a prediction when it is not confident enough.
4 CONCLUSIONS

In this study, we propose a Bayesian approach to identify Stars and AGNs observed by XMM-Newton. The proposed algorithm avoids making predictions whenever it is unsure about them. Implementing such algorithms will help reduce the number of wrong predictions, which is one of the major drawbacks of algorithms that make use of the frequentist approach. This is important to consider, as there always exists a situation wherein the algorithm receives an input on which it was never trained. The proposed algorithm also identifies 62807 Stars and 88107 AGNs in data release 11 of XMM-Newton.
5 CONFLICT OF INTEREST

The authors declare that they have no conflict of interest.

DATA AVAILABILITY

The raw data used in this study is made publicly available by the XMM-Newton data archive. All the code corresponding to the algorithm, and the predicted objects along with their predictions, will be made publicly available on GitHub and paperswithcode by June 2023.
+ page_content=' REFERENCES Abdo A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
157
+ page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
158
+ page_content=', 2013, The Astrophysical Journal Supplement Series, 208, 17 Agarap A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
159
+ page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
160
+ page_content=', 2018, arXiv preprint arXiv:1803.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
161
+ page_content='08375 Allf B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
162
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
163
+ page_content=', Cooper C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
164
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
165
+ page_content=', Larson L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
166
+ page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
167
+ page_content=', Dunn R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
168
+ page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
169
+ page_content=', Futch S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
170
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
171
+ page_content=', Sharova M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
172
+ page_content=', Cavalier D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
173
+ page_content=', 2022, BioScience, 72, 651 Beaumont J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
174
+ page_content='-F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
175
+ page_content=', Haziza D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
176
+ page_content=', 2022, Canadian Journal of Statistics Bingham E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
177
+ page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
178
+ page_content=', 2019, The Journal of Machine Learning Research, 20, 973 2 http://xmmssc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
179
+ page_content='irap.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
180
+ page_content='omp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
181
+ page_content='eu/Catalogue/4XMM-DR11/col_unsrc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
182
+ page_content=' html MNRAS 000, 1–4 (2022) h2 54 S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
183
+ page_content=' Gharat et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
184
+ page_content=' Carneiro T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
185
+ page_content=', Da Nóbrega R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
186
+ page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
187
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
188
+ page_content=', Nepomuceno T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
189
+ page_content=', Bian G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
190
+ page_content='-B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
191
+ page_content=', De Albu- querque V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
192
+ page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
193
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
194
+ page_content=', Reboucas Filho P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
195
+ page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
196
+ page_content=', 2018, IEEE Access, 6, 61677 Charnock T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
197
+ page_content=', Perreault-Levasseur L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
198
+ page_content=', Lanusse F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
199
+ page_content=', 2022, in , Artificial Intelli- gence for High Energy Physics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
200
+ page_content=' World Scientific, pp 663–713 Cobb B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
201
+ page_content=', 2021, in Astronomical Society of the Pacific Conference Series.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
202
+ page_content=' p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
203
+ page_content=' 415 Desjardins R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
204
+ page_content=', Pahud D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
205
+ page_content=', Doerksen N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
206
+ page_content=', Laczko M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
207
+ page_content=', 2021, in Astronomical Society of the Pacific Conference Series.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
208
+ page_content=' p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
209
+ page_content=' 23 Drake A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
210
+ page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
211
+ page_content=', 2014, Monthly Notices of the Royal Astronomical Society, 441, 1186 Ebisawa K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
212
+ page_content=', Bourban G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
213
+ page_content=', Bodaghee A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
214
+ page_content=', Mowlavi N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
215
+ page_content=', 2003, Astronomy & Astrophysics, 411, L59 Faherty J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
216
+ page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
217
+ page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
218
+ page_content=', 2021, The Astrophysical Journal, 923, 48 Feng J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
219
+ page_content=', Lu S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
220
+ page_content=', 2019, in Journal of physics: conference series.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
221
+ page_content=' p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
222
+ page_content=' 022030 Fortuin V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
223
+ page_content=', Garriga-Alonso A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
224
+ page_content=', Ober S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
225
+ page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
226
+ page_content=', Wenzel F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
227
+ page_content=', Rätsch G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
228
+ page_content=', Turner R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
229
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
230
+ page_content=', van der Wilk M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
231
+ page_content=', Aitchison L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
232
+ page_content=', 2021, arXiv preprint arXiv:2102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
233
+ page_content='06571 Fukushima K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
234
+ page_content=', 1975, Biological cybernetics, 20, 121 Goan E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
235
+ page_content=', Fookes C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
236
+ page_content=', 2020, in Case Studies in Applied Bayesian Data Science.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
237
+ page_content=' Springer, pp 45–87 Golob A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
238
+ page_content=', Sawicki M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
239
+ page_content=', Goulding A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
240
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
241
+ page_content=', Coupon J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
242
+ page_content=', 2021, Monthly Notices of the Royal Astronomical Society, 503, 4136 Haakonsen C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
243
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
244
+ page_content=', Rutledge R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
245
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
246
+ page_content=', 2009, The Astrophysical Journal Supplement Series, 184, 138 Hofmann F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
247
+ page_content=', Pietsch W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
248
+ page_content=', Henze M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
249
+ page_content=', Haberl F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
250
+ page_content=', Sturm R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
251
+ page_content=', Della Valle M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
252
+ page_content=', Hartmann D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
253
+ page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
254
+ page_content=', Hatzidimitriou D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
255
+ page_content=', 2013, Astronomy & Astrophysics, 555, A65 Izmailov P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
256
+ page_content=', Vikram S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
257
+ page_content=', Hoffman M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
258
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
259
+ page_content=', Wilson A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
260
+ page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
261
+ page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
262
+ page_content=', 2021, in International conference on machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
263
+ page_content=' pp 4629–4640 Jansen F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
264
+ page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
265
+ page_content=', 2001, Astronomy & Astrophysics, 365, L1 Jospin L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
266
+ page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
267
+ page_content=', Laga H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
268
+ page_content=', Boussaid F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
269
+ page_content=', Buntine W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
270
+ page_content=', Bennamoun M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
271
+ page_content=', 2022, IEEE Computational Intelligence Magazine, 17, 29 Kingma D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
272
+ page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
273
+ page_content=', Ba J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
274
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
275
+ page_content=', Adam J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
276
+ page_content=', 2020, arXiv preprint arXiv:1412.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
277
+ page_content='6980, 106 Kristiadi A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
278
+ page_content=', Hein M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
279
+ page_content=', Hennig P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
280
+ page_content=', 2020, in International conference on machine learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
281
+ page_content=' pp 5436–5446 Kristiadi A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
282
+ page_content=', Hein M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
283
+ page_content=', Hennig P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
284
+ page_content=', 2021, Advances in Neural Information Processing Systems, 34, 18789 Lin D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
285
+ page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
286
+ page_content=', 2015, The Astrophysical Journal, 808, 19 Liu Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
287
+ page_content=', Van Paradijs J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
288
+ page_content=', Van Den Heuvel E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
289
+ page_content=', 2007, Astronomy & Astrophysics, 469, 807 Małek K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
290
+ page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
291
+ page_content=', 2013, Astronomy & Astrophysics, 557, A16 Manchester R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
292
+ page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
293
+ page_content=', Hobbs G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
294
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
295
+ page_content=', Teoh A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
296
+ page_content=', Hobbs M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
297
+ page_content=', 2005, The Astronomical Journal, 129, 1993 Nwankpa C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
298
+ page_content=', Ijomah W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
299
+ page_content=', Gachagan A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
300
+ page_content=', Marshall S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
301
+ page_content=', 2018, arXiv preprint arXiv:1811.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
302
+ page_content='03378 Ochsenbein F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
303
+ page_content=', Bauer P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
304
+ page_content=', Marcout J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
305
+ page_content=', 2000, Astronomy and Astrophysics Supplement Series, 143, 23 Paszke A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
306
+ page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
307
+ page_content=', 2019, Advances in neural information processing systems, 32 Ranganath R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
308
+ page_content=', Gerrish S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
309
+ page_content=', Blei D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
310
+ page_content=', 2014, in Artificial intelligence and statistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
311
+ page_content=' pp 814–822 Ritter H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
312
+ page_content=', Kolb U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
313
+ page_content=', 2003, Astronomy & Astrophysics, 404, 301 Skiff B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
314
+ page_content=', 2014, VizieR Online Data Catalog, pp B–mk Stiele H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
315
+ page_content=', Pietsch W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
316
+ page_content=', Haberl F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
317
+ page_content=', Freyberg M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
318
+ page_content=', 2008, Astronomy & Astrophysics, 480, 599 Véron-Cetty M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
319
+ page_content='-P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
320
+ page_content=', Véron P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
321
+ page_content=', 2010, Astronomy & Astrophysics, 518, A10 Webb N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
322
+ page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
323
+ page_content=', 2020, Astronomy & Astrophysics, 641, A136 Wingate D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
324
+ page_content=', Weber T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
325
+ page_content=', 2013, arXiv preprint arXiv:1301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
326
+ page_content='1299 This paper has been typeset from a TEX/LATEX file prepared by the author.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
327
+ page_content=' MNRAS 000, 1–4 (2022)' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/89AyT4oBgHgl3EQfQ_ZE/content/2301.00056v1.pdf'}
99E5T4oBgHgl3EQfRQ7z/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:985f60497c14d9ab63b556366c0ee837e164d0849cb76182f223c956021b0d0c
3
+ size 233276
9dE4T4oBgHgl3EQfdwwo/content/2301.05093v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:50334824fa5e4c1fb968eb00bcd8da986e3988417ea0e0409cac4bc7536fdeea
3
+ size 6542313
9dE4T4oBgHgl3EQfdwwo/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:97b468a86900a36f07ea08f098d48cb6182100e2f78399a40b40e17c099b7af5
3
+ size 4915245
A9FIT4oBgHgl3EQf_Swz/content/tmp_files/2301.11414v1.pdf.txt ADDED
@@ -0,0 +1,1626 @@
1
+ A Simple Algorithm For Scaling Up Kernel Methods
2
+ Teng Andrea Xu†, Bryan Kelly‡, and Semyon Malamud†
3
+ †Swiss Finance Institute, EPFL
4
+ andrea.xu,[email protected]
5
+ ‡Yale School of Management, Yale University
6
7
+ Abstract
8
+ The recent discovery of the equivalence between infinitely wide neural networks
9
+ (NNs) in the lazy training regime and Neural Tangent Kernels (NTKs) Jacot et al.
10
+ [2018] has revived interest in kernel methods. However, conventional wisdom suggests
11
+ kernel methods are unsuitable for large samples due to their computational complexity
12
+ and memory requirements. We introduce a novel random feature regression algorithm
13
+ that allows us (when necessary) to scale to virtually infinite numbers of random features. We illustrate the performance of our method on the CIFAR-10 dataset.
15
+ arXiv:2301.11414v1 [cs.LG] 26 Jan 2023
16
+
17
+ 1 Introduction
19
+ Modern neural networks operate in the over-parametrized regime, which sometimes requires
20
+ orders of magnitude more parameters than training data points. Effectively, they are interpolators (see, Belkin [2021]) and overfit the data in the training sample, with no consequences
22
+ for the out-of-sample performance. This seemingly counterintuitive phenomenon is sometimes called “benign overfit” [Bartlett et al., 2020, Tsigler and Bartlett, 2020].
24
+ In the so-called lazy training regime Chizat et al. [2019], wide neural networks (many
25
+ nodes in each layer) are effectively kernel regressions, and “early stopping” commonly used
26
+ in neural network training is closely related to ridge regularization [Ali et al., 2019]. See,
27
+ Jacot et al. [2018], Hastie et al. [2019], Du et al. [2018, 2019a], Allen-Zhu et al. [2019]. Recent
28
+ research also emphasizes the “double descent,” in which expected forecast error drops in the
29
+ high-complexity regime. See, for example, Zhang et al. [2016], Belkin et al. [2019a,b], Spigler
30
+ et al. [2019], Belkin et al. [2020].
31
+ These discoveries made many researchers argue that we need to gain a deeper understanding of kernel methods (and, hence, random feature regressions) and their link to deep
33
+ learning. See, e.g., Belkin et al. [2018]. Several recent papers have developed numerical
34
+ algorithms for scaling kernel-type methods to large datasets and large numbers of random
35
+ features. See, e.g., Zandieh et al. [2021], Ma and Belkin [2017], Arora et al. [2019a], Shankar
36
+ et al. [2020]. In particular, Arora et al. [2019b] show how NTK combined with the support
37
+ vector machines (SVM) (see also Fernández-Delgado et al. [2014]) perform well on small
38
+ data tasks relative to many competitors, including the highly over-parametrized ResNet-34.
39
+ Moreover, while modern deep neural networks do generalize on small datasets (see, e.g.,
40
+ Olson et al. [2018]), Arora et al. [2019b] show that kernel-based methods achieve superior
41
+ performance in such small data environments. Similarly, Du et al. [2019b] find that the graph
42
+ neural tangent kernel (GNTK) dominates graph neural networks on datasets with up to 5000
45
+ samples. Shankar et al. [2020] show that, while NTK is a powerful kernel, it is possible to
46
+ build other classes of kernels (they call Neural Kernels) that are even more powerful and are
47
+ often at par with extremely complex deep neural networks.
48
+ In this paper, we develop a novel form of kernel ridge regression that can be applied to
49
+ any kernel and any way of generating random features. We use a doubly stochastic method
50
+ similar to that in Dai et al. [2014], with an important caveat: We generate (potentially large,
51
+ defined by the RAM constraints) batches of random features and then use linear algebraic
52
+ properties of covariance matrices to recursively update the eigenvalue decomposition of the
53
+ feature covariance matrix, allowing us to perform the optimization in one shot across a large
54
+ grid of ridge parameters.
55
+ The paper is organized as follows. Section 2 discusses related work. In Section 3, we
56
+ provide a novel random feature regression mathematical formulation and algorithm. Then,
57
+ Section 4 and Section 5 present numerical results and conclusions, respectively.
58
+ 2 Related Work
60
+ Before the formal introduction of the NTK in Jacot et al. [2018], numerous papers discussed
61
+ the intriguing connections between infinitely wide neural networks and kernel methods. See,
62
+ e.g., Neal [1996]; Williams [1997]; Le Roux and Bengio [2007]; Hazan and Jaakkola [2015];
63
+ Lee et al. [2018]; Matthews et al. [2018]; Novak et al. [2018]; Garriga-Alonso et al. [2018];
64
+ Cho and Saul [2009]; Daniely et al. [2016]; Daniely [2017].
65
+ As in the standard random feature approximation of kernel ridge regression (see Rahimi and Recht [2007]), only the network’s last layer is trained. A surprising discovery
68
+ of Jacot et al. [2018] is that (infinitely) wide neural networks in the lazy training regime
69
+ converge to a kernel even though all network layers are trained. The corresponding kernel,
70
+ the NTK, has a complex structure dependent on the neural network’s architecture. See also
71
+ Lee et al. [2019], Arora et al. [2019a] for more results about the link between NTK and the
74
+ underlying neural network, and Novak et al. [2019] for an efficient algorithm for implementing
75
+ the NTK. In a recent paper, Shankar et al. [2020] introduce a new class of kernels and show
76
+ that they perform remarkably well on even very large datasets, achieving a 90% accuracy on
77
+ the CIFAR-10 dataset. While this performance is striking, it comes at a huge computational
78
+ cost. Shankar et al. [2020] write:
79
+ “CIFAR-10/CIFAR-100 consist of 60,000 32 × 32 × 3 images and MNIST consists of
+ 70,000 28 × 28 images. Even with this constraint, the largest compositional kernel matrices
81
+ we study took approximately 1000 GPU hours to compute. Thus, we believe an imperative
82
+ direction of future work is reducing the complexity of each kernel evaluation. Random feature
83
+ methods or other compression schemes could play a significant role here.”
84
+ In this paper, we offer one such highly scalable scheme based on random features. However, computing the random features underlying the Neural Kernels of Shankar et al. [2020]
86
+ would require developing non-trivial numerical algorithms based on the recursive iteration
87
+ of non-linear functions. We leave this as an important direction for future research.
88
+ As in standard kernel ridge regressions, we train our random feature regression on the
89
+ full sample. This is a key computational limitation for large datasets. After all, one of
90
+ the reasons for the success of modern deep learning is the possibility of training them using stochastic gradient descent on mini-batches of data. Ma and Belkin [2017] show how mini-batch training can be applied to kernel ridge regression.
93
+ A key technical difficulty
94
+ arises because kernel matrices (equivalently, covariance matrices of random features) have
95
+ eigenvalues that decay very quickly. Yet, these low eigenvalues contain essential information and cannot be neglected. Our regression method can be easily modified to allow for
97
+ mini-batches. Furthermore, it is known that mini-batch linear regression can even lead to
98
+ performance gains in the high-complexity regime. As LeJeune et al. [2020] show, one can run
99
+ regression on mini-batches and then treat the obtained predictions as an ensemble. LeJeune
100
+ et al. [2020] prove that, under technical conditions, the average of these predictions attains
103
+ a lower generalization error than the full-train-sample-based regression. We test this mini-batch ensemble approach using our method and show that, indeed, with moderately-sized
105
+ mini-batches, the method’s performance matches that of the full sample regression.
106
+ Moreover, there is an intriguing connection between mini-batch regressions and spectral
107
+ dimensionality reduction. By construction, the feature covariance matrix with a mini-batch
108
+ of size B has at most B non-zero eigenvalues.
109
+ Thus, a mini-batch effectively performs
110
+ a dimensionality reduction on the covariance matrix. Intuitively, we expect that the two
111
+ methods (using a mini-batch of size B or using the full sample but only keeping B largest
112
+ eigenvalues) should achieve comparable performance. We show that this is indeed the case
113
+ for small sample sizes. However, the spectral method for larger-sized samples (N ≥ 10000)
114
+ is superior to the mini-batch method unless we use very large mini-batches. For example,
115
+ on the full CIFAR-10 dataset, the spectral method outperforms the mini-batch approach by
116
+ 3% (see Section 4 for details).
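The mini-batch ensemble idea of LeJeune et al. [2020] referenced above can be sketched as follows; this is a toy linear illustration with made-up sizes and a hypothetical setup, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, B, z = 600, 20, 200, 1e-2  # illustrative sample size, dimension, batch size, ridge

# Synthetic linear data with a little noise.
X = rng.standard_normal((N, d))
w = rng.standard_normal(d)
y = X @ w + 0.1 * rng.standard_normal(N)
X_test = rng.standard_normal((100, d))

# Fit an independent ridge regression on each mini-batch, then average predictions.
preds = []
for start in range(0, N, B):
    Xb, yb = X[start:start + B], y[start:start + B]
    beta_b = np.linalg.solve(Xb.T @ Xb / B + z * np.eye(d), Xb.T @ yb / B)
    preds.append(X_test @ beta_b)
ensemble_pred = np.mean(preds, axis=0)
```

Each batch produces a cheap, noisy estimator; the ensemble average is the prediction that, under the conditions in LeJeune et al. [2020], can match or beat the full-sample fit.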
117
+ 3 Random Features Ridge Regression and Classification
120
+ Suppose that we have a train sample (X, y) = (xi, yi)_{i=1}^{N}, xi ∈ Rd, yi ∈ R, so that
124
+ X ∈ RN×d, y ∈ RN×1. Following Rahimi and Recht [2007] we construct a large number of
125
+ random features f(x; θp), p = 1, . . . , P, where f is a non-linear function and θp are sampled
126
+ from some distribution, and P is a large number.
127
+ We denote S = f(X; θ) ∈ RN×P as
128
+ the train sample realizations of random features. Following Rahimi and Recht [2007], we
129
+ consider the random features ridge regression,
130
+ β(z) = (S⊤S/N + zI)−1S⊤y/N ,   (1)
134
+ as an approximation for kernel ridge regression when P → ∞. For classification problems,
135
+ it is common to use categorical cross-entropy as the objective. However, as Belkin [2021]
136
+ explains, minimizing the mean-squared error with one-hot encoding often achieves superior
137
+ generalization performance. Here, we follow this approach. Given the K labels, k = 1, . . . , K,
138
+ we build the one-hot encoding matrix Q = (qi,k) where qi,k = 1yi=k. Then, we get
139
+ β(z) = (S⊤S/N + zI)−1S⊤Q/N ∈ RP×K .   (2)
141
+ Then, for each test feature vector s = f(x; θ) ∈ RP, we get a vector β(z)⊤s ∈ RK. Next,
142
+ define the actual classifier as
143
+ k(x; z) = arg max_k {(β(z)⊤s)_k} ∈ {1, · · · , K} .   (3)
145
+ 3.1 Dealing with High-Dimensional Features
147
+ A key computational (hardware) limitation of kernel methods comes from the fact that,
148
+ when P is large, computing the matrix S⊤S ∈ RP×P becomes prohibitively expensive, in
149
+ particular, because S cannot even be stored in RAM. We start with a simple observation
150
+ that the following identity implies that storing all these features is not necessary:1
151
+ (S⊤S/N + zI)−1S⊤ = S⊤(SS⊤/N + zI)−1 ,   (4)
153
+ and therefore we can compute β(z) as
154
+ β(z) = S⊤(SS⊤/N + zI)−1y/N .   (5)
156
+ Suppose now we split S into multiple blocks, S1, . . . , SK, where Sk ∈ RN×P1 for all
157
+ 1This identity follows directly from (S⊤S/N + zI)S⊤ = S⊤(SS⊤/N + zI).
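The push-through identity (4) is easy to verify numerically; a quick NumPy check (all dimensions and the ridge level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, z = 50, 80, 0.1  # illustrative sample size, feature count, ridge

S = rng.standard_normal((N, P))
I_P, I_N = np.eye(P), np.eye(N)

# Left side of (4): (S'S/N + zI_P)^{-1} S'  -- requires a P x P solve.
lhs = np.linalg.solve(S.T @ S / N + z * I_P, S.T)

# Right side of (4): S' (SS'/N + zI_N)^{-1} -- only an N x N solve.
rhs = S.T @ np.linalg.inv(S @ S.T / N + z * I_N)

max_diff = np.abs(lhs - rhs).max()
```

The right-hand side is what makes the method scalable: the cost of the inversion depends on N, not on the (potentially huge) number of random features P.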
160
+ k = 1, . . . , K, for some small P1, with KP1 = P. Then,
161
+ Ψ = SS⊤ = ∑_{k=1}^{K} SkSk⊤   (6)
168
+ can be computed by generating the blocks Sk, one at a time, and recursively adding SkSk⊤ up.
170
+ Once Ψ has been computed, one can calculate its eigenvalue decomposition, Ψ = V DV ⊤,
171
+ and then evaluate Q(z) = (Ψ/N + zI)−1y/N = V (D + zI)−1V ⊤y/N ∈ RN in one go for
173
+ a grid of z. Then, using the same seeds, we can again generate the random features Sk and
174
+ compute βk(z) = S⊤
175
+ k Q(z) ∈ RP1. Then, β(z) = (βk(z))K
176
+ k=1 ∈ RP . The logic described above
177
+ is formalized in Algorithm 1.
Algorithm 1 FABR
Require: P1, P, X ∈ R^{N×d}, y ∈ R^N, z, voc_curve
  blocks ← P//P1
  k ← 0
  Ψ ← 0_{N×N}
  while k < blocks do
      Generate Sk ∈ R^{N×P1}    ▷ Use k as seed
      Ψ ← Ψ + Sk Sk⊤
      if k in voc_curve then
          D, V ← eigen(Ψ/N)
          Qk(z) ← V (D + zI)^{−1} V⊤ y/N    ▷ Store Qk(z)
      end if
      k ← k + 1
  end while
  D, V ← eigen(Ψ/N)
  Q(z) ← V (D + zI)^{−1} V⊤ y/N
  k ← 0
  while k < blocks do
      (re-)Generate Sk ∈ R^{N×P1}    ▷ Use k as seed
      βk(z) ← Sk⊤ Q(z)
      ŷ += Sk βk(z)
  end while
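The two-pass logic of Algorithm 1 can be sketched compactly in NumPy, assuming a hypothetical generator `gen_block(k)` that deterministically regenerates the k-th feature block from its seed (the VoC-curve bookkeeping is omitted):

```python
import numpy as np

def fabr(gen_block, blocks, y, z_grid):
    """Sketch of Algorithm 1: two passes over seed-regenerated feature blocks."""
    N = y.shape[0]
    # Pass 1: accumulate the N x N matrix Psi = S S^T block by block.
    Psi = np.zeros((N, N))
    for k in range(blocks):
        S_k = gen_block(k)                    # N x P1 block, regenerable from seed k
        Psi += S_k @ S_k.T
    d, V = np.linalg.eigh(Psi / N)
    # One eigendecomposition serves the whole grid of shrinkages z.
    Q = {z: V @ ((V.T @ y) / (d + z)) / N for z in z_grid}
    # Pass 2: regenerate each block and stack beta_k(z) = S_k^T Q(z).
    return {z: np.concatenate([gen_block(k).T @ Q[z] for k in range(blocks)])
            for z in z_grid}
```

At no point does the full N × P feature matrix exist in memory; only one N × P1 block and the N × N matrix Ψ are held at a time.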
3.2 Dealing with Massive Datasets

The above algorithm relies crucially on the assumption that N is small. Suppose now that the sample size N is so large that storing and eigen-decomposing the matrix SS⊤ ∈ R^{N×N} becomes prohibitively expensive. In this case, we proceed as follows. Define, for all k = 1, . . . , K,

    Ψk = Σ_{κ=1}^{k} Sκ Sκ⊤ ∈ R^{N×N},    Ψ0 = 0_{N×N} ,    (7)

and let λ1(A) ≥ · · · ≥ λN(A) be the eigenvalues of a symmetric matrix A ∈ R^{N×N}. Our goal is to design an approximation to (ΨK + zI)^{−1}, based on the simple observation that the eigenvalues of empirically observed Ψk matrices tend to decay very quickly, with only a few hundred of the largest eigenvalues being significantly different from zero. In this case, we can fix a ν ∈ N and design a simple rank-ν approximation to ΨK by annihilating all eigenvalues below λν(ΨK). As we now show, it is possible to design a recursive algorithm for constructing such an approximation to ΨK, dealing with only a small subset of random features at a time. To this end, we proceed as follows.

Suppose we have constructed a rank-ν approximation ˆΨk ∈ R^{N×N} to Ψk, and let Vk ∈ R^{N×ν} be the corresponding matrix of orthogonal eigenvectors for the non-zero eigenvalues, and Dk ∈ R^{ν×ν} the diagonal matrix of eigenvalues, so that ˆΨk = Vk Dk Vk⊤ and Vk⊤ Vk = I_{ν×ν}. Instead of storing the full ˆΨk matrix, we only need to store the pair (Vk, Dk). For all k = 1, . . . , K, we now define

    ˜Ψk+1 = ˆΨk + Sk+1 Sk+1⊤ .    (8)
This N × N matrix is a theoretical construct; we never actually compute it (see Algorithm 2). Let Θk = I − Vk Vk⊤ be the orthogonal projection onto the kernel of ˆΨk, and let

    ˜Sk+1 = Θk Sk+1 = Sk+1 − Vk (Vk⊤ Sk+1)    (9)

be Sk+1 orthogonalized with respect to the columns of Vk; here Vk is N × ν and Vk⊤ Sk+1 is ν × P1, so the correction term never requires forming an N × N matrix. Then, we define ˜Wk+1 = ˜Sk+1 ( ˜Sk+1⊤ ˜Sk+1 )^{−1/2} to be the matrix of orthonormalized columns of ˜Sk+1, and ˆVk+1 = [Vk, ˜Wk+1]. To compute ˜Sk+1 ( ˜Sk+1⊤ ˜Sk+1 )^{−1/2}, we use the following lemma, which, once again, uses smart eigenvalue decomposition techniques to avoid dealing with the N × N matrix ˜Sk+1 ˜Sk+1⊤.

Lemma 1. Let ˜Sk+1⊤ ˜Sk+1 = W δ W⊤ ∈ R^{P1×P1} be the eigenvalue decomposition of ˜Sk+1⊤ ˜Sk+1. Then, ˜W = ˜Sk+1 W δ^{−1/2} is the matrix of eigenvectors of ˜Sk+1 ˜Sk+1⊤ for the non-zero eigenvalues. Thus,

    ˜Sk+1 ( ˜Sk+1⊤ ˜Sk+1 )^{−1/2} = ˜Wk+1 .    (10)
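Lemma 1 is the usual small-Gram-matrix trick; a quick numerical check (assuming NumPy) that the columns produced this way are orthonormal eigenvectors of the large N × N Gram matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P1 = 100, 8
S_t = rng.standard_normal((N, P1))            # stands in for S-tilde_{k+1}

# Eigen-decompose the small P1 x P1 Gram matrix instead of the N x N one.
delta, W = np.linalg.eigh(S_t.T @ S_t)
W_t = S_t @ W / np.sqrt(delta)                # W-tilde = S-tilde W delta^{-1/2}

# Columns are orthonormal ...
assert np.allclose(W_t.T @ W_t, np.eye(P1))
# ... and are eigenvectors of S_t S_t^T with the same non-zero eigenvalues.
assert np.allclose((S_t @ S_t.T) @ W_t, W_t * delta)
```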
By construction, the columns of ˆVk+1 form an orthogonal basis of the span of the columns of Vk and Sk+1, and hence

    Ψk+1,∗ = ˆVk+1⊤ ˜Ψk+1 ˆVk+1 ∈ R^{(P1+ν)×(P1+ν)}    (11)

has the same non-zero eigenvalues as ˜Ψk+1. We then define ˜Vk+1 ∈ R^{(P1+ν)×ν} to be the matrix of eigenvectors of Ψk+1,∗ for the largest ν eigenvalues, denote the diagonal matrix of these eigenvalues by Dk+1 ∈ R^{ν×ν}, and define Vk+1 = ˆVk+1 ˜Vk+1. Then,

    ˆΨk+1 = Vk+1 Dk+1 Vk+1⊤ = Πk+1 ˜Ψk+1 Πk+1 ,

where Πk+1 = ˆVk+1 ˜Vk+1 ˜Vk+1⊤ ˆVk+1⊤ is the orthogonal projection onto the eigen-subspace of ˜Ψk+1 for the largest ν eigenvalues.

Lemma 2. We have ˆΨk ≤ ˜Ψk ≤ ΨK and

    ∥Ψk − ˆΨk∥ ≤ Σ_{i=1}^{k} λν+1(Ψi) ≤ k λν+1(ΨK) ,    (12)

and

    ∥(Ψk + zI)^{−1} − (ˆΨk + zI)^{−1}∥ ≤ z^{−2} Σ_{i=1}^{k} λν+1(Ψi) .    (13)
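The bound (13) follows from the resolvent identity A^{−1} − B^{−1} = A^{−1}(B − A)B^{−1} together with ∥(Ψ + zI)^{−1}∥ ≤ z^{−1} for positive semi-definite Ψ. A quick numerical sanity check on a rank-ν truncation (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
N, nu, z = 40, 10, 0.5
S = rng.standard_normal((N, 60))
Psi = S @ S.T                                 # PSD Gram matrix

# Rank-nu approximation: annihilate all but the nu largest eigenvalues.
d, V = np.linalg.eigh(Psi)
d_trunc = np.where(d >= np.sort(d)[-nu], d, 0.0)
Psi_hat = V @ np.diag(d_trunc) @ V.T

lhs = np.linalg.norm(
    np.linalg.inv(Psi + z * np.eye(N)) - np.linalg.inv(Psi_hat + z * np.eye(N)), 2)
rhs = z ** -2 * np.linalg.norm(Psi - Psi_hat, 2)
assert lhs <= rhs + 1e-12
```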
There is another important aspect of our algorithm: it allows us to directly compute the performance of models with an expanding level of complexity. Indeed, since we load random features in batches of size P1, we generate predictions for P ∈ {P1, 2P1, · · · , KP1}. This is useful both because we might use it to calibrate the optimal degree of complexity and because we can directly study double descent-like phenomena, that is, the effect of complexity on the generalization error; see, e.g., Belkin et al. [2019a] and Nakkiran et al. [2021]. We do this in the next section. As we show, consistent with the recent theoretical results of Kelly et al. [2022], with sufficient shrinkage the double descent curve disappears, and the performance becomes almost monotonic in complexity. Following Kelly et al. [2022], we name this phenomenon the virtue of complexity (VoC) and the corresponding performance plots the VoC curves; see Figure 6 below.

We call this algorithm Fast Annihilating Batch Regression (FABR), as it annihilates all eigenvalues below λν(ΨK) and allows us to solve the random features ridge regression in one go for a grid of z. Algorithm 2 formalizes the logic described above.
4 Numerical Results

This section presents several experimental results on different datasets to evaluate FABR's performance and applications. In contrast to the computational power demanded by the most recent kernel methods, e.g., Shankar et al. [2020], we ran all experiments on a laptop: a MacBook Pro model A2485, equipped with an M1 Max 10-core CPU and 32 GB of RAM.
Algorithm 2 FABR-ν
Require: ν, P1, P, X ∈ R^{N×d}, y ∈ R^N, z, voc_curve
  blocks ← P//P1
  k ← 0
  while k < blocks do
      Generate Sk ∈ R^{N×P1}    ▷ Use k as seed to generate the random features
      if k = 0 then
          ˜d, ˜V ← eigen(Sk⊤ Sk)
          V ← Sk ˜V diag(˜d)^{−1/2}
          V0 ← V_{:, :min(ν,P1)}    ▷ Save V0
          d0 ← ˜d_{:min(ν,P1)}    ▷ Save d0
          if k in voc_curve then
              Q0(z) ← V0 (diag(d0) + zI)^{−1} V0⊤ y    ▷ Save Q0(z)
          end if
      else if k > 0 then
          ˜Sk ← (I − Vk−1 Vk−1⊤) Sk
          Γk ← ˜Sk⊤ ˜Sk
          δk, Wk ← eigen(Γk)
          Keep the top min(ν, P1) eigenvalues and eigenvectors from δk, Wk
          ˜Wk ← ˜Sk Wk diag(δk)^{−1/2}
          ˆVk ← [Vk−1, ˜Wk]
          ¯Vk ← ˆVk⊤ Vk−1
          ¯Wk ← ¯Vk diag(dk−1) ¯Vk⊤
          ¯Sk ← ˆVk⊤ Sk
          ¯Zk ← ¯Sk ¯Sk⊤
          Ψ∗ ← ¯Wk + ¯Zk
          dk, Vk ← eigen(Ψ∗)
          Keep the top min(ν, P1) eigenvalues and eigenvectors from dk, Vk
          Vk ← ˆVk Vk    ▷ Save dk, Vk
          if k in voc_curve then
              Qk(z) ← Vk (diag(dk) + zI)^{−1} Vk⊤ y    ▷ Save Qk(z)
          end if
      end if
      k ← k + 1
  end while
  k ← 0
  while k < blocks do
      (re-)Generate Sk ∈ R^{N×P1}    ▷ Use k as seed to generate the random features
      βk(z) ← Sk⊤ Qk(z)
      ŷ += Sk βk(z)
  end while
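The recursive step at the heart of Algorithm 2 can be sketched as a single update function. This is our own illustrative condensation (it ignores the seed-based regeneration and the voc_curve bookkeeping, and `rank_nu_update` is a hypothetical name); when ν is at least the total rank of the accumulated Gram matrix, the approximation is exact:

```python
import numpy as np

def rank_nu_update(V, d, S_new, nu):
    """One step of the recursive rank-nu approximation of Psi_k = sum_j S_j S_j^T.

    V: N x r current eigenvectors, d: length-r eigenvalues, S_new: N x P1 new block.
    """
    # Orthogonalize the new block against the current eigenbasis (eq. (9)).
    S_t = S_new - V @ (V.T @ S_new)
    # Eigen-decompose the small P1 x P1 Gram matrix (Lemma 1).
    delta, W = np.linalg.eigh(S_t.T @ S_t)
    keep = delta > 1e-10                      # drop numerically-zero directions
    W_t = S_t @ (W[:, keep] / np.sqrt(delta[keep]))
    V_hat = np.hstack([V, W_t])               # orthonormal basis of span(V, S_new)
    # Represent Psi-tilde = V diag(d) V^T + S_new S_new^T in this basis (eq. (11)).
    A = V_hat.T @ V
    B = V_hat.T @ S_new
    Psi_star = A @ np.diag(d) @ A.T + B @ B.T
    d_new, V_small = np.linalg.eigh(Psi_star)
    top = np.argsort(d_new)[::-1][:nu]        # keep the nu largest eigenvalues
    return V_hat @ V_small[:, top], d_new[top]
```

Starting from an empty basis (V of shape N × 0) and feeding the blocks through this update reproduces the pair (Vk, Dk) that Algorithm 2 stores in place of the full N × N matrix.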
4.1 A comparison with sklearn

We now aim to show FABR's training and prediction time with respect to the number of features d. To this end, we do not use any random feature projection or the rank-ν matrix approximation described in Section 3.2. We draw N = 5000 i.i.d. samples from ⊗_{j=1}^{d} N(0, 1) and let

    ỹi = xi β + ϵi,    ∀i = 1, . . . , N,

where β ∼ ⊗_{j=1}^{d} N(0, 1) and ϵi ∼ N(0, 1) for all i = 1, . . . , N. Then, we define

    yi = 1 if ỹi > median(ỹ), and yi = 0 otherwise,    ∀i = 1, . . . , N.

Next, we create a set of classification datasets with varying dimension d and keep the first 4000 samples as the training set and the remaining 1000 as the test set. We show in Figure 1 the average training and prediction time (in seconds) of FABR with different numbers of regularizers (we denote this number by |z|) and of sklearn's RidgeClassifier, with an increasing number of features d. The training and prediction time is averaged over five independent runs. As one can see, our method is drastically faster when d > 10000; e.g., for d = 100000, we outperform sklearn by approximately 5 and 25 times for |z| = 5 and |z| = 50, respectively. Moreover, one can notice that the number of different shrinkages |z| barely affects FABR's running time. We report a more detailed table with average training and prediction times and standard deviations in Appendix B.
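The synthetic classification data of this experiment can be generated as follows (a sketch; `make_dataset` is our own name):

```python
import numpy as np

def make_dataset(N=5000, d=1000, seed=0):
    """Synthetic data from Section 4.1: a Gaussian linear model,
    binarized at the median of the noisy responses."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, d))
    beta = rng.standard_normal(d)
    y_raw = X @ beta + rng.standard_normal(N)
    y = (y_raw > np.median(y_raw)).astype(int)
    # First 4000 samples train, remaining 1000 test.
    return (X[:4000], y[:4000]), (X[4000:], y[4000:])
```

Splitting at the median makes the two classes exactly balanced by construction.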
Figure 1: The figure compares FABR's training and prediction time (y-axis, in black) against sklearn's RidgeClassifier (in red) for an increasing number of features (x-axis) and different numbers of shrinkages. Here, |z| denotes the number of different values of z for which we perform the training.

4.2 Experiments on Real Datasets

For further evaluation, we assess FABR's performance in both the small- and big-dataset regimes. For all experiments, we perform a random features kernel ridge regression on demeaned one-hot labels and solve the optimization problem using FABR as described in Section 3.
4.2.1 Data Representation

Table 1: The table below shows the average test accuracy and standard deviation of ResNet-34, CNTK, and FABR on the subsampled CIFAR-10 datasets. The test accuracy is averaged over twenty independent runs.

n    | ResNet-34      | 14-layer CNTK  | FABR, z=1      | FABR, z=100    | FABR, z=10000  | FABR, z=100000
10   | 14.59% ± 1.99% | 15.33% ± 2.43% | 18.50% ± 2.18% | 18.50% ± 2.18% | 18.42% ± 2.13% | 18.13% ± 2.01%
20   | 17.50% ± 2.47% | 18.79% ± 2.13% | 20.84% ± 2.38% | 20.85% ± 2.38% | 20.78% ± 2.35% | 20.13% ± 2.34%
40   | 19.52% ± 1.39% | 21.34% ± 1.91% | 25.09% ± 1.76% | 25.10% ± 1.76% | 25.14% ± 1.75% | 24.41% ± 1.88%
80   | 23.32% ± 1.61% | 25.48% ± 1.91% | 29.61% ± 1.35% | 29.60% ± 1.35% | 29.62% ± 1.39% | 28.63% ± 1.66%
160  | 28.30% ± 1.38% | 30.48% ± 1.17% | 34.86% ± 1.12% | 34.87% ± 1.12% | 35.02% ± 1.11% | 33.54% ± 1.24%
320  | 33.15% ± 1.20% | 36.57% ± 0.88% | 40.46% ± 0.73% | 40.47% ± 0.73% | 40.66% ± 0.72% | 39.34% ± 0.72%
640  | 41.66% ± 1.09% | 42.63% ± 0.68% | 45.68% ± 0.71% | 45.68% ± 0.72% | 46.17% ± 0.68% | 44.91% ± 0.72%
1280 | 49.14% ± 1.31% | 48.86% ± 0.68% | 50.30% ± 0.57% | 50.32% ± 0.56% | 51.05% ± 0.54% | 49.74% ± 0.42%
Like any standard kernel method or randomized-feature technique, FABR requires a good data representation. Usually, we do not know such a representation a priori, and learning a good kernel is outside the scope of this paper. Therefore, we build a simple Convolutional Neural Network (CNN) mapping h : R^d → R^D that extracts image features ˜x ∈ R^D for each sample x ∈ R^d. The CNN is not optimized; we use it as a simple random feature mapping. The CNN architecture, shown in Figure 2, alternates a 3 × 3 convolution layer with a ReLU activation function, a 2 × 2 average pool, and a BatchNormalization layer (Ioffe and Szegedy [2015]). Convolutional layer weights are initialized using the He uniform scheme (He et al. [2015]). To vectorize images, we use a global average pooling layer, which has been shown to enforce correspondences between feature maps and to be more robust to spatial translations of the input (Lin et al. [2013]).

Figure 2: CNN architecture used to extract image features (four blocks of 3 × 3 convolution, ReLU, 2 × 2 average pool, and batch normalization, followed by a global average pool).

We finally obtain the train and test random feature realizations s = f(˜x; θ). Specifically, we use the following random features mapping

    si = σ(W ˜x) ,    (14)

where W ∈ R^{P×D} with w_{i,j} ∼ N(0, 1), and σ is some elementwise activation function. This can be described as a one-layer neural network with random weights W. To show the importance of over-parametrized models, throughout the results we report the model's complexity c = P/N, that is, the ratio between the number of parameters (dimensions) and the number of observations; see Belkin et al. [2019a], Hastie et al. [2019], Kelly et al. [2022].
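The mapping (14) is a one-line operation; a sketch in NumPy. The paper leaves the elementwise activation σ open, so tanh below is purely an illustrative default:

```python
import numpy as np

def random_features(x_tilde, P, seed=0, sigma=np.tanh):
    """Random features mapping (14): s = sigma(W x_tilde), with W_ij ~ N(0, 1).

    x_tilde: N x D matrix of CNN-extracted features (rows are samples).
    """
    D = x_tilde.shape[1]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((P, D))
    return sigma(x_tilde @ W.T)               # N x P feature matrix
```

Because W is fully determined by the seed, the block can be discarded and regenerated on demand, which is exactly what Algorithms 1 and 2 exploit.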
4.2.2 Small Datasets

We now study the performance of FABR on the subsampled CIFAR-10 dataset Krizhevsky et al. [2009]. To this end, we reproduce the experiment described in Arora et al. [2019b]. In particular, we obtain a random subsampled training set (y; X) = (yi; xi)_{i=1}^{n}, where n ∈ {10, 20, 40, 80, 160, 320, 640, 1280}, and test on the whole test set of size 10000. We make sure that exactly n/10 samples from each image class are in the training sample. We train FABR using a random features projection of the subsampled training set,

    S = σ(W g(X)) ∈ R^{n×P},

where g is the untrained CNN from Figure 2, randomly initialized using the He uniform distribution. In this experiment, we push the model complexity c to 100; in other words, FABR's number of parameters equals a hundred times the number of observations in the subsample. As n is small, we deliberately do not perform any low-rank covariance matrix approximation. Finally, we run our model twenty times and report the mean out-of-sample performance and its standard deviation. Table 1 reports FABR's performance for different shrinkages z together with ResNet-34 and the 14-layer CNTK. Without any complicated random feature projection, FABR can outperform both ResNet-34 and the CNTK. FABR's test accuracy increases with the model's complexity c on the different (n) subsampled CIFAR-10 datasets; Figure 3 shows an example for n = 10. Additionally, to better observe the double descent phenomena, Figure 4 shows the curves truncated at c = 25 for all CIFAR-10 subsamples. The full curves are shown in Appendix B.

Figure 3: FABR's test accuracy increases with the model's complexity c on the subsampled CIFAR-10 dataset for n = 10. The test accuracy is averaged over five independent runs.

To sum up this section's findings:

• FABR, with enough complexity and a simple random feature projection, is able to outperform deep neural networks (ResNet-34) and CNTKs.

• FABR always reaches its maximum accuracy beyond the interpolation threshold.

• Moreover, if the random feature ridge regression shrinkage z is sufficiently high, the double descent phenomenon disappears, and the accuracy does not drop at the interpolation threshold, i.e., when c = 1 (n = P). Following Kelly et al. [2022], we call this phenomenon the virtue of complexity (VoC).
618
+ the training set size n up to the full CIFAR-10 dataset. For each n, we train FABR, FABR-ν
619
+ with a rank-ν approximation as described in Algorithm 2, and the min-batch-FABR. We
620
+ use ν = 2000 and batch size = 2000 in the last two algorithms. Following Arora et al.
621
+ [2019b], we train ResNet-34 as the benchmark for 160 epochs, with an initial learning rate
622
+ of 0.001 and a batch size of 32. We decrease the learning rate by ten at epochs 80 and 120.
623
+ ResNet-34 always reaches close to perfect accuracy on the training set, i.e., above 99%. We
624
+ 16
625
+
626
+ 0.18
627
+ 0.16
628
+ (%)
629
+ Z = 10-5
630
+ Accuracy
631
+ z= 10-1
632
+ 0.14
633
+ Z= 100
634
+ z= 101
635
+ 0.12
636
+ z= 102
637
+ z= 103
638
+ z= 104
639
+ 0.10
640
+ Z= 105
641
+ 0
642
+ 20
643
+ 40
644
+ 60
645
+ 80
646
+ 100
647
+ c(a) n = 10
648
+ (b) n = 20
649
+ (c) n = 40
650
+ (d) n = 80
651
+ (e) n = 160
652
+ (f) n = 320
653
+ (g) n = 640
654
+ (h) n = 1280
655
+ Figure 4: The figures above show FABR’s test accuracy increases with the model’s complexity
656
+ c on different (n) subsampled CIFAR-10 datasets. The expanded dataset follows similar
657
+ patterns. We truncate the curve for c > 25 to better show the double descent phenomena.
658
+ The full curves are shown in Appendix B. Notice that when the shrinkage is sufficiently
659
+ high, the double descent disappears, and the accuracy monotonically increases in complexity.
660
+ Following Kelly et al. [2022], we name this phenomenon the virtue of complexity (VoC). The
661
+ test accuracy is averaged over 20 independent runs.
662
+ run each training five times and report mean out-of-sample performance and its standard
663
+ deviation. As the training sample is sufficiently large already, we set the model complexity
664
+ to only c = 15, meaning that for the full sample, FABR performs a random feature ridge
665
+ regression with P = 7.5 × 105. We report the results in Tables 4.2.3 and 3.
666
+ Table 2: The table below shows the average test accuracy and standard deviation of ResNet-
667
+ 34 and FABR on the subsampled and full CIFAR-10 dataset. The test accuracy is average
668
+ over five independent runs.
669
+ n
670
+ ResNet-34
671
+ z=1
672
+ z=100
673
+ z=10000
674
+ z=100000
675
+ 2560
676
+ 48.12% ± 0.69%
677
+ 52.24% ± 0.29%
678
+ 52.45% ± 0.21%
679
+ 54.29% ± 0.44%
680
+ 48.28% ± 0.37%
681
+ 5120
682
+ 56.03% ± 0.82%
683
+ 55.34% ± 0.32%
684
+ 55.74% ± 0.34%
685
+ 58.29% ± 0.20%
686
+ 52.06% ± 0.08%
687
+ 10240
688
+ 63.21% ± 0.26%
689
+ 58.36% ± 0.45%
690
+ 58.86% ± 0.54%
691
+ 62.17% ± 0.35%
692
+ 55.75% ± 0.18%
693
+ 20480
694
+ 69.24% ± 0.47%
695
+ 61.08% ± 0.17%
696
+ 61.65% ± 0.27%
697
+ 65.12% ± 0.19%
698
+ 59.34% ± 0.14%
699
+ 50000
700
+ 75.34% ± 0.21%
701
+ 66.38% ± 0.00%
702
+ 66.98% ± 0.00%
703
+ 68.62% ± 0.00%
704
+ 63.25% ± 0.00%
705
+ The experiment delivers a number of additional conclusions:
• First, we observe that, while for small training sample sizes of n ≤ 10000 simple kernel methods achieve performance comparable with that of DNNs, this is not the case for n > 20000. Beating DNNs on big datasets with shallow methods requires more complex kernels, such as those in Shankar et al. [2020], Li et al. [2019].

Figure 5: Panels (a) n = 2560 and (b) n = 50000 show that FABR's test accuracy increases with the model's complexity c on the subsampled CIFAR-10 dataset (5a) and the full CIFAR-10 dataset (5b). FABR is trained using a ν = 2000 low-rank covariance matrix approximation. Notice that we still observe a (shifted) double descent when ν ≈ n. The same phenomenon disappears when ν ≪ n. The test accuracy is averaged over five independent runs.

Table 3: The table below shows the average test accuracy and standard deviation of FABR-ν and mini-batch FABR on the subsampled and full CIFAR-10 dataset. The test accuracy is averaged over five independent runs.

      |              z = 1              |             z = 100             |            z = 10000            |            z = 100000
n     | batch = 2000   | ν = 2000       | batch = 2000   | ν = 2000       | batch = 2000   | ν = 2000       | batch = 2000   | ν = 2000
2560  | 53.13% ± 0.38% | 53.48% ± 0.22% | 53.15% ± 0.42% | 53.63% ± 0.24% | 52.01% ± 0.51% | 54.05% ± 0.44% | 46.78% ± 0.52% | 48.23% ± 0.34%
5120  | 57.68% ± 0.18% | 57.63% ± 0.19% | 57.70% ± 0.16% | 57.63% ± 0.18% | 56.83% ± 0.27% | 57.53% ± 0.11% | 51.42% ± 0.22% | 51.75% ± 0.14%
10240 | 59.79% ± 0.35% | 61.20% ± 0.39% | 59.79% ± 0.35% | 61.20% ± 0.38% | 58.63% ± 0.28% | 60.63% ± 0.21% | 53.73% ± 0.37% | 55.16% ± 0.34%
20480 | 61.56% ± 0.35% | 63.50% ± 0.12% | 61.55% ± 0.37% | 63.50% ± 0.13% | 60.90% ± 0.20% | 62.92% ± 0.12% | 57.10% ± 0.19% | 58.40% ± 0.21%
50000 | 62.74% ± 0.10% | 65.45% ± 0.18% | 62.74% ± 0.10% | 65.44% ± 0.18% | 62.35% ± 0.05% | 65.04% ± 0.19% | 59.99% ± 0.02% | 61.71% ± 0.09%
• Second, we confirm the findings of Ma and Belkin [2017], Lee et al. [2020] suggesting that the role of small eigenvalues is important. For example, FABR-ν with ν = 2000 loses several percent of accuracy on larger datasets.

• Third, surprisingly, both the mini-batch FABR and FABR-ν sometimes achieve higher accuracy than the full-sample regression on moderately sized datasets; see Tables 2 and 3. Understanding these phenomena is an interesting direction for future research.

• Fourth, the double descent phenomenon naturally appears for both FABR-ν and the mini-batch FABR, but only when ν ≈ n or batch size ≈ n. The double descent phenomenon disappears when ν ≪ n. This intriguing finding is shown in Figure 5 for FABR-ν, and in Appendix B for the mini-batch FABR.

• Fifth, on average, FABR-ν outperforms the mini-batch FABR on larger datasets.
5 Conclusion and Discussion

The recent discovery of the equivalence between infinitely wide neural networks (NNs) in the lazy training regime and neural tangent kernels (NTKs) Jacot et al. [2018] has revived interest in kernel methods. However, these kernels are extremely complex and, due to memory (RAM) requirements, usually require running on big and expensive computing clusters Avron et al. [2017], Shankar et al. [2020]. This paper proposes a highly scalable random features ridge regression that can run on a simple laptop. We name it Fast Annihilating Batch Regression (FABR). Thanks to the linear algebraic properties of covariance matrices, this tool can be applied to any kernel and any way of generating random features. Moreover, we provide several experimental results to assess its performance. We show how FABR can outperform (in training and prediction speed) the current state-of-the-art ridge classifier implementation. Then, we show how a simple data representation strategy combined with a random features ridge regression can outperform complicated kernels (CNTKs) and over-parametrized deep neural networks (ResNet-34) in the few-shot learning setting. The experiments section concludes by showing additional results on big datasets. In this paper, we focus on very simple classes of random features. Recent findings (see, e.g., Shankar et al. [2020]) suggest that highly complex kernel architectures are necessary to achieve competitive performance on large datasets. Since each kernel regression can be approximated with random features, our method is potentially applicable to these kernels as well. However, directly computing the random feature representation of such complex kernels is non-trivial, and we leave it for future research.
References

Alnur Ali, J Zico Kolter, and Ryan J Tibshirani. A continuous-time view of early stopping for least squares regression. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1370–1378. PMLR, 2019.

Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pages 242–252. PMLR, 2019.

Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Russ R Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. Advances in Neural Information Processing Systems, 32, 2019a.

Sanjeev Arora, Simon S Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, and Dingli Yu. Harnessing the power of infinitely wide deep nets on small-data tasks. arXiv preprint arXiv:1910.01663, 2019b.

Haim Avron, Kenneth L Clarkson, and David P Woodruff. Faster kernel ridge regression using sketching and preconditioning. SIAM Journal on Matrix Analysis and Applications, 38(4):1116–1138, 2017.

Peter L Bartlett, Philip M Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. Proceedings of the National Academy of Sciences, 117(48):30063–30070, 2020.

Mikhail Belkin. Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation. Acta Numerica, 30:203–248, 2021.

Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In International Conference on Machine Learning, pages 541–549. PMLR, 2018.

Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences, 116(32):15849–15854, 2019a.

Mikhail Belkin, Alexander Rakhlin, and Alexandre B Tsybakov. Does data interpolation contradict statistical optimality? In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1611–1619. PMLR, 2019b.

Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features. SIAM Journal on Mathematics of Data Science, 2(4):1167–1180, 2020.

Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. Advances in Neural Information Processing Systems, 32, 2019.

Youngmin Cho and Lawrence Saul. Kernel methods for deep learning. Advances in Neural Information Processing Systems, 22, 2009.

Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina F Balcan, and Le Song. Scalable kernel methods via doubly stochastic gradients. Advances in Neural Information Processing Systems, 27, 2014.

Amit Daniely. SGD learns the conjugate kernel class of the network. Advances in Neural Information Processing Systems, 30, 2017.

Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. Advances in Neural Information Processing Systems, 29, 2016.

Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, pages 1675–1685. PMLR, 2019a.

Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018.

Simon S Du, Kangcheng Hou, Russ R Salakhutdinov, Barnabas Poczos, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. Advances in Neural Information Processing Systems, 32, 2019b.

Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we need hundreds of classifiers to solve real world classification problems? The Journal of Machine Learning Research, 15(1):3133–3181, 2014.

Adrià Garriga-Alonso, Carl Edward Rasmussen, and Laurence Aitchison. Deep convolutional networks as shallow gaussian processes. In International Conference on Learning Representations, 2018.

Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J Tibshirani. Surprises in high-dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560, 2019.

Tamir Hazan and Tommi Jaakkola. Steps toward deep kernel methods from infinite neural networks. arXiv preprint arXiv:1508.05133, 2015.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1026–1034, 2015.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pages 448–456. PMLR, 2015.

Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.

Bryan T Kelly, Semyon Malamud, and Kangying Zhou. The virtue of complexity in return prediction. 2022.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Nicolas Le Roux and Yoshua Bengio. Continuous neural networks. In Artificial Intelligence and Statistics, pages 404–411. PMLR, 2007.

Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S Schoenholz, Jeffrey Pennington, and Jascha Sohl-Dickstein. Deep neural networks as gaussian processes. In International Conference on Learning Representations, 2018.

Jaehoon Lee, Lechao Xiao, Samuel Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. Advances in Neural Information Processing Systems, 32, 2019.

Jaehoon Lee, Samuel Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak, and Jascha Sohl-Dickstein. Finite versus infinite neural networks: an empirical study. Advances in Neural Information Processing Systems, 33:15156–15172, 2020.

Daniel LeJeune, Hamid Javadi, and Richard Baraniuk. The implicit regularization of ordinary least squares ensembles. In International Conference on Artificial Intelligence and Statistics, pages 3525–3535. PMLR, 2020.

Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S Du, Wei Hu, Ruslan Salakhutdinov,
1156
+ and Sanjeev Arora.
1157
+ Enhanced convolutional neural tangent kernels.
1158
+ arXiv preprint
1159
+ arXiv:1911.00809, 2019.
1160
+ Min Lin, Qiang Chen, and Shuicheng Yan.
1161
+ Network in network.
1162
+ arXiv preprint
1163
+ arXiv:1312.4400, 2013.
1164
+ Siyuan Ma and Mikhail Belkin. Diving into the shallows: a computational perspective on
1165
+ large-scale shallow learning. Advances in neural information processing systems, 30, 2017.
1166
+ Alexander G de G Matthews, Mark Rowland, Jiri Hron, Richard E Turner, and Zoubin
1167
+ Ghahramani. Gaussian process behaviour in wide deep neural networks. arXiv preprint
1168
+ arXiv:1804.11271, 2018.
1169
+ Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya
1170
+ Sutskever.
1171
+ Deep double descent: Where bigger models and more data hurt.
1172
+ Journal
1173
+ of Statistical Mechanics: Theory and Experiment, 2021(12):124003, 2021.
1174
+ Radford M Neal. Priors for infinite networks. In Bayesian Learning for Neural Networks,
1175
+ pages 29–53. Springer, 1996.
1176
+ Roman Novak, Lechao Xiao, Yasaman Bahri, Jaehoon Lee, Greg Yang, Jiri Hron, Daniel A
1177
+ Abolafia, Jeffrey Pennington, and Jascha Sohl-dickstein.
1178
+ Bayesian deep convolutional
1179
+ networks with many channels are gaussian processes.
1180
+ In International Conference on
1181
+ Learning Representations, 2018.
1182
+ Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A Alemi, Jascha Sohl-
1183
+ Dickstein, and Samuel S Schoenholz.
1184
+ Neural tangents: Fast and easy infinite neural
1185
+ networks in python. arXiv preprint arXiv:1912.02803, 2019.
1186
+ Matthew Olson, Abraham Wyner, and Richard Berk. Modern neural networks generalize on
1187
+ small data sets. Advances in Neural Information Processing Systems, 31, 2018.
1188
+ 25
1189
+
1190
+ Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. Advances
1191
+ in neural information processing systems, 20, 2007.
1192
+ Vaishaal Shankar, Alex Fang, Wenshuo Guo, Sara Fridovich-Keil, Jonathan Ragan-Kelley,
1193
+ Ludwig Schmidt, and Benjamin Recht. Neural kernels without tangents. In International
1194
+ Conference on Machine Learning, pages 8614–8623. PMLR, 2020.
1195
+ Stefano Spigler, Mario Geiger, St´ephane d’Ascoli, Levent Sagun, Giulio Biroli, and Matthieu
1196
+ Wyart. A jamming transition from under-to over-parametrization affects generalization in
1197
+ deep learning. Journal of Physics A: Mathematical and Theoretical, 52(47):474001, 2019.
1198
+ A. Tsigler and P. L. Bartlett. Benign overfitting in ridge regression, 2020.
1199
+ Christopher KI Williams. Computing with infinite networks. In Advances in Neural Infor-
1200
+ mation Processing Systems 9: Proceedings of the 1996 Conference, volume 9, page 295.
1201
+ MIT Press, 1997.
1202
+ Amir Zandieh, Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, and Jinwoo Shin.
1203
+ Scaling neural tangent kernels via sketching and random features.
1204
+ In M. Ranzato,
1205
+ A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Ad-
1206
+ vances in Neural Information Processing Systems, volume 34, pages 1062–1073. Cur-
1207
+ ran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/
1208
+ 08ae6a26b7cb089ea588e94aed36bd15-Paper.pdf.
1209
+ Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals.
1210
+ Understanding
1211
+ deep
1212
+ learning
1213
+ requires
1214
+ rethinking
1215
+ generalization.
1216
+ arXiv preprint
1217
+ arXiv:1611.03530, 2016.
1218
+ 26
1219
+
1220
+ A Proofs
+ Proof of Lemma 2. We have
+ \Psi_{k+1} = \Psi_k + S_{k+1} S_{k+1}' ,
+ \tilde\Psi_{k+1} = \hat\Psi_k + S_{k+1} S_{k+1}' ,
+ \hat\Psi_{k+1} = P_{k+1} \tilde\Psi_{k+1} P_{k+1} .  (15)
+ By the definition of the spectral projection, we have
+ \|\tilde\Psi_{k+1} - \hat\Psi_{k+1}\| \le \lambda_{\nu+1}(\tilde\Psi_{k+1}) \le \lambda_{\nu+1}(\Psi_{k+1}) ,  (16)
+ and hence
+ \|\Psi_{k+1} - \hat\Psi_{k+1}\| \le \|\Psi_{k+1} - \tilde\Psi_{k+1}\| + \|\tilde\Psi_{k+1} - \hat\Psi_{k+1}\| = \|\Psi_k - \hat\Psi_k\| + \|\tilde\Psi_{k+1} - \hat\Psi_{k+1}\| \le \|\Psi_k - \hat\Psi_k\| + \lambda_{\nu+1}(\Psi_{k+1}) ,  (17)
+ and the claim follows by induction. The last claim follows from the simple inequality
+ \|(\Psi_{k+1} + zI)^{-1} - (\hat\Psi_{k+1} + zI)^{-1}\| \le z^{-2} \|\Psi_{k+1} - \hat\Psi_{k+1}\| .  (18)
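Inequality (18) can be checked numerically. The sketch below (our illustration, not part of the paper's code) builds two random positive semi-definite matrices standing in for Ψ_{k+1} and Ψ̂_{k+1} and verifies the resolvent bound:

```python
import numpy as np

# Numerical sanity check of inequality (18): for symmetric PSD A, B and z > 0,
# ||(A + zI)^-1 - (B + zI)^-1|| <= z^-2 ||A - B|| in spectral norm, because
# (A+zI)^-1 - (B+zI)^-1 = (A+zI)^-1 (B - A) (B+zI)^-1
# and ||(M + zI)^-1|| <= 1/z for any PSD matrix M.
rng = np.random.default_rng(0)
n, z = 30, 0.5

x = rng.standard_normal((n, n))
a = x @ x.T                            # stand-in for Psi_{k+1} (PSD)
e = 0.1 * rng.standard_normal((n, n))
b = a + e @ e.T                        # stand-in for Psi-hat_{k+1} (PSD)

eye = np.eye(n)
lhs = np.linalg.norm(np.linalg.inv(a + z * eye) - np.linalg.inv(b + z * eye), 2)
rhs = np.linalg.norm(a - b, 2) / z**2
print(lhs <= rhs)  # True
```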
+ B Additional Experimental Results
+ This section provides additional experiments and findings that may help the community with future research.
+ First, we dive into more detail about our comparison with sklearn. Table 4 shows a more detailed training- and prediction-time comparison between FABR and sklearn, where we average training and prediction time over five independent runs. The experiment settings are explained in Section 4.1. We show that, depending on the number of shrinkages |z|, FABR becomes the faster choice once the number of observations in the dataset reaches n ≈ 5000. In this case, we have used the numpy linear algebra library to decompose FABR's covariance matrix, which appears to be faster than the scipy counterpart. We share our code in the following repository: https://github.com/tengandreaxu/fabr.
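The speed advantage over sklearn when sweeping many shrinkages comes from reusing a single decomposition of the covariance matrix across all values of z. A minimal sketch of that idea (our illustration; the function name is ours, not FABR's API):

```python
import numpy as np

def ridge_path(X, y, zs):
    """Ridge coefficients for every shrinkage z from one eigendecomposition.

    Decomposing X'X once costs O(d^3); each additional z then costs only
    O(d^2), whereas refitting a separate ridge per z repeats the expensive
    solve every time.
    """
    evals, V = np.linalg.eigh(X.T @ X)       # covariance decomposed once
    g = V.T @ (X.T @ y)
    # beta(z) = V diag(1 / (lambda_i + z)) V' X'y
    return {z: V @ (g / (evals + z)) for z in zs}

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
y = rng.standard_normal(200)
betas = ridge_path(X, y, zs=[0.1, 1.0, 10.0])
```

Each additional shrinkage only rescales the spectrum, which is consistent with FABR's running time in Table 4 growing very slowly with |z|. For any single z the result matches the direct solve of (X'X + zI)β = X'y.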
+ Second, while Figure 4 shows truncated curves of FABR's test accuracy as complexity increases, here we present the whole picture: Figure 6 shows how FABR's test accuracy increases with the model's complexity c on subsampled CIFAR-10 datasets of different sizes n, averaged over twenty independent runs. The expanded dataset follows similar patterns. As in Figure 4, one can notice that when the shrinkage is sufficiently high, the double descent disappears and the accuracy increases monotonically in complexity.
+ Third, the double descent phenomenon naturally appears for both FABR-ν and mini-batch FABR, but only when ν ≈ n or batch size ≈ n; it disappears when ν ≪ n. This intriguing finding is shown in Figure 5 for FABR-ν, and here, in Figure 7, we report the same curves for mini-batch FABR.
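The rank-ν reduction behind FABR-ν rests on the spectral projection analyzed in Appendix A: truncating a covariance matrix to its top ν eigendirections perturbs it by exactly λ_{ν+1} in spectral norm. A small numerical illustration (ours, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)
n, nu = 50, 10

x = rng.standard_normal((n, n))
psi = x @ x.T                          # PSD covariance stand-in
evals, V = np.linalg.eigh(psi)         # eigenvalues in ascending order

top = slice(n - nu, n)                 # indices of the nu largest eigenvalues
psi_hat = (V[:, top] * evals[top]) @ V[:, top].T   # rank-nu spectral truncation

err = np.linalg.norm(psi - psi_hat, 2)
lam_next = evals[n - nu - 1]           # (nu+1)-th largest eigenvalue
print(np.isclose(err, lam_next))  # True
```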
+ (a) n = 10 (b) n = 20 (c) n = 40 (d) n = 80 (e) n = 160 (f) n = 320 (g) n = 640 (h) n = 1280
+ Figure 6: The figure above shows FABR's full accuracy increase with the model's complexity c in the small-dataset regime. The expanded dataset follows similar patterns.
+ (a) n = 2560 (b) n = 50000
+ Figure 7: Similar to Figure 5, the figures above show that FABR's test accuracy increases with the model's complexity c on the subsampled CIFAR-10 dataset (7a) and the full CIFAR-10 dataset (7b). FABR trains using mini-batches with batch size = 2000 in both cases. Notice that we still observe a (shifted) double descent when batch size ≈ n, while the phenomenon disappears when batch size ≪ n. The test accuracy is averaged over 5 independent runs.
+ [Figure 6 and Figure 7 panel images: test accuracy (%) versus complexity c, one curve per shrinkage z ∈ {10^-5, 10^-1, 10^0, 10^1, 10^2, 10^3, 10^4, 10^5}.]
+ Table 4: The table below shows FABR and sklearn's training and prediction time (in seconds) on a synthetic dataset. We vary the dataset's number of features d and the number of shrinkages |z|. We report the average running time and the standard deviation over five independent runs.
+
+ d      | FABR (|z|=5)    | sklearn (|z|=5) | FABR (|z|=10)   | sklearn (|z|=10) | FABR (|z|=20)   | sklearn (|z|=20)  | FABR (|z|=50)   | sklearn (|z|=50)
+ 10     | 7.72s ± 0.36s   | 0.01s ± 0.00s   | 6.90s ± 0.77s   | 0.02s ± 0.00s    | 7.04s ± 0.67s   | 0.03s ± 0.00s     | 7.44s ± 0.57s   | 0.07s ± 0.01s
+ 100    | 7.35s ± 0.36s   | 0.06s ± 0.02s   | 6.58s ± 0.34s   | 0.11s ± 0.01s    | 7.61s ± 1.14s   | 0.24s ± 0.04s     | 7.30s ± 0.49s   | 0.53s ± 0.06s
+ 500    | 7.37s ± 0.44s   | 0.33s ± 0.16s   | 6.81s ± 0.25s   | 0.54s ± 0.03s    | 7.02s ± 0.35s   | 1.01s ± 0.07s     | 7.44s ± 0.48s   | 2.41s ± 0.21s
+ 1000   | 7.62s ± 0.31s   | 0.58s ± 0.21s   | 7.38s ± 0.23s   | 1.06s ± 0.04s    | 7.51s ± 0.24s   | 2.04s ± 0.04s     | 7.69s ± 0.08s   | 4.79s ± 0.36s
+ 2000   | 8.33s ± 0.42s   | 1.21s ± 0.03s   | 8.09s ± 0.73s   | 2.44s ± 0.05s    | 8.33s ± 0.24s   | 4.87s ± 0.07s     | 8.29s ± 0.47s   | 12.21s ± 0.15s
+ 3000   | 9.24s ± 0.25s   | 2.49s ± 0.05s   | 9.18s ± 0.41s   | 5.08s ± 0.03s    | 9.51s ± 0.20s   | 10.06s ± 0.02s    | 9.67s ± 0.41s   | 25.67s ± 0.23s
+ 5000   | 10.64s ± 0.86s  | 5.36s ± 0.05s   | 11.01s ± 0.70s  | 10.74s ± 0.06s   | 11.57s ± 0.81s  | 21.31s ± 0.12s    | 11.54s ± 0.41s  | 54.18s ± 0.73s
+ 10000  | 11.49s ± 0.66s  | 17.87s ± 8.58s  | 11.81s ± 0.47s  | 28.32s ± 10.53s  | 11.61s ± 0.49s  | 44.72s ± 9.99s    | 12.55s ± 0.30s  | 101.58s ± 15.66s
+ 25000  | 13.89s ± 0.21s  | 27.79s ± 8.75s  | 14.50s ± 0.45s  | 49.84s ± 9.68s   | 14.46s ± 0.96s  | 94.08s ± 10.94s   | 15.68s ± 0.74s  | 224.31s ± 11.75s
+ 50000  | 17.99s ± 0.22s  | 50.51s ± 8.99s  | 18.27s ± 0.37s  | 92.88s ± 10.45s  | 19.10s ± 0.37s  | 176.24s ± 10.07s  | 19.68s ± 0.85s  | 422.95s ± 13.22s
+ 100000 | 25.30s ± 0.39s  | 95.57s ± 0.25s  | 26.16s ± 0.46s  | 177.54s ± 3.77s  | 27.93s ± 0.35s  | 340.32s ± 3.74s   | 29.48s ± 1.38s  | 816.25s ± 4.35s
+
A9FIT4oBgHgl3EQf_Swz/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
ANFIT4oBgHgl3EQf-iyR/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:352353fa857996658be2c35bf9b7889b82dbe8cea97eae48d9a558c830b43f71
3
+ size 3735597
ANFIT4oBgHgl3EQf-iyR/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:adfd288be7a32edeb2df4b13df57ca4fbbc7c06cdefe7e951284b91080d24a75
3
+ size 158965
B9E1T4oBgHgl3EQf9gbH/content/tmp_files/2301.03558v1.pdf.txt ADDED
@@ -0,0 +1,798 @@
1
+ Draft version January 10, 2023
+ Typeset using LATEX default style in AASTeX631
+ Pre-merger sky localization of gravitational waves from binary neutron star mergers using deep learning
+ Chayan Chatterjee^1 and Linqing Wen^1
+ ^1 Department of Physics, OzGrav-UWA, The University of Western Australia, 35 Stirling Hwy, Crawley, Western Australia 6009, Australia
+ ABSTRACT
+ The simultaneous observation of gravitational waves (GW) and prompt electromagnetic counterparts from the merger of two neutron stars can help reveal the properties of extreme matter and gravity during and immediately after the final plunge. Rapid sky localization of these sources is crucial to facilitate such multi-messenger observations. Since GWs from binary neutron star (BNS) mergers can spend up to 10-15 mins in the frequency bands of the detectors at design sensitivity, early warning alerts and pre-merger sky localization can be achieved for sufficiently bright sources, as demonstrated in recent studies. In this work, we present pre-merger BNS sky localization results using CBC-SkyNet, a deep learning model capable of inferring sky location posterior distributions of GW sources at orders of magnitude faster speeds than standard Markov Chain Monte Carlo methods. We test our model's performance on a catalog of simulated injections from Sachdev et al. (2020), recovered at 0-60 secs before merger, and obtain sky localization areas comparable to those of the rapid localization tool BAYESTAR. These results show the feasibility of our model for rapid pre-merger sky localization and the possibility of follow-up observations for precursor emissions from BNS mergers.
+ 1. INTRODUCTION
25
+ The first direct detection of GWs from a merging binary black hole (BBH) system was made in 2015 (Abbott et al. (2016)), which heralded a new era in astronomy. Since then, the LIGO-Virgo-KAGRA (LVK) Collaboration (Aasi et al. (2015); Acernese et al. (2014); Akutsu et al. (2019)) has made more than 90 detections of GWs from merging compact binaries (Abbott et al. (2021a)), including two confirmed detections from merging binary neutron stars (BNS) and two from mergers of neutron star-black hole (NSBH) binaries (Abbott et al. (2021a,b)). The first detection of GWs from a BNS merger on August 17th, 2017 (GW170817), along with its associated electromagnetic (EM) counterpart, revolutionized the field of multi-messenger astronomy (Abbott et al. (2017a)). This event involved the joint detection of the GW signal by LIGO and Virgo, and the prompt short gamma-ray burst (sGRB) observation by the Fermi-GBM and INTEGRAL space telescopes (Abbott et al. (2017b,c)) ∼ 2 secs after the merger. This joint observation of GWs and sGRB, along with the observations of EM emissions at all wavelengths for months after the event, had a tremendous impact on astronomy, leading to an independent measurement of the Hubble Constant (Abbott et al. (2017d)), new constraints on the neutron star equation of state (Abbott et al. (2019)), and confirmation of the speculated connection of sGRBs and kilonovae with BNS mergers (Abbott et al. (2017b)).
+ While more multi-messenger observations involving GWs are certainly desirable, the typical delay between a GW detection and the associated GCN alerts, of the order of a few minutes (Magee et al. (2021)), makes such joint discoveries extremely challenging. This is because the prompt EM emission lasts for just 1-2 secs after merger, which means an advance warning system with pre-merger sky localization of such events is essential to enable joint GW and EM observations by ground- and space-based telescopes (Haas et al. (2016); Nissanke et al. (2013); Dyer et al. (2022)).
+ In recent years, several studies have shown that for a fraction of BNS events it will be possible to issue alerts up to 60 secs before merger (Magee et al. (2021); Sachdev et al. (2020); Kovalam et al. (2022); Nitz et al. (2020)). Such early-warning detections, along with pre-merger sky localizations, will facilitate rapid EM follow-up of prompt emissions. Observations of optical and ultraviolet emissions prior to merger are necessary for understanding r-process nucleosynthesis (Nicholl et al. (2017)) and shock-heated ejecta (Metzger (2017)) post merger. Prompt X-ray emission can reveal the final state of the remnant (Metzger & Piro (2014); Bovard et al. (2017); Siegel & Ciolfi (2016)), and early radio observations can reveal pre-merger magnetosphere interactions (Most & Philippov (2020)) and help test theories connecting BNS mergers with fast radio bursts (Totani (2013); Wang et al. (2016); Dokuchaev & Eroshenko (2017)).
+ arXiv:2301.03558v1 [astro-ph.HE] 30 Dec 2022
+ In the last three LVK observation runs, five GW low-latency detection pipelines have processed data and sent out alerts in real time. These pipelines are GstLAL (Sachdev et al. (2019)), SPIIR (Chu et al. (2022)), PyCBC (Usman et al. (2016)), MBTA (Aubin et al. (2021)), and cWB (Klimenko et al. (2016)). Of these, the first four use the technique of matched filtering (Hooper (2013)) to identify real GW signals in detector data, while cWB uses a coherent analysis to search for burst signals in detector data streams. In 2020, an end-to-end mock data challenge (Magee et al. (2021)) was conducted by the GstLAL and SPIIR search pipelines, successfully demonstrating the feasibility of sending pre-merger alerts. This study also estimated the expected rate of BNS mergers and their sky localization areas, using the rapid localization tool BAYESTAR (Singer & Price (2016)) with a four-detector network consisting of LIGO Hanford (H1), LIGO Livingston (L1), Virgo (V1) and KAGRA at O4 detector sensitivity.
+ In a previous study, Sachdev et al. (2020) showed the early warning performance of the GstLAL pipeline over a month of simulated data with injections. Their study suggested that alerts could be issued 10 (60) secs before merger for 24 (3) BNS systems over the course of one year of observations of a three-detector Advanced network operating at design sensitivity. These findings were in broad agreement with the estimates of Cannon et al. (2012) on the rates of early warning detections at design sensitivity. Sky localization was also obtained at various numbers of seconds before merger using the online rapid sky localization software BAYESTAR (Singer & Price (2016)), with the indication that around one event will be both detected before merger and localized within 100 deg², based on current BNS merger rate estimates.
+ The online search pipelines, however, experience additional latencies owing to data transfer, calibration and filtering processes, which contribute up to 7-8 secs of delay in the publication of early warning alerts (Kovalam et al. (2022); Sachdev et al. (2020)). For sky localization, BAYESTAR typically takes 8 secs to produce skymaps, which is expected to reduce to 1-2 secs in the third observation run. This latency can, however, be reduced further by the application of machine learning techniques, as demonstrated in Chatterjee et al. (2022).
+ In this Letter, we report pre-merger sky localization using deep learning for the first time. We obtain our results using CBC-SkyNet (Compact Binary Coalescence - Sky Localization Neural Network), a normalizing flow model (Rezende & Mohamed (2015); Kingma et al. (2016); Papamakarios et al. (2017)) for sky localization of all types of compact binary coalescence sources (Chatterjee et al. (2022)). We test our model on simulated BNS events from the injection catalog in Sachdev et al. (2020), which consists of signals detected 0 to 60 secs before merger using the GstLAL search pipeline. We compare our sky localization performance with BAYESTAR and find that our localization contours have areas comparable to BAYESTAR's, at an inference speed of just a few milliseconds on a P100 GPU.
+ The paper is divided as follows: we briefly describe our normalizing flow model in Section 2. In Section 3, we describe the details of the simulations used to generate the training and test sets. In Section 4, we describe the architecture of CBC-SkyNet. In Section 5, we discuss results obtained using our network on the dataset from Sachdev et al. (2020). Finally, we discuss future directions of this research in Section 6.
+ 2. METHOD
+ Our neural network, CBC-SkyNet, is based on a class of deep neural density estimators called normalizing flows, the details of which are provided in Chatterjee et al. (2022). CBC-SkyNet consists of three main components: (i) the normalizing flow, specifically a Masked Autoregressive Flow (MAF) (Kingma et al. (2016); Papamakarios et al. (2017)) network; (ii) a ResNet-34 model (He et al. (2015)) that extracts features from the complex signal-to-noise ratio (SNR) time series data, obtained by matched filtering GW strains with BNS template waveforms; and (iii) a fully connected neural network whose inputs are the intrinsic parameters (component masses and z-components of spins) of the templates used to generate the SNR time series by matched filtering. The architecture of our model is shown in Figure 1. The features extracted by the ResNet-34 and fully connected networks from the SNR time series ρ(t) and the best-matched intrinsic parameters θ̂_in, respectively, are combined into a single feature vector and passed as a conditional input to the MAF. The MAF is a normalizing flow with a specific architecture that transforms a simple base distribution (a multivariate Gaussian) z ∼ p(z) into a more complex target distribution x ∼ p(x), which in our case is the posterior distribution of the right ascension (α) and declination (δ) of the GW events, given the SNR time series and intrinsic parameters: p(α, δ | ρ(t), θ̂_in).
+ Figure 1. Architecture of our model, CBC-SkyNet. The input data, consisting of the SNR time series ρ(t) and intrinsic parameters θ̂_in, are provided to the network through two separate channels: the ResNet-34 channel (only one ResNet block is shown here) and the multi-layered fully connected (Dense) network, respectively. The features extracted from ρ(t) and θ̂_in are then combined and provided as conditional input to the main component of CBC-SkyNet, the Masked Autoregressive Flow (MAF) network, denoted by f(z). The MAF draws samples z from a multivariate Gaussian and learns a mapping from z to (α, δ), the right ascension and declination angles of the GW events.
+ This mapping is learnt by the flow during training using the method of maximum likelihood, and can be expressed as:
+ p(x) = \pi(z) \left| \det \frac{\partial f(z)}{\partial z} \right|^{-1} ,  (1)
+ where z is a random sample drawn from the base distribution \pi(z), f is the invertible transformation parametrized by the normalizing flow, and x = f(z) is the new random variable obtained after the transformation. The transformation f can be made more flexible and expressive by stacking a chain of transformations together:
+ x_K = f_K \circ \ldots \circ f_1(z_0) .  (2)
+ This helps the normalizing flow learn arbitrarily complex distributions, provided each of the transformations is invertible and the Jacobians are easy to evaluate. Neural posterior estimation (NPE) techniques (Papamakarios & Murray (2016); Lueckmann et al. (2017); Greenberg et al. (2019)), including normalizing flows and conditional variational autoencoders, have been used to estimate posterior distributions of BBH source parameters with high accuracy and speed (Dax et al. (2021); Gabbard et al. (2022); Chua & Vallisneri (2020)). Chatterjee et al. (2022) used a normalizing flow to demonstrate rapid inference of sky location posteriors for all CBC sources for the first time. This work presents the first application of deep learning to pre-merger BNS sky localization and is an extension of the model introduced in Chatterjee et al. (2022).
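The change-of-variables formula above can be made concrete with a toy one-dimensional "flow" that chains two invertible maps and evaluates the density of x = f(z) through the Jacobian correction (a hypothetical sketch, not CBC-SkyNet's implementation):

```python
import numpy as np

def base_logpdf(z):
    # log density of the standard normal base distribution pi(z)
    return -0.5 * (z**2 + np.log(2.0 * np.pi))

# Chain f = f2 ∘ f1 with f1(z) = 2z and f2(u) = u + 3; total |det df/dz| = 2.
def forward(z):
    return 2.0 * z + 3.0

def flow_logpdf(x):
    z = (x - 3.0) / 2.0                 # invert the chain
    log_det = np.log(2.0)               # log |det df/dz|
    return base_logpdf(z) - log_det     # log p(x) = log pi(z) - log|det|

# x = f(z) with z ~ N(0, 1) is exactly N(3, 4), and flow_logpdf matches
# that Gaussian's log density, illustrating equation (1).
print(float(flow_logpdf(3.0)))
```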
+ 3. DATA GENERATION
+ [Figure 1 image: detector channels (Hanford, Livingston, Virgo) pass through Conv2D/BatchNorm/ReLU blocks and the intrinsic parameters through Dense(64) layers; the combined feature vector conditions the MAF f(z), with z ∼ N(0, 1)^{D+1} and (α, δ) ∼ p(α, δ | ρ(t), θ̂_in). The example skymap is annotated 90% area: 121 deg², 50% area: 34 deg².]
+ We train six different versions of CBC-SkyNet, with a distinct training set (ρ^i(t), θ̂^i_in) for each "negative latency" i = 0, 10, 14, 28, 44, 58 secs before merger. Our training and test set injection parameters were sampled from the publicly available injection dataset used in Sachdev et al. (2020). These θ̂^i_in parameters were used to first simulate the BNS waveforms using the SpinTaylorT4 approximant (Sturani et al. (2010)), which were then injected into Gaussian noise with the advanced LIGO power spectral density (PSD) at design sensitivity (Littenberg & Cornish (2015)) to obtain the desired strains. The SNR time series ρ^i(t) was then obtained by matched filtering the simulated BNS strains with template waveforms.
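As a rough illustration of how a matched-filter SNR time series arises, the sketch below correlates a strain vector with a normalized template via FFTs; a flat noise PSD is assumed for simplicity (real pipelines whiten by the measured PSD), and all signal parameters are invented for the example:

```python
import numpy as np

def snr_series(strain, template):
    """Circular cross-correlation of strain with a normalized template via FFT.

    With a flat PSD the matched filter reduces to a correlation measured
    in units of the template norm; the peak marks the signal's location.
    """
    n = len(strain)
    corr = np.fft.irfft(np.fft.rfft(strain) * np.conj(np.fft.rfft(template, n)), n)
    return corr / np.sqrt(np.sum(template**2))

# Inject a short windowed sinusoid at sample 300 and recover its location.
t = np.arange(128)
template = np.sin(0.2 * t) * np.exp(-((t - 64) / 32.0) ** 2)
strain = np.zeros(1024)
strain[300:300 + 128] += template

rho = snr_series(strain, template)
print(int(np.argmax(np.abs(rho))))  # 300
```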
+ For generating the training sets, the template waveforms for matched filtering were simulated using the optimal parameters, which have exactly the same values as the injection parameters used to generate the detector strains. The SNR time series obtained by matched filtering the strains with the optimal templates, ρ^i_opt(t), and the optimal intrinsic parameters, θ̂^{i,opt}_in, were then used as input to our network during the training process. For testing, the template parameters were sampled from the publicly available data of Sachdev et al. (2020). These parameters correspond to the parameters of the maximum likelihood or 'best-matched' signal template recovered by the GstLAL matched-filtering search pipeline. Therefore, the values of θ̂^i_in used during testing are close to, but not exactly the same as, θ̂^{i,opt}_in. Similarly, the SNR time series ρ^i(t) is not exactly the same as the optimal ρ^i_opt(t), and has a slightly lower peak amplitude than the corresponding ρ^i_opt(t) peak because of the small mismatch between the injection parameters and the best-matched template waveform parameters.
+ While our injections have the same parameter distribution as (Sachdev et al. (2020)), we only choose samples with
192
+ network SNRs lying between 9 and 40, at each negative latency, for this analysis. This is because when the network
193
+ is trained on samples with identical parameter distributions as the dataset from (Sachdev et al. (2020)), our model’s
194
+ predictions on test samples with network SNRs > 40 tend to become spurious, with α and δ samples drawn from
195
+ the predicted posterior distribution for these events having values outside their permissible ranges. This is because
196
+ in the dataset from (Sachdev et al. (2020)), injection samples with SNR > 40 are much fewer in number compared
197
+ to samples between SNR 9 and 40. This means for models trained on data with parameters from (Sachdev et al.
198
+ (2020)), there exists very few training examples for SNR > 40 to learn from. Since Normalizing Flow models are
199
+ known to fail at learning out-of-distribution data, as described in (Kirichenko et al. (2020)), our model fails to make
200
+ accurate predictions at the high SNR limit. Although this can potentially be solved by generating training sets with
201
+ uniform SNR distribution over the entire existing SNR range in (Sachdev et al. (2020)), which corresponds to a uniform
202
+ distribution of sources in comoving volume up to a redshift of z=0.2, this would be require generating an unfeasibly
203
+ large number of training samples for each negative latency. Also, such events detected with SNR > 40 are expected
204
+ to be exceptionally rare, even at design sensitivities of advanced LIGO and Virgo, which is why we choose to ignore
205
+ them for this study. We therefore generate samples with uniformly distributed SNRs between 9 and 40 for training,
206
+ while our test samples have the same SNR distribution as (Sachdev et al. (2020)) between 9 and 40.
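+ The SNR selection above can be sketched as follows. The catalog numbers are hypothetical stand-ins (the real distribution comes from Sachdev et al. 2020); only the 9-40 cut and the uniform-for-training versus catalog-for-testing distinction come from the text:

```python
import random

random.seed(1)
# Hypothetical stand-in for the injection catalog's network SNRs.
catalog_snrs = [random.lognormvariate(2.5, 0.5) for _ in range(1000)]

# Test set: the catalog's own SNR distribution, restricted to 9 <= SNR <= 40.
test_snrs = [s for s in catalog_snrs if 9.0 <= s <= 40.0]

# Training set: the same range, but with SNR drawn uniformly so that
# high-SNR examples are not under-represented during training.
train_snrs = [random.uniform(9.0, 40.0) for _ in range(len(test_snrs))]
print(len(test_snrs), min(train_snrs) >= 9.0, max(train_snrs) <= 40.0)
```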
+ 4. NETWORK ARCHITECTURE
+ In this section, we describe the architecture of the different components of our model. The MAF is implemented
+ using a neural network designed to efficiently model conditional probability densities, called the
+ Masked Autoencoder for Density Estimation (MADE) (Germain et al. (2015)). We stack 10 MADE blocks together
+ to make a sufficiently expressive model, with each MADE block consisting of 5 layers with 256 neurons in each layer.
+ In between each pair of MADE networks, we use batch normalization to stabilize training. We use a ResNet-34 model
+ (He et al. (2015)), constructed from 2D convolutional and MaxPooling layers with skip connections,
+ to extract features from the SNR time series data. The real and imaginary parts of the SNR time series are
+ stacked vertically to generate a two-dimensional input data stream for each training and test sample. The initial
+ number of kernels for the convolutional layers of the ResNet model is chosen to be 32, and is doubled progressively
+ through the network (He et al. (2015)). The final vector of features obtained by the ResNet is combined with the
+ features extracted from the intrinsic parameters, θ̂i_in, by the fully-connected network, consisting of 5 hidden layers
+ with 64 neurons in each hidden layer. The combined feature vector is then passed as a conditional input to the MAF,
+ which learns the mapping between the base and target distributions during training.
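+ The conditional density estimation performed by the MAF can be illustrated with a single two-dimensional affine autoregressive step. The linear shift/scale functions below are fixed stand-ins for the learned MADE networks (a real MAF stacks ten such blocks with batch normalization, as described above), but the autoregressive structure and the change-of-variables log-determinant are the same:

```python
import math

def affine_ar_logprob(x, cond):
    """Log-density of a 2-D point (alpha, delta) under one conditional affine
    autoregressive step with a standard-normal base distribution.  The linear
    shift/scale functions are fixed stand-ins for learned MADE networks."""
    x1, x2 = x
    mu1, log_s1 = 0.1 * cond, -0.5 * cond            # x1 sees only the condition
    mu2, log_s2 = 0.2 * x1 + 0.1 * cond, -0.3 * x1   # x2 also sees x1
    z1 = (x1 - mu1) * math.exp(-log_s1)              # invert x = mu + s * z
    z2 = (x2 - mu2) * math.exp(-log_s2)
    log_base = -0.5 * (z1 * z1 + z2 * z2) - math.log(2.0 * math.pi)
    return log_base - log_s1 - log_s2                # change-of-variables log-det

print(round(affine_ar_logprob((0.3, -0.2), cond=1.0), 4))
```

+ Sampling runs the transform in the forward direction, which is why drawing α and δ samples at inference time is fast.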
+ 5. RESULTS
+ In this section, we describe the results of the injection runs at each negative latency. Figures 2 (a) to (f) show
+ the histograms of the areas of the 90% credible intervals of the predicted posterior distributions from CBC-SkyNet
+ (blue) and BAYESTAR (orange), evaluated on the injections in Sachdev et al. (2020) with network SNRs between 9
+ and 40. We observe that for most of the test sets, our model predicts smaller median 90% credible interval areas than
+ BAYESTAR. Also, BAYESTAR shows much broader tails at < 100 deg2 than CBC-SkyNet, especially at 0 secs,
+ 10 secs and 15 secs before merger (Figures 2 (a), (b) and (c)). These injections, with 90% areas < 100 deg2, typically
+ have SNR > 25, which shows that although CBC-SkyNet produces smaller 90% contours on average, it fails to match
+ BAYESTAR’s accuracy for high-SNR cases. In particular, at 0 secs before merger (Figure 2 (a)), the area of the smallest
+ 90% credible interval from CBC-SkyNet is 13 deg2, whereas for BAYESTAR it is around 1 deg2. The number of injections
+ localized with a 90% credible interval area between 10 - 15 deg2 by CBC-SkyNet is also much lower than by BAYESTAR,
+ although this effect is much less prominent for the other test sets.
+ Similar results are found for the searched area distributions at 0 secs before merger (Figure 3 (a)), although the
+ distributions of searched areas for all the other cases (Figures 3 (b) - (f)) from CBC-SkyNet and BAYESTAR are very
+ similar. Figures 4 (a) and (b) show box and whisker plots of the 90% credible interval areas and searched areas obtained
+ by CBC-SkyNet (blue) and BAYESTAR (pink) respectively. We observe that our median 90% areas (white horizontal
+ lines) are smaller than BAYESTAR’s for most of the cases.
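+ For reference, the two figures of merit used throughout this section can be computed from a pixelized sky map as sketched below (a toy five-pixel map of our own, not real data): the 90% credible area is the smallest set of highest-probability pixels containing 90% of the posterior mass, and the searched area is the area of all pixels ranked at or above the pixel that actually contains the source.

```python
def credible_area(probs, pixel_area, level=0.9):
    """Smallest sky area whose highest-probability pixels contain `level`
    of the posterior mass (probs is a normalized pixelized sky map)."""
    total, npix = 0.0, 0
    for p in sorted(probs, reverse=True):
        total += p
        npix += 1
        if total >= level:
            break
    return npix * pixel_area

def searched_area(probs, true_pixel, pixel_area):
    """Area of all pixels ranked at or above the one containing the source."""
    return sum(p >= probs[true_pixel] for p in probs) * pixel_area

sky_map = [0.5, 0.3, 0.12, 0.05, 0.03]      # toy normalized five-pixel map
print(credible_area(sky_map, pixel_area=3.36))            # 3 pixels
print(searched_area(sky_map, true_pixel=2, pixel_area=3.36))
```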
+ A possible explanation for these observations is as follows: BAYESTAR uses an adaptive sampling method (Singer &
+ Price (2016)) to evaluate the densities, in which the posterior probability is first evaluated over Nside,0 = 16 HEALPix
+ grids (Górski et al. (2005)), corresponding to a single sky grid area of 13.4 deg2. The highest-probability grids are
+ then adaptively subdivided into smaller grids over which the posterior is evaluated again. This process is repeated
+ seven times, with the highest possible resolution at the end of the iteration being Nside = 2^11, with an area of ∼ 10^-3
+ deg2 for the smallest grid (Singer & Price (2016)).
+ This adaptive sampling process, however, takes much longer to evaluate than conventional evaluation over
+ a uniform angular resolution in the sky. For our analysis, we therefore do not adopt the adaptive sampling process,
+ since our primary aim is to improve the speed of pre-merger sky localization. Instead, we draw 5000 α and δ posterior
+ samples each from our model’s predicted posterior and then apply a 2-D Kernel Density Estimate (KDE) over these
+ samples. We then evaluate the KDE over Nside,0 = 32 HEALPix grids, corresponding to a single grid area of ∼ 3.3
+ deg2, to obtain our final result. Our chosen angular resolution therefore results in sky grids which are much larger
+ than BAYESTAR’s smallest sky grids after adaptive refinement. As a result, our approach produces larger 90% contours
+ and searched areas than BAYESTAR for high network SNR cases, where the angular resolution has a more significant
+ impact on the overall result. The sampling process adopted by us may also explain why our median areas are smaller
+ than BAYESTAR’s. During inference, after sampling α and δ from the predicted posterior, we evaluate the KDE
+ with a fixed bandwidth of 0.03, chosen by cross-validation. This may result in a narrower contour estimate, on average,
+ compared to BAYESTAR’s sampling method.
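+ The HEALPix resolutions quoted above follow directly from the equal-area pixelization, in which the sphere is divided into 12·Nside² pixels. A quick check reproduces the 13.4 deg² (Nside = 16), ∼3.3 deg² (Nside = 32) and ∼10^-3 deg² (Nside = 2^11) figures:

```python
import math

def healpix_pixel_area_deg2(nside):
    """Area of one HEALPix pixel: the whole sphere (4*pi sr, ~41252.96 deg^2)
    is divided into 12 * nside**2 equal-area pixels."""
    sphere_deg2 = 4.0 * math.pi * (180.0 / math.pi) ** 2
    return sphere_deg2 / (12 * nside ** 2)

for nside in (16, 32, 2 ** 11):
    print(nside, healpix_pixel_area_deg2(nside))
# nside = 16   -> ~13.43 deg^2  (BAYESTAR's initial coarse grid)
# nside = 32   -> ~3.36 deg^2   (the fixed resolution used here)
# nside = 2048 -> ~8.2e-4 deg^2 (finest grid after 7 refinements: 16 * 2**7)
```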
+ Figures 5 (a) - (f) show P-P plots for a subset of injections at 0 secs, 10 secs, 15 secs, 28 secs, 44 secs and 58 secs
+ before merger respectively. To obtain the P-P plots, we compute the percentile scores of the true right ascension and
+ declination parameters within their marginalized posteriors and obtain the cumulative distribution of these scores.
+ For accurate posteriors, the distribution of the percentile scores should be uniform, which means the cumulative
+ distribution should be diagonal, as is evident from the figures. We also perform Kolmogorov-Smirnov (KS) tests
+ for each dataset to test the hypothesis that the percentile values for each set are uniformly distributed. The p-values
+ from the KS tests, shown in the legend for each parameter, are all > 0.05, which means that at a 95% level of
+ significance we cannot reject the null hypothesis that the percentile values are uniform, and thereby our posteriors
+ are consistent with the expected distribution.
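+ The percentile-score and KS-test procedure can be sketched as follows, on self-consistent toy data rather than the paper's injections: each "truth" is drawn from the same distribution as its posterior samples, so the scores should be uniform and the KS distance small.

```python
import random

def percentile_score(truth, samples):
    """Fraction of posterior samples lying below the injected (true) value."""
    return sum(s < truth for s in samples) / len(samples)

def ks_statistic_uniform(scores):
    """One-sample KS distance between the empirical CDF of the scores and
    the Uniform(0, 1) CDF; small values are consistent with uniformity."""
    scores = sorted(scores)
    n = len(scores)
    return max(max((i + 1) / n - s, s - i / n) for i, s in enumerate(scores))

random.seed(2)
scores = [percentile_score(random.gauss(0.0, 1.0),
                           [random.gauss(0.0, 1.0) for _ in range(200)])
          for _ in range(300)]
print(ks_statistic_uniform(scores))   # small, of order 1/sqrt(300)
```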
+ Because of the low dimensionality of our input data, training our network takes less than an hour on an NVIDIA Tesla
+ P100 GPU. Overall, the sampling and evaluation step during inference takes a few milliseconds for each injection on
+ the same computational resource. Sample generation and matched filtering were implemented with a modified version
+ of the code developed by Gebhard et al. (2019) that uses the PyCBC software (Nitz et al. (2021)). CBC-SkyNet was written
+ in TensorFlow 2.4 (Abadi et al. (2016)) using the Python language.
+ 6. DISCUSSION
+ In summary, we have reported the first deep learning based approach for pre-merger sky localization of BNS sources,
+ capable of orders-of-magnitude faster inference than Bayesian methods. Currently, our model’s accuracy is similar to
+ BAYESTAR’s on injections with network SNR between 9 and 40 at design sensitivity. The next step in this research would
+ be to perform a similar analysis on real detector data, which has non-stationary noise and glitches that may corrupt
+ Figure 2. Top panel, (a) to (c): Histograms of the areas of the 90% credible intervals from CBC-SkyNet (blue) and
+ BAYESTAR (orange) for 0 secs, 10 secs and 15 secs before merger. Bottom panel, (d) to (f): Similar histograms for 28
+ secs, 44 secs and 58 secs before merger.
+ the signal and affect detection and sky localization. A possible way to improve our model’s performance at high
+ SNRs (> 25) would be to use a finer angular resolution in the sky for evaluating the posteriors. We can also train
+ different versions of the model for different luminosity distance (and hence SNR) ranges. Our long-term goal is to
+ construct an independent machine learning pipeline for pre-merger detection and localization of GW sources. The
+ faster inference speed of machine learning models would be crucial for electromagnetic follow-up and observation of
+ prompt and precursor emissions from compact binary mergers. This method is also scalable and can be applied to
+ predict the luminosity distance of the sources pre-merger, which would help obtain a volumetric localization of the
+ source and potentially identify the host galaxies of BNS mergers.
+ The authors would like to thank Dr. Foivois Diakogiannis, Kevin Vinsen, Prof. Amitava Datta and Damon Beveridge
+ for useful comments on this work. This research was supported in part by the Australian Research Council Centre of
+ Excellence for Gravitational Wave Discovery (OzGrav, through Project No. CE170100004). This research was
+ undertaken with the support of computational resources from the Pople high-performance computing cluster of the
+ Faculty of Science at the University of Western Australia. This work used the computer resources of the OzStar computer
+ cluster at Swinburne University of Technology. The OzSTAR program receives funding in part from the Astronomy National
+ Collaborative Research Infrastructure Strategy (NCRIS) allocation provided by the Australian Government. This
+ research used data obtained from the Gravitational Wave Open Science Center (https://www.gw-openscience.org), a
+ service of LIGO Laboratory, the LIGO Scientific Collaboration and the Virgo Collaboration. LIGO is funded by the
+ U.S. National Science Foundation. Virgo is funded by the French Centre National de Recherche Scientifique (CNRS),
+ the Italian Istituto Nazionale della Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by Polish and
+ Hungarian institutes. This material is based upon work supported by NSF’s LIGO Laboratory, which is a major facility
+ fully funded by the National Science Foundation.
+ REFERENCES
+ Aasi, J., Abbott, B. P., Abbott, R., et al. 2015, Classical and Quantum Gravity, 32, 074001, doi: 10.1088/0264-9381/32/7/074001
+ Abadi, M., Agarwal, A., Barham, P., et al. 2016, TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
+ Figure 3. Top panel, (a) to (c): Histograms of the searched areas from CBC-SkyNet (blue) and BAYESTAR (orange) for 0
+ secs, 10 secs and 15 secs before merger. Bottom panel, (d) to (f): Similar histograms for 28 secs, 44 secs and 58 secs
+ before merger.
+ Figure 4. (a) Box and whisker plots showing the areas of the 90% credible intervals from CBC-SkyNet (blue) and BAYESTAR
+ (pink) at 0 secs, 10 secs, 15 secs, 28 secs, 44 secs and 58 secs before merger. The boxes encompass 95% of the events and the
+ whiskers extend over the rest. The white lines within the boxes represent the median values of the respective data sets. (b)
+ A similar box and whisker plot to (a), comparing the searched areas from CBC-SkyNet (blue) and BAYESTAR (pink) at 0 secs,
+ 10 secs, 15 secs, 28 secs, 44 secs and 58 secs before merger.
+ Figure 5. (a) to (f): P-P plots for a subset of the total number of test samples at 0 secs, 10 secs, 15 secs, 28 secs, 44 secs and
+ 58 secs before merger. We compute the percentile values (denoted as p) of the true right ascension and declination parameters
+ within their 1-D posteriors. The figure shows the cumulative distribution function of the percentile values, which should lie close
+ to the diagonal if the network is performing properly. The p-value of the KS test for each run is shown in the legend.
+ [Figure 5 panels (a)-(f): KS-test p-values (RA, Dec) shown in the legends: (a) 0.101, 0.0571; (b) 0.817, 0.325; (c) 0.829, 0.188; (d) 0.0891, 0.441; (e) 0.0745, 0.122; (f) 0.338, 0.147.]
+ Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2016, Phys. Rev. Lett., 116, 061102, doi: 10.1103/PhysRevLett.116.061102
+ —. 2017a, Phys. Rev. Lett., 119, 161101, doi: 10.1103/PhysRevLett.119.161101
+ —. 2017b, The Astrophysical Journal Letters, 848, L12, doi: 10.3847/2041-8213/aa91c9
+ —. 2017c, The Astrophysical Journal Letters, 848, L13, doi: 10.3847/2041-8213/aa920c
+ —. 2017d, Nature, 551, 85, doi: 10.1038/nature24471
+ —. 2019, Phys. Rev. X, 9, 011001, doi: 10.1103/PhysRevX.9.011001
+ Abbott, R., Abbott, T. D., Acernese, F., et al. 2021a, GWTC-3: Compact Binary Coalescences Observed by LIGO and Virgo During the Second Part of the Third Observing Run
+ Abbott, R., Abbott, T. D., Abraham, S., et al. 2021b, The Astrophysical Journal Letters, 915, L5, doi: 10.3847/2041-8213/ac082e
+ Acernese, F., Agathos, M., Agatsuma, K., et al. 2014, Classical and Quantum Gravity, 32, 024001, doi: 10.1088/0264-9381/32/2/024001
+ Akutsu, T., Ando, M., Arai, K., et al. 2019, Nature Astronomy, 3, 35, doi: 10.1038/s41550-018-0658-y
+ Aubin, F., Brighenti, F., Chierici, R., et al. 2021, Classical and Quantum Gravity, 38, 095004, doi: 10.1088/1361-6382/abe913
+ Bovard, L., Martin, D., Guercilena, F., et al. 2017, 96. https://www.osti.gov/pages/biblio/1415425
+ Cannon, K., Cariou, R., Chapman, A., et al. 2012, The Astrophysical Journal, 748, 136, doi: 10.1088/0004-637X/748/2/136
+ Chatterjee, C., Wen, L., Beveridge, D., Diakogiannis, F., & Vinsen, K. 2022, Rapid localization of gravitational wave sources from compact binary coalescences using deep learning
+ Chu, Q., Kovalam, M., Wen, L., et al. 2022, PhRvD, 105, 024023, doi: 10.1103/PhysRevD.105.024023
+ Chua, A. J. K., & Vallisneri, M. 2020, Phys. Rev. Lett., 124, 041102, doi: 10.1103/PhysRevLett.124.041102
+ Dax, M., Green, S. R., Gair, J., et al. 2021, Phys. Rev. Lett., 127, 241103, doi: 10.1103/PhysRevLett.127.241103
+ Dokuchaev, V. I., & Eroshenko, Y. N. 2017
+ Dyer, M. J., Ackley, K., Lyman, J., et al. 2022, in Proc. SPIE, Vol. 12182, The Gravitational-wave Optical Transient Observer (GOTO), 121821Y, doi: 10.1117/12.2629369
+ Gabbard, H., Messenger, C., Heng, I. S., Tonolini, F., & Murray-Smith, R. 2022, Nature Physics, 18, 112, doi: 10.1038/s41567-021-01425-7
+ Gebhard, T. D., Kilbertus, N., Harry, I., & Schölkopf, B. 2019, Phys. Rev. D, 100, 063015, doi: 10.1103/PhysRevD.100.063015
+ Germain, M., Gregor, K., Murray, I., & Larochelle, H. 2015
+ Greenberg, D., Nonnenmacher, M., & Macke, J. 2019, in Proceedings of Machine Learning Research, Vol. 97, Proceedings of the 36th International Conference on Machine Learning, ed. K. Chaudhuri & R. Salakhutdinov (PMLR), 2404–2414. https://proceedings.mlr.press/v97/greenberg19a.html
+ Górski, K. M., Hivon, E., Banday, A. J., et al. 2005, The Astrophysical Journal, 622, 759, doi: 10.1086/427976
+ Haas, R., Ott, C. D., Szilagyi, B., et al. 2016, Phys. Rev. D, 93, 124062, doi: 10.1103/PhysRevD.93.124062
+ He, K., Zhang, X., Ren, S., & Sun, J. 2015, Deep Residual Learning for Image Recognition
+ Hooper, S. 2013, PhD thesis
+ Kingma, D. P., Salimans, T., Jozefowicz, R., et al. 2016, Improving Variational Inference with Inverse Autoregressive Flow
+ Kirichenko, P., Izmailov, P., & Wilson, A. G. 2020, Why Normalizing Flows Fail to Detect Out-of-Distribution Data
+ Klimenko, S., Vedovato, G., Drago, M., et al. 2016, Phys. Rev. D, 93, 042004, doi: 10.1103/PhysRevD.93.042004
+ Kovalam, M., Patwary, M. A. K., Sreekumar, A. K., et al. 2022, The Astrophysical Journal Letters, 927, L9, doi: 10.3847/2041-8213/ac5687
+ Littenberg, T. B., & Cornish, N. J. 2015, Phys. Rev. D, 91, 084034, doi: 10.1103/PhysRevD.91.084034
+ Lueckmann, J.-M., Goncalves, P. J., Bassetto, G., et al. 2017, in Advances in Neural Information Processing Systems, ed. I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett, Vol. 30 (Curran Associates, Inc.). https://proceedings.neurips.cc/paper/2017/file/addfa9b7e234254d26e9c7f2af1005cb-Paper.pdf
+ Magee, R., Chatterjee, D., Singer, L. P., et al. 2021, The Astrophysical Journal Letters, 910, L21, doi: 10.3847/2041-8213/abed54
+ Metzger, B. D. 2017, Welcome to the Multi-Messenger Era! Lessons from a Neutron Star Merger and the Landscape Ahead
+ Metzger, B. D., & Piro, A. L. 2014, Monthly Notices of the Royal Astronomical Society, 439, 3916, doi: 10.1093/mnras/stu247
+ Most, E. R., & Philippov, A. A. 2020, The Astrophysical Journal Letters, 893, L6, doi: 10.3847/2041-8213/ab8196
+ Nicholl, M., Berger, E., Kasen, D., et al. 2017, The Astrophysical Journal Letters, 848, L18, doi: 10.3847/2041-8213/aa9029
+ Nissanke, S., Kasliwal, M., & Georgieva, A. 2013, The Astrophysical Journal, 767, 124, doi: 10.1088/0004-637X/767/2/124
+ Nitz, A., Harry, I., Brown, D., et al. 2021, gwastro/pycbc: 1.18.0 release of PyCBC, v1.18.0, Zenodo, doi: 10.5281/zenodo.4556907
+ Nitz, A. H., Schäfer, M., & Canton, T. D. 2020, The Astrophysical Journal Letters, 902, L29, doi: 10.3847/2041-8213/abbc10
+ Papamakarios, G., & Murray, I. 2016, Fast ϵ-free Inference of Simulation Models with Bayesian Conditional Density Estimation
+ Papamakarios, G., Pavlakou, T., & Murray, I. 2017, in Advances in Neural Information Processing Systems, ed. I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett, Vol. 30 (Curran Associates, Inc.). https://proceedings.neurips.cc/paper/2017/file/6c1da886822c67822bcf3679d04369fa-Paper.pdf
+ Rezende, D. J., & Mohamed, S. 2015, Variational Inference with Normalizing Flows
+ Sachdev, S., Caudill, S., Fong, H., et al. 2019, The GstLAL Search Analysis Methods for Compact Binary Mergers in Advanced LIGO’s Second and Advanced Virgo’s First Observing Runs
+ Sachdev, S., Magee, R., Hanna, C., et al. 2020, The Astrophysical Journal Letters, 905, L25, doi: 10.3847/2041-8213/abc753
+ Siegel, D. M., & Ciolfi, R. 2016, The Astrophysical Journal, 819, 14, doi: 10.3847/0004-637X/819/1/14
+ Singer, L. P., & Price, L. R. 2016, Phys. Rev. D, 93, 024013, doi: 10.1103/PhysRevD.93.024013
+ Sturani, R., Fischetti, S., Cadonati, L., et al. 2010, Phenomenological gravitational waveforms from spinning coalescing binaries
+ Totani, T. 2013, Publications of the Astronomical Society of Japan, 65, L12, doi: 10.1093/pasj/65.5.L12
+ Usman, S. A., Nitz, A. H., Harry, I. W., et al. 2016, Classical and Quantum Gravity, 33, 215004, doi: 10.1088/0264-9381/33/21/215004
+ Wang, J.-S., Yang, Y.-P., Wu, X.-F., Dai, Z.-G., & Wang, F.-Y. 2016, The Astrophysical Journal Letters, 822, L7, doi: 10.3847/2041-8205/822/1/L7
B9E1T4oBgHgl3EQf9gbH/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

B9E4T4oBgHgl3EQfFQzj/content/2301.04885v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:539c3c373277403f6a408828805329e11a5e973396257197f07b97259f641970
+ size 1279791

B9E4T4oBgHgl3EQfFQzj/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd1980d7e48b9e5feccf9365ca03aa32f1794aa3d708d8a8b993a668ed2b9dc5
+ size 73699

CdAzT4oBgHgl3EQfGftE/content/2301.01028v1.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:814f31820af1c6783d01ffc3510e7611218283e34e9b9fdad229ea6078553e5f
+ size 13147693

DNAzT4oBgHgl3EQfGfv5/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:716a9e7e1511e4b06a884118b9af2b201a4e5ab6ee650759a3a5031d447e5a71
+ size 2228269

DtAzT4oBgHgl3EQfGvt3/content/tmp_files/2301.01033v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff

DtAzT4oBgHgl3EQfGvt3/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff

FNAyT4oBgHgl3EQfSffh/content/tmp_files/2301.00089v1.pdf.txt ADDED
@@ -0,0 +1,1676 @@
+ Chair of Robotics, Artificial Intelligence and Real-Time Systems
+ TUM School of Computation, Information and Technology
+ Technical University of Munich
+ Autonomous Driving Simulator based on Neurorobotics Platform
+ Wei Cao, Liguo Zhou, Yuhong Huang, and Alois Knoll
+ Chair of Robotics, Artificial Intelligence and Real-Time Systems, Technical University of Munich
+
+ Abstract — There are many artificial intelligence algorithms for autonomous driving on the market today, but directly
+ installing these algorithms on vehicles is unrealistic and expensive. At the same time, many of these algorithms need an
+ environment in which to train and optimize. Simulation is a valuable and meaningful solution that provides both training
+ and testing functions, and it can be said that simulation is a critical link in the autonomous driving world. There are
+ many different simulation applications and systems from companies and academia, such as SVL and Carla. These
+ simulators claim to offer the closest possible real-world simulation, but their environment objects, such as pedestrians and
+ the other vehicles around the agent vehicle, are fixed by their programming in advance. They can only move along
+ pre-set trajectories, or their movements are determined by random numbers. What happens when all environmental
+ objects are also driven by artificial intelligence, so that their behaviors resemble those of real people or the natural
+ reactions of other drivers? This problem is a blind spot for most simulation applications, or one that such applications
+ cannot easily solve. The Neurorobotics Platform from the TUM team of Prof. Alois Knoll uses the ideas of "Engines"
+ and "Transceiver Functions" to solve the multi-agent problem. This report starts with brief research on the
+ Neurorobotics Platform and analyzes the potential and possibility of developing a new simulator to achieve the goal of
+ true real-world simulation. Then, based on the NRP-Core platform, this initial development aims to construct an initial
+ demo experiment. The report begins with basic knowledge of NRP-Core and its installation, then focuses on the
+ explanation of the components necessary for a simulation experiment, and finally details the construction of the
+ autonomous driving system, which integrates an object detection function and an autonomous driving control function.
+ At the end, the existing disadvantages and possible improvements of this autonomous driving system are discussed.
+ Keywords— Simulation, Neurorobotics Platform, NRP-Core, Engines, Transceiver Functions, Autonomous Driving,
+ Object Detection, PID Trajectory Control
1 Introduction

1.1 Motivation

At present, many different Artificial Intelligence (AI) algorithms are used for autonomous driving. Some algorithms are used to perceive the environment, such as object detection and semantic/instance segmentation. Some are dedicated to making the best trajectory strategy and control decisions based on the road environment. Others contribute to applications such as path planning and parking. Simulation is the most cost-effective way to develop these algorithms before they are truly deployed to actual vehicles or robots, so the performance of a simulation platform directly influences the performance of the AI algorithms. The present market already offers many different "real-world" simulation applications, such as CARLA [1] for simulating autonomous driving algorithms, AirSim [2] from Microsoft for autonomous vehicles and quadrotors, and PTV Vissim [3] from the German PTV Group for flexible traffic simulation.
Although these simulators are dedicated to "real-world" simulation, they all show "unreal" behavior in some respects during simulation. Beyond the problem of unrealistic 3-D models and environments, these simulators share an obvious limitation: the AI algorithms are deployed only to the target experimental subjects, the vehicles or robots, while environmental participants such as other vehicles, motorbikes, and pedestrians look very close to the real environment but are in fact pre-programmed with fixed motion trails. The core problem of most of these simulators is their basic information transmission: they only transfer the essential traffic information to the agent subject in the simulation, and this transmission is one-way. Given this situation, can the other subjects in the simulation run their own, different AI algorithms at the same time, so that they can react to the agent's behavior? In the future there will not be only one vehicle running one algorithm from one company; vehicles will have to interact intensively with other agents. How the interaction between different algorithms feeds back into those algorithms is a blind spot for many simulators.
This large-scale interaction among many agents is the main problem that these applications should pay attention to, yet the existing applications have no efficient way to solve it. A simulation platform that is truly like the real world, whose environment is not just a fixed pre-defined program, and in which the objects of the environment interact in a relatively objective way with the vehicles running the autonomous driving algorithms under test while influencing each other: building such a platform is an intractable problem. The Neurorobotics Platform (NRP) from the TUM team of Prof. Alois Knoll provides a potential idea for solving this interaction problem. This research project focuses on a preliminary implementation and explores the possibility of solving the interaction problem mentioned above.

arXiv:2301.00089v1 [cs.RO] 31 Dec 2022
1.2 Neurorobotics Platform (NRP)

Figure 1.1: The base model of the Neurorobotics Platform (NRP)
The Neurorobotics Platform [4] is an open-source integrative simulation framework developed by the group of the Chair of Robotics, Artificial Intelligence and Real-Time Systems of the Technical University of Munich in the context of the Human Brain Project, a FET Flagship funded by the European Commission. The basic idea of this platform is to enable choosing and testing different brain models (ranging from spiking neural networks to deep networks) for robots. The platform builds an efficient information transmission framework that lets simulated agents interact with their virtual environment.
The new version of NRP, called NRP-Core, provides a new idea: it regards every participant in the simulation system as an "Engine", just like an object in a programming language such as C++ or Python. The properties of a simulation participant such as a robot, an autonomous driving car, the weather, or a pedestrian, together with its "behaviors", are completely encapsulated in its own Engine object. All participants thus become "real" objects that can influence each other in the simulation world, instead of being fixed, predetermined programs. The NRP platform is the most important transport medium between these engines; the transport units are called Transceiver Functions. A Transceiver Function transmits information, for example taking the image from a camera and sending it to an autonomous driving car, while at the same time other information is sent to other engines via different transfer protocols such as JSON or the ROS system. The transmission of information is therefore highly real-time, which brings the simulation world very close to the real world and gives it high simulation potency. For example, the platform sends the image information to the autonomous driving car so that the car can assess the situation and make a rational decision; at the same moment the environment cars or "drivers" receive the location information of the autonomous driving car and make their own decisions, such as driving on or changing velocity and lanes; and at the same time these cars are influenced by the weather, e.g. on rainy days the braking time of a car becomes longer, which makes decision making and object detection even more significant.
NRP-core is mostly written in C++, with the Transceiver Function framework relying on Python for better usability. It guarantees a fully deterministic execution of the simulation, provided every simulator used is itself deterministic and works on the basis of controlled progression through time steps. Users should thus take note that event-based simulators may not be suitable for integration in NRP-core (to be analyzed on a case-by-case basis). Communications to and from NRP-core are synchronous, and function calls are blocking; as such, the actual execution time of a simulation based on NRP-core critically depends on the slowest simulator integrated therein. These features of the NRP-Core platform are essential for building multiple objects that interact with other agents during the simulation progress, and they keep the simulation close to the real world.
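The stepped, blocking synchronization described above can be sketched with a small self-contained mock (plain Python with made-up engine names and step sizes, not the NRP-core API): the loop only advances once every engine has caught up with the target time, so overall runtime is gated by the slowest engine.

```python
# Minimal mock of a stepped simulation loop: each "engine" advances with a
# fixed-duration time step, and the loop blocks until every engine has
# reached the target time. Illustrative only; not the NRP-core API.

class MockEngine:
    def __init__(self, name, timestep_ms):
        self.name = name
        self.timestep_ms = timestep_ms  # fixed step duration of this engine
        self.time_ms = 0
        self.steps = 0

    def advance(self, target_time_ms):
        # Blocking call: step repeatedly until this engine reaches the target time
        while self.time_ms < target_time_ms:
            self.time_ms += self.timestep_ms
            self.steps += 1

def run_simulation_loop(engines, loop_step_ms, end_time_ms):
    """Advance all engines in lockstep through the whole simulation."""
    t = 0
    while t < end_time_ms:
        t += loop_step_ms
        # Synchronous and blocking: the slowest engine gates every loop iteration
        for engine in engines:
            engine.advance(t)

physics = MockEngine("physics", 10)   # 10 ms steps
brain = MockEngine("brain", 50)       # 50 ms steps
run_simulation_loop([physics, brain], loop_step_ms=50, end_time_ms=1000)
print(physics.steps, brain.steps)  # 100 20
```

Because both engines are advanced inside the same blocking loop, the wall-clock cost of one loop iteration is the sum of the engines' step costs, dominated by the slowest one.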
2 NRP-Core configurations for simulation progress

NRP-Core covers many application scenarios for different simulation demands, and for a specific purpose an NRP-Core setup can look widely different. This development of the autonomous driving benchmark focuses on the actual, suggested development process and concentrates on the construction of the simulation application. The details of the operation mechanism of NRP-Core are not discussed in depth in this development documentation; the principles of the operation mechanism can be found on the homepage of NRP-Core.
2.1 Installation of NRP-Core and setting the environment

For the complete installation, refer to the homepage of the NRP-Core platform under "Getting Started" / "Installation Instructions." This section lists only the requirements for the autonomous driving simulator and benchmark.

WARNING: Previous versions of the NRP install forked versions of several libraries, notably NEST and Gazebo. Installing NRP-core on a system where a previous version of NRP is installed is known to cause conflicts. It is strongly recommended not to install both versions at the same time.

Operating system: Ubuntu 20.04 is recommended.

Setting the installation environment: to properly set the environment for running experiments with NRP-core, please make sure that the lines below are added to your ~/.bashrc file.
# Start setting environment
export NRP_INSTALL_DIR="/home/${USER}/.local/nrp" # The installation directory, which was given before
export NRP_DEPS_INSTALL_DIR="/home/${USER}/.local/nrp_deps"
export PYTHONPATH="${NRP_INSTALL_DIR}"/lib/python3.8/site-packages:"${NRP_DEPS_INSTALL_DIR}"/lib/python3.8/site-packages:$PYTHONPATH
export LD_LIBRARY_PATH="${NRP_INSTALL_DIR}"/lib:"${NRP_DEPS_INSTALL_DIR}"/lib:${NRP_INSTALL_DIR}/lib/nrp_gazebo_plugins:$LD_LIBRARY_PATH
export PATH=$PATH:"${NRP_INSTALL_DIR}"/bin:"${NRP_DEPS_INSTALL_DIR}"/bin
export GAZEBO_PLUGIN_PATH=${NRP_INSTALL_DIR}/lib/nrp_gazebo_plugins:${GAZEBO_PLUGIN_PATH}
. /usr/share/gazebo-11/setup.sh
. /opt/ros/noetic/setup.bash
. ${CATKIN_WS}/devel/setup.bash
# End of setting environment
Dependency installation:

# Start of dependencies installation
# Pistache REST Server
sudo add-apt-repository ppa:pistache+team/unstable

# Gazebo repository
sudo sh -c 'echo "deb http://packages.osrfoundation.org/gazebo/ubuntu-stable `lsb_release -cs` main" > /etc/apt/sources.list.d/gazebo-stable.list'
wget https://packages.osrfoundation.org/gazebo.key -O - | sudo apt-key add -

sudo apt update
sudo apt install git cmake libpistache-dev libboost-python-dev libboost-filesystem-dev libboost-numpy-dev libcurl4-openssl-dev nlohmann-json3-dev libzip-dev cython3 python3-numpy libgrpc++-dev protobuf-compiler-grpc libprotobuf-dev doxygen libgsl-dev libopencv-dev python3-opencv python3-pil python3-pip libgmock-dev

# required by gazebo engine
sudo apt install libgazebo11-dev gazebo11 gazebo11-plugin-base

# Remove flask if it was installed, to ensure it is installed from pip
sudo apt remove python3-flask python3-flask-cors
# required by Python engine
# If you are planning to use The Virtual Brain framework, you will most likely have to use flask version 1.1.4.
# By installing flask version 1.1.4, the markupsafe library (included with flask) has to be downgraded to version 2.0.1 to run properly with gunicorn
# You can install that version with
# pip install flask==1.1.4 gunicorn markupsafe==2.0.1
pip install flask gunicorn

# required by nest-server (which is built and installed along with nrp-core)
sudo apt install python3-restrictedpython uwsgi-core uwsgi-plugin-python3
pip install flask_cors mpi4py docopt

# required by nrp-server, which uses gRPC python bindings
pip install grpcio-tools pytest psutil docker

# Required for using docker with ssh
pip install paramiko

# ROS; when not needed, jump to the next step

# Install ROS: follow the installation instructions: http://wiki.ros.org/noetic/Installation/Ubuntu. To enable ros support in nrp, only `ros-noetic-ros-base` is required.

# Tell nrpcore where your catkin workspace is located: export a variable CATKIN_WS pointing to an existing catkin workspace root folder. If the variable does not exist, a new catkin workspace will be created at `${HOME}/catkin_ws`.

# MQTT: if needed, see the homepage of NRP-Core

# End of dependencies installation
NRP installation:

# Start of installation
git clone https://bitbucket.org/hbpneurorobotics/nrp-core.git
cd nrp-core
mkdir build
cd build
# See the section "Common NRP-core CMake options" in the documentation for additional ways to configure the project with CMake
cmake .. -DCMAKE_INSTALL_PREFIX="${NRP_INSTALL_DIR}" -DNRP_DEP_CMAKE_INSTALL_PREFIX="${NRP_DEPS_INSTALL_DIR}"
mkdir -p "${NRP_INSTALL_DIR}"
# the installation process might take some time, as it downloads and compiles Nest as well.
# If you haven't installed MQTT libraries, add the ENABLE_MQTT=OFF definition to cmake (-DENABLE_MQTT=OFF).
make
make install
# Just in case you want to build the documentation. Documentation can then be found in a new doxygen folder
make nrp_doxygen
# End of installation
Common NRP-core CMake options: here is the list of the CMake options that help modify the project configuration (turning the support of some components and features on and off).

• Developer options:
– COVERAGE enables the generation of code coverage reports during testing
– BUILD_RST enables the generation of reStructuredText source files from the Doxygen documentation
• Communication protocol options:
– ENABLE_ROS enables compilation with ROS support;
– ENABLE_MQTT enables compilation with MQTT support.
• ENABLE_SIMULATOR and BUILD_SIMULATOR_ENGINE_SERVER options:
– ENABLE_NEST and BUILD_NEST_ENGINE_SERVER;
– ENABLE_GAZEBO and BUILD_GAZEBO_ENGINE_SERVER.

The ENABLE_SIMULATOR and BUILD_SIMULATOR_ENGINE_SERVER flags allow disabling the compilation of those parts of nrp-core that depend on or install a specific simulator (e.g. gazebo, nest).

The expected behavior for each of these pairs of flags is as follows:
• the NRPCoreSim is always built regardless of any of the flag values.
• if ENABLE_SIMULATOR is set to OFF:
– the related simulator won't be assumed to be installed in the system, i.e. make won't fail if it isn't. It also won't be installed in the compilation process if this possibility is available (as in the case of nest)
– the engines connected with this simulator won't be built (neither client nor server components)
– tests that would fail if the related simulator is not available won't be built
• if ENABLE_SIMULATOR is set to ON and BUILD_SIMULATOR_ENGINE_SERVER is set to OFF: same as above, but:
– the engine clients connected to this simulator will be built. This means that they should not depend on or link to any specific simulator
– the engine server-side components might or might not be built, depending on whether the related simulator is required at compilation time
• if both flags are set to ON, the simulator is assumed to be installed, or it will be installed from source if this option is available. All targets connected with this simulator will be built.

This flag system allows configuring the resulting NRP-Core depending on which simulators are available on the system, both avoiding potential dependency conflicts between simulators and enforcing modularity, opening the possibility of having specific engine servers running on a different machine or inside containers.
2.2 Introduction of the basic components of a simulation with NRP

The important elements for constructing a simulation example on the NRP platform are: Engines, Transceiver Functions (TF) and Preprocessing Functions (PF), the simulation configuration JSON file, simulation model files, and DataPacks. These are the basic components of the simulation progress. This section lists and explains their definition, content, and implementation.
2.2.1 Engine

Engines are a core aspect of the NRP-core framework. They run the actual simulation software (which can be comprised of any number of heterogeneous modules), with the Simulation Loop and TransceiverFunctions merely being a way to synchronize and exchange data between them. The data exchange is carried out through an engine client (see the paragraph below). An Engine can run any type of software, from physics engines to brain simulators. The only requirement is that it must be able to progress through time with fixed-duration time steps.

Several engines are already implemented in NRP-Core:
• Nest: two different implementations that integrate the NEST Simulator into NRP-core.
• Gazebo: engine implementation for the Gazebo physics simulator.
• PySim: engine implementation based on the Python JSON Engine, wrapping different simulators (Mujoco, Opensim, and OpenAI) with a Python API.
• The Virtual Brain: engine implementation based on the Python JSON Engine and the TVB Python API.

These and other engines are provided by NRP; engines for spiking neural networks and the like are of primary interest for research, and these implementations are tied to their specific simulators. The platform also provides the Python JSON Engine. This versatile engine enables users to execute a user-defined Python script as an engine server, thus ensuring synchronization and enabling DataPack data transfer with the Simulation Loop process. It can be used to integrate any simulator with a Python API into an NRP-core experiment. This feature allows users to develop the agents of an experiment modularly in the constructed simulation world and offers the flexibility to manage several objects with different behaviors and characters.
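As an illustration of the shape such a user-defined engine script typically takes, here is a self-contained mock with initialize/run-loop/shutdown hooks and a named DataPack. The class and method names are illustrative stand-ins rather than the actual Python JSON Engine API; see the NRP-core documentation for the real base class.

```python
# Self-contained mock of a Python-engine-style user script. The hook names
# (initialize/run_loop/shutdown) mirror the usual engine lifecycle; this is
# NOT the actual NRP-core base class, only an illustration of the pattern.

class MockEngineScript:
    def __init__(self):
        self.datapacks = {}

    def register_datapack(self, name):
        self.datapacks[name] = None

    def set_datapack(self, name, data):
        self.datapacks[name] = data

    def get_datapack(self, name):
        return self.datapacks[name]

class CarEngine(MockEngineScript):
    """Toy 'vehicle' engine: integrates position over fixed time steps."""
    def initialize(self):
        self.position = 0.0
        self.register_datapack("car_state")

    def run_loop(self, timestep_s):
        # One fixed-duration step of this engine's own simulation
        speed = 2.0  # m/s, constant for the sketch
        self.position += speed * timestep_s
        self.set_datapack("car_state", {"position": self.position})

    def shutdown(self):
        pass

engine = CarEngine()
engine.initialize()
for _ in range(10):       # ten 0.1 s steps
    engine.run_loop(0.1)
print(round(engine.get_datapack("car_state")["position"], 6))  # 2.0
```

In a real experiment the framework, not the user, drives the lifecycle: it calls the initialization once, advances the script step by step in the Simulation Loop, and exchanges the registered DataPacks with the Transceiver Functions.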
2.2.2 DataPack and construction format

The carrier of the information that is transported between engines and lets engines communicate with each other is the DataPack. NRP supports three types of DataPack, all of which are simple objects that wrap around arbitrary data structures: the JSON DataPack, the Protobuf DataPack, and the ROS msg DataPack. They provide the necessary abstract interface, which is understood by all components of NRP-Core, while still allowing the passing of data in various formats. A DataPack is also an important feature or property of a specific Engine, meaning that the parameters and data format of a specific DataPack are declared in the Engine (for an example see section 3.4.2).

A DataPack consists of two parts:
• DataPack ID: allows unique identification of the object.
• DataPack data: the data stored by the DataPack, which can in principle be of any type.
DataPacks are mainly used by Transceiver Functions to relay data between engines. Each engine type is designed to accept only DataPacks of a certain type and structure. Every DataPack contains a DataPackIdentifier, which uniquely identifies the DataPack object and allows the routing of the data between transceiver functions, engine clients, and engine servers. A DataPack identifier consists of three fields:
• name - the name of the DataPack. It must be unique.
• type - string representation of the DataPack data type. This field will most probably be of no concern for users. It is set and used internally and is not in human-readable form.
• engine name - the name of the engine to which the DataPack is bound.
DataPack is a template class with a single template parameter, which specifies the type of data contained by the DataPack. This DataPack data can in principle be of any type. In practice there are some limitations, since DataPacks, which are C++ objects, must be accessible from TransceiverFunctions, which are written in Python. Therefore the only DataPack data types which can actually be used in NRP-core are those for which Python bindings are provided. It is possible for a DataPack to contain no data. This is useful, for example, when an Engine is asked for a certain DataPack but is not able to provide it. In this case, the Engine can return an empty DataPack, which contains only a DataPack identifier and no data. Attempting to retrieve the data from an empty DataPack will result in an exception. A method "isEmpty" is provided to check whether a DataPack is empty before attempting to access its data:

if(not datapack.isEmpty()):
    # It's safe to get the data
    print(datapack.data)
else:
    # This will raise an exception
    print(datapack.data)
• The format of getting a DataPack from a particular Engine:

# Declare a datapack with name "datapack_name" from engine "engine_name" as input using the @EngineDataPack decorator
# The transceiver function must accept an argument with the same name as "keyword" in the datapack decorator

@EngineDataPack(keyword="datapack", id=DataPackIdentifier("datapack_name", "engine_name"))
@TransceiverFunction("engine_name")
def transceiver_function(datapack):
    print(datapack.data)

# Multiple input datapacks from different engines can be declared
@EngineDataPack(keyword="datapack1", id=DataPackIdentifier("datapack_name1", "engine_name1"))
@EngineDataPack(keyword="datapack2", id=DataPackIdentifier("datapack_name2", "engine_name2"))
@TransceiverFunction("engine_name1")
def transceiver_function(datapack1, datapack2):
    print(datapack1.data)
    print(datapack2.data)

PS: for the details of the two decorators of the Transceiver Function, see section 2.2.3 below.
• The format of setting information in a DataPack and sending it to a particular Engine:

# NRP-Core expects transceiver functions to always return a list of datapacks
@TransceiverFunction("engine_name")
def transceiver_function():
    datapack = JsonDataPack("datapack_name", "engine_name")
    return [datapack]

# Multiple datapacks can be returned

@TransceiverFunction("engine_name")
def transceiver_function():
    datapack1 = JsonDataPack("datapack_name1", "engine_name")
    datapack2 = JsonDataPack("datapack_name2", "engine_name")
    return [datapack1, datapack2]
2.2.3 Transceiver Function and Preprocessing Function

1. Transceiver Function

Transceiver Functions are user-defined Python functions that take the role of transmitting DataPacks between engines. They are used in the architecture to convert, transform, or combine data from one or multiple engines and relay it to another. The definition of a Transceiver Function must use a decorator before the user-defined "def" transceiver function, which declares the target Engine the returned DataPacks are sent to:

@TransceiverFunction("engine_name")

To request DataPacks from engines, additional decorators can be prepended to the Transceiver Function, with the form (attention: the receive decorator must come before @TransceiverFunction):

@EngineDataPack(keyword_datapack, id_datapack)

• keyword_datapack: user-defined name for the received DataPack; this keyword is used as the input argument of the Transceiver Function.
• id_datapack: the ID of the DataPack received from a particular Engine, where "DataPack ID" = "DataPack name" + "Engine name" (examples in 2.2.2).

2. Preprocessing Function

The Preprocessing Function is very similar to the Transceiver Function but has a different usage. Preprocessing Functions are introduced to optimize expensive computations on DataPacks attached to a single engine. In some cases, it might be necessary to apply the same operations to a particular DataPack in multiple Transceiver Functions. An example of this might be applying a filter to a DataPack containing an image from a physics simulator. In order to execute this operation just once and let other TFs access the processed DataPack data, PreprocessingFunctions (PFs) are introduced. They show two main differences with respect to Transceiver Functions:

• Their output DataPacks are not sent to the corresponding Engines; they are kept in a local DataPack cache and can be used as input in TransceiverFunctions
• PFs can only take input DataPacks from the Engine they are linked to

The format of a Preprocessing Function is similar to that of a Transceiver Function:

@PreprocessingFunction("engine_name")
@PreprocessedDataPack(keyword_datapack, id_datapack)

The decorators "@PreprocessingFunction" and "@PreprocessedDataPack" must be used in Preprocessing Functions. Since the output of a Preprocessing Function is stored in the local cache and does not need to be processed on the Engine Server side, a Preprocessing Function can return any type of DataPack without restrictions.
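The division of labor between a Preprocessing Function and the Transceiver Functions that reuse its result can be illustrated with a simplified stand-in (plain Python; the decorators and the cache below only mock the idea, they are not NRP-core's real machinery): the expensive operation runs once per step, and several consumers read the cached DataPack.

```python
# Simplified mock of the PF/TF split: a "preprocessing function" runs an
# expensive operation once and caches the result; "transceiver functions"
# read from the cache. Stand-in for NRP-core behavior, not the framework.

datapack_cache = {}           # local datapack cache kept by the loop
call_counter = {"filter": 0}  # counts how often the expensive step runs

def preprocessing_function(name):
    """Run the decorated function and store its output in the local cache."""
    def decorator(func):
        def run(raw):
            datapack_cache[name] = func(raw)  # not sent to any engine
        return run
    return decorator

def transceiver_function(func):
    """Feed the decorated function from the cache instead of an engine."""
    def run(name):
        return func(datapack_cache[name])
    return run

@preprocessing_function("filtered_image")
def apply_filter(image):
    call_counter["filter"] += 1           # expensive operation, done once
    return [px * 2 for px in image]

@transceiver_function
def detect_objects(image):
    return max(image)

@transceiver_function
def log_image(image):
    return sum(image)

apply_filter([1, 2, 3])                   # PF executes once per step
print(detect_objects("filtered_image"))   # 6
print(log_image("filtered_image"))        # 12
print(call_counter["filter"])             # 1: both TFs reused the cached result
```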
2.2.4 Simulation configuration JSON file

The configuration details of any simulation, with its Engines and Transceiver Functions, are stored in a single JSON file. This file contains the engine and Transceiver Function objects as well as the necessary parameters to initialize and execute a simulation. It is usually written in the "example_simulation.json" file.

The JSON format used here is a JSON schema, which is highly readable and offers capabilities similar to XML Schema. The advantages of composability and inheritance allow the simulation to use reference keywords to define an agent and to validate inheritance by referring to other schemas. This means that the same engine base can create several agents or objects at the same time, distinguished only by their identifying IDs.

1. Simulation Parameters
For details, see appendix Table A.1: Simulation configuration parameters.

2. Example form
{
    "SimulationName": "example_simulation",
    "SimulationDescription": "Launch two python engines.",
    "SimulationTimeout": 1,
    "EngineConfigs":
    [
        {
            "EngineType": "python_json",
            "EngineName": "python_1",
            "PythonFileName": "engine_1.py"
        },
        {
            "EngineType": "python_json",
            "EngineName": "python_2",
            "PythonFileName": "engine_2.py"
        }
    ],
    "DataPackProcessingFunctions":
    [
        {
            "Name": "tf_1",
            "FileName": "tf_1.py"
        }
    ]
}
• EngineConfigs: this section lists all the engines participating in the simulation progress. Some important parameters must be declared:
– EngineType: the type of engine used for this engine, e.g. gazebo engine, python JSON engine
– EngineName: user-defined unique identification name for the engine
– Other parameters: these should be declared according to the type of engine (details see appendix Table A.2: Engine base parameters)
∗ Python JSON engine: "PythonFileName" - the Python script used as the base of the engine
∗ Gazebo engine: see the corresponding section
• DataPackProcessingFunctions: this section lists all the Transceiver Functions validated in the simulation progress. Usually two parameters must be declared:
– Name: user-defined identification name for the Transceiver Function
– FileName: the Python script file used as the base of the Transceiver Function
• Other simulation parameters: see section 2.2.4 - 1. Simulation Parameters
• Launching a simulation: the simulation configuration JSON file is also the launch file, and a simulation experiment is started with the following command:

NRPCoreSim -c user_defined_simulation_config.json

Tip: a user-defined simulation folder can contain several differently named configuration JSON files at the same time. This is very useful for configuring which engines or Transceiver Functions the user wants to launch and test. To start the target simulation experiment, just choose the corresponding configuration file.
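Because the configuration is plain JSON, it can be sanity-checked before launch with a few lines of standard Python. The sketch below parses a config of the form shown above and verifies a small, illustrative subset of the keys NRPCoreSim expects (it is not the full schema validation that NRP-core itself performs):

```python
import json

def check_simulation_config(text):
    """Parse a simulation config and do a minimal sanity check."""
    cfg = json.loads(text)
    # Illustrative subset of required keys, not the full NRP-core schema
    for key in ("SimulationName", "EngineConfigs", "DataPackProcessingFunctions"):
        if key not in cfg:
            raise ValueError(f"missing required key: {key}")
    # Engine names must be unique so DataPacks can be routed unambiguously
    names = [e["EngineName"] for e in cfg["EngineConfigs"]]
    if len(names) != len(set(names)):
        raise ValueError("EngineName values must be unique")
    return cfg

config_text = """
{
  "SimulationName": "example_simulation",
  "SimulationTimeout": 1,
  "EngineConfigs": [
    {"EngineType": "python_json", "EngineName": "python_1", "PythonFileName": "engine_1.py"},
    {"EngineType": "python_json", "EngineName": "python_2", "PythonFileName": "engine_2.py"}
  ],
  "DataPackProcessingFunctions": [
    {"Name": "tf_1", "FileName": "tf_1.py"}
  ]
}
"""

cfg = check_simulation_config(config_text)
print(cfg["SimulationName"])        # example_simulation
print(len(cfg["EngineConfigs"]))    # 2
```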
2.2.5 Simulation model file

In this experiment for autonomous driving on the NRP platform, the Gazebo physics simulator [5] serves as the world description simulator. The simulation world can be constructed with an "SDF" file based on the XML format, which describes all the necessary information about the 3D models in one file, e.g. sunlight, environment, friction, wind, landform, robots, vehicles, and other physics objects. This file can describe in detail the static or dynamic information of a robot, relative position and motion information, the declaration of sensor or control plugins, and so on. Gazebo is closely related to the ROS system and provides simulation components for ROS, so the ROS documentation describes many similar details about the construction of SDF files [6].

The components of the simulation world are described with labels in the XML format, which also build the dependence relationships between these components:
• World label

<sdf version='1.7'>
  <world name='default'>
    ........
  </world>
</sdf>

All components and their labels should be placed under the <world> label.
• Model labels

<model name='model_name'>
  <pose>0 0 0 0 -0 0</pose>
  <link name='road map'>
    .........
  </link>
  <plugin name='link_plugin' filename='NRPGazeboGrpcLinkControllerPlugin.so'/>
</model>

The description is placed under the <model> label. Importantly, if the user wants to use a plugin such as a control plugin or a sensor plugin (camera or lidar), the <plugin> label must be set under the corresponding <model> label. The <link> label describes the physics features of the model, like <collision>, <visual>, <joint>, and so on.
+ • 3-D models – mesh files
+
+ Gazebo requires that mesh files be formatted as STL, Collada, or OBJ, with Collada and OBJ being the preferred formats. Below are the file suffixes for each mesh file format:
+
+   Collada - .dae
+   OBJ - .obj
+   STL - .stl
+
+ Tip: the Collada and OBJ file formats allow users to attach materials to the meshes. Use this mechanism to improve the visual appearance of meshes.
+ A mesh file should be declared under the required label, such as <visual> or <collision>, with the layer structure <geometry> - <mesh> - <uri> (the URI can be an absolute or relative file path):
+
+   <geometry>
+     <mesh>
+       <uri>xxxx/xxxx.dae</uri>
+     </mesh>
+   </geometry>
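As an illustrative check (not part of the NRP tooling), the nesting described above can be verified with Python's standard xml.etree module, which parses the fragment and extracts the mesh URI:

```python
import xml.etree.ElementTree as ET

# SDF fragment following the <geometry> - <mesh> - <uri> nesting described
# above; "xxxx/xxxx.dae" is the same placeholder path used in the text.
sdf_fragment = """
<geometry>
  <mesh>
    <uri>xxxx/xxxx.dae</uri>
  </mesh>
</geometry>
"""

root = ET.fromstring(sdf_fragment)
uri = root.find("./mesh/uri").text
print(uri)  # -> xxxx/xxxx.dae
```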
+ 3 Simulation Construction on NRP-Core
+ Based on the steps for configuring a simulation on the NRP-Core platform, the autonomous driving benchmark can now be implemented with the components mentioned above, from 3D models to communication mechanisms. This section first introduces the requirements of the autonomous driving application, then analyzes the corresponding components and their functions, and finally presents the concrete implementation of these requirements.
+ In addition, this project investigates the possibility of achieving modular development for multi-agents on the NRP platform, comparing it with other existing and widely used systems, and analyzing the simulation performance according to the resulting progress.
+ 3.1 Analysis of requirements for autonomous driving application
+ An application for testing the performance of autonomous driving algorithms can address different aspects, since autonomous driving integrates different algorithms such as computer vision, object detection, decision-making and trajectory planning, vehicle control, or simultaneous localization and mapping. The concept and final goal of the application is to build a real-world simulation that integrates multi-agents, different algorithms, and corresponding evaluation systems for the performance of the autonomous driving vehicle.
+ However, this first requires many available, mature, and feasible algorithms. Second, the construction of the world's 3D models is a large project. And last, the evaluation system depends on the successful operation of the simulation. Therefore, the initial construction of the application focuses on the base model of the communication mechanism, first achieving the communication between a single agent and the object-detection algorithm under the process of NRP-Core. As for the vehicle control algorithm, which would react logically to the object detections and generate feasible control commands, this project skips that step and instead provides a specific trajectory along which the vehicle moves.
+ Requirements of the implementation:
+ • Construction of the base model frame for communication between the Gazebo simulator, object-detection algorithm, and control unit.
+ • Selection of a feasible object-detection algorithm
+ • Simple control system for autonomous movement of a high-accuracy physical vehicle model
+ 3.2 Object detection algorithm and YOLO v5 Detector Python Class
+ According to the above analysis, an appropriate existing object-detection algorithm should be chosen as the example to verify the communication mechanism of the NRP platform and, at the same time, to optimize performance.
+ Surveying existing object-detection work, from the basic AlexNet for image classification [7] and CNNs (convolutional neural networks) for image recognition [8], through the optimized ResNet [9] and the SSD multi-box detector [10], to the YOLOv5 neural network [11]: YOLOv5 offers high object-detection performance, and its efficient handling of frame images in real time also makes it a meaningful reference for testing other object-detection algorithms. Considering the requirements of autonomous driving, YOLOv5 is therefore a suitable choice as the experimental object-detection algorithm to integrate into the NRP platform.
+ Table Notes:
+ • All checkpoints are trained to 300 epochs with default settings and hyperparameters.
+ • mAPval values are for single-model single-scale on the COCO val2017 dataset. Reproduce by: python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65
+ • Speed averaged over COCO val images using an AWS p3.2xlarge instance. NMS times (~1 ms/img) not included. Reproduce by: python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45
+ • TTA (Test Time Augmentation) includes reflection and scale augmentations. Reproduce by: python val.py --data coco.yaml --img 1536 --iou 0.7 --augment
+ Requirements and Environment for YOLOv5:
+ • Quick link for YOLOv5 documentation: YOLOv5 Docs [12]
+ • Environment requirements: Python >= 3.7.0 and PyTorch [13] >= 1.7
+ • Integration of the originally trained YOLOv5 neural network parameters; the main backbone is unchanged compared to the initial version
+ Based on the original executable Python file “detect.py”, another Python file “Yolov5Detector.py” with a self-defined Yolov5Detector class interface is written in the “YOLOv5” package. To use YOLO v5, the main process should instantiate the YOLO v5 class, then use the warm-up function “detectorWarmUp()” to initialize the neural network. “detectImage()” is the function that sends the image frame to the main prediction function and finally returns the detected image with bounding boxes in NumPy format.
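The call order described above (instantiate, initialize, warm up, then detect per frame) can be sketched as follows. Note that this uses a stand-in stub class, since the real Yolov5Detector depends on the project's YOLOv5 package; the method names follow the description above, and the stub's return values are placeholders:

```python
import numpy as np

class Yolov5Detector:
    """Stand-in stub mirroring the interface described in the text."""
    def detectorInit(self):
        # The real class returns stride, names, pt, jit, onnx, engine, imgsz, device
        return 32, ["car"], True, False, False, None, 640, "cpu"
    def detectorWarmUp(self):
        pass  # the real class runs a dummy forward pass to warm up the network
    def detectImage(self, np_image, cv_image, needProcess=True):
        # The real class returns (image with bounding boxes, detections, extra)
        return cv_image, [], None

detector = Yolov5Detector()
stride, names, pt, jit, onnx, engine, imgsz, device = detector.detectorInit()
detector.detectorWarmUp()
frame = np.zeros((480, 736, 3), dtype=np.uint8)  # camera-sized dummy frame
detected, boxes, _ = detector.detectImage(frame.transpose(2, 0, 1), frame)
```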
+ 3.3 3D-Models for Gazebo simulation world
+ Given the performance of Gazebo, a large map is not suitable as the scope of the base environment world. Based on basic tests with different map sizes of the Garching area, the recommended environment world model encircles the area of the Parkring in Garching-Hochbrück. This map model is generated from high-accuracy satellite data and is very similar to the original location. During the simulation, the experimental vehicle moves around the main road of the Parkring.
+ The experimental vehicle is likewise a highly detailed vehicle model with independently controllable steering links for the diversion control of the two front wheels, freely rotating front and rear wheels, and a high-definition camera. To rebuild these models, the belonging relationship of each model must be declared in the SDF file. In the SDF file, these models, including base chassis, steering links, wheels, and camera, are “links” of the car “model” under the <model> label with a user-defined unique name. Attention: the name of a model or link must be specific and must not collide with the name of any other object.
+ The base architecture frame describing the physical relationships of the whole vehicle in the SDF file is shown below:
+
+ Figure 3.1: (a) Parkring Garching-Hochbrück high-accuracy map model; (b) experimental vehicle for the simulation
+   <model name='smart_car'>
+     <link name='base_link'>
+       .......
+     </link>
+     <link name='eye_vision_camera'>
+       .......
+     </link>
+     <joint name='eye_vision_camera_joint' type='revolute'>
+       <parent>base_link</parent>
+       <child>eye_vision_camera</child>
+       ......
+     </joint>
+     <link name='front_left_steering_link'>
+       .......
+     </link>
+     <joint name='front_left_steering_joint' type='revolute'>
+       <parent>base_link</parent>
+       <child>front_left_steering_link</child>
+       .......
+     </joint>
+     ......
+   </model>
+ 1. Description of Labels [6]:
+ • <link> — the corresponding model as a component of the overall model
+ • <joint> — description of the relationship between link components
+ • <joint type> — type of the joint:
+   – revolute — a hinge joint that rotates along the axis and has a limited range specified by the upper and lower limits.
+   – continuous — a continuous hinge joint that rotates around the axis and has no upper and lower limits.
+   – prismatic — a sliding joint that slides along the axis, and has a limited range specified by the upper and lower limits.
+   – fixed — this is not really a joint, because it cannot move; all degrees of freedom are locked. This type of joint does not require the <axis>, <calibration>, <dynamics>, <limits> or <safety_controller> elements.
+   – floating — this joint allows motion in all 6 degrees of freedom.
+   – planar — this joint allows motion in a plane perpendicular to the axis.
+ • <parent>/<child> — secondary labels as elements of the <joint> label; they declare the belonging relationship of the referred “links”
+
+ The mesh file “vehicle_body.dae” (shown in Fig. 3.1b, the blue car body) is used for the base chassis of the experimental vehicle under the <link name='base_link'> label. The mesh file “wheel.dae” is used for the rotatable vehicle wheels under <link name='front_left_wheel_link'> and the three other similar link labels. For the steering models, <cylinder> labels are used to generate simple cylinders (length 0.01 m, radius 0.1 m) as the joint elements between wheels and chassis.
+ 2. Sensor Label:
+ To activate the camera function in the Gazebo simulator, the camera model must declare, under the “camera link” label, a new secondary “sensor label” <sensor> with “name” and “type=camera” elements. The detailed construction of the camera sensor is shown in the script below:
+   <sensor name='camera' type='camera'>
+     <pose>0 0 0.132 0 -0.174 0</pose>
+     <topic>/smart/camera</topic>
+     <camera>
+       <horizontal_fov>1.57</horizontal_fov>
+       <image>
+         <width>736</width>
+         <height>480</height>
+       </image>
+       <clip>
+         <near>0.1</near>
+         <far>100</far>
+       </clip>
+       <noise>
+         <type>gaussian</type>
+         <mean>0</mean>
+         <stddev>0.007</stddev>
+       </noise>
+     </camera>
+     <always_on>1</always_on>
+     <update_rate>30</update_rate>
+     <visualize>1</visualize>
+   </sensor>
+ • <image> — this label defines the camera resolution, which is also the size of the frame image sent to the YOLO detector engine. According to the requirements of the YOLO detection algorithm, the width and height of the camera should be set to integer multiples of 32.
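As a small illustrative helper (not part of the NRP code base), a desired resolution can be rounded up to the nearest multiple of 32 before being written into the SDF file:

```python
def round_up_to_multiple(value: int, base: int = 32) -> int:
    """Round a resolution component up to the nearest multiple of `base`."""
    return ((value + base - 1) // base) * base

# The 736x480 resolution used above already satisfies the constraint:
width, height = round_up_to_multiple(730), round_up_to_multiple(480)
print(width, height)  # -> 736 480
```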
+ 3.4 Construction of Engines and Transceiver Functions
+ Figure 3.2: The system of autonomous driving on NRP
+ The construction of the whole project is regarded as an experiment on the NRP platform, and as such, the whole package of the autonomous driving benchmark is placed under the “nrp-core” path in the examples folder. Following the previously introduced NRP components for a simulation experiment, the application is likewise developed in a modular way according to the requirements of the autonomous driving benchmark application. The whole system frame is shown in Fig. 3.2. The construction of the simulation comprises two main branches:
+ • A closed loop that obtains the location information of the vehicle from the Gazebo engine and sends it to the vehicle control engine using Gazebo DataPacks (Protobuf DataPacks), then sends the joint control commands back to the Gazebo engine.
+ • An open loop that obtains the camera information from the Gazebo engine and sends it to the Yolo Detector Engine, finally using OpenCV to show the detected frame image in a monitor window.
+ 3.4.1 Gazebo plugins
+ Before the different pieces of information can be acquired, the corresponding plugins must be declared in the SDF file. These plugin labels act as recognition labels that let Gazebo know which information and parameters should be sent, received, and assigned. A set of plugins is provided to integrate Gazebo into an NRP-Core simulation. NRPGazeboCommunicationPlugin registers the engine with the SimulationManager and handles control requests for advancing the Gazebo simulation or shutting it down; its use is mandatory in order to run the engine. Two implementations of the Gazebo engine are provided: one based on JSON over REST and another on Protobuf over gRPC. The latter performs much better and is recommended. The gRPC implementation uses protobuf objects to encapsulate data exchanged between the Engine and TFs, whereas the JSON implementation uses nlohmann::json objects. Besides this fact, both engines are very similar in their configuration and behavior. The rest of the documentation below implicitly refers to the gRPC implementation, even though in most cases the JSON implementation shows no differences. The corresponding plugins are also based on Protobuf over the gRPC protocol. Four plugins are applied in the SDF model world file:
+ • World communication plugin – NRPGazeboGrpcCommunicationPlugin
+ This is the main communication plugin; it sets up a gRPC server and waits for NRP commands. It must be declared under the <world> label in the SDF file.
+
+   <world name='default'>
+     ...
+     <plugin name="nrp_world_plugin" filename="NRPGazeboGrpcWorldPlugin.so"/>
+     ...
+   </world>
+ • Activation of the camera sensor plugin – NRPGazeboGrpcCameraPlugin
+ This plugin is used to add a GazeboCameraDataPack datapack. In the SDF file, the plugin is named “smart_camera” (user-defined); this name can be accessed by TransceiverFunctions to get the corresponding information. The plugin must be declared under the <sensor> label, in this application under the camera sensor label:
+
+   <sensor name='camera' type='camera'>
+     ...
+     <plugin name='smart_camera' filename='NRPGazeboGrpcCameraControllerPlugin.so'/>
+     ...
+   </sensor>
+ • Joint control and message – NRPGazeboGrpcJointPlugin
+ This plugin is used to register GazeboJointDataPack DataPacks; in this case, only those joints that are explicitly named in the plugin will be registered and made available for control under NRP. Each joint's name must be unique and must be declared once again in the plugin. In contrast to the other plugins described above and below, when using NRPGazeboGrpcJointPlugin, DataPacks can be used to set a target state for the referenced joint; the plugin is integrated with a PID controller, and a controller can be configured per joint for better control performance.
+ This plugin must be declared under the corresponding <model> label, at the same level as the <joint> labels. Four joints are chosen for control: the rear left and right wheel joints and the front left and right steering joints. According to small tests of the physical model of the experimental vehicle in Gazebo, the PID controller parameters are listed in the block below:
+
+   <model name='smart_car'>
+     ...
+     <joint name="rear_left_wheel_joint">...</joint>
+     <joint name="rear_right_wheel_joint">...</joint>
+     <joint name="front_left_steering_joint">...</joint>
+     <joint name="front_right_steering_joint">...</joint>
+     ...
+     <plugin name='smart_car_joint_plugin' filename='NRPGazeboGrpcJointControllerPlugin.so'>
+       <rear_left_wheel_joint P='10' I='0' D='0' Type='velocity' Target='0' IMax='0' IMin='0'/>
+       <rear_right_wheel_joint P='10' I='0' D='0' Type='velocity' Target='0' IMax='0' IMin='0'/>
+       <front_left_steering_joint P='40000.0' I='200.0' D='1.0' Type='position' Target='0' IMax='0' IMin='0'/>
+       <front_right_steering_joint P='40000.0' I='200.0' D='1.0' Type='position' Target='0' IMax='0' IMin='0'/>
+     </plugin>
+     ...
+   </model>
+ Attention: Gazebo supports two target types that can be influenced: Position and Velocity. For the rear left and right wheels of the vehicle, the recommended type is “velocity”, and for the front left and right steering joints the recommended type is “position”, because the rear wheels are better controlled by velocity while the front steering uses an angle to describe the turning control.
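To illustrate how such P/I/D gains act on a joint, the sketch below implements a generic single-step PID update in Python. This is only a conceptual stand-in, not the controller inside the Gazebo plugin; the gains used (P=10, pure proportional control, as configured for the rear wheel joints) come from the block above:

```python
class PID:
    """Minimal PID controller sketch (illustrative, not the Gazebo plugin)."""
    def __init__(self, p, i, d):
        self.p, self.i, self.d = p, i, d
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured, dt):
        # Standard discrete PID update: P on error, I on accumulated error,
        # D on the error's rate of change.
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.p * error + self.i * self.integral + self.d * derivative

# Pure P control (I=0, D=0) as configured for the rear wheel joints:
wheel_pid = PID(p=10, i=0, d=0)
command = wheel_pid.step(target=2.0, measured=0.0, dt=0.01)
print(command)  # -> 20.0
```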
+ • Gazebo link information – NRPGazeboGrpcLinkPlugin
+ This plugin is used to register a GazeboLinkDataPack for each link of the experimental vehicle. Similar to the sensor plugin, this plugin must be declared under the <model> label, at the same level as the <link> labels, and it only needs to be declared once:
+
+   <model name='smart_car'>
+     ...
+     <plugin name='smart_car_link_plugin' filename='NRPGazeboGrpcLinkControllerPlugin.so'/>
+     ...
+     <link name='base_link'>...</link>
+     <link name='eye_vision_camera'>...</link>
+     <link name='front_left_steering_link'>...</link>
+     <link name='front_left_wheel_link'>...</link>
+     <link name='front_right_steering_link'>...</link>
+     <link name='front_right_wheel_link'>...</link>
+     <link name='rear_left_wheel_link'>...</link>
+     <link name='rear_right_wheel_link'>...</link>
+     ...
+   </model>
+ 3.4.2 State Transceiver Function “state_tf.py”
+ The State Transceiver Function acquires the location information from the Gazebo engine and transmits it to the Vehicle Control Engine to compute the next control commands. Receiving the location coordinates of the vehicle is based on the DataPack from Gazebo; this DataPack is already encapsulated in NRP, so the decorator only needs to indicate which link information should be loaded into the DataPack.
+
+   @EngineDataPack(keyword='state_gazebo', id=DataPackIdentifier('smart_car_link_plugin::base_link', 'gazebo'))
+   @TransceiverFunction("car_ctl_engine")
+   def car_control(state_gazebo):
+
+ In the experiment, the location coordinates of the base chassis “base_link” are chosen, addressed with the scoped name of the plugin declared in the SDF file. The received DataPack with the user-defined keyword “state_gazebo” is then passed into the Transceiver Function “car_control()”.
+ Attention: to guarantee that link information is obtained from Gazebo, it is recommended to add the following declaration at the top of the script, which lets NRP communicate accurately with Gazebo:
+
+   from nrp_core.data.nrp_protobuf import GazeboLinkDataPack
+ The link-information DataPack in NRP is called GazeboLinkDataPack, and its attributes are listed in Table 3.1. In this project, the “position” and “rotation” information is chosen and written into the JSON DataPack defined by the “car_ctl_engine” engine, and finally “returned” back to “car_ctl_engine”. The “JsonDataPack” function is used to obtain the DataPack defined in the other engine, in its own form, and to assign the corresponding parameters with the information received from Gazebo.
+
+   car_state = JsonDataPack("state_location", "car_ctl_engine")
+
+   car_state.data['location_x'] = state_gazebo.data.position[0]
+   car_state.data['location_y'] = state_gazebo.data.position[1]
+   car_state.data['qtn_x'] = state_gazebo.data.rotation[0]
+   car_state.data['qtn_y'] = state_gazebo.data.rotation[1]
+   car_state.data['qtn_z'] = state_gazebo.data.rotation[2]
+   car_state.data['qtn_w'] = state_gazebo.data.rotation[3]
+
+ Tip: the z-direction coordinate is not necessary, so only the x- and y-direction coordinates are included in the DataPack; this keeps the JSON DataPack smaller and makes the transmission more efficient.
+   Attribute   Description                   Python Type                     C Type
+   pos         Link Position                 numpy.array(3, numpy.float32)   std::array<float,3>
+   rot         Link Rotation as quaternion   numpy.array(4, numpy.float32)   std::array<float,4>
+   lin_vel     Link Linear Velocity          numpy.array(3, numpy.float32)   std::array<float,3>
+   ang_vel     Link Angular Velocity         numpy.array(3, numpy.float32)   std::array<float,3>
+
+ Table 3.1 GazeboLinkDataPack Attributes.
+ Tip: the rotation information from Gazebo is a quaternion, and its four parameters are ordered as “x, y, z, w”.
+ 3.4.3 Vehicle Control Engine “car_ctl_engine.py”
+ The Vehicle Control Engine is written in the form of a Python JSON Engine. The construction of a Python JSON Engine is similar to the definition of a Python class file, including attributes such as parameters or initialization and its functions. The class file should declare that this Python JSON Engine inherits from the class “EngineScript”, which lets NRP recognize the file as a Python JSON Engine to compute and execute. A Python JSON Engine can thus mostly be divided into three main blocks with def functions: def initialize(self), def runLoop(self, timestep_ns), and def shutdown(self).
+ • The initialize block defines the initial parameters and functions for the subsequent simulation. In this block, the corresponding DataPacks that belong to this specific engine should also be defined, with the “self._registerDataPack()” and “self._setDataPack()” functions:
+
+   self._registerDataPack("actors")
+   self._setDataPack("actors", {"angular_L": 0, "angular_R": 0, "linear_L": 0, "linear_R": 0})
+   self._registerDataPack("state_location")
+   self._setDataPack("state_location", {"location_x": 0, "location_y": 0, "qtn_x": 0, "qtn_y": 0, "qtn_z": 0, "qtn_w": 0})
+
+   – _registerDataPack(): registers the user-defined DataPack in the corresponding engine.
+   – _setDataPack(): given the corresponding name of a DataPack, sets the parameters, form, and value of the DataPack.
+
+ The generated actor control commands and the location coordinates of the vehicle in this project are properties of the DataPacks belonging to the “car_ctl_engine” engine.
+ • The runLoop block is the main block that is looped continuously during the simulation, which means any computation that depends on time and always needs updating is written in this block. The “car_ctl_engine” engine should always get the information from the Gazebo engine with the function “self._getDataPack()”:
+
+   state = self._getDataPack("state_location")
+
+   – _getDataPack(): given the user-defined name of the DataPack.
+     Attention: the name must be the same as the name in the Transceiver Function of the user-chosen DataPack that is sent back to the engine.
+
+ After the computation of the corresponding commands to control the vehicle, the function “_setDataPack()” is called once again to store the command information in the corresponding “actors” DataPack, where it waits for another Transceiver Function to fetch it:
+
+   self._setDataPack("actors", {"angular_L": steerL_angle, "angular_R": steerR_angle, "linear_L": rearL_omiga, "linear_R": rearR_omiga})
+
+ • The shutdown block is only called when the simulation is shutting down or when the engine raises errors during execution.
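The three-block structure described above can be sketched as follows. Note that the stand-in EngineScript base class below is defined only so the sketch is self-contained; in a real experiment the base class is provided by NRP-Core and must not be redefined:

```python
class EngineScript:
    """Stand-in base class for illustration; NRP-Core provides the real one."""
    def __init__(self):
        self._datapacks = {}
    def _registerDataPack(self, name):
        self._datapacks[name] = None
    def _setDataPack(self, name, data):
        self._datapacks[name] = data
    def _getDataPack(self, name):
        return self._datapacks[name]

class Script(EngineScript):
    def initialize(self):
        # Register and initialize the DataPacks owned by this engine
        self._registerDataPack("actors")
        self._setDataPack("actors", {"angular_L": 0, "angular_R": 0,
                                     "linear_L": 0, "linear_R": 0})
        self._registerDataPack("state_location")
        self._setDataPack("state_location", {"location_x": 0, "location_y": 0,
                                             "qtn_x": 0, "qtn_y": 0,
                                             "qtn_z": 0, "qtn_w": 0})

    def runLoop(self, timestep_ns):
        # Called every simulation step: read state, compute, write commands
        state = self._getDataPack("state_location")
        # ... control computation would go here ...
        self._setDataPack("actors", {"angular_L": 0.0, "angular_R": 0.0,
                                     "linear_L": 1.0, "linear_R": 1.0})

    def shutdown(self):
        pass  # called on simulation shutdown or on engine errors
```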
+ 3.4.4 Package for Euler-angle-quaternion Transform and Trajectory
+ • Euler-angle and quaternion transform
+ The rotation information received from Gazebo is a quaternion. It should be converted into Euler angles to conveniently compute the desired steering angle value according to the predefined trajectory. This package is called “euler_from_quaternion.py” and is imported in the “car_ctl_engine” engine.
+ • Trajectory and computation of the target relative steering angle
+ The predefined trajectory consists of many equally, proportionally divided point coordinates. Through the comparison of the present location coordinate and the target coordinate, the package obtains the desired distance and steering angle and checks whether the vehicle has arrived at the target. If the vehicle arrives within a radius of 0.8 m around the target location point, the vehicle is considered to have reached the present destination, and the index jumps to the next destination location coordinate, until the final destination. This package is called “relateAngle_computation.py”.
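A minimal sketch of both helpers is shown below, assuming the quaternion order “x, y, z, w” noted earlier and the 0.8 m arrival radius described above. The real packages are “euler_from_quaternion.py” and “relateAngle_computation.py”; the function names here are illustrative:

```python
import math

def yaw_from_quaternion(x, y, z, w):
    """Heading (yaw, rad) of the vehicle from an 'x, y, z, w' quaternion."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

def reached(x, y, target_x, target_y, radius=0.8):
    """True if the vehicle is within `radius` metres of the waypoint."""
    return math.hypot(target_x - x, target_y - y) < radius

# A quaternion rotated 90 degrees around z gives a yaw of pi/2:
print(yaw_from_quaternion(0, 0, math.sin(math.pi / 4), math.cos(math.pi / 4)))
print(reached(0.0, 0.0, 0.5, 0.5))  # -> True (distance ~0.71 m < 0.8 m)
```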
+ 3.4.5 Actors “Motor” Setting Transceiver Function “motor_set_tf.py”
+ This Transceiver Function is a communication medium similar to the State Transceiver Function; the direction of the data is now from the “car_ctl_engine” engine to the Gazebo engine. The data acquired from the “car_ctl_engine” engine is the DataPack “actors” with the keyword “actors”:
+
+   @EngineDataPack(keyword='actors', id=DataPackIdentifier('actors', 'car_ctl_engine'))
+   @TransceiverFunction("gazebo")
+   def car_control(actors):
+
+ The DataPacks for the Gazebo joints must be instantiated in this Transceiver Function with the “GazeboJointDataPack()” function. This function is specifically provided for Gazebo joint control; the given parameters are the corresponding joint name (declared with the NRPGazeboGrpcJointPlugin plugin name in the SDF file) and the target Gazebo engine (gazebo). (Attention: each joint should be registered as a new joint DataPack.)
+
+   rear_left_wheel_joint = GazeboJointDataPack("smart_car_joint_plugin::rear_left_wheel_joint", "gazebo")
+   rear_right_wheel_joint = GazeboJointDataPack("smart_car_joint_plugin::rear_right_wheel_joint", "gazebo")
+   front_left_steering_joint = GazeboJointDataPack("smart_car_joint_plugin::front_left_steering_joint", "gazebo")
+   front_right_steering_joint = GazeboJointDataPack("smart_car_joint_plugin::front_right_steering_joint", "gazebo")
+ The joint control DataPack is GazeboJointDataPack, and its attributes are listed in Table 3.2:
+
+   Attribute   Description                       Python Type   C Type
+   position    Joint angle position (in rad)     float         float
+   velocity    Joint angle velocity (in rad/s)   float         float
+   effort      Joint angle effort (in N)         float         float
+
+ Table 3.2 GazeboJointDataPack Attributes.
+ Attention: to guarantee that joint information is sent to Gazebo, it is recommended to add the following declaration at the top of the script:
+
+   from nrp_core.data.nrp_protobuf import GazeboJointDataPack
+ 3.4.6 Camera Frame-Image Transceiver Function “camera_tf.py”
+ The camera frame-image Transceiver Function acquires the single frame image gathered by Gazebo's internally installed camera plugin and sends this frame image to the YOLO v5 Engine “yolo_detector”. Receiving the camera image is based on the camera DataPack from Gazebo, called “GazeboCameraDataPack”. To get the data, the decorator should declare the corresponding sensor name, indicate the “gazebo” engine, and assign a new keyword for the next Transceiver Function:
+
+   @EngineDataPack(keyword='camera', id=DataPackIdentifier('smart_camera::camera', 'gazebo'))
+   @TransceiverFunction("yolo_detector")
+   def detect_img(camera):
+
+ Attention: to guarantee that camera information is acquired from Gazebo, it is recommended to add the following declaration at the top of the script, which imports GazeboCameraDataPack:
+
+   from nrp_core.data.nrp_protobuf import GazeboCameraDataPack
+ The received image JSON information consists of four parameters: height, width, depth, and image data. The attributes of the GazeboCameraDataPack are listed in Table 3.3:
+
+   Attribute      Description                           Python Type                                                          C Type
+   image_height   Camera Image height                   uint32                                                               uint32
+   image_width    Camera Image width                    uint32                                                               uint32
+   image_depth    Camera Image depth (bytes per pixel)  uint8                                                                uint32
+   image_data     Camera Image data (1-D pixel array)   numpy.array(image_height * image_width * image_depth, numpy.uint8)   std::vector<unsigned char>
+
+ Table 3.3 GazeboCameraDataPack Attributes.
+ The image data received from Gazebo is a 1-D array of pixels in unsigned-int-8 form, in a sequence of 3 channels. This Transceiver Function should therefore pre-process it with the NumPy “frombuffer()” function, which transforms the 1-D array into NumPy form:
+
+   imgData = np.frombuffer(trans_imgData_bytes, np.uint8)
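The effect of this conversion can be seen on a small synthetic buffer (an illustrative NumPy example, not project code):

```python
import numpy as np

# A 2x2 RGB image serialized as 12 raw bytes, as Gazebo delivers pixel data
raw_bytes = bytes(range(12))
img_data = np.frombuffer(raw_bytes, np.uint8)
print(img_data.shape)  # -> (12,)
# Reshaping to (height, width, channels) recovers the per-pixel channels:
print(img_data.reshape((2, 2, 3))[1, 1])  # last pixel's three channel values
```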
+ In the end, the JSON DataPack of the YOLO v5 Engine is instantiated, all information is set in the DataPack, and it is returned to the YOLO v5 Engine:
+
+   processed_image = JsonDataPack("camera_img", "yolo_detector")
+
+   processed_image.data['c_imageHeight'] = trans_imgHeight
+   processed_image.data['c_imageWidth'] = trans_imgWidth
+   processed_image.data['current_image_frame'] = imgData
+ 3.4.7 YOLO v5 Engine for Detection of the Objects “yolo_detector_engine.py”
+ The YOLO v5 Engine acquires the camera frame image from Gazebo via the camera Transceiver Function and detects the objects in the current frame image. In the end, the result is shown in another window through the OpenCV package. The YOLO v5 Engine is also based on the Python JSON Engine model and is similar to the Vehicle Control Engine in section 3.4.3. The whole structure is divided into three main blocks, with an additional step to import the YOLO v5 package.
+ • Initialization of the engine, establishing the “camera_img” DataPack and instantiating the YOLO v5 object with its specific pre-preparation via “detectorWarmUp()”:
+
+   self._registerDataPack("camera_img")
+   self._setDataPack("camera_img", {"c_imageHeight": 0, "c_imageWidth": 0, "current_image_frame": [240, 320, 3]})
+   self.image_np = 0
+
+   self.detector = Yolov5.Yolov5Detector()
+   stride, names, pt, jit, onnx, engine, imgsz, device = self.detector.detectorInit()
+   self.detector.detectorWarmUp()
+ • In the main loop, the first step is to acquire the camera image with "_getDataPack()".
+ The image data extracted from the JSON DataPack in the camera Transceiver Function
+ arrives as a 1-D Python list, so it must be restructured into the form expected by
+ OpenCV. First the list is converted into a NumPy ndarray and reshaped according to the
+ received height and width. OpenCV expects images in "BGR" channel order by default,
+ whereas Gazebo delivers "RGB", so an extra step reverses the channel axis of the ndarray
+ [14]. Finally, the NumPy-shaped image and the OpenCV-shaped image are passed together to
+ the detect function, which returns an OpenCV-shaped image with the object bounding boxes
+ drawn; this ndarray can be displayed directly in a window with OpenCV:
1269
+ # Image conversion
+ img_frame = np.array(img_list, dtype=np.uint8)
+ cv_image = img_frame.reshape((img_height, img_width, 3))
+ cv_image = cv_image[:, :, ::-1] - np.zeros_like(cv_image)
+ np_image = cv_image.transpose(2, 0, 1)
+
+ # Image detection by Yolo v5
+ cv_ImgRet, detect, _ = self.detector.detectImage(np_image, cv_image, needProcess=True)
+
+ # Show detected image through OpenCV
+ cv2.imshow('detected image', cv_ImgRet)
+ cv2.waitKey(1)
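The channel-reversal step in the listing above can be verified in isolation; the subtraction of `np.zeros_like()` serves only to turn the non-contiguous reversed view produced by `::-1` into a fresh contiguous array:

```python
import numpy as np

# One pure-red RGB pixel; reversing the channel axis yields BGR.
rgb = np.array([[[255, 0, 0]]], dtype=np.uint8)

# The zeros_like subtraction forces a contiguous copy of the
# reversed view that ::-1 alone would return.
bgr = rgb[:, :, ::-1] - np.zeros_like(rgb)
```

A contiguous array is what OpenCV's display and drawing routines expect, which is presumably why the copy is forced here.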
1288
+ 4 Simulation Result and Analysis of Performance
1289
+ Figure 4.1 Object-detection by Yolo v5 on NRP platform (right: another frame)
1292
+ The long-term goal of the Autonomous Driving Benchmark Platform is a realistic simulation
+ platform on which different AI algorithms for vehicles can be trained, researched, tested
+ and validated, then benchmarked and tuned according to their performance, and finally
+ deployed on a real vehicle. This project, "Autonomous Driving Simulator and Benchmark on
+ Neurorobotics Platform", is a first, tentative concept and foundation that explores the
+ feasibility of a multi-agent simulator on the NRP-Core platform. With the single vehicle
+ agent constructed above, the autonomous driving simulation experiment has been completed.
+ This section discusses the results and gives recommendations based on the performance of
+ the simulation on the NRP-Core platform and the Gazebo simulator.
1300
+
1301
1316
+ 4.1 Simulation Result of Object-detection and Autonomous Driving
1317
+ 4.1.1 Object Detection through YOLOv5 on NRP
1318
+ Object detection runs on the visual camera images from the Gazebo simulator using the
+ YOLO v5 algorithm, with NRP-Core as the transmission medium between Gazebo and the YOLO
+ v5 detector. The simulation result is shown in Fig. 4.1.
+ In terms of detection quality the result meets the expectations: most objects in the
+ camera frame are detected, although across frames the detections are not stable and
+ objects occasionally become "undetected". While most objects are correctly detected with
+ a high confidence score, e.g. persons between 80% and 93%, there are also a few detection
+ errors: flowering shrubs are detected as a car or a potted plant, a bush is detected as
+ an umbrella, and the bus in front of the vehicle is detected as a suitcase. Finally, even
+ though YOLO works on the NRP platform, the simulation is not smooth: the frame rate in
+ the Gazebo simulator is very low, only around 10-13 frames per second, dropping to about
+ 5 frames per second in more complex situations. This makes the simulation in Gazebo slow
+ and visibly stuttering, and the stutter worsens as the size and resolution of the camera
+ image increase.
1331
+ 4.1.2 Autonomous Driving along pre-defined Trajectory
1332
+ Autonomous driving along a pre-defined trajectory works well: the simulation runs
+ smoothly and the frame rate holds between 20-40 fps, which is within the tolerance of a
+ real-time simulation. Part of the trajectory of the experimental vehicle is shown in
+ Fig. 4.2; the vehicle can drive around Parkring and complete one full circle. In the
+ original design of the experiment, the vehicle would make the corresponding control
+ decisions based on the detection result, accelerating, braking, and steering to evade
+ obstacles. However, no suitable autonomous driving algorithm was available for this
+ project, so a pre-defined trajectory consisting of many waypoint coordinates is used
+ instead; the vehicle speed is fixed, and a PID controller tracks the trajectory to
+ achieve the simulated autonomous driving.
+ All 3-D models are built in proportion to the real size of the objects. After many tests
+ with world maps of different sizes, the Parkring map turned out to be close to the limit
+ of what Gazebo can handle, even though its complexity is not high. For larger maps the
+ FPS drops noticeably, until the simulation stutters and loses the impression of real
+ time.
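The PID-based trajectory tracking described above can be sketched minimally as follows; the gains, time step, and error signal (e.g. the lateral offset to the next waypoint) are illustrative assumptions, not the values used in the project:

```python
class PID:
    """Minimal discrete PID controller (illustrative, not the project's gains)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        # Accumulate the integral term and approximate the derivative
        # with a backward difference over one control period.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# A positive tracking error yields a positive correction, and the
# correction shrinks as the vehicle closes in on the waypoint.
pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.1)
out1 = pid.step(1.0)
out2 = pid.step(0.5)
```

In the simulator, the controller output would be mapped to the steering (angular velocity) command sent back to Gazebo, while the linear velocity stays fixed.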
1344
+ Figure 4.2 Simulation trajectory of autonomous driving (a, b)
1347
+ 4.1.3 Multi-Engines united Simulation
1348
+ The final experiment starts the YOLO v5 Engine and the autonomous driving control Engine
+ together. The experiments above each loaded only one Engine and performed relatively
+ well; the goal of this project, however, is also to explore the possibility of
+ multi-agent simulation.
+ The multi-Engine simulation does work: the YOLO v5 Engine detects the image and shows it
+ in a window while, at the same time, the vehicle drives automatically along the
+ trajectory. The simulation performance, however, is poor: the FPS only holds between
+ 9-11 fps, the vehicle in Gazebo moves slowly and not smoothly, and the simulation time
+ accumulates an enormous error compared to real time.
1355
+
1356
1357
+ 4.2 Analysis of Simulation Performance and Discussion
1358
+ 4.2.1 YOLOv5 Detection ratio and Accuracy
1359
+ Most of the objects near the vehicle in the camera's field of view are detected with
+ high confidence, but some errors appear during detection: some objects are detected as
+ the wrong class, and some distant objects are detected while some obviously close
+ objects are not. The reasons fall into the following aspects:
1362
+ 1. The integrated YOLO v5 algorithm is the original version: it was neither aimed at the
+ specific purpose of this autonomous driving project nor retrained for it. Its network
+ weights and object classes are the original ones, trained without a project-specific
+ data set, which creates a considerable gap between the detected result and the expected
+ performance and explains the detection errors described in section 4.1.1.
1367
+ 2. The accuracy and realism of the 3-D models and the environment. Object detection
+ depends strongly on the quality of the input image; quality here does not mean
+ resolution but the "reality" of the objects in the image. The original YOLO v5 model was
+ trained on real-world images, while the camera images from Gazebo are far from them: the
+ 3-D models and the environment in the Gazebo simulator are comparatively rough and
+ cartoon-like, with a large gap to real-world objects in lighting, surface texture and
+ reflection, and model accuracy. For example, the bus in Gazebo has such poor texture and
+ reflection that it appears as a black box, is hard to recognise, and is detected by the
+ YOLO Engine as a suitcase. The environment is not finely built either: the shrubs and
+ bushes at the roadside have a rough appearance with coarse triangles and obvious polygon
+ shapes. Such artefacts cause large mistakes and degrade the accuracy of the algorithms
+ under test.
1377
+ Figure 4.3 Difference between a real-world image and the virtual camera image (a, b)
1380
+ 3. The properties of the Gazebo simulator. Gazebo is well suited to small-scene
+ simulations such as a room, a fuel station, or a factory. Compared with other simulators
+ on the market such as Unity or Unreal, its advantage is the quick start-up for
+ reproducing a situation and environment. However, the upper limit of Gazebo's rendering
+ quality is far from the real world; a viewer immediately recognises the virtual
+ simulation, which strongly affects the training of object-detection algorithms. Building
+ the virtual world in Gazebo is also difficult and requires supporting applications such
+ as Blender [15]. Even when the world looks very realistic in Blender, the rendering
+ quality degrades badly after the transfer to Gazebo.
+ In fact, although the detection shows some mistakes and errors, the overall result and
+ performance are in line with the forecast that the YOLO v5 algorithm performs
+ excellently.
1390
+ 4.2.2 Multi-Engines Situation and Non-smooth Simulation Phenomenon
1391
+ Both the simulation with only the YOLO Engine loaded and the multi-Engine simulation
+ show poor performance, visible in the vehicle's movement and in the low FPS of the whole
+ simulation, while the simulation with only the vehicle-control Engine loaded works well
+ and runs smoothly. Comparison experiments show that the main reason for the poor
+ performance is the transmission mechanism between the Python JSON Engines on the NRP
+ platform. In the single vehicle-control simulation, the transmission from Gazebo uses
+ the Protobuf-gRPC protocol and the transmission back to Gazebo uses JSON, but the
+ transmitted data is very small: it consists only of control commands such as the linear
+ and angular velocity, which take little transmission capacity, so the JSON protocol
+ causes negligible overhead compared to Protobuf. The image transmission from Gazebo to
+ the Transceiver Function is likewise based on Protobuf-gRPC. The image transmission from
+ the Transceiver Function to the YOLO Engine via JSON, however, is very slow, since one
+ image amounts to hundreds of thousands of serialised pixel values; within the NRP
+ simulation loop this blocks the simulation and forces the system to wait until the image
+ transfer has finished. The transfer efficiency of the JSON protocol is thus the
+ bottleneck of the transmission; according to the tests, only reducing the camera
+ resolution makes the simulation meet its speed requirements.
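The JSON overhead described above can be made concrete with a quick measurement in plain Python (no gRPC involved); the 240 x 320 x 3 frame size matches the "camera_img" DataPack registered in section 3.4.7, and the uniform pixel value is a toy assumption:

```python
import json

# One dummy 240x320 RGB frame, serialised both ways.
height, width, depth = 240, 320, 3
pixels = [127] * (height * width * depth)

raw_size = len(bytes(pixels))        # binary payload: one byte per value
json_size = len(json.dumps(pixels))  # JSON payload: digits plus separators

print(raw_size, json_size)
```

For this payload the JSON string is five times the size of the raw byte buffer, before even counting the cost of parsing it on the receiving Engine.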
1408
+ 4.3 Improvement Advice and Prospect
1409
+ The autonomous driving simulator on NRP-Core achieves the first goal of building a
+ concept and foundation for multi-agent simulation. At the same time, the model is still
+ imperfect and has several shortcomings to be improved. The possibility of a realistic
+ simulator on the NRP-Core platform was also discussed: NRP-Core has large potential for
+ a complete simulation and for online cooperation with other platforms. The following
+ directions and recommendations address the further development of this application on
+ NRP.
1414
+ 4.3.1 Unhindered Simulation with Another Communication Protocol
+ As mentioned before, the problem with JSON-based communication is that the simulation is
+ currently not smooth and performs badly when the YOLO Engine is loaded. The transmission
+ of information through the Protobuf protocol, as used between Gazebo and the Transceiver
+ Functions, performs far better than the JSON protocol. The development group of NRP-Core
+ has been integrating the Protobuf-gRPC [16] communication mechanism on the NRP-Core
+ platform to solve the problem of large data transmission. In order to use YOLO or other
+ object-detection Engines, it is therefore recommended to replace the existing
+ communication protocol with Protobuf-gRPC. Protobuf is a free and open-source
+ cross-platform data format for serialising structured data, developed by Google; for
+ details see the official website [16].
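For illustration, a Protobuf schema for such a camera DataPack could be sketched as below; the message and field names are assumptions for this example, not the definitions that ship with NRP-Core:

```protobuf
syntax = "proto3";

package camera;

// Illustrative camera-frame message for a single Gazebo image.
message CameraFrame {
  uint32 image_height = 1;
  uint32 image_width  = 2;
  bytes  image_data   = 3;  // raw uint8 pixels, 3 channels interleaved
}
```

Because `bytes` carries the pixel buffer verbatim, the serialised message stays close to the raw image size instead of expanding into a textual list as JSON does.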
1424
+ 4.3.2 Selection of a Basic Simulator with Better Performance
+ Because of the limits of Gazebo's performance and functionality, many applications are
+ not easy to realise in it, such as weather and its changes, and the accuracy and realism
+ of the 3-D models are limited as well. Using high-accuracy models makes the load on
+ Gazebo heavier because the simulator's optimisation lags behind. There are, in fact,
+ several excellent simulators that also provide application development packages which
+ shorten the development period, such as Unity3D [17] or the Unreal Engine [18]. The team
+ of the autonomous driving simulator and benchmark has built an application demo on the
+ Unity3D simulator, and Fig. 4.4 shows the difference between Gazebo and Unity3D.
+ The construction and simulation in Unity3D have a much better, close-to-real-world
+ rendering quality than Gazebo, and the simulation FPS stays above 30 or even 60 fps. For
+ the YOLO v5 detection result, in line with the analysis of section 4.2.1, the result on
+ Unity3D is better than on the Gazebo simulator because of the more precise 3-D models
+ and their better rendering quality (see Fig. 4.5). It is therefore recommended to
+ develop the basic simulator and world representation on Unity3D or another game engine.
+ NRP-Core will in fact release a new version that integrates interfaces with Unity3D and
+ can use the Protobuf protocol to ensure better performance for a realistic simulation.
1439
+ 4.3.3 Comparison with Other Communication Systems and Frameworks
+ Many communication frameworks and systems are widely used in academia and industry for
+ robot development; in particular, ROS (Robot Operating System) already has many
+ applications and developments. ROS is widely used for robot development with different
+ algorithms: detection and computer vision, SLAM (Simultaneous Localization and Mapping),
+ motion control, and so on. It provides relatively mature and stable methods and schemes
+ for transmitting the necessary sensor data to the robot's algorithms and for sending the
+ corresponding control commands to the robot body or actuators. The reason for choosing
+ NRP-Core as the communication system, however, lies in its concepts of Engines and
+ Transceiver Functions. Compared with ROS and other frameworks, the NRP platform has many
+ advantages: it is very easy to build multi-agent simulations and to conveniently add
+ agents to, or remove them from, the simulation configuration; the
1449
+
1450
1451
+ Figure 4.4 Construction of the simulation world in Unity3D with weather application:
+ (a) Sunny, (b) Foggy, (c) Raining, (d) Snowy
1456
+ Figure 4.5 Comparison of the detection results on different platforms:
+ (a) Detection by YOLOv5 on Gazebo, (b) Detection by YOLOv5 on Unity3D
1459
+ management of information is easier to follow than the ROS topic system; the
+ transmission of information is theoretically more efficient and more modular; and the
+ platform can run ROS in parallel as an additional transmission method to match and adapt
+ to other systems or simulations. From this viewpoint the NRP platform generalises the
+ transmission of data and extends the boundary of robot development, making it more
+ modular and efficient. The ROS system can also realise multi-agent joint simulation but
+ is not convenient to manage through its "topic" system; it is currently better suited to
+ single-agent simulation and, as mentioned before, a truly interacting environment is not
+ easy to realise with it. NRP-Core has this potential because it can run the ROS system
+ at the same time and lets agents developed on ROS easily join the simulation, which
+ makes further development on the NRP-Core platform worthwhile.
1468
+ 5 Conclusion and Epilogue
+ This project focuses on the first construction of a basic framework on the Neurorobotics
+ Platform for an autonomous driving simulator and benchmark. Most of the functions,
+ including the template of the autonomous driving function and the object-detection
+ functions, are realised. The benchmark part is left for future work, as there are no
+ suitable standards yet and its development is a large project in itself.
1473
+
1474
1484
+ This project started by researching the basic characteristics needed to build a
+ simulation experiment on the NRP-Core platform. The requirements for constructing the
+ simulation were then listed, and each necessary component and object of NRP-Core was
+ given the basic key understanding and attention. The next step, following the NRP-Core
+ framework, was the construction of the autonomous driving simulator application: first
+ establishing the physical model of the vehicle and the corresponding environment in the
+ SDF file, then building the "closed loop" (autonomous driving based on PID control along
+ the pre-defined trajectory) and finally the "open loop" (object detection based on the
+ YOLO v5 algorithm), successfully achieving the goal of displaying the detected current
+ frame in a window, operated as a camera monitor. At last, the current problems and the
+ points of improvement were listed and discussed in this development document.
+ At the same time, many problems remain to be optimised and solved. At present the
+ simulation application can only be regarded as research into the feasibility of
+ multi-agent simulation. The performance of the scripts leaves much room for improvement,
+ and it is recommended to select a high-performance simulator as the carrier of the
+ realistic simulation. The NRP-Core platform has nevertheless shown enormous potential
+ for the construction of a simulated world with interacting objects and for efficiently
+ controlling and managing the whole simulation project. In conclusion, the NRP-Core
+ platform has great potential to achieve a multi-agent simulated world.
1499
+ References
1500
+ [1] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA: An open urban
1501
+ driving simulator. In Proceedings of the 1st Annual Conference on Robot Learning, pages 1–16, 2017.
1502
+ [2] Shital Shah, Debadeepta Dey, Chris Lovett, and Ashish Kapoor. Airsim: High-fidelity visual and physical simulation
1503
+ for autonomous vehicles. In Field and Service Robotics, 2017.
1504
+ [3] PTV Group. Ptv vissim. https://www.ptvgroup.com/en/solutionsproducts/ptv-vissim/.
1505
+ [4] Human Brain Project. Neurorobotics platform. https://neurorobotics.net/.
1506
+ [5] Nathan Koenig and Andrew Howard. Design and use paradigms for gazebo, an open-source multi-robot simulator.
1507
+ In 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)(IEEE Cat. No. 04CH37566),
1508
+ volume 3, pages 2149–2154. IEEE, 2004.
1509
+ [6] ROS Wiki. urdf/xml. https://wiki.ros.org/urdf/XML.
1510
+ [7] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural
1511
+ networks. Communications of the ACM, 60(6):84–90, 2017.
1512
+ [8] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv
1513
+ preprint arXiv:1409.1556, 2014.
1514
+ [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In
1515
+ Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
1516
+ [10] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C
1517
+ Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pages 21–37. Springer, 2016.
1518
+ [11] G. Jocher, K. Nishimura, T. Mineeva, and R. Vilarino. YOLOv5 by Ultralytics.
+ Available at: https://github.com/ultralytics/yolov5, 2020.
1520
+ [12] Yolov5 documentation. https://docs.ultralytics.com/.
1521
+ [13] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming
1522
+ Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library.
1523
+ Advances in neural information processing systems, 32, 2019.
1524
+ [14] G. Bradski. The OpenCV Library. Dr. Dobb’s Journal of Software Tools, 2000.
1525
+ [15] Blender Online Community. Blender - a 3D modelling and rendering package. Blender Foundation, Stichting
1526
+ Blender Foundation, Amsterdam, 2018.
1527
+ [16] Kenton Varda. Protocol buffers: Google’s data interchange format. Technical report, Google, 6 2008.
1528
+ [17] Unity Technologies. Real-time 3d tools and more. https://unity.com/.
1529
+ [18] Epic Games. Unreal engine. https://www.unrealengine.com/.
1530
+ A Appendix
1531
+
1532
1533
+ Name | Description | Type | Default | Array | Values
+ SimulationLoop | Type of simulation loop used in the experiment | enum | "FTILoop" | | "FTILoop", "EventLoop"
+ SimulationTimeout | Experiment timeout (in seconds); refers to simulation time | integer | 0 | |
+ SimulationTimestep | Time in seconds the simulation advances in each Simulation Loop; refers to simulation time | number | 0.01 | |
+ ProcessLauncherType | ProcessLauncher type to be used for launching engine processes | string | Basic | |
+ EngineConfigs | Engines that will be started in the experiment | EngineBase | | X |
+ DataPackProcessor | Framework used to process and relay datapack data between engines; available options are the TF framework (tf) and Computation Graph (cg) | enum | "tf" | | "tf", "cg"
+ DataPackProcessingFunctions | Transceiver and Preprocessing functions that will be used in the experiment | TransceiverFunction | | X |
+ StatusFunction | Status Function that can be used to exchange data between NRP Python Client and Engines | StatusFunction | | |
+ ComputationalGraph | List of filenames defining the ComputationalGraph that will be used in the experiment | string | | X |
+ EventLoopTimeout | Event loop timeout (in seconds); 0 means no timeout; if not specified 'SimulationTimeout' is used instead | integer | 0 | |
+ EventLoopTimestep | Time in seconds the event loop advances in each loop; if not specified 'SimulationTimestep' is used instead | number | 0.01 | |
+ ExternalProcesses | Additional processes that will be started in the experiment | ProcessLauncher | | X |
+ ConnectROS | If this parameter is present a ROS node is started by NRPCoreSim | ROSNode | | |
+ ConnectMQTT | If this parameter is present an MQTT client is instantiated and connected | MQTTClient | | |
+ Table A.1 Simulation configuration
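For reference, a minimal simulation configuration using the parameters of Table A.1 could look as follows; the values are illustrative, not taken from the project's experiment files:

```json
{
    "SimulationLoop": "FTILoop",
    "SimulationTimeout": 100,
    "SimulationTimestep": 0.01,
    "ProcessLauncherType": "Basic",
    "DataPackProcessor": "tf",
    "EngineConfigs": [],
    "DataPackProcessingFunctions": []
}
```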
1625
+
1626
1627
+ Name | Description | Type | Default | Required | Array
+ EngineName | Name of the engine | string | | X |
+ EngineType | Engine type; used by EngineLauncherManager to select the correct engine launcher | string | | X |
+ EngineProcCmd | Engine process launch command | string | | |
+ EngineProcStartParams | Engine process start parameters | string | [ ] | | X
+ EngineEnvParams | Engine process environment parameters | string | [ ] | | X
+ EngineLaunchCommand | LaunchCommand with parameters that will be used to launch the engine process | object | "LaunchType": "BasicFork" | |
+ EngineTimestep | Engine timestep in seconds | number | 0.01 | |
+ EngineCommandTimeout | Engine timeout (in seconds); how long to wait for the completion of the engine runStep; 0 or negative values are interpreted as no timeout | number | 0.0 | |
+ Table A.2 Engine Base Parameters
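Following Table A.2, a single entry of the "EngineConfigs" array could be sketched as below; the engine name matches the YOLO engine used in this project, while the engine type string and the remaining values are illustrative assumptions:

```json
{
    "EngineName": "yolo_detector",
    "EngineType": "python_json",
    "EngineTimestep": 0.01,
    "EngineCommandTimeout": 0.0,
    "EngineLaunchCommand": { "LaunchType": "BasicFork" }
}
```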
1676
+
FNAyT4oBgHgl3EQfSffh/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
FtAzT4oBgHgl3EQfUfxu/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0f9e096eca6c84b6d171286dd78bfbd11810401bbcdce04840abc5483aa8888b
3
+ size 5767213
FtFJT4oBgHgl3EQfDSx3/content/2301.11433v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ebd87e49195cb2fcdf017cd79847d91ab63d1012e89c747bf72f2f0b7f21b7d8
3
+ size 6388485
FtFJT4oBgHgl3EQfDSx3/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:22c1b5d2697b658d39b05273f74bec5ea2eaeefb754f99df4a1a64c79110b559
3
+ size 7077933
FtFLT4oBgHgl3EQfGS-3/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8e2c3860bed31e2fd7993d450f7b2b824f83a43ad399e56528258688c304be24
3
+ size 73819
G9E1T4oBgHgl3EQf_QZd/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e5cac600d23ceb40ee22fd850b148121e5991d9cfdad9bba47c398534470dc92
3
+ size 11141165
IdFIT4oBgHgl3EQfYSv6/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a2546f2639cbe756254b86c145f38c6ffb69ed47f3a127f60774f1b2f5f32119
3
+ size 137689
JtAyT4oBgHgl3EQff_gI/content/tmp_files/2301.00348v1.pdf.txt ADDED
@@ -0,0 +1,870 @@
 
 
 
 
1
+ Fibrous thermoresponsive Janus membranes for directional
2
+ vapor transport
3
+ Anupama Sargur Ranganatha, Avinash Bajia,b, Giuseppino
4
+ Fortunatoc, René M. Rossic
5
+ aPillar of Engineering Product Development, Singapore University of Technology and
6
+ Design, Singapore 487372
7
+ bManufacturing Engineering, LA TROBE University, Melbourne Victoria 3086, Australia
8
+ c Empa, Swiss Federal Laboratories for Materials Science and Technology, Laboratory for
9
+ Biomimetic Membranes and Textiles, CH-9014 St. Gallen, Switzerland
Abstract

Wearing comfort of apparel is highly dependent on moisture management and the respective transport properties of the textiles. In textiles used today, water vapor transmission (WVT) depends primarily on the porosity and the wettability of the clothing layer next to the skin and does not adapt or respond to environmental conditions. The WVT is inevitably the same from both sides of the membrane. In this study, we propose a novel approach by developing a thermoresponsive Janus membrane using electrospinning procedures. We targeted a membrane as a bilayer composite structure using polyvinylidene fluoride (PVDF) as one layer and a blend of PVDF and thermoresponsive poly(N-isopropylacrylamide) (PNIPAM) as the second layer, whose wettability changes in the range of physiological temperatures. Tailored electrospinning conditions led to a self-standing membrane incorporating fiber diameters of 400 nm and porosities of 50% for both layers within the Janus membrane. The WVT studies revealed that the combined effects of the Janus membrane's directional wettability and the temperature-responsive property result in temperature-dependent vapor transport. The results show that the membrane offers minimum resistance to WVT when the PVDF side faces the skin, which represents the side with high humidity, over a range of temperatures. However, the same membrane shows a temperature-dependent WVT behavior when the blend side faces the skin. From a room temperature of 25 °C to an elevated temperature of 35 °C, there is a significant increase in the membrane's resistance to WVT. This behavior is attributed to the combined effect of the Janus construct and the thermoresponsive property.

This temperature-controlled differential vapor transport offers ways to adapt vapor transport independently of environmental conditions, leading to enhanced wearing comfort and performance in fields such as apparel or the packaging industry.
Introduction

Tailored protective clothing is one of the oldest but still actively researched fields for improving the performance and comfort of textiles.1, 2 Practical end-uses, including firefighter protective clothing, diving and space suits, as well as raincoats, drive the research in protective clothing. Typically, a wearer expects the garment to be both functional and comfortable for the required end-use.

Sweat and heat transport of apparel describe the thermal comfort aspect,3 which is a tradeoff between the performance and comfort properties of the clothing.4 For example, rain ponchos have one of the best waterproofing abilities as they are impermeable to water, but they are uncomfortable to wear as sweat cannot diffuse out. Modern apparel employs a combination of novel chemistry and material structure to attain performance and comfort.5-9 Even though performance levels have improved from what they were decades ago, the comfort aspect still depends on the surrounding environment. Other than touch, which is a qualitative factor, sweat transport, measured by water vapor transmission (WVT) across the fabric, is considered a quantitative measure of comfort in apparel. In a practical scenario, sweat transmission combines water and vapor transport, depending on the person's activity level. Liquid sweat transmission is required during high activity levels. In contrast, vapor transmission is needed during all activity levels and is considered a measure of the fabric's comfort. It is often reported as a measure of comfort for new material systems.10-12
WVT is primarily driven by the partial vapor pressure difference across the membrane and usually follows Fick's law of diffusion when a system is in a steady state. The temperature and humidity of the local environment govern the partial vapor pressure, as expressed in Eq. 1. Therefore, WVT is achieved much better in arid13 compared to humid regions.

Pp = H · 6.11 · e^(17.67·T / (T + 243.6)) / 100    (1)

where Pp is the partial water-vapor pressure, H is the relative humidity in %, and T is the temperature in °C.
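As a quick illustrative sketch (not part of the original study), the Magnus-type approximation of Eq. 1 can be evaluated numerically; the function name and the example temperatures below are our own choices:

```python
import math

def partial_vapor_pressure(T_celsius, rh_percent):
    """Partial water-vapor pressure (hPa) from Eq. 1:
    saturation pressure (Magnus-type fit) scaled by relative humidity."""
    saturation = 6.11 * math.exp(17.67 * T_celsius / (T_celsius + 243.6))
    return rh_percent * saturation / 100.0

# At the same relative humidity, a warmer environment has a much larger
# partial pressure, which is why WVT is easier in warm, arid conditions.
print(partial_vapor_pressure(25.0, 50.0))  # ≈ 15.8 hPa
print(partial_vapor_pressure(35.0, 50.0))  # ≈ 28.1 hPa
```

The roughly 12 hPa gap between the two calls illustrates how strongly temperature drives the vapor-pressure gradient that powers diffusion.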
Free water vapor diffusion is often insufficient to increase comfort in humid regions, and forced ventilation is required to pump out the sweat. Attaching devices to circulate air to remove sweat is cumbersome and practically inconvenient. Therefore, research on a material that can steer, adapt, and respond to environmental changes and pump out liquid sweat without external support is essential.
Electrospun fibrous membranes exhibit good water vapor permeability and wind resistance. Gibson et al. reported that water vapor quickly diffuses out through an electrospun membrane due to its large porosity. On the other hand, the large surface area of the nanofibrous layer resists convective wind flow.14 Furthermore, to improve performance, electrospun membranes from selected polymers such as polyacrylonitrile (PAN) and polyurethane (PU) were modified to enhance their tensile stress, breaking elongation, and waterproofing ability with tailored vapor permeability.15-17 Selected studies evaluated the vapor permeability of multilayered membranes18, combinations of electrospun and woven textiles19, and electrospun hydrophilic/hydrophobic layers.20 Multilayered membrane samples investigated by Mukhopadhyay et al.18 showed that the water vapor transmittance depends on the porosity and pore size of the middle layer of polyester fleece/polyester spacer, even when the porosity of the innermost and outermost layers is constant in all the tested samples. Therefore, the multilayer system with a highly porous middle layer exhibited larger WVT due to increased overall porosity. Investigation of a woven fabric coated with an electrospun fibrous mat by Bagherzadeh et al.19 showed that the electrospun layer did not impede the water vapor permeability of the woven fabric. In another investigation of a multilayer electrospun membrane, Gorji et al.20 revealed the effect of incorporating graphene oxide in a hydrophilic matrix layered adjacent to a polyurethane fibrous membrane. The water vapor permeability decreased with a higher hydrophilic-layer weight. Increasing the graphene oxide content from 0.1 to 0.4% in the acrylamide-based hydrophilic polymer reduced the water solubility of the polymer and consequently increased the water vapor permeability. However, studies on multilayer systems did not investigate the direction of the vapor transmission across the thickness of the membranes. As previous studies revealed that vapor diffusion through a microporous membrane is porosity dependent, the vapor transmission from either side of the membrane along its thickness is assumed to be constant.
Introducing heterogeneity in the membrane chemistry across the thickness, without altering the porosity, yields directionality in the membrane's properties. For example, combining hydrophobic and hydrophilic materials in layers within a single membrane produces directional water flow. Such membranes, with faces of different chemistry, are termed Janus membranes21, 22 and have attracted considerable attention from researchers. In one such system, water drops flow from the hydrophobic side to the hydrophilic side but not from the hydrophilic to the hydrophobic side. The flow from the hydrophobic side is due to the hydrophilic layer underneath the hydrophobic one, which pulls the droplet across the membrane. The Laplace pressure difference, along with the thickness of the membrane, explains this mechanism.23
Another class of materials, i.e., environmentally responsive polymers such as poly(N-isopropylacrylamide) (PNIPAM), are termed 'smart' due to their switchable properties in response to environmental cues.24 Figure 1 shows the change in molecular conformation of PNIPAM with environmental temperature. At room temperature, the carbonyl and amide groups of PNIPAM are exposed and form hydrogen bonds with the surrounding water vapor. At elevated temperatures, however, these hydrogen bonds break, causing intramolecular bonding between carbonyl and amide groups of adjacent monomer units. This coil conformation is relatively hydrophobic compared to the extended conformation at room temperature, which is hydrophilic.25-27

PNIPAM-based hydrogels can be coated on textiles such as cotton or nylon 6 fabrics and exhibit thermoresponsive behavior. The WVT studies by Stular et al. and Verbič et al. show less vapor transmission at ambient temperature in comparison with WVT at an elevated temperature of 40 °C. The swelling of PNIPAM reduces the porosity and, in turn, reduces the vapor transmission at ambient temperature.
Independent research on responsive materials24, 28, 29 and Janus constructs30-32 shows high potential for smart and self-sustaining systems with directional liquid transport. Our current study shows the result of combining a responsive material such as PNIPAM in a Janus construct. PNIPAM is blended with PVDF in a 25:75 wt% ratio to minimize the effect of swelling while retaining the thermoresponsive behavior. The blend and pristine PVDF are electrospun in layers to obtain a Janus construct. Consequently, the electrospun Janus membrane has PVDF on one face and the blend on the other. We use two independent experimental approaches to assess and confirm the WVT performance of the membrane. For the first time in the literature, we show that WVT is preferentially higher in one direction within a given membrane. The directionality is plausibly due to the 'passive pumping' action of the Janus membrane, combined with the thermoresponsive property of the hydrophilic layer.
141
+
142
+
143
+
Figure 1: Change in the molecular conformation of poly(N-isopropylacrylamide) in response to the change in environmental temperature.
Materials and methods

Materials

PVDF pellets with a molecular weight of 180,000 g/mol and PVDF powder with a molecular weight of 530,000 g/mol were procured from Sigma-Aldrich, Switzerland. PNIPAM powder with a molecular weight of 300,000 g/mol was purchased from Scientific Polymer Products Inc. N,N-dimethylformamide (DMF, 99.5%) was obtained from Sigma-Aldrich, Switzerland.

Methods

Membrane fabrication

The PVDF solutions were prepared by dissolving 33 wt% of PVDF (180,000 g/mol) pellets in DMF at 60 °C overnight. The blend solutions were prepared by dissolving 18 wt% of the polymer mixture, i.e., PVDF (530,000 g/mol) and PNIPAM (300,000 g/mol) in a 75/25 w/w ratio, in DMF; the solutions were magnetically stirred at 60 °C on a hot plate overnight and subsequently cooled to room temperature before electrospinning.
160
+
Needle-based electrospinning

The PVDF or blend solution was loaded into a plastic syringe with a 21G blunt needle (0.8 mm inner diameter). The flow rate of the polymer solution was set to 0.5 ml·h-1 using a flow pump. The needle tip was connected to a voltage of 10 kV and the collector plate to a voltage of -5 kV. The working distance between the needle and the flat-plate aluminum collector was 12 cm. The aluminum collector was covered with silicone paper for easy peeling off of the electrospun membranes.
168
+
Needle-less electrospinning (Nanospider™)

Nanospider™ (Elmarco, Czech Republic) is a needle-less electrospinning technology with upscaling capability. Figure 2 shows a schematic of the procedure. The following spinning parameters were used for homogeneous spinning: the vertical gap between the two wires was 25 cm, with the bottom wire held at a voltage of +60 kV and the top wire at -10 kV. The traversing carriage, moving at a speed of 270 mm/s along the bottom wire, housed a pinhole of 0.5 mm, which controls the volume of polymer solution trailed on the bottom wire. The collector paper, moving at a speed of 18 mm/min, is placed right below the top wire.

After 20 minutes of electrospinning, the paper is unrolled to the starting point to electrospin in the same area. Five such repetitions build a thick and wide membrane with a surface area of 500 mm2. The fabrication of the blend and PVDF layers followed the same procedure. Four Janus membranes were prepared and tested for water-vapor resistance using the sweating guarded hotplate, elaborated in the following sections.
184
+
Figure 2: Needle-less electrospinning setup used to develop sub-micron-sized fibers at the pilot-scale level. A carriage with polymer solution traverses the bottom metallic wire (+60 kV) and leaves a trail of solution droplets on this wire. The potential difference between the wires draws the fibers from these droplets. The silicone collector paper, placed right below the top metallic wire (-10 kV), collects the fibers.
191
+
Material characterization

The viscosity of the polymer solutions was evaluated using a Physica MCR301 rheometer (Anton Paar, Graz, Austria) with a plate-cone geometry. The shear viscosity of the polymer solutions was assessed as a function of the shear rate. The spinning solutions' electrical conductivity was measured using a Metrohm 660, Switzerland. The electrospun membranes were examined using scanning electron microscopy (SEM; Hitachi S-4800, Hitachi High-Technologies, USA & Canada) at 2 kV accelerating voltage and 10 mA current flow. Before the SEM measurements, the samples were sputtered with 8 nm of Au-Pd using a sputter coater (Leica EM ACE600, Leica Microsystems, Germany) to increase the conductivity.
204
+
Sweating guarded hotplate

The sweating guarded hotplate used to determine the resistance of the membrane to WVT follows ISO 11092.33 It is often referred to as the "skin model" as it simulates the heat and vapor transfer processes next to the skin. Primarily, it consists of an electrically heated porous plate that simulates the thermoregulation of the skin (Figure 3). Heat loss is avoided by guards underneath and on both sides of the hot plate, which are heated to the same temperature as the porous plate. A water-circulating system feeds the heated plate to produce vapor by evaporation. The system underneath the plate measures the heating power required to maintain the temperature of the plate. The measurement is carried out in a controlled environment as it involves combinations of temperature, relative humidity, and wind speed.
217
+
Figure 3: Schematic of the sweating hotplate instrument. a) Photographic image of the device. b) Airflow tangential to the mounted test sample. c) The parts in layers. The circulation pump supplies water to the electrically heated porous plate for evaporation. Cellophane is a waterproof but vapor-permeable layer that transmits the evaporating vapor from the plate. The test membrane lies flat on the cellophane-covered heated plate and is held in place by a frame on all sides.
The membrane is placed on an electrically heated plate, covered with a saturated cellophane sheet permeable to vapor but impermeable to water. Air is tangentially blown across the membrane surface to maintain a constant vapor pressure gradient. This setup permits only the vapor from the plate to pass through the fibrous membrane and prevents water from wetting the fibrous membrane. The heat flux required to maintain the saturated vapor pressure is a measure of the membrane's resistance to vapor permeability. The expression for the resistance to vapor permeability is as follows:

Ret = (pm − pa) / (H − ΔHc)    (2)

Ret: water-vapor resistance in m2·Pa/W
pm: the saturation water-vapor partial pressure in Pascal (Pa) at the surface of the heated plate at its temperature in °C
pa: the water-vapor partial pressure in Pa of the air in the test enclosure at the air temperature in °C
H: heating power supplied to the measuring unit in W
ΔHc: baseline error correction term for the heating power in the measurement of the water-vapor resistance Ret, used as a reference value for ambient conditions

The boundary-layer resistance of the cellophane layers is the baseline measurement of the system. The software deducts the resistance of the boundary layer from the experimental results for subsequent measurements. Thus, the instrument directly calculates the resistance offered by the fibrous membranes.
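The working principle of Eq. 2 — the vapor-pressure drop across the sample divided by the net heating power — can be sketched in a few lines. This is our illustrative code, not the instrument software, and the numeric values in the example are hypothetical:

```python
def water_vapor_resistance(p_m, p_a, heating_power, baseline_correction):
    """Water-vapor resistance Ret (m^2·Pa/W) per Eq. 2:
    partial-pressure difference over net heating power."""
    return (p_m - p_a) / (heating_power - baseline_correction)

# Hypothetical values: p_m = 5620 Pa (saturated plate), p_a = 2250 Pa ambient,
# 12 W supplied with a 2 W baseline correction.
print(water_vapor_resistance(5620.0, 2250.0, 12.0, 2.0))  # 337.0
```

Note that a sample demanding more heating power to hold the plate temperature (larger denominator) yields a lower Ret, i.e., it resists vapor transport less.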
OptiCal double-chamber method

OptiCal from Michell Instruments is a relative humidity (RH) and temperature calibrator that uses an optical sensor for high-precision measurements. It is used here to build a setup that determines the WVT of the membrane. Figure 4 shows the setup schematic, which consists of a temperature-controlled chamber with a sealed container for the test membrane. A reservoir draws water from a tube placed on a weighing scale to maintain its level and ensure a controlled RH. A climatic chamber with controlled temperature and RH houses the entire setup. Computer-connected software controls the environmental conditions. In parallel, other software records the weighing-scale measurements; the weight reduction due to water flow into the OptiCal chamber is directly associated with vapor diffusion through the membrane.
261
+
Figure 4: Schematic of the double-chamber setup. Test conditions — below the LCST: (a) inside 30 °C & 80% RH, outside 30 °C & 40% RH; (b) inside 20 °C & 80% RH, outside 30 °C & 40% RH. Above the LCST: inside 40 °C & 60% RH, outside 30 °C & 40% RH. ET: external sensor for temperature and RH; IT: internal sensor for temperature and RH.

Measurement of water vapor permeability started after the stabilization of the environmental conditions. The water vapor passes through the membrane (∅ = 0.06 m) from the OptiCal chamber. As the RH in the OptiCal chamber drops, water from the reservoir evaporates to maintain the desired RH. Water from the tube on the scale flows to the reservoir to maintain the water level in the reservoir. The reduced amount of water in the tube is weighed by the scale, and the weight loss is recorded in real time. Water vapor permeance is the weight loss over a defined period for a unit partial vapor pressure difference across the membrane. The environmental conditions between the inside and outside instruments govern this partial vapor pressure difference. An external RH and temperature sensor from MSR® with a built-in data logger measured the test conditions outside the membrane surface. Thermocouple wires (0.05") were embedded into the fibrous membranes to record the actual temperature at the surface and at the interface of the two layers of the Janus membrane. The expression for the partial vapor pressure as a function of temperature and humidity is given by the following equations.34, 35
Pp = H · 6.11 · e^(17.67·T / (T + 243.6)) / 100    (3)

Pp is the partial pressure, H is the humidity in %, and T is the temperature in °C.

WVP = W / (t · (Pp,in − Pp,out) · A)    (4)

WVP is the water vapor permeability in g/(h·m2·mbar), W is the water loss in grams, t is the time in hours, Pp,in − Pp,out is the water vapor partial pressure difference in mbar between inside and outside conditions, and A is the membrane area in m2.
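Eqs. 3 and 4 together turn the recorded weight loss into a permeability. A minimal sketch in Python under our own assumptions: the function names, weight loss, and duration below are hypothetical illustration values, while the 0.06 m membrane diameter and the 30 °C / 80% vs. 30 °C / 40% RH conditions come from the setup description:

```python
import math

def partial_pressure_mbar(T, H):
    """Eq. 3: partial water-vapor pressure (mbar) at T (deg C) and H (% RH)."""
    return H * 6.11 * math.exp(17.67 * T / (T + 243.6)) / 100.0

def water_vapor_permeability(weight_loss_g, hours, T_in, H_in, T_out, H_out, area_m2):
    """Eq. 4: WVP in g/(h*m^2*mbar) from the recorded weight loss."""
    dp = partial_pressure_mbar(T_in, H_in) - partial_pressure_mbar(T_out, H_out)
    return weight_loss_g / (hours * dp * area_m2)

# Membrane diameter 0.06 m -> circular test area; below-LCST condition (a).
area = math.pi * 0.03 ** 2
wvp = water_vapor_permeability(1.5, 4.0, 30.0, 80.0, 30.0, 40.0, area)
print(wvp)
```

In this isothermal condition the pressure difference comes entirely from the RH gap, which is the quantity the double-chamber method holds constant while the scale logs the weight loss.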
Figure 4 lists the testing conditions of the experiments. It was possible to maintain isothermal conditions in the system below the lower critical solution temperature (LCST). As the recommended operating temperature of the OptiCal is below 30 °C, it was not possible to maintain isothermal conditions above the LCST. However, to ensure that the membrane is above the LCST and to minimize thermal gradients, constant temperatures of 30 °C and 40 °C were maintained outside and inside the chamber, respectively.
318
+
Results and discussion

The electrospun membranes with Janus construct were fabricated using needle-less and needle-based electrospinning setups (see Table 1). The needle-based electrospinning setup allowed us to incorporate thermocouples between the layers and just below the surface. Therefore, the surface temperature and the temperature between the layers were precisely measured for the double-chamber method. These measured temperatures were used to calculate the vapor pressure gradient across the membrane.

The PVDF solution was electrospun on top of the electrospun blend membrane to produce a two-layered Janus construct. The blend solution with a concentration of 18 wt% had a shear viscosity of 2.38 Pa·s at a shear rate of 1/10 s and a conductivity of 5.16 μS/cm. Similarly, the PVDF solution had a shear viscosity of 1.2 Pa·s and a conductivity of 32 μS/cm. The SEM examination shows a smooth fiber morphology with diameters of 0.2-0.4 μm for needle-less electrospun fibers and 0.2-0.6 μm for needle-based electrospun fibers. These values indicate no dimensional difference between the fibrous webs produced by needle-based and needle-less electrospinning methods. The specific weight of the Janus membranes is 30-40 GSM, a lightweight fabric category. However, for electrospun or thin-film membranes, this weight range indicates a heavyweight membrane suitable for practical applications.36
Table 1: Polymeric solution parameters and their corresponding electrospun membrane properties.

Polymer (MW, Da)                 | Wt% (w/w) | Shear viscosity, Pa·s (at 1/10 s) | Conductivity, μS/cm | Duration, no. of 20-min cycles | Fiber diameter, nm | Thickness, μm | Porosity, %
PVDF(530K)/PNIPAM(300K) (75/25)  | 18        | 2.4                               | 5.16                | 5                              | 502 ± 193.8        | 103.3 ± 5.4   | 49.7 ± 1.8
PVDF (180K)                      | 30        | 1.2                               | 32                  | 3                              | 139.2 ± 76.2       | 67.7 ± 4.7    | 49.6 ± 3.3
376
+
377
+
378
+
Figure 5: SEM micrographs of the four Janus membranes prepared using needle-less electrospinning. Blend fibers are relatively more uniform in comparison to the PVDF fibers, which have a bimodal distribution of fiber diameters. GSM refers to the membrane weight in grams per square meter. I to IV are the Janus samples electrospun for measuring WVT.
Figure 6 shows the surface elemental composition determined by X-ray photoelectron spectroscopy (XPS) for PNIPAM, PVDF, and their blend. Comparing the blend with the respective pristine counterparts reveals that the blend surface is enriched with nitrogen and oxygen, similar to PNIPAM fibers. The XPS results confirm the observations from our previous study on the thermoresponsive wettability of PNIPAM/PVDF blends fabricated using needle-based electrospinning.29 Thermal characterization using DSC and TGA suggested phase separation of PNIPAM and PVDF during electrospinning. At the same time, the wettability switch observed by contact angle measurements at room temperature and elevated temperature suggested that PNIPAM enriches the fiber surface.29
A comparison of the density and solubility parameters in Table 2 favors the dissolution of PNIPAM in DMF over PVDF. Therefore, as DMF evaporates during the electrospinning process, PNIPAM, which remains in solution longer, migrates to the fiber surface. Our previous study on this blend suggests miscibility when the PNIPAM content is 50 wt% or above.29 However, when the PNIPAM content is 25 wt% or below, the PVDF and PNIPAM phases separate, enhancing the migration of PNIPAM to the fiber surface during the electrospinning process.
407
+
408
+
409
+
XPS surface composition, at.%

Elements/transitions | PVDF | PNIPAM | BLEND | BLEND after three months in water
Carbon C 1s          | 51.1 | 75.9   | 71.8  | 71.8
Nitrogen N 1s        | -    | 10.9   | 8.7   | 9.4
Oxygen O 1s          | 3.0  | 13.2   | 11.8  | 9.6
Fluorine F 1s        | 45.9 | -      | 7.8   | 9.3

Figure 6: XPS graphs of PNIPAM, PVDF, and the blend of PVDF/PNIPAM (75/25, w/w)
441
+
442
+
Table 2: Polymer properties

Polymer | Density | Solubility parameter δ2 | Solubility parameter of DMF δ1 | Δδ (1-2)
PNIPAM  | 1.05    | 23.5                    | 24.9                           | 1.4
PVDF    | 1.68    | 17.5                    | 24.9                           | 7.4
501
+
Before the water-vapor transmission experiments were performed, the fabricated membranes were conditioned for a day in an environment-controlled chamber (at test conditions). The skin model (mimicked by the porous hot plate) measures the membrane's resistance to vapor permeability as a function of temperature, as shown in Figure 7. When the membrane is placed on the hot plate, the water vapor diffuses from the bottom to the top side of the Janus membrane (see Figure 7). The membrane's resistance to vapor diffusion is measured from both sides of the Janus membrane in separate sets of experiments to assess the influence of the wettability gradient within the membrane. This set of measurements is performed at five different temperatures to plot the membrane's resistance as a function of temperature. The water-vapor resistance measurement removes the temperature bias on the WVT and is expected to be constant for the same fabric at different isothermal conditions.
When the membrane is placed on the hot plate with the blend side facing down, i.e., when vapor transmits from the blend to the PVDF side of the Janus membrane, there is an increased water-vapor resistance at higher temperatures (see Figure 7). As the blend is thermoresponsive, it is hydrophilic (CA = 10°) at a lower temperature range (<32 °C) and hydrophobic (CA = 120°) at temperatures higher than 32 °C. As a result, the lower resistance is attributed to the blend's affinity for water vapor at lower temperatures. Similarly, the reduced affinity at elevated temperatures causes a more significant resistance to vapor transmission. As the vapor transmits from the blend to the PVDF side, the hydrophobicity increases across the membrane's thickness, and consequently the membrane's resistance to vapor transmission increases.

The hydrophilic layer next to the PVDF layer supports vapor transmission when the PVDF side faces the hot plate. At elevated temperatures, the resistance increases but is significantly lower than the resistance the membrane offers with the blend facing the hot plate (Figure 7). Even though the blend is hydrophobic at elevated temperatures, it is less hydrophobic than PVDF. Therefore, when the vapor transmits from the PVDF to the blend side, the hydrophobicity decreases across the thickness of the membrane, which favors vapor transmission.

Based on these results, the Janus construct with PVDF facing the hot plate favors vapor transmission at all investigated temperatures. This behavior is due to the unchanging hygroscopic properties of PVDF with temperature. Therefore, the thermoresponsive Janus membrane makes it possible to maintain active vapor transport irrespective of the outside temperature.
536
+
537
+
Figure 7: Effect of Janus directionality on the resistance to water-vapor permeability through the membranes. The thermoresponsive property of the blend, combined with the Janus structure, offers higher resistance to WVT. The behavior is attributed to the moisture released by the blend layer at a higher temperature. As a result, when the blend faces the hot plate, it increases the boundary-layer gap and consequently increases the resistance to water-vapor permeability.
544
+
We measured the water vapor permeability using the double-chamber method to verify the observed behavior. Needle-based electrospinning was used to fabricate the Janus membrane in order to incorporate the thermocouples between the layers and almost at the surface of the blend layer (approximately 5 s of electrospinning). Figure 8 shows the Janus membrane with thermocouples on the sample holder that fits the mouth of the OptiCal chamber. The figure also shows the SEM micrographs of the blend and PVDF sides of the Janus membrane, with fiber diameters of 0.2-0.6 μm.

The sample holder plugs the mouth of the chamber such that one side of the membrane faces outside the chamber and the other faces the inside of the OptiCal chamber. The entire system and the membranes were conditioned at 20 °C and 40% RH before carrying out measurements below the LCST. Before measurements above the LCST, the membranes were conditioned at 30 °C and 40% RH, as mentioned in Figure 4.
557
+
Figure 8: The top part shows the membrane with the thermocouples mounted (red arrow) on the sample holder that fits the mouth of the OptiCal instrument. The bottom section shows the SEM micrographs of the fibers from the PVDF and the blend side, respectively.
580
+
581
+
582
+ Figure 9 shows the membranes' water vapor permeability as a function of temperature.
+ At the lower temperature of 20 °C, the permeabilities are comparable for both samples.
+ The increase in vapor permeability with increasing temperature is predominantly due to
+ the increasing partial vapor pressure difference across the membrane37. Figure 10 plots
+ the vapor permeability as a function of the partial vapor pressure difference across the
+ membrane. PVDF, being hydrophobic, is expected to adsorb less moisture and transmit
+ less vapor than the non-swelling hydrophilic membrane (blend). Interestingly, however,
+ the vapor permeability from the PVDF to the blend side is significantly higher than from
+ the blend to the PVDF side. Further, to isolate the membrane effects from the vapor
+ pressure, Figure 10b shows the water vapor permeability per unit of partial pressure
+ difference across the membrane. When the vapor transmits from the blend side, this
+ permeability is constant, suggesting the vapor pressure difference is the primary driving
+ force. However, vapor permeates significantly more (P-value = 0.003, n = 4) from the
+ PVDF side, suggesting that the membrane's influence on vapor transmission increases
+ with temperature. Therefore, the combined effect of the Janus construct and the
+ temperature-responsive property drives more vapor through the membrane in one
+ direction over the other.
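The significance statement above (P-value = 0.003 with n = 4) corresponds to a two-sample comparison of permeability readings between the two transmission directions. A minimal stdlib sketch of the test statistic is shown below; the permeability values are hypothetical, invented for illustration, and are not the paper's data:

```python
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical permeability readings, n = 4 per direction (illustrative only)
pvdf_to_blend = [262.0, 255.0, 268.0, 259.0]
blend_to_pvdf = [201.0, 208.0, 195.0, 204.0]

t = welch_t(pvdf_to_blend, blend_to_pvdf)
```

With roughly 6 degrees of freedom, |t| above about 3.7 corresponds to a two-sided p < 0.01; computing an exact p-value needs a t-distribution CDF (e.g., `scipy.stats.ttest_ind` with `equal_var=False`).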
598
+
599
+
600
602
+ Figure 9: a) shows the Janus membrane's water vapor permeability as a function of
+ temperature and compares the effect of the membrane's directionality. b) shows the
+ partial vapor pressure difference across the membrane as a function of temperature. The
+ water vapor permeability for the PVDF-to-blend direction is significantly higher due to
+ the additional partial pressure drop caused by the wettability gradient in the membrane.
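For orientation, the partial vapor pressure on each side of the membrane can be estimated from temperature and relative humidity. The reference list cites the IAPWS formulation and humidity conversion formulas for this purpose; the sketch below instead uses the simpler Magnus approximation, which is an assumption for illustration rather than the authors' exact calculation:

```python
import math

def saturation_vapour_pressure_hpa(t_celsius):
    """Magnus approximation of saturation vapour pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def partial_pressure_hpa(t_celsius, rh_percent):
    """Partial water vapour pressure at a given temperature and RH."""
    return rh_percent / 100.0 * saturation_vapour_pressure_hpa(t_celsius)

# Illustrative values: a warm, saturated side vs. a 20 °C, 40% RH side
dp = partial_pressure_hpa(35.0, 100.0) - partial_pressure_hpa(20.0, 40.0)
```

Since 1 hPa = 1 mbar, the result maps directly onto a partial-vapour-pressure axis in mbar; the difference grows steeply with temperature, which is why permeability rises with temperature even before any membrane effect is considered.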
607
+
608
+
609
+
610
+
611
634
+
635
+ Figure 10: a) Water vapor permeability as a function of the partial vapor pressure
636
+ difference across the Janus membrane. Vapor permeability from the PVDF side is
637
+ significantly higher than from the blend side due to the Janus structure favoring the
638
+ vapor transmission from the PVDF to the blend side.
639
+ b) Water vapor permeability per unit of the partial vapor pressure difference across the
640
+ Janus membrane as a function of temperature. The differences in the vapor permeability
641
+ from the blend side to PVDF are statistically insignificant at all tested temperatures. The
642
+ partial vapor pressure difference across the membrane is the primary driving force for
643
+ transmitting the water vapor from the blend side. However, this permeability per unit of
644
+ the pressure from the PVDF side increases significantly with temperature due to the
645
+ Janus construct's combined effect and the blend's thermoresponsive property.
646
+
647
+ Figure 11 shows the possible mechanism based on the obtained experimental results. Based
+ on prior hygroscopic measurements, pristine PNIPAM adsorbs 19 wt% of vapor at a
+ temperature of 40 °C38. Since the blend contains 25 wt% of PNIPAM and adsorbs water
+ vapor in proportion to the PNIPAM content29, it adsorbs at least 4 wt% of vapor at 40 °C.
+ PVDF, in contrast, being hydrophobic, adsorbs less than 1% of vapor via van der Waals
+ forces29. Therefore, at any given time during the experiment, the blend surface layer will
+ hold more moisture (vapor molecules) (C_blend) than the PVDF surface layer (C_PVDF).
+ When the PVDF side faces outside, the vapor transmits from the blend to the PVDF side
+ of the membrane. The amount of vapor available for evaporation on the PVDF surface is
+ C_PVDF. Therefore, the vapor pressure gradient drives C_PVDF molecules through the
+ membrane. As this concentration C_PVDF does not change with temperature, the vapor
+ permeability is almost the same when transmitting from the blend to the PVDF side.
+ In the other scenario, with the blend side facing outside, the vapor transmits from the
+ PVDF side to the blend side of the membrane. At equilibrium, the blend surface holds more vapor
661
+ than the PVDF side. However, at 20 °C, most vapor molecules form hydrogen bonds with
693
+ the amide (-N-H) and carbonyl (-C=O) groups of PNIPAM. As a result, the
+ concentration of vapor molecules (C_blend) is a combination of bound vapor (T) and free
+ vapor molecules (F). The vapor pressure gradient drives only the free vapor molecules (F)
+ through the membrane, and F is comparable with C_PVDF. Therefore, the vapor
+ permeability at temperatures below the LCST is similar irrespective of the transmission
+ direction.
+ Above the LCST, due to the coil conformation of PNIPAM, all the bound vapor molecules
+ are released and become free. When the vapor molecules C_blend evaporate, the vapor
+ pressure gradient drives C_blend through the membrane. As C_blend ≫ C_PVDF, the
+ vapor permeability from the PVDF to the blend side is higher than that in the flipped
+ direction.
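As a sanity check on this argument, the asymmetry can be captured in a toy model. This is illustrative only: the ~1 wt% (PVDF) and ~4 wt% (blend, 40 °C) uptake values come from the text's cited measurements, while the LCST of ~32 °C for PNIPAM is an assumed, commonly quoted value:

```python
def driving_concentration(surface, temperature_c, lcst_c=32.0,
                          c_pvdf=1.0, c_blend=4.0):
    """Toy model: vapour concentration (wt%) available to evaporate from
    the membrane surface facing the dry side of the chamber."""
    if surface == "pvdf":
        # Hydrophobic PVDF adsorbs ~1 wt% regardless of temperature
        return c_pvdf
    # Blend side: below the LCST most molecules are hydrogen-bonded to
    # PNIPAM, leaving only a free fraction F comparable to the PVDF side
    if temperature_c < lcst_c:
        return c_pvdf
    # Above the LCST the coil conformation releases all bound vapour
    return c_blend
```

Below the LCST both directions see the same driving concentration (similar permeability); above it, the blend-facing-out configuration drives the much larger C_blend through the membrane, reproducing the directional behaviour qualitatively.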
702
+
703
+
704
+ Figure 11: Illustrates the mechanism of water vapor permeability from the blend side to
705
+ the PVDF side and vice-versa when driven by the partial vapor pressure difference.
706
+ Conclusion
+ Electrospun thermoresponsive Janus membranes exhibit directional WVT. Herein, vapor
+ transmission from the hydrophobic (PVDF) side to the hydrophilic (blend) side is faster
+ than in the opposite direction (hydrophilic to hydrophobic). The results from the indirect
+ approach, i.e., the sweating hot plate method, and from the direct approach, i.e., a
+ double-chamber method, complement each other. Based on physical reasoning, we
+ postulate that this behaviour arises from the combined effect of the Janus construct and
+ its temperature-responsive property on vapor transmission. Complementing the
+ experiments, numerical modeling can shed further insight into the physical processes
+ that result in directional vapor permeability.
716
+
717
+ These new results open pathways in membrane research and development, enabling a
+ directionality that is unique for liquid and gas transmission. The novelty contributes
+ not only to the field of textiles but also to packaging and filter systems.
722
+
723
+ References
724
+ 1.
725
+ Gugliuzza A and Drioli E. A review on membrane engineering for innovation in
726
+ wearable fabrics and protective textiles. Journal of Membrane Science 2013; 446: 350-
727
+ 375.
728
+ 2.
729
+ Waterproof Breathable Textiles (WBT) Market Analysis By Textile (Densely
730
+ Woven, Membrane, Coated), By Product (Garments, Footwear, Gloves), By Application
731
+ (Active Sportswear) And Segment Forecasts To 2020,
732
+ http://www.grandviewresearch.com/industry-analysis/waterproof-breathable-textiles-
733
+ industry (2015).
734
+ 3.
735
+ Tanner JC. Breathability, comfort and Gore-Tex laminates. Journal of Industrial
736
+ Textiles 1979; 8: 312-322.
737
+ 4.
738
+ Rossi R. Interaction between protection and thermal comfort. Cambridge, UK,
739
+ Woodhead Publishing Limited, 2005, p. 233-253.
740
+ 5.
741
+ Mukhopadhyay A and Midha VK. A review on designing the waterproof
742
+ breathable fabrics part I: fundamental principles and designing aspects of breathable
743
+ fabrics. Journal of industrial textiles 2008; 37: 225-262.
744
+ 6.
745
+ Gun AD and Bodur A. Thermo-physiological comfort properties of double-face
746
+ knitted fabrics made from different types of polyester yarns and cotton yarn. J Text Inst
747
+ 2017; 108: 1518-1527. Article. DOI: 10.1080/00405000.2016.1259953.
748
+ 7.
749
+ Rouhani ST and Fashandi H. Breathable dual-layer textile composed of cellulose
750
+ dense membrane and plasma-treated fabric with enhanced comfort. Cellulose 2018; 25:
751
+ 5427-5442. Article. DOI: 10.1007/s10570-018-1950-9.
752
+ 8.
753
+ Suganthi T and Senthilkumar P. Development of tri-layer knitted fabrics for
754
+ shuttle badminton players. Journal of Industrial Textiles 2018; 48: 738-760. Article. DOI:
755
+ 10.1177/1528083717740766.
756
+ 9.
757
+ Zhou L, Wang HB, Du JM, et al. Eco-friendly and Durable Antibacterial Cotton
758
+ Fabrics Prepared with Polysulfopropylbetaine. Fibers and Polymers 2018; 19: 1228-
759
+ 1236. Article. DOI: 10.1007/s12221-018-8053-y.
760
+ 10.
761
+ Gu XY, Li N, Cao J, et al. Preparation of electrospun polyurethane/hydrophobic
762
+ silica gel nanofibrous membranes for waterproof and breathable application. Polym Eng
763
+ Sci 2018; 58: 1381-1390. Article. DOI: 10.1002/pen.24726.
764
+ 11.
765
+ Gu XY, Li N, Gu HH, et al. Polydimethylsiloxane-modified polyurethane-poly(ε-
+ caprolactone) nanofibrous membranes for waterproof, breathable applications. Journal of
767
+ Applied Polymer Science 2018; 135: 10. Article. DOI: 10.1002/app.46360.
768
+ 12.
769
+ Gu XY, Li N, Luo JJ, et al. Electrospun polyurethane microporous membranes for
770
+ waterproof and breathable application: the effects of solvent properties on membrane
771
+ performance. Polym Bull 2018; 75: 3539-3553. Article. DOI: 10.1007/s00289-017-2223-
772
+ 8.
773
+
774
+ 13.
775
+ Gretton J, Brook D, Dyson H, et al. Moisture vapor transport through waterproof
776
+ breathable fabrics and clothing systems under a temperature gradient. Textile Research
777
+ Journal 1998; 68: 936-941.
778
+ 14.
779
+ Gibson P, Schreuder-Gibson H and Rivin D. Transport properties of porous
780
+ membranes based on electrospun nanofibers. Colloids and Surfaces A: Physicochemical
781
+ and Engineering Aspects 2001; 187: 469-481.
782
+ 15.
783
+ Sheng J, Li Y, Wang X, et al. Thermal inter-fiber adhesion of the
784
+ polyacrylonitrile/fluorinated polyurethane nanofibrous membranes with enhanced
785
+ waterproof-breathable performance. Separation and Purification Technology 2016; 158:
786
+ 53-61.
787
+ 16.
788
+ Sheng J, Zhang M, Luo W, et al. Thermally induced chemical cross-linking
789
+ reinforced fluorinated polyurethane/polyacrylonitrile/polyvinyl butyral nanofibers for
790
+ waterproof-breathable application. RSC Advances 2016; 6: 29629-29637.
791
+ 17.
792
+ Sheng J, Zhang M, Xu Y, et al. Tailoring Water-Resistant and Breathable
793
+ Performance of Polyacrylonitrile Nanofibrous Membranes Modified by
794
+ Polydimethylsiloxane. ACS applied materials & interfaces 2016; 8: 27218-27226.
795
+ 18.
796
+ Mukhopadhyay A, Preet A and Midha V. Moisture transmission behaviour of
797
+ individual component and multilayered fabric with sweat and pure water. J Text Inst
798
+ 2018; 109: 383-392. Article. DOI: 10.1080/00405000.2017.1348435.
799
+ 19.
800
+ Bagherzadeh R, Latifi M, Najar SS, et al. Transport properties of multi-layer
801
+ fabric based on electrospun nanofiber mats as a breathable barrier textile material. Textile
802
+ research journal 2012; 82: 70-76.
803
+ 20.
804
+ Gorji M, Karimi M and Nasheroahkam S. Electrospun PU/P (AMPS-GO)
805
+ nanofibrous membrane with dual-mode hydrophobic–hydrophilic properties for
806
+ protective clothing applications. Journal of Industrial Textiles 2018; 47: 1166-1184.
807
+ 21.
808
+ Wu J, Wang N, Wang L, et al. Unidirectional water-penetration composite fibrous
809
+ film via electrospinning. Soft Matter 2012; 8: 5996-5999.
810
+ 22.
811
+ Wang H, Wang X and Lin T. Unidirectional water transfer effect from fabrics
812
+ having a superhydrophobic-to-hydrophilic gradient. Journal of nanoscience and
813
+ nanotechnology 2013; 13: 839-842.
814
+ 23.
815
+ Tian X, Li J and Wang X. Anisotropic liquid penetration arising from a cross-
816
+ sectional wettability gradient. Soft Matter 2012; 8: 2633-2637.
817
+ 24.
818
+ Guo F and Guo Z. Inspired smart materials with external stimuli responsive
819
+ wettability: a review. RSC Advances 2016; 6: 36623-36641. 10.1039/C6RA04079A.
820
+ DOI: 10.1039/C6RA04079A.
821
+ 25.
822
+ Tanaka T. Collapse of gels and the critical endpoint. Physical Review Letters
823
+ 1978; 40: 820.
824
+ 26.
825
+ Tanaka T and Fillmore DJ. Kinetics of swelling of gels. The Journal of Chemical
826
+ Physics 1979; 70: 1214-1218.
827
+ 27.
828
+ Schild HG. Poly (N-isopropylacrylamide): experiment, theory and application.
829
+ Progress in polymer science 1992; 17: 163-249.
830
+ 28.
831
+ Liu X, Li Y, Hu J, et al. Smart moisture management and thermoregulation
832
+ properties of stimuli-responsive cotton modified with polymer brushes. RSC Advances
833
+ 2014; 4: 63691-63695. 10.1039/C4RA11080C. DOI: 10.1039/C4RA11080C.
834
+ 29.
835
+ Ranganath AS, Ganesh VA, Sopiha K, et al. Thermoresponsive electrospun
836
+ membrane with enhanced wettability. RSC Advances 2017; 7: 19982-19989.
837
+
838
+ 30.
839
+ Airoudj A, Bally-Le Gall F and Roucoules V. Textile with Durable Janus Wetting
840
+ Properties Produced by Plasma Polymerization. The Journal of Physical Chemistry C
841
+ 2016; 120: 29162-29172.
842
+ 31.
843
+ Li H, Cao M, Ma X, et al. “Plug‐and‐Go”‐Type Liquid Diode: Integrated Mesh
844
+ with Janus Superwetting Properties. Advanced Materials Interfaces 2016; 3.
845
+ 32.
846
+ Ranganath AS and Baji A. Electrospun Janus Membrane for Efficient and
847
+ Switchable Oil–Water Separation. Macromolecular Materials and Engineering 2018;
848
+ 303: 1800272.
849
+ 33.
850
+ ISO 11092. Textiles – Determination of physiological properties – measurement of
851
+ thermal and water‐vapor resistance under steady‐state conditions (sweating guarded‐
852
+ hotplate test). 1993.
853
+ 34.
854
+ Wagner W and Pruß A. The IAPWS formulation 1995 for the thermodynamic
855
+ properties of ordinary water substance for general and scientific use. J Phys Chem Ref
856
+ Data 2002; 31: 387.
857
+ 35.
858
+ Vaisala Oyj. Humidity Conversion Formulas. 2013: 3-16.
859
+ 36.
860
+ Brown P and Cox CL. Fibrous filter media. Woodhead Publishing, 2017.
861
+ 37.
862
+ Gibson P. Effect of temperature on water vapor transport through polymer
863
+ membrane laminates. 1999. Army Soldier and Biological Chemical Command, Natick, MA.
865
+ 38.
866
+ Thakur N, Sargur Ranganath A, Sopiha K, et al. Thermoresponsive Cellulose
867
+ Acetate–Poly (N-isopropylacrylamide) Core–Shell Fibers for Controlled Capture and
868
+ Release of Moisture. ACS Applied Materials & Interfaces 2017; 9: 29224-29233.
869
+
870
+
JtAyT4oBgHgl3EQff_gI/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
KNE0T4oBgHgl3EQfigEg/content/2301.02445v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c3d1f5a012b5a4c476c20b916334ad68787844a5614af8d2fe2a19ef4003a457
3
+ size 724830
KNE0T4oBgHgl3EQfigEg/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:194f1ec3ee1fd66fb857c67e1a1e54bb57ccf8e7c0c9044baeef4fc46003ccfa
3
+ size 186078
LdAyT4oBgHgl3EQfTvfu/content/tmp_files/2301.00114v1.pdf.txt ADDED
@@ -0,0 +1,1521 @@
1
+ 1
2
+ Skeletal Video Anomaly Detection using Deep
3
+ Learning: Survey, Challenges and Future Directions
4
+ Pratik K. Mishra, Alex Mihailidis, Shehroz S. Khan
5
+ Abstract—The existing methods for video anomaly detec-
6
+ tion mostly utilize videos containing identifiable facial and
7
+ appearance-based features. The use of videos with identifiable
8
+ faces raises privacy concerns, especially when used in a hospital
9
+ or community-based setting. Appearance-based features can also
10
+ be sensitive to pixel-based noise, straining the anomaly detection
11
+ methods to model the changes in the background and making
12
+ it difficult to focus on the actions of humans in the foreground.
13
+ Structural information in the form of skeletons describing the
14
+ human motion in the videos is privacy-protecting and can over-
15
+ come some of the problems posed by appearance-based features.
16
+ In this paper, we present a survey of privacy-protecting deep
17
+ learning anomaly detection methods using skeletons extracted
18
+ from videos. We present a novel taxonomy of algorithms based on
19
+ the various learning approaches. We conclude that skeleton-based
20
+ approaches for anomaly detection can be a plausible privacy-
21
+ protecting alternative for video anomaly detection. Lastly, we
22
+ identify major open research questions and provide guidelines to
23
+ address them.
24
+ Index Terms—skeleton, body joint, human pose, anomaly
25
+ detection, video.
26
+ I. INTRODUCTION
27
+ Anomalous events pertain to unusual or abnormal actions,
28
+ behaviours or situations that can lead to health, safety and
29
+ economical risks [1]. Anomalous events, by definition, are
30
+ largely unseen and not much is known about them in advance
31
+ [2]. Due to their rarity, diversity and infrequency, collecting
32
+ labeled data for anomalous events can be very difficult or
33
+ costly [1], [3]. With the lack of predetermined classes and
34
+ a few labelled data for anomalous events, it can be very hard
35
+ to train supervised machine learning models [1]. Therefore, a
36
+ general approach in majority of anomaly detection algorithms
37
+ is to train a model that can best represent the ’normal’ events
38
+ or actions, and any deviations from it can be flagged as an
39
+ unseen anomaly [4]. Anomalous behaviours among humans
40
+ can be attributed at an individual level (e.g., falls [5]) or
41
+ multiple people in a scene (e.g., pedestrian crossing [6],
42
+ violence in a crowded mall [7]). In the context of video-
43
+ based anomaly detection, the general approach is to train
44
+ a model to learn the patterns of actions or behaviours of
45
+ individual(s), background and other semantic information in
46
+ the normal activities videos, and identify significant deviations
47
+ in the test videos as anomalies. However, anomaly detection
48
+ is a challenging task due to the lack of labels and often times
49
+ the unclear definition of an anomaly [2].
50
+ Pratik K. Mishra, Alex Mihailidis, and Shehroz S. Khan are with the
51
+ Institute of Biomedical Engineering, University of Toronto, Toronto, Canada,
52
+ and also with the KITE – Toronto Rehabilitation Institute, University
53
+ Health Network, Toronto, Canada (e-mail: [email protected];
54
55
+ The majority of video-based anomaly detection approaches
56
+ use RGB videos where the people in the scene are identifiable.
57
+ While using RGB camera-based systems in public places (e.g.,
58
+ malls, airports) is generally acceptable, the situation can be
59
+ very different in personal dwelling, community, residential or
60
+ clinical settings [8]. In a home or residential setting (e.g.,
61
+ nursing homes), individuals or patients can be monitored in
62
+ their personal space that may breach their privacy. The lack
63
+ of measures to deal with the privacy of individuals can be
64
+ a bottleneck in the adoption and deployment of the anomaly
65
+ detection-based systems [9]. However, monitoring of people
66
+ with physical, cognitive or aging issues is also important to
67
+ improve their quality of life and care. Therefore, as a trade-
68
+ off, privacy-protecting video modalities can fill that gap and
69
+ be used in these settings to save lives and improve patient
70
+ care. Wearable devices face compliance issues among certain
71
+ populations, where people may forget or in some cases refuse
72
+ to wear them [10]. Some of the privacy-protecting camera
73
+ modalities that has been used in the past for anomaly detection
74
+ involving humans include depth cameras [5], [11], thermal
75
+ cameras [12], and infrared cameras [13], [14]. While these
76
+ modalities can partially or fully obfuscate an individual’s
77
+ identity, they require specialized hardware or cameras and
78
+ can be expensive to be used by general population. Skeletons
79
+ extracted from RGB camera streams using pose estimation al-
80
+ gorithms [15] provide a suitable solution of privacy protection
81
+ over RGB and other types of cameras. Skeleton tracking only
82
+ focuses on body joints and ignores facial identity, full body
83
+ scan or background information. The pixel-based features in
84
+ RGB videos that mask important information about the scene
85
+ are sensitive to noise resulting from illumination, viewing
86
+ direction and background clutter, resulting in false positives
87
+ when detecting anomalies [16]. Furthermore, due to redundant
88
+ information present in these features (e.g., background), there
89
+ is an increased burden on methods to model the change in
90
+ those areas of the scene rather than focus on the actions of
91
+ humans in the foreground. Extracting information specific to
92
+ human actions can not only provide a privacy-protecting solu-
93
+ tion, but can also help to filter out the background-related noise
94
+ in the videos and aid the model to focus on key information
95
+ for detecting abnormal events related to human behaviour.
96
+ The skeletons represent an efficient way to model the human
97
+ body joint positions over time and are robust to the complex
98
+ background, illumination changes, and dynamic camera scenes
99
+ [17]. In addition to being privacy-protecting, skeleton features
100
+ are compact, well-structured, semantically rich, and highly
101
+ descriptive about human actions and motion [17]. Anomaly
102
+ detection using skeleton tracking is an emerging area of
103
+ research as awareness around privacy of individuals and their
104
+ arXiv:2301.00114v1 [cs.CV] 31 Dec 2022
105
+
106
107
+ data grows. However, skeleton-based approaches may not be
108
+ sufficient for situations that explicitly need facial information
109
+ for analysis, including emotion recognition [18], [19], pain
110
+ detection [20] or remote heart monitoring [21], to name a few.
111
+ In recent years, deep learning methods have been developed
112
+ to use skeletons for different applications, such as action
113
+ recognition [40], medical diagnosis [24], and sports analytics
114
+ [41]. The use of skeletons for anomaly detection in videos
115
+ is an under-explored area, and concerted research is needed
116
+ [24]. The human skeletons can help in developing privacy-
117
+ preserving solutions for private dwellings, crowded/public
118
+ areas, medical settings, rehabilitation centers and long-term
119
+ care homes to detect anomalous events that impact health
120
+ and safety of individuals. Use of this type of approach could
121
+ improve the adoption of video-based monitoring systems in
122
+ the homes and residential settings. However, there is a paucity
123
+ of literature on understanding the existing techniques that use
124
+ skeleton-based anomaly detection approaches. We identify this
125
+ gap in the literature and present one of the first survey on the
126
+ recent advancements in using skeletons for anomaly detection
127
+ in videos. We identified the major themes in existing work
128
+ and present a novel taxonomy that is based on how these
129
+ methods learn to detect anomalous events. We also discuss the
130
+ applications where these approaches were used to understand
131
+ their potential in bringing these algorithms in a personal
132
+ dwelling, or long-term care scenario.
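To make the data modality concrete: a skeleton is typically a short sequence of 2D joint coordinates per frame (e.g., 17 COCO-style keypoints from a pose estimator such as OpenPose or AlphaPose). A common preprocessing step, sketched below, is to normalize each pose so that downstream models attend to body configuration rather than absolute image position; the joint-index convention (shoulders 5/6, hips 11/12) is an assumption for illustration:

```python
def normalize_pose(keypoints, hip_idx=(11, 12), shoulder_idx=(5, 6)):
    """Center a pose on the mid-hip point and scale by torso length.

    `keypoints` is a list of (x, y) tuples in image coordinates; the
    COCO-style index convention used here is assumed for illustration.
    """
    mid = lambda a, b: ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    hip = mid(keypoints[hip_idx[0]], keypoints[hip_idx[1]])
    shoulder = mid(keypoints[shoulder_idx[0]], keypoints[shoulder_idx[1]])
    # Torso length as the scale; guard against degenerate zero-length torsos
    torso = ((shoulder[0] - hip[0]) ** 2 + (shoulder[1] - hip[1]) ** 2) ** 0.5 or 1.0
    return [((x - hip[0]) / torso, (y - hip[1]) / torso) for x, y in keypoints]
```

Several of the surveyed methods go one step further and decompose a skeleton into a global component (where the body is in the frame) and a local component (the normalized pose), modeling the two streams separately.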
133
+ II. LITERATURE SURVEY
134
+ We adopted a narrative literature review for this work.
135
+ The following keywords (and their combinations) were used
136
+ to search for relevant papers – skeleton, human pose, body
137
+ pose, body joint, trajectory, anomaly detection, abnormal and
138
+ video. These keywords were searched on scholarly databases,
139
+ including Google Scholar, IEEE Xplore, Elsevier and Springer.
140
+ We mostly reviewed papers between the years 2016 and 2022;
141
+ therefore, the list may not be comprehensive. In this review,
142
+ we only focus on the recent deep learning-based algorithms
143
+ for skeletal video anomaly detection and did not include
144
+ traditional machine learning based approaches. We did not
145
+ adopt the systematic or scoping review search protocol for this
146
+ work; therefore, our literature review may not be exhaustive.
147
+ However, we tried our best to include the latest development
148
+ in the field to be able to summarize their potential and identify
149
+ challenges. In this section, we provide a survey of skeletal deep
150
+ learning video anomaly detection methods. We present a novel
151
+ taxonomy to study the skeletal video anomaly approaches
152
+ based on learning approaches into four broad categories,
153
+ i.e., reconstruction, prediction, their combinations and other
154
+ specific approaches. Table I provides a summary of 21 relevant
155
+ papers, based on the taxonomy, found in our literature search.
156
+ Unless otherwise specified, the values in the last column of
157
+ the table refer to AUC(ROC) values corresponding to each
158
+ dataset in the reviewed paper. Five papers use reconstruction
159
+ approach, five papers use prediction approach, five papers use
160
+ a combination of reconstruction and prediction approaches,
161
+ three papers use a combination of reconstruction and clustering
162
+ approaches, and three papers use other specific approaches.
163
+ A. Reconstruction Approaches
164
+ In the reconstruction approaches, generally, an autoencoder
165
+ (AE) or its variant model is trained on the skeleton information
166
+ of only normal human activities. During training, the model
167
+ learns to reconstruct the samples representing normal activ-
168
+ ities with low reconstruction error. Hence, when the model
169
+ encounters an anomalous sample at test time, it is expected to
170
+ give high reconstruction error.
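The scoring pipeline common to these reconstruction methods can be sketched in a few lines. Here a moving-average "reconstruction" stands in for a trained autoencoder (purely an illustrative assumption), and the decision threshold is calibrated on normal training sequences only:

```python
def reconstruct(seq, window=3):
    """Stand-in for a trained autoencoder: moving-average smoothing."""
    half = window // 2
    return [sum(seq[max(0, i - half):i + half + 1]) /
            len(seq[max(0, i - half):i + half + 1]) for i in range(len(seq))]

def reconstruction_error(seq):
    """Mean squared error between a sequence and its reconstruction."""
    rec = reconstruct(seq)
    return sum((a - b) ** 2 for a, b in zip(seq, rec)) / len(seq)

def fit_threshold(normal_seqs, factor=3.0):
    """Threshold = worst-case error on normal data times a safety factor."""
    return factor * max(reconstruction_error(s) for s in normal_seqs)

# 1D joint trajectories: smooth motion is 'normal', abrupt jitter is anomalous
normal = [[0.1 * t for t in range(20)], [5 - 0.2 * t for t in range(20)]]
anomalous = [3.0 * (t % 2) for t in range(20)]  # zigzag motion

threshold = fit_threshold(normal)
is_anomaly = reconstruction_error(anomalous) > threshold
```

In the surveyed papers the reconstructor is a learned model (LSTM-AE, CAE, GRU encoder-decoder) over multi-joint skeleton sequences, but the decision logic, error against a threshold fitted on normal data, is the same.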
171
+ Gatt et al. [22] used Long Short-Term Memory (LSTM)
172
+ and 1-Dimensional Convolution (1DConv)-based AE models
173
+ to detect abnormal human activities, including, but not limited
174
+ to falls, using skeletons estimated from videos of a publicly
175
+ available dataset. Temuroglu et al. [23] proposed a skeleton
176
+ trajectory representation that handled occlusions and an AE
177
+ framework for pedestrian abnormal behaviour detection. The
178
+ pedestrian video dataset used in this work was collected by the
179
+ authors, where the training dataset was composed of normal
180
+ walking, and the test dataset was composed of normal and
181
+ drunk walking. The pose skeletons were treated to handle
182
+ occlusions using the proposed representation and combined
183
+ into a sequence to train an AE. They compared the results
184
+ of occlusion-aware skeleton keypoints input with keypoints
185
+ without occlusion flags, keypoint image heatmaps and raw
186
+ pedestrian image inputs. The authors used average of recall
187
+ and specificity to evaluate the models due to the unbalanced
188
+ dataset and found that occlusion-aware input achieved the
189
+ highest results. Suzuki et al. [24] trained a Convolutional
190
+ AE (CAE) on good gross motor movements in children and
191
+ detected poor limb motion as an anomaly. Motion time-series
192
+ images [42] were obtained from skeletons estimated from
193
+ the videos of kindergarten children participants. The motion
194
+ time-series images were fed as input to a CAE, which was
195
+ trained on only the normal data. The difference between
196
+ the input and reconstructed pixels was used to localize the
197
+ poor body movements in anomalous frames. Jiang et al. [25]
198
+ presented a message passing Gated Recurrent Unit (GRU)
199
+ encoder-decoder network to detect and localize the anomalous
200
+ pedestrian behaviours in videos captured at the grade crossing.
201
+ The field-collected dataset consisted of over 50 hours of
202
+ video recordings at two selected grade crossings with different
203
+ camera angles. The skeletons were estimated and decomposed
204
+ into global and local components before being fed as input to
205
+ the encoder-decoder network. The localization of the anoma-
206
+ lous pedestrians within a frame was done by identifying the
207
+ skeletons with reconstruction error higher than the empirical
208
+ threshold. They manually removed wrongly detected false
209
+ skeletons as they claim that the wrong detection issue was
210
+ observed at only one grade crossing. However, an approach
211
+ of manual removal of false skeletons is impractical in many
212
+ real world applications where the data is very large, making
213
+ the need of an automated false skeleton identification and
214
+ removal step imperative. Fan et al. [26] proposed an anomaly
215
+ detection framework which consisted of two pairs of generator
216
+ and discriminator. The generators were trained to reconstruct
217
+ the normal video frames and the corresponding skeletons,
218
+ respectively. The discriminators were trained to distinguish
219
+ the original and reconstructed video frames and the original
220
+
221
222
+ TABLE I
223
+ SUMMARY OF REVIEWED PAPERS.
224
| Learning approach | Paper | Datasets used | Experimental setting | Number of people in scene | Type of anomalies | Pose estimation algorithm | Model input | Model type | Anomaly score | Eval. metric AUC(ROC) (or other) |
|---|---|---|---|---|---|---|---|---|---|---|
| Reconstruction | Gatt et al. [22] | UTD-MHAD | Indoor | Single | Irregular body postures | Openpose, Posenet | Skeleton keypoints | 1DConv-AE, LSTM-AE | Reconstruction error | AUC(PR)=0.91, F score=0.98 |
| Reconstruction | Temuroglu et al. [23] | Custom | Outdoor | Multiple | Drunk walking | Openpose | Skeleton keypoints | AE | Reconstruction error | Average of recall and specificity=0.91 |
| Reconstruction | Suzuki et al. [24] | Custom | — | Single | Poor body movements in children | Openpose | Motion time-series images | CAE | Reconstruction error | Accuracy=99.3, F score=0.99 |
| Reconstruction | Jiang et al. [25] | Custom | Outdoor | Multiple | Abnormal pedestrian behaviours at grade crossings | Alphapose | Skeleton keypoints | GRU Encoder-Decoder | Reconstruction error | 0.82 |
| Reconstruction | Fan et al. [26] | CUHK Avenue, UMN | Indoor and Outdoor | Multiple | Anomalous human behaviours | Alphapose | Video frame, Skeleton keypoints | Generative adversarial network | Reconstruction error of video frame | 0.88 / 0.99 |
| Prediction | Rodrigues et al. [27] | IITB-Corridor, ShanghaiTech, CUHK Avenue | Outdoor | Multiple | Abnormal human activities | Openpose | Skeleton keypoints | Multi-timescale 1DConv encoder-decoder | Prediction error from different timescales | 0.67 / 0.76 / 0.83 |
| Prediction | Luo et al. [16] | ShanghaiTech, CUHK Avenue | Outdoor | Multiple | Irregular body postures | Alphapose | Skeleton joints graph | Spatio-Temporal GCN | Prediction error | 0.74 / 0.87 |
| Prediction | Zeng et al. [28] | UCSD Pedestrian, ShanghaiTech, CUHK Avenue, IITB-Corridor | Outdoor | Multiple | Anomalous human behaviours | HRNet | Skeleton joints graph | Hierarchical Spatio-Temporal GCN | Weighted sum of prediction errors from different levels | 0.98 / 0.82 / 0.87 / 0.7 |
| Prediction | Fan et al. [29] | ShanghaiTech, CUHK Avenue | Outdoor | Multiple | Anomalous human actions | Alphapose | Skeleton keypoints | GRU feed-forward network | Prediction error | 0.83 / 0.92 |
| Prediction | Pang et al. [30] | ShanghaiTech, CUHK Avenue | Outdoor | Multiple | Anomalous human actions | Alphapose | Skeleton keypoints | Transformer | Prediction error | 0.77 / 0.87 |
| Reconstruction + Prediction | Morais et al. [17] | ShanghaiTech, CUHK Avenue | Outdoor | Multiple | Anomalous human actions | Alphapose | Skeleton keypoints | GRU Encoder-Decoder | Weighted sum of reconstruction and prediction errors | 0.73 / 0.86 |
| Reconstruction + Prediction | Boekhoudt et al. [7] | ShanghaiTech, HR Crime | Indoor and Outdoor | Multiple | Human and crime-related anomalies | Alphapose | Skeleton keypoints | GRU Encoder-Decoder | Weighted sum of reconstruction and prediction errors | 0.73 / 0.6 |
| Reconstruction + Prediction | Li and Zhang [31] | ShanghaiTech | Outdoor | Multiple | Abnormal pedestrian behaviours | Alphapose | Skeleton keypoints | GRU Encoder-Decoder | Weighted sum of reconstruction and prediction errors | 0.75 |
| Reconstruction + Prediction | Li et al. [32] | ShanghaiTech, CUHK Avenue | Outdoor | Multiple | Human-related anomalous events | Alphapose | Skeleton joints graph | GCAE with embedded LSTM | Sum of max reconstruction and prediction errors | 0.76, EER=30.7 / 0.84, EER=20.7 |
| Reconstruction + Prediction | Wu et al. [33] | ShanghaiTech, CUHK Avenue | Outdoor | Multiple | Abnormal human actions | Alphapose | Skeleton joints graph, Confidence scores | GCN | Confidence-score-weighted sum of reconstruction, prediction and SVDD errors | 0.77 / 0.85 |
| Reconstruction + Clustering | Markovitz et al. [34] | ShanghaiTech, NTU-RGB+D, Kinetics-250 | Indoor and Outdoor | Multiple | Anomalous human actions | Alphapose, Openpose | Skeleton joints graph | GCAE, Deep clustering | Dirichlet process mixture model score | 0.75 / 0.85 / 0.74 |
| Reconstruction + Clustering | Cui et al. [35] | ShanghaiTech | Outdoor | Multiple | Human pose anomalies | — | Skeleton joints graph | GCAE, Deep clustering | Dirichlet process mixture model score | 0.77 |
| Reconstruction + Clustering | Liu et al. [36] | ShanghaiTech, CUHK Avenue | Outdoor | Multiple | Anomalous human behaviours | Alphapose | Skeleton joints graph | GCAE, Deep clustering | Dirichlet process mixture model score | 0.79 / 0.88 |
| Clustering | Yang et al. [37] | UCSD Pedestrian 2, ShanghaiTech | Outdoor | Multiple | Anomalous human behaviours and objects | Alphapose | Skeleton joints graph, Numerical features | GCN | Skeleton cluster + object anomaly score | 0.93 / 0.82 |
| Iterative self-training | Nanjun et al. [38] | ShanghaiTech, CUHK Avenue | Outdoor | Multiple | Human-related anomalous events | Alphapose | Skeleton joints graph, Numerical features | GCN | Self-trained fully connected layers output | 0.72, EER=34.1 / 0.82, EER=23.9 |
| Multivariate Gaussian distribution | Tani and Shibata [39] | ShanghaiTech | Outdoor | Multiple | Anomalous human behaviours | Openpose | Skeleton joints graph | GCN, Multivariate Gaussian distribution | Mahalanobis distance | 0.77 |
613
+ 4
614
and reconstructed skeletons, respectively. The video frames and the corresponding extracted skeletons served as input to the framework during training; at test time, however, the decision was based only on the reconstruction error of the video frames.
Challenges: AEs and their variants are widely used in many video-based anomaly detection methods [5]. The choice of the right architecture to model the skeletons is very important. Further, being trained on only normal data, AEs are expected to produce higher reconstruction errors for abnormal inputs than for normal inputs, which has been adopted as a criterion for identifying anomalies. However, this assumption does not always hold in practice: an AE can generalize so well that it also reconstructs anomalies accurately, leading to false negatives [43].
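The reconstruction-error criterion shared by these methods can be illustrated with a minimal sketch. A linear autoencoder (PCA) stands in here for the deep AEs of the reviewed papers, and the empirical threshold is taken as the 95th percentile of errors on normal training data; the toy data, subspace dimension and quantile are illustrative assumptions, not values from any reviewed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for normal skeleton feature vectors: samples lying
# near a 3-dimensional subspace of a 10-dimensional space.
basis = rng.normal(size=(3, 10))
normal = rng.normal(size=(200, 3)) @ basis + 0.01 * rng.normal(size=(200, 10))

# "Train" a linear autoencoder (PCA): the encoder is the top-3
# principal directions, the decoder their transpose.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:3]

def recon_error(x):
    z = (x - mean) @ components.T            # encode
    x_hat = z @ components + mean            # decode
    return ((x - x_hat) ** 2).mean(axis=1)   # per-sample error

# Empirical threshold fitted on normal data only, as in the reviewed work.
threshold = np.quantile(recon_error(normal), 0.95)

# An off-subspace sample reconstructs poorly and exceeds the threshold.
anomaly = 5.0 * rng.normal(size=(1, 10))
```

The caveat from the text applies directly to this criterion: a sufficiently expressive AE may reconstruct anomalies well too, so a low error does not guarantee normality.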
B. Prediction Approaches
In prediction approaches, a network is generally trained to learn normal human behaviour by predicting the skeletons at the next time step(s) from the skeletons representing normal human actions at past time steps. During testing, test samples with high prediction errors are flagged as anomalies, as the network is trained to predict only skeletons representing normal actions.
Rodrigues et al. [27] suggested that abnormal human activities can take place at different timescales, and that methods operating at a fixed timescale (frame-based or video-clip-based) are not enough to capture the wide range of anomalies occurring over different time durations. They proposed a multi-timescale 1DConv encoder-decoder network in which the intermediate layers generated future and past predictions corresponding to different timescales. The network was trained to make predictions on normal-activity skeleton inputs, and the prediction errors from all timescales were combined into an anomaly score to detect abnormal activities. Luo et al. [16] proposed a spatio-temporal Graph Convolutional Network (GCN)-based prediction method for skeleton-based video anomaly detection. The body joints were estimated and built into skeleton graphs, where the body joints formed the nodes of the graph; spatial edges connected different joints of a skeleton, and temporal edges connected the same joints across time. A fully connected layer was used at the end of the network to predict future skeletons. Zeng et al. [28] proposed a hierarchical spatio-temporal GCN, where high-level representations encoded the trajectories of people and the interactions among multiple identities, while low-level skeleton graph representations encoded the local body posture of each person. The method was proposed to detect anomalous human behaviours in both sparse and dense scenes. The inputs were organized into spatio-temporal skeleton graphs, whose nodes were human body joints from multiple frames, and fed to the network, which was trained on skeleton graph representations of normal activities. Optical flow fields and the size of skeleton bounding boxes were used to distinguish sparse and dense scenes: for dense scenes with crowds, higher weights were assigned to the high-level representations, while for sparse scenes, the weights of the low-level graph representations were increased. During testing, the prediction errors from the different branches were weighted and combined to obtain the final anomaly score. Fan et al. [29] proposed a GRU feed-forward network trained to predict the next skeleton from past skeleton sequences, with a loss function that incorporated the range and speed of the predicted skeletons. Pang et al. [30] proposed a skeleton transformer to predict future pose components in video frames, using the error between the predicted pose components and the corresponding expected values as the anomaly score. They applied a multi-head self-attention module to capture long-range dependencies between arbitrary pairwise pose components, and a temporal convolutional layer to concentrate on local temporal information.
Challenges: In these methods, it is difficult to choose how far into the future (or past) the prediction should be made to achieve optimal results. This could potentially be determined empirically; however, in the absence of a validation set, such solutions remain elusive. Future-prediction-based methods can also be sensitive to noise in the past data [44]: small changes in the past can result in significant variation in the prediction, and not all of these changes signify anomalous situations.
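The prediction-error scoring that these methods share can be sketched independently of any learned model. Below, a constant-velocity extrapolation stands in for the trained GRU/GCN predictor (a deliberate simplification); the anomaly score is the mean L2 error between the predicted and observed keypoints of the last frame.

```python
import numpy as np

def predict_next(past):
    # Constant-velocity baseline standing in for a learned predictor:
    # extrapolate each keypoint from the last two observed frames.
    return past[-1] + (past[-1] - past[-2])

def prediction_score(window):
    # window: (T, K, 2) array of K 2-D keypoints over T frames.
    # Score = mean L2 distance between the predicted and observed
    # keypoints of the final frame.
    pred = predict_next(window[:-1])
    return float(np.linalg.norm(pred - window[-1], axis=-1).mean())

# Smooth linear motion is predictable (score ~ 0) ...
t = np.arange(5, dtype=float)
walk = np.stack([np.stack([t * 0.1, np.zeros(5)], axis=-1)] * 17, axis=1)
# ... while an abrupt jump in the last frame yields a high score.
jump = walk.copy()
jump[-1] += 2.0
```

Note how the baseline also exhibits the noise sensitivity discussed above: any jitter in the last two frames propagates directly into the prediction.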
C. Combinations of Learning Approaches
In this section, we discuss existing methods that utilize a combination of different learning approaches, namely reconstruction and prediction approaches, and reconstruction and clustering approaches.
1) Combination of reconstruction and prediction approaches: Some skeletal video anomaly detection methods utilize a multi-objective loss function consisting of both reconstruction and prediction errors to learn the characteristics of skeletons signifying normal behaviour, and identify skeletons with large errors as anomalies. Morais et al. [17] proposed a method to model normal human movements in surveillance videos using human skeletons and their relative positions in the scene. The human skeletons were decomposed into two sub-components: global body movement and local body posture. The global movement tracked the dynamics of the whole body in the scene, while the local posture described the skeleton configuration. The two components were passed as input to different branches of a message-passing GRU single-encoder-dual-decoder network. The branches processed their data separately and interacted via cross-branch message passing at each time step. Each branch had an encoder, a reconstruction-based decoder and a prediction-based decoder. The network was trained on normal data, and during testing, a frame-level anomaly score was generated by aggregating the anomaly scores of all the skeletons in a frame to identify anomalous frames. To avoid the inaccuracy caused by incorrect detection of skeletons in video frames, the authors left out video frames where the skeletons could not be estimated by the pose estimation algorithm. Hence, the results in this work are not a good representation of a real-world scenario, which often consists of complex scenes with occluding objects and overlapping movement of people. Boekhoudt et al. [7] utilized the network proposed by Morais et al. [17] for detecting human crime-based anomalies in videos using a newly proposed crime-based video surveillance dataset. Similar to the work by Morais et al. [17], Li and Zhang [31] proposed a dual-branch single-encoder-dual-decoder GRU network trained on normal-behaviour skeletons estimated from pedestrian videos. The two decoders were responsible for reconstructing the input skeletons and predicting future skeletons, respectively; however, unlike the work by Morais et al. [17], there was no message passing between the branches. Li et al. [32] proposed a single-encoder-dual-decoder architecture built on a spatio-temporal Graph CAE (GCAE) with an embedded LSTM network in its hidden layers. The two decoders were used to reconstruct the input skeleton sequences and predict the unseen future sequences, respectively, from the latent vectors projected via the encoder. The sum of the maximum reconstruction and prediction errors among all the skeletons within a frame was used as the anomaly score for detecting anomalous frames. Wu et al. [33] proposed a GCN-based encoder-decoder architecture trained on normal-action skeleton graphs and keypoint confidence scores to detect anomalous human actions in surveillance videos. The skeleton graph input was decomposed into global and local components. The network consisted of three encoder-decoder pipelines: the global pipeline, the local pipeline and the confidence score pipeline. The global and local encoder-decoder pipelines learned to reconstruct and predict the global and local components, respectively, while the confidence score pipeline learned to reconstruct the confidence scores. Further, a Support Vector Data Description (SVDD)-based loss was employed to learn the boundary of the normal-action global and local pipeline encoder outputs in the latent feature space. The network was trained using a multi-objective loss function composed of a weighted sum of skeleton graph reconstruction and prediction losses, confidence score reconstruction loss and multi-center SVDD loss.
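At test time, the scoring used by this family of methods reduces to a weighted combination of the two error terms per skeleton. A minimal sketch follows; the weight `w` is a free combination coefficient, not a value taken from any of the reviewed papers.

```python
import numpy as np

def combined_scores(rec_err, pred_err, w=0.5):
    # Per-skeleton anomaly score as a weighted sum of reconstruction
    # and prediction errors. In practice w must be chosen without a
    # validation set, which is one of the difficulties of these methods.
    rec_err, pred_err = np.asarray(rec_err), np.asarray(pred_err)
    return w * rec_err + (1.0 - w) * pred_err
```

With w=1 or w=0 the score degenerates to the pure reconstruction or prediction criteria of Sections II-A and II-B.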
2) Combination of reconstruction and clustering approaches: Some skeletal video anomaly detection methods utilize a two-stage approach to identify anomalous human actions using spatio-temporal skeleton graphs. In the first, pre-training stage, a GCAE-based model is trained to minimize the reconstruction loss on input skeleton graphs. In the second, fine-tuning stage, the latent features generated by the pre-trained GCAE encoder are fed to a clustering layer, and a Dirichlet process mixture model is used to estimate the distribution of the soft assignments of feature vectors to clusters. Finally, at test time, the Dirichlet normality score is used to identify anomalous samples. Markovitz et al. [34] identified that anomalous actions can be broadly classified into two categories: fine-grained and coarse-grained anomalies. Fine-grained anomaly detection refers to detecting abnormal variations of an action, e.g., an abnormal type of walking. Coarse-grained anomaly detection refers to defining particular normal actions and regarding other actions as abnormal, such as treating dancing as normal and gymnastics as abnormal. They utilized a spatio-temporal GCAE to map skeleton graphs representing normal actions to a latent space, which was soft-assigned to clusters using a deep clustering layer. The soft-assignment representation abstracted the type of data (fine- or coarse-grained) from the Dirichlet model. After pre-training of the GCAE, the latent feature output of the encoder and the clusters were fine-tuned by minimizing a multi-objective loss function consisting of both the reconstruction loss and the clustering loss. They leveraged the ShanghaiTech [45] dataset to test the performance of their proposed method on fine-grained anomalies, and the NTU-RGB+D [46] and Kinetics-250 [47] datasets for coarse-grained anomaly detection performance evaluation. Cui et al. [35] proposed a semi-supervised prototype generation-based method for video anomaly detection to reduce the computational cost associated with graph-embedded networks. Skeleton graphs for normal actions were estimated from the videos and fed as input to a shift spatio-temporal GCAE to generate features; it was not clear which pose estimation algorithm was used to estimate the skeletons from video frames. The generated features were fed to the proposed prototype generation module, designed to map the features to prototypes and update them during the training phase. In the pre-training step, the GCAE and the prototype generation module were optimized using a loss function composed of the reconstruction loss and the prototype generation loss. In the fine-tuning step, the entire network was fine-tuned using a multi-objective loss function composed of the reconstruction loss, the prototype generation loss and the cluster loss. Later, Liu et al. [36] used self-attention-augmented graph convolutions for detecting abnormal human behaviours based on skeleton graphs. Skeleton graphs were fed as input to a spatio-temporal self-attention-augmented GCAE, and latent features were extracted from the encoder part of the trained GCAE. After pre-training of the GCAE, the entire network was fine-tuned using a multi-objective loss function consisting of both the reconstruction loss and the clustering loss.
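At test time the two-stage pipeline reduces to scoring a latent feature against the learned clusters. As a hedged stand-in for the Dirichlet process mixture normality score, the sketch below scores features by their log-likelihood under an equal-weight, fixed-variance Gaussian mixture centred on the (here hard-coded, illustrative) cluster centers.

```python
import numpy as np

def normality_score(z, centers, var=1.0):
    # Log-likelihood (up to a constant) of latent features z under an
    # equal-weight Gaussian mixture centred on the learned clusters --
    # a simple stand-in for the Dirichlet process mixture normality
    # score used by the reviewed methods. Low score => anomalous.
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    log_p = -d2 / (2.0 * var)
    m = log_p.max(axis=1, keepdims=True)            # stable logsumexp
    return (m + np.log(np.exp(log_p - m).sum(axis=1, keepdims=True)))[:, 0]
```

A feature near any cluster center scores high; a feature far from all centers scores low and is flagged.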
Challenges: The combination-based methods can carry the limitations of the individual learning approaches, as described in Sections II-A and II-B. Further, in the absence of a validation set, it is difficult to determine the optimal values of the combination coefficients in a multi-objective loss function.
D. Other Approaches
This section discusses methods that leverage a pre-trained deep learning model to encode latent features from the input skeletons, and use approaches such as clustering and multivariate Gaussian distributions in conjunction for detecting human action-based anomalies in videos.
Yang et al. [37] proposed a two-stream fusion method to detect anomalies pertaining to body movements and object positions. YOLOv3 [48] was used to detect people and objects in the video frames. Subsequently, skeletons were estimated from the video frames and passed as input to a spatio-temporal GCN, followed by a clustering-based fully connected layer, to generate anomaly scores for the skeletons. The information pertaining to the bounding box coordinates and confidence scores of the detected objects was used to generate object anomaly scores. Finally, the skeleton and object normality scores were combined to generate the final anomaly score for a frame. Nanjun et al. [38] used the skeleton features estimated from the videos for pedestrian anomaly detection with an iterative self-training strategy. The training set consisted of unlabelled normal and anomalous video sequences. The skeletons were decomposed into global and local components, which were fed as input to an unsupervised anomaly detector, iForest [49], to yield pseudo-anomalous and pseudo-normal skeleton sets. The pseudo sets were used to train an anomaly scoring module consisting of a spatial GCN and fully connected layers with a single output unit. As part of the self-training strategy, new anomaly scores were generated using the previously trained anomaly scoring module to update the membership of skeleton samples in the skeleton sets. The scoring module was then retrained using the updated skeleton sets, until the best scoring model was obtained; however, the paper does not discuss the criteria for deciding the best scoring model. Tani and Shibata [39] proposed a framework for training a frame-wise Adaptive GCN (AGCN) for action recognition using single-frame skeletons, and used the features extracted from the AGCN to train an anomaly detection model. As part of the proposed framework, a pre-trained action recognition model [50] was used to identify the frames with large temporal attention in the Kinetics-skeleton dataset [51] as the action frames for training the AGCN. Further, the trained AGCN was used to extract features from the normal-behaviour skeletons identified in the ShanghaiTech Campus dataset [17] to model a multivariate Gaussian distribution. During testing, the Mahalanobis distance under this multivariate Gaussian distribution was used to calculate the anomaly score.
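The final scoring step of this last approach — Mahalanobis distance under a multivariate Gaussian fitted to normal-behaviour features — can be sketched directly. The feature dimensionality and data below are toy assumptions standing in for the AGCN features.

```python
import numpy as np

def fit_gaussian(features):
    # Fit a multivariate Gaussian to normal-behaviour features.
    mu = features.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(features, rowvar=False))
    return mu, cov_inv

def mahalanobis(x, mu, cov_inv):
    # Anomaly score: Mahalanobis distance of a test feature x from
    # the fitted normal-behaviour distribution.
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(1)
feats = rng.normal(size=(500, 4))       # toy stand-in for AGCN features
mu, cov_inv = fit_gaussian(feats)
```

Unlike a raw Euclidean distance, this score accounts for the covariance of the normal features, so directions with naturally high variance are penalized less.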
Challenges: The performance of these methods relies on the pre-training strategy of the deep learning models used to learn the latent features, and on the choice of training parameters for the subsequent machine learning models.
III. DISCUSSION
This section leverages Table I and synthesizes the information and trends that can be inferred from the existing work on skeletal video anomaly detection.
• ShanghaiTech [45] and CUHK Avenue [52] were the most frequently used video datasets for evaluating the performance of skeletal video anomaly detection methods. The ShanghaiTech dataset has videos of people walking along a sidewalk of ShanghaiTech University; anomalous activities include bikers, skateboarders and people fighting. It has 330 training videos and 107 test videos. However, not all the anomalous activities are related to humans. A subset of the ShanghaiTech dataset containing anomalous activities related only to humans, termed HR ShanghaiTech, was used in many papers. The CUHK Avenue dataset consists of short video clips looking at the side of a building with pedestrians walking by it; concrete columns that are part of the building cause some occlusion. The dataset contains 16 training videos and 21 testing videos. The anomalous events comprise actions such as “throwing papers”, “throwing bag”, “child skipping”, “wrong direction” and “bag on grass”. Similarly, a subset of the CUHK Avenue dataset containing anomalous activities related only to humans, called HR Avenue, has been used to evaluate the methods. Other video datasets that have been used include UTD-MHAD [53], UMN [54], UCSD Pedestrian [6], IITB-Corridor [27], HR Crime [7], NTU-RGB+D [46] and Kinetics-250 [47]. From the types of anomalies present in these datasets, it can be inferred that the existing skeletal video anomaly detection methods have been evaluated mostly on individual human action-based anomalies. Hence, it is not clear how well they can detect anomalies that involve interactions among multiple individuals, or interactions between people and objects.
• Most of the papers (19 out of 21) detected anomalous human actions for multiple people in the video scene. The usual approach was to estimate the skeletons of the people in the scene using a pose estimation algorithm and calculate an anomaly score for each skeleton. The maximum anomaly score among all the skeletons within a frame was then used to identify anomalous frames. A single video frame can contain multiple people, not all of whom are performing anomalous actions; taking the maximum anomaly score over all the skeletons nullifies the effect of people with normal actions on the final decision for the frame. Further, calculating anomaly scores for individual skeletons helps to localize the source of the anomaly within a frame.
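This max-over-skeletons aggregation is simple enough to state precisely. The sketch below also returns the index of the highest-scoring skeleton, which is what localizes the anomaly within the frame; the score assigned to skeleton-free frames is a policy choice, not something the reviewed papers standardize.

```python
import numpy as np

def frame_anomaly(skeleton_scores):
    # Frame-level score = max over per-skeleton anomaly scores; the
    # argmax localizes the anomalous person within the frame.
    # Frames with no detected skeletons get score 0 here (a policy
    # choice -- missed detections then silently pass as normal).
    scores = np.asarray(skeleton_scores, dtype=float)
    if scores.size == 0:
        return 0.0, None
    return float(scores.max()), int(scores.argmax())
```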
• The definition of anomalous human behaviour can differ across applications. While most of the existing papers focused on detecting anomalous human behaviours in general, four papers focused on detecting anomalous behaviours for specific applications, namely drunk walking [23], poor body movements in children [24], abnormal pedestrian behaviours at grade crossings [25] and crime-based anomalies [7]. Further, the nature of anomalous behaviours can vary depending on factors such as the time span, crowded scenes, and specific action-based anomalies. Some papers identified and addressed the need to detect specific types of anomalies, namely multi-timescale anomalies occurring over different time durations [27], anomalies in both sparse and crowded scenes [28], fine- and coarse-grained anomalies [34], and body movement and object position anomalies [37].
• Alphapose [15] and Openpose [55] were the most common choices of pose estimation algorithm for extracting the skeletons of the people in the scene. Other pose estimation methods that have been used are Posenet [56] and HRNet [57]. However, in general, the papers did not provide any rationale behind their choice of pose estimation algorithm.
• The models used in the papers can broadly be divided into two types: sequence-based and graph-based. The sequence-based models, which include 1DConv-AE, LSTM-AE, GRU and Transformer, treat the skeleton keypoints of individual people across multiple frames as time-series input. The graph-based models, which include GCAE and GCN, receive spatio-temporal skeleton graphs of individual people as input. These spatio-temporal graphs are constructed by considering the body joints as the nodes of the graph, with spatial edges connecting different joints of a skeleton and temporal edges connecting the same joints across time.
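Such a spatio-temporal graph can be encoded as a single adjacency matrix over (frame, joint) nodes. The bone list below is a toy four-joint chain, not the actual Openpose/Alphapose keypoint layout.

```python
import numpy as np

# Toy 4-joint chain; real Openpose/Alphapose skeletons have more
# joints and a different bone list -- this is only an illustration.
BONES = [(0, 1), (1, 2), (2, 3)]

def st_adjacency(n_joints, n_frames, bones):
    # Adjacency over (frame, joint) nodes: spatial edges follow the
    # bone list within each frame; temporal edges link the same joint
    # in consecutive frames. Node index = frame * n_joints + joint.
    n = n_joints * n_frames
    a = np.zeros((n, n), dtype=int)
    for t in range(n_frames):
        off = t * n_joints
        for i, j in bones:                       # spatial edges
            a[off + i, off + j] = a[off + j, off + i] = 1
        if t + 1 < n_frames:                     # temporal edges
            for j in range(n_joints):
                a[off + j, off + n_joints + j] = 1
                a[off + n_joints + j, off + j] = 1
    return a
```

A GCN then operates on this adjacency (typically normalized) together with per-node keypoint coordinates as features.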
• The Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) curve was the most common metric used to evaluate performance among the existing skeletal video anomaly detection methods. Other evaluation metrics include F score, accuracy, Equal Error Rate (EER) and the AUC of the Precision-Recall (PR) curve. The EER signifies the percentage of misclassified frames at the point on the ROC curve where the false positive rate equals the miss rate. While AUC(ROC) can provide a good estimate of a classifier's performance over different thresholds, it can be misleading when the data is imbalanced [58]. In an anomaly detection scenario, it is common to have imbalanced test data, as anomalous behaviours occur infrequently, particularly in many medical applications [59], [60]. The AUC(PR) value provides a good estimate of a classifier's performance on imbalanced datasets [58]; however, only one of the papers used AUC(PR) as an evaluation metric.
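AUC(ROC) has a direct probabilistic reading — the chance that a randomly drawn anomalous frame scores above a randomly drawn normal one — which the rank-based (Mann-Whitney) formulation below computes without tracing the curve. For simplicity it assumes distinct scores (no tie handling).

```python
import numpy as np

def auc_roc(scores, labels):
    # Rank-based AUC(ROC): probability that a random positive
    # (anomalous) sample scores higher than a random negative
    # (normal) one. Assumes all scores are distinct.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A perfectly separating scorer gives 1.0 regardless of class imbalance, which is exactly why the PR analogue is the more informative summary on heavily imbalanced test sets.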
• The highest AUC(ROC) values reported for the ShanghaiTech [45] and CUHK Avenue [52] datasets across the methods in Table I were 0.83 and 0.92, respectively. A direct comparison may not be possible due to differences in the experimental setup and train-test splits across the reviewed methods; however, these results give some confidence in the viability of these approaches for skeletal video anomaly detection.
IV. CHALLENGES AND FUTURE DIRECTIONS
In general, the efficiency of skeletal video anomaly detection algorithms depends on the accuracy of the skeletons estimated by the pose estimation algorithm. If the pose estimation algorithm misses certain joints or produces artifacts in the scene, it can increase the number of false alarms. There are various challenges associated with estimating skeletons from video frames [61]: (i) complex body configurations causing self-occlusions and complex poses, (ii) diverse appearance, including clothing, and (iii) complex environments with occlusion from other people in the scene, various viewing angles, distance from the camera and truncation of body parts in the camera view. These factors can lead to poor approximations of the skeletons and can negatively impact the performance of the anomaly detection algorithms. Methods have been proposed to address some of these challenges [62], [63]; however, extracting skeletons in complex environments remains a difficult problem. Some of the existing methods manually remove inaccurate and false skeletons [17], [25] to train the model, which is impractical in many real-world applications where the amount of available data is very large. There is a need for an automated false-skeleton identification and removal step when estimating skeletons from videos.
Skeletons collected using the Microsoft Kinect (depth) camera have been
used in past studies [64], [65]. However, the discontinued production of
the Microsoft Kinect camera [66] has led to hardware constraints on the
further development of skeletal anomaly detection approaches. Other
commercial products include Vicon [67], with optical sensors, and
TheCaptury [68], with multiple cameras; however, they function in very
constrained environments or require special markers on the human body.
New cameras, such as ‘Sentinare 2’ from AltumView [69], circumvent such
hardware requirements by directly processing videos on regular RGB
cameras and transmitting skeleton information in real time. The existing
approaches for skeletal video anomaly detection involve spatio-temporal
skeleton graphs [16] or temporal sequences [17], which are constructed by
tracking an individual across multiple frames. However, this is
challenging in scenarios where there are multiple people within a scene.
The entry and exit of people in the scene, overlapping of people during
movement and the presence of occluding objects make tracking people
across frames a very challenging task. These methods can also face
deployment issues because the choice of anomaly threshold is not clear.
In the absence of any validation set (containing both normal and unseen
anomalies) in an anomaly detection setting, it is very hard to fine-tune
an operating threshold using just the training data (comprising normal
activities only). To handle these situations, outliers within the normal
activities can be used as a proxy for unseen anomalies [70]; however,
inappropriate choices can lead to increased false alarms or missed
alarms. Domain expertise, where available, can be utilized to adjust the
threshold.
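One simple way to operationalize the proxy-outlier idea [70] is to score the normal-only training data and place the operating threshold at a high quantile of those scores, so that the top fraction of normal scores stands in for unseen anomalies. A hedged sketch, with the quantile value `q` as a tunable assumption rather than a recommendation:

```python
import numpy as np

# Hypothetical sketch: pick an operating threshold from normal-only
# training scores by treating the top (1 - q) fraction as proxy outliers.
def pick_threshold(train_scores, q=0.99):
    """train_scores: anomaly scores of normal training samples."""
    return float(np.quantile(train_scores, q))

def flag_anomalies(scores, threshold):
    """Flag test samples whose anomaly score exceeds the threshold."""
    return [s > threshold for s in scores]
```

Choosing `q` too low inflates false alarms, while choosing it too high misses true anomalies, which mirrors the trade-off described above.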
The anomalous human behaviours of interest and their difficulty of
detection can vary depending upon the definition of anomaly, the
application, the time span of the anomalous actions, and the presence of
single or multiple people in the scenes. For example, in a driver anomaly
detection application, the anomalous behaviours can include talking on
the phone, dozing off or drinking [14]. The anomalous actions can span
different time lengths, ranging from a few seconds to hours or days;
e.g., jumping and falls [70] are short-term anomalies, while loitering
and social isolation [71] are long-term events. More focus is needed on
developing methods that can identify both short- and long-term anomalies.
Sparse scene anomalies can be described as anomalies in scenes with a
small number of humans, while dense scene anomalies can be described as
anomalies in crowded scenes with a large number of humans [28]. It is
comparatively more difficult to identify anomalous behaviours in dense
scenes than in sparse scenes, due to the need to track multiple people
and compute their individual anomaly scores [17]. Thus, there is a need
to develop methods that can effectively identify both sparse and dense
scene anomalies. Further, there is a need to address the challenges
associated with the granularity and the decision-making time of skeletal
video anomaly detection methods for real-time applications. The existing
methods mostly output decisions at the frame level, which becomes an
issue when the input to the method is a real-time continuous video stream
at multiple frames per second. This can lead to alarms going off multiple
times a second, which can be counter-productive. One solution is for the
methods to make decisions on a time-window basis, with each window of a
specified duration. However, this raises the question of the optimal
length of each decision window. A short window is impractical as it can
lead to frequent and repetitive alarms, while a long window can lead to
missed alarms, and delayed response and intervention. Domain knowledge
can be used to make a decision about the length of decision windows.
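A minimal sketch of such window-level decision making, aggregating frame-level flags by a vote over fixed-length, non-overlapping windows. The window length and vote fraction below are assumptions that would need to be set with domain knowledge, as discussed above:

```python
# Hypothetical sketch: aggregate per-frame anomaly decisions into one
# decision per fixed-length window, so an alarm fires at most once per
# window rather than many times a second.
def window_decisions(frame_flags, window_size=30, min_fraction=0.5):
    """frame_flags: per-frame booleans; returns one boolean per full window."""
    decisions = []
    for start in range(0, len(frame_flags) - window_size + 1, window_size):
        window = frame_flags[start:start + window_size]
        # Raise a window-level alarm only if enough frames were flagged.
        decisions.append(sum(window) / window_size >= min_fraction)
    return decisions
```

For a 30 fps stream, `window_size=30` corresponds to one decision per second; longer windows trade alarm frequency against response latency.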
Skeletons can be used in conjunction with optical flow [72] to develop
privacy-protecting approaches that jointly learn from temporal and
structural modalities. Approaches based on federated learning (which
combine only the models, not the individual data) can further improve
the privacy of these methods [73]. Segmentation masks [74] can be
leveraged in conjunction with skeletons to occlude humans, while
capturing the information pertaining to the scene and human motion, to
develop privacy-protecting anomaly detection approaches.
The skeletons convey motion and posture information for the individual
humans in the video; however, they lack information regarding
human-human and human-object interactions. Information pertaining to the
interaction of people with each other and with objects in the
environment is important for applications such as violence detection
[7], theft detection [7] and agitation detection [60] in care home
settings. Skeletons can be used to replace the bodies of the
participants, while keeping the background information in video frames
[75], to analyze both human-human and human-object interaction
anomalies. Further, object bounding boxes can be used in conjunction
with human skeletons to model human-object interaction while preserving
the privacy of humans in the scene. Information from other modalities
(e.g., wearable devices), along with skeleton features, can be used to
develop multi-modal anomaly detection methods to improve detection
performance.
As can be seen in Table I, the existing skeletal video anomaly detection
methods and available datasets focus on detecting irregular body
postures [16] and anomalous human actions [30], mostly in outdoor
settings, and not in healthcare settings, such as personal homes and
long-term care homes. This is a gap for real-world deployment, as there
is a need to extend the scope of detecting anomalous behaviours using
skeletons to in-home and care home settings, where privacy is a very
important concern. This can be utilized to address important
applications, such as fall detection [76], agitation detection [60],
[75], and independent assistive living. This will help to develop
supportive homes and communities and encourage autonomy and independence
among the increasing older population and dementia residents in care
homes. While leveraging skeletons helps to remove facial identity and
appearance-based information, it is important to ask whether skeletons
can be considered private enough [77], [78] and what steps can be taken
to further anonymize them.
V. CONCLUSION
In this paper, we provided a survey of recent works that leverage the
skeletons or body joints estimated from videos for the anomaly detection
task. Skeletons hide the facial identity and overall appearance of
people while providing vital information about joint angles [79],
walking speed [80], and interaction with other people in the scene [17].
Our literature review showed that many deep learning-based approaches
leverage reconstruction error, prediction error and their combinations
to successfully detect anomalies in a privacy-protecting manner. This
review suggests first steps towards increasing the adoption of devices
(and algorithms) focused on improving privacy in residential or communal
settings, and will further support the deployment of anomaly detection
systems to improve the safety and care of people. Skeleton-based anomaly
detection methods can be used to design privacy-preserving technologies
for the assisted living of older adults in a care environment [81], or
to enable older adults to live independently in their own homes to cope
with the increasing cost of long-term care demands [82].
Privacy-preserving methods using skeleton features can be employed to
assist with skeleton-based rehab exercise monitoring [83] or in social
robots for robot-human interaction [84] that assist older people in
their activities of daily living.
VI. ACKNOWLEDGEMENTS
This work was supported by AGE-WELL NCE Inc, Alzheimer’s Association,
Natural Sciences and Engineering Research Council and UAE Strategic
Research Grant.
REFERENCES
[1] S. S. Khan and M. G. Madden, “One-class classification: taxonomy of study and review of techniques,” The Knowledge Engineering Review, vol. 29, no. 3, pp. 345–374, 2014.
[2] V. Chandola, A. Banerjee, and V. Kumar, “Anomaly detection: A survey,” ACM Computing Surveys (CSUR), vol. 41, no. 3, pp. 1–58, 2009.
[3] C. Gautam, P. K. Mishra, A. Tiwari, B. Richhariya, H. M. Pandey, S. Wang, M. Tanveer, A. D. N. Initiative et al., “Minimum variance-embedded deep kernel regularized least squares method for one-class classification and its applications to biomedical data,” Neural Networks, vol. 123, pp. 191–216, 2020.
[4] P. K. Mishra, C. Gautam, and A. Tiwari, “Minimum variance embedded auto-associative kernel extreme learning machine for one-class classification,” Neural Computing and Applications, vol. 33, no. 19, pp. 12973–12987, 2021.
[5] J. Nogas, S. S. Khan, and A. Mihailidis, “Deepfall: Non-invasive fall detection with deep spatio-temporal convolutional autoencoders,” Journal of Healthcare Informatics Research, vol. 4, no. 1, pp. 50–70, 2020.
[6] W. Li, V. Mahadevan, and N. Vasconcelos, “Anomaly detection and localization in crowded scenes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 1, pp. 18–32, 2013.
[7] K. Boekhoudt, A. Matei, M. Aghaei, and E. Talavera, “Hr-crime: Human-related anomaly detection in surveillance videos,” in International Conference on Computer Analysis of Images and Patterns. Springer, 2021, pp. 164–174.
[8] A. Senior, Protecting Privacy in Video Surveillance. Springer, 2009, vol. 1.
[9] P. Climent-Pérez and F. Florez-Revuelta, “Protection of visual privacy in videos acquired with rgb cameras for active and assisted living applications,” Multimedia Tools and Applications, vol. 80, no. 15, pp. 23649–23664, 2021.
[10] B. Ye, S. S. Khan, B. Chikhaoui, A. Iaboni, L. S. Martin, K. Newman, A. Wang, and A. Mihailidis, “Challenges in collecting big data in a clinical environment with vulnerable population: Lessons learned from a study using a multi-modal sensors platform,” Science and Engineering Ethics, vol. 25, no. 5, pp. 1447–1466, 2019.
[11] P. Schneider, J. Rambach, B. Mirbach, and D. Stricker, “Unsupervised anomaly detection from time-of-flight depth images,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 231–240.
[12] V. Mehta, A. Dhall, S. Pal, and S. S. Khan, “Motion and region aware adversarial learning for fall detection with thermal imaging,” in 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021, pp. 6321–6328.
[13] S. Denkovski, S. S. Khan, B. Malamis, S. Y. Moon, B. Ye, and A. Mihailidis, “Multi visual modality fall detection dataset,” IEEE Access, vol. 10, pp. 106422–106435, 2022.
[14] O. Kopuklu, J. Zheng, H. Xu, and G. Rigoll, “Driver anomaly detection: A dataset and contrastive learning approach,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2021, pp. 91–100.
[15] H.-S. Fang, S. Xie, Y.-W. Tai, and C. Lu, “Rmpe: Regional multi-person pose estimation,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2334–2343.
[16] W. Luo, W. Liu, and S. Gao, “Normal graph: Spatial temporal graph convolutional networks based prediction network for skeleton based video anomaly detection,” Neurocomputing, vol. 444, pp. 332–337, 2021.
[17] R. Morais, V. Le, T. Tran, B. Saha, M. Mansour, and S. Venkatesh, “Learning regularity in skeleton trajectories for anomaly detection in videos,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 11996–12004.
[18] A. Dhall, O. Ramana Murthy, R. Goecke, J. Joshi, and T. Gedeon, “Video and image based emotion recognition challenges in the wild: Emotiw 2015,” in Proceedings of the 2015 ACM International Conference on Multimodal Interaction, 2015, pp. 423–426.
[19] B. Taati, S. Zhao, A. B. Ashraf, A. Asgarian, M. E. Browne, K. M. Prkachin, A. Mihailidis, and T. Hadjistavropoulos, “Algorithmic bias in clinical populations—evaluating and improving facial analysis technology in older adults with dementia,” IEEE Access, vol. 7, pp. 25527–25534, 2019.
[20] G. Menchetti, Z. Chen, D. J. Wilkie, R. Ansari, Y. Yardimci, and A. E. Çetin, “Pain detection from facial videos using two-stage deep learning,” in 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2019, pp. 1–5.
[21] X. Chen, J. Cheng, R. Song, Y. Liu, R. Ward, and Z. J. Wang, “Video-based heart rate measurement: Recent advances and future prospects,” IEEE Transactions on Instrumentation and Measurement, vol. 68, no. 10, pp. 3600–3615, 2018.
[22] T. Gatt, D. Seychell, and A. Dingli, “Detecting human abnormal behaviour through a video generated model,” in 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA). IEEE, 2019, pp. 264–270.
[23] O. Temuroglu, Y. Kawanishi, D. Deguchi, T. Hirayama, I. Ide, H. Murase, M. Iwasaki, and A. Tsukada, “Occlusion-aware skeleton trajectory representation for abnormal behavior detection,” in International Workshop on Frontiers of Computer Vision. Springer, 2020, pp. 108–121.
[24] S. Suzuki, Y. Amemiya, and M. Sato, “Skeleton-based visualization of poor body movements in a child’s gross-motor assessment using convolutional auto-encoder,” in 2021 IEEE International Conference on Mechatronics (ICM). IEEE, 2021, pp. 1–6.
[25] Z. Jiang, G. Song, Y. Qian, and Y. Wang, “A deep learning framework for detecting and localizing abnormal pedestrian behaviors at grade crossings,” Neural Computing and Applications, pp. 1–15, 2022.
[26] Z. Fan, S. Yi, D. Wu, Y. Song, M. Cui, and Z. Liu, “Video anomaly detection using cyclegan based on skeleton features,” Journal of Visual Communication and Image Representation, vol. 85, p. 103508, 2022.
[27] R. Rodrigues, N. Bhargava, R. Velmurugan, and S. Chaudhuri, “Multi-timescale trajectory prediction for abnormal human activity detection,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020, pp. 2626–2634.
[28] X. Zeng, Y. Jiang, W. Ding, H. Li, Y. Hao, and Z. Qiu, “A hierarchical spatio-temporal graph convolutional neural network for anomaly detection in videos,” IEEE Transactions on Circuits and Systems for Video Technology, pp. 1–1, 2021.
[29] B. Fan, P. Li, S. Jin, and Z. Wang, “Anomaly detection based on pose estimation and gru-ffn,” in 2021 IEEE Sustainable Power and Energy Conference (iSPEC). IEEE, 2021, pp. 3821–3825.
[30] W. Pang, Q. He, and Y. Li, “Predicting skeleton trajectories using a skeleton-transformer for video anomaly detection,” Multimedia Systems, pp. 1–14, 2022.
[31] Y. Li and Z. Zhang, “Video abnormal behavior detection based on human skeletal information and gru,” in International Conference on Intelligent Robotics and Applications. Springer, 2022, pp. 450–458.
[32] N. Li, F. Chang, and C. Liu, “Human-related anomalous event detection via spatial-temporal graph convolutional autoencoder with embedded long short-term memory network,” Neurocomputing, 2021.
[33] T.-H. Wu, C.-L. Yang, L.-L. Chiu, T.-W. Wang, G. J. Faure, and S.-H. Lai, “Confidence-aware anomaly detection in human actions,” in Asian Conference on Pattern Recognition. Springer, 2022, pp. 240–254.
[34] A. Markovitz, G. Sharir, I. Friedman, L. Zelnik-Manor, and S. Avidan, “Graph embedded pose clustering for anomaly detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10539–10547.
[35] T. Cui, W. Song, G. An, and Q. Ruan, “Prototype generation based shift graph convolutional network for semi-supervised anomaly detection,” in Chinese Conference on Image and Graphics Technologies. Springer, 2021, pp. 159–169.
[36] C. Liu, R. Fu, Y. Li, Y. Gao, L. Shi, and W. Li, “A self-attention augmented graph convolutional clustering networks for skeleton-based video anomaly behavior detection,” Applied Sciences, vol. 12, no. 1, p. 4, 2022.
[37] Y. Yang, Z. Fu, and S. M. Naqvi, “A two-stream information fusion approach to abnormal event detection in video,” in ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2022, pp. 5787–5791.
[38] N. Li, F. Chang, and C. Liu, “A self-trained spatial graph convolutional network for unsupervised human-related anomalous event detection in complex scenes,” IEEE Transactions on Cognitive and Developmental Systems, 2022.
[39] H. Tani and T. Shibata, “Frame-wise action recognition training framework for skeleton-based anomaly behavior detection,” in International Conference on Image Analysis and Processing. Springer, 2022, pp. 312–323.
[40] L. Song, G. Yu, J. Yuan, and Z. Liu, “Human pose estimation and its application to action recognition: A survey,” Journal of Visual Communication and Image Representation, vol. 76, p. 103055, 2021.
[41] A. Badiola-Bengoa and A. Mendez-Zorrilla, “A systematic review of the application of camera-based human pose estimation in the field of sport and physical exercise,” Sensors, vol. 21, no. 18, p. 5996, 2021.
[42] S. Suzuki, Y. Amemiya, and M. Sato, “Enhancement of child gross-motor action recognition by motional time-series images conversion,” in 2020 IEEE/SICE International Symposium on System Integration (SII). IEEE, 2020, pp. 225–230.
[43] D. Gong, L. Liu, V. Le, B. Saha, M. R. Mansour, S. Venkatesh, and A. v. d. Hengel, “Memorizing normality to detect anomaly: Memory-augmented deep autoencoder for unsupervised anomaly detection,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 1705–1714.
[44] Y. Tang, L. Zhao, S. Zhang, C. Gong, G. Li, and J. Yang, “Integrating prediction and reconstruction for anomaly detection,” Pattern Recognition Letters, vol. 129, pp. 123–130, 2020.
[45] W. Luo, W. Liu, and S. Gao, “A revisit of sparse coding based anomaly detection in stacked rnn framework,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 341–349.
[46] A. Shahroudy, J. Liu, T.-T. Ng, and G. Wang, “Ntu rgb+d: A large scale dataset for 3d human activity analysis,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1010–1019.
[47] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev et al., “The kinetics human action video dataset,” arXiv preprint arXiv:1705.06950, 2017.
[48] J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.
[49] F. T. Liu, K. M. Ting, and Z.-H. Zhou, “Isolation-based anomaly detection,” ACM Transactions on Knowledge Discovery from Data (TKDD), vol. 6, no. 1, pp. 1–39, 2012.
[50] L. Shi, Y. Zhang, J. Cheng, and H. Lu, “Two-stream adaptive graph convolutional networks for skeleton-based action recognition,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 12026–12035.
[51] S. Yan, Y. Xiong, and D. Lin, “Spatial temporal graph convolutional networks for skeleton-based action recognition,” in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[52] C. Lu, J. Shi, and J. Jia, “Abnormal event detection at 150 fps in matlab,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 2720–2727.
[53] C. Chen, R. Jafari, and N. Kehtarnavaz, “Utd-mhad: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor,” in 2015 IEEE International Conference on Image Processing (ICIP). IEEE, 2015, pp. 168–172.
[54] “Umn,” http://mha.cs.umn.edu/proj_events.shtml.
[55] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, “Realtime multi-person 2d pose estimation using part affinity fields,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7291–7299.
[56] G. Papandreou, T. Zhu, N. Kanazawa, A. Toshev, J. Tompson, C. Bregler, and K. Murphy, “Towards accurate multi-person pose estimation in the wild,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4903–4911.
[57] K. Sun, B. Xiao, D. Liu, and J. Wang, “Deep high-resolution representation learning for human pose estimation,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 5693–5703.
[58] T. Saito and M. Rehmsmeier, “The precision-recall plot is more informative than the roc plot when evaluating binary classifiers on imbalanced datasets,” PloS One, vol. 10, no. 3, p. e0118432, 2015.
[59] Y. M. Galvão, L. Portela, J. Ferreira, P. Barros, O. A. D. A. Fagundes, and B. J. Fernandes, “A framework for anomaly identification applied on fall detection,” IEEE Access, vol. 9, pp. 77264–77274, 2021.
[60] S. S. Khan, P. K. Mishra, N. Javed, B. Ye, K. Newman, A. Mihailidis, and A. Iaboni, “Unsupervised deep learning to detect agitation from videos in people with dementia,” IEEE Access, vol. 10, pp. 10349–10358, 2022.
[61] Y. Chen, Y. Tian, and M. He, “Monocular human pose estimation: A survey of deep learning-based methods,” Computer Vision and Image Understanding, vol. 192, p. 102897, 2020.
[62] Y. Cheng, B. Yang, B. Wang, W. Yan, and R. T. Tan, “Occlusion-aware networks for 3d human pose estimation in video,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 723–732.
[63] S. Gong, T. Xiang, and S. Hongeng, “Learning human pose in crowd,” in Proceedings of the 1st ACM International Workshop on Multimodal Pervasive Video Analysis, 2010, pp. 47–52.
[64] T.-N. Nguyen, H.-H. Huynh, and J. Meunier, “Skeleton-based abnormal gait detection,” Sensors, vol. 16, no. 11, p. 1792, 2016.
[65] R. Baptista, G. Demisse, D. Aouada, and B. Ottersten, “Deformation-based abnormal motion detection using 3d skeletons,” in 2018 Eighth International Conference on Image Processing Theory, Tools and Applications (IPTA). IEEE, 2018, pp. 1–6.
[66] T. Warren, “Microsoft kills off Kinect, stops manufacturing it,” https://www.theverge.com/2017/10/25/16542870/microsoft-kinect-dead-stop-manufacturing, 2017, [Online; accessed 23-February-2022].
[67] “Vicon,” https://www.vicon.com/, 2019.
[68] “Thecaptury,” https://thecaptury.com/, 2019.
[69] AltumView, “Sentinare 2,” https://altumview.ca/, 2022, [Online; accessed 24-February-2022].
[70] S. S. Khan, M. E. Karg, D. Kulić, and J. Hoey, “Detecting falls with x-factor hidden markov models,” Applied Soft Computing, vol. 55, pp. 168–177, 2017.
[71] S. A. Boamah, R. Weldrick, T.-S. J. Lee, and N. Taylor, “Social isolation among older adults in long-term care: A scoping review,” Journal of Aging and Health, vol. 33, no. 7-8, pp. 618–632, 2021.
[72] E. Duman and O. A. Erdem, “Anomaly detection in videos using optical flow and convolutional autoencoder,” IEEE Access, vol. 7, pp. 183914–183923, 2019.
[73] A. Abedi and S. S. Khan, “Fedsl: Federated split learning on distributed sequential data in recurrent neural networks,” arXiv preprint arXiv:2011.03180, 2020.
[74] J. Yan, F. Angelini, and S. M. Naqvi, “Image segmentation based privacy-preserving human action recognition for anomaly detection,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 8931–8935.
[75] P. K. Mishra, A. Iaboni, B. Ye, K. Newman, A. Mihailidis, and S. S. Khan, “Privacy-protecting behaviours of risk detection in people with dementia using videos,” arXiv preprint arXiv:2212.10682, 2022.
[76] W. Feng, R. Liu, and M. Zhu, “Fall detection for elderly person care in a vision-based home surveillance environment using a monocular camera,” Signal, Image and Video Processing, vol. 8, no. 6, pp. 1129–1138, 2014.
[77] H. Wang and L. Wang, “Learning content and style: Joint action recognition and person identification from human skeletons,” Pattern Recognition, vol. 81, pp. 23–35, 2018.
[78] R. Liao, S. Yu, W. An, and Y. Huang, “A model-based gait recognition method with body pose and human prior knowledge,” Pattern Recognition, vol. 98, p. 107069, 2020.
[79] Q. Guo and S. S. Khan, “Exercise-specific feature extraction approach for assessing physical rehabilitation,” in 4th IJCAI Workshop on AI for Aging, Rehabilitation and Intelligent Assisted Living. IJCAI, 2021.
[80] J. Kovač and P. Peer, “Human skeleton model based dynamic features for walking speed invariant gait recognition,” Mathematical Problems in Engineering, vol. 2014, 2014.
[81] A. A. Chaaraoui, P. Climent-Pérez, and F. Flórez-Revuelta, “A review on vision techniques applied to human behaviour analysis for ambient-assisted living,” Expert Systems with Applications, vol. 39, no. 12, pp. 10873–10888, 2012.
[82] Y. Hbali, S. Hbali, L. Ballihi, and M. Sadgal, “Skeleton-based human activity recognition for elderly monitoring systems,” IET Computer Vision, vol. 12, no. 1, pp. 16–26, 2018.
[83] Š. Obdržálek, G. Kurillo, F. Ofli, R. Bajcsy, E. Seto, H. Jimison, and M. Pavel, “Accuracy and robustness of kinect pose estimation in the context of coaching of elderly population,” in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 2012, pp. 1188–1193.
[84] M. Garcia-Salguero, J. Gonzalez-Jimenez, and F.-A. Moreno, “Human 3d pose estimation with a tilting camera for social mobile robot interaction,” Sensors, vol. 19, no. 22, p. 4943, 2019.
Pratik K. Mishra obtained his Masters in Computer Science and Engineering
from the Indian Institute of Technology (IIT) Indore, India, in 2020. He
is currently pursuing his Ph.D. at the Institute of Biomedical
Engineering, University of Toronto (UofT), working on the application of
computer vision for detecting behaviours of risk in people with dementia.
Previously, he worked as a research volunteer at the Toronto
Rehabilitation Institute, Canada and as a Data Management Support
Specialist at IBM India Private Limited.

Alex Mihailidis, PhD, PEng, is the Barbara G. Stymiest Research Chair in
Rehabilitation Technology at the KITE Research Institute at University
Health Network/University of Toronto. He is the Scientific Director of
the AGE-WELL Network of Centres of Excellence, which focuses on the
development of new technologies and services for older adults. He is a
Professor in the Department of Occupational Science and Occupational
Therapy and in the Institute of Biomedical Engineering at the University
of Toronto (U of T), and also holds a cross-appointment in the Department
of Computer Science at the U of T. Mihailidis is very active in the
rehabilitation engineering profession and is the Immediate Past President
of the Rehabilitation Engineering and Assistive Technology Society of
North America (RESNA); he was named a Fellow of RESNA in 2014, one of the
highest honours within this field of research and practice. His research
disciplines include biomedical and biochemical engineering, computer
science, geriatrics and occupational therapy. Alex is an internationally
recognized researcher in the field of technology and aging. He has
published over 150 journal and conference papers in this field and
co-edited two books: Pervasive Computing in Healthcare and Technology and
Aging.

Shehroz S. Khan obtained his B.Sc Engineering, Masters and PhD degrees in
computer science in 1997, 2010 and 2016, respectively. He is currently
working as a Scientist at KITE – Toronto Rehabilitation Institute (TRI),
University Health Network, Canada. He is also cross-appointed as an
Assistant Professor at the Institute of Biomedical Engineering,
University of Toronto (UofT). Previously, he worked as a postdoctoral
researcher at the UofT and TRI. Prior to joining academia, he worked in
various scientific and research roles in industry and government. He is
an associate editor of the Journal of Rehabilitation and Assistive
Technologies. He has organized four editions of the peer-reviewed
workshop on AI in Aging, Rehabilitation and Intelligent Assisted Living,
held with top AI conferences (ICDM and IJCAI) from 2017-2021. His
research is funded through several granting agencies in Canada and
abroad, including NSERC, CIHR, AGEWELL, SSHRC, CABHI, AMS Healthcare, JP
Bickell Foundation, United Arab Emirates University and LG Electronics.
He has published 49 peer-reviewed research papers and his research focus
is the development of AI algorithms for solving aging related health
problems.