jackkuo committed on
Commit 6ed2b50 · verified · 1 Parent(s): 36553a6

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. -NAyT4oBgHgl3EQfdfcz/content/tmp_files/2301.00302v1.pdf.txt +419 -0
  2. -NAyT4oBgHgl3EQfdfcz/content/tmp_files/load_file.txt +341 -0
  3. -dAzT4oBgHgl3EQf_P7B/content/tmp_files/2301.01946v1.pdf.txt +2040 -0
  4. -dAzT4oBgHgl3EQf_P7B/content/tmp_files/load_file.txt +0 -0
  5. -dFST4oBgHgl3EQfcTgr/content/2301.13802v1.pdf +3 -0
  6. -dFST4oBgHgl3EQfcTgr/vector_store/index.faiss +3 -0
  7. -dFST4oBgHgl3EQfcTgr/vector_store/index.pkl +3 -0
  8. .gitattributes +84 -0
  9. 0NFQT4oBgHgl3EQfDTUM/content/2301.13233v1.pdf +3 -0
  10. 0NFQT4oBgHgl3EQfDTUM/vector_store/index.pkl +3 -0
  11. 0dAzT4oBgHgl3EQfDPpe/content/tmp_files/2301.00972v1.pdf.txt +1453 -0
  12. 0dAzT4oBgHgl3EQfDPpe/content/tmp_files/load_file.txt +0 -0
  13. 0dE4T4oBgHgl3EQfZgyj/content/tmp_files/2301.05057v1.pdf.txt +1463 -0
  14. 0dE4T4oBgHgl3EQfZgyj/content/tmp_files/load_file.txt +0 -0
  15. 0tFAT4oBgHgl3EQfCRy0/vector_store/index.pkl +3 -0
  16. 1tAyT4oBgHgl3EQf1flH/content/tmp_files/2301.00735v1.pdf.txt +1622 -0
  17. 1tAyT4oBgHgl3EQf1flH/content/tmp_files/load_file.txt +0 -0
  18. 29AyT4oBgHgl3EQfPvbO/content/tmp_files/2301.00032v1.pdf.txt +1717 -0
  19. 29AyT4oBgHgl3EQfPvbO/content/tmp_files/load_file.txt +0 -0
  20. 2NAyT4oBgHgl3EQfbve8/content/tmp_files/2301.00270v1.pdf.txt +1966 -0
  21. 2NAyT4oBgHgl3EQfbve8/content/tmp_files/load_file.txt +0 -0
  22. 3dE2T4oBgHgl3EQf6Ago/content/tmp_files/2301.04195v1.pdf.txt +1198 -0
  23. 3dE2T4oBgHgl3EQf6Ago/content/tmp_files/load_file.txt +0 -0
  24. 4NAzT4oBgHgl3EQfuv1g/vector_store/index.faiss +3 -0
  25. 4tAzT4oBgHgl3EQf9v5K/content/2301.01923v1.pdf +3 -0
  26. 4tAzT4oBgHgl3EQf9v5K/vector_store/index.faiss +3 -0
  27. 4tAzT4oBgHgl3EQf9v5K/vector_store/index.pkl +3 -0
  28. 6NE1T4oBgHgl3EQfBQJe/vector_store/index.pkl +3 -0
  29. 6tA0T4oBgHgl3EQfOP83/content/tmp_files/2301.02157v1.pdf.txt +970 -0
  30. 6tA0T4oBgHgl3EQfOP83/content/tmp_files/load_file.txt +0 -0
  31. 7tE1T4oBgHgl3EQfnQST/vector_store/index.faiss +3 -0
  32. 89E2T4oBgHgl3EQfQAYH/content/tmp_files/2301.03764v1.pdf.txt +0 -0
  33. 89E2T4oBgHgl3EQfQAYH/content/tmp_files/load_file.txt +0 -0
  34. 8dE3T4oBgHgl3EQfRwni/content/2301.04426v1.pdf +3 -0
  35. 8dE3T4oBgHgl3EQfRwni/vector_store/index.faiss +3 -0
  36. 8dE3T4oBgHgl3EQfRwni/vector_store/index.pkl +3 -0
  37. 99FQT4oBgHgl3EQfJzVJ/content/tmp_files/2301.13257v1.pdf.txt +1689 -0
  38. 99FQT4oBgHgl3EQfJzVJ/content/tmp_files/load_file.txt +0 -0
  39. A9E4T4oBgHgl3EQfEwz_/vector_store/index.faiss +3 -0
  40. AdFJT4oBgHgl3EQfrS3C/vector_store/index.faiss +3 -0
  41. B9AyT4oBgHgl3EQfePhg/content/2301.00317v1.pdf +3 -0
  42. B9AyT4oBgHgl3EQfePhg/vector_store/index.faiss +3 -0
  43. B9AyT4oBgHgl3EQfePhg/vector_store/index.pkl +3 -0
  44. CNE1T4oBgHgl3EQf9wYh/vector_store/index.pkl +3 -0
  45. CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf +3 -0
  46. D9E0T4oBgHgl3EQfQgCE/content/2301.02194v1.pdf +3 -0
  47. D9E0T4oBgHgl3EQfQgCE/vector_store/index.pkl +3 -0
  48. DNE2T4oBgHgl3EQf9Qm6/content/2301.04227v1.pdf +3 -0
  49. DNE2T4oBgHgl3EQf9Qm6/vector_store/index.pkl +3 -0
  50. DtE0T4oBgHgl3EQfQgBB/content/2301.02193v1.pdf +3 -0
-NAyT4oBgHgl3EQfdfcz/content/tmp_files/2301.00302v1.pdf.txt ADDED
@@ -0,0 +1,419 @@
+ arXiv:2301.00302v1 [math.CO] 31 Dec 2022
+ On Harmonious coloring of hypergraphs
+ Sebastian Czerwiński
+ Institute of Mathematics, University of Zielona Góra, Poland
+ January 3, 2023
+ Abstract
+ A harmonious coloring of a k-uniform hypergraph H is a vertex coloring such that no two vertices in the same edge have the same color, and each k-element subset of colors appears on at most one edge. The harmonious number h(H) is the least number of colors needed for such a coloring.
+ The paper contains a new proof of the upper bound $h(H) = O(\sqrt[k]{k!\,m})$ on the harmonious number of k-uniform hypergraphs of maximum degree ∆ with m edges. We use the local cut lemma of A. Bernshteyn.
+ 1 Introduction
+ Let H = (V, E) be a k-uniform hypergraph with vertex set V and edge set E. The set of edges is a family of k-element subsets of V, where k ≥ 2.
+ A rainbow coloring of a hypergraph H is a map $c : V \to \{1, \dots, r\}$ in which no two vertices in the same edge have the same color. If two vertices in the same edge e have the same color, we say that the edge e is bad.
+ A coloring c is called harmonious if c is a rainbow coloring and $c(e) \ne c(f)$ for every pair of distinct edges e, f ∈ E.
+ We say that distinct edges e and f have the same pattern of colors if $c(e \setminus f) = c(f \setminus e)$ and there is no uncolored vertex in the set $e \setminus f$.
+ Let h(H) be the least number of colors needed for a harmonious coloring of H. Bosek et al. (2016) proved the following.
+ Theorem 1 (Bosek et al. (2016)). For every ε > 0 and every ∆ > 0 there exist integers k0 and m0 such that every k-uniform hypergraph H with m edges (where m ≥ m0 and k ≥ k0) and maximum degree ∆ satisfies
+ \[ h(H) \le (1+\varepsilon)\,\frac{k}{k-1}\,\sqrt[k]{\Delta\,(k-1)\,k!\,m}. \]
+ Remark 1. The paper Bosek et al. (2016) contains an upper bound on the harmonious number
+ \[ h(H) \le \frac{k}{k-1}\,\sqrt[k]{\Delta\,(k-1)\,k!\,m} + 1 + \Delta^{2} + (k-1)\Delta + \sum_{i=2}^{k-1} \frac{i}{i-1}\,\sqrt[i]{\frac{(i-1)\,i\,(k-1)\,\Delta^{2}}{k-i}}. \]
+ The proof of this theorem is based on the entropy compression method, see Grytczuk et al. (2013); Esperet and Parreau (2013).
+ Because the number r of used colors must satisfy the inequality $\binom{r}{k} \ge m$, we get the lower bound $\Omega(\sqrt[k]{k!\,m})$. By these observations, it was conjectured by Bosek et al. (2016) that the following holds.
+ Conjecture 1. For each k, ∆ ≥ 2 there exists a constant c = c(k, ∆) such that every k-uniform hypergraph H with m edges and maximum degree ∆ satisfies
+ \[ h(H) \le \sqrt[k]{k!\,m} + c. \]
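+ For completeness (this one-line derivation is not spelled out in the original): in a harmonious coloring with r colors the m edges receive pairwise distinct k-element sets of colors, so
+ \[ m \le \binom{r}{k} \le \frac{r^{k}}{k!}, \qquad\text{hence}\qquad r \ge \sqrt[k]{k!\,m}, \]
+ which is exactly the lower bound mentioned above.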
+ This conjecture was posed by Edwards (1997b) for simple graphs. He proved there that
+ \[ h(G) \le (1 + o(1))\,\sqrt{2m}. \]
+ There are many results about the harmonious number of particular classes of graphs, see Aflaki et al. (2012); Akbari et al. (2012); Edwards (1997a); Edwards and McDiarmid (1994a); Edwards (1996); Edwards and McDiarmid (1994b); Krasikov and Roditty (1994) or Aigner et al. (1992); Balister et al. (2002, 2003); Bazgan et al. (1999); Burris and Schelp (1997).
+ This paper contains a proof of the theorem of Bosek et al.; we use a different method, the local cut lemma of Bernshteyn (2017, 2016). The proof is simpler and shorter than the original proof of Bosek et al.
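+ As a rough illustration (not part of the original paper, and relying on the reconstructed formulas above), the following Python sketch evaluates the Theorem 1 upper bound and the counting lower bound $\sqrt[k]{k!\,m}$ for a few sample parameters:
+ import math
+
+ def theorem1_upper(k, Delta, m, eps):
+     # (1 + eps) * k/(k-1) * (Delta*(k-1)*k!*m)^(1/k), the bound of Theorem 1
+     return (1 + eps) * k / (k - 1) * (Delta * (k - 1) * math.factorial(k) * m) ** (1 / k)
+
+ def counting_lower(k, m):
+     # (k!*m)^(1/k), the lower bound coming from binom(r, k) >= m
+     return (math.factorial(k) * m) ** (1 / k)
+
+ for k, Delta, m in [(3, 2, 10**4), (5, 3, 10**6)]:
+     print(k, Delta, m, round(counting_lower(k, m), 1), round(theorem1_upper(k, Delta, m, 0.1), 1))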
+ 2 A special version of the Local Cut Lemma
+ Let A be a family of subsets of a finite set I, that is, A ⊆ Pow(I). We say that A is downwards-closed if S ∈ A implies Pow(S) ⊆ A. A subset ∂A of I is called the boundary of a downwards-closed family A, where
+ \[ \partial A := \{ i \in I : S \in A \text{ and } S \cup \{i\} \notin A \text{ for some } S \subseteq I \setminus \{i\} \}. \]
+ Let $\tau : I \to [1, +\infty)$ be a function; then for every X ⊆ I we denote by τ(X) the number
+ \[ \tau(X) := \prod_{x \in X} \tau(x). \]
+ Let B be a random event, X ⊆ I and i ∈ I. We introduce two quantities:
+ \[ \sigma^{A}_{\tau}(B, X) := \max_{Z \subseteq I \setminus X} \Pr(B \text{ and } Z \cup X \notin A \mid Z \in A) \cdot \tau(X) \]
+ and
+ \[ \sigma^{A}_{\tau}(B, i) := \min_{i \in X \subseteq I} \sigma^{A}_{\tau}(B, X). \]
+ If Pr(Z ∈ A) = 0, then Pr(P | Z ∈ A) = 0 for all events P.
+ Theorem 2 (Bernshteyn (2017)). Let I be a finite set. Let Ω be a probability space and let A : Ω → Pow(Pow(I)) be a random variable such that with probability 1, A is a nonempty downwards-closed family of subsets of I. For each i ∈ I, let B(i) be a finite collection of random events such that whenever i ∈ ∂A, at least one of the events in B(i) holds. Suppose that there is a function $\tau : I \to [1, +\infty)$ such that for all i ∈ I we have
+ \[ \tau(i) \ge 1 + \sum_{B \in B(i)} \sigma^{A}_{\tau}(B, i). \]
+ Then Pr(I ∈ A) ≥ 1/τ(I) > 0.
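+ In the next section the lemma is applied with a constant function τ; it may help to note (this remark is not in the original) how the hypothesis and the conclusion then read:
+ \[ \tau(v) \equiv \tau \ \Rightarrow\ \tau(X) = \tau^{|X|}, \qquad \tau \ge 1 + \sum_{B \in B(i)} \sigma^{A}_{\tau}(B, i) \ \Rightarrow\ \Pr(I \in A) \ge \tau^{-|I|} > 0. \]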
+ 3 Proof of the theorem
+ We choose a coloring $c : V \to \{1, \dots, t\}$ uniformly at random. Let A be a subset of the power set of V given by
+ \[ A := \{ S \subseteq V : c \text{ is a harmonious coloring of } H(S) \}. \]
+ With probability 1 it is a nonempty downwards-closed family (the empty set is always an element of A).
+ By ∂A, we denote the set of all vertices v such that there is an element X of A such that the coloring c is not a harmonious coloring of X ∪ {v}. If the coloring c is not a harmonious coloring, there is a bad edge or there are two different edges with the same pattern of colors. So, we define for every v ∈ V a collection B(v) as a union of sets:
+ \[ B^{1}(v) := \{ B_e : v \in e \in E(H) \text{ and } e \text{ is not properly colored} \} \]
+ and for every i ∈ {0, 1, . . . , k − 1}
+ \[ B^{2}_{i}(v) := \{ B_{e,f} : v \in e,\ f \in E(H) \text{ and } c(e) = c(f),\ |e \setminus f| = i \}. \]
+ That is, $B(v) = B^{1}(v) \cup \bigcup_{i=1}^{k-1} B^{2}_{i}(v)$.
+ We assume that the event $B_e$ happens if and only if the edge e is a bad edge, and the event $B_{e,f}$ happens if and only if the edges e and f have the same pattern of colors.
+ We also assume that the function τ is a constant function, that is, τ(v) = τ ∈ [1, +∞). This implies that for any subset S of V, we have $\tau(S) = \tau^{|S|}$.
+ Now, we must find an upper bound on
+ \[ \sigma^{A}_{\tau}(B, v) = \min_{X \subseteq V :\, v \in X}\ \max_{Z \subseteq V \setminus X} \Pr(B \wedge Z \cup X \notin A \mid Z \in A)\, \tau(X), \]
+ where v ∈ V and B ∈ B(v). We will use the estimate $\sigma^{A}_{\tau}(B, v) \le \max_{Z \subseteq V \setminus X} \Pr(B \mid Z \in A)\, \tau(X)$ for a suitable choice of X. Now, we consider two cases.
+ Case 1: $B \in B^{1}(v)$, i.e. $B = B_e$.
+ We choose as X the vertex set of e. Because the colors of distinct vertices are independent, we get the upper bound $\sigma^{A}_{\tau}(B_e, v) \le \Pr(B_e)\,\tau^{k}$ (the events $B_e$ and "Z ∈ A" are independent). The probability $\Pr(B_e)$, whose complement is the probability that e is rainbow colored, fulfills
+ \[ \Pr(B_e) = 1 - \frac{t}{t} \cdot \frac{t-1}{t} \cdot \ldots \cdot \frac{t-k+1}{t} \le 1 - \Bigl(1 - \frac{k-1}{t}\Bigr)^{k-1}. \]
+ Through Bernoulli's inequality, we get
+ \[ \Pr(B_e) \le 1 - \Bigl(1 - \frac{k-1}{t} \cdot (k-1)\Bigr) = \frac{(k-1)^{2}}{t}. \]
+ So, $\Pr(B_e) \le \frac{k^{2}}{t}$.
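+ Here Bernoulli's inequality is used in the standard form (recalled for convenience; it is not restated in the original)
+ \[ (1+x)^{n} \ge 1 + nx \qquad (x \ge -1,\ n \in \mathbb{N}), \]
+ applied with $x = -\frac{k-1}{t}$ and $n = k-1$.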
+ Case 2: $B \in B^{2}_{i}(v)$, i.e. $B = B_{e,f}$ and $|e \setminus f| = i$.
+ Now, we set $X = e \setminus f$. The probability $\Pr(B_{e,f})$ is bounded above by $\frac{i!}{t^{i}}$. So, we get
+ \[ \sigma^{A}_{\tau}(B_{e,f}, v) \le \Pr(B_{e,f})\,\tau^{i} \le \frac{i!}{t^{i}}\,\tau^{i}. \]
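+ One way to see the bound on $\Pr(B_{e,f})$ (a short justification, not spelled out in the original): the colors of the i vertices of $X = e \setminus f$ are uniform and independent of the conditioning on Z ∈ A, and for e and f to have the same pattern of colors these vertices must receive exactly the i colors of $c(f \setminus e)$, in some order; at most i! of the $t^{i}$ equally likely assignments do so, hence
+ \[ \Pr(B_{e,f}) \le \frac{i!}{t^{i}}. \]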
+ To end the proof we must find upper bounds on the sizes of the sets $B^{1}(v)$, $B^{2}_{0}(v)$ and $B^{2}_{i}(v)$, where i > 0. Because the degree of a vertex is bounded above by ∆ and the number of edges is m, we get that
+ \[ |B^{1}(v)| \le \Delta \quad \text{and} \quad |B^{2}_{0}(v)| \le \Delta m. \]
+ The hardest part is an upper bound on $B^{2}_{i}(v)$, i > 0. The number of edges f such that $|e \setminus f| = i$ is bounded above by $\frac{k\Delta}{k-i}$: there are at most k∆ edges with a nonempty intersection with the edge e (counted once for each shared vertex), and such an edge f has exactly k − i common elements with e, so it is counted k − i times. So, we have $|B^{2}_{i}(v)| \le \Delta\,\frac{k\Delta}{k-i}$. To apply Theorem 2 we must find τ ∈ [1, +∞) and t ∈ ℕ such that for all v ∈ V the inequality below holds:
+ \[ \tau \ge 1 + \Delta\,\frac{k^{2}}{t}\,\tau^{k} + \Delta m\,\frac{k!}{t^{k}}\,\tau^{k} + \sum_{i=1}^{k-1} \Delta\,\frac{k\Delta}{k-i}\,\frac{i!}{t^{i}}\,\tau^{i}. \]
+ If we choose $\tau = \frac{k}{k-1}$ and $t = \frac{k}{k-1}\,\sqrt[k]{\Delta\,(k-1)\,k!\,m\,(1+\varepsilon)}$, it is easy to see that the inequality holds for a sufficiently large hypergraph. Theorem 2 then gives $\Pr(V \in A) > 0$, so a harmonious coloring of H with t colors exists and $h(H) \le t$, which yields the bound of Theorem 1.
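+ As a small numerical sanity check (not in the original paper; it assumes the reconstructed form of the inequality and of t given above), the following Python sketch evaluates both sides of the inequality for concrete parameters and illustrates that a sufficiently large hypergraph is indeed needed:
+ import math
+
+ def lcl_inequality_holds(k, Delta, m, eps):
+     # Checks tau >= 1 + Delta*k^2/t*tau^k + Delta*m*k!/t^k*tau^k
+     #               + sum_{i=1}^{k-1} Delta*(k*Delta/(k-i))*(i!/t^i)*tau^i
+     # for tau = k/(k-1) and t = k/(k-1) * (Delta*(k-1)*k!*m*(1+eps))^(1/k).
+     tau = k / (k - 1)
+     t = tau * (Delta * (k - 1) * math.factorial(k) * m * (1 + eps)) ** (1 / k)
+     rhs = 1 + Delta * k**2 / t * tau**k + Delta * m * math.factorial(k) / t**k * tau**k
+     rhs += sum(Delta * (k * Delta / (k - i)) * math.factorial(i) / t**i * tau**i
+                for i in range(1, k))
+     return tau >= rhs
+
+ print(lcl_inequality_holds(4, 2, 10**6, 0.5))       # False: this m is still too small
+ print(lcl_inequality_holds(4, 2, 2 * 10**9, 0.5))   # True: large enough for k = 4, Delta = 2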
+ Acknowledgments
+ References
+ A. Aflaki, S. Akbari, K. J. Edwards, D. S. Eskandani, M. Jamaali, and H. Ravanbod. On harmonious colouring of trees. Electron. J. Combin., 19(1):Paper 3, 9, 2012. URL https://doi.org/10.37236/9.
+ M. Aigner, E. Triesch, and Z. Tuza. Irregular assignments and vertex-distinguishing edge-colorings of graphs. In Combinatorics '90 (Gaeta, 1990), volume 52 of Ann. Discrete Math., pages 1–9. North-Holland, Amsterdam, 1992. URL https://doi.org/10.1016/S0167-5060(08)70896-3.
+ S. Akbari, J. Kim, and A. Kostochka. Harmonious coloring of trees with large maximum degree. Discrete Math., 312(10):1633–1637, 2012. URL https://doi.org/10.1016/j.disc.2012.02.009.
+ P. N. Balister, B. Bollobás, and R. H. Schelp. Vertex distinguishing colorings of graphs with ∆(G) = 2. Discrete Math., 252(1-3):17–29, 2002. URL https://doi.org/10.1016/S0012-365X(01)00287-4.
+ P. N. Balister, O. M. Riordan, and R. H. Schelp. Vertex-distinguishing edge colorings of graphs. J. Graph Theory, 42(2):95–109, 2003. URL https://doi.org/10.1002/jgt.10076.
+ C. Bazgan, A. Harkat-Benhamdine, H. Li, and M. Woźniak. On the vertex-distinguishing proper edge-colorings of graphs. J. Combin. Theory Ser. B, 75(2):288–301, 1999. URL https://doi.org/10.1006/jctb.1998.1884.
+ A. Bernshteyn. New bounds for the acyclic chromatic index. Discrete Math., 339(10):2543–2552, 2016. URL https://doi.org/10.1016/j.disc.2016.05.002.
+ A. Bernshteyn. The local cut lemma. European J. Combin., 63:95–114, 2017. URL https://doi.org/10.1016/j.ejc.2017.03.005.
+ B. Bosek, S. Czerwiński, J. Grytczuk, and P. Rzążewski. Harmonious coloring of uniform hypergraphs. Appl. Anal. Discrete Math., 10(1):73–87, 2016. URL https://doi.org/10.2298/AADM160411008B.
+ A. C. Burris and R. H. Schelp. Vertex-distinguishing proper edge-colorings. J. Graph Theory, 26(2):73–82, 1997. URL https://doi.org/10.1002/(SICI)1097-0118(199710)26:2<73::AID-JGT2>3.0.CO;2-C.
+ K. Edwards. The harmonious chromatic number of bounded degree trees. Combin. Probab. Comput., 5(1):15–28, 1996. URL https://doi.org/10.1017/S0963548300001802.
+ K. Edwards. The harmonious chromatic number and the achromatic number, volume 241 of London Math. Soc. Lecture Note Ser., pages 13–47. Cambridge Univ. Press, Cambridge, 1997a. URL https://doi.org/10.1017/CBO9780511662119.003.
+ K. Edwards. The harmonious chromatic number of bounded degree graphs. J. London Math. Soc. (2), 55(3):435–447, 1997b. URL https://doi.org/10.1112/S0024610797004857.
+ K. Edwards and C. McDiarmid. New upper bounds on harmonious colorings. J. Graph Theory, 18(3):257–267, 1994a. URL https://doi.org/10.1002/jgt.3190180305.
+ K. Edwards and C. McDiarmid. New upper bounds on harmonious colorings. J. Graph Theory, 18(3):257–267, 1994b. URL https://doi.org/10.1002/jgt.3190180305.
+ L. Esperet and A. Parreau. Acyclic edge-coloring using entropy compression. European J. Combin., 34(6):1019–1027, 2013. URL https://doi.org/10.1016/j.ejc.2013.02.007.
+ J. Grytczuk, J. Kozik, and P. Micek. New approach to nonrepetitive sequences. Random Structures Algorithms, 42(2):214–225, 2013. URL https://doi.org/10.1002/rsa.20411.
+ I. Krasikov and Y. Roditty. Bounds for the harmonious chromatic number of a graph. J. Graph Theory, 18(2):205–209, 1994. URL https://doi.org/10.1002/jgt.3190180212.
-NAyT4oBgHgl3EQfdfcz/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,341 @@
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf,len=340
247
+ page_content=' Graph Theory, 26(2):73–82, 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
248
+ page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
249
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
250
+ page_content='1002/(SICI)1097-0118(199710)26:2<73::AID-JGT2>3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
251
+ page_content='0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
252
+ page_content='CO;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
253
+ page_content='2-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
254
+ page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
255
+ page_content=' Edwards.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
256
+ page_content=' The harmonious chromatic number of bounded de- gree trees.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
257
+ page_content=' Combin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
258
+ page_content=' Probab.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
259
+ page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
260
+ page_content=', 5(1):15–28, 1996.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
261
+ page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
262
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
263
+ page_content='1017/S0963548300001802.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
264
+ page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
265
+ page_content=' Edwards.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
266
+ page_content=' The harmonious chromatic number and the achro- matic number, volume 241 of London Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
267
+ page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
268
+ page_content=' Lecture Note Ser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
269
+ page_content=', pages 13–47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
270
+ page_content=' Cambridge Univ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
271
+ page_content=' Press, Cambridge, 1997a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
272
+ page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
273
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
274
+ page_content='1017/CBO9780511662119.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
275
+ page_content='003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
276
+ page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
277
+ page_content=' Edwards.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
278
+ page_content=' The harmonious chromatic number of bounded degree graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
279
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
280
+ page_content=' London Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
281
+ page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
282
+ page_content=' (2), 55(3):435–447, 1997b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
283
+ page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
284
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
285
+ page_content='1112/S0024610797004857.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
286
+ page_content=' 5 K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
287
+ page_content=' Edwards and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
288
+ page_content=' McDiarmid.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
289
+ page_content=' New upper bounds on harmo- nious colorings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
290
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
291
+ page_content=' Graph Theory, 18(3):257–267, 1994a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
292
+ page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
293
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
294
+ page_content='1002/jgt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
295
+ page_content='3190180305.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
296
+ page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
297
+ page_content=' Edwards and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
298
+ page_content=' McDiarmid.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
299
+ page_content=' New upper bounds on harmo- nious colorings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
300
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
301
+ page_content=' Graph Theory, 18(3):257–267, 1994b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
302
+ page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
303
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
304
+ page_content='1002/jgt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
305
+ page_content='3190180305.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
306
+ page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
307
+ page_content=' Esperet and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
308
+ page_content=' Parreau.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
309
+ page_content=' Acyclic edge-coloring using entropy com- pression.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
310
+ page_content=' European J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
311
+ page_content=' Combin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
312
+ page_content=', 34(6):1019–1027, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
313
+ page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
314
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
315
+ page_content='1016/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
316
+ page_content='ejc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
317
+ page_content='2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
318
+ page_content='02.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
319
+ page_content='007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
320
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
321
+ page_content=' a.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
322
+ page_content=' Grytczuk, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
323
+ page_content=' Kozik, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
324
+ page_content=' Micek.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
325
+ page_content=' New approach to nonrepetitive sequences.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
326
+ page_content=' Random Structures Algorithms, 42(2):214–225, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
327
+ page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
328
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
329
+ page_content='1002/rsa.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
330
+ page_content='20411.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
331
+ page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
332
+ page_content=' Krasikov and Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
333
+ page_content=' Roditty.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
334
+ page_content=' Bounds for the harmonious chromatic number of a graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
335
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
336
+ page_content=' Graph Theory, 18(2):205–209, 1994.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
337
+ page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
338
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
339
+ page_content='1002/jgt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
340
+ page_content='3190180212.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
341
+ page_content=' 6' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NAyT4oBgHgl3EQfdfcz/content/2301.00302v1.pdf'}
-dAzT4oBgHgl3EQf_P7B/content/tmp_files/2301.01946v1.pdf.txt ADDED
@@ -0,0 +1,2040 @@
1
+ EPR-Net: Constructing non-equilibrium potential landscape via a variational force
2
+ projection formulation
3
+ Yue Zhao,1 Wei Zhang,2, ∗ and Tiejun Li1, 3, 4, †
4
+ 1Center for Data Science, Peking University, Beijing 100871, China
5
+ 2Zuse Institute Berlin, D-14195 Berlin, Germany
6
+ 3LMAM and School of Mathematical Sciences, Peking University, Beijing 100871, China
7
+ 4Center for Machine Learning Research, Peking University, Beijing 100871, China
8
+ (Dated: January 6, 2023)
9
+ We present a novel yet simple deep learning approach, dubbed EPR-Net, for constructing the
10
+ potential landscape of high-dimensional non-equilibrium steady state (NESS) systems. The key idea
11
+ of our approach is to utilize the fact that the negative potential gradient is the orthogonal projection
12
+ of the driving force in a weighted Hilbert space with respect to the steady-state distribution. The
13
+ constructed loss function also coincides with the entropy production rate (EPR) formula in NESS
14
+ theory. This approach can be extended to deal with dimensionality reduction and state-dependent
15
+ diffusion coefficients in a unified fashion. The robustness and effectiveness of the proposed approach
16
+ are demonstrated by numerical studies of several high-dimensional biophysical models with multi-
17
+ stability, limit cycle, or strange attractor with non-vanishing noise.
18
+ Since Waddington’s famous landscape metaphor on the
19
+ development of cells in the 1950s [1], the construction of
20
+ potential landscape for non-equilibrium biochemical reac-
21
+ tion systems has been recognized as an important prob-
22
+ lem in theoretical biology, as it provides insightful pic-
23
+ tures for understanding complex dynamical mechanisms
24
+ of biological processes. This problem has attracted con-
25
+ siderable attention in recent decades in both biophysics
26
+ and applied mathematics communities. Until now, several
27
+ approaches have been proposed to realize Waddington’s
28
+ landscape metaphor in a rational way, see [2–10] and
29
+ the references therein for details and [11–14] for reviews.
30
+ Broadly speaking, these proposals can be classified into
31
+ two types: (T1) the construction of potential landscape
32
+ in the finite noise regime [3–5] and (T2) the construction
33
+ of the quasi-potential in the zero noise limit [2, 6–9].
34
+ For low-dimensional systems (i.e., dimension less than
35
+ 4), the potential landscape can be numerically computed
36
+ either by solving a Fokker-Planck equation (FPE) using
37
+ grid-based methods until the steady solution is reached
38
+ approximately as in (T1) type proposals [3, 5], or by solv-
39
+ ing a Hamilton-Jacobi-Bellman (HJB) equation using, for
40
+ instance, the ordered upwind method [15] or minimum
41
+ action type method [8] as in (T2) type proposals. How-
42
+ ever, these approaches suffer from the curse of dimen-
43
+ sionality when applied to high-dimensional systems. Al-
44
+ though methods based on mean field approximations are
45
+ able to provide a semi-quantitative description of the en-
46
+ ergy landscape for typical systems [4, 16], direct and gen-
47
+ eral approaches are still favored in applications. In this
48
+ aspect, pioneering work has been done recently, which
49
+ allows direct construction of high-dimensional potential
50
+ landscape using deep neural networks (DNN), based on
51
+ either the steady viscous HJB equation satisfied by the
52
53
54
+ landscape function in (T1) case [17, 18], or the point-
55
+ wise orthogonal decomposition of the force field in (T2)
56
+ case [19]. These works have brought significant advances
57
+ in the methodological developments in both cases. How-
58
+ ever, these approaches, which are based on solving HJB
59
+ equations alone, may encounter numerical difficulties due
60
+ to the non-uniqueness of the weak solution to the non-
61
+ viscous HJB equation in (T2) case [20], and challenges
62
+ in solving the steady HJB equation with a small noise in
63
+ (T1) case.
64
+ Setup.
65
+ In this letter, we present a simple yet ef-
66
+ fective DNN approach, EPR-Net, for constructing the
67
+ potential landscape of high-dimensional non-equilibrium
68
+ steady state (NESS) systems in (T1) type. Our key ob-
69
+ servation is that the negative potential gradient is the or-
70
+ thogonal projection of the driving force under a weighted
71
+ inner product with respect to the steady-state distribu-
72
+ tion. To be specific, let us consider the stochastic differ-
73
+ ential equations (SDEs)
74
+ dx(t)/dt = F(x(t)) + √(2D) ẇ,   x(0) = x0,   (1)
81
+ where x0 ∈ Rd, F : Rd → Rd is a smooth function,
82
+ ˙w = ( ˙w1, . . . , ˙wd)⊤ is the d-dimensional temporal Gaus-
83
+ sian white noise with E ˙wi(t) = 0 and E[ ˙wi(t) ˙wj(s)] =
84
+ δijδ(t − s) for i, j = 1, . . . , d, s, t > 0 and D > 0 is the
85
+ noise strength, which is often related to the system’s tem-
86
+ perature T by D = kBT, where kB is the Boltzmann con-
87
+ stant. We assume that (1) is ergodic and denote by pss(x)
88
+ its steady-state probability density function (PDF).
89
+ We follow the (T1) type proposal in [3] to derive the po-
90
+ tential landscape of (1) in the case of D > 0. That is, we
91
+ define the potential U = −D ln pss and the steady proba-
92
+ bility flux Jss = pssF −D∇pss in the domain Ω, which we
93
+ assume for simplicity is either Rd or a d-dimensional hy-
94
+ perrectangle. The steady-state PDF pss(x) satisfies the
95
+ Fokker-Planck equation (FPE)
96
+ ∇ · (pss F) − D ∆pss = 0,  for x ∈ Ω,   (2)
99
+ arXiv:2301.01946v1 [physics.bio-ph] 5 Jan 2023
100
+
101
+ 2
102
+ and we assume the asymptotic boundary condition (BC)
103
+ pss(x) → 0 as |x| → ∞ when Ω = Rd, or the re-
104
+ flecting boundary condition Jss · n = 0 when Ω ⊂ Rd
105
+ is a d-dimensional hyperrectangle, where n is the unit
106
+ outer normal.
107
+ In both cases, we have pss(x) ≥ 0 and ∫Ω pss(x) dx = 1.
110
+ Learning approach. Aiming at an effective approach
111
+ for high-dimensional applications, we employ DNNs to
112
+ approximate U(x), and the key idea in this letter is to
113
+ learn U by training DNN with the following loss function
114
+ LEPR(V) = ∫Ω |F(x) + ∇V(x; θ)|² dπ(x),   (3)
119
+ where V := V (x; θ) is a neural network function with
120
+ parameters θ [21], and dπ(x) = pss(x) dx.
121
+ To justify
122
+ (3), we note that U satisfies the important orthogonality
123
+ relation: for any suitable function W : Rd → R,
+ ∫Ω (F(x) + ∇U(x)) · ∇W(x) dπ(x) = 0.   (4)
131
+ Therefore, U(x) is the unique minimizer (up to a con-
132
+ stant) of the loss LEPR and, moreover, the negative po-
133
+ tential gradient −∇U is in fact the projection of the force
134
+ field F in the π-weighted Hilbert space. See Sec. A and B
135
+ in the Supplemental Material (SM) for derivations in de-
136
+ tail.
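+ In brief (the argument is spelled out in SM Sec. A below): applying (4) with W = V − U and expanding the square gives, for any candidate V,
+ LEPR(V) = LEPR(U) + ∫Ω |∇V − ∇U|² dπ,
+ so the loss is minimized exactly when ∇V = ∇U, i.e., when V equals U up to an additive constant.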
137
+ The minimum loss LEPR(U) has a clear physical inter-
138
+ pretation. Indeed, we have (see SM Sec. B)
139
+ LEPR(U) = ∫Ω |Jss|²/pss dx = e_p^ss,   (5)
147
+ where e_p^ss denotes the steady entropy production rate
149
+ (EPR) of the NESS system (1) [3, 22, 23]. Therefore,
150
+ minimizing (3) is equivalent to approximating the steady
151
+ EPR. This explains the name EPR-Net of our approach.
152
+ To utilize (3) in numerical computations, we replace
153
+ the spatial integral in (3) with respect to the unknown π
154
+ by its empirical average using data sampled from (1):
155
+ L̂EPR(θ) = (1/N) Σ_{i=1}^{N} |F(xi) + ∇V(xi; θ)|²,   (6)
163
+ where (xi)1≤i≤N could be either the final states (at time
164
+ T) of N trajectories starting from different initializations
165
+ or equally spaced time series along a single long trajec-
166
+ tory up to time T, where T ≫ 1.
167
+ In both cases, the
168
+ ergodicity of SDE (1) guarantees that (6) is a good ap-
169
+ proximation of (3) as long as T is large [24]. We adopt
170
+ the former approach in the numerical experiments in this
171
+ work, where the gradients of both V (with respect to x)
172
+ and the loss itself (with respect to θ) in (6) are calculated
173
+ by auto-differentiation through PyTorch [25]. The stabil-
174
+ ity analysis of this approximation is presented in detail
175
+ in SM Sec. C.
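+ To make the training step concrete, the following is a minimal PyTorch sketch of the empirical loss (6); the function names and calling convention (V a network mapping a batch of states to scalar potentials, F a vectorized force routine) are our illustrative assumptions, not the EPR-Net implementation itself.

import torch

def epr_loss(V, F, x):
    # Empirical EPR loss, eq. (6): mean of |F(x_i) + grad V(x_i; theta)|^2 over samples
    x = x.detach().requires_grad_(True)
    grad_V = torch.autograd.grad(V(x).sum(), x, create_graph=True)[0]
    return ((F(x) + grad_V) ** 2).sum(dim=1).mean()

+ The value returned above is then minimized over the network parameters θ with a standard stochastic optimizer.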
176
+ We apply our method to a toy model first in order to
177
+ check its applicability and accuracy. We take
178
+ F (x) = −(I + A) · ∇U0(x),
179
+ (7)
180
+ where A ∈ Rd×d is a constant skew-symmetric matrix,
181
+ i.e., A⊤ = −A, and U0 is some known function. With this
182
+ choice of F , one can check that the true potential land-
183
+ scape is U(x) = U0(x). In particular, the system is re-
184
+ versible when A = 0. Based on the proposed method, we
185
+ construct a double-well model with known potential U0
186
+ for verification. We take D = 0.1. As shown in Fig. 1(A),
187
+ the learned potential agrees well with the simulated sam-
188
+ ples.
189
+ Also, the decomposition of the force field shows
190
+ that the negative gradient part −∇V (x; θ) around the
191
+ wells points towards the attractor and is nearly orthog-
192
+ onal to the non-gradient part. The overall non-gradient
193
+ field shows a counter-clockwise rotation.
194
+ The relative
195
+ root mean square error (rRMSE) of the potential V (x; θ)
196
+ learned by EPR loss is 0.0987 (averaged over 3 runs),
197
+ which supports the effectiveness of our approach.
198
+ See
199
+ SM Sec. F F.1 for details of the problem setting.
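+ The toy setting can be reproduced along the lines below, reusing epr_loss from the previous sketch; the helper names, the step size and the horizon are illustrative choices rather than the settings used in the paper.

def toy_force(grad_U0, A):
    # F(x) = -(I + A) grad U0(x) with A skew-symmetric, cf. eq. (7)
    I = torch.eye(A.shape[0], dtype=A.dtype)
    return lambda x: -grad_U0(x) @ (I + A).T

def sample_sde(F, x0, D=0.1, dt=1e-3, n_steps=10_000):
    # Euler-Maruyama integration of eq. (1); the final states serve as training samples
    x = x0.clone()
    for _ in range(n_steps):
        x = x + F(x) * dt + (2.0 * D * dt) ** 0.5 * torch.randn_like(x)
    return x.detach()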
200
+ The correct interpretation of the computational results
201
+ based on the EPR loss (3) is that the accuracy of V (x)
202
+ is guaranteed only when π(x) is evidently above zero
203
+ for any specific x.
204
+ In the “visible” domain of π (i.e.,
205
+ the places where there are sample points of {xi}), the
206
+ trained potential V gives reliable approximation; while
207
+ in the weakly visible or invisible domain, especially in
208
+ local transition regions between meta-stable states and
209
+ boundaries of the visible domain, we must resort to the
210
+ original FPE (2) which holds pointwise in space.
211
+ Learning strategy for small D. Substituting the rela-
212
+ tion pss(x) = exp(−U(x)/D) into (2), we get the viscous
213
+ HJB equation
214
+ NHJB(U) := F · ∇U + |∇U|2 − D∆U − D∇ · F = 0 (8)
215
+ with the asymptotic BC U → ∞ as |x| → ∞ in the case
216
+ of Ω = Rd, or the reflecting BC (F + ∇U) · n = 0 on ∂Ω
217
+ when Ω is a d-dimensional hyperrectangle, respectively.
218
+ As in the framework of physics-informed neural networks
219
+ (PINNs) [26], (8) motivates the HJB loss
220
+ LHJB(V) = ∫Ω |NHJB(V(x; θ))|² dµ(x),   (9)
226
+ where µ is any desirable distribution.
227
+ By choosing µ
228
+ properly, this loss allows the use of sample data that
229
+ better cover the domain Ω and, when combined with the
230
+ loss in (3), leads to significant improvement of the train-
231
+ ing results in our numerical experiments when D is small.
232
+ Specifically, for small D, we propose the enhanced loss in
233
+ training which has the form
234
+ L̂enh(θ) = L̂EPR(θ) + λ L̂HJB(θ),   (10)
236
+ where L̂EPR(θ) is defined in (6), L̂HJB(θ) = (1/N′) Σ_{i=1}^{N′} |NHJB(V(x′_i; θ))|² is an approximation of (9)
+ using an independent data set (x′_i)_{1≤i≤N′} obtained by sampling the trajectories of (1) with a larger D′ > D, and
252
+ λ > 0 is a weight parameter balancing the contribution
253
+ of the two terms in (10). Note that the proposed strategy
254
+ is both general and easily adaptable. For instance, one
255
+
256
+ 3
257
+ FIG. 1. Filled contour plots of the learned potential V (x; θ) for (A) toy model learned by EPR loss (3) with D = 0.1, and
258
+ (B)-(C) a biochemical oscillation network model [3] and a tri-stable cell development model [5] learned by enhanced loss (10).
259
+ The force field F (x) is decomposed into the gradient part −∇V (x; θ) (white arrows) and the non-gradient part (gray arrows).
260
+ The length of an arrow denotes the scale of the vector. The solid dots are samples from the simulated invariant distribution.
261
+ can alternatively use data (x′
262
+ i)1≤i≤N ′ that contains more
263
+ samples in the transition region, or employ a modification
264
+ of the loss (9) in (10) [17].
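+ In code, the enhanced training objective can be assembled as sketched below; div_F (the divergence of the force) and the helper names are our own assumptions, and the second sample set x_wide stands for the data drawn at the larger noise level D′.

def hjb_residual(V, F, div_F, x, D):
    # N_HJB(V) = F . grad V + |grad V|^2 - D lap V - D div F, cf. eq. (8)
    x = x.detach().requires_grad_(True)
    grad_V = torch.autograd.grad(V(x).sum(), x, create_graph=True)[0]
    lap_V = torch.zeros(x.shape[0], dtype=x.dtype)
    for i in range(x.shape[1]):  # Laplacian assembled from second derivatives
        lap_V = lap_V + torch.autograd.grad(grad_V[:, i].sum(), x, create_graph=True)[0][:, i]
    return (F(x) * grad_V).sum(dim=1) + (grad_V ** 2).sum(dim=1) - D * lap_V - D * div_F(x)

def enhanced_loss(V, F, div_F, x_ness, x_wide, D, lam):
    # eq. (10): EPR loss on NESS samples plus a weighted HJB residual on the wider samples
    return epr_loss(V, F, x_ness) + lam * (hjb_residual(V, F, div_F, x_wide, D) ** 2).mean()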
265
+ We apply our enhanced loss (10) to construct the land-
266
+ scape for a 2D biological system with a limit cycle [3]
267
+ and a 2D multistable system [5]. The potential V (x; θ)
268
+ learned by the enhanced loss (10), the force decomposi-
269
+ tion, and sample points from the simulated invariant dis-
270
+ tribution are shown in Fig. 1(B) and (C). As in the toy
271
+ model case, the gradient part (white arrows) points di-
272
+ rectly towards the attractors, while the non-gradient part
273
+ (gray arrows) shows a counter-clockwise rotation for the
274
+ limit cycle, and a splitting-and-back flow from the mid-
275
+ dle attractor to the other two attractors for the tri-stable
276
+ dynamical model. To further verify the accuracy of the
277
+ method, we numerically solve the FPE (2) as reference
278
+ solutions by a fine grid discretization. Comparisons be-
279
+ tween the proposed method and the method based on
280
+ the naive HJB loss on these two problems are demon-
281
+ strated in SM. Averaged over 3 runs, the rRMSE of the
282
+ potential V learned by our enhanced loss is 0.0524 and
283
+ 0.0402, respectively, which shows an evident advantage
284
+ over the naive HJB loss. See SM Sec. F for details of the
285
+ comparisons.
286
+ Dimensionality reduction.
287
+ When applying the ap-
288
+ proach above to high-dimensional problems, dimensional-
289
+ ity reduction is necessary in order to visualize the results
290
+ and gain physical insights. A straightforward approach is
291
+ to first learn the high-dimensional potential U and then
292
+ find its low-dimensional representation, i.e., the reduced
293
+ potential or the free energy function, using dimension-
294
+ ality reduction techniques (see SM Sec. D D.1). In the
295
+ following, we present an alternative approach that allows
296
+ us to directly learn the low-dimensional reduced potential.
297
+ For simplicity, we consider the linear case and, with a
298
+ slight abuse of notation, denote by x = (y, z)⊤, where
299
+ z = (xi, xj) ∈ R2 contains the coordinates of two vari-
300
+ ables of interest, and y ∈ Rd−2 corresponds to the
301
+ other d − 2 variables.
302
+ The domain Ω (either Rd or
303
+ a d-dimensional hyperrectangle) has the decomposition
304
+ Ω = Σ × �Ω, where Σ ⊆ Rd−2 and �Ω ⊆ R2 are the do-
305
+ mains of y and z, respectively. As can be seen in the
306
+ numerical examples, this setting is applicable to many
307
+ interesting biochemical systems. Extensions to nonlinear
308
+ low-dimensional reduced variables with general domains
309
+ are possible, e.g., by applying the approach developed
310
+ in [27]. In the current setting, the reduced potential is
311
+ Ũ(z) = −D ln p̃ss(z) = −D ln ∫Σ pss(y, z) dy,   (11)
316
+ and one can show that �U minimizes the following loss
317
+ function:
318
+ LP-EPR(Ṽ) = ∫Ω |Fz(y, z) + ∇z Ṽ(z; θ)|² dπ(y, z),   (12)
323
+ where Fz(y, z) ∈ R2 is the z-component of the force
324
+ field F = (Fy, Fz)⊤. Similar to (6), the empirical form
325
+ of (12) can be used in learning the reduced potential �U.
326
+ Moreover, one can derive an enhanced loss as in (10) that
327
+ could be used for systems with small D. To this end, we
328
+ note that �U satisfies the projected HJB equation
329
+ NP-HJB(Ũ) := F̃ · ∇z Ũ + |∇z Ũ|² − D ∆z Ũ − D ∇z · F̃ = 0,   (13)
332
+ with asymptotic BC Ũ → ∞ as |z| → ∞, or the reflecting BC (F̃ + ∇z Ũ) · ñ = 0 on ∂Ω̃, where
+ F̃(z) := ∫Σ Fz(y, z) dπ(y|z) is the projected force
341
+ defined using the conditional distribution dπ(y|z) =
342
+ pss(y, z)/�pss(z) dy, and �n denotes the unit outer normal
343
+ on ∂�Ω. Based on (13), we can formulate the projected
344
+ HJB loss
345
+ LP-HJB(Ṽ) = ∫Ω̃ |NP-HJB(Ṽ(z; θ))|² dµ(z),   (14)
351
+
352
+ where µ is any suitable distribution over �Ω, and �F in (13)
413
+ is learned beforehand by training a DNN with the loss
414
+ LP-For( �
415
+ G) =
416
+
417
+
418
+ ��Fz(y, z) − �
419
+ G(z; θ)
420
+ ��2 dπ(y, z).
421
+ (15)
422
+ The overall enhanced loss used in numerical computa-
423
+ tions comprises two terms, which are empirical estimates
424
+ of (12) and (14) based on two different sets of sample
425
+ data. See SM Sec. D for derivation details.
426
+ We then apply our dimensionality reduction approach
427
+ to construct the landscape for an 8D cell cycle model con-
428
+ taining both a limit cycle and a stable equilibrium point
429
+ for the chosen parameters, and take CycB and Cdc20 as
430
+ the reduced variables following [4]. As shown in Fig. 2, we
431
+ can find that the depth of the reduced potential and force
432
+ strength agree well with the density of projected samples.
433
+ Moreover, we can also get some important insights from
434
+ Fig. 2 on the projection of the high-dimensional dynam-
435
+ ics with a limit cycle to two dimensions. One particular
436
+ feature is that the limit cycle induced by the projected
437
+ force �
438
+ G (outer red circle) has minor differences with the
439
+ limit cycle directly projected from high dimensions (yel-
440
+ low circle), and the difference is slight or moderate de-
441
+ pending on whether the density of samples is high or
442
+ low. This is natural in the reduction since the distribu-
443
+ tion π(y|z) in the projection is not of Dirac type when
444
+ D > 0, and this difference will disappear as D → 0.
445
+ Another feature is that we unexpectedly get an addi-
446
+ tional stable limit cycle (inner red circle) and a stable
447
+ point (red dot in the center) emerging inside the limit
448
+ cycle.
449
+ Though virtual in high dimensions and biologi-
450
+ cally irrelevant, the existence of such two limit sets is
451
+ reminiscent of the Poincar´e-Bendixson theorem in pla-
452
+ nar dynamics theory [28, Chapter 10.6], which depicts
453
+ a common phenomenon when performing dimensionality
454
+ reduction with limit cycles to 2D plane. The emergence
455
+ of these two limit sets, though being not a general sit-
456
+ uation, is specific in the considered model due to the
457
+ relatively flat landscape of the potential in the centering
458
+ region. In addition, close to the saddle point (0.13, 0.55)
459
+ of �V (green star), there is a barrier domain along the
460
+ limit cycle direction, while a local well domain along the
461
+ Cdc20 direction, which characterizes the region that bi-
462
+ ological cycle paths mainly go through.
463
+ Last but not
464
+ the least, a zoom-in view of the local well domain out-
465
+ side of the limit cycle shows its detailed spiral structure
466
+ (Fig. 2C), which has not been revealed before by mak-
467
+ ing a Gaussian approximation. Some other applications
468
+ of our approach to Ferrell’s three-ODE model [29], 52D
469
+ stem cell network model [16] and 3D Lorenz model are
470
+ demonstrated in SM Sec. G and H.
471
+ Extension to variable diffusion coefficient case.
472
+ The
473
+ EPR-Net formulation can be extended to the case of
474
+ state-dependent diffusion coefficients without any diffi-
475
+ culty. Consider the Ito SDEs
476
+ dx(t)
477
+ dt
478
+ = F (x(t)) +
479
+
480
+ 2Dσ(x(t)) ˙w,
481
+ x(0) = x0,
482
+ (16)
483
+ FIG. 2. Dimensionality reduction of an 8D cell cycle model
484
+ with two reduced variables. (A) Reduced potential landscape
485
+ �V with projected contour lines. (B) Projected sample points,
486
+ streamlines of the projected force field �
487
+ G and the filled con-
488
+ tour plot of �V . The red circles and dots are stable limit sets
489
+ of the projected force field. The yellow circle is the projection
490
+ of the original high-dimensional limit cycle. (C) The detailed
491
+ spiral structure of the streamlines of �
492
+ G around the stable
493
+ point by zooming in the square domain in (B).
494
+ with diffusion matrix σ(x) ∈ Rd×m and
495
+ ˙w is an m-
496
+ dimensional temporal Gaussian white noise. We assume
497
+ that m ≥ d and the matrix a(x) := (σσ⊤)(x) satisfies
498
+ u⊤a(x)u ≥ c0|u|2 for all x, u ∈ Rd, where c0 > 0 is a
499
+ positive constant. Using a similar derivation as before,
500
+ we can again show that the high-dimensional landscape
501
+ function U of (16) minimizes the EPR loss
502
+ LV-EPR(V ) =
503
+
504
+
505
+ |F v(x) + a(x)∇V (x)|2
506
+ a−1(x) dπ(x),
507
+ (17)
508
+ where F v(x) = F (x) − D∇ · a(x) and |u|2
509
+ a−1(x) :=
510
+ u⊤a−1(x)u for u ∈ Rd. We provide derivation details
511
+ of (17) in SM Sec. E. However, we will not pursue a nu-
512
+ merical study of (16)–(17) in this paper.
513
+ Discussions and Conclusion. Below we make some fi-
514
+ nal remarks. First, concerning the use of the steady-state
515
+ distribution π(x) in (3) and its approximation by a long
516
+ time series of the SDE (1) in EPR-Net, we emphasize that
517
+ it is the sampling approximation of π that naturally cap-
518
+ tures the important parts of the potential function, and
519
+ the landscape beyond the sampled regions is not that
520
+ essential in practice.
521
+ Second, as is exemplified in SM
522
+ Sec. F F.4, we found that a direct application of density
523
+ estimation methods (DEM), e.g., normalizing flows [30],
524
+ to the sampled time series data does not give potential
525
+
526
+ A
527
+ B
528
+ 1.0
529
+ 0.08
530
+ 0.08
531
+ 0.07
532
+ 0.06
533
+ 0.04
534
+ 0.06
535
+ 0.02
536
+ 0.05
537
+ 0.00
538
+ -0.02
539
+ 0.8
540
+ 0.04
541
+ -0.04
542
+ 0.03
543
+ 1.0
544
+ 0.8
545
+ 0.02
546
+ 0.60
547
+ 0.01
548
+ 0.4
549
+ 0.2
550
+ 0.0
551
+ 0.1
552
+ 0.2
553
+ 0.6
554
+ 0.3
555
+ 0.4
556
+ 0.5
557
+ CycB
558
+ Cdc20
559
+ C
560
+ 3.0
561
+ 0.16
562
+ 0.4
563
+ 2.5
564
+ 0.14
565
+ 2.0
566
+ 0.12
567
+ 1.5
568
+ 0.10
569
+ 0.2
570
+ 1.0
571
+ 0.08
572
+ 0.5
573
+ 0.06
574
+ 0.18
575
+ 0.20
576
+ 0.22
577
+ 0.24
578
+ 0.26
579
+ 0.0
580
+ 0.1
581
+ 0.2
582
+ 0.3
583
+ 0.4
584
+ 0.5
585
+ CycB5
586
+ landscape with satisfactory accuracy. We speculate that
587
+ such deficiency of DEM is due to its over-generality and
588
+ the fact that it does not take advantage of the force field
589
+ information explicitly compared to (3).
590
+ Overall, we have presented the EPR-Net, a simple
591
+ yet effective DNN approach, for constructing the non-
592
+ equilibrium potential landscape of NESS systems. This
593
+ approach is both elegant and robust due to its variational
594
+ structure and its flexibility to be combined with other
595
+ types of loss functions. Further extension of dimensional-
596
+ ity reduction to nonlinear reduced variables and numeri-
597
+ cal investigations in the case of state-dependents diffusion
598
+ coefficients will be explored in future work.
599
+ Acknowledgement.
600
+ We thank Professors Chunhe Li,
601
+ Xiaoliang Wan and Dr. Yufei Ma for helpful discus-
602
+ sions. TL and YZ acknowledge the support from NSFC
603
+ and MSTC under Grant No.s 11825102, 12288101 and
604
+ 2021YFA1003300.
605
+ WZ is supported by the DFG un-
606
+ der Germany’s Excellence Strategy-MATH+: The Berlin
607
+ Mathematics Research Centre (EXC-2046/1)-project ID:
608
+ 390685689.
609
+ The numerical computations of this work
610
+ were conducted on the High-performance Computing
611
+ Platform of Peking University.
612
+
613
+ 6
614
+ Supplemental Material for:
615
+ EPR-Net: Constructing non-equilibrium potential landscape via
616
+ a variational force projection formulation
617
+ CONTENTS
618
+ Part 1: Theory
619
+ 6
620
+ A. Validation of the EPR loss
621
+ 6
622
+ B. EPR loss and entropy production rate
623
+ 7
624
+ C. Stability of the EPR minimizer
625
+ 7
626
+ D. Dimensionality reduction
627
+ 8
628
+ D.1. Gradient projection loss
629
+ 8
630
+ D.2. Projected EPR loss
631
+ 8
632
+ D.3. Force projection loss
633
+ 9
634
+ D.4. HJB equation for the reduced potential
635
+ 9
636
+ E. State-dependent diffusion coefficients
637
+ 9
638
+ Part 2: Computation
639
+ 10
640
+ F. 2D models and comparisons
641
+ 10
642
+ F.1. Toy model and enhanced EPR
643
+ 10
644
+ F.2. 2D limit cycle model
645
+ 11
646
+ F.3. 2D multi-stable model
647
+ 12
648
+ F.4. Numerical comparisons
649
+ 12
650
+ G. 3D models
651
+ 13
652
+ G.1. 3D Lorenz system
653
+ 14
654
+ G.2. Ferrell’s three-ODE model
655
+ 14
656
+ H. High dimensional models
657
+ 15
658
+ H.1. 8D complex system
659
+ 15
660
+ H.2. 52D multi-stable system
661
+ 16
662
+ References
663
+ 17
664
+ In this supplemental material (SM), we will present
665
+ further theoretical derivations and computational details
666
+ of the contents in the main text (MT). This SM consists
667
+ of two parts: Theory and computation.
668
+ PART 1: THEORY
669
+ We will first provide details of theoretical derivations
670
+ omitted in the MT.
671
+ A.
672
+ VALIDATION OF THE EPR LOSS
673
+ In this section, we show that, up to an additive con-
674
+ stant, the potential function U(x) := −D ln pss(x) is the
675
+ unique minimizer of the EPR loss (3) defined in the MT.
676
+ First, we show that the orthogonality relation
677
+
678
+
679
+ (F + ∇U) · ∇W dπ = 0
680
+ (18)
681
+ holds for any suitable function W(x) : Rd → R under
682
+ both choices of the boundary conditions (BC) considered
683
+ in the MT, where dπ(x) := pss(x)dx. To see this, we
684
+ note that
685
+
686
+
687
+ (F + ∇U) · ∇W dπ
688
+ =
689
+
690
+
691
+ (F pss − D∇pss) · ∇W dx
692
+ =
693
+
694
+ ∂Ω
695
+ W(F pss − D∇pss) · n dx
696
+
697
+
698
+
699
+ W∇ · (F pss − D∇pss) dx
700
+ :=P1 − P2
701
+ where we have used integration by parts and the relation
702
+ pss(x) = exp(−U(x)/D).
703
+ The term P1 is zero due to
704
+ the fact that pss(x) tends to 0 exponentially as |x| → ∞
705
+ when Ω = Rd, and the reflecting BC Jss · n = 0 which
706
+ holds on ∂Ω when Ω is bounded. The term P2 is zero
707
+ due to the steady state Fokker-Planck equation (FPE)
708
+ satisfied by pss.
709
+ Now consider the EPR loss, we have
710
+ LEPR(V ) =
711
+
712
+
713
+ |F + ∇V |2 dπ
714
+ =
715
+
716
+
717
+ |F + ∇U + ∇V − ∇U|2 dπ
718
+ =
719
+
720
+
721
+
722
+ |F + ∇U|2 + |∇V − ∇U|2�
723
+
724
+ + 2
725
+
726
+
727
+ (F + ∇U) · ∇(V − U) dπ
728
+ =
729
+
730
+
731
+ |F + ∇U|2 + |∇V − ∇U|2 dπ,
732
+ where we have used the orthogonality relation (18) to
733
+ arrive at the last equality, from which we conclude that
734
+
735
+ 7
736
+ U(x) is the unique minimizer of the EPR loss up to an
737
+ additive constant.
738
+ In fact, define the π-weighted inner product for any
739
+ square integrable functions f, g on Ω:
740
+ (f, g)π :=
741
+
742
+
743
+ f(x)g(x) dπ(x)
744
+ (19)
745
+ and the corresponding L2
746
+ π-norm ∥·∥π by ∥f∥2
747
+ π := (f, f)π,
748
+ we get a Hilbert space L2
749
+ π (see, e.g., [31, Chapter II.1]).
750
+ Choosing W = U in (18), we observe that the minimiza-
751
+ tion of EPR loss finds the orthogonal projection of F
752
+ under the π-weighted inner product, i.e.,
753
+ F (x) = −∇U(x) + l(x), such that (∇U, l)π = 0. (20)
754
+ However, we remark that this orthogonality holds only
755
+ in the L2
756
+ π-inner product sense instead of the pointwise
757
+ sense. Furthermore, the two orthogonality relations (18)
758
+ and (20) can be understood as follows. Using (20), the
759
+ relation (18) is equivalent to
760
+
761
+ Ω l · ∇Wdπ = 0 for any
762
+ W. Integration by parts gives ∇ · (l e−U/D) = 0, which
763
+ is equivalent to ∇U · l + D∇ · l = 0. When D → 0, we
764
+ recover the pointwise orthogonality, which is adopted in
765
+ computing quasi-potentials in [19].
766
+ B.
767
+ EPR LOSS AND ENTROPY PRODUCTION
768
+ RATE
769
+ In this section, we show that the minimum EPR loss
770
+ coincides with the steady entropy production rate in non-
771
+ equilibrium steady state (NESS) theory.
772
+ Following [22, 23], we have the important identity con-
773
+ cerning the entropy production for the SDE (1) defined
774
+ in the MT:
775
+ DdS(t)
776
+ dt
777
+ = ep(t) − hd(t),
778
+ (21)
779
+ where S(t) := −
780
+
781
+ Ω p(x, t) ln p(x, t) dx is the entropy of
782
+ the probability density function p(x, t) at time t, ep is
783
+ the entropy production rate (EPR)
784
+ ep(t) =
785
+
786
+
787
+ |F (x) − D∇ ln p(x, t)|2 p(x, t) dx,
788
+ (22)
789
+ and hd is the heat dissipation rate
790
+ hd(t) =
791
+
792
+
793
+ F (x) · J(x, t) dx,
794
+ (23)
795
+ with the probability flux J(x, t)
796
+ :=
797
+ p(x, t)(F (x) −
798
+ D∇ ln p(x, t)) at time t. When D = kBT, the above for-
799
+ mulas have clear physical meaning in statistical physics.
800
+ At the steady state, we get the steady EPR
801
+ ess
802
+ p =
803
+
804
+
805
+ |F − D∇ ln pss|2 pss dx
806
+ =
807
+
808
+
809
+ |F + ∇U|2 pss dx
810
+ =
811
+
812
+
813
+ |Jss|2 1
814
+ pss
815
+ dx = LEPR(U),
816
+ where Jss(x) = pss(x)(F (x)+∇U(x)) is the steady prob-
817
+ ability flux.
818
+ This shows the relation between the pro-
819
+ posed EPR loss function and the entropy production rate
820
+ in the NESS theory.
821
+ C.
822
+ STABILITY OF THE EPR MINIMIZER
823
+ In this section, we formally show that small perturba-
824
+ tions of the invariant distribution π will not introduce
825
+ a disastrous change to the minimizer of the correspond-
826
+ ing EPR loss. We only consider the bounded domain,
827
+ i.e., Ω is a hyperrectangle. The argument for unbounded
828
+ domains is similar.
829
+ Suppose dπ(x) = p(x)dx, dµ(x) = q(x)dx, and the
830
+ functions U(x) and ¯U(x) are the unique minimizers (up
831
+ to a constant) of the following two EPR losses
832
+ U = arg min
833
+ V
834
+
835
+
836
+ |F + ∇V |2 dπ,
837
+ ¯U = arg min
838
+ V
839
+
840
+
841
+ |F + ∇V |2 dµ,
842
+ respectively.
843
+ It is not difficult to find that the Euler-
844
+ Lagrange equations of U, ¯U are given by the following
845
+ partial differential equation (PDE) with suitable BCs:
846
+ ∇ · ((F + ∇U)p) = 0 in Ω, (F + ∇U) · n = 0 on ∂Ω,
847
+ ∇ · ((F + ∇ ¯U)q) = 0 in Ω, (F + ∇ ¯U) · n = 0 on ∂Ω.
848
+ The PDEs above defined inside the domain Ω can be
849
+ converted to
850
+ ∆Up + ∇U · ∇p = −∇ · (pF ),
851
+ ∆ ¯Uq + ∇ ¯U · ∇q = −∇ · (qF ).
852
+ Define U0(x) = −D ln p(x) and ¯U0(x) = −D ln q(x). We
853
+ then obtain
854
+ −∇U · ∇U0 + D∆U = F · ∇U0 − D∇ · F ,
855
+ (24)
856
+ −∇ ¯U · ∇ ¯U0 + D∆ ¯U = F · ∇ ¯U0 − D∇ · F .
857
+ (25)
858
+ Assuming that δU0 := U0 − ¯U0 = O(ε), where 0 < ϵ ≪ 1
859
+ denotes a small constant, we have the PDE for U − ¯U by
860
+ subtracting (25) from (24):
861
+ −∇(U− ¯U) · ∇U0 + D∆(U − ¯U)
862
+ = F · ∇(δU0) + ∇ ¯U · ∇(δU0)
863
+ with BC ∇(U − ¯U) · n = 0. Since U0, ¯U, F ∼ O(1), we
864
+ can obtain that
865
+ U(x) − ¯U(x) = O(ε)
866
+ by the regularity theory of elliptic PDE [32, Section 6.3]
867
+ when D ∼ O(1), or by the matched asymptotic expan-
868
+ sion when D ≪ 1 [33, Chapter 2]. In fact, the closeness
869
+ between U(x) and ¯U(x) can be ensured as long as U0 and
870
+ ¯U0 are close enough in the region where p(x) and q(x) are
871
+ bounded away from zero by the method of characteristics
872
+ analysis [32, Section 2.1] and matched asymptotics.
873
+
874
+ 8
875
+ D.
876
+ DIMENSIONALITY REDUCTION
877
+ In this section, we study dimensionality reduction for
878
+ high-dimensional problems in order to learn the projected
879
+ potential.
880
+ Denote by x = (y, z)⊤ ∈ Ω. As in the MT, we assume
881
+ the domain
882
+ Ω = �Ω × Σ,
883
+ where �Ω ⊆ R2 and Σ ⊆ Rd−2 are the domain of y and z,
884
+ respectively. The reduced potential �U(z) is defined as
885
+ �U(z) = −D ln �pss(z) = −D ln
886
+
887
+ Σ
888
+ pss(y, z) dy.
889
+ (26)
890
+ One natural approach for constructing �U(z) is directly
891
+ integrating pss(y, z) based on the learned U(y, z) with
892
+ the EPR loss, i.e.,
893
+ �U(z) = −D ln
894
+
895
+ Σ
896
+ exp(−U(y, z)/D) dy.
897
+ (27)
898
+ However, performing this integration is not a straightfor-
899
+ ward numerical task (see, e.g., [34, Chapter 7]).
900
+ D.1.
901
+ Gradient projection loss
902
+ In this subsection, we study a simple approach to
903
+ approximate �U(z) based on sample points, which ap-
904
+ proximately obey the invariant distribution π(x), and
905
+ the learned high dimensional potential function U(x) by
906
+ EPR loss. This approach is not investigated numerically
907
+ in this work, but it will be useful for the derivations in
908
+ the next subsection. The idea is to utilize the gradient
909
+ projection (GP) loss on the z components of ∇U:
910
+ LGP(�V ) =
911
+
912
+
913
+ ��∇zU(y, z) − ∇z �V (z)
914
+ ��2 dπ(y, z).
915
+ (28)
916
+ To justify (28), we note that
917
+ LGP(�V ) =
918
+
919
+
920
+ ��∇zU − ∇z �V
921
+ ��2 dπ(x)
922
+ =
923
+
924
+
925
+ ��∇zU − ∇z �U + ∇z �U − ∇z �V
926
+ ��2 dπ(x)
927
+ =
928
+
929
+
930
+ ���∇zU − ∇z �U
931
+ ��2 +
932
+ ��∇z �U − ∇z �V
933
+ ��2�
934
+ dπ(x)
935
+ + 2
936
+
937
+
938
+ (∇zU − ∇z �U) · ∇z(�U − �V ) dπ(x)
939
+ =:P1 + P2,
940
+ where P1 and P2 denote the terms in the third and the
941
+ fourth line above, respectively. The term P2 = 0 since
942
+ ∫ ∇zU · ∇z(Ũ − Ṽ ) dπ(x)
+ = ∫_Ω̃ [ ∫_Σ ∇zU e^{−U/D} dy ] · ∇z(Ũ − Ṽ ) dz
+ = −D ∫_Ω̃ ∇z[ ∫_Σ e^{−U/D} dy ] · ∇z(Ũ − Ṽ ) dz
+ = −D ∫_Ω̃ ∇z p̃ss · ∇z(Ũ − Ṽ ) dz
+ = ∫_Ω̃ ∇z Ũ · ∇z(Ũ − Ṽ ) p̃ss dz
+ and
+ ∫ ∇z Ũ · ∇z(Ũ − Ṽ ) dπ(x) = ∫_Ω̃ ∇z Ũ · ∇z(Ũ − Ṽ ) p̃ss dz,
980
+ which cancel with each other in P2.
981
+ Therefore, the minimization of GP loss is equivalent to
982
+ minimizing
983
+ ∫_Ω̃ |∇z Ũ − ∇z Ṽ |² p̃ss dz,
987
+ which clearly implies that �U(z) is the unique minimizer
988
+ (up to a constant) of the proposed GP loss.
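+ As an illustration, a minimal PyTorch-style sketch of a Monte Carlo estimator for the GP loss (28) is given below. The callables U (the learned high-dimensional potential), V_tilde (the reduced network), and the sample array x are placeholders rather than the implementation used in this work; taking the last dim_z coordinates of x as the reduced variables z is likewise an assumption of the sketch.

import torch

def gp_loss(U, V_tilde, x, dim_z):
    """Monte Carlo estimate of the gradient projection loss (28)."""
    # x: (N, d) samples distributed approximately as the invariant measure pi;
    # the last dim_z coordinates of x play the role of the reduced variables z.
    x = x.clone().requires_grad_(True)
    grad_U = torch.autograd.grad(U(x).sum(), x, create_graph=True)[0]
    grad_U_z = grad_U[:, -dim_z:]                      # z-components of grad U

    z = x[:, -dim_z:].detach().clone().requires_grad_(True)
    grad_V_z = torch.autograd.grad(V_tilde(z).sum(), z, create_graph=True)[0]

    return ((grad_U_z - grad_V_z) ** 2).sum(dim=1).mean()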
989
+ D.2.
990
+ Projected EPR loss
991
+ In this subsection, we study the projected EPR (P-
992
+ EPR) loss, which has the form
993
+ LP-EPR(Ṽ ) = ∫ |Fz(y, z) + ∇z Ṽ (z)|² dπ(y, z),    (29)
999
+ where Fz(y, z) ∈ R2 is the z-component of the force field
1000
+ F = (Fy, Fz)⊤.
1001
+ Define
1002
+ L̃P-EPR(Ṽ ) = ∫ |F (y, z) + ∇Ṽ (z)|² dπ(y, z),    (30)
1008
+ where ∇ is the full gradient with respect to x. To justify
1009
+ (29), we first note the following equivalence
1010
+ min LP-EPR(Ṽ )  ⇐⇒  min L̃P-EPR(Ṽ ),    (31)
1014
+ since ∇y �V (z) = 0 and the y-components of F +∇�V only
1015
+ introduce an irrelevant constant in (30). Furthermore, we
1016
+ have
1017
+ L̃P-EPR(Ṽ ) = ∫ |F + ∇Ṽ |² dπ(x)
+ = ∫ |F + ∇U + ∇Ṽ − ∇U|² dπ(x)
+ = ∫ ( |F + ∇U|² + |∇Ṽ − ∇U|² ) dπ(x),
1036
+ where the last equality is due to the orthogonality rela-
1037
+ tion (18). Using a similar argument for deriving (31), the
1038
+ equivalence (31) itself, as well as the GP loss in (28), we
1039
+ get
1040
+ min LP-EPR(Ṽ )  ⇐⇒  min LGP(Ṽ ).    (32)
1044
+ Since �U minimizes the GP loss as is shown in the previous
1045
+ subsection, we conclude that �U minimizes the loss in (29).
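+ A sketch of the corresponding Monte Carlo estimator for the P-EPR loss (29) follows, under the same placeholder conventions as the GP-loss sketch above (a force callable F, a reduced network V_tilde, and samples x whose trailing dim_z coordinates are taken to be z).

import torch

def p_epr_loss(F, V_tilde, x, dim_z):
    """Monte Carlo estimate of the projected EPR loss (29)."""
    Fz = F(x)[:, -dim_z:]                               # z-components of the force
    z = x[:, -dim_z:].clone().requires_grad_(True)
    grad_V_z = torch.autograd.grad(V_tilde(z).sum(), z, create_graph=True)[0]
    return ((Fz + grad_V_z) ** 2).sum(dim=1).mean()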
1046
+ D.3.
1047
+ Force projection loss
1048
+ In this subsection, we study the force projection (P-
1049
+ For) loss for approximating the projection of Fz onto the
1050
+ z-space.
1051
+ Denote by
1052
+ F̃ (z) := ∫_Σ Fz(y, z) dπ(y|z)    (33)
+ the projected force defined using the conditional distribution
+ dπ(y|z) = pss(y, z)/p̃ss(z) dy.    (34)
1061
+ We can learn �F (z) via the following force projection loss
1062
+ LP-For(G̃) = ∫ |Fz(y, z) − G̃(z)|² dπ(y, z).    (35)
1070
+ To justify (35), we note that
1071
+ ∫ |Fz(y, z) − G̃(z)|² dπ(y, z)
+ = ∫ ( |Fz(y, z)|² + |G̃(z)|² ) dπ(y, z) − 2 ∫ Fz(y, z) · G̃(z) dπ(y, z)
+ =: P1 − 2P2.
1089
+ The term P2 can be simplified as
1090
+ P2 = ∫_Ω̃ [ ∫_Σ Fz(y, z) dπ(y|z) ] · G̃(z) p̃ss(z) dz
+    = ∫_Ω̃ F̃ (z) · G̃(z) p̃ss(z) dz.
1104
+ Therefore, we have the equivalence
1105
+ min LP-For(G̃)  ⇐⇒  min L̃P-For(G̃),    (36)
+ where
+ L̃P-For(G̃) := ∫_Ω̃ |F̃ (z) − G̃(z)|² p̃ss(z) dz.
1119
+ From the analysis above we can conclude that �F(z) min-
1120
+ imizes the loss in (35).
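+ The force projection loss (35) is a plain regression of the z-components of the force onto a function of z alone; its minimizer is the conditional mean (33). A minimal sketch under the same placeholder conventions as above:

import torch

def p_for_loss(F, G_tilde, x, dim_z):
    """Monte Carlo estimate of the force projection loss (35)."""
    Fz = F(x)[:, -dim_z:]          # z-components of the force at full samples x
    z = x[:, -dim_z:]
    return ((Fz - G_tilde(z)) ** 2).sum(dim=1).mean()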
1121
+ D.4.
1122
+ HJB equation for the reduced potential
1123
+ In this subsection, we show that the reduced potential
1124
+ �U satisfies the projected HJB equation
1125
+ F̃ · ∇z Ũ + |∇z Ũ|² − D∆z Ũ − D∇z · F̃ = 0,    (37)
1127
+ with asymptotic BC �U → ∞ as |z| → ∞, or the reflecting
1128
+ BC ( �F + ∇z �U) · �n = 0 on ∂�Ω, where �n denotes the unit
1129
+ outer normal on ∂�Ω. We will only consider the rectangu-
1130
+ lar domain case here. The argument for the unbounded
1131
+ case is similar.
1132
+ Recall that pss(x) satisfies the FPE
1133
+ ∇ · (pssF ) − D∆pss = 0.
1134
+ (38)
1135
+ Integrating both sides of (38) on Σ with respect to y
1136
+ and utilizing the boundary condition Jss · n = 0, where
1137
+ Jss = pssF − D∇pss, we get
1138
+ ∇z · ( ∫_Σ Fz pss dy ) − D∆z p̃ss = 0.    (39)
1145
+ Taking (33) and (34) into account, we obtain
1146
+ ∇z · (p̃ss F̃ ) − D∆z p̃ss = ∇z · J̃ = 0,    (40)
1153
+ i.e., a FPE for �pss(z) with the reduced force field �F ,
1154
+ where J̃ := p̃ss F̃ − D∇z p̃ss. The corresponding boundary
+ condition can also be derived by integrating the original
+ BC Jss · n = 0 on Σ with respect to y for z ∈ ∂Ω̃, which gives
+ J̃ · ñ = (p̃ss F̃ − D∇z p̃ss) · ñ = 0.    (41)
1166
+ Substituting the relation p̃ss(z) = exp(−Ũ(z)/D) into
1171
+ (40) and (41), we get (37) and the corresponding reflect-
1172
+ ing BC after some algebraic manipulations.
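+ For the reader's convenience, that algebra can be written out as follows (our own expansion of the substitution, not part of the original text):

\nabla_z \cdot \big(\tilde F\, e^{-\tilde U/D}\big)
  = e^{-\tilde U/D}\Big(\nabla_z\!\cdot\!\tilde F - \tfrac{1}{D}\,\tilde F\!\cdot\!\nabla_z\tilde U\Big),
\qquad
D\,\Delta_z e^{-\tilde U/D}
  = e^{-\tilde U/D}\Big(\tfrac{1}{D}\,|\nabla_z\tilde U|^2 - \Delta_z\tilde U\Big).
% Subtracting the second identity from the first (Eq. (40)) and multiplying by -D\,e^{\tilde U/D} yields (37):
\tilde F\cdot\nabla_z\tilde U + |\nabla_z\tilde U|^2 - D\,\Delta_z\tilde U - D\,\nabla_z\cdot\tilde F = 0.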
1173
+ E.
1174
+ STATE-DEPENDENT DIFFUSION
1175
+ COEFFICIENTS
1176
+ In this section, we study the EPR loss for NESS sys-
1177
+ tems with a state-dependent diffusion coefficient.
1178
+ Consider the Ito SDEs
1179
+ dx(t)/dt = F (x(t)) + √(2D) σ(x(t)) ẇ    (42)
1185
+ with the state-dependent diffusion matrix σ(x). Under
1186
+ the same assumptions as in the MT, we have the FPE
1187
+ ∇ · (pssF ) − D∇2 : (pssa) = 0.
1188
+ (43)
1189
+ We show that the high dimensional landscape function
1190
+ U of (42) minimizes the EPR loss
1191
+ LV-EPR(V ) = ∫ |F^v(x) + a(x)∇V (x)|²_{a⁻¹(x)} dπ(x),    (44)
1197
+ where F^v(x) := F (x) − D∇ · a(x) and |u|²_{a⁻¹(x)} := u⊤a⁻¹(x)u for u ∈ Rᵈ.
1202
+ To justify (44), we first note that (43) can be rewritten
1203
+ as
1204
+ ∇ · (pssF v − Da∇pss) = 0 ,
1205
+ (45)
1206
+ which, together with the BC, implies the orthogonality
1207
+ relation
1208
+ ∫ (F^v + a∇U) · ∇W dπ = 0    (46)
1215
+ for a suitable test function W(x). Following the same
1216
+ reasoning used in establishing (18) and utilizing (46), we
1217
+ have
1218
+ ∫ |F^v + a∇V |²_{a⁻¹} dπ
+ = ∫ |F^v + a∇U + a∇(V − U)|²_{a⁻¹} dπ
+ = ∫ |F^v + a∇U|²_{a⁻¹} dπ + ∫ |a∇(V − U)|²_{a⁻¹} dπ.
1239
+ The last expression implies that U(x) is the unique min-
1240
+ imizer of LV-EPR(V ) up to a constant.
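+ A minimal sketch of a Monte Carlo estimator for (44) is shown below; the callables F (drift), a (state-dependent diffusion matrices), div_a (the row-wise divergence of a), and the potential network V are assumed placeholders, not the authors' implementation.

import torch

def epr_loss_state_dependent(F, a, div_a, V, x, D):
    """Monte Carlo estimate of (44) with the a^{-1}(x)-weighted norm."""
    # F(x): (N, d) drift; a(x): (N, d, d) diffusion matrices; div_a(x): (N, d).
    x = x.clone().requires_grad_(True)
    grad_V = torch.autograd.grad(V(x).sum(), x, create_graph=True)[0]
    Fv = F(x) - D * div_a(x)                                   # modified force F^v
    r = Fv + torch.einsum('nij,nj->ni', a(x), grad_V)          # F^v + a grad V
    a_inv = torch.linalg.inv(a(x))
    return torch.einsum('ni,nij,nj->n', r, a_inv, r).mean()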
1241
+ The above derivation for the state-dependent diffusion
1242
+ case will permit us to construct the landscape for the
1243
+ chemical Langevin dynamics, which will be studied in
1244
+ future work.
1245
+ PART 2: COMPUTATION
1246
+ Now we present the computational details and results
1247
+ omitted in the MT in the computation part.
1248
+ F.
1249
+ 2D MODELS AND COMPARISONS
1250
+ In this section, we will describe the computational
1251
+ setup and results for some 2D models which we utilize
1252
+ for the test of different formulations, including the toy
1253
+ model with known potential in the MT, a 2D biologi-
1254
+ cal system with a limit cycle [3] and a 2D multi-stable
1255
+ system [5]. We will also demonstrate the motivation for
1256
+ enhanced EPR and its advantage over other methods.
1257
+ F.1.
1258
+ Toy model and enhanced EPR
1259
+ In the toy model, we set the force field as
1260
+ F (x) = −(I + A) · ∇U0(x),    (47)
+ and choose the potential
+ U0(x) = ((x − 1.5)² − 1.0)² + 0.5(y − 1.5)²,    (48)
1265
+ where x = (x, y)⊤. We take the anti-symmetric matrix
1266
+ A = ( 0  0.5 ; −0.5  0 ),    (49)
1274
+ (49)
1275
+ which introduces a counter-clockwise rotation for a fo-
1276
+ cusing central force field.
1277
+ This sets up a simple non-
1278
+ equilibrium system. In this model, we have
1279
+ F (x) = −∇U0(x) + l(x), l(x) = −A · ∇U0(x)
1280
+ and
1281
+ l(x) · ∇U0(x) = 0
1282
+ holds in the pointwise sense. So, we have constructed
1283
+ a double-well non-reversible system with analytically
1284
+ known potential which can be used to verify the accu-
1285
+ racy of the learned potential. We focus on the domain
1286
+ Ω = [0, 3] × [0, 3].
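+ A minimal NumPy transcription of the toy force field (47)–(49), with the gradient of U0 written analytically, is given below (our own sketch for illustration, not the training code used in this work):

import numpy as np

A = np.array([[0.0, 0.5],
              [-0.5, 0.0]])                    # anti-symmetric matrix of Eq. (49)

def grad_U0(p):
    # Analytic gradient of U0 = ((x-1.5)^2 - 1)^2 + 0.5*(y-1.5)^2, Eq. (48).
    x, y = p
    return np.array([4.0 * ((x - 1.5) ** 2 - 1.0) * (x - 1.5),
                     (y - 1.5)])

def force(p):
    # Non-reversible drift F = -(I + A) grad U0, Eq. (47); the non-gradient part
    # l = -A grad U0 is pointwise orthogonal to grad U0 because A = -A^T.
    return -(np.eye(2) + A) @ grad_U0(p)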
1287
+ First, the single EPR loss alone works well for the toy
1288
+ model with a relatively large diffusion coefficient D = 0.1,
1289
+ as shown in Fig. 1(A) in the MT. A slice plot of the poten-
1290
+ tial at y = 1.5 (Fig. 3(A)) shows the EPR solution coin-
1291
+ cides well with the analytical solution. The relative root
1292
+ mean square error (rRMSE) and the relative mean abso-
1293
+ lute error (rMAE), which will be defined in Section F.4,
1294
+ have mean and standard deviation of 0.099 ± 0.010 and
1295
+ 0.081 ± 0.013 over 3 runs, respectively.
1296
+ However, when decreasing D to 0.05, the samples from
1297
+ simulated invariant distribution mainly stay in the dou-
1298
+ ble wells and away from the transition region (orange
1299
+
1301
+ FIG. 3. An illustration for the motivation of enhanced EPR. (A) and (B) show the comparisons of the learned potentials and
1302
+ true solution on the line y = 1.5 in the toy model with D = 0.1 and D = 0.05, respectively. (C) shows the filled contour plot
1303
+ of the potential learned by only the EPR loss. The orange points are samples from the simulated invariant distribution with
1304
+ D = 0.05, While green points are enhanced samples simulated from a more diffusive distribution with D′ = 0.1, which are used
1305
+ in the enhanced EPR.
1306
+ points in Fig. 3(C)). In this case, the double well do-
1307
+ main can still be learned well, yet the transition region,
1308
+ without enough samples, has not been effectively trained.
1309
+ Thus, as shown in Fig. 3(B), the single EPR result cap-
1310
+ tures the double wells, but cannot accurately connect
1311
+ them in the transition domain, which makes the left well
1312
+ a bit higher than the right one. The pointwise HJB loss
1313
+ with enhanced samples that better cover the transition
1314
+ domain thus helps the EPR loss with samples for small
1315
+ D, which mainly focuses on the local well domain. Us-
1316
+ ing these enhanced samples for D′ = 0.1 (green points
1317
+ in Fig. 3(C)), the enhanced EPR method performs much
1318
+ better in the transition domain between the two wells
1319
+ and thus agrees well with the true solution.
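+ The combination used in the enhanced EPR can be sketched as follows; the HJB residual is taken in the form of Eq. (37) applied in the full space, the weights λ1, λ2 follow the values quoted later in Section F.4, and the force F and potential network V are placeholder torch callables (F must be built from differentiable torch operations for the divergence term).

import torch

def hjb_residual(F, V, x, D):
    # Pointwise residual of F·∇V + |∇V|^2 − D ΔV − D ∇·F = 0 (full-space analogue of (37)).
    x = x.clone().requires_grad_(True)
    d = x.shape[1]
    grad_V = torch.autograd.grad(V(x).sum(), x, create_graph=True)[0]
    lap_V = sum(torch.autograd.grad(grad_V[:, i].sum(), x, create_graph=True)[0][:, i]
                for i in range(d))
    Fx = F(x)
    div_F = sum(torch.autograd.grad(Fx[:, i].sum(), x, create_graph=True)[0][:, i]
                for i in range(d))
    return (Fx * grad_V).sum(1) + (grad_V ** 2).sum(1) - D * lap_V - D * div_F

def enhanced_loss(F, V, x_sde, x_enh, D, lam1=0.1, lam2=1.0):
    x = x_sde.clone().requires_grad_(True)
    grad_V = torch.autograd.grad(V(x).sum(), x, create_graph=True)[0]
    epr = ((F(x) + grad_V) ** 2).sum(1).mean()           # EPR loss on SDE samples
    hjb = (hjb_residual(F, V, x_enh, D) ** 2).mean()     # HJB loss on enhanced samples
    return lam1 * epr + lam2 * hjb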
1320
+ The above strategy is general, and we apply it to com-
1321
+ pute the landscape for all of the 2D problems considered
+ and compare it with other methods in Section F.4.
1323
+ F.2.
1324
+ 2D limit cycle model
1325
+ We apply our approach to the limit cycle dynamics
1326
+ with a Mexican-hat shape landscape [3].
1327
+ Before proceeding to the concrete dynamical model,
1328
+ we have the following observation. For any SDEs like
1329
+ dx/dt = F (x) + √(2D) ẇ,    (50)
1334
+ the corresponding steady FPE is
1335
+ ∇ · (F pss) − D∆pss = 0.
1336
+ If we make the transformation
1337
+ F → κF , D → κD
1338
+ in (50), then the steady state PDF
1339
+ pss(x) ∝ exp(−U(x)/D) = exp(−κU(x)/(κD))
1349
+ is not changed.
1350
+ The transformation only changes the
1351
+ timescale of the dynamics (50) from t0 to κt0. However,
1352
+ this transformation changes the learned potential from U
1353
+ to κU if we utilize the drift κF (x) and noise strength κD
1354
+ in the system (50), which is helpful to set the scale of U
1355
+ to be O(1) by adjusting κ suitably for a specific problem.
1356
+ An alternative approach to accomplish this task is by
1357
+ choosing F to be κF in the EPR loss.
1358
+ We take D = 0.1 and consider the limit cycle dynamics
1359
+ dx/dt = κ [ (α² + x²)/(1 + x²) · 1/(1 + y) − a x ],    (51)
+ dy/dt = (κ/τ0) [ b − y/(1 + c x²) ],    (52)
1378
+ where the parameters are κ = 100, α = a = b = 0.1, c =
1379
+ 100, and τ0 = 5. Here the choice of κ = 100 is to make
1380
+ U ∼ O(1) following [17]. We focus on the domain Ω =
1381
+ [0, 8] × [0, 8] and compute the potential landscape and
1382
+ force decomposition which is presented in the MT. As
1383
+ explained in the above paragraph, this corresponds to
1384
+ the case D = 0.1/κ = 0.001 for the force field considered
1385
+ in [3].
1386
1445
+ FIG. 4. Filled contour plots of the potential V (x; θ) for the toy model with D = 0.05 learned by (A) Enhanced EPR, (B) Naive
1446
+ HJB, and (C) Normalizing Flow. The force field F (x) is decomposed into the gradient part −∇V (x; θ) (white arrows) and the
1447
+ non-gradient part (gray arrows). The length of an arrow denotes the scale of the vector. The solid dots are samples from the
1448
+ simulated invariant distribution.
1449
+ F.3.
1450
+ 2D multi-stable model
1451
+ We also apply the enhanced approach to study the
1452
+ dynamics of a multi-stable system [5]
1453
+ dx/dt = a xⁿ/(Sⁿ + xⁿ) + b Sⁿ/(Sⁿ + yⁿ) − k1 x,    (53)
+ dy/dt = a yⁿ/(Sⁿ + yⁿ) + b Sⁿ/(Sⁿ + xⁿ) − k2 y,    (54)
1467
+ where the parameters are a = b = k1 = k2 = 1, S = 0.5,
1468
+ and n = 4. We focus on the domain Ω = [0, 3] × [0, 3]
1469
+ and present the results for D = 0.01 in the MT.
1470
+ F.4.
1471
+ Numerical comparisons
1472
+ In this subsection, we conduct a comparison study on
1473
+ the previous 2D problems to show the superiority of our
1474
+ enhanced EPR approach over other methods.
1475
+ For the
1476
+ toy model, we have the analytical solution; while for the
1477
+ other two 2D examples, we take the reference solution as
1478
+ the numerical solution of the steady FPE by a piecewise
1479
+ bilinear finite element method with fine rectangular grids
1480
+ and the least squares solver for the obtained sparse lin-
1481
+ ear system (a normalization condition
1482
+
1483
+ Ω pss(x)dx = 1 is
1484
+ added to fix the extra shifting degree of freedom).
1485
+ We use a fully connected neural network with 3 lay-
1486
+ ers and 20 hidden states as the potential V (x; θ). We
1487
+ train the network with a batch size of 2048 and a learn-
1488
+ ing rate of 0.001 by the Adam [35] optimizer for 3000
1489
+ epochs. We simulate the SDEs by the Euler-Maruyama
1490
+ scheme with reflecting boundaries on the boundary of
1491
+ the domain and obtain a dataset of size 10000 to approx-
1492
+ imate the invariant distribution. We update the dataset
1493
+ by one time step at each training iteration to make it
1494
+ closer to the invariant distribution.
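+ A minimal sketch of the Euler–Maruyama step with reflecting boundaries on a rectangular domain is shown below; the fold-back reflection used here is one common choice and an assumption of the sketch rather than the authors' exact implementation, and the drift callable is a placeholder.

import numpy as np

def euler_maruyama_reflect(drift, X, D, dt, lo, hi, rng):
    # One Euler-Maruyama step of dx = F(x) dt + sqrt(2D) dW on the box [lo, hi]^d,
    # reflecting any coordinate that leaves the domain back across the boundary.
    X = X + drift(X) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(X.shape)
    X = np.where(X < lo, 2.0 * lo - X, X)
    X = np.where(X > hi, 2.0 * hi - X, X)
    return X

# Usage: keep a pool of 10000 samples and advance it by one step per training iteration.
rng = np.random.default_rng(0)
pool = rng.uniform(0.0, 3.0, size=(10000, 2))
# pool = euler_maruyama_reflect(drift, pool, D=0.05, dt=1e-3, lo=0.0, hi=3.0, rng=rng)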
1495
+ In the toy model,
1496
+ we try different scales to enhance samples and report the
1497
+ best performance (when D′ = 2D) for naive HJB. For
1498
+ fairness, we use the same enhanced samples in enhanced
1499
+ EPR as naive HJB does. In SM, we denote the enhanced
1500
+ loss as λ1 LEPR +λ2 LHJB and use λ1 = 0.1, λ2 = 1.0 in
1501
+ the three models. We can also use Gaussian disturbances
1502
+ of the SDE data to obtain enhanced data, as we do in
1503
+ the limit cycle problem. We use D′ = 5D in the multi-
1504
+ stable problem for a better covering of the transition do-
1505
+ main.
1506
+ For the comparison with normalizing flows, we
1507
+ train a neural spline flow [36] using the implementation
1508
+ from [37]. We repeat 4 blocks of the rational quadratic
1509
+ spline with 3 layers of 64 hidden units and a followed LU
1510
+ linear permutation. We train the flow model by Adam of
1511
+ the learning rate 0.0001 for 20000 epochs, based on the
1512
+ same sample dataset as enhanced EPR.
1513
+ We shift the potential to the origin by its minimum
1514
+ and focus on the domain
1515
+ D = {x ∈ Ω|V (x; θ) ≤ 20D}.
1516
+ We then define the modified potential
1517
+ U m
1518
+ 0 (x) := min(U0(x), 20D),
1519
+ V m(x; θ) := min(V (x; θ), 20D)
1520
+ for the shifted potential U0(x) and V (x; θ) since only the
1521
+ potential values in the domain D is of practical interest.
1522
+ We use the relative root mean square error (rRMSE) and
1523
+ the relative mean absolute error (rMAE) to describe the
1524
+ accuracy.
1525
+ rRMSE = ( ∫_Ω |V^m(x; θ) − U0^m(x)|² dx / ∫_Ω |U0^m(x)|² dx )^{1/2},    (55)
+ rMAE = ∫_Ω |V^m(x; θ) − U0^m(x)| dx / ∫_Ω |U0^m(x)| dx.    (56)
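+ On a uniform grid over Ω the ratios in (55)–(56) can be estimated as below; the array names and the grid-based quadrature are assumptions of this sketch (the grid spacing cancels in the ratios).

import numpy as np

def relative_errors(V_grid, U_grid, D):
    # V_grid, U_grid: learned and reference potentials evaluated on the same uniform
    # grid covering Omega; both are shifted to have minimum zero and capped at 20*D.
    Vm = np.minimum(V_grid - V_grid.min(), 20.0 * D)
    Um = np.minimum(U_grid - U_grid.min(), 20.0 * D)
    diff = Vm - Um
    rrmse = np.sqrt((diff ** 2).sum() / (Um ** 2).sum())
    rmae = np.abs(diff).sum() / np.abs(Um).sum()
    return rrmse, rmae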
1543
+ We summarize the comparison of numerical errors for
1544
+ the 2D problems in Table I. The advantages of enhanced
1545
+ EPR over both naive HJB and normalizing flow can be
1546
+ identified from the following points.
1547
1620
+ FIG. 5. Slices of the learned 3D potential V (x; θ) in the Lorenz system. The solid dots are samples from the simulated invariant
1621
+ distribution.
1622
+ TABLE I. Comparisons on Numerical Methods. We report
1623
+ the mean and the standard deviation over 3 random seeds.
1624
+ Problem        Method             rRMSE          rMAE
+ Toy, D=0.1     Enhanced EPR       0.027±0.012    0.023±0.011
+                Naive HJB          0.195±0.007    0.094±0.020
+                Normalizing Flow   0.260±0.007    0.222±0.010
+ Toy, D=0.05    Enhanced EPR       0.048±0.021    0.030±0.012
+                Naive HJB          0.237±0.020    0.142±0.042
+                Normalizing Flow   0.284±0.028    0.231±0.030
+ Limit Cycle    Enhanced EPR       0.052±0.039    0.029±0.016
+                Naive HJB          0.107±0.043    0.048±0.019
+                Normalizing Flow   0.255±0.007    0.210±0.015
+ Multi-stable   Enhanced EPR       0.040±0.008    0.022±0.005
+                Naive HJB          0.103±0.014    0.053±0.006
+                Normalizing Flow   0.199±0.059    0.123±0.055
1652
+ • Without the guidance of the EPR loss, naive HJB cannot
+ effectively converge to the true solution with
+ the heuristically chosen sample distribution. As shown in
+ Table I, the enhanced EPR achieves significantly
+ better performance than naive HJB. Also,
1657
+ in the toy model with D = 0.05, naively training
1658
+ by HJB leads to an unreliable solution in Fig. 4(B)
1659
+ with relative errors larger than 0.1. Our computa-
1660
+ tional experiences show that the enhanced EPR is
1661
+ more robust than naive HJB and less sensitive to
1662
+ the enhanced data distribution and parameters.
1663
+ • The enhanced EPR converges faster than the naive
1664
+ HJB. For instance, in the toy model with D = 0.1,
1665
+ the enhanced EPR has achieved rRMSE of 0.087 ±
1666
+ 0.069 and rMAE of 0.066 ± 0.013 in 2000 epochs,
1667
+ while the naive HJB cannot attain the same level
1668
+ even after 3000 epochs.
1669
+ • Without information from the dynamics, the nor-
1670
+ malizing flow, which relies only on the simulated
+ invariant distribution dataset, performs the worst. The
1672
+ learned potential tends to be rough and non-
1673
+ smooth at the edge of samples as shown in Fig. 4.
1674
+ Thus the enhanced EPR explicitly utilizing the in-
1675
+ formation of the force field does help in more accu-
1676
+ rate training of the potential.
1677
+ We further compare the potential landscape computed
1678
+ by different methods in Fig. 4. We remark that we omit
1679
+ the space {x|V (x) ≥ 30D} in both Fig. 3 and Fig. 4 since
1680
+ these domains are not of practical interest (their proba-
1681
+ bility is less than 10−9 according to the Gibbs form of
1682
+ the invariant distribution). The enhanced EPR presents
1683
+ the landscape more consistent with the simulated sam-
1684
+ ples and the true/reference solution than other methods.
1685
+ The decomposition of the force also shows better match-
1686
+ ing for the toy model. The normalizing flow captures the
1687
+ high probability domain but lacks information on the dy-
1688
+ namics, thus making its error larger than enhanced EPR
1689
+ and naive HJB.
1690
+ G.
1691
+ 3D MODELS
1692
+ In this section, we describe the computational setup
1693
+ for the Lorenz system in three dimensions and Ferrell’s
1694
+ three-ODE model. We demonstrate the slices of the 3D
1695
+
1696
1724
+ FIG. 6. Streamlines of the projected force ˜
1725
+ G(z) and filled contour plot of the reduced potential ˜V (z; θ) for Ferrell’s three-ODE
1726
+ model learned by enhanced EPR.
1727
+ potential for the former and conduct the proposed di-
1728
+ mensionality reduction on the latter.
1729
+ G.1.
1730
+ 3D Lorenz system
1731
+ In this subsection, we apply our landscape construction
1732
+ approach to the 3D Lorenz system [38] with isotropic
1733
+ temporal Gaussian white noise.
1734
+ The Lorenz system has the form
1735
+ dx/dt = β1(y − x),    (57)
+ dy/dt = x(β2 − z) − y,    (58)
+ dz/dt = xy − β3z,    (59)
+ where β1 = 10, β2 = 28, and β3 = 8/3. We add the noise
1746
+ with strength D = 1. This model was also considered
1747
+ in [18] with D = 20.
1748
+ We obtain the enhanced data by adding Gaussian
1749
+ noises with standard deviation σ = 5 to the SDEs-
1750
+ simulation data.
1751
+ We directly train the 3D potential
1752
+ V (x; θ) by enhanced EPR with λ1 = 10.0, λ2 = 1.0
1753
+ and present a slice view of the landscape in Fig. 5. The
1754
+ learned 3D potential agrees well with the simulated sam-
1755
+ ples and shows a butterfly-like shape as the original sys-
1756
+ tem does.
1757
+ G.2.
1758
+ Ferrell’s three-ODE model
1759
+ In this subsection, we consider Ferrell’s three-ODE
1760
+ model for a simplified cell cycle dynamics [29] denoted
1761
+ by
1762
+ x = [CDK1], y = [Plk1], z = [APC]
1763
+
1764
+ 0.7
1765
+ 0.200
1766
+ 0.175
1767
+ 0.6
1768
+ 0.150
1769
+ 0.5
1770
+ 0.125
1771
+ 0.4
1772
+ Plk1
1773
+ 0.100
1774
+ 0.3
1775
+ 0.075
1776
+ 0.2
1777
+ 0.050
1778
+ 0.1
1779
+ 0.025
1780
+ 0.0
1781
+ 0.000
1782
+ 0.0
1783
+ 0.3
1784
+ 0.5
1785
+ 0.1
1786
+ 0.2
1787
+ 0.6
1788
+ 0.4
1789
+ 0.7
1790
+ CDK115
1791
+ for the concentration of CDK1, Plk1, and APC. We have
1792
+ the ODEs
1793
+ dx
1794
+ dt = α1 − β1x
1795
+ zn1
1796
+ Kn1
1797
+ 1
1798
+ + zn1 ,
1799
+ (60)
1800
+ dy
1801
+ dt = α2 (1 − y)
1802
+ xn2
1803
+ Kn2
1804
+ 2
1805
+ + xn2 − β2y,
1806
+ (61)
1807
+ dz
1808
+ dt = α3 (1 − z)
1809
+ yn3
1810
+ Kn3
1811
+ 3
1812
+ + yn3 − β3z,
1813
+ (62)
1814
+ where α1 = 0.1, α2 = α3 = β1 = 3, β2 = β3 = 1, K1 =
1815
+ K2 = K3 = 0.5, n1 = n2 = 8, and n3 = 8. We add the
1816
+ noise scale D = 0.01 with isotropic temporal Gaussian
1817
+ white noise.
1818
+ By taking the reduced variables z = (x, y)⊤, we can
1819
+ apply our force projection loss and enhanced loss to learn
1820
+ the projected force ˜G(x) and potential ˜V (x; θ), and the
1821
+ results are shown in Fig. 6.
1822
+ We use three-layer net-
1823
+ works with 80 hidden states in this problem and en-
1824
+ hanced samples simulated from a more diffusive distribu-
1825
+ tion with D′ = 5D. We train the projected force ˜G(x)
1826
+ for 1000 epochs and then conduct enhanced EPR with
1827
+ λ1 = 0.1, λ2 = 1.0 for 4000 epochs to compute the pro-
1828
+ jected potential. The obtained reduced potential shows
1829
+ a plateau in the centering region and a local-well tube
1830
+ domain along the reduced limit cycle.
1831
+ H.
1832
+ HIGH DIMENSIONAL MODELS
1833
+ In this section, we apply our approach to 8D limit cy-
1834
+ cle dynamics [4] and 52D multistable dynamics [39]. We
1835
+ directly train the reduced force field ˜G(z) and poten-
1836
+ tial ˜V (z; θ) according to the selected reduction variables
1837
+ suggested in the corresponding literature. We use three-
1838
+ layer networks with 80 hidden states for both force and
1839
+ potential. The training strategies are similar to previous
1840
+ examples.
1841
+ H.1.
1842
+ 8D complex system
1843
+ We consider an 8D system in which the dynamics and
1844
+ parameters are the same as the supporting information
1845
+ of [4], and take CycB and Cdc20 as the reduction variable
1846
+ z. We set the mass in this problem as m = 0.8.
1847
+ In [4], the noise strength D = 0.0005 is not suitable for
1848
+ direct neural network training since the scale of the po-
1849
+ tential is O(10⁻⁵). Borrowing the idea in Section F.2,
1850
+ we amplify the original force field F considered in [4]
1851
+ by κ = 1000 times, and take D = 0.01 for the trans-
1852
+ formed force field. This amounts to set D = 10−5 for the
1853
+ original force field, which is even smaller than the case
1854
+ considered in [4]. We simulate the SDEs without bound-
1855
+ aries first and then fix the dataset without updating. We
1856
+ obtain the enhanced samples by adding Gaussian pertur-
1857
+ bations to the obtained dataset. Only the data within the
1858
+ FIG. 7. Streamlines and limit sets of the projected force field
1859
+ of the 8D cell cycle model by two reduced variables CycB and
1860
+ Cdc20. The outer red circle is the stable limit cycle of the
1861
+ reduced force field corresponding to the yellow circle as the
1862
+ projection of the original high dimensional limit cycle. The
1863
+ inner red circle, red dot and two green circles are stable and
1864
+ unstable limit sets of the reduced dynamics, which are virtual
1865
+ in high dimensions.
1866
+ biologically meaningful domain of [0, 1.5]8 is utilized for
1867
+ computation.
1868
+ We train the projected force ˜G(z; θ) for 5000 epochs
1869
+ and conduct the enhanced EPR with λ1 = 0.1, λ2 = 1.0
1870
+ for 10000 epochs. Some essential features of the reduced
1871
+ potential and dynamics on the plane have been presented
1872
+ in MT.
1873
+ In the SM Fig. 7, we present a more thorough picture
1874
+ of the reduced dynamics for the 8D model than the MT
1875
+ Fig. 2. To be more specific, we further show two unstable
1876
1889
+ FIG. 8. Projected force ˜
1890
+ G(x) and potential ˜V (x; θ) of the 52D double-well model learned by enhanced EPR.
1891
+ limit cycles of the projected force field, two green circles
1892
+ obtained by reverse time integration, in SM Fig. 7. They
1893
+ fall between the outer and inner stable limit cycles (inner
1894
+ and outer red circles), and the inner stable limit cycle and
1895
+ inner stable node (red dot in the center), which play the
1896
+ role of separatrices between the neighboring stable limit
1897
+ sets. This picture occurs as the result that the landscape
1898
+ of the considered system in the centering region is very
1899
+ flat. These inner limit sets are virtual in high dimensions,
1900
+ but they naturally appear in the reduced dynamics on the
1901
+ plane. Similar features might also occur in other reduced
1902
+ dynamics in two dimensions.
1903
+ H.2.
1904
+ 52D multi-stable system
1905
+ We also apply our approach to a biological system
1906
+ with 52 ODEs constructed by [39] and take GATA6 and
1907
+ NANOG as the reduction variable z. We define Ai as
1908
+ the set of indices for activating xi and Ri as the set of
1909
+ indices for repressing xi, the corresponding relationships
1910
+ are defined as the 52D node network shown in [39]. For
1911
+ i = 1, ..., 52,
1912
+ dxi/dt = −k xi + Σ_{j∈Ai} a xjⁿ/(Sⁿ + xjⁿ) + Σ_{j∈Ri} b Sⁿ/(Sⁿ + xjⁿ),    (63)
1928
+ where a = 0.37, b = 0.5, k = 1, S = 0.5, and n = 3. We
1929
+ choose the noise strength D = 0.01.
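+ A sketch of how the drift (63) can be assembled from the activation/repression index sets is given below; the names act and rep are hypothetical placeholders, and the actual index sets come from the 52-node network in [39].

import numpy as np

def make_drift(act, rep, a=0.37, b=0.5, k=1.0, S=0.5, n=3):
    # act[i] / rep[i]: index lists of the activators / repressors of node i.
    def drift(x):
        dx = -k * x.astype(float)
        for i in range(x.shape[0]):
            xa = x[list(act[i])]
            xr = x[list(rep[i])]
            dx[i] += (a * xa ** n / (S ** n + xa ** n)).sum()   # activation terms
            dx[i] += (b * S ** n / (S ** n + xr ** n)).sum()    # repression terms
        return dx
    return drift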
1930
+ We train the force ˜G(z; θ) for 500 epochs and conduct
1931
+ enhanced EPR with λ1 = 100.0, λ2 = 1.0 for 500 epochs.
1932
+ We use enhanced samples simulated from a more diffusive
1933
+ distribution with D′ = 5D.
1934
+ As shown in Fig. 8, the
1935
+ projected force demonstrates the reduced dynamics and
1936
+ the depth of the constructed potential agrees well with
1937
+ the density of the sample points.
1938
1965
+ [1] C. Waddington, The Strategy of the Genes (George Allen
1966
+ & Unwin, Ltd., London, 1957).
1967
+ [2] P. Ao, J. Phys. A-Math. Gen. 37, L25 (2004).
1968
+ [3] J. Wang, L. Xu, and E. Wang, Proc. Nat. Acad. Sci. USA
1969
+ 105, 12271 (2008).
1970
+ [4] J. Wang, C. Li, and E. Wang, Proc. Nat. Acad. Sci. USA
1971
+ 107, 8195 (2010).
1972
+ [5] J. Wang, K. Zhang, L. Xu, and E. Wang, Proc. Nat.
1973
+ Acad. Sci. USA 108, 8257 (2011).
1974
+ [6] J. Zhou, M. Aliyu, E. Aurell, and S. Huang, J. R. Soc.,
1975
+ Interface 9, 3539 (2012).
1976
+ [7] H. Ge and H. Qian, Chaos 22, 023140 (2012).
1977
+ [8] C. Lv, X. Li, F. Li, and T. Li, PLoS ONE 9, e88167
1978
+ (2014).
1979
+ [9] P. Zhou and T. Li, J. Chem. Phys. 144, 094109 (2016).
1980
+ [10] J. Shi, K. Aihara, T. Li, and L. Chen, Nat. Sci. Rev. 9,
1981
+ nwac116 (2022).
1982
+ [11] J. J. Ferrell, Curr. Biol. 22, R458 (2012).
1983
+ [12] R. Yuan, X. Zhu, G. Wang, S. Li, and P. Ao, Rep. Prog.
1984
+ Phys. 80, 042701 (2017).
1985
+ [13] J. Wang, Adv. Phys. 64, 1 (2015).
1986
+ [14] X. Fang, K. Kruse, T. Lu, and J. Wang, Rev. Mod. Phys.
1987
+ 91, 045004 (2019).
1988
+ [15] M. Cameron, Phys. D 241, 1532 (2012).
1989
+ [16] C. Li and J. Wang, Proc. Nat. Acad. Sci. USA 111, 14130
1990
+ (2014).
1991
+ [17] B. Lin, Q. Li, and W. Ren, J. Sci. Comp. 91, 77 (2022).
1992
+ [18] B. Lin, Q. Li, and W. Ren, J. Comp. Phys. 474, 111783
1993
+ (2023).
1994
+ [19] B. Lin, Q. Li, and W. Ren, in Proc. Mach. Learn. Res.,
1995
+ 2nd Annual Conference on Mathematical and Scientific
1996
+ Machine Learning, Vol. 145 (2021) p. 652.
1997
+ [20] M. Crandall and P. Lions, Trans. Amer. Math. Soc. 277,
1998
+ 1 (1983).
1999
+ [21] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learn-
2000
+ ing (MIT Press, Cambridge, 2016).
2001
+ [22] H. Qian, Phys. Rev. E 65, 016102 (2001).
2002
+ [23] X. Zhang, H. Qian, and M. Qian, Phys. Rep. 510, 1
2003
+ (2012).
2004
+ [24] R. Khasminskii, Stochastic Stability of Differential Equa-
2005
+ tions, 2nd ed. (Springer Verlag, Berlin and Heidelberg,
2006
+ 2012).
2007
+ [25] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury,
2008
+ G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga,
2009
+ et al., Advances in Neural Information Processing Sys-
2010
+ tems 32 (2019).
2011
+ [26] M. Raissi, P. Perdikaris, and G. Karniadakis, J. Comp.
2012
+ Phys. 378, 686 (2019).
2013
+ [27] W. Zhang, C. Hartmann, and C. Sch¨utte, Faraday Dis-
2014
+ cuss. 195, 365 (2016).
2015
+ [28] M. Hirsch, S. Smale, and R. Devaney, Differential Equa-
2016
+ tions, Dynamical Systems, and an Introduction to Chaos,
2017
+ 2nd ed. (Academic Press, San Diego, 2004).
2018
+ [29] J. E. Ferrell, T. Y.-C. Tsai, and Q. Yang, Cell 144, 874
2019
+ (2011).
2020
+ [30] I. Kobyzev, S. Prince, and M. Brubaker, IEEE Trans.
2021
+ Patt. Anal. Mach. Intel. 43, 3964 (2021).
2022
+ [31] R. Courant and D. Hilbert, Methods of Mathematical
2023
+ Physics, Vol. 1 (Interscience Publishers, New York, 1953).
2024
+ [32] L. Evans, Partial Differential Equations, 2nd ed. (Amer-
2025
+ ican Mathematical Society, Rode Island, 2010).
2026
+ [33] M. Holmes, Introduction to Perturbation Methods, 2nd
2027
+ ed. (Springer Verlag, New York, 2013).
2028
+ [34] D. Frenkel and B. Smit, Understanding Molecular Sim-
2029
+ ulation: From Algorithms to Applications, 2nd ed. (Aca-
2030
+ demic Press, San Diego, 2002).
2031
+ [35] D. P. Kingma and J. Ba, in Proceedings of the Interna-
2032
+ tional Conference on Learning Representations (2015).
2033
+ [36] C. Durkan, A. Bekasov, I. Murray, and G. Papamakarios,
2034
+ Advances in Neural Information Processing Systems 32
2035
+ (2019).
2036
+ [37] https://github.com/VincentStimper/normalizing-flows.
2037
+ [38] E. N. Lorenz, J. Atmos. Sci. 20, 130 (1963).
2038
+ [39] C. Li and J. Wang, PLoS Comput. Biol. 9, e1003165
2039
+ (2013).
2040
+
-dAzT4oBgHgl3EQf_P7B/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
-dFST4oBgHgl3EQfcTgr/content/2301.13802v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:34546e621e06c9455eceefd93db0aa3362473c9de57a0deb3971fff330468e43
3
+ size 680188
-dFST4oBgHgl3EQfcTgr/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1167fd0e984e6c33c2d1b6b99d126492ecd6c95ef8cd7e69a8f85a7edba42c6c
3
+ size 1310765
-dFST4oBgHgl3EQfcTgr/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3ce468db637f1954e6a3ae98ad236ab99ce10f01c18bd5bdff1b405264a1ddd6
3
+ size 52164
.gitattributes CHANGED
@@ -9041,3 +9041,87 @@ p9FST4oBgHgl3EQfODiE/content/2301.13750v1.pdf filter=lfs diff=lfs merge=lfs -tex
9041
  zNE1T4oBgHgl3EQfkQT7/content/2301.03273v1.pdf filter=lfs diff=lfs merge=lfs -text
9042
  tdE3T4oBgHgl3EQf9wsO/content/2301.04818v1.pdf filter=lfs diff=lfs merge=lfs -text
9043
  GNE3T4oBgHgl3EQfWAqd/content/2301.04465v1.pdf filter=lfs diff=lfs merge=lfs -text
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
9041
  zNE1T4oBgHgl3EQfkQT7/content/2301.03273v1.pdf filter=lfs diff=lfs merge=lfs -text
9042
  tdE3T4oBgHgl3EQf9wsO/content/2301.04818v1.pdf filter=lfs diff=lfs merge=lfs -text
9043
  GNE3T4oBgHgl3EQfWAqd/content/2301.04465v1.pdf filter=lfs diff=lfs merge=lfs -text
9044
+ AdFJT4oBgHgl3EQfrS3C/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9045
+ HNE5T4oBgHgl3EQfWQ9u/content/2301.05557v1.pdf filter=lfs diff=lfs merge=lfs -text
9046
+ OtE3T4oBgHgl3EQfCAnu/content/2301.04273v1.pdf filter=lfs diff=lfs merge=lfs -text
9047
+ OtFRT4oBgHgl3EQf5Dhx/content/2301.13671v1.pdf filter=lfs diff=lfs merge=lfs -text
9048
+ A9E4T4oBgHgl3EQfEwz_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9049
+ g9AzT4oBgHgl3EQfa_wH/content/2301.01376v1.pdf filter=lfs diff=lfs merge=lfs -text
9050
+ J9AyT4oBgHgl3EQff_g3/content/2301.00349v1.pdf filter=lfs diff=lfs merge=lfs -text
9051
+ z9E1T4oBgHgl3EQfRQNt/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9052
+ UNE0T4oBgHgl3EQfVADN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9053
+ ONE3T4oBgHgl3EQfxAvE/content/2301.04708v1.pdf filter=lfs diff=lfs merge=lfs -text
9054
+ r9E5T4oBgHgl3EQflw-X/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9055
+ 7tE1T4oBgHgl3EQfnQST/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9056
+ hNE0T4oBgHgl3EQf6gJU/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9057
+ z9E4T4oBgHgl3EQfZQyl/content/2301.05055v1.pdf filter=lfs diff=lfs merge=lfs -text
9058
+ ytE0T4oBgHgl3EQf-wLX/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9059
+ VNAyT4oBgHgl3EQfuvm-/content/2301.00620v1.pdf filter=lfs diff=lfs merge=lfs -text
9060
+ o9E2T4oBgHgl3EQfKAbq/content/2301.03699v1.pdf filter=lfs diff=lfs merge=lfs -text
9061
+ zNE1T4oBgHgl3EQfkQT7/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9062
+ ftE0T4oBgHgl3EQf6AJn/content/2301.02758v1.pdf filter=lfs diff=lfs merge=lfs -text
9063
+ htFLT4oBgHgl3EQfaS9u/content/2301.12073v1.pdf filter=lfs diff=lfs merge=lfs -text
9064
+ J9AyT4oBgHgl3EQff_g3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9065
+ rtFJT4oBgHgl3EQfbiyy/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9066
+ g9AzT4oBgHgl3EQfa_wH/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9067
+ ytE0T4oBgHgl3EQfcwA0/content/2301.02366v1.pdf filter=lfs diff=lfs merge=lfs -text
9068
+ 4NAzT4oBgHgl3EQfuv1g/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9069
+ YNAzT4oBgHgl3EQfKvsF/content/2301.01100v1.pdf filter=lfs diff=lfs merge=lfs -text
9070
+ tNE2T4oBgHgl3EQf1wiA/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9071
+ aNFQT4oBgHgl3EQffjaD/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9072
+ P9E4T4oBgHgl3EQf-g5_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9073
+ zNE2T4oBgHgl3EQfhwde/content/2301.03951v1.pdf filter=lfs diff=lfs merge=lfs -text
9074
+ IdE2T4oBgHgl3EQf_gkq/content/2301.04248v1.pdf filter=lfs diff=lfs merge=lfs -text
9075
+ sdE3T4oBgHgl3EQfjgoO/content/2301.04588v1.pdf filter=lfs diff=lfs merge=lfs -text
9076
+ kdFPT4oBgHgl3EQfGjRx/content/2301.13004v1.pdf filter=lfs diff=lfs merge=lfs -text
9077
+ NNFLT4oBgHgl3EQfNy8i/content/2301.12021v1.pdf filter=lfs diff=lfs merge=lfs -text
9078
+ sdE3T4oBgHgl3EQfjgoO/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9079
+ wtAzT4oBgHgl3EQfQfse/content/2301.01198v1.pdf filter=lfs diff=lfs merge=lfs -text
9080
+ z9E4T4oBgHgl3EQfZQyl/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9081
+ IdE2T4oBgHgl3EQf_gkq/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9082
+ WdE1T4oBgHgl3EQfbgRN/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9083
+ DtE0T4oBgHgl3EQfQgBB/content/2301.02193v1.pdf filter=lfs diff=lfs merge=lfs -text
9084
+ p9AyT4oBgHgl3EQfl_hG/content/2301.00462v1.pdf filter=lfs diff=lfs merge=lfs -text
9085
+ zNE2T4oBgHgl3EQfhwde/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9086
+ VNAyT4oBgHgl3EQfuvm-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9087
+ 4tAzT4oBgHgl3EQf9v5K/content/2301.01923v1.pdf filter=lfs diff=lfs merge=lfs -text
9088
+ ytE0T4oBgHgl3EQfcwA0/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9089
+ WtE2T4oBgHgl3EQfuQi1/content/2301.04079v1.pdf filter=lfs diff=lfs merge=lfs -text
9090
+ TtAzT4oBgHgl3EQfJfvP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9091
+ XdAyT4oBgHgl3EQf9PqM/content/2301.00871v1.pdf filter=lfs diff=lfs merge=lfs -text
9092
+ edAyT4oBgHgl3EQfw_kw/content/2301.00657v1.pdf filter=lfs diff=lfs merge=lfs -text
9093
+ wtAzT4oBgHgl3EQfQfse/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9094
+ XdAyT4oBgHgl3EQf9PqM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9095
+ CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf filter=lfs diff=lfs merge=lfs -text
9096
+ -dFST4oBgHgl3EQfcTgr/content/2301.13802v1.pdf filter=lfs diff=lfs merge=lfs -text
9097
+ WtE2T4oBgHgl3EQfuQi1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9098
+ GNE2T4oBgHgl3EQf-glT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9099
+ 4tAzT4oBgHgl3EQf9v5K/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9100
+ TtAzT4oBgHgl3EQfJfvP/content/2301.01082v1.pdf filter=lfs diff=lfs merge=lfs -text
9101
+ ZdAyT4oBgHgl3EQfWve4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9102
+ -dFST4oBgHgl3EQfcTgr/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9103
+ o9FLT4oBgHgl3EQfhS9Z/content/2301.12102v1.pdf filter=lfs diff=lfs merge=lfs -text
9104
+ DNE2T4oBgHgl3EQf9Qm6/content/2301.04227v1.pdf filter=lfs diff=lfs merge=lfs -text
9105
+ GNE2T4oBgHgl3EQf-glT/content/2301.04239v1.pdf filter=lfs diff=lfs merge=lfs -text
9106
+ p9AyT4oBgHgl3EQfl_hG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9107
+ QNE1T4oBgHgl3EQfagS4/content/2301.03163v1.pdf filter=lfs diff=lfs merge=lfs -text
9108
+ ydE4T4oBgHgl3EQfYgwr/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9109
+ NNFLT4oBgHgl3EQfNy8i/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9110
+ _tE5T4oBgHgl3EQfSQ5r/content/2301.05527v1.pdf filter=lfs diff=lfs merge=lfs -text
9111
+ EtE3T4oBgHgl3EQfVQrm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9112
+ u9E3T4oBgHgl3EQfNwnN/content/2301.04387v1.pdf filter=lfs diff=lfs merge=lfs -text
9113
+ tNE4T4oBgHgl3EQfWAwH/content/2301.05028v1.pdf filter=lfs diff=lfs merge=lfs -text
9114
+ B9AyT4oBgHgl3EQfePhg/content/2301.00317v1.pdf filter=lfs diff=lfs merge=lfs -text
9115
+ ytFQT4oBgHgl3EQfBjWV/content/2301.13227v1.pdf filter=lfs diff=lfs merge=lfs -text
9116
+ DtE0T4oBgHgl3EQfQgBB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9117
+ WdE1T4oBgHgl3EQfbgRN/content/2301.03173v1.pdf filter=lfs diff=lfs merge=lfs -text
9118
+ QNE1T4oBgHgl3EQfagS4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9119
+ RNFQT4oBgHgl3EQfaTZS/content/2301.13319v1.pdf filter=lfs diff=lfs merge=lfs -text
9120
+ tNE4T4oBgHgl3EQfWAwH/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9121
+ 0NFQT4oBgHgl3EQfDTUM/content/2301.13233v1.pdf filter=lfs diff=lfs merge=lfs -text
9122
+ D9E0T4oBgHgl3EQfQgCE/content/2301.02194v1.pdf filter=lfs diff=lfs merge=lfs -text
9123
+ hNA0T4oBgHgl3EQfH__c/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9124
+ B9AyT4oBgHgl3EQfePhg/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
9125
+ 8dE3T4oBgHgl3EQfRwni/content/2301.04426v1.pdf filter=lfs diff=lfs merge=lfs -text
9126
+ a9AzT4oBgHgl3EQf2v6K/content/2301.01819v1.pdf filter=lfs diff=lfs merge=lfs -text
9127
+ 8dE3T4oBgHgl3EQfRwni/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
0NFQT4oBgHgl3EQfDTUM/content/2301.13233v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2ae27f1210a61f5473d88063cdbdf13d6cf88b96ab5c470e8b1122472128f3da
3
+ size 1037810
0NFQT4oBgHgl3EQfDTUM/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:67ba2957c99bb36b51479d2638b58696f7e51a65a673dcf3296dab2cc05293e2
3
+ size 185338
0dAzT4oBgHgl3EQfDPpe/content/tmp_files/2301.00972v1.pdf.txt ADDED
@@ -0,0 +1,1453 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ EZInterviewer: To Improve Job Interview Performance with
2
+ Mock Interview Generator
3
+ Mingzhe Li∗†
4
+ Peking University
5
6
+ Xiuying Chen*
7
+ CBRC, KAUST
8
+ CEMSE, KAUST
9
10
+ Weiheng Liao
11
+ Made by DATA
12
13
+ Yang Song
14
+ BOSS Zhipin NLP Center
15
16
+ Tao Zhang
17
+ BOSS Zhipin
18
19
+ Dongyan Zhao
20
+ Peking University
21
22
+ Rui Yan‡
23
+ Gaoling School of AI
24
+ Renmin University of China
25
26
+ ABSTRACT
27
+ Interview has been regarded as one of the most crucial step for
28
+ recruitment. To fully prepare for the interview with the recruiters,
29
+ job seekers usually practice with mock interviews between each
30
+ other. However, such a mock interview with peers is generally far
31
+ away from the real interview experience: the mock interviewers are
32
+ not guaranteed to be professional and are not likely to behave like
33
+ a real interviewer. Due to the rapid growth of online recruitment in
34
+ recent years, recruiters tend to have online interviews, which makes
35
+ it possible to collect real interview data from real interviewers. In
36
+ this paper, we propose a novel application named EZInterviewer,
37
+ which aims to learn from the online interview data and provides
38
+ mock interview services to the job seekers. The task is challenging
39
+ in two ways: (1) the interview data are now available but still of
40
+ low-resource; (2) to generate meaningful and relevant interview
41
+ dialogs requires thorough understanding of both resumes and job
42
+ descriptions. To address the low-resource challenge, EZInterviewer
43
+ is trained on a very small set of interview dialogs. The key idea is
44
+ to reduce the number of parameters that rely on interview dialogs
45
+ by disentangling the knowledge selector and dialog generator so
46
+ that most parameters can be trained with ungrounded dialogs as
47
+ well as the resume data that are not low-resource. Specifically, to
48
+ keep the dialog on track for professional interviews, we pre-train
49
+ a knowledge selector module to extract information from resume
50
+ in the job-resume matching. A dialog generator is also pre-trained
51
+ with ungrounded dialogs, learning to generate fluent responses.
52
+ * Both authors contributed equally to this research.
53
+ † Work done during an internship at BOSS Zhipin.
54
+ ‡ Corresponding author: Rui Yan ([email protected]).
55
+ Permission to make digital or hard copies of all or part of this work for personal or
56
+ classroom use is granted without fee provided that copies are not made or distributed
57
+ for profit or commercial advantage and that copies bear this notice and the full citation
58
+ on the first page. Copyrights for components of this work owned by others than the
59
+ author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or
60
+ republish, to post on servers or to redistribute to lists, requires prior specific permission
61
+ and/or a fee. Request permissions from [email protected].
62
+ WSDM ’23, February 27-March 3, 2023, Singapore, Singapore
63
+ © 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
64
+ ACM ISBN 978-1-4503-9407-9/23/02...$15.00
65
+ https://doi.org/10.1145/3539597.3570476
66
+ Then, a decoding manager is finetuned to combine information
67
+ from the two pre-trained modules to generate the interview ques-
68
+ tion. Evaluation results on a real-world job interview dialog dataset
69
+ indicate that we achieve promising results to generate mock in-
70
+ terviews. With the help of EZInterviewer, we hope to make mock
71
+ interview practice become easier for job seekers.
72
+ CCS CONCEPTS
73
+ • Computing methodologies → Natural language generation.
74
+ KEYWORDS
75
+ EZInterviewer, mock interview generation, knowledge-grounded
76
+ dialogs, online recruitment, low-resource deep learning
77
+ ACM Reference Format:
78
+ Mingzhe Li, Xiuying Chen, Weiheng Liao, Yang Song, Tao Zhang, Dongyan
79
+ Zhao, Rui Yan. 2023. EZInterviewer: To Improve Job Interview Performance
80
+ with Mock Interview Generator. In Proceedings of the Sixteenth ACM Inter-
81
+ national Conference on Web Search and Data Mining (WSDM ’23), February
82
+ 27-March 3, 2023, Singapore, Singapore. ACM, New York, NY, USA, 9 pages.
83
+ https://doi.org/10.1145/3539597.3570476
84
+ 1
85
+ INTRODUCTION
86
+ To make better preparations, job seekers practice mock interviews,
87
+ which aims to anticipate interview questions and prepare them
88
+ for what they might get asked in their real turn. However, the
89
+ outcome of such an approach is unsatisfactory, since those “mock
90
+ interviewers” do not have interview experience themselves, and
91
+ do not know what the real recruiters would be interested in. Mock
92
+ Interview Generation (MIG) represents a plausible solution to this
93
+ problem. Not only makes interviews more cost-effective, but mock
94
+ interview generators also appear to be feasible, since much can be
95
+ learned about the job seekers from their resumes, as can the job
96
+ itself from the job description (JD). An illustration of MIG task is
97
+ shown in Figure 1.
98
+ There are two main challenges in this task. One is that the
99
+ knowledge-grounded interviews are extremely time-consuming
100
+ and costly to collect. Without a sufficient amount of training data,
101
+ arXiv:2301.00972v1 [cs.CL] 3 Jan 2023
102
+
103
+ WSDM ’23, February 27-March 3, 2023, Singapore, Singapore
104
+ Mingzhe Li et al.
105
+ Figure 1: An example of the Mock Interview Generation task.
106
+ Based on the candidate’s work experience and the current di-
107
+ alog on the experience of web page development, the system
108
+ generates an interview question “If a product needs a three-
109
+ level classification selection, which component would you
110
+ use and how to achieve it?”.
111
+ the performance of such dialog generation models drops dramati-
112
+ cally [37]. The second challenge is to make the knowledge-grounded
113
+ dialog relevant to the candidate resume, job description, and previ-
114
+ ous dialog utterances. This makes MIG a complex task involving
115
+ text understanding, knowledge selection, and dialog generation.
116
+ In this paper, we propose EZInterviewer, a novel mock interview
117
+ generator, with the aim of making interviews easier to prepare. The
118
+ key idea is to train EZInterviewer in a low-resource setting: the
119
+ model is first pre-trained on large-scale ungrounded dialogs and
120
+ resume data, and then fine-tuned on a very small set of resume-
121
+ grounded interview dialogs. Specifically, the knowledge selector
122
+ consists of a resume encoder to encode the resume, and a key-value
123
+ memory network with mask self-attention mechanism, responsible
124
+ for selecting relevant information in the resume to focus on to help
125
+ generate the next interview utterance. The dialog generator also
126
+ has two components, a context encoder which encodes the current
127
+ dialog context, and a response decoder, responsible for generating
128
+ the next dialog utterance without knowledge from the resumes.
129
+ This knowledge-insensitive dialog generator is coordinated with
130
+ the knowledge selector by a decoding manager that dynamically
131
+ determines which component is activated for utterance generation.
132
+ It is noted that the number of parameters in the decoding man-
133
+ ager can be small, therefore it only requires a small number of
134
+ resume-grounded interview dialogs. Extensive experiments on real-
135
+ world interview dataset demonstrate the effectiveness of our model.
136
+ To summarize, our contributions are three-fold:
137
+ • We introduce a novel Mock Interview Generation task, which
138
+ is a pilot study of intelligent online recruitment with potential
139
+ commercial values.
140
+ • To address the low-resource challenge, we propose to reduce
141
+ the number of parameters that rely on interview dialogs by dis-
142
+ entangling knowledge selector and dialog generator so that the
143
+ majority of parameters can be trained with large-scale ungrounded
144
+ dialog and resume data.
145
+ • We propose a novel model to jointly process dialog contexts,
146
+ candidate resumes, and job descriptions and generate highly rele-
147
+ vant, knowledge-aware interview dialogs.
148
+ 2
149
+ RELATED WORK
150
+ Multi-turn response generation aims to generate a response that is
151
+ natural and relevant to the entire context, based on utterances in its
152
+ previous turns. [36] concatenated multiple utterances into one sen-
153
+ tence and utilized RNN encoder or Transformer to encode the long
154
+ sequence, simplifying multi-turn dialog into a single-turn dialog.
155
+ To better model the relationship between multi-turn utterances,
156
+ [4, 10] introduced interaction between utterances after encoding
157
+ each utterance.
158
+ As human conversations are almost always grounded with exter-
159
+ nal knowledge, the absence of knowledge grounding has become
160
+ one of the major gaps between current open-domain dialog systems
161
+ and real human conversations [8, 24, 35]. A series of work [20, 29]
162
+ focused on generating a response based on the interaction between
163
+ context and unstructured document knowledge, while a few oth-
164
+ ers [22, 33] introduced knowledge graphs into conversations. These
165
+ models, however, usually under-perform in a low-resource setting.
166
+ To address the low resource problem, [16] proposed to enhance
167
+ the context-dependent cross-lingual mapping upon the pre-trained
168
+ monolingual BERT representations. [28] extended the meta-learning
169
+ algorithm, which utilized knowledge learned from high-resource
170
+ domains to boost the performance of low-resource unsupervised
171
+ neural machine translation. Different from the above methods, [37]
172
+ proposed a disentangled response decoder in order to isolate pa-
173
+ rameters that depend on knowledge-grounded dialogs from the
174
+ entire generation model. Our model takes a step further, taking
175
+ into account the changes in attention on knowledge in multi-turn
176
+ dialog scenarios.
177
+ 3
178
+ MODEL
179
+ 3.1
180
+ Problem Formulation
181
+ For an input multi-turn dialog context $U = \{u_1, u_2, \ldots, u_m\}$ between a job candidate and an interviewer, where $u_i$ represents the $i$-th utterance, we assume there is a ground-truth textual interview question $Y = \{y_1, y_2, \ldots, y_n\}$. $m$ is the number of utterances in the dialog context and $n$ is the total number of words in question $Y$. The $i$-th utterance is $u_i = \{x^i_1, x^i_2, \ldots, x^i_{T^i_u}\}$. Meanwhile, there is a
+ candidate resume $R = \{(k_1, v_1), (k_2, v_2), \ldots, (k_{T_r}, v_{T_r})\}$ corresponding to the candidate in the interview, which has $T_r$ key-value pairs, each of which represents an attribute in the resume. For the job-resume matching pre-training task, there is an external job description $J = \{j_1, j_2, \ldots, j_{T_j}\}$, which has $T_j$ words. The goal is to
+ generate an interview question $Y'$ that is not only coherent with the dialog context $U$ but also pertinent to the job candidate's resume $R$.
198
+ 3.2
199
+ System Overview
200
+ In this section, we propose our Low-resource Mock Interview Gen-
201
+ erator (EZInterviewer) model, which is divided into three parts as
202
+ shown in Figure 2:
203
+ [Screenshot figure: the mock interview web front-end, showing a candidate's online resume and a job description alongside a resume-grounded mock interview chat; the image content is not recoverable from the text extraction.]
273
+ [Figure 2 architecture diagram]
+ Figure 2: Overview of EZInterviewer, which consists of three parts: (1) Knowledge Selector selects salient knowledge information from the candidate resume; (2) Dialog Generator predicts the next word without knowledge of resumes; (3) Decoding Manager coordinates the output from knowledge selector and dialog generator to produce the interview question.
334
+ • Dialog Generator predicts the next word of a response based on
335
+ the prior sub-sequence. In our model, we pre-train it by large-scale
336
+ ungrounded dialogs.
337
+ • Knowledge Selector selects salient knowledge information from
338
+ the candidate resume for interview question generation. In our
339
+ model, we augment the ability of the knowledge selector by em-
340
+ ploying it to perform job-resume matching.
341
+ • Decoding Manager coordinates the output from knowledge
342
+ selector and dialog generator to predict the interview question.
343
+ It is important to note that two pre-training techniques are employed to train an EZInterviewer model. First, we pre-train the knowledge selector on a job-resume matching task: while it is hard to attend to the appropriate content of a resume from the dialog alone, the salient information in a resume can be identified through job-resume matching [13, 34]. Second, the context encoder
+ and response decoder of the dialog generator are pre-trained on large-scale ungrounded dialogs so as to predict the next word of a response based on the prior sub-sequence. Finally, the decoding manager, which relies on only a few parameters, coordinates the two components to generate knowledge-grounded interview utterances.
354
+ 3.3
355
+ Dialog Generator
356
+ Context Encoder. Instead of processing the dialog context as a
357
+ flat sequence, we employ a hierarchical encoder [3] to capture intra-
358
+ and inter-utterance relations, which is composed of a local sentence
359
+ encoder and a global context encoder. For the sentence encoder, to
360
+ model the semantic meaning of the dialog context, we learn the
361
+ representation of each utterance 𝑢𝑖 by a self-attention mechanism
362
+ (SAM) initialized by BERT [5]:
363
+ $h^i_j = \mathrm{SAM}_u(e(x^i_j), h^i_*)$.   (1)
368
+ We extract the state at “[cls]” position to denote the utterance state,
369
+ abbreviated as ℎ𝑖. Apart from the local information exchange in
370
+ each utterance, we let information flow across multi-turn context:
371
+ $h^c_t = \mathrm{SAM}_c(h_t, h^c_*)$,   (2)
+ where $h^c_t$ denotes the hidden state of the $t$-th utterance in $\mathrm{SAM}_c$.
377
+ Response Decoder. The response decoder is responsible for understanding the previous dialog context and generating the response without knowledge of the resume information [19]. Our decoder also follows the style of the Transformer.
+ Concretely, we first apply self-attention to the masked decoder input, obtaining $d_t$. Based on $d_t$, we compute cross-attention scores over the previous utterances:
384
+ $\alpha^c_t = \mathrm{ReLU}([d_t W_d (h^c_i W_h)^T])$.   (3)
+ The attention weights $\alpha^c_t$ are then used to obtain the context vector $c_t = \sum_{i=1}^{m} \alpha^c_t h^c_i$. The context vector $c_t$, treated as the salient content of the various sources, is concatenated with the decoder hidden state $d_t$ to produce the distribution over the target vocabulary:
+ $P^w_v = \mathrm{Softmax}(W_o [d_t; c_t])$.   (4)
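+ A minimal NumPy sketch of the decoding step in Equations (3)-(4) follows. It is an illustration added for clarity rather than the authors' TensorFlow implementation; the function name, shapes, and the softmax helper are assumptions.
+ import numpy as np
+
+ def softmax(x):
+     x = x - x.max()
+     e = np.exp(x)
+     return e / e.sum()
+
+ def decode_step(d_t, h_c, W_d, W_h, W_o):
+     """One decoding step following Eqs. (3)-(4); shapes are illustrative.
+
+     d_t : (d,)    decoder hidden state after masked self-attention
+     h_c : (m, d)  encoded utterance states h^c_1 .. h^c_m
+     W_d, W_h : (d, d) projections; W_o : (V, 2d) output projection
+     """
+     # Eq. (3): ReLU-gated cross-attention weights over previous utterances.
+     alpha = np.maximum(0.0, (d_t @ W_d) @ (h_c @ W_h).T)   # (m,)
+     # Context vector c_t = sum_i alpha_i * h^c_i, as in the text.
+     c_t = alpha @ h_c                                      # (d,)
+     # Eq. (4): vocabulary distribution computed from [d_t; c_t].
+     return softmax(W_o @ np.concatenate([d_t, c_t]))       # (V,)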
399
+ Pre-training process. While interview dialogs are hard to come by, online conversations are abundant on the internet and can be easily collected. Hence, we pre-train the dialog generator on ungrounded conversations. Concretely, during the pre-training process,
403
+ we employ the context encoder to first encode the multi-turn pre-
404
+ vious dialog context. Then, at the 𝑡-th decoding step, we use the
405
+ response decoder to predict the 𝑡-th word in the response. We set
406
+ the loss as the negative log likelihood of the target word 𝑦𝑡:
407
+ $Loss_g = -\frac{1}{n}\sum_{t=1}^{n} \log P^w_v(y_t)$.   (5)
413
+ 3.4
414
+ Knowledge Selector
415
+ Resume Encoder. As shown in Figure 2, a resume contains several
416
+ key-value pairs (𝑘𝑖, 𝑣𝑖). Most of key and value fields include a single
417
+ word or a phrase such as “skills” or “gender”, and we can obtain the
418
+ feature representation through an embedding matrix. Concretely,
419
+ for each key or value field with a single word or a phrase, we estab-
420
+ lish a corresponding resume embedding matrix 𝑒𝑖𝑟 that is different
421
+ from the previous one. Then we use the resume embedding matrix
422
+ to map each field word 𝑘𝑖 or 𝑣𝑖 into a high-dimensional vector
423
+ space, denoted as 𝑒𝑖𝑟 (𝑘𝑖) or 𝑒𝑖𝑟 (𝑣𝑖). For fields with more than one
424
+ word such as “work experience” or “I used to...”, we denote them as
425
+ $v_i = (v^1_i, \ldots, v^{l_i}_i)$, where $l_i$ denotes the word number of the current
428
434
+ field. We first process them through the previous word embedding
435
+ matrix 𝑒; then an SAM_R, similar to SAM_u in Section 3.3, is applied
436
+ to model the temporal interactions between words:
437
+ $h^{r_i}_t = \mathrm{SAM}_R(e(v^j_i), h^{r_i}_{t-1})$.   (6)
442
+ We use the last hidden state of SAM_R, i.e., $h^{r_i}_{l_i}$, to denote the overall representation of field $v_i$.
+ For brevity, in the following sections, we use $h^k_i$ and $h^v_i$ to denote the encoded key-value pair $(k_i, v_i)$ in the resume.
449
+ Masked Self-attention. Traditional self-attention can be used
450
+ to update representation of each resume item due to its flexibility in
451
+ relating two elements in a distance-agnostic manner [17]. However,
452
+ as shown in [21], too much knowledge incorporation may divert the
453
+ representation from its correct meaning, which is called knowledge
454
+ noise (KN) issue. In our scenario, the information in the resume
455
+ is divided into several parts, i.e., basic personal information, work
456
+ experiences, and extended work, each of which contains a variable number of items. The items within each part are closely connected,
458
+ while different parts can be considered as different domains, and
459
+ the interaction may introduce a certain amount of noise. To over-
460
+ come this problem, we introduce a visible matrix, in which items
461
+ belonging to the same part are visible to each other, while the visi-
462
+ bility degree between items is determined by the cosine similarity
463
+ of semantic representations, i.e., $C_{i,j} = \mathrm{cos\_sim}(h^v_i, h^v_j)$. Then, the scaled dot-product masked self-attention is defined as:
+ $\alpha_{i,j} = \dfrac{\exp\big((h^k_i W_q)\, C_{i,j}\, (h^k_j W_k)^T\big)}{\sum_{n=1}^{T_r} \exp\big((h^k_i W_q)\, C_{i,n}\, (h^k_n W_k)^T\big)}$,   (7)
+ $\hat{h}^v_i = \sum_{j=1}^{T_r} \dfrac{\alpha_{i,j}\, h^v_j}{\sqrt{d}}$,   (8)
+ where $d$ stands for the hidden dimension and $C$ is the visible matrix. $\hat{h}^v_i$ is then utilized as the updated resume value representation.
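+ The following NumPy sketch illustrates the visible-matrix masked self-attention of Equations (7)-(8). It is an added illustration, not the authors' code; encoding the visible matrix through integer part ids and all shapes are assumptions.
+ import numpy as np
+
+ def masked_self_attention(h_k, h_v, parts, W_q, W_k):
+     """Visible-matrix masked self-attention over resume items (Eqs. 7-8).
+
+     h_k, h_v : (T_r, d) key / value representations of the resume items
+     parts    : (T_r,)   integer id of the resume part each item belongs to
+     W_q, W_k : (d, d)   query / key projections
+     """
+     T_r, d = h_v.shape
+     # Visible matrix C: cosine similarity inside a part, 0 across parts.
+     unit = h_v / (np.linalg.norm(h_v, axis=1, keepdims=True) + 1e-9)
+     C = (unit @ unit.T) * (parts[:, None] == parts[None, :])
+     # Eq. (7): similarity-modulated attention scores, normalized row-wise.
+     scores = ((h_k @ W_q) @ (h_k @ W_k).T) * C
+     alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
+     alpha = alpha / alpha.sum(axis=1, keepdims=True)
+     # A stricter variant could additionally mask cross-part entries before
+     # the softmax; here Eq. (7) is followed literally.
+     # Eq. (8): updated value representations, scaled by sqrt(d).
+     return (alpha @ h_v) / np.sqrt(d)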
494
+ Key-Value Memory Network. The goal of key matching is to
495
+ calculate the relevance between each attribute of the resume and
496
+ the previous dialog context. Given dialog context ℎ𝑖, for the 𝑗-th
497
+ attribute pair (𝑘𝑗, 𝑣𝑗), we calculate the probability of ℎ𝑖 over 𝑘𝑗,
498
+ i.e., 𝑃(𝑘𝑗 |ℎ𝑖), as the matching score 𝛽𝑖,𝑗. To this end, we exploit the
499
+ context representation ℎ𝑖 to calculate the matching score:
500
+ $\beta_{i,j} = \dfrac{\exp\big(h_i W_a h^k_j\big)}{\sum_{n=1}^{T_r} \exp\big(h_i W_a h^k_n\big)}$.   (9)
512
+ Since the context representation $h_i$ and the resume key representation $h^k_j$ are not in the same semantic space, we use a trainable key-matching parameter $W_a$ to transform these representations into the same space.
516
+ As the relevance between context ℎ𝑖 and each pair in the resume
517
+ table (𝑘𝑗, 𝑣𝑗), the matching score 𝛽𝑖,𝑗 can help to capture the most
518
+ relevant pair for generating a correct question. Therefore, as shown
519
+ in Equation 10, the knowledge selector reads the information 𝑀𝑖
520
+ from KVMN via summing over the stored values, and guides the
521
+ follow-up response generation, so we have:
522
+ $M_i = \sum_{j=1}^{T_r} \beta_{i,j}\, \hat{h}^v_j$,   (10)
+ where $\hat{h}^v_j$ is the representation of value $v_j$, and $\beta_{i,j}$ is the matching score between dialog context $h_i$ and key $k_j$.
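+ A small NumPy sketch of the key addressing and value reading in Equations (9)-(10) is shown below; it is an added illustration with assumed names and shapes.
+ import numpy as np
+
+ def kvmn_read(h_i, h_k, h_v_hat, W_a):
+     """Key addressing and value reading of the KVMN (Eqs. 9-10).
+
+     h_i     : (d,)      dialog-context (or job-description) representation
+     h_k     : (T_r, d)  resume key representations
+     h_v_hat : (T_r, d)  updated resume value representations
+     W_a     : (d, d)    trainable key-matching transform
+     """
+     # Eq. (9): bilinear matching score between h_i and every resume key.
+     logits = h_k @ (W_a.T @ h_i)          # entry j equals h_i W_a h^k_j
+     beta = np.exp(logits - logits.max())
+     beta = beta / beta.sum()
+     # Eq. (10): memory slot M_i as the beta-weighted sum of the values.
+     M_i = beta @ h_v_hat
+     return M_i, beta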
530
+ Pre-training Process. In practice, the resume knowledge con-
531
+ tains a variety of professional and advanced scientific concepts such
532
+ as “Web front-end”, “HTML”, and “CSS”. These technical terms are
533
+ difficult to understand for people not familiar with the specific
534
+ domain, not to mention for the model that is not able to access
535
+ a large-scale resume-grounded dialog dataset. Hence, it would be
536
+ difficult for the knowledge selector to understand the resume con-
537
+ tent and previous context about the resume, so as to select the next
538
+ resume pair to focus on.
539
+ On the other hand, we notice that in job-resume matching task,
540
+ it is crucial to capture the decisive information in the resume to
541
+ perform a good matching. For example, recruiters may tend to hire
542
+ the candidate with particular experiences among several candidates
543
+ with similar backgrounds [34]. Intuitively, the key-value pair that is
544
+ important for job-resume matching is also the key factor to consider
545
+ in a job interview. Hence, if we can let the model learn the salient
546
+ information in the resume by performing the job-resume matching
547
+ task on large-scale job-resume data, then it would also bring benefits
548
+ for selecting salient information in interview question generation.
549
+ Concretely, we use the job description to attend to the resume to
550
+ perform a job-resume matching task, as a pre-training process for
551
+ knowledge selector module. As shown in Figure 2, the Job Encoder
552
+ encodes the job description by a SAMjd:
553
+ $h^{jd}_i = \mathrm{SAM}_{jd}(e(j_i), h^{jd}_{i-1})$,   (11)
558
+ where 𝑗𝑖 denotes the 𝑖-th word in the job description, and 𝑒(𝑗𝑖)
559
+ is mapped by the previous embedding matrix 𝑒. We use the final
560
+ hidden state of SAM_jd, i.e., $h^{jd}_{T_j}$, as the overall representation for the description, abbreviated as $h^{jd}$. $h^{jd}$ plays a similar part as
563
+ the context representation ℎ𝑖, which first attends to the keys in the
564
+ resume, and then is used to “weightedly” read the values in the
565
+ resume. We use 𝑚𝑗𝑑 to denote the weighted read result.
566
+ In the training process, we first pre-train the knowledge selector
567
+ by job-resume matching task, which can be formulated as a classi-
568
+ fication problem [26]. The objective is to maximize the scores of
569
+ positive samples while minimizing that of the negative samples.
570
+ Specifically, we concatenateℎ𝑗𝑑 and𝑚𝑗𝑑 since vector concatenation
571
+ for matching is known to be effective [27]. Then the concatenated
572
+ vector is fed to a multi-layer, fully-connected, feed-forward neural
573
+ network, and the job-resume matching score 𝑠𝑗𝑟 is obtained as:
574
+ $s_{jr} = \sigma\big(F_s([h^{jd}; m^{jd}])\big)$,   (12)
580
+ where $[;]$ denotes the concatenation operation, and the output is the probability of a successful match. We use job-resume pairs that led to interviews as positive samples, and job-resume pairs without interviews as negative instances.
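+ The matching objective can be sketched as follows. This is an added illustration: $F_s$ is assumed to be a two-layer feed-forward network, and the binary cross-entropy form of the loss is our assumption.
+ import numpy as np
+
+ def sigmoid(x):
+     return 1.0 / (1.0 + np.exp(-x))
+
+ def job_resume_match(h_jd, m_jd, W1, b1, w2, b2):
+     """Eq. (12): matching score from the concatenated [h_jd; m_jd];
+     F_s is sketched here as a two-layer feed-forward network."""
+     x = np.concatenate([h_jd, m_jd])
+     hidden = np.tanh(W1 @ x + b1)
+     return sigmoid(w2 @ hidden + b2)       # scalar probability of a match
+
+ def matching_loss(score, label):
+     """Binary cross-entropy over positive (interviewed) and negative pairs."""
+     eps = 1e-9
+     return -(label * np.log(score + eps) + (1 - label) * np.log(1.0 - score + eps))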
584
+ After pre-training, the job description is replaced by the context
585
+ representations, while the key matching and value combination
586
+ processes remain the same. We use a knowledge memory 𝑀 to store
587
+ the selection result, where each slot stores the value combination
588
+ result 𝑀𝑖 in Equation 10.
589
592
+ 3.5
593
+ Decoding Manager
594
+ The decoding manager is supposed to generate the proper word
595
+ based on the knowledge memory and the response decoder. Our
596
+ idea is inspired by an observation on the nature of interview dialogs:
597
+ despite the fact that a dialog is based on the resume, words and utter-
598
+ ances in the dialog are not always related to resume. Therefore, we
599
+ postulate that formation of a response can be decomposed into two
600
+ uncorrelated actions: (1) selecting a word according to the context
601
+ to make the dialog coherent (corresponding to the dialog generator);
602
+ (2) selecting a word according to the extra knowledge memory to
603
+ ground the dialog (corresponding to the knowledge selector). The
604
+ two actions can be independently performed, which becomes the
605
+ key reason why the large resume-job matching and ungrounded
606
+ dialog datasets, although seemingly unrelated to interview dialogs,
607
+ can be very useful in an MIG task.
608
+ Note that in Section 3.4, we store the selected knowledge $M_i$ in a knowledge memory $M$. To select a word based on it, similarly to the response decoder, we use $d_t$ to attend to each slot of the knowledge memory, obtaining the knowledge context vector $g^k_t$ and the output decoder state $d^{ko}_t$.
615
+ The response decoder and knowledge selector are controlled by
616
+ the decoding manager with a “fusion gate” to decide how much
617
+ information from each side should be focused on at each step of
618
+ interview question prediction.
619
+ $\gamma_f = \sigma(F_m(d_t))$,   (13)
621
+ where 𝑑𝑡 is the 𝑡-th decoder hidden state. Then, the probability to
622
+ predict word 𝑦𝑡 can be formulated as:
623
+ $d^o_t = \gamma_f\, d^{wo}_t + (1 - \gamma_f)\, d^{ko}_t$,   (14)
+ $P_v = \mathrm{softmax}\big(W_v d^o_t + b_v\big)$.   (15)
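+ The fusion performed by the decoding manager (Equations 13-15) can be sketched as below; this is an added illustration in which $F_m$ is assumed to be a single linear layer and all shapes are assumptions.
+ import numpy as np
+
+ def sigmoid(x):
+     return 1.0 / (1.0 + np.exp(-x))
+
+ def softmax(x):
+     x = x - x.max()
+     e = np.exp(x)
+     return e / e.sum()
+
+ def manager_step(d_t, d_wo_t, d_ko_t, w_m, b_m, W_v, b_v):
+     """Fusion of the two decoders by the decoding manager (Eqs. 13-15).
+
+     d_t    : (d,)  decoder hidden state at step t
+     d_wo_t : (d,)  output state of the ungrounded response decoder
+     d_ko_t : (d,)  output state obtained from the knowledge memory
+     w_m    : (d,)  gate parameters (F_m sketched as a linear layer)
+     W_v    : (V, d) output projection; b_v : (V,)
+     """
+     gamma = sigmoid(w_m @ d_t + b_m)                  # Eq. (13): fusion gate
+     d_o = gamma * d_wo_t + (1.0 - gamma) * d_ko_t     # Eq. (14): fused state
+     return softmax(W_v @ d_o + b_v)                   # Eq. (15): vocab distribution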
633
+ As for the optimization goal, generation models that use one-
634
+ hot distribution optimization target always suffer from the over-
635
+ confidence issue, which leads to poor generation diversity [32].
636
+ Hence, aside from the ground truth one-hot label 𝑃, we also propose
637
+ a soft target label $P^w_v$ (see Equation 4), which is borrowed from the
639
+ pre-trained Dialog Generator in Section 3.3. Forcing the decoding
640
+ manager to simulate the pre-trained decoder can help it learn the
641
+ context of the interview dialog. We combine the one-hot label with
642
+ the soft label by an editing gate 𝜆, as shown in Figure 2. Concretely,
643
+ a smooth target distribution 𝑃 ′ is proposed to replace the hard
644
+ target distribution 𝑃 as:
645
+ $P' = \lambda P + (1 - \lambda) P^w_v$.   (16)
+ where $\lambda \in [0, 1]$ is an adaption factor, $P^w_v$ is obtained from Equation 4, and $P$ is the hard target, i.e., the one-hot distribution which assigns a probability of 1 to the target word $y_t$ and 0 otherwise.
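+ The soft-target construction of Equation (16) can be sketched as follows; the value of the adaption factor and the exact cross-entropy form of the objective are illustrative assumptions, not details given by the paper.
+ import numpy as np
+
+ def smoothed_target(p_onehot, p_pretrained, lam=0.8):
+     """Eq. (16): soft target P' = lam * P + (1 - lam) * P^w_v.
+     lam=0.8 is only an illustrative value for the adaption factor."""
+     return lam * p_onehot + (1.0 - lam) * p_pretrained
+
+ def manager_loss(p_target, p_pred, eps=1e-9):
+     """Cross-entropy of the predicted distribution against the smoothed
+     target; replacing the hard target P with P' follows the paper, while
+     the exact loss form here is our assumption."""
+     return -np.sum(p_target * np.log(p_pred + eps))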
652
+ 4
653
+ EXPERIMENTAL SETUP
654
+ 4.1
655
+ Dataset
656
+ In this paper, we conduct experiments on a real-world dataset pro-
657
+ vided by “Boss Zhipin” 1, the largest online recruiting platform
658
+ in China. To protect the privacy of candidates, user records are
659
+ anonymized with all personal identity information removed. The
660
+ 1https://www.zhipin.com
661
+ Table 1: Statistics of the datasets used in the experiments.
662
+ Statistics                                      | Values
+ Interview Dialog Dataset
+   Total number of resumes                       | 12,666
+   Total number of dialog utterances             | 49,214
+   Avg turns # per dialog context                | 4.47
+   Avg words # per utterance                     | 13.18
+ Job-Resume Dataset
+   Key-value pairs # per resume                  | 22
+   Avg words # per work experience in resume     | 72.80
+   Avg words # per self description in resume    | 51.13
+   Avg words # per job description               | 74.26
+ Ungrounded Dialog Dataset
+   Total number of context-response pairs        | 2,995,000
+   Avg turns # per dialog context                | 4
+   Avg words # per utterance                     | 15.15
689
+ dataset includes 12,666 resumes, 8,032 job descriptions, and 49,214
690
+ interview dialog utterances. The statistics of the dataset are summarized in Table 1. We then tokenize each sentence into words with
692
+ the benchmark Chinese tokenizer toolkit “JieBa” 2.
693
+ To pre-train the knowledge selector module, we use a job-resume
694
+ matching dataset [34], again from “Boss Zhipin”. The training
695
+ set and the validation set include 355,000 and 1,006 job-resume
696
+ pairs, respectively. To pre-train dialog generator, we choose Weibo
697
+ dataset [2], which includes a massive number of multi-turn con-
698
+ versations collected from “Weibo”3. The data includes 2,990,000
699
+ context-response pairs for training and 5,000 pairs for validation.
700
+ The details are also summarized in Table 1.
701
+ 4.2
702
+ Comparisons
703
+ We compare our proposed model against traditional knowledge-
704
+ insensitive dialog generation baselines, and knowledge-aware dia-
705
+ log generation baselines.
706
+ • Knowledge-insensitive dialog generation baselines:
707
+ Transformer [30]: is based solely on attention mechanisms.
708
+ BERT [5]: initializes Transformer with BERT as the encoder. Di-
709
+ aloGPT [36]: proposes a large, tunable neural conversational re-
710
+ sponse generation model trained on more conversation-like ex-
711
+ changes. T5-CLAPS [14]: generates samples for contrastive learn-
712
+ ing by adding small and large perturbations, respectively.
713
+ • Knowledge-aware dialog generation baselines:
714
+ TMN [6]: is built upon a transformer architecture with an ex-
715
+ ternal memory hosting the knowledge. ITDD [20]: incrementally
716
+ encodes multi-turn dialogs and knowledge and decodes responses
717
+ with a deliberation technique. DiffKS [38]: utilizes the differential
718
+ information between selected knowledge in multi-turn conversa-
719
+ tion for knowledge selection. DRD [37]: tackles the low-resource
720
+ challenge with pre-training techniques using ungrounded dialogs
721
+ and documents. DDMN [31]: dynamically keeps track of dialog
722
+ context for multi-turn interactions and incorporates KB knowledge
723
+ 2https://github.com/fxsjy/jieba
724
+ 3https://www.weibo.com
725
728
+ Table 2: Comparing model performance on full dataset: automatic evaluation metrics.
729
+ Model            | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | Extrema | Average | Greedy | Dist-1 | Dist-2 | Entity F1 | Cor
+ Knowledge-insensitive dialog generation
+ Transformer [30] | 0.5339 | 0.3811 | 0.2836 | 0.2530 | 0.4859 | 0.7673 | 0.6803 | 0.0928 | 0.3157 | 0.3606 | 0.2711
+ BERT [5]         | 0.5671 | 0.3864 | 0.2735 | 0.2583 | 0.4861 | 0.7669 | 0.6792 | 0.0947 | 0.3558 | 0.3711 | 0.2894
+ DialoGPT [36]    | 0.5722 | 0.4015 | 0.3004 | 0.2697 | 0.4858 | 0.7670 | 0.6814 | 0.1001 | 0.3620 | 0.3843 | 0.3002
+ T5-CLAPS [14]    | 0.5846 | 0.4126 | 0.3020 | 0.2783 | 0.4837 | 0.7851 | 0.6674 | 0.0970 | 0.3702 | 0.3549 | 0.2870
+ Knowledge-aware dialog generation
+ TMN [6]          | 0.5437 | 0.3891 | 0.2963 | 0.2630 | 0.4841 | 0.7655 | 0.6811 | 0.0996 | 0.3299 | 0.3830 | 0.2652
+ ITDD [20]        | 0.5484 | 0.4009 | 0.2929 | 0.2656 | 0.4833 | 0.7650 | 0.6859 | 0.1055 | 0.3703 | 0.3661 | 0.2715
+ DiffKS [38]      | 0.5617 | 0.3898 | 0.2776 | 0.2441 | 0.4826 | 0.7830 | 0.6752 | 0.0937 | 0.3612 | 0.3672 | 0.2750
+ DRD [37]         | 0.5711 | 0.4001 | 0.2914 | 0.2548 | 0.4824 | 0.7813 | 0.6783 | 0.0867 | 0.3661 | 0.3825 | 0.2883
+ DDMN [31]        | 0.5693 | 0.4065 | 0.2968 | 0.2694 | 0.4831 | 0.7655 | 0.6811 | 0.0944 | 0.3640 | 0.3754 | 0.2869
+ Persona [9]      | 0.5532 | 0.3829 | 0.2715 | 0.2377 | 0.4823 | 0.7822 | 0.6783 | 0.0911 | 0.3598 | 0.3833 | 0.2928
+ EZInterviewer    | 0.6106 | 0.4320 | 0.3284 | 0.2917 | 0.4893 | 0.7884 | 0.6886 | 0.1071 | 0.3747 | 0.3927 | 0.3145
+ No Pre-train     | 0.5738 | 0.4029 | 0.2929 | 0.2599 | 0.4846 | 0.7833 | 0.6831 | 0.0981 | 0.3673 | 0.3819 | 0.3007
+ w/o KM           | 0.5795 | 0.4127 | 0.3069 | 0.2754 | 0.4847 | 0.7841 | 0.6762 | 0.0979 | 0.3685 | 0.3803 | 0.3010
+ w/o KS           | 0.5775 | 0.4122 | 0.3067 | 0.2746 | 0.4781 | 0.7668 | 0.6787 | 0.1003 | 0.3691 | 0.3848 | 0.2994
+ w/o LS           | 0.6007 | 0.4232 | 0.3176 | 0.2821 | 0.4869 | 0.7863 | 0.6832 | 0.0969 | 0.3664 | 0.3902 | 0.3127
922
+ into generation. Persona [9]: introduces personal memory into
923
+ knowledge selection to address the personalization issue.
924
+ 4.3
925
+ Implementation Details
926
+ We implement our experiments in TensorFlow [1] on an NVIDIA
927
+ GTX 1080 Ti GPU. For our model and all baselines, we follow the
928
+ same setting as described below. We truncate input dialog to 100
929
+ words with 20 words in each utterance, as we did not find significant
930
+ improvement when increasing input length from 100 to 200 tokens.
931
+ The minimum decoding step is 10, and the maximum step is 20.
932
+ The word embedding dimension is set to 128 and the number of
933
+ hidden units is 256. Experiments are performed with a batch size
934
+ of 256, and the vocabulary comprises the most frequent 50k words. We use the Adam optimizer [12] as our optimization algorithm.
936
+ We selected the 5 best checkpoints based on performance on the
937
+ validation set and report averaged results on the test set. Note that
938
+ for better performance, our model is built based on BERT, and the
939
+ decoding process is the same as Transformer [30]. Finally, due to
940
+ the limitation of time and memory, small settings are used in the
941
+ pre-trained baselines.
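+ For convenience, the reported hyperparameters can be collected in one place as below; the dictionary structure and key names are ours, not the authors' configuration format.
+ # Hyperparameters reported in Section 4.3, gathered for reproduction attempts;
+ # structure and naming are illustrative assumptions.
+ config = {
+     "max_context_words": 100,     # input dialog truncated to 100 words
+     "max_utterance_words": 20,    # 20 words per utterance
+     "min_decode_steps": 10,
+     "max_decode_steps": 20,
+     "embedding_dim": 128,
+     "hidden_units": 256,
+     "batch_size": 256,
+     "vocab_size": 50_000,         # most frequent 50k words
+     "optimizer": "adam",
+     "num_checkpoints_averaged": 5,
+ }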
942
+ 4.4
943
+ Evaluation Metrics
944
+ To evaluate the performance of EZInterviewer against baselines,
945
+ we adopt the following metrics widely used in existing studies.
946
+ Overlap-based Metric. Following [18], we utilize BLEU score
947
+ [25] to measure n-grams overlaps between ground-truth and gener-
948
+ ated response. In addition, we apply Correlation (Cor) to calculate
949
+ the words overlap between generated question and job description,
950
+ which measures how well the generated questions line up with the
951
+ recruitment intention.
952
+ Embedding Metrics. We compute the similarity between the
953
+ bag-of-words (BOW) embeddings of generated results and reference
954
+ to capture their semantic matching degrees [11]. In particular we
955
+ adopt three metrics: 1) Greedy, i.e., greedily matching words in two
956
+ Table 3: Human evaluation results on: Readability (Read),
957
+ Informativeness (Info), Meaningfulness (Mean), Usefulness
958
+ (Use), Relevance (Rel), and Coherence (Coh).
959
+ Model         | Dialog-level: Read, Info | Interview-level: Mean, Use, Rel, Coh
+ DiffKS        | 1.79, 2.01               | 1.87, 2.03, 1.99, 2.10
+ DDMN          | 1.97, 1.83               | 1.63, 2.12, 2.14, 1.91
+ DRD           | 2.05, 2.11               | 2.09, 2.08, 2.17, 2.02
+ EZInterviewer | 2.42▲, 2.51▲             | 2.39▲, 2.46▲, 2.57▲, 2.38▲
996
+ utterances based on cosine similarities; 2) Average, cosine similarity
997
+ between the averaged word embeddings in two utterances [23];
998
+ 3) Extrema, cosine similarity between the largest extreme values
999
+ among the word embeddings in the two utterances [7].
1000
+ Distinctness. The distinctness score [15] measures word-level
1001
+ diversity by calculating the ratio of distinct uni-gram and bi-grams
1002
+ in generated responses.
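+ The Distinct-n score can be computed as in the following sketch (an added illustration assuming whitespace tokenization).
+ def distinct_n(responses, n):
+     """Distinct-n: ratio of unique n-grams to all n-grams over the generated
+     responses [15]; whitespace tokenization is assumed here."""
+     ngrams, total = set(), 0
+     for response in responses:
+         tokens = response.split()
+         for i in range(len(tokens) - n + 1):
+             ngrams.add(tuple(tokens[i:i + n]))
+             total += 1
+     return len(ngrams) / total if total else 0.0
+
+ # Dist-1 and Dist-2 reported in Table 2 correspond to n=1 and n=2.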
1003
+ Entity F1. Entity F1 is computed by micro-averaging precision
1004
+ and recall over knowledge-based entities in the entire set of sys-
1005
+ tem responses, and evaluates the ability of a model to generate
1006
+ relevant entities to achieve specific tasks from the provided knowl-
1007
+ edge base [31]. The entities we use are extracted from an entity
1008
+ vocabulary provided by “Boss Zhipin”.
1009
+ Human Evaluation Metrics. We further employ human eval-
1010
+ uations aside from automatic evaluations. Three well-educated
1011
+ annotators from different majors are hired to evaluate the quality
1012
+ of generated responses, where the evaluation is conducted in a
1013
+ double-blind fashion. In total 100 randomly sampled responses gen-
1014
+ erated by each model are rated by each annotator on both dialog
1015
+ level and interview level. We adopt the Readability (is the response
1016
+ grammatically correct?) and Informativeness (does the response
1017
+ include informative words?) to judge the quality of the generated
1018
1021
+ responses on the dialog level. On the interview level, we adopt
1022
+ Meaningfulness (is the generated question meaningful?), Usefulness
1023
+ (is the question worth the job candidate preparing in advance?),
1024
+ Relevance (is the question relevant to the resume?) and Coherence (is
1025
+ the generated text coherent with the context?) to assess the overall
1026
+ performance of a model and the quality of user experience. Each
1027
+ metric is given a score between 1 and 3 (1 = bad, 2 = average, 3 =
1028
+ good).
1029
+ 5
1030
+ EXPERIMENTAL RESULT
1031
+ 5.1
1032
+ Overall Performance
1033
+ Automatic evaluation. The comparison between EZInterviewer
1034
+ and state-of-the-art generative baselines is listed in Table 2.
1035
+ We take note that the knowledge-aware dialog generation mod-
1036
+ els outperform traditional dialog models, suggesting that utilizing
1037
+ external knowledge introduces advantages in generating relevant
1038
+ response. We also notice the pre-train based model DRD outper-
1039
+ forms other baselines, showing that initializing parameters by pre-
1040
+ training on large-scale data can lead to a substantial improvement
1041
+ in performance. It is worth noting some models achieve better En-
1042
+ tity F1 but a lower BLEU score; this suggests that those models tend
1043
+ to copy necessary entity words from the knowledge but are not
1044
+ able to use them properly.
1045
+ EZInterviewer outperforms baselines on all automatic metrics.
1046
+ Firstly, our model improves BLEU-1 by 6.92% over DRD. On the Dis-
1047
+ tinctness metric Dist-1, our model outperforms DialoGPT by 6.99%,
1048
+ suggesting that the generated interview questions are diversified
1049
+ and personalized with different candidates’ resumes. Moreover our
1050
+ model attains a good score of 0.3927 on entity F1, which evaluates
1051
+ the degree to which the generated question is grounded on the
1052
+ knowledge base. Finally, the Cor score of 0.3145 suggests that the questions generated by EZInterviewer are in line with the job description and hence reflect the intention of the recruiters. Overall, the metrics
1055
+ demonstrate that our model successfully learns an interviewer’s
1056
+ points of interest in a resume, and incorporates this knowledge into
1057
+ interview questions properly.
1058
+ Human evaluation. The results of human evaluations on all
1059
+ models are listed in Table 3. EZInterviewer is the top performer on
1060
+ all the metrics. Specifically, our model outperforms DiffKS by 35.20%
1061
+ on Readability, suggesting that EZInterviewer manages to reduce
1062
+ the grammatical errors and improve the readability of the generated
1063
+ response. As for the Informativeness metric, our model scores 0.68
1064
+ higher than DDMN. This indicates that EZInterviewer captures
1065
+ salient information in the resume. On the interview level, EZInter-
1066
+ viewer’s Usefulness score is 18.27% better than DRD, demonstrating
1067
+ its capabilities to help job seekers to pick the right questions to
1068
+ prepare. On Relevance metric, our model outperforms all baselines
1069
+ by a considerable margin, suggesting that the generated questions
1070
+ are closely related to the interview process. Our model also per-
1071
+ forms better than other baselines in Meaningfulness and Coherence
1072
+ metrics, suggesting the overall higher quality of our model.
1073
+ The above results demonstrate the competence of EZInterviewer
1074
+ in producing meaningful and useful interview questions whilst
1075
+ keeping the interview dialog flowing smoothly, just like a human
1076
+ recruiter. Note that the average kappa statistics of human evaluation
1077
+ are 0.51 and 0.48 on dialog level and interview level, respectively,
1078
+ Figure 3: Visualization of key matching between dialog con-
1079
+ text and selected resume keys, i.e., work experiment (Exp),
1080
+ self description (Desc), skills (Ski), work years (Year), ex-
1081
+ pected position (Pos), school (Sch), and major (Maj). 𝑈𝑖 de-
1082
+ notes the 𝑖-th utterance.
1083
+ which indicates moderate agreement between annotators. To prove
1084
+ the significance of these results, we also conduct the two-tailed
1085
+ paired student t-test between our model and DRD (row with shaded
1086
+ background). The statistical significance of observed differences is
1087
+ denoted using ▲(or ▼) for strong (or weak) significance for 𝛼 = 0.01.
1088
+ Moreover, we obtain an average p-value of 5 × 10−6 and 3 × 10−4
1089
+ for both levels, respectively.
1090
+ 5.2
1091
+ Ablation Study
1092
+ We conduct an ablation study to assess the contribution of individ-
1093
+ ual components in the model. The results are shown in Table 2.
1094
+ To verify the effectiveness of knowledge memory, we omit the
1095
+ knowledge selection of dialog context history and directly use the
1096
+ last utterance representation to select knowledge. The results (see
1097
+ row w/o KM) confirm that employing each turn of historical dialog
1098
+ to select knowledge and saving it in memory contribute to gener-
1099
+ ating better responses. To confirm whether selecting knowledge
1100
+ helps with the response generation process, we remove it from the
1101
+ model, then simply add the representation of each utterance with
1102
+ all resume values, and store it into the memory. This results in a
1103
+ drop of 5.42% in BLEU-1 (see row w/o KS), suggesting that selecting
1104
+ resume knowledge is beneficial in response generation.
1105
+ 5.3
1106
+ Analysis of Knowledge Selector
1107
+ In Section § 3.4, we introduce the selecting mechanism of knowl-
1108
+ edge selector, where the final attention (matching) score is obtained
1109
+ in Equation 9. To study what specific information is attended by
1110
+ the knowledge selector, and whether the selected information is
1111
+ suitable for the next interview question, we conduct a case study to
1112
+ visualize the matching score produced by the knowledge selector,
1113
+ as shown in Table 4 and Figure 3. The first utterance in the history
1114
+ is “Have you been engaged in front-end development work before?”,
1115
+ and the knowledge selector learns that this utterance focuses on the
1116
+ work experience in the resume. Accordingly, the fourth utterance “I
1117
+ have more than 10 years of work experience.” pays more attention
1118
+ to work years and work experience than other items in the resume.
1119
+ This demonstrates that the knowledge selector learns which item
1120
+ in the resume to focus on when generating each utterance. Hence,
1121
+ when we want to ask the candidate to “introduce a React related
1122
+ project”, the knowledge selector focuses on the work experience in
1123
+ the resume and generates the mock interview question.
1124
+
1125
1137
+ Table 4: Translated interview questions generated by baselines and EZInterviewer: an example. Pink denotes information extracted and words generated by the knowledge selector, whereas blue denotes words generated by the dialog generator.
1141
+ Resume:
+   Gender: Male | Age: 28 | Education: Undergraduate | Major: Computer Science
+   Work Years: 10 | Expected Position: Front-end Engineer | Low Salary: 5 | High Salary: 6
+   Skills: Vue, Node.js, Java
+   Experience: I was engaged in front-end design and was responsible for the project development based on the React front-end framework and participated in the system architecture process.
+ Interview:
+   Job Description: The main content of this work includes design and development based on the React front-end framework. It requires the ability to efficiently complete front-end development work and serve customers well.
+   Context:
+   U1: Have you been engaged in front-end development work before?
+   U2: Yes, I am good at Vue, Node.js and some other skills.
+   U3: Okay, so do you have any React related experience?
+   U4: Yes, I have more than 10 years of work experience.
+   Ground Truth: So can you introduce a React related project you have done?
+   DDMN: What other front-end frameworks would you use?
+   DRD: Hello, can you tell us about your previous work?
+   EZInterviewer: Well, can you introduce the experience based on React framework?
1179
+ Figure 4: Automatic evaluation metrics of DDMN, DRD and EZInterviewer on training data of different scales.
1180
+ 5.4
1181
+ Impact of Training Data Scales
1182
+ To understand how our model and baseline models perform in a low-
1183
+ resource scenario, we first evaluate them on the full training dataset,
1184
+ then on smaller portions of the training dataset. Figure 4 presents
1185
+ the performance of the models, DDMN, DRD, and EZInterviewer,
1186
+ on the full, 1/2, 1/4, 1/8 and 1/10 of the training dataset (data scale),
1187
+ respectively. It is observed that as the size of training dataset re-
1188
+ duces, DDMN suffers a massive drop across all metrics, whereas the
1189
+ scores of pre-training based models, i.e., DRD and EZInterviewer,
1190
+ stay relatively stable. This demonstrates pre-training as an effective
1191
+ strategy to tackle the low-resource challenge. Moreover, our model
1192
+ outperforms DRD on all data scales, demonstrating the superiority
1193
+ of our model. Figure 4 shows EZInterviewer eventually achieves the
1194
+ best performance on all metrics and outperforms (albeit slightly),
1195
+ with only 1/10 training data against all state-of-the-art baselines
1196
+ trained with the full training dataset.
1197
+ 5.5
1198
+ Case Study
1199
+ Table 4 presents a translated example of EZInterviewer and baseline
1200
+ models. We observe that the question from EZInterviewer not only
1201
+ catches the context, but also expands the conversation with proper
1202
+ knowledge. This is highlighted in color codes: pink-colored words,
1203
+ i.e., “experience” and “React framework”, are what knowledge selec-
1204
+ tor extracts from resume knowledge, whereas blue-colored words,
1205
+ i.e., “Well, can you introduce the...” and “based on”, which closely
1206
+ connect to the context, are generated by dialog generator. In con-
1207
+ trast, the questions from the baselines respond to the dialog but fail
1208
+ to make connection with the resume knowledge.
1209
+ 6
1210
+ CONCLUSION
1211
+ In this paper, we conduct a pilot study for the novel application of
1212
+ intelligent online recruitment, namely EZInterviewer, which aims
1213
+ to serve as mock interviewers for job-seekers. The mock interview
1214
+ is generated with thorough understanding of the candidate’s re-
1215
+ sume, the job requirements, the previous utterances in the context,
1216
+ as well as the selected knowledge for grounded interviews. To ad-
1217
+ dress the low-resource challenge, EZInterviewer is trained on a very
1218
+ small set of interview dialogs. The key idea is to reduce the number
1219
+ of parameters that rely on interview dialogs by disentangling the
1220
+ knowledge selector and dialog generator so that most parameters
1221
+ can be trained with ungrounded dialogs as well as the resume data
1222
+ that are not low-resource. We conduct extensive experiments to
1223
+ demonstrate the effectiveness of the proposed solution EZInter-
1224
+ viewer. Our model achieves the best results using full training data
1225
+ as well as small subsets of the training data in terms of various
1226
+ metrics such as BLEU, embedding based similarity and diversity,
1227
+ as well as human judgments. In particular, the human evaluation
1228
+ indicates that our solution EZInterviewer can provide satisfactory
1229
+ mock interviews to help the job-seekers prepare the real interview,
1230
+ making the interview preparation process easier.
1231
+ ACKNOWLEDGMENTS
1232
+ We would like to thank the anonymous reviewers for their con-
1233
+ structive comments. This work was supported by National Natural
1234
+ Science Foundation of China (NSFC Grant No. 62122089). Rui Yan
1235
+ is supported by Beijing Academy of Artificial Intelligence (BAAI).
1236
+
1237
1319
+ REFERENCES
1320
+ [1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey
1321
+ Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manju-
1322
+ nath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek Gordon Murray,
1323
+ Benoit Steiner, Paul A. Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke,
1324
+ Yuan Yu, and Xiaoqiang Zhang. 2016. TensorFlow: A System for Large-Scale
1325
+ Machine Learning. In OSDI.
1326
+ [2] Zhangming Chan, Juntao Li, Xiaopeng Yang, Xiuying Chen, Wenpeng Hu,
1327
+ Dongyan Zhao, and Rui Yan. 2019. Modeling personalization in continuous
1328
+ space for response generation via augmented wasserstein autoencoders. In Pro-
1329
+ ceedings of the 2019 Conference on Empirical Methods in Natural Language Pro-
1330
+ cessing and the 9th International Joint Conference on Natural Language Processing
1331
+ (EMNLP-IJCNLP). 1931–1940.
1332
+ [3] Xiuying Chen, Hind Alamro, Mingzhe Li, Shen Gao, Rui Yan, Xin Gao, and
1333
+ Xiangliang Zhang. 2022. Target-aware Abstractive Related Work Generation
1334
+ with Contrastive Learning. arXiv preprint arXiv:2205.13339 (2022).
1335
+ [4] Xiuying Chen, Zhi Cui, Jiayi Zhang, Chen Wei, Jianwei Cui, Bin Wang, Dongyan
1336
+ Zhao, and Rui Yan. 2020. Reasoning in Dialog: Improving Response Generation
1337
+ by Context Reading Comprehension. arXiv preprint arXiv:2012.07410 (2020).
1338
+ [5] J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT:
1339
+ Pre-training of Deep Bidirectional Transformers for Language Understanding. In
1340
+ NAACL-HLT.
1341
+ [6] Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason
1342
+ Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents.
1343
+ ICLR (2019).
1344
+ [7] Gabriel Forgues, Joelle Pineau, Jean-Marie Larchevêque, and Réal Tremblay. 2014.
1345
+ Bootstrapping dialog systems with word embeddings. In Nips, workshop, Vol. 2.
1346
+ [8] Chenpeng Fu, Zhixu Li, Qiang Yang, Zhigang Chen, Junhua Fang, Pengpeng Zhao,
1347
+ and Jiajie Xu. 2019. Multiple Interaction Attention Model for Open-World Knowl-
1348
+ edge Graph Completion. In Web Information Systems Engineering–WISE 2019:
1349
+ 20th International Conference, Hong Kong, China, January 19–22, 2020, Proceedings.
1350
+ 630–644.
1351
+ [9] Tingchen Fu, Xueliang Zhao, Chongyang Tao, Ji-Rong Wen, and Rui Yan.
1352
+ 2022. There Are a Thousand Hamlets in a Thousand People’s Eyes: Enhancing
1353
+ Knowledge-grounded Dialogue with Personal Memory. ACL (2022).
1354
+ [10] Shen Gao, Xiuying Chen, Chang Liu, Li Liu, Dongyan Zhao, and Rui Yan. 2020.
1355
+ Learning to Respond with Stickers: A Framework of Unifying Multi-Modality in
1356
+ Multi-Turn Dialog. In Proceedings of The Web Conference 2020. 1138–1148.
1357
+ [11] Xiaodong Gu, Kyunghyun Cho, Jung-Woo Ha, and Sunghun Kim. 2019. Dialog-
1358
+ WAE: Multimodal Response Generation with Conditional Wasserstein Auto-
1359
+ Encoder. In International Conference on Learning Representations.
1360
+ https://
1361
+ openreview.net/forum?id=BkgBvsC9FQ
1362
+ [12] Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic opti-
1363
+ mization. ICLR (2015).
1364
+ [13] Ran Le, Wenpeng Hu, Yang Song, Tao Zhang, Dongyan Zhao, and Rui Yan.
1365
+ 2019. Towards effective and interpretable person-job fitting. In Proceedings of the
1366
+ 28th ACM International Conference on Information and Knowledge Management.
1367
+ 1883–1892.
1368
+ [14] Seanie Lee, Dong Bok Lee, and Sung Ju Hwang. 2021. Contrastive Learning with
1369
+ Adversarial Perturbations for Conditional Text Generation. In 9th International
1370
+ Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May
1371
+ 3-7, 2021. OpenReview.net.
1372
+ [15] J. Li, Michel Galley, Chris Brockett, Jianfeng Gao, and W. Dolan. 2016. A Diversity-
1373
+ Promoting Objective Function for Neural Conversation Models. NAACL (2016).
1374
+ [16] Juntao Li, Chang Liu, Jian Wang, Lidong Bing, Hongsong Li, Xiaozhong
1375
+ Liu, Dongyan Zhao, and Rui Yan. 2020. Cross-Lingual Low-Resource Set-to-
1376
+ Description Retrieval for Global E-Commerce. AAAI (2020).
1377
+ [17] Mingzhe Li, Xiuying Chen, Shen Gao, Zhangming Chan, Dongyan Zhao, and Rui
1378
+ Yan. 2020. VMSMO: Learning to Generate Multimodal Summary for Video-based
1379
+ News Articles. In Proceedings of the 2020 Conference on Empirical Methods in
1380
+ Natural Language Processing (EMNLP). 9360–9369.
1381
+ [18] Mingzhe Li, Xiuying Chen, Min Yang, Shen Gao, Dongyan Zhao, and Rui Yan. 2021.
1382
+ The Style-Content Duality of Attractiveness: Learning to Write Eye-Catching
1383
+ Headlines via Disentanglement. AAAI (2021).
1384
+ [19] Mingzhe Li, Xiexiong Lin, Xiuying Chen, Jinxiong Chang, Qishen Zhang, Feng
1385
+ Wang, Taifeng Wang, Zhongyi Liu, Wei Chu, Dongyan Zhao, et al. 2022. Keywords
1386
+ and Instances: A Hierarchical Contrastive Learning Framework Unifying Hybrid
1387
+ Granularities for Text Generation. In Proceedings of the 60th Annual Meeting of
1388
+ the Association for Computational Linguistics (Volume 1: Long Papers). 4432–4441.
1389
+ [20] Zekang Li, Cheng Niu, Fandong Meng, Yang Feng, Qian Li, and Jie Zhou. 2019.
1390
+ Incremental Transformer with Deliberation Decoder for Document Grounded
1391
+ Conversations. In Proceedings of the 57th Annual Meeting of the Association for
1392
+ Computational Linguistics. 12–21.
1393
+ [21] Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping
1394
+ Wang. 2020. K-bert: Enabling language representation with knowledge graph. In
1395
+ Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 2901–2908.
1396
+ [22] Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, and Ting
1397
+ Liu. 2020. Towards Conversational Recommendation over Multi-Type Dialogs.
1398
+ arXiv preprint arXiv:2005.03954 (2020).
1399
+ [23] Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composi-
1400
+ tion. NAACL-HLT (2008), 236–244.
1401
+ [24] Lei Niu, Chenpeng Fu, Qiang Yang, Zhixu Li, Zhigang Chen, Qingsheng Liu,
1402
+ and Kai Zheng. 2021. Open-world knowledge graph completion with multiple
1403
+ interaction attention. World Wide Web 24, 1 (2021), 419–439.
1404
+ [25] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a
1405
+ method for automatic evaluation of machine translation. In ACL. ACL, 311–318.
1406
+ [26] Chuan Qin, Hengshu Zhu, Tong Xu, Chen Zhu, Liang Jiang, Enhong Chen, and
1407
+ Hui Xiong. 2018. Enhancing person-job fit for talent recruitment: An ability-
1408
+ aware neural network approach. In The 41st International ACM SIGIR Conference
1409
+ on Research & Development in Information Retrieval. 25–34.
1410
+ [27] Aliaksei Severyn and Alessandro Moschitti. 2015. Learning to rank short text pairs
1411
+ with convolutional deep neural networks. In Proceedings of the 38th international
1412
+ ACM SIGIR conference on research and development in information retrieval. 373–
1413
+ 382.
1414
+ [28] Yunwon Tae, Cheonbok Park, Taehee Kim, Soyoung Yang, Mohammad Azam
1415
+ Khan, Eunjeong Park, Tao Qin, and Jaegul Choo. 2020.
1416
+ Meta-Learning
1417
+ for Low-Resource Unsupervised Neural MachineTranslation. arXiv preprint
1418
+ arXiv:2010.09046 (2020).
1419
+ [29] Zhiliang Tian, Wei Bi, Dongkyu Lee, Lanqing Xue, Yiping Song, Xiaojiang Liu, and
1420
+ Nevin L Zhang. 2020. Response-Anticipated Memory for On-Demand Knowledge
1421
+ Integration in Response Generation. arXiv preprint arXiv:2005.06128 (2020).
1422
+ [30] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,
1423
+ Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all
1424
+ you need. In Advances in neural information processing systems. 5998–6008.
1425
+ [31] Jian Wang, Junhao Liu, Wei Bi, Xiaojiang Liu, Kejing He, Ruifeng Xu, and Min
1426
+ Yang. 2020. Dual Dynamic Memory Network for End-to-End Multi-turn Task-
1427
+ oriented Dialog Systems. In Proceedings of the 28th International Conference on
1428
+ Computational Linguistics. 4100–4110.
1429
+ [32] Yida Wang, Yinhe Zheng, Yong Jiang, and Minlie Huang. 2021. Diversifying
1430
+ Dialog Generation via Adaptive Label Smoothing. arXiv preprint arXiv:2105.14556
1431
+ (2021).
1432
+ [33] Hongcai Xu, Junpeng Bao, and Junqing Wang. 2020. Knowledge-graph based
1433
+ Proactive Dialogue Generation with Improved Meta-Learning. arXiv preprint
1434
+ arXiv:2004.08798 (2020).
1435
+ [34] Rui Yan, Ran Le, Yang Song, Tao Zhang, Xiangliang Zhang, and Dongyan Zhao.
1436
+ 2019. Interview choice reveals your preference on the market: To improve job-
1437
+ resume matching through profiling memories. In Proceedings of the 25th ACM
1438
+ SIGKDD International Conference on Knowledge Discovery & Data Mining.
1439
+ [35] Xiangliang Zhang, Qiang Yang, Somayah Albaradei, Xiaoting Lyu, Hind Alamro,
1440
+ Adil Salhi, Changsheng Ma, Manal Alshehri, Inji Ibrahim Jaber, Faroug Tifratene,
1441
+ et al. 2021. Rise and fall of the global conversation and shifting sentiments during
1442
+ the COVID-19 pandemic. Humanities and social sciences communications 8, 1
1443
+ (2021), 1–10.
1444
+ [36] Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang
1445
+ Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. Dialogpt: Large-scale
1446
+ generative pre-training for conversational response generation. arXiv preprint
1447
+ arXiv:1911.00536 (2019).
1448
+ [37] Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, and Rui Yan.
1449
+ 2020. Low-resource knowledge-grounded dialogue generation. ICLR (2020).
1450
+ [38] Chujie Zheng, Yunbo Cao, Daxin Jiang, and Minlie Huang. 2020. Difference-
1451
+ aware Knowledge Selection for Knowledge-grounded Conversation Generation.
1452
+ In Findings of the Association for Computational Linguistics: EMNLP 2020. 115–125.
1453
+
0dAzT4oBgHgl3EQfDPpe/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
0dE4T4oBgHgl3EQfZgyj/content/tmp_files/2301.05057v1.pdf.txt ADDED
@@ -0,0 +1,1463 @@
1
+ arXiv:2301.05057v1 [q-bio.QM] 19 Dec 2022
2
+ AN OVERVIEW OF OPEN SOURCE DEEP LEARNING-BASED
3
+ LIBRARIES FOR NEUROSCIENCE
4
+ Louis Fabrice Tshimanga
5
+ Department of Neuroscience (DNS)
6
+ University of Padova
7
8
+ Manfredo Atzori
9
+ Department of Neuroscience (DNS),
10
+ Padova Neuroscience Center (PNC)
11
+ University of Padova
12
+ Information Systems Institute
13
+ University of Applied Sciences Western Switzerland (HES-SO Valais)
14
15
+ Federico Del Pup
16
+ Department of Neuroscience (DNS),
17
+ Department of Information Engineering (DEI)
18
+ University of Padova
19
20
+ Maurizio Corbetta
21
+ Department of Neuroscience (DNS),
22
+ Padova Neuroscience Center (PNC)
23
+ University of Padova
24
+ Department of Neurology
25
+ Washington University School of Medicine
26
27
+ ABSTRACT
28
+ In recent years, deep learning revolutionized machine learning and its applications, producing re-
29
+ sults comparable to human experts in several domains, including neuroscience. Each year, hundreds
30
+ of scientific publications present applications of deep neural networks for biomedical data analysis.
31
+ Due to the fast growth of the domain, it could be a complicated and extremely time-consuming task
32
+ for worldwide researchers to have a clear perspective of the most recent and advanced software
33
+ libraries. This work contributes to clarifying the current situation in the domain, outlining the most
34
+ useful libraries that implement and facilitate deep learning application to neuroscience, allowing
35
+ scientists to identify the most suitable options for their research or clinical projects. This paper
36
+ summarizes the main developments in Deep Learning and their relevance to Neuroscience; it then
37
+ reviews neuroinformatic toolboxes and libraries, collected from the literature and from specific hubs
38
+ of software projects oriented to neuroscience research. The selected tools are presented in tables
39
+ detailing key features grouped by domain of application (e.g. data type, neuroscience area, task),
40
+ model engineering (e.g. programming language, model customization) and technological aspect
41
+ (e.g. interface, code source). The results show that, among a high number of available software
42
+ tools, several libraries are standing out in terms of functionalities for neuroscience applications. The
43
+ aggregation and discussion of this information can help the neuroscience community to develop
44
+ their research projects more efficiently and quickly, both by means of readily available tools, and by
45
+ knowing which modules may be improved, connected or added.
46
+ Keywords Deep Learning · Neuroscience · Neuroinformatics · Open source
47
+ 1
48
+ Introduction
49
+ In the last decade, Deep Learning (DL) has taken over most classic approaches in Machine Learning (ML), Computer
50
+ Vision, Natural Language Processing, showing an unprecedented versatility, and matching or surpassing the perfor-
51
+ mances of human experts in narrow tasks.
52
+ The recent growth of DL applications to several domains, including Neuroscience, consequently offers numerous open-
53
+
54
+ source software opportunities for researchers.
55
+ Mapping available resources can allow a faster and more precise exploitation.
56
+ Neuroscience is a diversified field on its own, as much for the objects and scales it focuses on, as for the types of data
57
+ it relies on.
58
+ The discipline is also historically tied to developments in electrical, electronic, and information technology. Modern
59
+ Neuroscience relies on computerization in many aspects of data generation, acquisition, and analysis. Statistical and
60
+ Machine Learning techniques already empower many software packages, that have become de facto standards in sev-
61
+ eral subfields of Neuroscience, such as Principal and Independent Component Analysis in Electroencephalography
62
+ and Neuroimaging, to name a few.
63
+ Meanwhile, the rich and rapidly evolving taxonomy of Deep Neural Networks (DNNs) is becoming both an opportu-
64
+ nity and hindrance. On the one hand, currently open-source DL libraries allow an increasing number of applications
65
+ and studies in Neuroscience. On the other hand, the adoption of available methods is slowed down by a lack of stan-
66
+ dards, reference frameworks and established workflows. Scientific communities whose primary focus or background
67
+ is not in machine learning engineering may be left partially aside from the ongoing Artificial Intelligence (AI) gold
68
+ rush.
69
+ For such reasons it is fundamental to overview open-source libraries and toolkits. Framing a panorama could help
70
+ researchers in selecting ready-made tools and solutions when convenient, as well as in pointing out and filling in the
71
+ blanks with new applications. This work would contribute to advancing the community’s possibilities, reducing the
72
+ workload for researchers to exploit DL, allowing Neuroscience to benefit of its most recent advancements.
73
+ 2
74
+ Background
75
+ 2.1
76
+ Deep Learning
77
+ Deep Learning (DL) has contributed many best solutions to problems in its parent field, Machine Learning, thanks to
78
+ theoretical and technological achievements that unlocked its intrinsic versatility.
79
+ Machine Learning is the study of computer algorithms that tackle problems without complete access to predefined
80
+ rules or analytical, closed-form solutions.
81
+ The algorithms often require a training phase to adjust parameters and satisfy internal or external constraints (e.g. of
82
+ exactness, approximation or generality) on dedicated data for which solutions might be already known.
83
+ Machine Learning comprises a wide array of statistical and mathematical methods, including Artificial Neural Net-
84
+ works (ANNs), biologically inspired systems that connect inputs and outputs through simple computing units (neu-
85
+ rons), which act as function approximators.
86
+ Each unit implements a nonlinear function of the weighted sum of its inputs, thus the output of the whole ANN is a
87
+ composite function, as formally intended in mathematics. The networks of neurons are most often layered and "feed-
88
+ forward", meaning that units from any layer only output results to units in subsequent layers. The width of a layer
89
+ refers to its neuron count, while the depth of a network refers to its layer count. The typical architecture instantiating
90
+ the above characteristics is the MultiLayer Perceptron [1] (MLP).
91
+ Universal approximation theorems [2] [3] ensure that, whenever a nonlinear network such as the MLP is either bound in
92
+ width and unbound in depth or vice versa, its weights can then be set to represent virtually any function (i.e. a wide
93
+ variety of function families).
94
+ The training problem thus consists in building networks with sets of weights so to instantiate or approximate the func-
95
+ tion that would solve the assigned task, or that represents the input-output relation. This search is not trivial: it can
96
+ be framed as the optimization problem for a functional over the ANN weights. Such functional, typically called "loss
97
+ function", associates the "errors" made on the training data to the neural net parameters (its weights), acting as a total
98
+ performance score. Approaching local minima of the loss function and improving the network performance on the
99
+ training data is the prerequisite to generalize on real world and unseen data.
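+ As a purely illustrative sketch of the training procedure just described (a toy example, not taken from any library reviewed later; the layer widths, the three-class output, the learning rate and the random data are arbitrary assumptions), a small MLP can be fitted in PyTorch by repeatedly backpropagating the loss and updating the weights:
+ import torch
+ import torch.nn as nn
+
+ model = nn.Sequential(                       # a small feed-forward MLP
+     nn.Linear(64, 128), nn.ReLU(),           # hidden layer 1 (width 128)
+     nn.Linear(128, 128), nn.ReLU(),          # hidden layer 2 (depth: two hidden layers)
+     nn.Linear(128, 3),                       # output layer for 3 hypothetical classes
+ )
+ loss_fn = nn.CrossEntropyLoss()              # the "loss function" scoring errors on training data
+ optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
+
+ x = torch.randn(32, 64)                      # toy batch: 32 samples with 64 features each
+ y = torch.randint(0, 3, (32,))               # toy labels
+
+ for step in range(100):                      # training loop: approach a local minimum of the loss
+     optimizer.zero_grad()
+     loss = loss_fn(model(x), y)
+     loss.backward()                          # backpropagation of errors
+     optimizer.step()                         # adjust the weights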
100
+ DL is concerned with the use of deep ANNs, namely characterized by depth, stacking several intermediate, (hidden)
101
+ layers between input and output units.
102
+ As mentioned above, other dimensions being equal, depth increases the representational power of ANNs and, more
103
+ specifically, aims at modeling complicated functions as meaningful compositions of simpler ones.
104
+ As with their biological counterparts [4], depth is supposed to manage hierarchies of features from larger input por-
105
+ tions, capturing characteristics often inherent to real world objects and effective in modeling actual data.
106
+ Overall, depth is one of the key features that allowed to overcome historical limits [5] of simpler ANNs such as the
107
+ Perceptron. At the same time, depth comes with numerical and methodological hardships in models training.
108
+ Part of the difficulties arise as the search space for the optimal set of parameters grows considerably with the number
109
+ of layers (and their width as well).
110
+ Other issues are strictly numerical, since the training algorithms include long computation chains that may affect the
111
+ stability of training and learning.
112
+ 2
113
+
114
+ Hence, new or rediscovered ideas in training protocols and mathematical optimization (e.g. applying the "backpropa-
115
+ gation of errors" algorithm to neural nets [6]) played an important role through times when the scientific interest and
116
+ hopes in ANNs faded (so called "AI winters"), paving the way for later advancement.
117
+ The main drivers for the latest success of deep neural networks are of varied nature, and can be schematised as techni-
118
+ cal and human related factors.
119
+ On a technical side DL has profited from [7]:
120
+ • the datafication of the world, i.e. the growing availability of (Big) data
121
+ • the diffusion of Graphical Processing Units (GPUs) as hardware tools.
122
+ To outperform classic machine learning models, deep neural networks often require larger quantities of data samples.
123
+ Such data hunger and high parameter counts contribute to the high requirements of deep models in terms of memory,
124
+ number of operations and computation time. Training models with highly parallelized and smartly scheduled compu-
125
+ tations gained momentum thanks to GPUs.
126
+ In 2012 a milestone exemplified both the above technical aspects, when AlexNet [8], a deep Convolutional Neural
127
+ Network (CNN) based on ideas from Fukushima [4] and LeCun [9] - [10], won the ImageNet Large Scale Visual
128
+ Recognition Challenge after being trained using two GPUs [11]. Since then, Deep Learning has brought new out-
129
+ standing results in various tasks and domains, processing different data types. Deep networks can nowadays work on
130
+ image, video, audio, text, and speech data, time series and sequences, graphs, and more; the main tasks consist in
131
+ classification, prediction, or estimating the probability density of data distributions, with the possibility of modifying,
132
+ completing the input or even generating new instances.
133
+ On a more sociological side, the drivers of Deep Learning success can be related to the synergy of big tech companies,
134
+ advanced research centers, and developer communities [12]. Investments of economical and scientific resources in
135
+ relatively independent, collective projects, such as open-source libraries, frameworks, and APIs (Application Program-
136
+ ming Interfaces), have offered varied tools adapted to multiple specific situations and objectives, exploiting horizontal
137
+ organization [13] and mixing top-down and bottom-up approaches. It is difficult to imagine a rapid rise of successful
138
+ endeavors, without both active communities and the technical means to incorporate and manage lower-level aspects.
139
+ In fact, applying Deep Learning to a relevant problem in any research field requires, in addition to specific domain
140
+ knowledge, a vast background of statistical, mathematical, and programming notions and skills. The tools that support
141
+ scientists and engineers in focusing on their main tasks encompass the languages to express numerical operations on
142
+ GPUs, such as CUDA [14] and cuDNN [15] by NVIDIA, as well as the frameworks to design models, like Tensor-
143
+ Flow [16] and Keras [17] by Google, and PyTorch by Meta [18], or the supporting strategies to build data pipelines.
144
+ Many Deep Learning achievements are relevant to biomedical and clinical research, and the above presented tools
145
+ have enabled explorations of the capabilities of deep neural networks with neuroscience and biomedical data.
146
+ A fuller exploitation and routine employment of modern algorithms are yet to come, both in research and clinical
147
+ practice. This process would accelerate by popularizing, democratizing, and jointly developing models, improving
148
+ their usability, and expanding their environments, i.e. by wrapping solutions into libraries and shared frameworks.
149
+ 2.2
150
+ Neuroscience
151
+ As per the Nature journal, «Neuroscience is a multidisciplinary science that is concerned with the study of the structure
152
+ and function of the nervous system. It encompasses the evolution, development, cellular and molecular biology,
153
+ physiology, anatomy and pharmacology of the nervous system, as well as computational, behavioural and cognitive
154
+ neuroscience» [19].
155
+ Expanding, neuroscience investigates:
156
+ • the evolutionary and individual development of the nervous system;
157
+ • the cellular and molecular biology that characterizes neurons and glial cells;
158
+ • the physiology of living organisms and the role of the nervous system in the homeostatic function;
159
+ • the anatomy, i.e. the identification and description of the system’s structures;
160
+ • pharmacology, i.e. the effect of chemicals of external origin on the nervous system, their interactions with
161
+ endogenous molecules;
162
+ • the computational features of the brain and nerves, how information is processed, which mathematical and
163
+ physical models best predict and approximate the behaviour of neurons;
164
+ • cognition, the mental processes at the intersection of psychology and computational neuroscience;
165
+ • behaviour as a phenomenon rooted in genetics, development, mental states, and so forth.
166
+ 3
167
+
168
+ The techniques to access tissues and structures of the nervous system are often shared by disciplines focused on other
169
+ physiological systems, and some of these processes have been computer aided for long.
170
+ Moreover, nerve cells have distinctive electromagnetic properties and their activity directly and indirectly generates
171
+ detectable signals, adding physical and technical specificity to Neuroscience.
172
+ Overall, neuroscience research is profoundly multi-modal. Data are managed and processed inside a model depending
173
+ on their type and format. The most prominent categories of data involved in neuroscience research comprise 2,3-D
174
+ images or video on the one side, and sequences or signals on the other. Still it is important to acknowledge the differ-
175
+ ent phenomena, autonomous or provoked by the measurement apparatus, underlying data generation and acquisition.
176
+ Bioimages may be produced from:
177
+ • Magnetic Resonance Imaging (MRI)
178
+ • X-rays
179
+ • Tomography with different penetrating waves
180
+ • Histopathology microscopy
181
+ • Fundus photography (retinal images)
182
+ and more.
183
+ Neuroscience sequences may come from:
184
+ Electromyography (EMG)
185
+ • Electroencephalography (EEG)
186
+ • Natural language, text records
187
+ • Genetic sequencing
188
+ • Eye-tracking
189
+ and more.
190
+ Adding to the above, other data types are common in neuroscience, e.g. tabular data, text that may come from
191
+ medical records written by physicians for diagnostic purposes, test scores, inspections of cognitive and sensorimotor
192
+ functions, as the National Institute of Health (NIH) Stroke Scale test scores [20], and more broadly clinical reports
193
+ from anamneses or surveys.
194
+ 2.3
195
+ Neuroinformatics
196
+ Neuroscience is evolving into a data-centric discipline. Modern research heavily depends on human researchers as
197
+ well as machine agents to store, manage and process computerized data from the experimental apparatus to the end
198
+ stage.
199
+ Before delving into the specifics of artificial neural networks applied to the study of biological neural systems, it is
200
+ useful to outline the broader concepts of Neuroinformatics, regarding data and coding, especially in the light of open
201
+ culture.
202
+ According to the International Neuroinformatics Coordinating Facility (INCF), «Neuroinformatics is a research field
203
+ devoted to the development of neuroscience data and knowledge bases together with computational models and ana-
204
+ lytical tools for sharing, integration, and analysis of experimental data and advancement of theories about the nervous
205
+ system function. In the INCF context, neuroinformatics refers to scientific information about primary experimental
206
+ data, ontology, metadata, analytical tools, and computational models of the nervous system. The primary data includes
207
+ experiments and experimental conditions concerning the genomic, molecular, structural, cellular, networks, systems
208
+ and behavioural level, in all species and preparations in both the normal and disordered states» [21]. Given the rele-
209
+ vance of Neuroinformatics to Neuroscience, supporting open and reproducible science implies and requires attention
210
+ to standards and best practices regarding open data and code.
211
+ The INCF itself is an independent organization devoted to validate and promote such standards and practices, inter-
212
+ acting with the research communities [22] and aiming at the "FAIR principles for scientific data management and
213
+ stewardship" [23].
214
+ FAIR principles consist in:
215
+ • being Findable, registered and indexed, searchable, richly described in metadata;
216
+ • being Accessible, through open, free, universally implementable protocols;
217
+ • being Interoperable, with appropriate standards for metadata in the context of knowledge representation;
218
+ 4
219
+
220
+ • being Reusable, clearly licensed, well described, relevant to a domain and meeting community standards.
221
+ Among free and open resources, several software and organized packages integrating pre-processing and data analysis
222
+ workflows for neuroimaging and signal processing became the reference for worldwide researchers in Neuroscience.
223
+ Such tools allow researchers to perform scientific research in neuroscience easily, in solid and repeatable ways. It can be useful to
224
+ mention, for neuroimaging, Freesurfer1 [24] and FSL2 [25], which are standalone software packages, and the MATLAB-connected
225
+ SPM3 [26]. In the domain of signal processing, examples are EEGLAB4 [27], Brainstorm5 [28], PaWFE6 [29], all
226
+ MATLAB related yet free and open, and MNE7 [30], that runs on Python. Regarding applications for neurorobotics
227
+ and Brain Computer Interfaces (BCIs), a recent open-source platform can be found in ROS-neuro8 [31].
228
+ The interested readers can find lists of open resources for computational neuroscience (including code, data, mod-
229
+ els, repositories, textbooks, analysis, simulation and management software) at Open Computational Neuroscience
230
+ Resource 9 (by Austin Soplata), and at Open Neuroscience 10. Additional software resources oriented to Neuroinfor-
231
+ matics in general, but not necessarily open, can also be found as indexed at "COMPUTATIONAL NEUROSCIENCE
232
+ on the Web" 11 (by Jim Perlewitz).
233
+ 2.4
234
+ Bringing Deep Learning to the Neurosciences
235
+ The Deep Learning community is accustomed to open science, as many datasets, models, programming frameworks
236
+ and scientific outcomes are publicly released by both academia and companies continuously. However, while Deep
237
+ Learning can openly provide state-of-the-art models to old and new problems in Neuroscience, theoretical understand-
238
+ ing, formalization and standardisation are often yet to be achieved, which may prevent adoption in other research
239
+ endeavors. From a technical standpoint, deep networks are a viable tool for many tasks involving data from the brain
240
+ sciences. Image classification has arguably been the task in which deep neural networks have had the highest mo-
241
+ mentum, in terms of pushing the state of the art forward. This translates now in a rich taxonomy of architectures
242
+ and pre-trained models that consistently maintain interesting performances in pattern recognition, across a number of
243
+ image domains.
244
+ Pattern recognition is indeed central for diagnostic purposes, in the form of classification of images with pathological
245
+ features (e.g. types of brain tumors or meningiomas), segmentation of structures (such as the brain, brain tumors
246
+ or stroke lesions), classification of signals (e.g. classification of electromyography or electroencephalography data),
247
+ as well as for action recognition in Human-Computer Interfaces (HCIs). The initiatives BRain Tumor Segmentation
248
+ (BRATS) Challenge12 [32], Ischemic Stroke LEsion Segmentation (ISLES) Challenge13 [33]- [34], and Ninapro14 [35]
249
+ are examples of data releases for which above-mentioned tools proved effective.
250
+ There are models learning image-to-image functions, capable of enhancing data, preprocessing it, correcting artifacts
251
+ and aberrations, allowing smart compression as well as super-resolution, and even expressing cross-modal transforma-
252
+ tions between different acquisition apparatus.
253
+ In the related tasks of object tracking, action recognition and pose estimation, research results from the automotive
254
+ sector or crowd analysis have inspired solutions for behavioural neuroscience, especially in animal behavioral studies.
255
+ When dealing with sequences, the success of deep networks in Computer Vision has inspired CNN-based approaches to EEG
256
+ and EMG studies [36] - [37], either with or without relying on 2D data, given that mathematical convolution has a 1D
257
+ version, and 1D signals have 2D spectra. Other architectures more directly instantiate temporal and sequential aspects,
258
+ e.g. Recurrent Neural Networks (RNNs) such as the Long Short Term Memory (LSTM) [38] and Gated Recurrent
259
+ Units (GRUs) [39], and they too can be applied to sequence problems and sub-tasks in neuroscience, such as decoding
260
+ time-dependent brain signals.
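+ To make the 1D-convolutional approach concrete, the following minimal PyTorch sketch (a toy illustration, not the architecture of any specific library reviewed below; the channel count, epoch length and number of classes are arbitrary assumptions) classifies multichannel EEG epochs directly from the raw time series:
+ import torch
+ import torch.nn as nn
+
+ class TinyEEGNet(nn.Module):
+     """Toy 1-D CNN for EEG epochs shaped (batch, channels, time)."""
+     def __init__(self, n_channels=32, n_classes=4):
+         super().__init__()
+         self.features = nn.Sequential(
+             nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),  # temporal convolution
+             nn.ReLU(),
+             nn.MaxPool1d(4),
+             nn.Conv1d(16, 32, kernel_size=7, padding=3),
+             nn.ReLU(),
+             nn.AdaptiveAvgPool1d(1),                              # collapse the time axis
+         )
+         self.classifier = nn.Linear(32, n_classes)
+
+     def forward(self, x):
+         return self.classifier(self.features(x).squeeze(-1))
+
+ logits = TinyEEGNet()(torch.randn(8, 32, 512))  # 8 toy epochs, 32 channels, 512 time samples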
261
+ Although deep neural networks do not explicitly model the nervous system, they are inspired by biological knowledge
262
+ and mimic some aspects of biological computation and dynamical systems. This has inspired new comparative studies,
263
+ 1https://surfer.nmr.mgh.harvard.edu/
264
+ 2https://fsl.fmrib.ox.ac.uk/fsl/fslwiki
265
+ 3https://www.fil.ion.ucl.ac.uk/spm/
266
+ 4https://sccn.ucsd.edu/eeglab/index.php
267
+ 5https://neuroimage.usc.edu/brainstorm/Introduction
268
+ 6http://ninapro.hevs.ch/node/229
269
+ 7https://mne.tools/stable/index.html
270
+ 8https://github.com/rosneuro
271
+ 9https://github.com/asoplata/open-computational-neuroscience-resources
272
+ 10https://open-neuroscience.com/
273
+ 11https://compneuroweb.com/sftwr.html
274
+ 12https://www.med.upenn.edu/cbica/brats/
275
+ 13https://www.isles-challenge.org/
276
+ 14http://ninaweb.hevs.ch/node/7
277
+ 5
278
+
279
+ and analogy approaches to learning and perception, in a unique way among machine learning algorithms [40].
280
+ Many neuroinformatic studies demonstrate how novel deep learning concepts and methods apply to neurological
281
+ data [12]. However, they often showcase further achievements in performance metrics that do not translate di-
282
+ rectly to new accepted neuroscience discoveries or clinical best practices.
283
+ Such results are very often published together with open code repositories, allowing reproducibility, yet they may not
284
+ be explicitly organized for widespread routine adoption in domains different from machine learning. Algorithms are
285
+ usually written in open programming languages like Python [41], R [42], Julia [43], and deep learning design frame-
286
+ works such as TensorFlow, PyTorch or Flux [44]. Still, they are more inspiring to the experienced machine learning
287
+ researcher, rather than practically helpful to end-users such as neuroscientists.
288
+ In fact, to successfully build a deep learning application from scratch, a vast knowledge is needed in the data science
289
+ aspect of the task and in coding, as much as in the theoretical and experimental foundations and frontiers of the
290
+ application domain, here being Neuroscience.
291
+ For the above reasons, the open source and open science domains are promising frames for common development
292
+ and testing of relevant solutions for Neuroscience, as they provide an active flow of ideas and robust diversification,
293
+ avoiding "reinvention of the wheel", harmful redundancies or starting from completely blank states.
294
+ As a contribution in clarifying the current situation and reducing the workload for researchers, this work collects and
295
+ analyzes several open libraries that implement and facilitate Deep Learning application in Neuroscience, with the aim
296
+ of allowing worldwide scientists to identify the most suitable options for their inquiries and clinical tasks.
297
+ 3
298
+ Methods
299
+ The large corpus of available open code makes it useful to specify what qualifies as a coding library or a framework,
300
+ rather than as a model accompanied by utilities, for the present scope.
301
+ In programming, a library is a collection of pre-coded functions and object definitions, often relying on one another,
302
+ and written to optimize programming for custom tasks. The functions are considered useful and unmodified across
303
+ multiple unrelated programs and tasks. The main program at hand calls the library, in the control flow specified by the
304
+ end-users.
305
+ A framework is a higher-level concept, akin to the library, but typically with a pre-designed control flow into which
306
+ custom code from the end-users is inserted.
307
+ For instance, a repository that simply collects the functions that define and instantiate a deep model would not be
308
+ considered a library. On the other hand, collections of notebooks that allow to train, retrain and test models with
309
+ several architectures, while possibly taking care also of data pre-processing and preparation, would be considered
310
+ libraries (and frameworks) for the present scopes.
311
+ The explicit definition of the authors, their aims and their
312
+ maintenance of the library is relevant as well, in determining if a repository would be considered a library, toolkit,
313
+ toolbox, or other.
314
+ For the sake of the review, several resources were queried or scanned. Google Scholar was queried with:
315
+ • "deep learning library" OR "deep learning toolbox" OR "deep learning package" -"MATLAB deep learning
316
+ toolbox" -"deep learning toolbox MATLAB"
317
+ preserving the top 100 search results, ordered for relevance by the engine algorithm. On PubMed the queries were:
318
+ • opensource (deep learning) AND (toolbox OR toolkit OR library);
319
+ • (EEG OR EMG OR MRI OR (brain (X-ray OR CT OR PT))) (deep learning) AND (toolbox OR toolkit OR
320
+ library).
321
+ Moreover, the site https://open-neuroscience.com/ was scanned specifically for "deep learning" mentions, and
322
+ relevant papers cited or automatically suggested throughout the query process were considered for evaluation, as well
323
+ as the platform of the Journal of Open Source Software at https://joss.theoj.org/.
324
+ The collected libraries were organized according to the principal aim, in the form of data type processed, or the
325
+ supporting function in the workflow, thus dividing:
326
+ 1. libraries for sequence data (e.g. EMG, EEG)
327
+ 2. libraries for image data (including scalar volumes, 4-dimensional data as in fMRI, video)
328
+ 3. libraries and frameworks to support model building, evaluation, data ingestion
329
+ In each category, a set of three tables present separately the results related to the following libraries characteristics:
330
+ 6
331
+
332
+ 1. domain of application
333
+ 2. model engineering
334
+ 3. technology and sources
335
+ The domain of application comprises the Neuroscience area, the Data types handled, the provision of Datasets, and
336
+ the machine learning Task to which the library is dedicated.
337
+ The model engineering tables include information on the architecture of DL Models manageable in the library, the
338
+ DL framework and Programming language main dependencies, and the possibility of Customization for the model
339
+ structure or training parameters.
340
+ Technology and sources refer to the type of Interface available for a library, whether it works Online/Offline, specif-
341
+ ically with real-time data or logged data. Maintenance refers to the ongoing activity of releasing features, solving
342
+ issues and bugs or offering support through channels; Source specifies where code files and instructions are made
343
+ available.
344
+ 4
345
+ Results: Deep Learning Libraries
346
+ The analysis of the literature led to the selection of a total of 48 publications describing libraries that implement or em-
347
+ power deep learning applications for neuroscience. Despite being open source and effective, several publications did not
348
+ provide an ecosystem of reusable functions. Proofs of concept and single-shot experiments were discarded.
349
+ 4.1
350
+ Libraries for sequence data
351
+ Libraries and frameworks for sequence data are shown in Tables 1 (domains of application), 2 (model characteris-
352
+ tics), 3 (technologies and sources). The majority of them process EEG signals, which are among the most common types
353
+ of sequential data in Neuroscience research. A common objective is deducing the activity or state of the subject,
354
+ based on temporal or spectral (2D) patterns. Deep Learning is capable of bypassing some of the preprocessing steps
355
+ often required by other common statistical and engineering techniques, and it comprises both 1D and 2D approaches,
356
+ through MLP, CNN or RNN architectures. BioPyC is an example of such a scenario. It offers the possibility to
357
+ train a pre-set CNN architecture as well as to load and train a custom model. Moreover, it can process different
358
+ types of sequence data, making it versatile and suitable for several neuroscience areas.
359
+ Another example of sequence-oriented library is gumpy, whose intended area of application is that of Brain Computer
360
+ Interfaces (BCIs), where decoding a signal is the first step towards communication and interaction with a computer
361
+ or robotic system. Given the setting, gumpy allows working with EEG or EMG data and provides specific
363
+ defaults for them, e.g. 1-D CNNs or LSTMs.
363
+ Notable mentions in the sequence category are the library Traja and the VARDNN toolbox, as they depart from
364
+ the common scenarios of previous examples. Traja stands out as an example of less usual sequential data, namely
365
+ trajectory data (sequences of coordinates in 2 or 3 dimensions, through time). Moreover, in Traja sequences are
366
+ modeled and analyzed employing the advanced architectures of Variational AutoEncoders (VAEs) and Generative
367
+ Adversarial Networks (GANs), usually encountered in image tasks. With different theoretical backgrounds, both
368
+ architectures allow simulation and characterization of data through their statistical properties. The VARDNN toolbox
369
+ allows analyses on BOLD signals, in the established domain of functional Magnetic Resonance Imaging (fMRI), but
370
+ uses a unique approach to autoregressive processes mixed with deep neural networks, allowing to perform causal
371
+ analysis and to study functional connections between brain regions through their patterns of activity in time.
372
+ 7
373
+
374
+ Name | Neuroscience area | Data type | Datasets | Task
+ BioPyC [45] | General | Sequences (EEG, miscellaneous) | No | Classification
+ braindecode [46] | General | Sequences (EEG, MEG) | External | Classification
+ DeLINEATE [47] | General | Images, sequences | External | Classification
+ EEG-DL [48] | BCI | Sequences (EEG) | No | Classification
+ gumpy [49] | BCI | Sequences (EEG, EMG) | No | Classification
+ DeepEEG | Electrophysiology | Sequences (EEG) | No | Classification
+ ExBrainable [50] | Electrophysiology | Sequences (EEG) | External | Classification, XAI
+ Traja [51] | Behavioural neuroscience | Sequences (Trajectory coordinates over time) | No | Prediction, Classification, Synthesis
+ VARDNN toolbox [52] | Connectomics (Functional Connectivity) | Sequences (BOLD signal) | No | Time series causal analysis
+ Table 1: Domains of applications for the libraries and frameworks processing sequence data
431
+ 8
432
+
433
+ Name | Models | DL framework | Customization | Programming language
+ BioPyC | 1-D CNN | Lasagne | Yes (weights, model) | Python
+ braindecode | 1-D CNN | PyTorch | Yes (weights, model) | Python
+ DeLINEATE | CNN | Keras, TensorFlow | Yes (weights, model) | Python
+ EEG-DL | Miscellaneous | TensorFlow | Yes (weights, model) | Python, MATLAB
+ gumpy | CNN, LSTM | Keras, Theano | Yes (weights, model) | Python
+ DeepEEG | MLP, 1,2,3-D CNN, LSTM | Keras, TensorFlow | Yes (weights) | Python
+ ExBrainable | CNN | PyTorch | Yes (weights) | Python
+ Traja | LSTM, VAE, GAN | PyTorch | Yes (weights, model) | Python
+ VARDNN toolbox | Vector Auto-Regressive DNN | Deep Learning Toolbox (MATLAB) | Yes (weights) | MATLAB
+ Table 2: Model engineering specifications for the libraries and frameworks processing sequence data
484
+ 9
485
+
486
+ Name | Interface | Online/Offline | Maintenance | Source
+ BioPyC | Jupyter Notebooks | Offline | Active | gitlab.inria.fr/biopyc/BioPyC/
+ braindecode | None | Offline | Active | github.com/braindecode/braindecode
+ DeLINEATE | GUI, Colab Notebooks | Offline | Active | bitbucket.org/delineate/delineate
+ EEG-DL | None | Offline | Active | github.com/SuperBruceJia/EEG-DL
+ gumpy | None | Online, Offline | Inactive | github.com/gumpy-bci
+ DeepEEG | Colab Notebooks | Offline | Inactive | github.com/kylemath/DeepEEG
+ ExBrainable | GUI | Offline | Active | github.com/CECNL/ExBrainable
+ Traja | None | Offline | Active | github.com/traja-team/traja
+ VARDNN toolbox | None | Offline | Active | github.com/takuto-okuno-riken/vardnn
+ Table 3: Technological aspects and code sources for the libraries and frameworks processing sequence data
537
+ 10
538
+
539
+ 4.2
540
+ Libraries for image data
541
+ Libraries and frameworks for image data are shown in Tables 4 (domains of application), 5 (model characteristics), 6
542
+ (technologies and sources). Computer vision and 2D image processing are arguably the fields in which DL has
543
+ achieved the most impressive and state-of-the-art-defining results, often inspiring and translating breakthroughs into other
544
+ domains. Classification and segmentation (i.e. the separation of parts of the image based on their classes) are the
545
+ most common tasks addressed by the image processing libraries. Magnetic resonance is the primary source of data;
546
+ however, various deep learning libraries are built on microscopy and eye-tracking data as well. Most of the libraries
547
+ collected in our analysis take advantage of classical CNN architectures for classification, Convolutional AutoEncoders
548
+ (CAEs) for segmentation, and GANs for synthesis. It is common to employ transfer learning to lessen the compu-
549
+ tational and memory burden during the training phase, and take advantage of pre-trained models. Transfer learning
550
+ consists in initializing models with parameters learnt on usually larger data sets, possibly from different domains and
551
+ tasks, with varying amounts of further training in the target domain. The best such examples are pose-estimation
552
+ libraries extending the DeepLabCut system, arguably the most relevant project on the topic. DeepLabCut is an
553
+ interactive framework for labelling, training, testing and refining models, that originally exploits the weights learned
554
+ from ResNets (or newer architectures) on the ImageNet data. The results match human annotation using relatively few
555
+ training samples, holding for many (human and non-human) animals, and settings. The documentation and demon-
556
+ strative notebooks and tools offered by the Mathis Lab allow different levels of understanding and customization of
557
+ the process, with high levels of robustness. Among the considered libraries, two stand apart from the majority given
558
+ the type of tasks they perform: GaNDLF addresses eXplainable AI (XAI), i.e. Artificial Intelligence whose deci-
559
+ sions and outputs can be understood by humans through more transparent mental models; ANTsX performs both the
560
+ co-registration step and super-resolution as a quality enhancing step for neuroimages, with the former being usually
561
+ performed by traditional algorithms. GaNDLF sets its goal as the provision of deep learning resources in different
562
+ layers of abstraction, allowing medical researchers with virtually no ML knowledge to perform robust experiments
563
+ with models trained on carefully split data, with augmentations and preprocessing, under standardized protocols that
564
+ can easily integrate interpretability tools such as Grad-CAM [53] and attention maps, which highlight the parts of
565
+ an image according to how they influenced a model outcome. The ANTsX ecosystem is of similar wide scope, and
566
+ is intended to build workflows on quantitative biology and medical imaging data, both in Python and R languages.
567
+ Packages from the same ecosystem perform registration of brain structures (by classical methods) as well as brain
568
+ extraction by deep networks.
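+ As a hedged illustration of the transfer learning strategy mentioned above (written against the public torchvision API rather than any of the reviewed libraries; the frozen backbone, the three-class head and the weights argument, which follows recent torchvision releases, are assumptions made for the example), an ImageNet-pretrained backbone can be adapted to a new imaging task as follows:
+ import torch.nn as nn
+ import torchvision.models as models
+
+ backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)   # ImageNet-pretrained weights
+ for p in backbone.parameters():
+     p.requires_grad = False                                           # freeze the pretrained feature extractor
+ backbone.fc = nn.Linear(backbone.fc.in_features, 3)                   # new trainable head for a hypothetical 3-class task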
569
+ 11
570
+
571
+ Name | Neuroscience area | Data type | Datasets | Task
+ AxonDeepSeg [54] | Microbiology, Histology | Img (SEM, TEM) | External | Segm.
+ DeepCINAC [55] | Electrophys. | Vid (2-photon calcium) | No | Class.
+ DeepLabCut [56] | Behavioral neuroscience | Vid | No | Pose est.
+ DeepNeuro [57] | Neuroimaging | Img (fMRI, misc.) | No | Class., Segm., Synthesis
+ DeepVOG [58] | Oculography | Img, Vid | Demo | Segm.
+ DeLINEATE [47] | General | Img, sequences | External | Class.
+ DNNBrain [59] | Brain mapping | Img | No | Class.
+ ivadomed [60] | Neuroimaging | Img (2D, 3D) | No | Class., Segm.
+ MEYE [61] | Oculography | Img, Vid | Yes | Segm.
+ Allen Cell Structure Segmenter [62] | Microbiology, Histology | Img (3D-fluor. microscopy) | No | Segm.
+ VesicleSeg [63] | Microbiology, Histology | Img (EM) | No | Segm.
+ CDeep3M2 [64] | Microbiology, Histology | Img (misc. microscopy) | Yes | Segm.
+ CASCADE [65] | Electrophys. | Vid (2-photon calcium), Seq | Yes | Event detection
+ ScLimbic [66] | Neuroimaging | Img (MRI) | External | Segm.
+ ALMA [67] | Behavioral neuroscience | Vid | External | Pose est., Class.
+ fetal-code [68] | Neuroimaging | Img (rs-fMRI) | External | Segm.
+ ClinicaDL [69] | Neuroimaging | Img (MRI, PET) | External | Class., Segm.
+ DeepNeuron [70] | Microbiology, Histology | Img (confocal microscopy) | No | Obj. detect., Segm.
+ GaNDLF [71] | Medical Imaging | Img (2D, 3D) | External | Segm., Regression, XAI
+ MesoNet [72] | Neuroimaging | Img (fluoresc. microscopy) | External | Segm., Registration
+ MARS, BENTO [73] | Behavioral neuroscience | Vid | Yes | Pose est., Class., Action rec., Tag
+ NiftyNet [74] | Medical Imaging | Img (MRI, CT) | No | Class., Segm., Synth.
+ ANTsX [75] (ANTsPyNet, ANTsRNet) | Neuroimaging | Img (MRI) | No | Classification, Segm., Registr., Super-res.
+ Visual Fields Analysis [76] | Eye tracking, Behavioral neuroscience | Vid | No | Pose est., Class.
+ Table 4: Domains of applications for the libraries and frameworks processing image data
718
+ 12
719
+
720
+ Name | Models | DL framework | Customization | Programming language
+ AxonDeepSeg | CAE | TensorFlow | Yes (weights) | Python
+ DeepCINAC | DeepCINAC (CNN+LSTM) | Keras, TensorFlow | Yes (weights) | Python
+ DeepLabCut | CNN | TensorFlow | Yes (weights) | Python
+ DeepNeuro | CNN, CAE, GAN | Keras, TensorFlow | Yes (weights, model) | Python
+ DeepVOG | CAE | TensorFlow | No | Python
+ DeLINEATE | CNN | Keras, TensorFlow | Yes (weights, model) | Python
+ DNNBrain | CNN | PyTorch | Yes (model) | Python
+ ivadomed | 2,3-D CNN, CAE | PyTorch | Yes (weights, model) | Python
+ MEYE | CAE, CNN | TensorFlow | Yes (model) | Python
+ Allen Cell Structure Segmenter | CAE | PyTorch | No | Python
+ VesicleSeg | CNN | PyTorch | No | Python
+ CDeep3M2 | CAE | TensorFlow | Yes (weights) | Python
+ CASCADE | 1-D CNN | TensorFlow | Yes (weights) | Python
+ ScLimbic | 3-D CAE | neurite, TensorFlow | No | Python
+ ALMA | CNN | Unspecified | No | Python
+ fetal-code | 2-D CNN | TensorFlow | No | Python
+ ClinicaDL | CNN, CAE | PyTorch | Yes | Python
+ DeepNeuron | CNN | Unspecified | No | C++
+ GaNDLF | CNN, CAE | PyTorch | Yes | Python
+ MesoNet | CNN, CAE | Keras, TensorFlow | No | Python
+ NiftyNet | CNN | TensorFlow | Yes | Python
+ ANTsX (ANTsPyNet, ANTsRNet) | CNN, CAE, GAN | Keras, TensorFlow | Yes | Python, R, C++
+ MARS, BENTO | CNN | TensorFlow | Yes (weights) | Python
+ Visual Fields Analysis | DeepLabCut | TensorFlow, DeepLabCut | Yes (weights) | Python
+ Table 5: Model engineering specifications for the libraries and frameworks processing image data
847
+ 13
848
+
849
+ Name | Interface | Online/Offline | Maintenance | Source
+ AxonDeepSeg | Jupyter Notebooks | Offline | Active | github.com/axondeepseg/axondeepseg
+ DeepCINAC | GUI, Colab Notebooks | Offline | Active | gitlab.com/cossartlab/deepcinac
+ DeepLabCut | GUI, Colab Notebooks | Offline | Active | github.com/DeepLabCut/DeepLabCut
+ DeepNeuro | None | Offline | Active | github.com/QTIM-Lab/DeepNeuro
+ DeepVOG | None | Offline | Inactive | github.com/pydsgz/DeepVOG
+ DeLINEATE | GUI, Colab Notebooks | Offline | Active | bitbucket.org/delineate/delineate
+ DNNBrain | None | Offline | Active | github.com/BNUCNL/dnnbrain
+ ivadomed | None | Offline | Active | github.com/ivadomed/ivadomed
+ MEYE | Web app | Online, Offline | Active | pupillometry.it
+ Allen Cell Structure Segmenter | GUI, Jupyter Notebooks | Offline | Active | github.com/AllenCell/aics-ml-segmentation
+ VesicleSeg | GUI | Offline | Active | github.com/Imbrosci/synaptic-vesicles-detection
+ CDeep3M2 | GUI, Colab Notebooks | Offline | Active | github.com/CRBS/cdeep3m2
+ CASCADE | GUI, Colab Notebooks | Offline | Active | github.com/HelmchenLabSoftware/Cascade
+ ScLimbic | Unspecified | Offline | Active | surfer.nmr.mgh.harvard.edu/fswiki/ScLimbic
+ ALMA | GUI | Offline | Active | github.com/sollan/alma
+ fetal-code | GUI, Colab Notebooks | Offline | Active | github.com/saigerutherford/fetal-code
+ ClinicaDL | GUI, Colab Notebooks | Offline | Active | github.com/aramis-lab/clinicadl
+ DeepNeuron | GUI | Online, Offline | Inactive | github.com/Vaa3D/Vaa3D_Data/releases/tag/1.0
+ GaNDLF | GUI | Offline | Active | github.com/CBICA/GaNDLF
+ MesoNet | GUI, Colab Notebooks | Offline | Active | osf.io/svztu
+ NiftyNet | None | Offline | Inactive | github.com/NifTK/NiftyNet
+ ANTsX (ANTsPyNet, ANTsRNet) | None | Offline | Active | github.com/ANTsX
+ MARS, BENTO | GUI, MATLAB GUI, Jupyter Notebooks | Offline | Active | github.com/neuroethology
+ Visual Fields Analysis | GUI | Offline | Active | github.com/mathjoss/VisualFieldsAnalysis
+ Table 6: Technological aspects and code sources for the libraries and frameworks processing image data
999
+ 14
1000
+
1001
+ 4.3
1002
+ Libraries targeting data types different from sequences or images and general applications
1003
+ Libraries and frameworks in this category are shown in Tables 7 (domains of application), 8 (model characteris-
1004
+ tics), 9 (technologies and sources). In this category fall libraries and projects that either handle varying input data types or
1005
+ address analyses other than sequence and image data; other libraries target computational platforms, higher-level frame-
1006
+ works, or supporting functions for deep learning such as specific preprocessing and augmentations. NeuroCAAS is an
1007
+ ambitious project that both standardizes experimental schedules, analyses and offers computational resources on the
1008
+ cloud. The platform lifts the burden of configuring and deploying data analysis tools, also guaranteeing replicability
1009
+ and readily available usage of pre-made pipelines, with high efficiency. MONAI is a project that brings deep learning
1010
+ tools to many health and biology problems, and is a commonly used framework for the 3D variations of UNet [77]
1011
+ lately dominating the yearly BraTS challenge [32] (see at http://braintumorsegmentation.org/). The paradigm
1012
+ builds on PyTorch and aims at unifying healthcare AI practices throughout both academia and enterprise research, not
1013
+ only in the model development but also in the creation of shared annotated datasets. Lastly, it focuses on deployment
1014
+ and work in real world clinical production, settling as a strong candidate for being the standard solution in the do-
1015
+ main. Predify and THINGvision are two libraries that bridge deep learning research and computational neuroscience.
1016
+ The former allows to include an implementation of a «predictive coding mechanism» (as hypothesized in [78]) into
1017
+ virtually any pre-built architectures, evaluating its impact on performance. The latter offers a single environment for
1018
+ Representational Similarity Analysis, i.e. the study of the encodings of biological and artificial neural networks that
1019
+ process visual data.
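+ As a brief illustration of the kind of workflow MONAI standardizes (a minimal sketch based on its publicly documented API; the channel sizes, strides, binary-segmentation setting and toy tensors are arbitrary assumptions, and argument names may differ across MONAI versions), a 3D U-Net and a Dice loss can be instantiated as follows:
+ import torch
+ from monai.networks.nets import UNet
+ from monai.losses import DiceLoss
+
+ net = UNet(spatial_dims=3, in_channels=1, out_channels=2,             # 3D U-Net for a toy binary segmentation
+            channels=(16, 32, 64, 128), strides=(2, 2, 2), num_res_units=2)
+ loss_fn = DiceLoss(to_onehot_y=True, softmax=True)                    # overlap-based segmentation loss
+
+ volume = torch.randn(1, 1, 96, 96, 96)                                # toy 3D volume (batch, channel, D, H, W)
+ label = torch.randint(0, 2, (1, 1, 96, 96, 96))                       # toy binary segmentation mask
+ loss = loss_fn(net(volume), label)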
1020
+ 15
1021
+
1022
+ Name | Neuroscience area | Data type | Datasets | Task
+ NeuroCAAS [79] | Virtually all | Virtually all | External availability | Virtually all
+ MONAI [80] | Virtually all | Virtually all | External availability | Virtually all
+ Predify [81] | Computational Neuroscience | Images, Virtually all | No | Classification, Adversarial attacks, virtually all
+ THINGvision [82] | Computational Neuroscience | Images, Text | External availability | Classification
+ TorchIO [83] | Imaging | All images | No | Augmentation
+ Table 7: Domains of applications for the libraries and frameworks for special applications
1057
+ 16
1058
+
1059
+ Name | Models | DL framework | Customization | Programming language
+ NeuroCAAS | CNN | TensorFlow | Yes | Python
+ MONAI | Virtually all | PyTorch | Yes | Python
+ Predify | CNN, Virtually all | PyTorch | Yes | Python
+ THINGvision | CNN, RNN, Transformers | PyTorch, TensorFlow | No | Python
+ TorchIO | CNN | PyTorch | Yes | Python
+ Table 8: Model engineering specifications for the libraries and frameworks for special applications
1090
+ 17
1091
+
1092
+ Name | Interface | Online/Offline | Maintenance | Source
+ NeuroCAAS | GUI, Jupyter Notebooks | Offline | Active | github.com/cunningham-lab/neurocaas
+ MONAI | GUI, Colab Notebooks | Offline | Active | github.com/Project-MONAI/MONAI
+ Predify | Text UI (TOML) | Offline | Active | github.com/miladmozafari/predify
+ THINGvision | None | Offline | Active | github.com/ViCCo-Group/THINGSvision
+ TorchIO | GUI, Command line | Offline | Active | torchio.rtfd.io
+ Table 9: Technological aspects and code sources for the libraries and frameworks for special applications
1123
+ 18
1124
+
1125
+ 5
1126
+ Discussion
1127
+ The panorama of open-source libraries dedicated to deep learning applications in neuroscience is quite rich and diver-
1128
+ sified. There is a corpus of organized packages that integrate preprocessing, training, testing and performance analyses
1129
+ of deep neural networks for neurological research. Most of these projects are tuned to specific data modalities and
1130
+ formats, but some libraries are quite versatile and customizable, and there are projects that encompass quantitative
1131
+ biology and medical analysis as a whole. There is a common tendency to develop GUIs, enhancing user-friendliness of
1132
+ toolkits for non-programmers and researchers unacquainted with the command line interfaces, for example. Moreover,
1133
+ for the many libraries developed in Python, the (Jupyter) Notebook format appears as a widespread tool both for tutori-
1134
+ als, documentation and as an interface to cloud computational resources (e.g. Google Colab [84]). Apart from specific
1135
+ papers and documentation, and outside of deep learning per se, it is important to make researchers and developers
1136
+ aware of the main topics and initiatives in open culture and Neuroinformatics. For this reason, the interested reader
1137
+ is invited to rely on competent institutions (e.g. INCF) and databases of open resources (e.g. open-neuroscience)
1138
+ dedicated to Neuroscience. Among the possibly missing technologies, the queries employed did not retrieve results
1139
+ in Natural Language Processing libraries dedicated to neuroscience, nor toolkits specifically employing Graph Neural
1140
+ Networks (GNNs), although available in EEG-DL. NLP is actually fundamental in healthcare, since medical reports
1141
+ often come in non-standardized forms. Large language models, Named Entity Recognition (NER) systems and text
1142
+ mining approaches in biomedical research exist [85], [86]. GNNs comprise recent architectures that are extremely
1143
+ promising in a variety of fields [87], including biomedical research and particularly neuroscience [88], [89]. Even if
1144
+ promising, their application is still less mature than that of computer vision models or time series analysis.
1145
+ Considering the available software for imaging and signal processing in the domain of neuroscience, at this moment a
1146
+ single alternative targeting the opportunities offered by modern deep learning seems to be missing. Overall, it seems
1147
+ still unlikely to develop a common deep learning framework for Neuroscience as a separate whole, but the engineering
1148
+ knowledge relevant and compressible into such framework would be common to other biomedical fields, and projects
1149
+ such as MONAI are strong candidates toward this goal. Instead, it seems achievable to deliver models and functions
1150
+ in a concerted way, restricted either to a sub-field or a data modality, based on the modularity of existent tools and the
1151
+ organizing possibilities of project initiation and management of open culture.
1152
+ 6
1153
+ Conclusions
1154
+ Although a large and growing number of repositories offer code to build specific models, as published in experimental
1155
+ papers, these resources seldom aim to constitute proper libraries or frameworks for research or clinical practice. Both
1156
+ deep learning and neuroscience gain much value even from sophisticated proofs of concept. In parallel, organized
1157
+ packages are spreading and starting to provide and integrate pre-processing, training, testing and performance analyses
1158
+ of deep neural networks for neurological and biomedical research. This paper has offered both an historical and a
1159
+ technical context for the use of deep neural networks in Neuroinformatics, focusing on open-source tools that scientists
1160
+ can comprehend and adapt to their necessities. At the same time, this work underlines the value of the open culture and
1161
+ points to relevant institutions and platforms for neuroscientists. Although the aim is not restricted to making clinicians
1162
+ develop their own deep models without coding or Machine Learning background, as was the case in [90], the overall
1163
+ effect of these libraries and sources is to democratize deep learning applications and results, as well as standardizing
1164
+ such complex and varied models, supporting the research community in obtaining proper means to an end, and in
1165
+ envisioning then realizing collectively new projects and tools.
1166
+ Acknowledgments
1167
+ This work was supported by the "Department of excellence 2018-2022" initiative of the Italian Ministry of education
1168
+ (MIUR) awarded to the Department of Neuroscience - University of Padua.
1169
+ References
1170
+ [1] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain.
1171
+ Psychological Review, 65(6):386–408, 1958.
1172
+ [2] G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and
1173
+ Systems, 2(4):303–314, December 1989.
1174
+ [3] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approx-
1175
+ imators. Neural Networks, 2(5):359–366, January 1989.
1176
+ 19
1177
+
1178
+ [4] Kunihiko Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recog-
1179
+ nition unaffected by shift in position. Biological Cybernetics, 36(4):193–202, April 1980.
1180
+ [5] Marvin Minsky and Seymour Papert. Perceptrons: An Introduction to Computational Geometry. MIT Press,
1181
+ Cambridge, MA, USA, 1969.
1182
+ [6] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning Representations by Back-propagating
1183
+ Errors. Nature, 323(6088):533–536, 1986.
1184
+ [7] Erik Meijering. A bird’s-eye view of deep learning in bioimage analysis. Computational and Structural Biotech-
1185
+ nology Journal, 18:2312–2325, 2020.
1186
+ [8] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural
1187
+ networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information
1188
+ Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012.
1189
+ [9] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation
1190
+ applied to handwritten zip code recognition. Neural Computation, 1:541–551, 1989.
1191
+ [10] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document
1192
+ recognition. In PROCEEDINGS OF THE IEEE, pages 2278–2324, 1998.
1193
+ [11] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej
1194
+ Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual
1195
+ recognition challenge. arXiv:1409.0575, 2014.
1198
+ [12] Aly Al-Amyn Valliani, Daniel Ranti, and Eric Karl Oermann. Deep Learning and Neurology: A Systematic
1199
+ Review. Neurology and Therapy, 8(2):351–365, December 2019.
1200
+ [13] Eric S. Raymond. The cathedral and the bazaar: musings on Linux and open source by an accidental revolu-
1201
+ tionary. O’Reilly Media, Beijing; Cambridge; Farnham; Köln; Paris; Sebastopol; Taipei, 2nd revised and expanded
1202
+ edition, 2001. With a foreword by Bob Young.
1203
+ [14] NVIDIA, Péter Vingelmann, and Frank H.P. Fitzek. Cuda, release: 10.2.89, 2020.
1204
+ [15] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan M. Cohen, John Tran, Bryan Catanzaro, and
1205
+ Evan Shelhamer. cudnn: Efficient primitives for deep learning. ArXiv, abs/1410.0759, 2014.
1206
+ [16] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado,
1207
+ Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving,
1208
+ Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion
1209
+ Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner,
1210
+ Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals,
1211
+ Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale
1212
+ machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
1213
+ [17] Francois Chollet et al. Keras, 2015.
1214
+ [18] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen,
1215
+ Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito,
1216
+ Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chin-
1217
+ tala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information
1218
+ Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019.
1219
+ [19] Nature neuroscience. https://www.nature.com/subjects/neuroscience. Accessed: 2022-08-18.
1220
+ [20] Thomas Brott, Harold P. Adams, Charles P. Olinger, John R. Marle, William G. Barsan, Jose Biller, Judith
1221
+ Spilker, Renée Holleran, Robert Eberle, Vicki Hertzberg, Marvin Rorick, Charles J. Moomaw, and Michael
1222
+ Walker. Measurements of acute cerebral infarction: A clinical examination scale. Stroke, 20(7):864–870, July
1223
+ 1989.
1224
+ [21] What is neuroinformatics? https://www.incf.org/about/what-is-neuroinformatics. Accessed: 2022-
1225
+ 08-18.
1226
+ [22] Mathew Birdsall Abrams, Jan G. Bjaalie, Samir Das, Gary F. Egan, Satrajit S. Ghosh, Wojtek J. Goscinski, Jef-
1227
+ frey S. Grethe, Jeanette Hellgren Kotaleski, Eric Tatt Wei Ho, David N. Kennedy, Linda J. Lanyon, Trygve B.
1228
+ Leergaard, Helen S. Mayberg, Luciano Milanesi, Roman Mouˇcek, J. B. Poline, Prasun K. Roy, Stephen C.
1229
+ Strother, Tong Boon Tang, Paul Tiesinga, Thomas Wachtler, Daniel K. Wójcik, and Maryann E. Martone. A
1230
+ Standards Organization for Open and FAIR Neuroscience: the International Neuroinformatics Coordinating Fa-
1231
+ cility. Neuroinformatics, January 2021.
1232
+ 20
1233
+
1234
+ [23] Mark D. Wilkinson, Michel Dumontier, IJsbrand Jan Aalbersberg, Gabrielle Appleton, Myles Axton, Arie Baak,
1235
+ Niklas Blomberg, Jan-Willem Boiten, Luiz Bonino da Silva Santos, Philip E. Bourne, Jildau Bouwman, An-
1236
+ thony J. Brookes, Tim Clark, Mercè Crosas, Ingrid Dillo, Olivier Dumon, Scott Edmunds, Chris T. Evelo,
1237
+ Richard Finkers, Alejandra Gonzalez-Beltran, Alasdair J.G. Gray, Paul Groth, Carole Goble, Jeffrey S. Grethe,
1238
+ Jaap Heringa, Peter A.C ’t Hoen, Rob Hooft, Tobias Kuhn, Ruben Kok, Joost Kok, Scott J. Lusher, Maryann E.
1239
+ Martone, Albert Mons, Abel L. Packer, Bengt Persson, Philippe Rocca-Serra, Marco Roos, Rene van Schaik,
1240
+ Susanna-Assunta Sansone, Erik Schultes, Thierry Sengstag, Ted Slater, George Strawn, Morris A. Swertz, Mark
1241
+ Thompson, Johan van der Lei, Erik van Mulligen, Jan Velterop, Andra Waagmeester, Peter Wittenburg, Kather-
1242
+ ine Wolstencroft, Jun Zhao, and Barend Mons. The FAIR Guiding Principles for scientific data management and
1243
+ stewardship. Scientific Data, 3:160018, March 2016.
1244
+ [24] Bruce Fischl. Freesurfer. Neuroimage, 62(2):774–781, 2012.
1245
+ [25] Stephen M. Smith, Mark Jenkinson, Mark W. Woolrich, Christian F. Beckmann, Timothy Edward John Behrens,
1246
+ Heidi Johansen-Berg, Peter R. Bannister, M. De Luca, Ivana Drobnjak, David Flitney, Rami K. Niazy, James
1247
+ Saunders, John Vickers, Yongyue Zhang, Nicola De Stefano, Joanne Brady, and Paul M. Matthews. Advances in
1248
+ functional and structural mr image analysis and implementation as fsl. NeuroImage, 23:S208–S219, 2004.
1249
+ [26] G. Flandin and K.J. Friston. Statistical parametric mapping. Scholarpedia, 3(4):6232, 2008.
1250
+ [27] Arnaud Delorme and Scott Makeig. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics
1251
+ including independent component analysis. Journal of Neuroscience Methods, 134(1):9–21, March 2004.
1252
+ [28] François Tadel, Sylvain Baillet, John C. Mosher, Dimitrios Pantazis, and Richard M. Leahy. Brainstorm: A
1253
+ User-Friendly Application for MEG/EEG Analysis. Computational Intelligence and Neuroscience, 2011:1–13,
1254
+ 2011.
1255
+ [29] Manfredo Atzori and Henning Müller. PaWFE: Fast Signal Feature Extraction Using Parallel Time Windows.
1256
+ Frontiers in Neurorobotics, 13, 2019.
1257
+ [30] Alexandre Gramfort, Martin Luessi, Eric Larson, Denis A. Engemann, Daniel Strohmeier, Christian Brodbeck,
1258
+ Roman Goj, Mainak Jas, Teon Brooks, Lauri Parkkonen, and Matti S. Hämäläinen. MEG and EEG data analysis
1259
+ with MNE-Python. Frontiers in Neuroscience, 7(267):1–13, 2013.
1260
+ [31] Luca Tonin, Gloria Beraldo, Stefano Tortora, and Emanuele Menegatti. ROS-Neuro: An open-source platform for
1261
+ neurorobotics. Frontiers in Neurorobotics, 16, 2022.
1262
+ [32] Bjoern H. Menze, Andras Jakab, Stefan Bauer, Jayashree Kalpathy-Cramer, Keyvan Farahani, Justin Kirby,
1263
+ Yuliya Burren, Nicole Porz, Johannes Slotboom, Roland Wiest, Levente Lanczi, Elizabeth Gerstner, Marc-André
1264
+ Weber, Tal Arbel, Brian B. Avants, Nicholas Ayache, Patricia Buendia, D. Louis Collins, Nicolas Cordier, Ja-
1265
+ son J. Corso, Antonio Criminisi, Tilak Das, Hervé Delingette, Ça˘gatay Demiralp, Christopher R. Durst, Michel
1266
+ Dojat, Senan Doyle, Joana Festa, Florence Forbes, Ezequiel Geremia, Ben Glocker, Polina Golland, Xiaotao Guo,
1267
+ Andac Hamamci, Khan M. Iftekharuddin, Raj Jena, Nigel M. John, Ender Konukoglu, Danial Lashkari, José An-
1268
+ tónio Mariz, Raphael Meier, Sérgio Pereira, Doina Precup, Stephen J. Price, Tammy Riklin Raviv, Syed M. S.
1269
+ Reza, Michael Ryan, Duygu Sarikaya, Lawrence Schwartz, Hoo-Chang Shin, Jamie Shotton, Carlos A. Silva,
1270
+ Nuno Sousa, Nagesh K. Subbanna, Gabor Szekely, Thomas J. Taylor, Owen M. Thomas, Nicholas J. Tustison,
1271
+ Gozde Unal, Flor Vasseur, Max Wintermark, Dong Hye Ye, Liang Zhao, Binsheng Zhao, Darko Zikic, Marcel
1272
+ Prastawa, Mauricio Reyes, and Koen Van Leemput. The multimodal brain tumor image segmentation benchmark
1273
+ (brats). IEEE Transactions on Medical Imaging, 34(10):1993–2024, 2015.
1274
+ [33] Stefan Winzeck, Arsany Hakim, Richard McKinley, José A.A.D.S.R. Pinto, Victor Alves, Carlos Silva, Maxim
1275
+ Pisov, Egor Krivov, Mikhail Belyaev, Miguel Monteiro, Arlindo Oliveira, Youngwon Choi, Myunghee Cho Paik,
1276
+ Yongchan Kwon, Hanbyul Lee, Beom Joon Kim, Joong Ho Won, Mobarakol Islam, Hongliang Ren, David
1277
+ Robben, Paul Suetens, Enhao Gong, Yilin Niu, Junshen Xu, John M. Pauly, Christian Lucas, Mattias P. Hein-
1278
+ rich, Luis C. Rivera, Laura S. Castillo, Laura A. Daza, Andrew L. Beers, Pablo Arbelaezs, Oskar Maier, Ken
1279
+ Chang, James M. Brown, Jayashree Kalpathy-Cramer, Greg Zaharchuk, Roland Wiest, and Mauricio Reyes. Isles
1280
+ 2016 and 2017-benchmarking ischemic stroke lesion outcome prediction based on multispectral mri. Frontiers
1281
+ in Neurology, 9(SEP), September 2018. Publisher Copyright: © 2007-2018 Frontiers Media S.A. All Rights
1282
+ Reserved.
1283
+ [34] Moritz Roman Hernandez Petzsche, Ezequiel de la Rosa, Uta Hanning, Roland Wiest, Waldo Enrique Valenzuela
1284
+ Pinilla, Mauricio Reyes, Maria Ines Meyer, Sook-Lei Liew, Florian Kofler, Ivan Ezhov, David Robben, Alexander
1285
+ Hutton, Tassilo Friedrich, Teresa Zarth, Johannes Bürkle, The Anh Baran, Bjoern Menze, Gabriel Broocks, Lukas
1286
+ Meyer, Claus Zimmer, Tobias Boeckh-Behrens, Maria Berndt, Benno Ikenberg, Benedikt Wiestler, and Jan S.
1287
+ Kirschke. Isles 2022: A multi-center magnetic resonance imaging stroke lesion segmentation dataset, 2022.
1288
+ 21
1289
+
1290
+ [35] Manfredo Atzori, Arjan Gijsberts, Simone Heynen, Anne-Gabrielle Mittaz Hager, Olivier Deriaz, Patrick van der
1291
+ Smagt, Claudio Castellini, Barbara Caputo, and Henning Muller. Building the ninapro database: A resource for
1292
+ the biorobotics community. 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and
1293
+ Biomechatronics (BioRob), pages 1258–1265, 2012.
1294
+ [36] Ki-Hee Park and Seong-Whan Lee. Movement intention decoding based on deep learning for multiuser myo-
1295
+ electric interfaces. In 2016 4th International Winter Conference on Brain-Computer Interface (BCI), pages 1–2,
1296
+ February 2016.
1297
+ [37] Manfredo Atzori, Matteo Cognolato, and Henning Müller. Deep Learning with Convolutional Neural Networks
1298
+ Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands. Fron-
1299
+ tiers in Neurorobotics, 10:9, September 2016.
1300
+ [38] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780,
1301
+ 1997.
1302
+ [39] Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural
1303
+ machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Se-
1304
+ mantics and Structure in Statistical Translation, pages 103–111, Doha, Qatar, October 2014. Association for
1305
+ Computational Linguistics.
1306
+ [40] Daniel L. K. Yamins and James J. DiCarlo. Using goal-driven deep learning models to understand sensory cortex.
1307
+ Nature Neuroscience, 19(3):356–365, March 2016. Number: 3 Publisher: Nature Publishing Group.
1308
+ [41] Guido Van Rossum and Fred L. Drake. Python 3 Reference Manual. CreateSpace, Scotts Valley, CA, 2009.
1309
+ [42] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Comput-
1310
+ ing, Vienna, Austria, 2022.
1311
+ [43] Jeff Bezanson, Alan Edelman, Stefan Karpinski, and Viral B Shah. Julia: A fresh approach to numerical comput-
1312
+ ing. SIAM Review, 59(1):65–98, 2017.
1313
+ [44] Mike Innes. Flux: Elegant machine learning with julia. Journal of Open Source Software, 2018.
1314
+ [45] Aurélien Appriou, Léa Pillette, David Trocellier, Dan Dutartre, Andrzej Cichocki, and Fabien Lotte. BioPyC,
1315
+ an Open-Source Python Toolbox for Offline Electroencephalographic and Physiological Signals Classification.
1316
+ Sensors, 21(17):5740, January 2021. Number: 17 Publisher: Multidisciplinary Digital Publishing Institute.
1317
+ [46] Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katha-
1318
+ rina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, and Tonio Ball. Deep learning with
1319
+ convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, 38(11):5391–5420,
1320
+ November 2017.
1321
+ [47] Karl M. Kuntzelman, Jacob M. Williams, Phui Cheng Lim, Ashok Samal, Prahalada K. Rao, and Matthew R.
1322
+ Johnson. Deep-Learning-Based Multivariate Pattern Analysis (dMVPA): A Tutorial and a Toolbox. Frontiers in
1323
+ Human Neuroscience, 15:638052, March 2021.
1324
+ [48] Yimin Hou, Lu Zhou, Shuyue Jia, and Xiangmin Lun. A novel approach of decoding EEG four-class motor
1325
+ imagery tasks via scout ESI and CNN. Journal of Neural Engineering, 17(1):016048, February 2020.
1326
+ [49] Zied Tayeb, Nicolai Waniek, Juri Fedjaev, Nejla Ghaboosi, Leonard Rychly, Christian Widderich, Christoph
1327
+ Richter, Jonas Braun, Matteo Saveriano, Gordon Cheng, and Jörg Conradt. Gumpy: a Python toolbox suitable
1328
+ for hybrid brain-computer interfaces. Journal of Neural Engineering, 15(6):065003, December 2018.
1329
+ [50] Ya-Lin Huang, Chia-Ying Hsieh, Jian-Xue Huang, and Chun-Shu Wei. ExBrainable: An Open-Source GUI for
1330
+ CNN-based EEG Decoding and Model Interpretation. arXiv:2201.04065 [cs, eess, q-bio], January 2022. arXiv:
1331
+ 2201.04065.
1332
+ [51] Justin Shenk, Wolf Byttner, Saranraj Nambusubramaniyan, and Alexander Zoeller. Traja: A Python toolbox for
1333
+ animal trajectory analysis. Journal of Open Source Software, 6(63):3202, July 2021.
1334
+ [52] Takuto Okuno and Alexander Woodward. Vector Auto-Regressive Deep Neural Network: A Data-Driven Deep
1335
+ Learning-Based Directed Functional Connectivity Estimation Toolbox. Frontiers in Neuroscience, 15:764796,
1336
+ 2021.
1337
+ [53] Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, and Dhruv
1338
+ Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. International Journal
1339
+ of Computer Vision, 128:336–359, 2016.
1340
+ [54] Aldo Zaimi. AxonDeepSeg: automatic axon and myelin segmentation from microscopy data using convolutional
1341
+ neural networks. Scientific Reports, page 11, 2018.
1342
+ 22
1343
+
1344
+ [55] Julien Denis, Robin F. Dard, Eleonora Quiroli, Rosa Cossart, and Michel A. Picardo. DeepCINAC: A Deep-
1345
+ Learning-Based Python Toolbox for Inferring Calcium Imaging Neuronal Activity Based on Movie Visualization.
1346
+ eneuro, 7(4):ENEURO.0038–20.2020, July 2020.
1347
+ [56] Tanmay Nath, Alexander Mathis, An Chi Chen, Amir Patel, Matthias Bethge, and Mackenzie Weygandt
1348
+ Mathis. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nature Protocols,
1349
+ 14(7):2152–2176, July 2019. Number: 7 Publisher: Nature Publishing Group.
1350
+ [57] Andrew Beers, James Brown, Ken Chang, Katharina Hoebel, Jay Patel, K. Ina Ly, Sara M. Tolaney, Priscilla
1351
+ Brastianos, Bruce Rosen, Elizabeth R. Gerstner, and Jayashree Kalpathy-Cramer. DeepNeuro: an open-source
1352
+ deep learning toolbox for neuroimaging. Neuroinformatics, 19(1):127–140, January 2021.
1353
+ [58] Yuk-Hoi Yiu, Moustafa Aboulatta, Theresa Raiser, Leoni Ophey, Virginia L. Flanagin, Peter zu Eulenburg, and
1354
+ Seyed-Ahmad Ahmadi. DeepVOG: Open-source pupil segmentation and gaze estimation in neuroscience using
1355
+ deep learning. Journal of Neuroscience Methods, 324:108307, August 2019.
1356
+ [59] Xiayu Chen, Ming Zhou, Zhengxin Gong, Wei Xu, Xingyu Liu, Taicheng Huang, Zonglei Zhen, and Jia Liu.
1357
+ DNNBrain: A Unifying Toolbox for Mapping Deep Neural Networks and Brains. Frontiers in Computational
1358
+ Neuroscience, 14:580632, November 2020.
1359
+ [60] Charley Gros, Andreanne Lemay, Olivier Vincent, Lucas Rouhier, Anthime Bucquet, Joseph Paul Cohen, and
1360
+ Julien Cohen-Adad. ivadomed: A Medical Imaging Deep Learning Toolbox. arXiv:2010.09984 [cs, eess],
1361
+ October 2020. arXiv: 2010.09984.
1362
+ [61] Raffaele Mazziotti, Fabio Carrara, Aurelia Viglione, Leonardo Lupori, Luca Lo Verde, Alessandro Benedetto,
1363
+ Giulia Ricci, Giulia Sagona, Giuseppe Amato, and Tommaso Pizzorusso. MEYE: Web App for Translational
1364
+ and Real-Time Pupillometry. eneuro, 8(5):ENEURO.0122–21.2021, September 2021.
1365
+ [62] Jianxu Chen, Liya Ding, Matheus P. Viana, HyeonWoo Lee, M. Filip Sluezwski, Benjamin Morris, Melissa C.
1366
+ Hendershott, Ruian Yang, Irina A. Mueller, and Susanne M. Rafelski. The Allen Cell and Structure Segmenter: a
1367
+ new open source toolkit for segmenting 3D intracellular structures in fluorescence microscopy images. preprint,
1368
+ Cell Biology, December 2018.
1369
+ [63] Barbara Imbrosci, Dietmar Schmitz, and Marta Orlando. Automated Detection and Localization of Synaptic
1370
+ Vesicles in Electron Microscopy Images. eNeuro, 9(1), January 2022. Publisher: Society for Neuroscience
1371
+ Section: Research Article: Methods/New Tools.
1372
+ [64] Matthias G. Haberl, Christopher Churas, Lucas Tindall, Daniela Boassa, Sébastien Phan, Eric A. Bushong,
1373
+ Matthew Madany, Raffi Akay, Thomas J. Deerinck, Steven T. Peltier, and Mark H. Ellisman. CDeep3M—Plug-
1374
+ and-Play cloud-based deep learning for image segmentation. Nature Methods, 15(9):677–680, September 2018.
1375
+ Number: 9 Publisher: Nature Publishing Group.
1376
+ [65] Peter Rupprecht, Stefano Carta, Adrian Hoffmann, Mayumi Echizen, Antonin Blot, Alex C. Kwan, Yang Dan,
1377
+ Sonja B. Hofer, Kazuo Kitamura, Fritjof Helmchen, and Rainer W. Friedrich. A database and deep learning tool-
1378
+ box for noise-optimized, generalized spike inference from calcium imaging. Nature Neuroscience, 24(9):1324–
1379
+ 1337, September 2021. Number: 9 Publisher: Nature Publishing Group.
1380
+ [66] Douglas N. Greve, Benjamin Billot, Devani Cordero, Andrew Hoopes, Malte Hoffmann, Adrian V. Dalca, Bruce
1381
+ Fischl, Juan Eugenio Iglesias, and Jean C. Augustinack. A deep learning toolbox for automatic segmentation of
1382
+ subcortical limbic structures from MRI images. NeuroImage, 244:118610, December 2021.
1383
+ [67] Almir Aljovic, Shuqing Zhao, Maryam Chahin, Clara de la Rosa, Valerie Van Steenbergen, Martin Kerschen-
1384
+ steiner, and Florence M. Bareyre. A deep learning-based toolbox for Automated Limb Motion Analysis (ALMA)
1385
+ in murine models of neurological disorders. Communications Biology, 5(1):131, February 2022.
1386
+ [68] Saige Rutherford, Pascal Sturmfels, Mike Angstadt, Jasmine Hect, Jenna Wiens, Marion I. van den Heuvel,
1387
+ Dustin Scheinost, Chandra Sripada, and Moriah Thomason. Automated Brain Masking of Fetal Functional MRI
1388
+ with Open Data. Neuroinformatics, June 2021.
1389
+ [69] Elina Thibeau-Sutre, Mauricio Diaz, Ravi Hassanaly, Alexandre Routier, Didier Dormont, Olivier Colliot, and
1390
+ Ninon Burgos. ClinicaDL: an open-source deep learning software for reproducible neuroimaging processing.
1391
+ Computer Methods and Programs in Biomedicine, 2022.
1392
+ [70] Zhi Zhou, Hsien-Chi Kuo, Hanchuan Peng, and Fuhui Long. DeepNeuron: an open deep learning toolbox for
1393
+ neuron tracing. Brain Informatics, 5(2):3, June 2018.
1394
+ [71] Sarthak Pati, Siddhesh P. Thakur, Megh Bhalerao, Spyridon Thermos, Ujjwal Baid, Karol Gotkowski, Camila
1395
+ Gonzalez, Orhun Guley, Ibrahim Ethem Hamamci, Sezgin Er, Caleb Grenko, Brandon Edwards, Micah Sheller,
1396
+ Jose Agraz, Bhakti Baheti, Vishnu Bashyam, Parth Sharma, Babak Haghighi, Aimilia Gastounioti, Mark
1397
+ Bergman, Anirban Mukhopadhyay, Sotirios A. Tsaftaris, Bjoern Menze, Despina Kontos, Christos Davatzikos,
1398
+ 23
1399
+
1400
+ and Spyridon Bakas. GaNDLF: A Generally Nuanced Deep Learning Framework for Scalable End-to-End Clin-
1401
+ ical Workflows in Medical Imaging. arXiv:2103.01006 [cs], September 2021. arXiv: 2103.01006.
1402
+ [72] Dongsheng Xiao, Brandon J. Forys, Matthieu P. Vanni, and Timothy H. Murphy. MesoNet allows automated
1403
+ scaling and segmentation of mouse mesoscale cortical maps using machine learning. Nature Communications,
1404
+ 12:5992, October 2021.
1405
+ [73] Cristina Segalin, Jalani Williams, Tomomi Karigo, May Hui, Moriel Zelikowsky, Jennifer J Sun, Pietro Perona,
1406
+ David J Anderson, and Ann Kennedy. The Mouse Action Recognition System (MARS) software pipeline for
1407
+ automated analysis of social behaviors in mice. eLife, 10:e63720, November 2021. Publisher: eLife Sciences
1408
+ Publications, Ltd.
1409
+ [74] Eli Gibson, Wenqi Li, Carole Sudre, Lucas Fidon, Dzhoshkun I. Shakir, Guotai Wang, Zach Eaton-Rosen, Robert
1410
+ Gray, Tom Doel, Yipeng Hu, Tom Whyntie, Parashkev Nachev, Marc Modat, Dean C. Barratt, Sébastien Ourselin,
1411
+ M. Jorge Cardoso, and Tom Vercauteren. NiftyNet: a deep-learning platform for medical imaging. Computer
1412
+ Methods and Programs in Biomedicine, 158:113–122, May 2018.
1413
+ [75] Nicholas J. Tustison, Philip A. Cook, Andrew J. Holbrook, Hans J. Johnson, John Muschelli, Gabriel A. Devenyi,
1414
+ Jeffrey T. Duda, Sandhitsu R. Das, Nicholas C. Cullen, Daniel L. Gillen, Michael A. Yassa, James R. Stone,
1415
+ James C. Gee, and Brian B. Avants. The ANTsX ecosystem for quantitative biological and medical imaging.
1416
+ Scientific Reports, 11(1):9068, April 2021.
1417
+ [76] Mathilde Josserand, Orsola Rosa-Salva, Elisabetta Versace, and Bastien S. Lemaire. Visual Field Analysis: A
1418
+ reliable method to score left and right eye use using automated tracking. Behavior Research Methods, October
1419
+ 2021.
1420
+ [77] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image
1421
+ segmentation. CoRR, abs/1505.04597, 2015.
1422
+ [78] Rajesh Rao and Dana Ballard. Predictive coding in the visual cortex: a functional interpretation of some extra-
1423
+ classical receptive-field effects. Nature neuroscience, 2:79–87, 02 1999.
1424
+ [79] Neuroscience Cloud Analysis As a Service. bioRxiv.
1425
+ [80] M. Jorge Cardoso, Wenqi Li, Richard Brown, Nic Ma, Eric Kerfoot, Yiheng Wang, Benjamin Murrey, Andriy
1426
+ Myronenko, Can Zhao, Dong Yang, Vishwesh Nath, Yufan He, Ziyue Xu, Ali Hatamizadeh, Andriy Myronenko,
1427
+ Wentao Zhu, Yun Liu, Mingxin Zheng, Yucheng Tang, Isaac Yang, Michael Zephyr, Behrooz Hashemian, Sachi-
1428
+ danand Alle, Mohammad Zalbagi Darestani, Charlie Budd, Marc Modat, Tom Vercauteren, Guotai Wang, Yiwen
1429
+ Li, Yipeng Hu, Yunguan Fu, Benjamin Gorman, Hans Johnson, Brad Genereaux, Barbaros S. Erdal, Vikash
1430
+ Gupta, Andres Diaz-Pinto, Andre Dourson, Lena Maier-Hein, Paul F. Jaeger, Michael Baumgartner, Jayashree
1431
+ Kalpathy-Cramer, Mona Flores, Justin Kirby, Lee A. D. Cooper, Holger R. Roth, Daguang Xu, David Bericat,
1432
+ Ralf Floca, S. Kevin Zhou, Haris Shuaib, Keyvan Farahani, Klaus H. Maier-Hein, Stephen Aylward, Prerna Do-
1433
+ gra, Sebastien Ourselin, and Andrew Feng. MONAI: An open-source framework for deep learning in healthcare,
1434
+ November 2022. arXiv:2211.02701 [cs].
1435
+ [81] Bhavin Choksi, Milad Mozafari, Callum Biggs O’ May, B. ADOR, Andrea Alamia, and Rufin VanRullen. Pred-
1436
+ ify: Augmenting deep neural networks with brain-inspired predictive coding dynamics. In Advances in Neural
1437
+ Information Processing Systems, volume 34, pages 14069–14083. Curran Associates, Inc., 2021.
1438
+ [82] Lukas Muttenthaler and Martin N. Hebart. THINGSvision: A Python Toolbox for Streamlining the Extraction
1439
+ of Activations From Deep Neural Networks. Frontiers in Neuroinformatics, 15:679838, 2021.
1440
+ [83] Fernando Pérez-García, Rachel Sparks, and Sébastien Ourselin. TorchIO: A Python library for efficient loading,
1441
+ preprocessing, augmentation and patch-based sampling of medical images in deep learning. Computer Methods
1442
+ and Programs in Biomedicine, 208:106236, September 2021.
1443
+ [84] Ekaba Bisong. Google Colaboratory. In Building Machine Learning and Deep Learning Models on Google
1444
+ Cloud Platform: A Comprehensive Guide for Beginners, pages 59–64. Apress, Berkeley, CA, 2019.
1445
+ [85] Ganga Prasad Basyal, Bhaskar Prasad Rimal, and David Zeng. A systematic review of natural language process-
1446
+ ing for knowledge management in healthcare. CoRR, abs/2007.09134, 2020.
1447
+ [86] Saskia Locke, Anthony Bashall, Sarah Al-Adely, John Moore, Anthony Wilson, and Gareth B. Kitchen. Natural
1448
+ language processing in medicine: A review. Trends in Anaesthesia and Critical Care, 38:4–9, 2021.
1449
+ [87] Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Graph neural networks:
1450
+ A review of methods and applications. 12 2018.
1451
+ [88] Xiao-Meng Zhang, Li Liang, Lin Liu, and Ming-Jing Tang. Graph Neural Networks and Their Current Applica-
1452
+ tions in Bioinformatics. Frontiers in Genetics, 12:690049, 2021.
1453
+ 24
1454
+
1455
+ [89] Michelle M. Li, Kexin Huang, and Marinka Zitnik. Graph Representation Learning in Biomedicine, June 2022.
1456
+ arXiv:2104.04883 [cs, q-bio].
1457
+ [90] Livia Faes, Siegfried K Wagner, Dun Jack Fu, Xiaoxuan Liu, Edward Korot, Joseph R Ledsam, Trevor Back,
1458
+ Reena Chopra, Nikolas Pontikos, Christoph Kern, Gabriella Moraes, Martin K Schmid, Dawn Sim, Konstantinos
1459
+ Balaskas, Lucas M Bachmann, Alastair K Denniston, and Pearse A Keane. Automated deep learning design for
1460
+ medical image classification by health-care professionals with no coding experience: a feasibility study. The
1461
+ Lancet Digital Health, 1(5):e232–e242, September 2019.
1462
+ 25
1463
+
0dE4T4oBgHgl3EQfZgyj/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
0tFAT4oBgHgl3EQfCRy0/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d1d95d961160ea5f633a0068fcb664afd31353a3d5acd5e0ac1e88a3db2d4430
3
+ size 347483
1tAyT4oBgHgl3EQf1flH/content/tmp_files/2301.00735v1.pdf.txt ADDED
@@ -0,0 +1,1622 @@
1
+ arXiv:2301.00735v1 [math.DG] 2 Jan 2023
2
+ FAILURE OF CURVATURE-DIMENSION CONDITIONS ON
3
+ SUB-RIEMANNIAN MANIFOLDS VIA TANGENT ISOMETRIES
4
+ LUCA RIZZI AND GIORGIO STEFANI
5
+ Abstract. We prove that, on any sub-Riemannian manifold endowed with a positive
6
+ smooth measure, the Bakry–Émery inequality for the corresponding sub-Laplacian,
7
+ (1/2) ∆(∥∇u∥2) ≥ g(∇u, ∇∆u) + K∥∇u∥2,    K ∈ R,
10
+ implies the existence of enough Killing vector fields on the tangent cone to force the latter
11
+ to be Euclidean at each point, yielding the failure of the curvature-dimension condition
12
+ in full generality. Our approach does not apply to non-strictly-positive measures. In
13
+ fact, we prove that the weighted Grushin plane does not satisfy any curvature-dimension
14
+ condition, but, nevertheless, does admit an a.e. pointwise version of the Bakry–Émery
15
+ inequality.
16
+ As recently observed by Pan and Montgomery, one half of the weighted
17
+ Grushin plane satisfies the RCD(0, N) condition, yielding a counterexample to gluing
18
+ theorems in the RCD setting.
19
+ 1. Introduction and statements
20
+ In the last twenty years, there has been an impressive effort in extending the concept of
21
+ ‘Ricci curvature lower bound’ to non-Riemannian structures, and even to general metric
22
+ spaces equipped with a measure (metric-measure spaces, for short). We refer the reader
23
+ to the ICM notes [3] for a survey of this line of research.
24
+ There are two distinct points of view on the matter, traditionally known as the La-
25
+ grangian and Eulerian approaches, respectively.
26
+ The Lagrangian point of view is the one adopted by Lott–Villani and Sturm [36, 48,
27
+ 49]. In this formulation, Ricci curvature lower bounds are encoded by convexity-type
28
+ inequalities for entropy functionals on the Wasserstein space. Such inequalities are called
29
+ curvature-dimension conditions, CD(K, N) for short, where K ∈ R represents the lower
30
+ bound on the curvature and N ∈ [1, ∞] stands for an upper bound on the dimension.
31
+ The Eulerian point of view, instead, employs the metric-measure structure to define an
32
+ energy form and, in turn, an associated diffusion operator. The notion of Ricci curvature
33
+ lower bound is therefore encoded in the so-called Bakry–Émery inequality, BE(K, N) for
34
+ short, for the diffusion operator, which can be expressed in terms of a suitable Gamma
35
+ calculus, see the monograph [10].
36
+ Thanks to several key contributions [4, 6, 7, 24], the Lagrangian and the Eulerian ap-
37
+ proaches are now known to be essentially equivalent. In particular, CD(K, N) always
38
+ Date: January 3, 2023.
39
+ 2020 Mathematics Subject Classification. Primary 53C17. Secondary 54E45, 28A75.
40
+ Key words and phrases. Sub-Riemannian manifold, CD(K, ∞) condition, Bakry–Émery inequality,
41
+ infinitesimally Hilbertian, Grushin plane, privileged coordinates.
42
+ 1
43
+
44
+ 2
45
+ L. RIZZI AND G. STEFANI
46
+ implies BE(K, N) in infinitesimal Hilbertian metric-measure spaces, as introduced in [25],
47
+ while the converse implication requires further technical assumptions.
48
+ Such synthetic theory of curvature-dimension conditions, besides being consistent with
49
+ the classical notions of Ricci curvature and dimension on smooth Riemannian manifolds,
50
+ is stable under pointed-measure Gromov–Hausdorff convergence. Furthermore, it yields
51
+ a comprehensive approach for establishing all results typically associated with Ricci cur-
52
+ vature lower bounds, like Poincaré, Sobolev, log-Sobolev and Gaussian isoperimetric in-
53
+ equalities, as well as Brunn–Minkowski, Bishop–Gromov and Bonnet–Myers inequalities.
54
+ 1.1. The sub-Riemannian framework. Although the aforementioned synthetic cur-
55
+ vature-dimension conditions embed a large variety of metric-measure spaces, a relevant
56
+ and widely-studied class of smooth structures is left out—the family of sub-Riemmanian
57
+ manifolds. A sub-Riemannian structure is a natural generalization of a Riemannian one,
58
+ in the sense that its distance is induced by a scalar product that is defined only on a
59
+ smooth sub-bundle of the tangent bundle, whose rank possibly varies along the manifold.
60
+ See the monographs [2,40,45] for a detailed presentation.
61
+ The first result in this direction was obtained by Driver–Melcher [23], who proved that
62
+ an integrated version of the BE(K, ∞), the so-called pointwise gradient estimate for the
63
+ heat flow, is false for the three-dimensional Heisenberg group.
64
+ In [31], Juillet proved the failure of the CD(K, ∞) property for all Heisenberg groups
65
+ (and even for the strictly related Grushin plane, see [32]). Later, Juillet [33] extended his
66
+ result to any sub-Riemannian manifold endowed with a possibly rank-varying distribution
67
+ of rank strictly smaller than the manifold’s dimension, and with any positive smooth
68
+ measure, by exploiting the notion of ample curves introduced in [1]. The idea of [31,33]
69
+ is to construct a counterexample to the Brunn–Minkowski inequality.
70
+ The ‘no-CD theorem’ of [31] was extended to all Carnot groups by Ambrosio and the
71
+ second-named author in [8, Prop. 3.6] with a completely different technique, namely, by
72
+ exploiting the optimal version of the reverse Poincaré inequality obtained in [16].
73
+ In the case of sub-Riemannian manifolds endowed with an equiregular distribution and
74
+ a positive smooth measure, Huang–Sun [29] proved the failure of the CD(K, N) condition
75
+ for all values of K ∈ R and N ∈ (1, ∞) contradicting a bi-Lipschitz embedding result.
76
+ Very recently, in order to address the structures left out in [33], Magnabosco–Rossi [37]
77
+ extended the ‘no-CD theorem’ to almost-Riemannian manifolds M of dimension 2
78
+ or strongly regular. The approach of [37] relies on the localization technique developed by
79
+ Cavalletti–Mondino [19] in metric-measure spaces.
80
+ To complete the picture, we mention that several replacements for the Lott–Sturm–
81
+ Villani curvature-dimension property have been proposed and studied in the sub-Rieman-
82
+ nian framework in recent years. Far from being complete, we refer the reader to [11–15,38]
83
+ for an account on the Lagrangian approach, to [17] concerning the Eulerian one, and finally
84
+ to [47] for a first link between entropic inequalities and contraction properties of the heat
85
+ flow in the special setting of metric-measure groups.
86
+ Main aim. At the present stage, a ‘no-CD theorem’ for sub-Riemannian structures in
87
+ full generality is missing, since the aforementioned approaches [8,23,29,31,33,37] either
88
+ require the ambient space to satisfy some structural assumptions, or leave out the infinite
89
+ dimensional case N = ∞.
90
+
91
+ FAILURE OF CD CONDITIONS ON SUB-RIEMANNIAN MANIFOLDS
92
+ 3
93
+ The main aim of the present paper is to fill this gap by showing that (possibly rank-
94
+ varying) sub-Riemannian manifolds do not satisfy any curvature bound in the sense of
95
+ Lott–Sturm–Villani or Bakry–Émery when equipped with a positive smooth measure, i.e.,
96
+ a Radon measure whose density in local charts with respect to the Lebesgue measure is
97
+ a strictly positive smooth function.
98
+ 1.2. Failure of the Bakry–Émery inequality. The starting point of our strategy is
99
+ the weakest curvature-dimension condition, as we now define.
100
+ Definition 1.1 (Bakry–Émery inequality). We say that a sub-Riemannian manifold
101
+ (M, d) endowed with a positive smooth measure m satisfies the Bakry–Émery BE(K, ∞)
102
+ inequality, for K ∈ R, if
103
+ (1/2) ∆(∥∇u∥2) ≥ g(∇u, ∇∆u) + K∥∇u∥2    for all u ∈ C∞(M),    (1.1)
107
+ where ∆ is the corresponding sub-Laplacian, and ∇ the sub-Riemannian gradient.
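+ [Editorial aside, not part of the paper: in the smooth Riemannian case, with ∆ the Laplace–Beltrami operator and m the
+ Riemannian volume, inequality (1.1) is a consequence of Bochner's formula whenever Ric ≥ K. A minimal sketch in LaTeX:]
+ \begin{align*}
+   \tfrac{1}{2}\Delta\bigl(\|\nabla u\|^{2}\bigr)
+     &= \|\mathrm{Hess}\,u\|^{2} + g(\nabla u,\nabla\Delta u) + \mathrm{Ric}(\nabla u,\nabla u) \\
+     &\geq g(\nabla u,\nabla\Delta u) + K\,\|\nabla u\|^{2}
+       \qquad \text{if } \mathrm{Ric} \geq K g ,
+ \end{align*}
+ which is the Riemannian prototype of the Bakry–Émery condition stated above.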
108
+ Our first main result is the following rigidity property for sub-Riemannian structures
109
+ supporting the Bakry–Émery inequality (1.1).
110
+ Theorem 1.2 (no-BE). Let (M, d) be a complete sub-Riemannian manifold endowed
111
+ with a positive smooth measure m. If (M, d, m) satisfies the BE(K, ∞) inequality for some
112
+ K ∈ R, then rank Dx = dim M at each x ∈ M, so that (M, d) is Riemannian.
113
+ The idea behind our proof of Theorem 1.2 is to show that the metric tangent cone
114
+ in the sense of Gromov [26] at each point of (M, d) is Euclidean. This line of thought is
115
+ somehow reminiscent of the deep structural result for RCD(K, N) spaces, with K ∈ R and
116
+ N ∈ (1, ∞), proved by Mondino–Naber [39]. However, differently from [39], Theorem 1.2
117
+ provides information about the metric tangent cone at each point of the manifold. Showing
118
+ that the distribution D is Riemannian at almost every point in fact would not be enough,
119
+ as this would not rule out almost-Riemannian structures.
120
+ Starting from (1.1), we first blow-up the sub-Riemannian structure and pass to its
121
+ metric-measure tangent cone, showing that (1.1) is preserved with K = 0. Note that, in
122
+ this blow-up procedure, the positivity of the density of m is crucial, since otherwise the
123
+ resulting metric tangent cone would be endowed with the null measure.
124
+ The resulting blown-up sub-Riemannian space is isometric to a homogeneous space
125
+ of the form G/H, where G = exp g is the Carnot group associated to the underlying
126
+ (finite-dimensional and stratified) Lie algebra g of bracket-generating vector fields, and
127
+ H = exp h is its subgroup corresponding to the Lie subalgebra h of vector fields vanishing
128
+ at the origin, see [18]. Of course, the most difficult case is when H is non-trivial, that is,
129
+ the tangent cone is not a Carnot group.
130
+ At this point, the key idea is to show that the Bakry–Émery inequality BE(K, ∞)
131
+ implies the existence of special isometries on the tangent cone.
132
+ Definition 1.3 (Sub-Riemannian isometries). Let M be a sub-Riemannian manifold,
133
+ with distribution D and metric g. A diffeomorphism φ : M → M is an isometry if
134
+ (φ∗D)|x = Dφ(x)
135
+ for all x ∈ M,
136
+ (1.2)
137
+ and, furthermore, φ∗ is an orthogonal map with respect to g. We say that a smooth vector
138
+ field V is Killing if its flow φV
139
+ t is an isometry for all t ∈ R.
140
+
141
+ 4
142
+ L. RIZZI AND G. STEFANI
143
+ For precise definitions of g and h in the next statement, we refer to Section 2.4.
144
+ Theorem 1.4 (Existence of Killing fields). Let (M, d) be a complete sub-Riemannian
145
+ manifold equipped with a positive smooth measure m. If (M, d, m) satisfies the BE(K, ∞)
146
+ inequality for some K ∈ R, then, for the nilpotent approximation at any given point, there
147
+ exists a vector space i ⊂ g1 such that
148
+ g1 = i ⊕ h1
149
+ (1.3)
150
+ and every Y ∈ i is a Killing vector field.
151
+ The existence of the space of isometries i forces the Lie algebra g to be commutative and
152
+ of maximal rank, thus implying that the original manifold (M, d) was in fact Riemannian.
153
+ Theorem 1.5 (Killing implies commutativity). If there exists a subspace i ⊂ g1 of Killing
154
+ vector fields such that g1 = i ⊕ h1, then g is commutative.
155
+ Theorem 1.5 states that, if a Carnot group contains enough horizontal symmetries, then
156
+ it must be commutative. As it will be evident from its proof, Theorem 1.5 holds simply
157
+ assuming that, for each V ∈ i, the flow φV
158
+ t is pointwise distribution-preserving, namely it
159
+ satisfies (1.2), without being necessarily isometries.
160
+ 1.3. Infinitesimal Hilbertianity. The Bakry–Émery inequality BE(K, ∞) in (1.1) is a
161
+ consequence of the CD(K, ∞) condition as soon as the ambient metric-measure space is
162
+ infinitesimal Hilbertian as defined in [25].
163
+ Let (X, d) be a complete separable metric space, m be a locally bounded Borel mea-
164
+ sure, and q ∈ [1, ∞). We let |Du|w,q ∈ Lq(X, m) be the minimal q-upper gradient of a
165
+ measurable function u : X → R, see [5, Sec. 4.4]. We define the Banach space
166
+ W1,q(X, d, m) = {u ∈ Lq(X, m) : |Du|w,q ∈ Lq(X, m)}
167
+ with the norm
168
+ ∥u∥W1,q(X,d,m) = ( ∥u∥^q_{Lq(X,m)} + ∥ |Du|w,q ∥^q_{Lq(X,m)} )^{1/q} .
174
+ Definition 1.6 (Infinitesimal Hilbertianity). A metric measure space (X, d, m) is in-
175
+ finitesimally Hilbertian if W1,2(X, d, m) is a Hilbert space.
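+ [Editorial aside, not part of the paper: by the classical Jordan–von Neumann criterion, Definition 1.6 amounts to asking
+ that the W1,2 norm satisfies the parallelogram identity,]
+ \[
+   2\,\|u\|_{W^{1,2}}^{2} + 2\,\|v\|_{W^{1,2}}^{2}
+     = \|u+v\|_{W^{1,2}}^{2} + \|u-v\|_{W^{1,2}}^{2}
+   \qquad \text{for all } u, v \in W^{1,2}(X,\mathsf{d},\mathfrak{m}),
+ \]
+ in which case the norm is induced by a scalar product.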
176
+ The infinitesimal Hilbertianity of sub-Riemannian structures has been recently proved
177
+ in [35], with respect to any Radon measure.
178
+ In particular, Theorem 1.2 immediately
179
+ yields the following ‘no-CD theorem’ for sub-Riemannian manifolds, thus extending all
180
+ the aforementioned results [8,23,29,31,33,37].
181
+ Corollary 1.7 (no-CD). Let (M, d) be a complete sub-Riemannian manifold endowed
182
+ with a positive smooth measure m. If (M, d, m) satisfies the CD(K, ∞) condition for some
183
+ K ∈ R, then (M, d) is Riemannian.
184
+ However, since the measure in Corollary 1.7 is positive and smooth, we can avoid relying
185
+ on the general result of [35], instead providing a simpler and self-contained proof
186
+ of the infinitesimal Hilbertianity property. In particular, we prove the following result,
187
+ which actually refines [35, Th. 5.6] in the case of smooth measures. In the following,
188
+ HW1,q(M, m) denotes the sub-Riemannian Sobolev spaces (see Section 2.2).
189
+
190
+ FAILURE OF CD CONDITIONS ON SUB-RIEMANNIAN MANIFOLDS
191
+ 5
192
+ Theorem 1.8 (Infinitesimal Hilbertianity). Let q ∈ (1, ∞). Let (M, d) be a complete sub-
193
+ Riemannian manifold equipped with a positive smooth measure m. The following hold.
194
+ (i) W1,q(M, d, m) = HW1,q(M, m), with |Du|w,q = ∥∇u∥ m-a.e. on M for all u ∈
195
+ W1,q(M, d, m). In particular, taking q = 2, (M, d, m) is infinitesimally Hilbertian.
196
+ (ii) If (M, d, m) satisfies the CD(K, ∞) condition for some K ∈ R, then the Bakry–
197
+ Émery BE(K, ∞) inequality (1.1) holds on M.
198
+ Note that Theorem 1.8 holds for less regular measures, see Remark 3.6.
199
+ Remark 1.9 (The case of a.e. smooth measures). Theorem 1.8 can be adapted also to
200
+ the case of a Borel and locally finite measure m which is smooth and positive only on Ω,
201
+ where Ω ⊂ M is an open set with m(∂Ω) = 0. In this case, we obtain HW1,q(Ω, m) =
202
+ W1,q(Ω, d, m), with |Du|w,q = ∥∇u∥ m-a.e. on Ω for all u ∈ W1,q(Ω, d, m). In particular,
203
+ if m is smooth and positive out of a closed set Z, with m(Z) = 0, an elementary ap-
204
+ proximation argument proves that (M, d, m) is infinitesimally Hilbertian and, if (M, d, m)
205
+ satisfies the CD(K, ∞) condition for K ∈ R, then the Bakry-Émery BE(K, ∞) inequality
206
+ (1.1) holds on M \Z. This is the case, for example, of the Grushin planes and half-planes
207
+ with weighted measures of Section 1.5. The proof follows the same argument of the one of
208
+ Theorem 1.8, exploiting the locality of the q-upper gradient, see for example [5, Sec. 8.2]
209
+ and [25, Prop. 2.6], and similar properties for the distributional derivative.
210
+ 1.4. An alternative approach to the ‘no-CD theorem’. We mention an alternative
211
+ proof of the ‘no-CD theorem’ for almost-Riemannian structures (i.e., sub-Riemannian
212
+ structures that are Riemannian outside a closed nowhere dense singular set). The strategy
213
+ relies on the Gromov-Hausdorff continuity of the metric tangent at interior points of
214
+ geodesics in RCD(K, N) spaces, with N < ∞, proved by Deng in [22].
215
+ For example, consider the standard Grushin plane (introduced in Section 1.5) equipped
216
+ with a smooth positive measure. The curve γ(t) = (t, 0), t ∈ R, is a geodesic between
217
+ any two of its points. The metric tangent at γ(t) is (isometric to) the Euclidean plane for
218
+ every t ̸= 0, while it is (isometric to) the Grushin plane itself for t = 0. Since the Grushin
219
+ plane cannot be bi-Lipschitz embedded into the Euclidean plane, the two spaces are at
220
+ positive Gromov-Hausdorff distance, contradicting the continuity result.
221
+ This strategy has a few drawbacks.
222
+ On the one hand, it relies on the (non-trivial)
223
+ machinery developed in [22].
224
+ Consequently, this argument does not work in the case
225
+ N = ∞. On the other hand, the formalization of this strategy for general almost-Rie-
226
+ mannian structures requires certain quantitative bi-Lipschitz non-embedding results for
227
+ almost-Riemannian structures into Euclidean spaces, which we are able to prove only
228
+ under the same assumptions of [37].
229
+ 1.5. Weighted Grushin structures. When the density of the smooth measure is al-
230
+ lowed to vanish, the ‘no-CD theorem’ breaks down. In fact, in this situation, the following
231
+ two interesting phenomena occur:
232
+ (A) the Bakry-Émery BE(K, ∞) inequality no longer implies the CD(K, ∞) condition;
233
+ (B) there exist almost-Riemannian structures with boundary satisfying the CD(0, N)
234
+ condition for N ∈ [1, ∞].
235
+
236
+ 6
237
+ L. RIZZI AND G. STEFANI
238
+ We provide examples of both phenomena on the so-called weighted Grushin plane. This
239
+ is the sub-Riemannian structure on R2 induced by the family F = {X, Y }, where
240
+ X = ∂x,
241
+ Y = x ∂y,
242
+ (x, y) ∈ R2.
243
+ (1.4)
244
+ The induced distribution D = span{X, Y } has maximal rank outside the singular region
245
+ S = {x = 0} and rank 1 on S. Since [X, Y ] = ∂y on R2, the resulting sub-Riemannian
246
+ metric space (R2, d) is Polish and geodesic. It is almost-Riemannian in the sense that, out
247
+ of S, the metric is locally equivalent to the Riemannian one given by the metric tensor
248
+ g = dx ⊗ dx + (1/x2) dy ⊗ dy,    x ≠ 0.    (1.5)
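+ [Editorial aside, not part of the paper: a two-line check of the claims above for the fields in (1.4).]
+ \[
+   [X, Y] = [\partial_x,\; x\,\partial_y] = (\partial_x x)\,\partial_y = \partial_y ,
+ \]
+ so {X, Y} is bracket-generating on all of R2, including on S = {x = 0}. Off S the frame {X, Y} is orthonormal, hence
+ ∂y = x^{-1} Y has squared norm 1/x2, which is exactly the coefficient of dy ⊗ dy in (1.5).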
252
+ We endow the metric space (R2, d) with the weighted Lebesgue measure
253
+ mp = |x|p dx dy,
254
+ where p ∈ R is a parameter. The choice p = −1 corresponds to the Riemannian density
255
+ volg = (1/|x|) dx dy,    x ≠ 0,    (1.6)
259
+ so that
260
+ mp = e−V volg,
261
+ V (x) = −(p + 1) log |x|,
262
+ x ̸= 0.
263
+ (1.7)
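+ [Editorial aside, not part of the paper: a direct verification of (1.6)–(1.7), using det g = 1/x2 for the tensor (1.5).]
+ \[
+   \mathrm{vol}_g = \sqrt{\det g}\;dx\,dy = \frac{1}{|x|}\,dx\,dy ,
+   \qquad
+   e^{-V}\,\mathrm{vol}_g = |x|^{p+1}\cdot\frac{1}{|x|}\,dx\,dy = |x|^{p}\,dx\,dy = \mathfrak{m}_p ,
+ \]
+ consistently with V(x) = −(p + 1) log |x|.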
264
+ We call the metric-measure space Gp = (R2, d, mp) the (p-)weighted Grushin plane.
265
+ We can now state the following result, illustrating phenomenon (A).
266
+ Theorem 1.10. Let p ∈ R and let Gp = (R2, d, mp) be the weighted Grushin plane.
267
+ (i) If p ≥ 0, then Gp does not satisfy the CD(K, ∞) property for any K ∈ R.
268
+ (ii) If p ≥ 1, then Gp satisfies the BE(0, ∞) inequality (1.1) almost everywhere.
269
+ To prove (i), we show that the corresponding Brunn–Minkowski inequality is violated.
270
+ In fact, the case p = 0 is due to Juillet [32], while the case p > 0 can be achieved via a
271
+ simple argument which was pointed out to us by J. Pan. Claim (ii), instead, is obtained
272
+ by direct computations.
273
+ Somewhat surprisingly, the weighted Grushin half-plane G+p —obtained by restricting
275
+ the metric-measure structure of Gp to the (closed) half-plane [0, ∞)×R—does satisfy the
276
+ CD(0, N) condition for sufficiently large N ∈ [1, ∞]. Precisely, we can prove the following
277
+ result, illustrating phenomenon (B).
278
+ Theorem 1.11. Let p ≥ 1. The weighted Grushin half-plane G+p satisfies the CD(0, N)
+ condition if and only if N ≥ Np, where Np ∈ (2, ∞] is given by
+ Np = (p + 1)^2/(p − 1) + 2,    (1.8)
+ with the convention that N1 = ∞. Furthermore, G+p is infinitesimally Hilbertian, and it
287
+ is thus an RCD(0, N) space for N ≥ Np.
288
+ While we were completing this work, Pan and Montgomery [41] observed that the spaces
289
+ built in [20, 42] as Ricci limits are actually the weighted Grushin half-spaces presented
290
+ above. Our construction and method of proof are more direct with respect to the approach
291
+ of [20,42], and easily yield sharp dimensional bounds.
292
+
293
+ FAILURE OF CD CONDITIONS ON SUB-RIEMANNIAN MANIFOLDS
294
+ 7
295
+ 1.6. Counterexample to gluing theorems. We end this introduction with an inter-
296
+ esting by-product of our analysis, in connection with the so-called gluing theorems.
297
+ Perelman’s Doubling Theorem [43, Sect. 5.2] states that a finite dimensional Alexan-
298
+ drov space with a curvature lower bound can be doubled along its boundary yielding an
299
+ Alexandrov space with same curvature lower bound and dimension. This result has been
300
+ extended by Petrunin [44, Th. 2.1] to the gluing of Alexandrov spaces.
301
+ It is interesting to understand whether these classical results hold true for general
302
+ metric-measure spaces satisfying synthetic Ricci curvature lower bounds in the sense of
303
+ Lott–Sturm–Villani. In [34], the gluing theorem was proved for CD(K, N) spaces with
304
+ Alexandrov curvature bounded from below (while it is false for MCP spaces, see [46]).
305
+ Here we obtain that, in general, the assumption of Alexandrov curvature bounded
306
+ from below cannot be removed from the results in [34]. More precisely, Theorems 1.10
307
+ and 1.11, and the fact that the metric-measure double of the Grushin half-plane G+p is Gp
309
+ (see [46, Prop. 6]) yield the following corollary.
310
+ Corollary 1.12 (Counterexample to gluing in RCD spaces). For all N ≥ 10, there exists
311
+ a geodesically convex RCD(0, N) metric-measure space with boundary such that its metric-
312
+ measure double does not satisfy the CD(K, ∞) condition for any K ∈ R.
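+ [Editorial aside, not part of the paper: the threshold N ≥ 10 in Corollary 1.12 is the minimum of (1.8) over p > 1. A
+ short calculus check:]
+ \[
+   \frac{d}{dp}\,\frac{(p+1)^{2}}{p-1}
+     = \frac{2(p+1)(p-1)-(p+1)^{2}}{(p-1)^{2}}
+     = \frac{(p+1)(p-3)}{(p-1)^{2}} = 0
+   \iff p = 3 ,
+   \qquad
+   N_{3} = \frac{16}{2} + 2 = 10 ,
+ \]
+ so the smallest admissible dimension bound among the half-planes G+p with p > 1 is N = 10.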
313
+ In [34, Conj. 1.6], the authors conjecture the validity of the gluing theorem for non-
314
+ collapsed RCD(K, N), with N the Hausdorff dimension of the metric-measure space.
315
+ As introduced in [21], a non-collapsed RCD(K, N) space is an infinitesimally Hilbertian
316
+ CD(K, N) space with m = H N, where H N denotes the N-dimensional Hausdorff mea-
317
+ sure of (X, d). Since the weighted half-Grushin spaces are indeed collapsed, Corollary 1.12
318
+ also shows that the non-collapsing assumption cannot be removed from [34, Conj. 1.6].
319
+ 1.7. Acknowledgments. We wish to thank Michel Bonnefont for fruitful discussions
320
+ and, in particular, for bringing some technical details in [23] that inspired the strategy of
321
+ the proof of Theorem 1.2 to our attention.
322
+ This work has received funding from the European Research Council (ERC) under the
323
+ European Union’s Horizon 2020 research and innovation programme (grant agreement No.
324
+ 945655) and the ANR grant ‘RAGE’ (ANR-18-CE40-0012). The second-named author
325
+ is member of the Istituto Nazionale di Alta Matematica (INdAM), Gruppo Nazionale
326
+ per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA), and is par-
327
+ tially supported by the INdAM–GNAMPA 2022 Project Analisi geometrica in strutture
328
+ subriemanniane, codice CUP_E55F22000270001.
329
+ 2. Preliminaries
330
+ In this section, we introduce some notation and recall some results about sub-Rieman-
331
+ nian manifolds and curvature-dimension conditions.
332
+ 2.1. Sub-Riemannian structures. For L ∈ N, we let F = {X1, . . . , XL} be a family
333
+ of smooth vector fields globally defined on a smooth n-dimensional manifold M, n ≥ 2.
334
+ The (generalized) sub-Riemannian distribution induced by the family F is defined by
335
+ D = ∪_{x∈M} Dx,    Dx = span{X1|x, . . . , XL|x} ⊂ TxM,    x ∈ M.    (2.1)
342
+
343
+ 8
344
+ L. RIZZI AND G. STEFANI
345
+ Note that we do not require the dimension of Dx to be constant as x ∈ M varies, that is,
346
+ we may consider rank-varying distributions. With a standard abuse of notation, we let
347
+ Γ(D) = C∞-module generated by F.
348
+ Notice that, for any smooth vector field V , it holds
349
+ V ∈ Γ(D) =⇒ Vx ∈ Dx for all x ∈ M,
350
+ but the converse is false in general. We let
351
+ ∥V ∥x = min{ |u| : u ∈ RL such that V = ∑_{i=1}^{L} ui Xi|x, Xi ∈ F }    (2.2)
360
+ whenever V ∈ D and x ∈ M. The norm ∥ · ∥x induced by the family F satisfies the
361
+ parallelogram law and, consequently, it is induced by a scalar product
362
+ gx : Dx × Dx → R.
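+ [Editorial aside, not part of the paper: an example of the norm (2.2) for the Grushin family (1.4). At a point with
+ x ≠ 0 the fields X, Y are linearly independent, so the representation of ∂y is unique:]
+ \[
+   \partial_y = 0\cdot X + \tfrac{1}{x}\,Y
+   \quad\Longrightarrow\quad
+   \|\partial_y\|_{(x,y)} = \frac{1}{|x|} ,
+ \]
+ which recovers the metric tensor (1.5); at points of S = {x = 0}, instead, ∂y does not belong to D.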
363
+ An admissible curve is a locally Lipschitz in charts path γ : [0, 1] → M such that there
364
+ exists a control u ∈ L∞([0, 1]; RL) such that
365
+ ˙γ(t) = ∑_{i=1}^{L} ui(t) Xi|γ(t)    for a.e. t ∈ [0, 1].
371
+ The length of an admissible curve γ is defined via the norm (2.2) as
372
+ length(γ) = ∫_0^1 ∥˙γ(t)∥γ(t) dt
375
+ and the Carnot–Carathéodory (or sub-Riemannian) distance between x, y ∈ M is
376
+ d(x, y) = inf{length(γ) : γ admissible with γ(0) = x, γ(1) = y}.
377
+ We assume that the family F satisfies the bracket-generating condition
378
+ TxM = {X|x : X ∈ Lie(F)}
379
+ for all x ∈ M,
380
+ (2.3)
381
+ where Lie(F) is the smallest Lie subalgebra of vector fields on M containing F, namely,
382
+ Lie(F) = span{ [Xi1, . . . , [Xij−1, Xij]] : Xiℓ ∈ F, j ∈ N }.
387
+ Under the assumption (2.3), the Chow–Rashevskii Theorem implies that d is a well-defined
388
+ finite distance on M inducing the same topology of the ambient manifold.
389
+ 2.2. Gradient, sub-Laplacian and Sobolev spaces. The gradient of a function u ∈
390
+ C∞(M) is the unique vector field ∇u ∈ Γ(D) such that
391
+ g(∇u, V ) = du(V )
392
+ for all V ∈ Γ(D).
393
+ (2.4)
394
+ One can check that ∇u can be globally represented as
395
+ ∇u = ∑_{i=1}^{L} Xiu Xi,    with    ∥∇u∥2 = ∑_{i=1}^{L} (Xiu)2,    (2.5)
407
+ even if the family F is not linearly independent, see Corollary A.2 for a proof.
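+ [Editorial aside, not part of the paper: for the Grushin family (1.4), formula (2.5) reads]
+ \[
+   \nabla u = (\partial_x u)\,X + (x\,\partial_y u)\,Y ,
+   \qquad
+   \|\nabla u\|^{2} = (\partial_x u)^{2} + x^{2}(\partial_y u)^{2} .
+ \]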
408
+
409
+ FAILURE OF CD CONDITIONS ON SUB-RIEMANNIAN MANIFOLDS
410
+ 9
411
+ We equip the manifold M with a positive smooth measure m. The sub-Laplacian of a
412
+ function u ∈ C∞(M) is the unique function ∆u ∈ C∞(M) such that
+ ∫M g(∇u, ∇v) dm = − ∫M v ∆u dm    (2.6)
+ for all v ∈ C∞c(M). One can check that ∆u can be globally represented as
+ ∆u = ∑_{i=1}^{L} ( Xi^2 u + Xiu divm(Xi) ),    (2.7)
430
+ see Corollary A.2 for a proof. In (2.7), divmV is the divergence of the vector field V
431
+ computed with respect to m, that is,
+ ∫M v divm(V ) dm = − ∫M g(∇v, V ) dm    for all v ∈ C∞c(M).
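+ [Editorial aside, not part of the paper: for the Grushin fields (1.4), the divergence with respect to the Lebesgue
+ measure vanishes for both X and Y , so (2.7) gives the classical Grushin operator; with the weighted measure
+ mp = |x|^p dx dy an extra drift term appears, since divmp(X) = X(p log |x|) = p/x and divmp(Y ) = 0:]
+ \[
+   \Delta_{\mathrm{Leb}}\, u = \partial_{xx} u + x^{2}\,\partial_{yy} u ,
+   \qquad
+   \Delta_{\mathfrak{m}_p}\, u = \partial_{xx} u + x^{2}\,\partial_{yy} u + \frac{p}{x}\,\partial_x u
+   \quad (x \neq 0).
+ \]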
438
+ For q ∈ [1, ∞), we say that u ∈ L1_loc(M, m) has q-integrable distributional Xi-derivative
+ if there exists a function Xiu ∈ Lq(M, m) such that
+ ∫M v Xiu dm = ∫M u Xi∗v dm    for all v ∈ C∞c(M),
+ where Xi∗v = −Xiv − v divm(Xi) denotes the adjoint action of Xi. We thus let
450
+ HW1,q(M, m) = {u ∈ Lq(M, m) : Xiu ∈ Lq(M, m), i = 1, . . ., L}
451
+ be the usual horizontal W1,q Sobolev space induced by the family F and the measure m
452
+ on M, endowed with the natural norm
453
+ ∥u∥HW1,q(M,m) = ( ∥u∥^q_{Lq(M,m)} + ∥∇u∥^q_{Lq(M,m)} )^{1/q}
+ for all u ∈ HW1,q(M, m), where ∇u = ∑_{i=1}^{L} Xiu Xi in accordance with (2.5) and
+ ∥∇u∥^q_{Lq(M,m)} = ∫M ∥∇u∥q dm.
468
+ 2.3. Privileged coordinates. Following [18,30], we introduce privileged coordinates, a
469
+ fundamental tool in the description of the tangent cone of sub-Riemannian manifolds.
470
+ Given a multi-index I ∈ {1, . . ., L}×i, i ∈ N, we let |I| = i be its length and we set
471
+ XI = [XI1, [. . ., [XIi−1, XIi]]]].
472
+ Accordingly, we define
473
+ D^i_x = span{XI|x : |I| ≤ i}    (2.8)
+ and
+ ki(x) = dim D^i_x
+ for all x ∈ M and i ∈ N. In particular, D^0_x = {0} and D^1_x = Dx as in (2.1) for all x ∈ M.
482
+ The spaces defined in (2.8) naturally yield the filtration
483
+ {0} = D^0_x ⊂ D^1_x ⊂ · · · ⊂ D^{s(x)}_x = TxM
488
+ for all x ∈ M, where s = s(x) ∈ N is the step of the sub-Riemannian structure at the
489
+ point x. We say that x ∈ M is a regular point if the dimension of each space D^i_y remains
491
+ constant as y ∈ M varies in an open neighborhood of x, otherwise x is a singular point.
492
+
493
+ 10
494
+ L. RIZZI AND G. STEFANI
495
+ Definition 2.1 (Adapted and privileged coordinates). Let o ∈ M and let U ⊂ M be an
496
+ open neighborhood of o. We say that the local coordinates given by a diffeomorphism
497
+ z : U → Rn are adapted at o if they are centered at o, i.e. z(o) = 0, and ∂z1|0, . . ., ∂zki|0
498
+ form a basis for D^i_o in these coordinates for all i = 1, . . ., s(o). We say that the adapted
+ coordinate zi has weight wi = j if ∂zi|0 ∈ D^j_o \ D^{j−1}_o. Furthermore, we say that the coor-
504
+ dinates z are privileged at o if they are adapted at o and, in addition, zi(x) = O(d(x, o)wi)
505
+ for all x ∈ U and i = 1, . . ., n.
506
+ Privileged coordinates exist in a neighborhood of any point, see [18, Th. 4.15].
507
+ 2.4. Nilpotent approximation. From now on, we fix a set of privileged coordinates
508
+ z : U → Rn around a point o ∈ M in the sense of Definition 2.1.
509
+ Without loss of
510
+ generality, we identify the coordinate domain U ⊂ M with Rn and the base point o ∈ M
511
+ with the origin 0 ∈ Rn. Similarly, the vector fields in F defined on U are identified with
512
+ vector fields on Rn, and the restriction of the sub-Riemannian distance d to U is identified
513
+ with a distance function on Rn, which is induced by the family F, for which we keep the
514
+ same notation.
515
+ On (Rn, F), we define a family of dilations, for λ ≥ 0, by letting
516
+ dilλ : Rn → Rn,
517
+ dilλ(z1, . . . , zn) = (λ^{w1} z1, . . . , λ^{wn} zn)
518
+ for all z = (z1, . . . , zn) ∈ Rn, where the wi’s are the weights given by Definition 2.1. We
519
+ say that a differential operator P is homogeneous of degree −d ∈ Z if
520
+ P(f ◦ dilλ) = λ^{−d} (Pf) ◦ dilλ
521
+ for all λ > 0 and f ∈ C∞(Rn).
522
+ (2.9)
523
+ Note that the monomial zi is homogeneous of degree wi, while the vector field ∂zi is
524
+ homogeneous of degree −wi, for i = 1, . . . , n. As a consequence, the differential operator
525
+ z1^{µ1} · · · zn^{µn} ∂^{|ν|} / (∂z1^{ν1} · · · ∂zn^{νn}),   νi, µj ∈ N ∪ {0},
+ is homogeneous of degree ∑_{i=1}^{n} wi(µi − νi). For more details, see [18, Sec. 5].
536
+ We can now introduce the new family F̂ = {X̂1, . . . , X̂L} by defining
+ X̂i = lim_{ε→0} Xi^ε,   Xi^ε = ε (dil_{1/ε})∗ Xi,   (2.10)
551
+ for all i = 1, . . . , L, where (dil1/ε)∗ stands for the usual push-forward via the differential
552
+ of the dilation map dil1/ε, see [18, Sec. 5.3]. The convergence in (2.10) can be actually
553
+ made more precise, in the sense that
554
+ Xi^ε = X̂i + Ri^ε,   i = 1, . . ., L,
+ where Ri^ε locally uniformly converges to zero as ε → 0, see [18, Th. 5.19].
561
+ The family F̂ is a set of complete vector fields on Rn, homogeneous of degree −1, with
+ polynomial coefficients, and can be understood as the ‘principal part’ of F upon blow-up
+ by dilations. Since F satisfies the bracket-generating condition (2.3), also the new family
+ F̂ is bracket-generating at all points of Rn, and thus induces a finite sub-Riemannian
+ distance d̂, see [18, Prop. 5.17]. The resulting n-dimensional sub-Riemannian structure
+ (Rn, F̂) is called the nilpotent approximation of (Rn, F) at 0 ∈ Rn.
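+ A small symbolic sanity check of the blow-up (2.10) can be run with sympy. The family F = {∂x, (x + x²)∂y} on R² and the weights (1, 2) below form a hypothetical toy example, not taken from the text: rescaling the push-forward by ε removes the higher-order part of the second field and leaves the Grushin-type principal part x∂y.
+ import sympy as sp
+ 
+ x, y, eps = sp.symbols('x y eps', positive=True)
+ 
+ # Hypothetical toy family on R^2: X1 = d_x, X2 = (x + x^2) d_y, with privileged
+ # coordinates (x, y) at the origin of weights (w1, w2) = (1, 2).
+ def pushforward_dil(a, b):
+     # (dil_{1/eps})_* of a d_x + b d_y: evaluate the coefficients at
+     # dil_eps(x, y) = (eps*x, eps^2*y), then rescale each component.
+     sub = {x: eps*x, y: eps**2*y}
+     return a.subs(sub)/eps, b.subs(sub)/eps**2
+ 
+ a2, b2 = sp.Integer(0), x + x**2                     # coefficients of X2
+ a2e, b2e = pushforward_dil(a2, b2)
+ X2_eps = (sp.expand(eps*a2e), sp.expand(eps*b2e))    # X2^eps = eps (dil_{1/eps})_* X2
+ print(X2_eps)                                        # (0, eps*x**2 + x)
+ print(sp.limit(X2_eps[1], eps, 0))                   # x, i.e. the principal part x d_y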
570
573
+ The family F̂ = {X̂1, . . ., X̂L} generates a finite-dimensional stratified Lie algebra
+ g = Lie(F̂) = g1 ⊕ · · · ⊕ gs
582
+ of step s = s(0) ∈ N, where the grading is given by the degree of the vector fields,
583
+ according to the definition in (2.9), that is, the layer gi corresponds to vector fields
584
+ homogeneous of degree −i with respect to dilations, see [18, Sec. 5.4].
585
+ In particular, g1 = span{X̂1, . . . , X̂L}, so that g is generated by its first stratum, namely,
+ gj+1 = [g1, gj],   ∀j = 1, . . . , s − 1.   (2.11)
595
+ Finally, define the Lie subalgebra of vector fields vanishing at 0,
596
+ h = { X̂ ∈ g : X̂|0 = 0 } = h1 ⊕ · · · ⊕ hs,
+ which inherits the grading from the one of g,
+ hj+1 = [h1, hj],   ∀j = 1, . . ., s − 1.   (2.12)
606
+ It is a fundamental fact [18, Th. 5.21] that the nilpotent approximation (Rn, F̂) is diffeo-
+ morphic to the homogeneous sub-Riemannian space G/H, where G is the Carnot group
+ G = exp g (explicitly realized as the subgroup of the flows of the vector fields of g acting
+ on Rn from the right) and H = exp h is the Carnot subgroup induced by h.
+ In particular, if 0 ∈ Rn is a regular point, then H = {0}, and so the nilpotent approxi-
+ mation (Rn, F̂) is diffeomorphic to the Carnot group G, see [18, Prop. 5.22].
614
+ Recall that the smooth measure m on the original manifold M can be identified with
615
+ a smooth measure on U ≃ Rn, for which we keep the same notation. In particular, m
616
+ is absolutely continuous with respect to the n-dimensional Lebesgue measure L n on Rn,
617
+ with m = ρ L n for some positive smooth function ρ: Rn → (0, ∞). The corresponding
618
+ blow-up measure on the nilpotent approximation is naturally given by
619
+ m̂ = lim_{ε→0} mε = ρ(0) L n,   mε = ε^Q (dil_{1/ε})# m,
622
+ in the sense of weak∗ convergence of measures in Rn, where
623
+ Q = ∑_{i=1}^{n} wi ∈ N
628
+ is the so-called homogeneous dimension of (Rn, F̂) and (dil_{1/ε})# stands for the push-
+ forward in the measure-theoretic sense via the dilation map dil_{1/ε}. Consequently, without
+ loss of generality, we can assume that ρ(0) = 1, thus endowing (Rn, F̂) with the n-
+ dimensional Lebesgue measure.
634
+ Notice that div_{L n}(X̂i) = 0, for all i = 1, . . . , L, since each X̂i is homogeneous of degree −1.
+ Hence, by (2.7), the sub-Laplacian of a function u ∈ C∞(Rn) can be globally represented as
+ ∆̂u = ∑_{i=1}^{L} X̂i²u.   (2.13)
647
+ It is worth noticing that the metric space (Rn, d̂) induced by the nilpotent approxi-
+ mation (Rn, F̂) actually coincides with the metric tangent cone at o ∈ M of the metric
+ space (M, d) in the sense of Gromov [26], see [18, Th. 7.36] for the precise statement.
651
654
+ In fact, the sub-Riemannian distance dε induced by the vector fields Xi^ε, i = 1, . . . , L,
+ defined in (2.10) converges uniformly to the distance d̂ on compact sets as ε → 0.
+ It is not difficult to check that the family {(Rn, dε, mε, 0)}ε>0 of pointed metric-measure
+ spaces converges to the pointed metric-measure space (Rn, d̂, L n, 0) as ε → 0 in the pointed
+ measure Gromov–Hausdorff topology, see [13, Sec. 10.3] for details.
660
+ 2.5. The curvature-dimension condition. We end this section by recalling the defi-
661
+ nition of the curvature-dimension conditions introduced in [36,48,49].
662
+ On a Polish (i.e., separable and complete) metric space (X, d), we let P(X) be the set
663
+ of probability Borel measures on X and define the Wasserstein (extended) distance W2 by
+ W2²(µ, ν) = inf { ∫_{X×X} d²(x, y) dπ : π ∈ Plan(µ, ν) } ∈ [0, ∞],
670
+ for µ, ν ∈ P(X), where
671
+ Plan(µ, ν) = {π ∈ P(X × X) : (p1)#π = µ, (p2)#π = ν},
672
+ where pi : X ×X → X, i = 1, 2, are the projections on each component and T#µ ∈ P(Y )
673
+ denotes the push-forward measure given by any µ-measurable map T : X → X. The
674
+ function W2 is a distance on the Wasserstein space
675
+ P2(X) = { µ ∈ P(X) : ∫_X d²(x, x0) dµ(x) < ∞ for some, and thus any, x0 ∈ X }.
682
+ Note that (P2(X), W2) is a Polish metric space which is geodesic as soon as (X, d) is. In
683
+ addition, letting Geo(X) be the set of geodesics of (X, d), namely, curves γ : [0, 1] → X
684
+ such that d(γs, γt) = |s−t| d(γ0, γ1), for all s, t ∈ [0, 1], any W2-geodesic µ: [0, 1] → P2(X)
685
+ can be (possibly non-uniquely) represented as µt = (et)♯ν for some ν ∈ P(Geo(X)), where
686
+ et: Geo(X) → X is the evaluation map at time t ∈ [0, 1].
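+ As a minimal numerical illustration of the definition of W2 (a sketch, not part of the argument of the paper): on (R, |·|) the optimal plan between two empirical measures with the same number of atoms is the monotone rearrangement, so W2 can be computed by sorting. For two unit-variance Gaussians the exact value is the distance between the means.
+ import numpy as np
+ 
+ rng = np.random.default_rng(1)
+ xs = np.sort(rng.normal(0.0, 1.0, size=100000))   # samples of mu
+ ys = np.sort(rng.normal(2.0, 1.0, size=100000))   # samples of nu
+ W2 = np.sqrt(np.mean((xs - ys)**2))               # monotone (optimal) coupling in 1D
+ print(W2)                                         # close to 2, the exact W2 distance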
687
+ We endow the metric space (X, d) with a non-negative Borel measure m such that
688
+ m is finite on bounded sets and supp(m) = X.
689
+ We define the (relative) entropy functional Entm : P2(X) → [−∞, +∞] by letting
690
+ Entm(µ) = ∫_X ρ log ρ dm
693
+ if µ = ρm and ρ log ρ ∈ L1(X, m), while we set Entm(µ) = +∞ otherwise.
694
+ Definition 2.2 (CD(K, ∞) property). We say that a metric-measure space (X, d, m)
695
+ satisfies the CD(K, ∞) property if, for any µ0, µ1 ∈ P2(X) with Entm(µi) < +∞, i = 0, 1,
696
+ there exists a W2-geodesic [0, 1] ∋ s ↦ µs ∈ P2(X) joining them such that
+ Entm(µs) ≤ (1 − s) Entm(µ0) + s Entm(µ1) − (K/2) s(1 − s) W2²(µ0, µ1)   (2.14)
701
+ for every s ∈ [0, 1].
702
+ The geodesic K-convexity of Entm in (2.14) can be reinforced to additionally encode an
703
+ upper bound on the dimension of the space, as recalled below. For N ∈ (1, ∞), we let
+ SN(µ, m) = − ∫_X ρ^{−1/N} dµ,   µ = ρm + µ⊥,
708
+ be the N-Rényi entropy of µ ∈ P2(X) with respect to m, where µ = ρm + µ⊥ denotes
709
+ the Radon–Nikodym decomposition of µ with respect to m.
710
713
+ Definition 2.3 (CD(K, N) property). We say that a metric-measure space (X, d, m)
714
+ satisfies the CD(K, N) property for some N ∈ [1, ∞) if, for any µ0, µ1 ∈ P2(X) with
715
+ µi = ρim, i = 0, 1, there exists a W2-geodesic [0, 1] ∋ s ↦ µs ∈ P2(X) joining them, with
+ µs = (es)♯ν for some ν ∈ P(Geo(X)) such that
+ SN′(µs, m) ≤ − ∫_{Geo(X)} [ τ^{(1−s)}_{K,N′}(d(γ0, γ1)) ρ0^{−1/N′}(γ0) + τ^{(s)}_{K,N′}(d(γ0, γ1)) ρ1^{−1/N′}(γ1) ] dν(γ)
+ for every s ∈ [0, 1], N′ ≥ N. Here τ^{(s)}_{K,N} is the model distortion coefficient, see [49, p. 137].
732
+ Remark 2.4. The CD(0, N) condition corresponds to the convexity of the N′-Rényi entropy
733
+ SN′(µs, m) ≤ (1 − s)SN′(µ0, m) + sSN′(µ1, m),
734
+ for every s ∈ [0, 1] and N′ ≥ N, with µ0, µ1 ∈ P2(X) as in Definition 2.3.
735
+ Remark 2.5. For a CD(K, N) metric-measure space, K and N represent a lower bound
736
+ on the Ricci tensor and an upper bound on the dimension, respectively, and we have
737
+ CD(K, N) =⇒ CD(K, N′)
738
+ for all N′ ≥ N, N, N′ ∈ [1, ∞],
739
+ CD(K, N) =⇒ CD(K′, N)
740
+ for all K′ ≤ K, K, K′ ∈ R.
741
+ In particular, the CD(K, ∞) condition (2.14) is the weakest of all the curvature-dimension
742
+ conditions for fixed K ∈ R.
743
+ 3. Proofs
744
+ We first deal with Theorems 1.4 and 1.5, from which Theorem 1.2 immediately follows.
745
+ 3.1. Proof of Theorem 1.4. We divide the proof into four steps.
746
+ Step 1: passing to the nilpotent approximation via blow-up. Let (Rn, F̂) be the nilpotent
+ approximation of (M, F) at some fixed point o ∈ M as explained in Section 2.4. Let
+ u ∈ C∞_c(M) and, without loss of generality, let us assume that supp u is contained in the
+ domain of the privileged coordinates at o ∈ M. In particular, we identify u with a C∞_c
+ function on Rn. We now apply (1.1) to the dilated function
754
+ uε = u ◦ dil_{1/ε} ∈ C∞_c(Rn),   for ε > 0,
757
+ and evaluate this expression at the point dilε(x) ∈ Rn. Exploiting the expressions in Corol-
758
+ lary A.2, we get that
759
+ ∑_{i,j=1}^{L} [ Xi^ε u ( X^ε_{ijj}u − X^ε_{jji}u ) − ( X^ε_{ij}u )² + R^ε_{i,j}u ] ≤ 0,   (3.1)
773
+ where Xi^ε is as in (2.10), Xijk = XiXjXk whenever i, j, k ∈ {1, . . . , L}, and R^ε_{i,j} is a
+ remainder locally uniformly vanishing as ε → 0. Therefore, letting ε → 0 in (3.1), by the
777
+ convergence in (2.10) we get
778
+ ∑_{i,j=1}^{L} [ X̂iu ( X̂ijju − X̂jjiu ) − ( X̂iju )² ] ≤ 0,   (3.2)
+ which is (1.1) with K = 0 for the nilpotent approximation (Rn, F̂).
794
797
+ Step 2: improvement via homogeneous structure. We now show that (3.2) implies a
798
+ stronger identity, see (3.4) below, obtained from (3.2) by removing the squared term and
799
+ replacing the inequality with an equality. Recall, in particular, the definition of weight of
800
+ (privileged) coordinates in Definition 2.1. We take u ∈ C∞(Rn) of the form
801
+ u = α + γ,
802
+ where α and γ are homogeneous polynomials of weighted degree 1 and at least 3, respec-
803
+ tively. Since XIα = 0 as soon as the multi-index satisfies |I| ≥ 2 (see [18, Prop. 4.10]),
804
+ we can take the terms with lowest homogeneous degree in (3.2) to get
805
+ ∑_{i,j=1}^{L} X̂iα ( X̂ijjγ − X̂jjiγ ) = ∑_{i=1}^{L} X̂iα [ X̂i, ∆̂ ](γ) ≤ 0
824
+ for all such α and γ. In the second equality, we used the fact that the sub-Laplacian ∆̂ is
825
+ a sum of squares as in (2.13). Since α can be replaced with −α, we must have that
826
+ ∑_{i=1}^{L} X̂iα [ X̂i, ∆̂ ](γ) = 0.   (3.3)
836
+ Observing that X̂iα is homogeneous of degree 0, and thus a constant function, we can
838
+ rewrite (3.3) as
839
+ [ ∑_{i=1}^{L} X̂iα X̂i, ∆̂ ](γ) = 0,   (3.4)
848
+ which is the sought improvement of (3.2).
849
+ Step 3: construction of the space i ⊂ g1. Let P^n_1 be the vector space of homogeneous
+ polynomials of weighted degree 1 on Rn. Notice that
+ P^n_1 = span{zi | i = 1, . . . , k1},   k1 = dim D|0,
+ that is, P^n_1 is generated by the monomials given by the coordinates of lowest weight. We
+ now define a linear map φ: P^n_1 → g1 by letting
+ φ[α] = ∇̂α = ∑_{i=1}^{L} X̂iα X̂i
+ for all α ∈ P^n_1 (recall Corollary A.2). We claim that φ is injective. Indeed, if φ[α] = 0 for
+ some α ∈ P^n_1, then, by applying the operator φ[α] to the polynomial α, we get
871
+ 0 = φ[α](α) = [ ∑_{i=1}^{L} X̂iα X̂i ](α) = ∑_{i=1}^{L} ( X̂iα )².
885
+ Thus X̂iα = 0 for all i = 1, . . ., L.
887
+ Hence α must have weighted degree at least 2.
888
+ However, since α is homogeneous of weighted degree 1, we conclude that α = 0, proving
889
+ that ker φ = {0}. We can thus define the subspace
890
+ i = φ[P^n_1] ⊂ g1.
892
895
+ By (3.4), any X̂ ∈ i is such that [X̂, ∆̂](γ) = 0 for any homogeneous polynomial γ
898
+ of degree at least 3. Exploiting the definitions given in Section 2.4, we observe that a
899
+ differential operator P, homogeneous of weighted degree −d ∈ Z, has the form
900
+ P = ∑_{µ,ν} a_{µ,ν} z^µ ∂^{|ν|}/∂z^ν,   (3.5)
906
+ where µ = (µ1, . . . , µn), ν = (ν1, . . . , νn), µi, νj ∈ N ∪ {0}, aµ,ν ∈ R, and the weighted
907
+ degree of every addend in (3.5) is equal to −d, namely, ∑_{i=1}^{n} (µi − νi)wi = −d.
909
+ Thus, since X̂ and ∆̂ are homogeneous differential operators of order −1 and −2,
+ respectively, then [X̂, ∆̂] has order −3, see [18, Prop. 5.16]. It follows that [X̂, ∆̂] = 0 as
+ a differential operator acting on C∞(Rn).
915
+ We now show (1.3). Let us first observe that i ∩ h = {0}. Indeed, if φ[α] ∈ h for some
916
+ α ∈ P^n_1, that is, φ[α]|0 = 0, then X̂iα|0 = 0 for all i = 1, . . ., L. Since X̂iα is a constant
+ function, this implies φ[α] = 0, as claimed. Therefore, since dim i = dim P^n_1 = k1, we must
922
+ have g1 = i ⊕ h1 thanks to Lemma 3.1 below.
923
+ Lemma 3.1. With the same notation of Section 2.4, if g1 = v ⊕ h1, then dim v = k1.
924
+ Proof. We claim that the dimension of v is preserved by evaluation at zero, that is,
925
+ dim v|0 = dim v, where dim v|0 is the dimension of v|0 as a subspace of T0Rn, while
926
+ dim v is the dimension of v as a subspace of g. Indeed, we have the trivial inequality
927
+ dim v|0 ≤ dim v. On the other hand, if strict inequality holds, then v must contain non-
928
+ zero vector fields vanishing at zero, contradicting the fact that v ∩ h = {0}. Therefore,
929
+ since dim g1|0 = k1 and dim h1|0 = 0, we get dim v = dim v|0 = k1 as desired.
930
+
931
+ Step 4: proof of the Killing property. We have so far proved the existence of i such that
932
+ g1 = i ⊕ h1, and such that any element Y ∈ i commutes with the sub-Laplacian ∆̂. We
+ now show that any such Y is a Killing vector field.
934
+ Let Y ∈ i. Since [Y, ∆̂] = 0, the induced flow φ^Y_s, for s ∈ R, commutes with ∆̂ when
+ acting on smooth functions, that is,
+ ∆̂(u ◦ φ^Y_s) = (∆̂u) ◦ φ^Y_s   (3.6)
941
+ for all u ∈ C∞(Rn) and s ∈ R. Recall the sub-Riemannian Hamiltonian Ĥ : T∗Rn → R,
+ Ĥ(λ) = (1/2) ∑_{i=1}^{L} ⟨λ, X̂i⟩²,   (3.7)
952
+ for all λ ∈ T∗Rn. By (2.13), Ĥ is the principal symbol of ∆̂. Thus, from (3.6) it follows
+ Ĥ ◦ (φ^Y_s)∗ = Ĥ,
+ for all s ∈ R, where the star denotes the pull-back, and thus (φ^Y_s)∗ is a diffeomorphism
+ on T∗Rn. This means that φ^Y_s is an isometry, as we now show. Indeed, for any given
+ x ∈ Rn, the restriction Ĥ|_{T∗_x Rn} is a quadratic form on T∗_x Rn, so (φ^Y_s)∗ must preserve its
+ kernel, that is,
+ (φ^Y_s)∗ ker Ĥ|_{T∗_{φ^Y_s(x)} Rn} = ker Ĥ|_{T∗_x Rn}   (3.8)
981
984
+ for all x ∈ Rn. By (3.7), it holds ker Ĥ|_{T∗_x Rn} = D̂⊥_x, where ⊥ denotes the annihilator of a
+ vector space. By duality, from (3.8) we obtain that (φ^Y_s)∗ D̂x = D̂_{φ^Y_s(x)} for all x ∈ Rn as
+ required by (1.2). Finally, for λ ∈ T∗_x M, let λ# ∈ Dx be uniquely defined by gx(λ#, V) =
+ ⟨λ, V⟩x for all V ∈ Dx, and notice that the map λ ↦ λ# is surjective on Dx. Then it holds
+ ∥λ#∥²_x = 2Ĥ(λ), see Lemma A.1. Thus, since (φ^Y_s)∗ preserves Ĥ, the map (φ^Y_s)∗ preserves
+ the sub-Riemannian norm, and thus g. This means that φ^Y_s is an isometry, concluding
+ the proof of Theorem 1.4.
1005
+
1006
+ 3.2. Proof of Theorem 1.5. We claim that
1007
+ gj = hj   for all j ≥ 2.   (3.9)
1010
+ Note that (3.9) is enough to conclude the proof of Theorem 1.5, since, from (3.9) combined
1011
+ with (2.11) and (2.12), we immediately get that
1012
+ g = g1 ⊕ h2 ⊕ · · · ⊕ hs.
1013
+ In particular, we deduce that g|0 = g1|0, which in turn implies that g must be commu-
1014
+ tative, otherwise the bracket-generating condition would fail. To prove (3.9), we proceed
1015
+ by induction on j ≥ 2 as follows.
1016
+ Proof of the base case j = 2. We begin by proving the base case j = 2 in (3.9). To this
+ aim, let X̂ ∈ i and Ŷ ∈ g1. By definition of Lie bracket, we can write
+ (φ^X̂_{−s})∗ Ŷ = Ŷ + s [X̂, Ŷ] + o(s)   as s → 0,
+ where φ^X̂_s, for s ∈ R, is the flow of X̂. Since g1|x = D̂|x for all x ∈ Rn, and since X̂ is
+ Killing (in particular (1.2) holds for its flow), we have that [X̂, Ŷ]|x ∈ D̂|x for all x ∈ Rn.
1040
+ Since [X̂, Ŷ] ∈ g2 and so, in particular, [X̂, Ŷ] is homogeneous of degree −2, we have
+ [X̂, Ŷ]|0 = ∑_{j : wj=2} aj ∂zj|0,
+ for some constants aj ∈ R. But we also must have that [X̂, Ŷ]|0 ∈ D̂|0 and so, since
+ D̂|0 = span{ ∂zj : wj = 1 }
+ according to Definition 2.1, [X̂, Ŷ]|0 = 0, that is, [X̂, Ŷ] ∈ h. We thus have proved that
+ [i, g1] ⊂ h2. In particular, since g1 = i ⊕ h1, we get
+ [i, i] ⊂ h2   and   [i, h1] ⊂ h2,   (3.10)
1064
+ from which we readily deduce (3.9) for j = 2.
1065
+ Proof of the induction step. Let us assume that (3.9) holds for some j ∈ N, j ≥ 2. Since
1066
+ g1 = i ⊕ h1, by the induction hypothesis we can write
1067
+ gj+1 = [g1, gj] = [g1, hj] = [i, hj] + [h1, hj] = [i, hj] + hj+1.
1068
+ We thus just need to show that [i, hj] ⊂ hj+1 for all j ∈ N with j ≥ 2. Note that we
1069
+ actually already proved the case j = 1 in (3.10). Again arguing by induction (taking
1070
+ j = 1 as base case), by the Jacobi identity and (3.10) we have
1071
+ [i, hj+1] = [i, [h1, hj]] = [h1, [hj, i]] + [hj, [i, h1]] ⊂ [h1, hj+1] + [hj, h2] = hj+2
1072
1075
+ as desired, concluding the proof of the induction step.
1076
+
1077
+ Remark 3.2 (Proof of Theorem 1.5 in the case h = {0}). The proof of Theorem 1.5 is
1078
+ much simpler if the nilpotent approximation (Rn, F̂) is a Carnot group, i.e., h = {0}.
1080
+ Indeed, in this case, the base case j = 2 in (3.9) immediately implies that g2 = h2 = {0},
1081
+ which in turn gives g = g1, so that g is commutative.
1082
+ 3.3. Proof of Theorem 1.8. In the following, we assume that the reader is familiar with
1083
+ the notions of upper gradient and of q-upper gradient, see [5] for the precise definitions.
1084
+ The next two lemmas are proved in [27] for sub-Riemannian structures on Rn equipped
1085
+ with the Lebesgue measure, and are immediately extended to the weighted case.
1086
+ Lemma 3.3. Let (M, d, m) be as in Theorem 1.8. If u ∈ C(M) and 0 ≤ g ∈ L¹_loc(M, m)
+ is an upper gradient of u, then u ∈ HW^{1,1}_loc(M, m) with ∥∇u∥ ≤ g m-a.e. In particular, if
1090
+ u ∈ Lip(M, d), then ∥∇u∥ ≤ Lip(u).
1091
+ Proof. Without loss of generality we may assume that M = Ω ⊂ Rn is a bounded open
1092
+ set, the sub-Riemannian structure is induced by a family of smooth bracket-generating
1093
+ vector fields F = {X1, . . ., XL} on Ω and m = θL n, where θ: Ω → [0, ∞) is smooth
1094
+ and satisfies 0 < infΩ θ ≤ supΩ θ < ∞. Hence, L1(Ω, θL n) = L1(Ω, L n) as sets, with
1095
+ equivalent norms, so that 0 ≤ g ∈ L¹_loc(Ω, L n) is an upper gradient of u ∈ C(Ω). Hence,
+ by [27, Th. 11.7], we get that u ∈ HW^{1,1}_loc(Ω, L n), with ∥∇u∥ ≤ g L n-a.e., and thus
1099
+ θL n-a.e., on Ω. By definition of distributional derivative, we can write
1100
+ ∫_Ω v Xiu dx = ∫_Ω u [−Xiv − div(Xi)v] dx,   ∀ v ∈ C¹_c(Ω), i = 1, . . . , L,
1106
+ where div denotes the Euclidean divergence.
1107
+ We apply the above formula with test function v = θw, for any w ∈ C¹_c(Ω), getting
1110
+ ∫_Ω w Xiu θ dx = ∫_Ω u [ −Xiw − div(Xi)w − (Xiθ/θ) w ] θ dx,   ∀ w ∈ C¹_c(Ω), i = 1, . . . , L.
1121
+ The function within square brackets is the adjoint Xi*w with respect to the measure
+ θL n. It follows that HW1,q(Ω, θL n) = HW1,q(Ω, L n) as sets, with equivalent norms. In
+ particular, u ∈ HW^{1,1}_loc(Ω, θL n) as desired.
1126
+
1127
+ Lemma 3.4 (Meyers–Serrin). Let (M, d, m) be as in Theorem 1.8 and let q ∈ [1, ∞).
1128
+ Then HW1,q(M, m) ∩ C∞(M) is dense in HW1,q(M, m).
1129
+ Proof. Up to a partition of unity and exhaustion argument, we can reduce to the case
1130
+ M = Ω ⊂ Rn is a bounded open set and m = θL n, where θ: Ω → [0, ∞) is as in the
1131
+ previous proof, so that HW1,q(Ω, L n) = HW1,q(Ω, θL n) as sets, with equivalent norms.
1132
+ In particular, we can assume that θ ≡ 1. This case is proved in [27, Th. 11.9].
1133
+
1134
+ Lemma 3.5. Let (M, d, m) be as in Theorem 1.8 and let q ∈ [1, ∞). If u ∈ HW1,q(M, m),
1135
+ then ∥∇u∥ is the minimal q-upper gradient of u.
1136
+ Proof. Let us first prove that ∥∇u∥ is a q-upper gradient of u. Indeed, by Lemma 3.4, we
1137
+ can find (uk)k∈N ⊂ HW1,q(M, m) ∩ C∞(M) such that uk → u in HW1,q(M, m) as k → ∞.
1138
1141
+ It is well-known that the sub-Riemannian norm of the gradient of a smooth function is
1142
+ an upper gradient, see [27, Prop. 11.6]. Thus, for uk it holds
1143
+ |uk(γ(1)) − uk(γ(0))| ≤ ∫_γ ∥∇uk∥ ds.
1146
+ Arguing as in [28, p. 179], using Fuglede’s lemma (see [28, Lem. 7.5 and Sec. 10]), we pass
1147
+ to the limit for k → ∞ in the previous inequality, outside a q-exceptional family of curves.
1148
+ This proves that any Borel representative of ∥∇u∥ is a q-upper gradient of u.
1149
+ We now prove that ∥∇u∥ is indeed minimal. Let 0 ≤ g ∈ Lq(M, m) be any q-upper
1150
+ gradient of u. Arguing as in [28, p. 194], we can find a sequence (gk)k∈N ⊂ Lq(M, m) of
1151
+ upper gradients of u such that gk ≥ g for all k ∈ N and gk → g both pointwise m-a.e.
1152
+ on M and in Lq(M, m) as k → ∞. By Lemma 3.3, we thus must have that ∥∇u∥ ≤ gk
1153
+ m-a.e. on M for all k ∈ N. Hence, passing to the limit, we conclude that ∥∇u∥ ≤ g m-a.e.
1154
+ on M for any q-upper gradient g, concluding the proof.
1155
+
1156
+ We are now ready to deal with the proof of Theorem 1.8.
1157
+ Proof of (i). Recall that, here, q > 1. We begin by claiming that
1158
+ W1,q(M, d, m) ⊂ HW1,q(M, m)   (3.11)
1160
+ isometrically, with ∥∇u∥ = |Du|w,q. Indeed, let u ∈ W1,q(M, d, m). By a well-known
1161
+ approximation argument, combining [5, Prop. 4.3, Th. 5.3 and Th. 7.4], we find (uk)k∈N ⊂
1162
+ Lip(M, d) ∩ W1,q(M, d, m) such that
1163
+ uk → u   and   |Duk|w,q → |Du|w,q   in Lq(M, m).   (3.12)
1168
+ Since uk ∈ Lip(M, d), by Lemma 3.3 we know that uk ∈ HW1,q(M, m).
1169
+ Hence, by
1170
+ Lemma 3.5, |Duk|w,q = ∥∇uk∥, and we immediately get that
1171
+ sup_{k∈N} ∫_M ∥∇uk∥q dm < ∞.
1175
+ Therefore, up to passing to a subsequence, (Xiuk)k∈N is weakly convergent in Lq(M, m),
1176
+ say Xiuk ⇀ αi ∈ Lq(M, m), for all i = 1, . . ., L. We thus get that u ∈ HW1,q(M, m) with
1177
+ Xiu = αi and thus ∇u = ∑_{i=1}^{L} αiXi. By stability of q-upper gradients, [5, Th. 5.3 and
1180
+ Thm. 7.4], ∥∇u∥ is a q-upper gradient of u. By semi-continuity of the norm, we obtain
1181
+ ∫_M ∥∇u∥q dm ≤ lim inf_{k→∞} ∫_M ∥∇uk∥q dm = ∫_M |Du|^q_{w,q} dm,
1189
+ where we used (3.12). By definition of minimal q-upper gradient we thus get that ∥∇u∥ =
1190
+ |Du|w,q m-a.e., and the claimed inclusion in (3.11) immediately follows.
1191
+ We now observe that it also holds
1192
+ HW1,q(M, m) ∩ C∞(M) ⊂ W1,q(M, d, m),   (3.13)
1194
+ with ∥∇u∥ = |Du|w,q. We just need to notice that, if u ∈ C∞(M), then ∥∇u∥ is an
1195
+ upper gradient of u, see [27, Prop. 11.6]. Therefore, by Lemma 3.3, ∥∇u∥ must coincide
1196
+ with the minimal q-upper gradient of u, i.e., ∥∇u∥ = |Du|w m-a.e., and (3.13) readily
1197
+ follows. In view of the isometric inclusions (3.11) and (3.13), and of the density provided
1198
+ by Lemma 3.4, this concludes the proof of (i).
1199
+
1200
1203
+ Proof of (ii). Let us assume that (M, d, m) satisfies the CD(K, ∞) property for some
1204
+ K ∈ R.
1205
+ By the previous point (i), we know that (M, d, m) satisfies the RCD(K, ∞)
1206
+ property. Consequently, since clearly C∞_c(M) ⊂ W1,2(M, d, m) by (3.13), [6, Rem. 6.3]
1208
+ (even if the measure m is σ-finite, see [4, Sec. 7] for a discussion) implies that
1209
+ (1/2) ∫_M ∆v ∥∇u∥² dm − ∫_M v g(∇u, ∇∆u) dm ≥ K ∫_M v ∥∇u∥² dm
+ for all u, v ∈ C∞_c(M) with v ≥ 0 on M, from which we readily deduce (1.1).
1219
+
1220
+ Remark 3.6. The above proofs work for more general measures m. Namely, we can
1221
+ assume that, locally on any bounded coordinate neighborhood Ω ⊂ Rn, m = θL n with
1222
+ θ ∈ W1,1(Ω, L n) ∩ L∞(Ω, L n).
1223
+ In this case, the positivity of m corresponds to the
1224
+ requirement that θ is locally essentially bounded from below away from zero, in charts.
1225
+ 3.4. Proof of Theorem 1.10. We prove the two points in the statement separately.
1226
+ Proof of (i). The case p = 0 has been already considered by Juillet in [32]. For p > 0,
1227
+ we can argue as follows. Let A0 = [−ℓ −1, −ℓ]×[0, 1] and A1 = [ℓ, ℓ + 1]×[0, 1] for ℓ > 0.
1228
+ We will shortly prove that the midpoint set
1229
+ A1/2 = { q ∈ R² : ∃ q0 ∈ A0, ∃ q1 ∈ A1 with d(q, q0) = d(q, q1) = (1/2) d(q0, q1) }
+ satisfies
+ A1/2 ⊂ [−1 − εℓ, 1 + εℓ] × [0, 1]   (3.14)
1237
+ for some εℓ > 0, with εℓ ↓ 0 as ℓ → ∞. Since mp(A0) = mp(A1) ∼ ℓp as ℓ → ∞, we get
1238
+ √( mp(A0) mp(A1) ) > mp(A1/2)
1240
+ for large ℓ > 0. This contradicts the logarithmic Brunn–Minkowski BM(0, ∞) inequality,
1241
+ which is a consequence of the CD(0, ∞) condition, see [50, Th. 30.7].
1242
+ To prove (3.14), let qi ∈ Ai, qi = (xi, yi), and let γ(t) = (x(t), y(t)), t ∈ [0, 1], be a
1243
+ geodesic such that γ(i) = qi, with i = 0, 1. We first note that
1244
+ min{y0, y1} ≤ y(t) ≤ max{y0, y1}   for all t ∈ [0, 1],   (3.15)
1247
+ since any curve that violates (3.15) can be replaced by a strictly shorter one satisfy-
1248
+ ing (3.15). In particular, we get that A1/2 ⊂ R × [0, 1]. Let us now observe that
1249
+ |xa − xb| ≤ d(a, b) ≤ |xa − xb| + |ya − yb| / max{|xa|, |xb|}
1252
+ for all a = (xa, ya) and b = (xb, yb) with xa, xb ̸= 0. Therefore, if q = (x, y) ∈ A1/2, then
1253
+ |x − x0| ≤ d(q, q0) = (1/2) d(q0, q1) ≤ ℓ + 1 + O(1/ℓ)
1255
+ and, similarly, |x − x1| ≤ ℓ + 1 + O(1/ℓ). Since x0 ∈ [−ℓ − 1, −ℓ] and x1 ∈ [ℓ, ℓ + 1], we
1256
+ deduce that |x| ≤ 1 + O(1/ℓ), concluding the proof of the claimed (3.14).
1257
+
1258
1261
+ Proof of (ii). Out of the negligible set {x = 0}, the metric g on Gp given by (1.5) is
1262
+ locally Riemannian. Recalling (1.6) and (1.7), the BE(K, ∞) inequality (1.1) is implied by
1263
+ the lower bound Ric∞,V ≥ K via Bochner’s formula, where Ric∞,V is the ∞-Bakry–Émery
1264
+ Ricci tensor of (R2, g, e−V volg), see [50, Ch. 14, Eqs. (14.36) – (14.51)]. By Lemma 3.7
1265
+ below, we have Ric∞,V ≥ 0 for all p ≥ 1, concluding the proof.
1266
+
1267
+ Lemma 3.7. Let p ∈ R and N > 2. The N-Bakry–Émery Ricci tensor of the Grushin
1268
+ metric (1.5), with weighted measure mp = |x|^p dx dy, for all x ̸= 0 is
+ RicN,V = (p − 1)/x² · g − (p + 1)²/(N − 2) · (dx ⊗ dx)/x²,
1276
+ with the convention that 1/∞ = 0.
1277
+ Proof. The N-Bakry–Émery Ricci tensor of an n-dimensional weighted Riemannian struc-
+ ture (g, e−V volg), for N > n, is given by
+ RicN,V = Ricg + Hessg V − (dV ⊗ dV)/(N − n),   (3.16)
1282
+ see [50, Eq. (14.36)]. In terms of the frame (1.4), the Levi-Civita connection is given by
1283
+ ∇X X = ∇X Y = 0,   ∇Y X = −(1/x) Y,   ∇Y Y = (1/x) X,
1288
+ whenever x ̸= 0. Recalling that, from (1.7), V (x) = −(p + 1) log |x|, for x ̸= 0, we obtain
1289
+ Ricg = −(2/x²) g,   Hessg V = ((p + 1)/x²) g,   dV = −((p + 1)/x) dx,   (3.17)
1298
+ whenever x ̸= 0. The conclusion thus follows by inserting (3.17) into (3.16).
1299
+
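+ The identities in (3.17), and the resulting formula for RicN,V, can also be double-checked symbolically, for instance with sympy. The sketch below is an independent verification (not part of the proof), working in the coordinates (x, y) on {x > 0}, where the Riemannian extension of the Grushin metric is g = dx² + dy²/x² and V(x) = −(p + 1) log x.
+ import sympy as sp
+ 
+ x, y, p, N = sp.symbols('x y p N', positive=True)
+ coords = [x, y]
+ g = sp.Matrix([[1, 0], [0, 1/x**2]])       # g = dx^2 + dy^2/x^2 on {x > 0}
+ ginv = g.inv()
+ 
+ def Gamma(k, i, j):                         # Christoffel symbols of g
+     return sp.Rational(1, 2)*sum(ginv[k, l]*(sp.diff(g[l, i], coords[j])
+            + sp.diff(g[l, j], coords[i]) - sp.diff(g[i, j], coords[l])) for l in range(2))
+ 
+ def Ric(i, j):                              # Ricci tensor of g
+     r = sum(sp.diff(Gamma(k, i, j), coords[k]) - sp.diff(Gamma(k, i, k), coords[j])
+             for k in range(2))
+     r += sum(Gamma(k, k, l)*Gamma(l, i, j) - Gamma(k, j, l)*Gamma(l, i, k)
+              for k in range(2) for l in range(2))
+     return sp.simplify(r)
+ 
+ V = -(p + 1)*sp.log(x)                      # weight: m_p = e^{-V} vol_g = x^p dx dy
+ def Hess(i, j):                             # covariant Hessian of V
+     return sp.simplify(sp.diff(V, coords[i], coords[j])
+            - sum(Gamma(k, i, j)*sp.diff(V, coords[k]) for k in range(2)))
+ 
+ dV = [sp.diff(V, c) for c in coords]
+ for i in range(2):
+     for j in range(2):
+         lhs = Ric(i, j) + Hess(i, j) - dV[i]*dV[j]/(N - 2)          # Ric_{N,V}
+         rhs = (p - 1)/x**2*g[i, j] - (p + 1)**2/(N - 2)*(1 if i == j == 0 else 0)/x**2
+         print(sp.simplify(lhs - rhs))       # prints 0 for every component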
1300
+ 3.5. Proof of Theorem 1.11. The statement is a consequence of the geodesic convexity
1301
+ of G+
1302
+ p and the computation of the N-Bakry–Émery curvature in Lemma 3.7. Since the
1303
+ proof uses quite standard arguments, we simply sketch its main steps.
1304
+ The interior of G+
1305
+ p , i.e., the open half-plane, can be regarded as a (non-complete)
1306
+ weighted Riemannian manifold with metric g as in (1.5) and weighted volume as in (1.7).
1307
+ Let µ0, µ1 ∈ P2(G+
1308
+ p ), µ0, µ1 ≪ mp, with bounded support contained in the Riemannian
1309
+ region {x > ε}, for some ε ≥ 0.
1310
+ Let (µs)s∈[0,1] be a W2-geodesic joining µ0 and µ1.
1311
+ By a well-known representation
1312
+ theorem (see [50, Cor. 7.22]), there exists ν ∈ P(Geo(G+
1313
+ p )), supported on the set
1314
+ Γ = (e0 × e1)−1(supp µ0 × supp µ1), such that µs = (es)♯ν for all s ∈ [0, 1]. Since the
1315
+ set {x ≥ ε} is a geodesically convex subset of the full Grushin plane Gp (by the same
1316
+ argument of [46, Prop. 5]), any γ ∈ Γ is contained for all times in the region {x > 0}.
1317
+ Therefore, Γ is a set of Riemannian geodesics contained in the weighted Riemannian struc-
1318
+ ture ({x > 0}, g, e−V volg). By Lemma 3.7, we have RicN,V ≥ 0 for all N ≥ Np, where Np
1319
+ is as in (1.8). At this point, a standard argument shows that the Rényi entropy is convex
1320
+ along Wasserstein geodesics joining µ0 with µ1, see the proof of [49, Th. 1.7] for example.
1321
+ The extension to µ0, µ1 ∈ P2(G+
1322
+ p ), with µ0, µ1 ≪ mp and compact support possibly
1323
+ touching the singular region {x = 0}, is achieved via a standard approximation argument.
1324
+ More precisely, one reduces to the previous case and exploits the stability of optimal
1325
+ transport [50, Th. 28.9] and the lower semi-continuity of the Rényi entropy [50, Th. 29.20].
1326
1329
+ Finally, the extension to general µ0, µ1 ∈ P2(G+
1330
+ p ) follows the routine argument outlined
1331
+ in [9, Rem. 2.12], which works when µs = (es)♯ν, s ∈ [0, 1], and ν is concentrated on a set
1332
+ of non-branching geodesics. This proves the ‘if’ part of the statement.
1333
+ The ‘only if’ part is also standard. The CD(0, N) condition for N > 2 implies that, on
1334
+ the Riemannian region {x > 0}, RicN,V ≥ 0, but this is false for N < Np.
1335
+ The fact that G+
1336
+ p is infinitesimally Hilbertian follows from Remark 1.9, by noting that
1337
+ mp is positive and smooth out of the closed set {x = 0}, which has zero measure. An
1338
+ alternative proof follows from the observation that G+
1339
+ p is a Ricci limit, see [42].
1340
+
1341
+ Appendix A. Gradient and Laplacian representation formulas
1342
+ For the reader’s convenience, in this appendix we provide a short proof of the repre-
1343
+ sentation formulas (2.5) and (2.7), in the rank-varying case.
1344
+ Lemma A.1. For λ ∈ T ∗M, let λ# ∈ D be uniquely defined by
1345
+ g(λ#, V ) = ⟨λ, V ⟩
1346
+ for all V ∈ D, where ⟨·, ·⟩ denotes the action of covectors on vectors. Then
1347
+ ∥λ#∥² = ∑_{i=1}^{L} g(λ#, Xi)².   (A.1)
1355
+ As a consequence, if λ, µ ∈ T ∗M, then
1356
+ g(λ#, µ#) = ∑_{i=1}^{L} ⟨λ, Xi⟩ ⟨µ, Xi⟩.   (A.2)
1362
+ Proof. Given u ∈ RL, we set Xu = ∑_{i=1}^{L} ui Xi and define
+ u∗ ∈ argmin { v ↦ |v| : v ∈ RL, Xv = Xu }.
1369
+ In other words, for Xu ∈ D, u∗ is the element of minimal Euclidean norm such that
1370
+ Xu∗ = Xu. Note that, by definition, it holds ∥Xu∥ = |u∗|. We thus have
1371
+ ∥λ#∥ = sup { g(λ#, X) : ∥X∥ = 1, X ∈ D } = sup { g(λ#, Xu) : |u∗| = 1, u ∈ RL }.
1379
+ We now claim that
1380
+ sup { g(λ#, Xu) : |u∗| = 1, u ∈ RL } = sup { g(λ#, Xu) : |u| = 1, u ∈ RL }.   (A.3)
1388
+ Indeed, the inequality ≤ in (A.3) is obtained by observing that Xu = Xu∗ for any u ∈ RL.
1389
+ To prove the inequality ≥ in (A.3), we observe that, if u ∈ RL is such that |u| = 1 and
1390
+ 0 < |u∗| < 1, then v = u/|u∗| satisfies |v∗| = 1 and gives
1391
+ g(λ#, Xv) > g(λ#, Xv) |u∗| = g(λ#, Xu).
1392
+ (A.4)
1393
+ Furthermore, if |u| = 1 and u∗ = 0, then Xu = 0, so also in this case we find v ∈ RL with
+ |v∗| = 1 such that (A.4) holds. This ends the proof of the claimed (A.3). Hence, since
1395
+ g(λ#, Xu) = ∑_{i=1}^{L} g(λ#, Xi) ui,
1400
1403
+ we easily conclude that
1404
+ ∥λ#∥ = sup { g(λ#, Xu) : |u| = 1, u ∈ RL } = √( ∑_{i=1}^{L} g(λ#, Xi)² ),
1416
+ proving (A.1). Equality (A.2) then follows by polarization.
1417
+
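+ Identity (A.1) can also be checked numerically for a redundant generating family (L larger than the rank), using the minimal-norm description of g from the proof above. The numbers below are arbitrary; the script is only a sanity check under these assumptions, not part of the proof.
+ import numpy as np
+ 
+ rng = np.random.default_rng(0)
+ X = rng.normal(size=(2, 3))                      # columns X_1, X_2, X_3 spanning D = R^2 (L = 3)
+ G = X @ X.T                                      # minimal-norm metric: g(v, w) = v^T (X X^T)^{-1} w
+ lam = rng.normal(size=2)                         # a covector lambda
+ lam_sharp = G @ lam                              # g(lam_sharp, w) = <lam, w> for all w
+ lhs = lam_sharp @ np.linalg.inv(G) @ lam_sharp   # ||lam_sharp||^2
+ rhs = np.sum((lam @ X)**2)                       # sum_i <lam, X_i>^2
+ print(np.isclose(lhs, rhs))                      # True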
1418
+ Corollary A.2. The following formulas hold:
1419
+ ∇u = ∑_{i=1}^{L} (Xiu) Xi,   (A.5)
+ ∆u = ∑_{i=1}^{L} ( Xi²u + Xiu divm(Xi) ),   (A.6)
+ g(∇u, ∇v) = ∑_{i=1}^{L} Xiu Xiv,   (A.7)
1441
+ for all u, v ∈ C∞(M). In particular, ∥∇u∥ = √( ∑_{i=1}^{L} (Xiu)² ) for all u ∈ C∞(M).
1444
+ Proof. We prove each formula separately.
1445
+ Proof of (A.5). Recalling the definition in (2.4), we can pick λ = du in (A.2) to get
1446
+ ⟨du, µ#⟩ = g(∇u, µ#) = ∑_{i=1}^{L} ⟨du, Xi⟩⟨µ, Xi⟩ = ∑_{i=1}^{L} Xiu ⟨µ, Xi⟩ = g( µ#, ∑_{i=1}^{L} (Xiu) Xi )
1465
+ whenever µ ∈ T∗_x M. Since the map #: T∗_x M → Dx is surjective, we immediately get (A.5).
1468
+ Proof of (A.6). Recall that
1469
+ divm(fX) = Xf + f divm(X)
1470
+ for any f ∈ C∞(M) and X ∈ Γ(TM). Hence, from the definition in (2.6), we can compute
1471
+ ∆u = divm(∇u) = ∑_{i=1}^{L} divm(Xiu Xi) = ∑_{i=1}^{L} ( Xi²u + Xiu divm(Xi) ),
1484
+ which is the desired (A.6).
1485
+ Proof of (A.7). Choosing λ = du and µ = dv in (A.2), we can compute
1486
+ g(∇u, ∇v) = ∑_{i=1}^{L} ⟨du, Xi⟩ ⟨dv, Xi⟩ = ∑_{i=1}^{L} Xiu Xiv
1495
+ and the proof is complete.
1496
+
1497
1500
+ References
1501
+ [1] A. Agrachev, D. Barilari, and L. Rizzi, Curvature: a variational approach, Mem. Amer. Math. Soc.
1502
+ 256 (2018), no. 1225, v+142.
1503
+ [2] A. Agrachev, D. Barilari, and U. Boscain, A comprehensive introduction to sub-Riemannian geome-
1504
+ try, Cambridge Studies in Advanced Mathematics, vol. 181, Cambridge University Press, Cambridge,
1505
+ 2020. From the Hamiltonian viewpoint, With an appendix by Igor Zelenko.
1506
+ [3] L. Ambrosio, Calculus, heat flow and curvature-dimension bounds in metric measure spaces, Pro-
1507
+ ceedings of the International Congress of Mathematicians—Rio de Janeiro 2018. Vol. I. Plenary
1508
+ lectures, 2018, pp. 301–340.
1509
+ [4] L. Ambrosio, N. Gigli, A. Mondino, and T. Rajala, Riemannian Ricci curvature lower bounds in
1510
+ metric measure spaces with σ-finite measure, Trans. Amer. Math. Soc. 367 (2015), no. 7, 4661–4701.
1511
+ [5] L. Ambrosio, N. Gigli, and G. Savaré, Density of Lipschitz functions and equivalence of weak gradients
1512
+ in metric measure spaces, Rev. Mat. Iberoam. 29 (2013), no. 3, 969–996.
1513
+ [6]
1514
+ , Metric measure spaces with Riemannian Ricci curvature bounded from below, Duke Math.
1515
+ J. 163 (2014), no. 7, 1405–1490.
1516
+ [7]
1517
+ , Bakry-Émery curvature-dimension condition and Riemannian Ricci curvature bounds, Ann.
1518
+ Probab. 43 (2015), no. 1, 339–404.
1519
+ [8] L. Ambrosio and G. Stefani, Heat and entropy flows in Carnot groups, Rev. Mat. Iberoam. 36 (2020),
1520
+ no. 1, 257–290.
1521
+ [9] K. Bacher and K.-T. Sturm, Localization and tensorization properties of the curvature-dimension
1522
+ condition for metric measure spaces, J. Funct. Anal. 259 (2010), no. 1, 28–56.
1523
+ [10] D. Bakry, I. Gentil, and M. Ledoux, Analysis and geometry of Markov diffusion operators,
1524
+ Grundlehren der mathematischen Wissenschaften [Fundamental Principles of Mathematical Sci-
1525
+ ences], vol. 348, Springer, Cham, 2014.
1526
+ [11] Z. M. Balogh, A. Kristály, and K. Sipos, Geometric inequalities on Heisenberg groups, Calc. Var.
1527
+ Partial Differential Equations 57 (2018), no. 2, Paper No. 61, 41.
1528
+ [12]
1529
+ , Jacobian determinant inequality on corank 1 Carnot groups with applications, J. Funct.
1530
+ Anal. 277 (2019), no. 12, 108293, 36.
1531
+ [13] D. Barilari, A. Mondino, and L. Rizzi, Unified synthetic Ricci curvature lower bounds for Riemannian
1532
+ and sub-Riemannian structures, 2022. Preprint, available at arXiv:2211.07762.
1533
+ [14] D. Barilari and L. Rizzi, Sub-Riemannian interpolation inequalities, Invent. Math. 215 (2019), no. 3,
1534
+ 977–1038.
1535
+ [15]
1536
+ , Bakry-Émery curvature and model spaces in sub-Riemannian geometry, Math. Ann. 377
1537
+ (2020), no. 1-2, 435–482.
1538
+ [16] F. Baudoin and M. Bonnefont, Reverse Poincaré inequalities, isoperimetry, and Riesz transforms in
1539
+ Carnot groups, Nonlinear Anal. 131 (2016), 48–59.
1540
+ [17] F. Baudoin and N. Garofalo, Curvature-dimension inequalities and Ricci lower bounds for sub-
1541
+ Riemannian manifolds with transverse symmetries, J. Eur. Math. Soc. (JEMS) 19 (2017), no. 1,
1542
+ 151–219.
1543
+ [18] A. Bellaïche, The tangent space in sub-Riemannian geometry, Sub-Riemannian geometry, 1996, pp. 1–
1544
+ 78.
1545
+ [19] F. Cavalletti and A. Mondino, Sharp and rigid isoperimetric inequalities in metric-measure spaces
1546
+ with lower Ricci curvature bounds, Invent. Math. 208 (2017), no. 3, 803–849.
1547
+ [20] X. Dai, S. Honda, J. Pan, and G. Wei, Singular Weyl’s law with Ricci curvature bounded below, 2022.
1548
+ Preprint, available at arXiv:2208.13962.
1549
+ [21] G. De Philippis and N. Gigli, Non-collapsed spaces with Ricci curvature bounded from below, J. Éc.
1550
+ polytech. Math. 5 (2018), 613–650.
1551
+ [22] Q. Deng, Hölder continuity of tangent cones in RCD(k, n) spaces and applications to non-branching,
1552
+ 2022. Preprint, available at arXiv:2009.07956v2.
1553
+ [23] B. K. Driver and T. Melcher, Hypoelliptic heat kernel inequalities on the Heisenberg group, J. Funct.
1554
+ Anal. 221 (2005), no. 2, 340–365.
1555
1558
+ [24] M. Erbar, K. Kuwada, and K.-T. Sturm, On the equivalence of the entropic curvature-dimension
1559
+ condition and Bochner’s inequality on metric measure spaces, Invent. Math. 201 (2015), no. 3, 993–
1560
+ 1071.
1561
+ [25] N. Gigli, On the differential structure of metric measure spaces and applications, Mem. Amer. Math.
1562
+ Soc. 236 (2015), no. 1113, vi+91.
1563
+ [26] M. Gromov, Structures métriques pour les variétés riemanniennes, Textes Mathématiques [Mathe-
1564
+ matical Texts], vol. 1, CEDIC, Paris, 1981. Edited by J. Lafontaine and P. Pansu.
1565
+ [27] P. Hajłasz and P. Koskela, Sobolev met Poincaré, Mem. Amer. Math. Soc. 145 (2000), no. 688,
1566
+ x+101.
1567
+ [28] J. Heinonen, Nonsmooth calculus, Bull. Amer. Math. Soc. (N.S.) 44 (2007), no. 2, 163–232.
1568
+ [29] Y. Huang and S. Sun, Non-embedding theorems of nilpotent Lie groups and sub-Riemannian mani-
1569
+ folds, Front. Math. China 15 (2020), no. 1, 91–114.
1570
+ [30] F. Jean, Control of nonholonomic systems: from sub-Riemannian geometry to motion planning,
1571
+ SpringerBriefs in Mathematics, Springer, Cham, 2014.
1572
+ [31] N. Juillet, Geometric inequalities and generalized Ricci bounds in the Heisenberg group, Int. Math.
1573
+ Res. Not. IMRN 13 (2009), 2347–2373.
1574
+ [32]
1575
+ , On a method to disprove generalized Brunn-Minkowski inequalities, Probabilistic approach
1576
+ to geometry, 2010, pp. 189–198.
1577
+ [33]
1578
+ , Sub-Riemannian structures do not satisfy Riemannian Brunn-Minkowski inequalities, Rev.
1579
+ Mat. Iberoam. 37 (2021), no. 1, 177–188.
1580
+ [34] V. Kapovitch, C. Ketterer, and K.-T. Sturm, On gluing Alexandrov spaces with lower Ricci curvature
1581
+ bounds, Comm. Anal. Geom. (in press).
1582
+ [35] E. Le Donne, D. Lučić, and E. Pasqualetto, Universal infinitesimal Hilbertianity of sub-Riemannian
1583
+ manifolds, Potential Anal. (2022).
1584
+ [36] J. Lott and C. Villani, Ricci curvature for metric-measure spaces via optimal transport, Ann. of
1585
+ Math. (2) 169 (2009), no. 3, 903–991.
1586
+ [37] M. Magnabosco and T. Rossi, Almost-Riemannian manifolds do not satisfy the CD condition (2022).
1587
+ Preprint, available at arXiv:2202.08775.
1588
+ [38] E. Milman, The quasi curvature-dimension condition with applications to sub-Riemannian manifolds,
1589
+ Comm. Pure Appl. Math. 74 (2021), no. 12, 2628–2674.
1590
+ [39] A. Mondino and A. Naber, Structure theory of metric measure spaces with lower Ricci curvature
1591
+ bounds, J. Eur. Math. Soc. (JEMS) 21 (2019), no. 6, 1809–1854.
1592
+ [40] R. Montgomery, A tour of subriemannian geometries, their geodesics and applications, Mathematical
1593
+ Surveys and Monographs, vol. 91, American Mathematical Society, Providence, RI, 2002.
1594
+ [41] J. Pan, The Grushin hemisphere as a Ricci limit space with curvature ≥ 1, 2022. Preprint, available
1595
+ at arXiv:2211.02747v2.
1596
+ [42] J. Pan and G. Wei, Examples of Ricci limit spaces with non-integer Hausdorff dimension, Geom.
1597
+ Funct. Anal. 32 (2022), no. 3, 676–685.
1598
+ [43] G. Perelman, Alexandrov’s spaces with curvatures bounded from below II, 1991. Unpublished, a copy
1599
+ the manuscript can be found on Anton Petrunin’s website.
1600
+ [44] A. Petrunin, Applications of quasigeodesics and gradient curves, Comparison geometry (Berkeley,
1601
+ CA, 1993–94), 1997, pp. 203–219.
1602
+ [45] L. Rifford, Sub-Riemannian geometry and optimal transport, SpringerBriefs in Mathematics,
1603
+ Springer, Cham, 2014.
1604
+ [46] L. Rizzi, A counterexample to gluing theorems for MCP metric measure spaces, Bull. Lond. Math.
1605
+ Soc. 50 (2018), no. 5, 781–790.
1606
+ [47] G. Stefani, Generalized Bakry-Émery curvature condition and equivalent entropic inequalities in
1607
+ groups, J. Geom. Anal. 32 (2022), no. 4, Paper No. 136, 98.
1608
+ [48] K.-T. Sturm, On the geometry of metric measure spaces. I, Acta Math. 196 (2006), no. 1, 65–131.
1609
+ [49]
1610
+ , On the geometry of metric measure spaces. II, Acta Math. 196 (2006), no. 1, 133–177.
1611
+ [50] C. Villani, Optimal transport, Grundlehren der mathematischen Wissenschaften [Fundamental Prin-
1612
+ ciples of Mathematical Sciences], vol. 338, Springer-Verlag, Berlin, 2009. Old and new.
1613
1616
+ (L. Rizzi) Scuola Internazionale Superiore di Studi Avanzati (SISSA), via Bonomea 265,
1617
+ 34136 Trieste (TS), Italy
1618
+ Email address: [email protected]
1619
+ (G. Stefani) Scuola Internazionale Superiore di Studi Avanzati (SISSA), via Bonomea 265,
1620
+ 34136 Trieste (TS), Italy
1621
1622
+
1tAyT4oBgHgl3EQf1flH/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
29AyT4oBgHgl3EQfPvbO/content/tmp_files/2301.00032v1.pdf.txt ADDED
@@ -0,0 +1,1717 @@
 
 
1
+ Bayesian Learning for Dynamic Inference
2
+ Aolin Xu
3
+ Peng Guan
4
+ Abstract
5
+ The traditional statistical inference is static, in the sense that the estimate of the quantity
6
+ of interest does not affect the future evolution of the quantity. In some sequential estimation
7
+ problems however, the future values of the quantity to be estimated depend on the estimate of
8
+ its current value. This type of estimation problems has been formulated as the dynamic inference
9
+ problem. In this work, we formulate the Bayesian learning problem for dynamic inference,
10
+ where the unknown quantity-generation model is assumed to be randomly drawn according to
11
+ a random model parameter. We derive the optimal Bayesian learning rules, both offline and
12
+ online, to minimize the inference loss. Moreover, learning for dynamic inference can serve as a
13
+ meta problem, such that all familiar machine learning problems, including supervised learning,
14
+ imitation learning and reinforcement learning, can be cast as its special cases or variants. Gaining
15
+ a good understanding of this unifying meta problem thus sheds light on a broad spectrum of
16
+ machine learning problems as well.
17
+ 1
18
+ Introduction
19
+ 1.1
20
+ Dynamic inference
21
+ Traditional statistical estimation, or statistical inference in general is static, in the sense that the
22
+ estimate of the quantity of interest does not affect the future evolution of the quantity. In some
23
+ sequential estimation problems however, we do encounter the situation where the future value of the
24
+ quantity to be estimated depends on the estimate of its current value. Examples include 1) stock
25
+ price prediction by big investors, where the prediction of the tomorrow’s price of a stock affects
26
+ tomorrow’s investment decision, which further changes the stock’s supply-demand status and hence
27
+ its price the day after tomorrow; 2) interactive product recommendation, where the estimate of
28
+ a user’s preference based on the user’s activity leads to certain product recommendations to the
29
+ user, which would in turn shape the user’s future activity and preference; 3) behavior prediction in
30
+ multi-agent systems, e.g. vehicles on the road, where the estimate of an adjacent vehicle’s intention
31
+ based on its current driving situation leads to a certain action of the ego vehicle, which can change
32
+ the future driving situation and intention of the adjacent vehicle. We may call such problems
33
+ dynamic inference, which is formulated and studied in depth in [1]. It is shown that the problem of
34
+ dynamic inference can be converted to an Markov decision-making process (MDP), and the optimal
35
+ estimation strategy can be derived through dynamic programming. We give a brief overview of the
36
+ problem of dynamic inference in Section 2.
37
+ 1.2
38
+ Learning for dynamic inference
39
+ There are two major ingredients in dynamic inference: the probability transition kernels of the
40
+ quantity of interest given each observation, and the probability transition kernels of the next
41
+ 1
42
+ arXiv:2301.00032v1 [cs.LG] 30 Dec 2022
43
+
44
+ observation given the current observation and the estimate of the current quantity of interest. We
45
+ may call them the quantity-generation model and the observation-transition model, respectively.
46
+ Solving the dynamic inference problem requires the knowledge of the two models. However, in most
47
+ of the practically interesting situations, we do not have such knowledge. Instead, we either have a
48
+ training dataset from which we can learn these models or we can learn them on-the-fly during the
49
+ inference.
50
+ In this work, we set up the learning problem in a Bayesian framework, and derive the optimal
51
+ learning rules, both offline (Section 3) and online (Section 4), for dynamic inference under this
52
+ framework. Specifically, we assume the unknown models are elements in some parametric families
53
+ of probability transition kernels, and the unknown model parameters are randomly drawn according
54
+ to some prior distributions. The goal is then to find an optimal Bayesian learning rule, which can
55
+ return an estimation strategy that minimizes the inference loss. The approach we take toward this
56
+ goal is converting the learning problem to an MDP with an augmented state, which consists of
57
+ the current observation and a belief vector of the unknown parameters, and solving the MDP by
58
+ dynamic programming over the augmented state space. The solution, though optimal, may still be
59
+ computationally challenging unless the belief vector can be compactly represented. Nevertheless, it
60
+ already has a greatly reduced search space compared to the original learning problem, and provides
61
+ a theoretical basis for the design of more computationally efficient approximate solutions.
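+ As a small illustration of the belief-vector idea (the candidate kernel and the numbers below are hypothetical placeholders, not part of the formal development): when the unknown model parameter ranges over a finite set, the belief component of the augmented state is a probability vector updated by Bayes' rule after each observation.
+ import numpy as np
+ 
+ thetas = np.array([0.2, 0.5, 0.8])             # candidate parameters of K_{Y|X}
+ belief = np.full(len(thetas), 1/len(thetas))   # prior belief over the unknown parameter
+ 
+ def likelihood(y, x, theta):                   # hypothetical Gaussian kernel K_{Y|X}(y | x; theta)
+     return np.exp(-0.5*(y - theta*x)**2)/np.sqrt(2*np.pi)
+ 
+ x_obs, y_obs = 1.0, 0.45                       # one observed pair (x, y)
+ belief = belief*np.array([likelihood(y_obs, x_obs, t) for t in thetas])
+ belief = belief/belief.sum()                   # posterior belief entering the next augmented state
+ print(belief)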
62
+ Perhaps equally importantly, the problem of learning for dynamic inference can serve as a meta
63
+ problem, such that almost all familiar learning problems can be cast as its special cases or variants.
64
+ Examples include supervised learning, imitation learning, and reinforcement learning, including
65
+ bandit and contextual bandit problems. For instance, the Bayesian offline learning for dynamic
66
+ inference can be viewed as an extension of the behavior cloning method in imitation learning [2–4],
67
+ in that it not only learns the demonstrator’s action-generation model, but simultaneously learns a
68
+ policy based on the learned model to minimize the overall imitation error. As another instance, the
69
+ quantity to be estimated in dynamic inference may be viewed as a latent variable of the loss function,
70
+ so that the Bayesian online learning for dynamic inference can be viewed as Bayesian reinforcement
71
+ learning [5–8], where an optimal policy is learned by estimating the unknown loss function. Learning
72
+ for dynamic inference thus provides us with a unifying formulation of different learning problems.
73
+ Having a good understanding of this problem is helpful for gaining better understandings of the
74
+ other learning problems as well.
75
+ 1.3
76
+ Relation to existing works
77
+ The problem of dynamic inference and learning for dynamic inference appear to be new, but it
78
+ can be viewed from different angles, and is related to a variety of existing problems. The most
79
+ intimately related works are the original formulations of imitation learning [9]. The online learning for
+ dynamic inference is closely related to and subsumes Bayesian reinforcement learning. Some recent
+ studies on Bayesian reinforcement learning and interactive decision making include [10,11].
82
+ A problem formulation with a similar spirit in a minimax framework appeared recently in [12]. In
83
+ that work, an adversarial online learning problem where the action in each round affects the future
84
+ observed data is set up. It may be viewed as adversarial online learning for dynamic minimax
85
+ inference, from our standpoint. The advantage of the Bayesian formulation is that all the variables
86
+ under consideration, including the unknown model parameters, are generated from some fixed joint
87
+ distribution, thus the optimality of learning can be defined and the optimal learning rule can be
88
+ derived. In contrast, with the adversarial formulation, only certain definitions of regret can be
+ studied.
92
+ The overall optimality proof technique we adopt is similar to those used in solving partially
93
+ observed MDP (POMDP) and Bayesian reinforcement learning over the augmented belief space
94
+ [13,14]. Several proofs are adapted from the rigorous exposition of the optimality of the belief-state
95
+ MDP reformulation of the POMDP [15].
96
+ As mentioned in the previous subsection, Bayesian learning for dynamic inference can be viewed
97
+ as a unifying formulation for Bayesian imitation learning and Bayesian reinforcement learning.
98
+ Relevant work on imitation learning is surveyed in [16–18], and on Bayesian reinforcement
+ learning in [8,19–22].
100
+ 2
101
+ Overview of dynamic inference
102
+ 2.1
103
+ Problem formulation
104
+ The problem of an n-round dynamic inference is to estimate n unknown quantities of interest
105
+ Y n sequentially based on observations Xn, where in the ith round of estimation, Xi depends on
106
+ the observation Xi−1 and the estimate �Yi−1 of Yi−1 in the previous round, while the quantity of
107
+ interest Yi only depends on Xi, and the estimate �Yi of Yi can depend on everything available so
108
+ far, namely (Xi, �Y i−1), through an estimator ψi as �Yi = ψi(Xi, �Y i−1). The sequence of estimators
109
+ ψn = (ψ1, . . . , ψn) constitutes an estimation strategy. We assume knowledge of the distribution PX1 of the
110
+ initial observation, and the probability transition kernels (KXi|Xi−1,�Yi−1)n
111
+ i=2 and (KYi|Xi)n
112
+ i=1. These
113
+ distributions and ψn define a joint distribution of (Xn, Y n, �Y n), all the variables under consideration.
114
+ The Bayesian network of the random variables in dynamic inference with a Markov estimation
115
+ strategy, meaning that each estimator has the form ψi : X → �Y, is illustrated in Fig. 1.
116
+ Figure 1: Bayesian network of the random variables under consideration with n = 4. Here we
117
+ assume the estimates are made with Markov estimators, such that �Yi = ψi(Xi).
118
+ The goal of dynamic inference can then be formally stated as finding an estimation strategy to
119
+ minimize the accumulated expected loss over the n-rounds:
120
+ arg min
121
+ ψn
122
+ E
123
+
124
+ n
125
+
126
+ i=1
127
+ ℓ(Xi, Yi, �Yi)
128
+
129
+ ,
130
+ �Yi = ψi(Xi, �Y i−1)
131
+ (1)
132
+ where ℓ : X×Y× �Y → R is a loss function that evaluates the estimate made in each round. Compared
133
+ with the traditional statistical inference under the Bayesian formulation, where the goal is to find
134
+ an estimator ψ of a random quantity Y based on a jointly distributed observation X to minimize
135
+ E[ℓ(Y, ψ(X))], we summarize the two distinctive features of dynamic inference in (1):
136
+ • The joint distribution of the pair (Xi, Yi) changes in each round in a controlled manner, as it
152
+ depends on (Xi−1, �Yi−1);
153
+ • The loss in each round is contextual, as it depends on Xi.
154
+ 2.2
155
+ Optimal estimation strategy for dynamic inference
156
+ It is shown in [1] that optimization problem in (1) is equivalent to
157
+ arg min
158
+ ψn
159
+ E
160
+
161
+ n
162
+
163
+ i=1
164
+ ¯ℓ(Xi, �Yi)
165
+
166
+ ,
167
+ (2)
168
+ where ¯ℓ(x, ˆy) ≜ E[ℓ(x, Y, ˆy)|X = x, �Y = ˆy], and for any realization (xi, ˆyi) of (Xi, �Yi), it can be
169
+ computed as ¯ℓ(xi, ˆyi) = E[ℓ(xi, Yi, ˆyi)|Xi = xi]. With this reformulation, the unknown quantities Yi
170
+ do not appear in the loss function any more, and the optimization problem becomes a standard
171
+ MDP. The observations Xn become the states in this MDP, the estimates �Y n become the actions,
172
+ the probability transition kernel KXi|Xi−1,�Yi−1 now defines the controlled state transition, and any
173
+ estimation strategy ψn becomes a policy of this MDP. The goal becomes finding an optimal policy
174
+ for this MDP to minimize the accumulated expected loss defined w.r.t. ¯ℓ. The solution to the MDP
175
+ will be an optimal estimation strategy for dynamic inference.
176
+ From the theory of MDP it is known that the optimal estimators (ψ∗
177
+ 1, . . . , ψ∗
178
+ n) for the optimization
179
+ problem in (2) can be Markov, meaning that ψ∗
180
+ i can take only Xi as input, and the values of the
181
+ optimal estimates ψ∗
182
+ i (x) for i = 1, . . . , n and x ∈ X can be found via dynamic programming.
183
+ Define the functions Q∗
184
+ i : X × �Y → R and V ∗
185
+ i : X → R recursively as Q∗
186
+ n(x, ˆy) ≜ ¯ℓ(x, ˆy), V ∗
187
+ i (x) ≜
188
+ minˆy∈�Y Q∗
189
+ i (x, ˆy) for i = n, . . . , 1, and Q∗
190
+ i (x, ˆy) ≜ ¯ℓ(x, ˆy) + E[V ∗
191
+ i+1(Xi+1)|Xi = x, �Yi = ˆy] for i =
192
+ n − 1, . . . , 1. The optimal estimate to make in the ith round when Xi = x is then
193
+ ψ∗
194
+ i (x) ≜ arg min
195
+ ˆy∈�Y
196
+ Q∗
197
+ i (x, ˆy).
198
+ (3)
199
+ It is shown that the estimators (ψ∗
200
+ 1, . . . , ψ∗
201
+ n) defined in (3) achieve the minimum in (1). Moreover,
202
+ for any i = 1, . . . , n and any initial distribution PXi,
203
+ min
204
+ ψi,...,ψn E
205
+
206
+ n
207
+
208
+ j=i
209
+ ℓ(Xj, Yj, �Yj)
210
+
211
+ = E[V ∗
212
+ i (Xi)],
213
+ (4)
214
+ with the minimum achieved by (ψ∗
215
+ i , . . . , ψ∗
216
+ n). As shown by the examples in [1], the implication of
217
+ the optimal estimation strategy is that, in each round of estimation, the estimate to make is not
218
+ necessarily the optimal single-round estimate in that round, but one which takes into account the
219
+ accuracy in that round, and tries to steer the future observations toward those with which the
220
+ quantities of interest tend to be easy to estimate.
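+ To make the recursion in (3)-(4) concrete, the following is a minimal sketch (not taken from [1]) of
+ the backward dynamic program for dynamic inference with known models, assuming finite observation,
+ quantity and estimate spaces encoded as integer indices; the array names K_Y, K_X and loss are
+ illustrative placeholders rather than notation used in the paper.
+ # Illustrative sketch: backward dynamic programming for dynamic inference.
+ import numpy as np
+
+ def solve_dynamic_inference(n, K_Y, K_X, loss):
+     # K_Y[x, y]      = P(Y_i = y | X_i = x)
+     # K_X[x, yh, x2] = P(X_{i+1} = x2 | X_i = x, estimate yh)
+     # loss[x, y, yh] = loss incurred by estimating yh when (X_i, Y_i) = (x, y)
+     # lbar[x, yh] = E[loss(x, Y, yh) | X = x], the reformulated loss in (2)
+     lbar = np.einsum('xy,xyh->xh', K_Y, loss)
+     V = np.zeros(lbar.shape[0])       # value beyond the final round
+     psi = [None] * n
+     for i in range(n - 1, -1, -1):    # rounds are 0-indexed here
+         Q = lbar + np.einsum('xhz,z->xh', K_X, V)   # Q*_i(x, yh)
+         psi[i] = Q.argmin(axis=1)                   # psi*_i(x), as in (3)
+         V = Q.min(axis=1)                           # V*_i(x)
+     return V, psi                     # PX1 @ V equals the minimum in (4) for i = 1
+ # (end of illustrative sketch)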
221
+ 3
222
+ Bayesian offline learning for dynamic inference
223
+ Solving dynamic inference requires the knowledge of the quantity-generation models (KYi|Xi)n
224
+ i=1
225
+ and the observation-transition models (KXi|Xi−1,�Yi−1)n
226
+ i=2. In most of the practically interesting
227
+ situations however, we may not have such knowledge. Instead we may have a training dataset from
228
230
+ which we can learn these models, or may learn them on-the-fly during inference. In this section and
231
+ the next one, we study the offline learning and the online learning problems for dynamic inference
232
+ respectively, with unknown quantity-generation models but known observation transition models.
233
+ This is already a case of sufficient interest, as the observation-transition models in many problems, e.g.
+ imitation learning, are available.
235
+ observation-transition models are also unknown. In that case, the solution will have the same form,
236
+ but a further-augmented state with a belief vector of the observation-transition model parameter;
237
+ and the belief update has two parts, separately for the parameters of the quantity-generation model
238
+ and the observation transition model.
239
+ Formally, in this section we assume that the initial distribution PX1 and the probability transition
240
+ kernels (KXi|Xi−1,�Yi−1)n
241
+ i=2 are still known, while the unknown KYi|Xi’s are the same element PY |X,W
242
+ of a parametrized family of kernels {PY |X,w, w ∈ W} and the unknown parameter W is a random
243
+ element of W with prior distribution PW . The training data Zm consists of m samples, and is drawn
244
+ from some distribution PZm|W with W as a parameter. This setup is quite flexible, in that the Zm
245
+ need not be generated in the same way as the data generated during inference. One example is a
246
+ setup similar to imitation learning, where Zm = ((X′
247
+ 1, Y ′
248
+ 1), . . . , (X′
249
+ m, Y ′
250
+ m)) and
251
+ PZm|W = PX′
252
+ 1KY ′
253
+ 1|X′
254
+ 1
255
+ n
256
+
257
+ i=2
258
+ KX′
259
+ i|X′
260
+ i−1,Y ′
261
+ i−1KY ′
262
+ i |X′
263
+ i
264
+ (5)
265
+ with PX′
266
+ 1 = PX1, (KX′
267
+ i|X′
268
+ i−1,�Y ′
269
+ i−1)n
270
+ i=2 = (KXi|Xi−1,�Yi−1)n
271
+ i=2, and KY ′
272
+ i |X′
273
+ i = KYi|Xi = PY |X,W for
274
+ i = 1, . . . , n. With a training dataset, we can define the offline-learned estimation strategy for
275
+ dynamic inference as follows.
276
+ Definition 1. An offline-learned estimation strategy with an m-sample training dataset for an
277
+ n-round dynamic inference is a sequence of estimators ψn
278
+ m = (ψm,1, . . . , ψm,n), where ψm,i : (X ×
279
+ �Y)m × Xi × �Yi−1 → �Y is the estimator for the ith round of estimation, which maps the dataset Zm
280
+ as well as the past observations and estimates (Xi, �Y i−1) up to the ith round to an estimate �Yi of
281
+ Yi, such that �Yi = ψm,i(Zm, Xi, �Y i−1), i = 1, . . . , n.
282
+ Any specification of the above probabilistic models and an offline-learned estimation strategy
283
+ determines a joint distribution of the random variables (W, Zm, Xn, Y n, �Y n) under consideration.
284
+ The Bayesian network of the variables is shown in Fig. 2, where the training data is assumed to
285
+ be generated in the imitation learning setup. A crucial observation from the Bayesian network is
286
+ that W is conditionally independent of (Xn, �Y n) given Zm, as the quantities of interest Y n are not
287
+ observed. In other words, given the training data, no more information about W can be gained
288
+ during inference. We formally state this observation as the following lemma.
289
+ Lemma 1. In offline learning for dynamic inference, the parameter W is conditionally independent
290
+ of the observations and the estimates (Xn, �Y n) during inference given the training data Zm.
291
+ Given an offline-learned estimation strategy ψn
292
+ m for an n-round dynamic inference with an
293
+ m-sample training dataset, we can define its inference loss as E
294
+ � �n
295
+ i=1 ℓ(Xi, Yi, �Yi)
296
+ �. The goal of
297
+ offline learning is to find an offline-learned estimation strategy to minimize the inference loss:
298
+ arg min
299
+ ψn
300
+ m
301
+ E
302
+
303
+ n
304
+
305
+ i=1
306
+ ℓ(Xi, Yi, �Yi)
307
+
308
+ ,
309
+ with �Yi = ψm,i(Zm, Xi, �Y i−1).
310
+ (6)
311
313
+ Figure 2: Bayesian network of the random variables in offline learning for dynamic inference with
314
+ the imitation learning setup, with m = n = 4. Here we assume the estimates are made with Markov
315
+ estimators, such that �Yi = ψm,i(Zm, Xi).
316
+ 3.1
317
+ MDP reformulation
318
+ 3.1.1
319
+ Equivalent expression of inference loss
320
+ We first show that the inference loss in (6) can be expressed in terms of a loss function that does
321
+ not take the unknown Yi as input.
322
+ Theorem 1. For any offline-learned estimation strategy ψn
323
+ m, its inference loss can be written as
324
+ E
325
+
326
+ n
327
+
328
+ i=1
329
+ ℓ(Xi, Yi, �Yi)
330
+
331
+ = E
332
+
333
+ n
334
+
335
+ i=1
336
+ ˜ℓ(πm, Xi, �Yi)
337
+
338
+ ,
339
+ (7)
340
+ where πm(·) ≜ P[W ∈ ·|Zm] is the posterior distribution of the kernel parameter W given the training
341
+ dataset Zm, and ˜ℓ : ∆ × X × �Y → R, with ∆ being the space of probability distributions on W, is
342
+ defined as
343
+ ˜ℓ(π, x, ˆy) ≜
344
+
345
+ W
346
+
347
+ Y
348
+ π(dw)PY |X,W (dy|x, w)ℓ(x, y, ˆy).
349
+ (8)
350
+ The proof is given in Appendix A. Theorem 1 states that the inference loss of an offline-learned
351
+ estimation strategy ψn
352
+ m is equal to
353
+ J(ψn
354
+ m) ≜ E
355
+
356
+ n
357
+
358
+ i=1
359
+ ˜ℓ(πm, Xi, �Yi)
360
+
361
+ ,
362
+ (9)
363
+ with �Yi = ψm,i(Zm, Xi, �Y i−1). It follows that the offline learning problem in (6) can be equivalently
364
+ written as
365
+ arg min
366
+ ψn
367
+ m
368
+ J(ψn
369
+ m).
370
+ (10)
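+ As a concrete illustration of πm and of (8), the sketch below assumes a finite parameter set W and
+ finite X and Y; the function and array names are hypothetical. The posterior can be computed from an
+ imitation-style training set as in (5), where only the quantity-generation factors depend on w, and
+ ˜ℓ is then a posterior average of the single-round expected loss.
+ import numpy as np
+
+ def posterior_from_data(prior, P_Y_given_XW, data):
+     # prior[w]              : prior probability of parameter w
+     # P_Y_given_XW[w, x, y] : P(Y = y | X = x, W = w)
+     # data                  : [(x'_1, y'_1), ..., (x'_m, y'_m)] from the training set;
+     #                         the X-transition factors in (5) do not involve w and cancel.
+     log_post = np.log(prior)
+     for x, y in data:
+         log_post += np.log(P_Y_given_XW[:, x, y])
+     log_post -= log_post.max()        # for numerical stability
+     post = np.exp(log_post)
+     return post / post.sum()          # pi_m[w] = P(W = w | Z^m)
+
+ def l_tilde(pi, x, yhat, P_Y_given_XW, loss):
+     # Implements (8) for finite W and Y: average the expected loss over the belief pi.
+     return np.einsum('w,wy,y->', pi, P_Y_given_XW[:, x, :], loss[x, :, yhat])
+ # (end of illustrative sketch)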
371
+ 3.1.2
372
+ (πm, Xi)n
373
+ i=1 as a controlled Markov chain
374
+ Next, we show that the sequence (πm, Xi)n
375
+ i=1 appearing in (9) form a controlled Markov chain with
376
+ �Y n as the control sequence. In other words, the tuple (πm, Xi+1) depends on the history (πm, Xi, �Y i)
377
+ only through (πm, Xi, �Yi), as formally stated in the following lemma.
378
+ Lemma 2. Given any offline-learned estimation strategy ψn
402
+ m, we have
403
+ P
404
+ �(πm, Xi+1) ∈ A × B
405
+ ��πm, Xi, �Y i� = 1{πm ∈ A}P
406
+ �Xi+1 ∈ B|Xi, �Yi
407
+
408
+ (11)
409
+ for any Borel sets A ⊂ ∆ and B ⊂ X, any realization of (πm, Xi, �Y i), and any i = 1, . . . , n − 1.
410
+ The proof is given in Appendix B.
411
+ 3.1.3
412
+ Optimality of Markov offline-learned estimators
413
+ Furthermore, the next three lemmas will show that the search space of the minimization problem
414
+ in (10) can be restricted to Markov offline-learned estimators ¯ψm,i : ∆ × X → Y, such that
415
+ �Yi = ¯ψm,i(πm, Xi). We start with a generalization of Blackwell’s principle of irrelevant information.
416
+ Lemma 3 (Generalized Blackwell’s principle of irrelevant information). For any fixed functions
417
+ ℓ : Y × �Y → R and f : X → Y, the following equality holds:
418
+ min
419
+ g:X→�Y
420
+ E
421
+ �ℓ
422
+ �f(X), g(X)
423
+ �� = min
424
+ g:Y→�Y
425
+ E
426
+ �ℓ
427
+ �f(X), g(f(X))
428
+ ��.
429
+ (12)
430
+ Remark. The original Blackwell’s principle of irrelevant information, stating that for any fixed
431
+ function ℓ : Y × �Y → R,
432
+ min
433
+ g:X×Y→�Y
434
+ E
435
+ �ℓ
436
+ �Y, g(X, Y )
437
+ �� = min
438
+ g:Y→�Y
439
+ E
440
+ �ℓ
441
+ �Y, g(Y )
442
+ ��,
443
+ (13)
444
+ can be seen as a special case of the above lemma.
445
+ The proof of Lemma 3 is given in Appendix C. The first application of Lemma 3 is to prove that
446
+ the last estimator of an optimal offline-learned estimation strategy can be replaced by a Markov
447
+ one, which preserves the optimality.
448
+ Lemma 4 (Last-round lemma for offline learning). Given any offline-learned estimation strategy
449
+ ψn
450
+ m, there exists a Markov offline-learned estimator ¯ψm,n : ∆ × X → �Y, such that
451
+ J(ψm,1, . . . , ψm,n−1, ¯ψm,n) ≤ J(ψn
452
+ m).
453
+ (14)
454
+ The proof is given in Appendix D. Lemma 3 can be further used to prove that whenever the last
455
+ offline-learned estimator is Markov, the preceding estimator can also be replaced by a Markov one
456
+ which preserves the optimality.
457
+ Lemma 5 ((i − 1)th-round lemma for offline learning). For any i ≥ 2, given any offline-learned
458
+ estimation strategy (ψm,1, . . . , ψm,i−1, ¯ψm,i) for an i-round dynamic inference with an m-sample
459
+ training dataset, if the offline-learned estimator for the ith round of estimation is a Markov one
460
+ ¯ψm,i : ∆ × X → �Y, then there exists a Markov offline-learned estimator ¯ψm,i−1 : ∆ × X → �Y for the
461
+ (i − 1)th round, such that
462
+ J(ψm,1, . . . , ψm,i−2, ¯ψm,i−1, ¯ψm,i) ≤ J(ψm,1, . . . , ψm,i−1, ¯ψm,i).
463
+ (15)
464
+ The proof is given in Appendix E. With Lemma 4 and Lemma 5, we can prove the optimality of
465
+ Markov offline-learned estimators, as given in Appendix F.
466
+ Theorem 2. The minimum of J(ψn
467
+ m) in (10) can be achieved by an offline-learned estimation
468
+ strategy ¯ψn
469
+ m with Markov estimators ¯ψm,i : ∆ × X → �Y, i = 1, . . . , n, such that �Yi = ¯ψm,i(πm, Xi).
470
472
+ 3.1.4
473
+ Conversion to MDP
474
+ Theorem 1 and Theorem 2 with Lemma 2 imply that the original offline learning problem in (6) is
475
+ equivalent to
476
+ arg min
477
+ ψn
478
+ m
479
+ E
480
+
481
+ n
482
+
483
+ i=1
484
+ ˜ℓ(πm, Xi, �Yi)
485
+
486
+ ,
487
+ �Yi = ψm,i(πm, Xi),
488
+ (16)
489
+ and the sequence (πm, Xi)n
490
+ i=1 is a controlled Markov chain driven by �Y n. With this reformulation,
491
+ we see that the offline learning problem becomes a standard MDP. The tuples (πm, Xi)n
492
+ i=1 become
493
+ the states in this MDP, the estimates �Y n become the actions, the probability transition kernel
494
+ P (πm,Xi)|(πm,Xi−1),�Yi−1 now defines the controlled state transition, and any Markov offline-learned
495
+ estimation strategy ψn
496
+ m becomes a policy of this MDP. The goal of learning becomes finding the
497
+ optimal policy of the MDP to minimize the accumulated expected loss defined w.r.t. ˜ℓ. The solution
498
+ to this MDP will be an optimal offline-learned estimation strategy for dynamic inference.
499
+ 3.2
500
+ Solution via dynamic programming
501
+ 3.2.1
502
+ Optimal offline-learned estimation strategy
503
+ From the theory of MDP it is known that the optimal policy for the MDP in (16), namely the
504
+ optimal offline-learned estimation strategy, can be found via dynamic programming. To derive the
505
+ optimal estimators, define the functions Q∗
506
+ m,i : ∆ × X × �Y → R and V ∗
507
+ m,i : ∆ × X → R for offline
508
+ learning recursively for i = n, . . . , 1 as Q∗
509
+ m,n(π, x, ˆy) ≜ ˜ℓ(π, x, ˆy), and
510
+ V ∗
511
+ m,i(π, x) ≜ min
512
+ ˆy∈�Y
513
+ Q∗
514
+ m,i(π, x, ˆy),
515
+ i = n, . . . , 1
516
+ (17)
517
+ Q∗
518
+ m,i(π, x, ˆy) ≜ ˜ℓ(π, x, ˆy) + E[V ∗
519
+ m,i+1(π, Xi+1)|Xi = x, �Yi = ˆy],
520
+ i = n − 1, . . . , 1
521
+ (18)
522
+ with ˜ℓ as defined in (8), and the conditional expectation in (18) is taken w.r.t. Xi+1. The optimal
523
+ offline-learned estimate to make in the ith round when πm = π and Xi = x is then
524
+ ψ∗
525
+ m,i(π, x) ≜ arg min
526
+ ˆy∈�Y
527
+ Q∗
528
+ m,i(π, x, ˆy).
529
+ (19)
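+ Since πm is fixed once the training data are observed (Lemma 1), the recursion (17)-(19) is the
+ known-model dynamic program of Section 2 with ¯ℓ replaced by ˜ℓ(πm, ·, ·). A minimal sketch, under
+ the same finite-space assumptions and hypothetical array names as the earlier sketches:
+ import numpy as np
+
+ def solve_offline(n, pi_m, P_Y_given_XW, K_X, loss):
+     # ltilde[x, yh] = sum_w sum_y pi_m[w] P(y | x, w) loss[x, y, yh], cf. (8)
+     ltilde = np.einsum('w,wxy,xyh->xh', pi_m, P_Y_given_XW, loss)
+     V = np.zeros(ltilde.shape[0])     # value beyond the final round
+     psi = [None] * n
+     for i in range(n - 1, -1, -1):
+         Q = ltilde + np.einsum('xhz,z->xh', K_X, V)   # (18)
+         psi[i] = Q.argmin(axis=1)                     # (19)
+         V = Q.min(axis=1)                             # (17)
+     return psi
+ # (end of illustrative sketch)
+ In use, one would first form pi_m from Zm (for instance with a posterior computation such as the
+ posterior_from_data sketch given earlier) and then return psi[i][Xi] as the estimate in round i.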
530
+ 3.2.2
531
+ Minimum inference loss and loss-to-go
532
+ For any offline-learned estimation strategy ψn
533
+ m, we can define its loss-to-go in the ith round of
534
+ estimation when πm = π and Xi = x as
535
+ Vm,i(π, x; ψn
536
+ m) ≜ E
537
+
538
+ n
539
+
540
+ j=i
541
+ ℓ(Xj, Yj, �Yj)
542
+ ���πm = π, Xi = x
543
+
544
+ ,
545
+ (20)
546
+ which is the conditional expected loss accumulated from the ith round to the final round when
547
+ (ψm,i, . . . , ψm,n) are used as the offline-learned estimators, given that the posterior distribution of
548
+ the kernel parameter W given the training dataset Zm is π and the observation in the ith round is x.
549
+ The following theorem states that the offline-learned estimation strategy (ψ∗
550
+ m,1, . . . , ψ∗
551
+ m,n) derived
552
+ from dynamic programming not only achieves the minimum inference loss over the n rounds, but
553
+ also achieves the minimum loss-to-go in each round with any training dataset and any observation
554
+ in that round.
555
557
+ Theorem 3. The offline-learned estimators (ψ∗
558
+ m,1, . . . , ψ∗
559
+ m,n) defined in (19) according to the recur-
560
+ sion in (17) and (18) constitute an optimal offline-learned estimation strategy for dynamic inference,
561
+ which achieves the minimum in (6). Moreover, for any Markov offline-learned estimation strategy
562
+ ψn
563
+ m, with ψm,i : ∆ × X → Y, its loss-to-go satisfies
564
+ Vm,i(π, x; ψn
565
+ m) ≥ V ∗
566
+ m,i(π, x)
567
+ (21)
568
+ for all π ∈ ∆, x ∈ X and i = 1, . . . , n, where the equality holds if ψm,j(π, x) = ψ∗
569
+ m,j(π, x) for all
570
+ π ∈ ∆, x ∈ X and j ≥ i.
571
+ The proof is given in Appendix G. A consequence of Theorem 3 is that in offline learning for
572
+ dynamic inference, the minimum expected loss accumulated from the ith round to the final round
573
+ can be expressed in terms of V ∗
574
+ m,i, as stated in the following corollary.
575
+ Corollary 1. In offline learning for dynamic inference, for any i and any initial distribution PXi,
576
+ min
577
+ ψm,i,...,ψm,n E
578
+
579
+ n
580
+
581
+ j=i
582
+ ℓ(Xj, Yj, �Yj)
583
+
584
+ = E[V ∗
585
+ m,i(πm, Xi)],
586
+ (22)
587
+ and the minimum is achieved by the estimators (ψ∗
588
+ m,i, . . . , ψ∗
589
+ m,n) defined in (19).
590
+ 4
591
+ Bayesian online learning for dynamic inference
592
+ In the setup of offline learning for dynamic inference, we assume that before the inference takes place,
593
+ a training dataset Zm drawn from some distribution PZm|W is observed, and W can be estimated
594
+ from Zm. In the online learning setup, we assume that there is no training dataset available before
595
+ the inference; instead, during the inference, after an estimate �Yi is made in each round, the true
596
+ value Yi is revealed, and W can be estimated on-the-fly in each round from all the observations
597
+ available so far.
598
+ Same as the offline learning setup, we assume that during inference, the initial distribution PX1
599
+ and the probability transition kernels KXi|Xi−1,�Yi−1, i = 1, . . . , n are still known, while the unknown
600
+ KYi|Xi’s are the same element PY |X,W of a parametrized family of kernels {PY |X,w, w ∈ W} and the
601
+ unknown kernel parameter W is a random element of W with prior distribution PW . We can define
602
+ the online-learned estimation strategy for dynamic inference as follows. Note that we overload the
603
+ notations ψi as an online-learned estimator and Zi as (Xi, Yi) throughout this section.
604
+ Definition 2. An online-learned estimation strategy for an n-round dynamic inference is a sequence
605
+ of estimators ψn = (ψ1, . . . , ψn), where ψi : (X × Y)i−1 × �Yi−1 × X → �Y is the estimator in the ith
606
+ round of estimation, which maps the past observations Zi−1 = (Xj, Yj)i−1
607
+ j=1 and estimates �Y i−1 in
608
+ addition to a new observation Xi to an estimate �Yi of Yi, such that �Yi = ψi(Zi−1, �Y i−1, Xi).
609
+ The Bayesian network of all the random variables (W, Xn, Y n, �Y n) in online learning for dynamic
610
+ inference is shown in Fig. 3.
611
+ A crucial observation from the Bayesian network is that W is
612
+ conditionally independent of (Xi, �Y i) given Zi−1, as stated in the following lemma.
613
+ Lemma 6. In online learning for dynamic inference, in the ith round of estimation, the kernel
614
+ parameter W is conditionally independent of the current observation Xi and the estimates �Y i up to
615
+ the ith round given the past observations Zi−1.
616
618
+ Figure 3: Bayesian network of variables in online learning for dynamic inference, with n = 3.
619
+ Same as the offline learning setup, given an online-learned estimation strategy ψn, we can define
620
+ its inference loss as E
621
+ � �n
622
+ i=1 ℓ(Xi, Yi, �Yi)
623
+ �. The goal of online learning for an n-round dynamic
624
+ inference is to find an online-learned estimation strategy to minimize the inference loss:
625
+ arg min
626
+ ψn
627
+ E
628
+
629
+ n
630
+
631
+ i=1
632
+ ℓ(Xi, Yi, �Yi)
633
+
634
+ ,
635
+ with �Yi = ψi(Zi−1, �Y i−1, Xi).
636
+ (23)
637
+ 4.1
638
+ MDP reformulation
639
+ 4.1.1
640
+ Equivalent expression of inference loss
641
+ We first show that the inference loss in (23) can be expressed in terms of a loss function that does
642
+ not take the unknown Yi as input.
643
+ Theorem 4. For any online-learned estimation strategy ψn, its inference loss can be written as
644
+ E
645
+
646
+ n
647
+
648
+ i=1
649
+ ℓ(Xi, Yi, �Yi)
650
+
651
+ = E
652
+
653
+ n
654
+
655
+ i=1
656
+ ˜ℓ(πi, Xi, �Yi)
657
+
658
+ ,
659
+ (24)
660
+ where πi(·) ≜ P[W ∈ ·|Zi−1] is the posterior distribution of the kernel parameter W given the past
661
+ observations Zi−1 to the ith round, and ˜ℓ : ∆ × X × �Y → R, with ∆ being the space of probability
662
+ distributions on W, is defined in the same way as in (8),
663
+ ˜ℓ(π, x, ˆy) =
664
+
665
+ W
666
+
667
+ Y
668
+ π(dw)PY |X,W (dy|x, w)ℓ(x, y, ˆy).
669
+ (25)
670
+ The proof is given in Appendix H. Theorem 4 states that the inference loss of an online-learned
671
+ estimation strategy ψn is equal to
672
+ J(ψn) = E
673
+
674
+ n
675
+
676
+ i=1
677
+ ˜ℓ(πi, Xi, �Yi)
678
+
679
+ ,
680
+ with �Yi = ψi(Zi−1, �Y i−1, Xi).
681
+ (26)
682
+ It follows that the learning problem in (23) can be equivalently written as
683
+ arg min
684
+ ψn
685
+ J(ψn).
686
+ (27)
687
+ 4.1.2
699
+ (πi, Xi)n
700
+ i=1 as a controlled Markov chain
701
+ Next, we show that the sequence (πi, Xi)n
702
+ i=1 appearing in (26) form a controlled Markov chain
703
+ with �Y n as the control sequence. In other words, the tuple (πi+1, Xi+1) depends on the history
704
+ (πi, Xi, �Y i) only through (πi, Xi, �Yi), as formally stated in the following lemma.
705
+ Lemma 7. There exists a function f : ∆ × X × Y → ∆, such that given any learned estimation
706
+ strategy ψn, we have
707
+ P
708
+ �(πi+1, Xi+1) ∈ A × B
709
+ ��πi, Xi, �Y i� =
710
+
711
+ W
712
+
713
+ Y
714
+ πi(dw)PY |X,W (dyi|Xi, w)P[f(πi, Xi, yi) ∈ A]P
715
+ �Xi+1 ∈ B|Xi, �Yi
716
+
717
+ (28)
718
+ for any Borel sets A ⊂ ∆ and B ⊂ X, any realization of (πi, Xi, �Y i), and any i = 1, . . . , n − 1.
719
+ Lemma 7 is proved in Appendix I, based on the auxiliary lemma below proved in Appendix J.
720
+ Lemma 8. For a generic random tuple (T, U, V ) ∈ T×U×V that forms a Markov chain T −U −V ,
721
+ we have
722
+ P
723
+ �V ∈ A
724
+ ��PV |U(·|U) = p, T ∈ B
725
+ � = p(A)
726
+ (29)
727
+ for any Borel sets A ∈ V and B ∈ T, and any probability distribution p on V.
728
+ 4.1.3
729
+ Optimality of Markov online-learned estimators
730
+ The next two lemmas will show that the search space of the minimization problem in (27) can
731
+ be restricted to Markov online-learned estimators ¯ψi : ∆ × X → Y, such that �Yi = ¯ψi(πi, Xi). In
732
+ parallel to the discussion of the offline learning, we first prove that the last estimator of an optimal
733
+ online-learned estimation strategy can be replaced by a Markov one, which preserves the optimality.
734
+ Lemma 9 (Last-round lemma for online learning). Given any online-learned estimation strategy
735
+ ψn, there exists a Markov online-learned estimator ¯ψn : ∆ × X → �Y, such that
736
+ J(ψ1, . . . , ψn−1, ¯ψn) ≤ J(ψn).
737
+ (30)
738
+ The proof is given in Appendix K. We further prove that whenever the last online-learned
739
+ estimator is Markov, the preceding estimator can be replaced by a Markov one which preserves the
740
+ optimality.
741
+ Lemma 10 ((i − 1)th-round lemma for online learning). For any i ≥ 2, given any online-learned
742
+ estimation strategy (ψ1, . . . , ψi−1, ¯ψi) for an i-round dynamic inference, if the last estimator is a
743
+ Markov one ¯ψi : ∆ × X → �Y, then there exists a Markov online-learned estimator ¯ψi−1 : ∆ × X → �Y
744
+ for the (i − 1)th round, such that
745
+ J(ψ1, . . . , ψi−2, ¯ψi−1, ¯ψi) ≤ J(ψ1, . . . , ψi−1, ¯ψi).
746
+ (31)
747
+ The proof is given in Appendix L. With Lemma 9 and Lemma 10, we can prove the optimality of
748
+ Markov online-learned estimators, as given in Appendix M.
749
+ Theorem 5. The minimum of J(ψn) in (27) can be achieved by an online-learned estimation strategy
750
+ ¯ψn with Markov online-learned estimators ¯ψi : ∆ × X → �Y, such that �Yi = ¯ψi(πi, Xi).
751
753
+ 4.1.4
754
+ Conversion to MDP
755
+ Theorem 4 and Theorem 5 with Lemma 7 imply that the original online learning problem in (23) is
756
+ equivalent to
757
+ arg min
758
+ ψn
759
+ E
760
+
761
+ n
762
+
763
+ i=1
764
+ ˜ℓ(πi, Xi, �Yi)
765
+
766
+ ,
767
+ �Yi = ψi(πi, Xi)
768
+ (32)
769
+ and the sequence (πi, Xi)n
770
+ i=1 is a controlled Markov chain driven by �Y n. With this reformulation,
771
+ we see that the online learning problem becomes a standard MDP. The tuples (πi, Xi)n
772
+ i=1 become
773
+ the states in this MDP, the estimates �Y n become the actions, the probability transition kernel
774
+ P (πi,Xi)|(πi−1,Xi−1),�Yi−1 now defines the controlled state transition, and any Markov online-learned
775
+ estimation strategy ψn becomes a policy of this MDP. The goal of online learning becomes finding
776
+ the optimal policy of the MDP to minimize the accumulated expected loss defined w.r.t. ˜ℓ. The
777
+ solution to this MDP will be an optimal online-learned estimation strategy for dynamic inference.
778
+ 4.2
779
+ Solution via dynamic programming
780
+ 4.2.1
781
+ Optimal online-learned estimation strategy
782
+ From the theory of MDP it is known that the optimal policy for the MDP in (32), namely the
783
+ optimal online-learned estimation strategy, can be found via dynamic programming. To derive
784
+ the optimal estimators, define the functions Q∗
785
+ i : ∆ × X × �Y → R and V ∗
786
+ i : ∆ × X → R for online
787
+ learning recursively for i = n, . . . , 1 as Q∗
788
+ n(π, x, ˆy) ≜ ˜ℓ(π, x, ˆy), and
789
+ V ∗
790
+ i (π, x) ≜ min
791
+ ˆy∈�Y
792
+ Q∗
793
+ i (π, x, ˆy),
794
+ i = n, . . . , 1
795
+ (33)
796
+ Q∗
797
+ i (π, x, ˆy) ≜ ˜ℓ(π, x, ˆy) + E[V ∗
798
+ i+1(πi+1, Xi+1)|πi = π, Xi = x, �Yi = ˆy], i = n − 1, . . . , 1
799
+ (34)
800
+ with ˜ℓ as defined in (8), and the conditional expectation in (34) is taken w.r.t. (πi+1, Xi+1). The
801
+ optimal online-learned estimate to make in the ith round when πi = π and Xi = x is then
802
+ ψ∗
803
+ i (π, x) ≜ arg min
804
+ ˆy∈�Y
805
+ Q∗
806
+ i (π, x, ˆy).
807
+ (35)
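+ In contrast with the offline case, here the belief component of the state evolves, so the expectation
+ in (34) averages over both the next observation and the updated posterior. The sketch below makes
+ (33)-(35) explicit; it is an illustration under the finite-space assumptions and hypothetical names
+ used earlier, with 0-indexed rounds, and it enumerates reachable beliefs by direct recursion (hence
+ exponential in n).
+ import numpy as np
+
+ def belief_update(pi, x, y, P_Y_given_XW):
+     # pi_{i+1} is proportional to pi(.) * P(Y_i = y | X_i = x, W = .), cf. Lemma 7
+     post = pi * P_Y_given_XW[:, x, y]
+     return post / post.sum()
+
+ def V_star(i, pi, x, n, P_Y_given_XW, K_X, loss):
+     ny, nyh = loss.shape[1], loss.shape[2]
+     ltilde = np.einsum('w,wy,yh->h', pi, P_Y_given_XW[:, x, :], loss[x])   # (8)
+     p_y = pi @ P_Y_given_XW[:, x, :]          # predictive distribution of Y_i
+     best = np.inf
+     for yh in range(nyh):
+         q = ltilde[yh]
+         if i < n - 1:                          # continuation term in (34)
+             for y in range(ny):
+                 if p_y[y] == 0.0:
+                     continue
+                 pi_next = belief_update(pi, x, y, P_Y_given_XW)
+                 for x2 in range(K_X.shape[2]):
+                     if K_X[x, yh, x2] > 0.0:
+                         q += p_y[y] * K_X[x, yh, x2] * \
+                             V_star(i + 1, pi_next, x2, n, P_Y_given_XW, K_X, loss)
+         best = min(best, q)                    # the minimizing yh is psi*_i(pi, x), cf. (35)
+     return best                                # V*_i(pi, x), cf. (33)
+ # (end of illustrative sketch)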
808
+ 4.2.2
809
+ Minimum inference loss and loss-to-go
810
+ For any online-learned estimation strategy ψn, we can define its loss-to-go in the ith round of
811
+ estimation when πi = π and Xi = x as
812
+ Vi(π, x; ψn) ≜ E
813
+
814
+ n
815
+
816
+ j=i
817
+ ℓ(Xj, Yj, �Yj)
818
+ ���πi = π, Xi = x
819
+
820
+ ,
821
+ (36)
822
+ which is the conditional expected loss accumulated from the ith round to the final round when
823
+ (ψi, . . . , ψn) are used as the learned estimators, given that in the ith round the posterior distribution
824
+ of the kernel parameter W given the past observations Zi−1 is π and the observation Xi is x.
825
+ The following theorem states that the online-learned estimation strategy (ψ∗
826
+ 1, . . . , ψ∗
827
+ n) derived from
828
+ dynamic programming not only achieves the minimum inference loss over the n rounds, but also
829
+ achieves the minimum loss-to-go in each round with any past and current observations in that
830
+ round.
831
833
+ Theorem 6. The online-learned estimators (ψ∗
834
+ 1, . . . , ψ∗
835
+ n) defined in (35) according to the recursion
836
+ in (33) and (34) constitute an optimal online-learned estimation strategy for dynamic inference,
837
+ which achieves the minimum in (23). Moreover, for any Markov online-learned estimation strategy
838
+ ψn, with ψi : ∆ × X → Y, its loss-to-go satisfies
839
+ Vi(π, x; ψn) ≥ V ∗
840
+ i (π, x)
841
+ (37)
842
+ for all π ∈ ∆, x ∈ X and i = 1, . . . , n, where the equality holds if ψj(π, x) = ψ∗
843
+ j (π, x) for all π ∈ ∆,
844
+ x ∈ X and j ≥ i.
845
+ The proof is given in Appendix N. A consequence of Theorem 6 is that in online learning for
846
+ dynamic inference, the minimum expected loss accumulated from the ith round to the final round
847
+ can be expressed in terms of V ∗
848
+ i , as stated in the following corollary.
849
+ Corollary 2. In online learning for dynamic inference, for any i and any initial distribution PXi,
850
+ min
851
+ ψi,...,ψn E
852
+
853
+ n
854
+
855
+ j=i
856
+ ℓ(Xj, Yj, �Yj)
857
+
858
+ = E[V ∗
859
+ i (πi, Xi)],
860
+ (38)
861
+ and the minimum is achieved by the estimators (ψ∗
862
+ i , . . . , ψ∗
863
+ n) defined in (35).
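+ As a closing illustration (not part of the original development), the toy example below wires the
+ sketches from the previous sections together on a binary problem with two candidate parameter values;
+ all numbers are arbitrary and the function names are the hypothetical ones introduced earlier.
+ import numpy as np
+
+ n, nx, ny, nyh, nw = 3, 2, 2, 2, 2
+ loss = np.ones((nx, ny, nyh))
+ for y in range(ny):
+     loss[:, y, y] = 0.0                        # 0-1 loss on the estimate of Y_i
+ P_Y_given_XW = np.array([[[0.8, 0.2], [0.3, 0.7]],
+                          [[0.2, 0.8], [0.7, 0.3]]])   # P(Y = y | X = x, W = w)
+ K_X = np.array([[[0.9, 0.1], [0.4, 0.6]],
+                 [[0.5, 0.5], [0.2, 0.8]]])            # P(X' = x2 | X = x, estimate yh)
+ prior = np.array([0.5, 0.5])
+ PX1 = np.array([0.6, 0.4])
+
+ # Minimum inference loss of online learning, cf. Corollary 2 with i = 1:
+ value = sum(PX1[x] * V_star(0, prior, x, n, P_Y_given_XW, K_X, loss)
+             for x in range(nx))
+ # (end of illustrative sketch)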
864
+ A
865
+ Proof of Theorem 1
866
+ For each i = 1, . . . , n, we have
867
+ E
868
+ �ℓ(Xi, Yi, �Yi)
869
+ ��Zm, Xi, �Y i−1�
870
+ =
871
+
872
+ Y
873
+ PYi|Zm,Xi,�Y i−1(dy)ℓ(Xi, y, �Yi)
874
+ (39)
875
+ =
876
+
877
+ W
878
+
879
+ Y
880
+ PW|Zm,Xi,�Y i−1(dw)PYi|Zm,Xi,�Y i−1,W=w(dy)ℓ(Xi, y, �Yi)
881
+ (40)
882
+ =
883
+
884
+ W
885
+
886
+ Y
887
+ πm(dw)PY |X,W (dy|Xi, w)ℓ(Xi, y, �Yi)
888
+ (41)
889
+ =˜ℓ(πm, Xi, �Yi),
890
+ (42)
891
+ where (39) is due to the fact that Xi and �Yi are determined by (Zm, Xi, �Y i−1); and (41) follows
892
+ from the fact that W is conditionally independent of (Xi, �Y i−1) given Zm as stated in Lemma 1,
893
+ and the fact that Yi is conditionally independent of (Zm, Xi−1, �Y i−1) given (Xi, W). With the
894
+ above equality and the fact that
895
+ E
896
+
897
+ n
898
+
899
+ i=1
900
+ ℓ(Xi, Yi, �Yi)
901
+
902
+ =
903
+ n
904
+
905
+ i=1
906
+ E
907
+ �E[ℓ(Xi, Yi, �Yi)|Zm, Xi, �Y i−1]
908
+ �,
909
+ (43)
910
+ we obtain (7).
911
913
+ B
914
+ Proof of Lemma 2
915
+ For any offline-learned estimation strategy ψn
916
+ m, any Borel sets A ⊂ ∆ and B ⊂ X, and any realization
917
+ of (πm, Xi, �Y i),
918
+ P
919
+ �(πm, Xi+1) ∈ A × B
920
+ ��πm, Xi, �Y i� = P
921
+ �πm ∈ A
922
+ ��πm
923
+ �P
924
+ �Xi+1 ∈ B|πm, Xi, �Y i�
925
+ (44)
926
+ = 1{πm ∈ A}P
927
+ �Xi+1 ∈ B|Xi, �Yi
928
+
929
+ (45)
930
+ where the second equality is due to the fact that Xi+1 is conditionally independent of (πm, Xi−1, �Y i−1)
931
+ given (Xi, �Yi). This proves the claim, and we can see that the right side of (11) only depends on
932
+ (πm, Xi, �Yi).
933
+ C
934
+ Proof of Lemma 3
935
+ The left side of (12) is the Bayes risk of estimating f(X) based on X, defined w.r.t. the loss
936
+ function ℓ, which can be written as Rℓ(f(X)|X); while the right side of (12) is the Bayes risk of
937
+ estimating f(X) based on f(X) itself, also defined w.r.t. the loss function ℓ, which can be written as
938
+ Rℓ(f(X)|f(X)). It follows from a data processing inequality of the generalized conditional entropy
939
+ that
940
+ Rℓ(f(X)|X) ≤ Rℓ(f(X)|f(X)),
941
+ (46)
942
+ as f(X) − X − f(X) form a Markov chain. It follows from the same data processing inequality that
943
+ Rℓ(f(X)|X) ≥ Rℓ(f(X)|f(X)),
944
+ (47)
945
+ as X − f(X) − f(X) also form a Markov chain. Hence Rℓ(f(X)|X) = Rℓ(f(X)|f(X)), which proves
946
+ the claim.
947
+ D
948
+ Proof of Lemma 4
949
+ The inference loss of ψn
950
+ m can be written as
951
+ J(ψn
952
+ m) = E
953
+ � n−1
954
+
955
+ i=1
956
+ ˜ℓ
957
+ �(πm, Xi), �Yi
958
+ ��
959
+ + E
960
+ �˜ℓ
961
+ �(πm, Xn), ψm,n(Zm, Xn, �Y n−1)
962
+ ��.
963
+ (48)
964
+ Since the first expectation in (48) does not depend on ψm,n, it suffices to show that there exists a
965
+ learned estimator ¯ψm,n : ∆ × X → �Y, such that
966
+ E
967
+ �˜ℓ
968
+ �(πm, Xn), ¯ψm,n(πm, Xn)
969
+ �� ≤ E
970
+ �˜ℓ
971
+ �(πm, Xn), ψm,n(Zm, Xn, �Y n−1)
972
+ ��.
973
+ (49)
974
+ The existence of such an estimator is guaranteed by Lemma 3, as (πm, Xn) is a function of
975
+ (Zm, Xn, �Y n−1).
976
978
+ E
979
+ Proof of Lemma 5
980
+ The inference loss of the given (ψm,1, . . . , ψm,i−1, ¯ψm,i) is
981
+ J(ψm,1, . . . , ψm,i−1, ¯ψm,i) = E
982
+ � i−2
983
+
984
+ j=1
985
+ ˜ℓ
986
+ �(πm, Xj), �Yj
987
+ ��
988
+ +
989
+ E
990
+ �˜ℓ
991
+ �(πm, Xi−1), �Yi−1
992
+ ��+
993
+ E
994
+ �˜ℓ
995
+ �(πm, Xi), ¯ψm,i(πm, Xi)
996
+ ��.
997
+ (50)
998
+ Since the first expectation in (50) does not depend on ψm,i−1, it suffices to show that there exists a
999
+ learned estimator ¯ψm,i−1 : ∆ × X → �Y, such that
1000
+ E
1001
+ �˜ℓ
1002
+ �(πm, Xi−1), ¯ψm,i−1(πm, Xi−1)
1003
+ �� + E
1004
+ �˜ℓ
1005
+ �(πm, ¯Xi), ¯ψm,i(πm, ¯Xi)
1006
+ ��
1007
+ ≤E
1008
+ �˜ℓ
1009
+ �(πm, Xi−1), �Yi−1
1010
+ �� + E
1011
+ �˜ℓ
1012
+ �(πm, Xi), ¯ψm,i(πm, Xi)
1013
+ ��,
1014
+ (51)
1015
+ where ¯Xi on the left side is the observation in the ith round when the Markov offline-learned
1016
+ estimator ¯ψm,i−1 is used in the (i − 1)th round. To get around the dependence of Xi on ψm,i−1,
1017
+ we write the second expectation on the right side of (51) as
1018
+ E
1019
+ �E
1020
+ �˜ℓ
1021
+ �(πm, Xi), ¯ψm,i(πm, Xi)
1022
+ ���πm, Xi−1, �Yi−1
1023
+ ��
1024
+ (52)
1025
+ and notice that the conditional expectation E
1026
+ �˜ℓ
1027
+ �(πm, Xi), ¯ψi(πm, Xi)
1028
+ ���πm, Xi−1, �Yi−1
1029
+ � does not
1030
+ depend on ψi−1. This is because the conditional distribution of (πm, Xi) given (πm, Xi−1, �Yi−1)
1031
+ is solely determined by the probability transition kernel P Xi|Xi−1,�Yi−1, as shown in the proof of
1032
+ Lemma 2 stating that (πm, Xi)n
1033
+ i=1 is a controlled Markov chain with �Y n as the control sequence. It
1034
+ follows that the right side of (51) can be written as
1035
+ E
1036
+ �˜ℓ
1037
+ �(πm, Xi−1), �Yi−1
1038
+ � + E
1039
+ �˜ℓ
1040
+ �(πm, Xi), ¯ψm,i(πm, Xi)
1041
+ ���πm, Xi−1, �Yi−1
1042
+ ��
1043
+ =E
1044
+ �g
1045
+ �πm, Xi−1, �Yi−1
1046
+ ��
1047
+ (53)
1048
+ =E
1049
+ �g
1050
+ �πm, Xi−1, ψm,i−1(Zm, Xi−1, �Y i−2)
1051
+ ��
1052
+ (54)
1053
+ for a function g that does not depend on ψm,i−1. Since (πm, Xi−1) is a function of (Zm, Xi−1, �Y i−2),
1054
+ it follows from Lemma 3 that there exists a learned estimator ¯ψm,i−1 : ∆ × X → �Y, such that
1055
+ E
1056
+ �g
1057
+ �πm, Xi−1, ψm,i−1(Zm, Xi−1, �Y i−2)
1058
+ ��
1059
+ (55)
1060
+ ≥E
1061
+ �g
1062
+ �πm, Xi−1, ¯ψm,i−1(πm, Xi−1)
1063
+ ��
1064
+ (56)
1065
+ =E
1066
+ �˜ℓ
1067
+ �(πm, Xi−1), ¯ψm,i−1(πm, Xi−1)
1068
+ �+
1069
+ E
1070
+ �˜ℓ
1071
+ �(πm, ¯Xi), ¯ψm,i(πm, ¯Xi)
1072
+ ���πm, Xi−1, ¯ψm,i−1(πm, Xi−1)
1073
+ ��
1074
+ (57)
1075
+ =E
1076
+ �˜ℓ
1077
+ �(πm, Xi−1), ¯ψm,i−1(πm, Xi−1)
1078
+ �� + E
1079
+ �˜ℓ
1080
+ �(πm, ¯Xi), ¯ψm,i(πm, ¯Xi)
1081
+ ��,
1082
+ (58)
1083
+ which proves (51) and the claim.
1084
1086
+ F
1087
+ Proof of Theorem 2
1088
+ Picking an optimal offline-learned estimation strategy ψn
1089
+ m, we can first replace its last estimator by
1090
+ a Markov one that preserves the optimality of the strategy, which is guaranteed by Lemma 4. Then,
1091
+ for i = n, . . . , 2, we can repeatedly replace the (i − 1)th estimator by a Markov one that preserves
1092
+ the optimality of the previous strategy, which is guaranteed by Lemma 5 and the additive structure
1093
+ of the inference loss as in (9). Finally we obtain an offline-learned estimation strategy consisting
1094
+ of Markov estimators that achieves the same inference loss as the originally picked offline-learned
1095
+ estimation strategy.
1096
+ G
1097
+ Proof of Theorem 3
1098
+ The first claim stating that the offline-learned estimation strategy (ψ∗
1099
+ m,1, . . . , ψ∗
1100
+ m,n) achieves the
1101
+ minimum in (6) follows from the equivalence between (6) and the MDP in (16), and from the
1102
+ well-known optimality of the solution derived from dynamic programming to MDP.
1103
+ The second claim can be proved via backward induction. Consider an arbitrary Markov offline-
1104
+ learned estimation strategy ψn
1105
+ m with ψm,i : ∆ × X → Y, based on which the learned estimates during
1106
+ inference are made.
1107
+ • In the final round, for all π ∈ ∆ and x ∈ X,
1108
+ Vm,n(π, x; ψn
1109
+ m) = ˜ℓ(π, x, ψm,n(π, x))
1110
+ (59)
1111
+ ≥ V ∗
1112
+ m,n(π, x),
1113
+ (60)
1114
+ where (59) is due to the definitions of Vm,n in (20) and ˜ℓ in (8); and (60) is due to the definition
1115
+ of V ∗
1116
+ m,n in (17), while the equality holds if ψm,n(π, x) = ψ∗
1117
+ m,n(π, x).
1118
+ • For i = n − 1, . . . , 1, suppose (21) holds in the (i + 1)th round. We first show a self-recursive
1119
+ expression of Vm,i(π, x; ψn
1120
+ m):
1121
+ Vm,i(π, x; ψn
1122
+ m) = E
1123
+
1124
+ n
1125
+
1126
+ j=i
1127
+ ℓ(Xj, Yj, �Yj)
1128
+ ���πm = π, Xi = x
1129
+
1130
+ (61)
1131
+ = E[ℓ(Xi, Yi, �Yi)|πm = π, Xi = x] + E
1132
+
1133
+ n
1134
+
1135
+ j=i+1
1136
+ ℓ(Xj, Yj, �Yj)
1137
+ ���πm = π, Xi = x
1138
+
1139
+ (62)
1140
+ = E
1141
+ �E[ℓ(Xi, Yi, �Yi)| �Yi, πm = π, Xi = x]
1142
+ ��πm = π, Xi = x
1143
+ �+
1144
+ E
1145
+
1146
+ E
1147
+
1148
+ n
1149
+
1150
+ j=i+1
1151
+ ℓ(Xj, Yj, �Yj)
1152
+ ���Xi+1, πm = π, Xi = x
1153
+ ������πm = π, Xi = x
1154
+
1155
+ (63)
1156
+ = E
1157
+ �˜ℓ(π, x, �Yi)
1158
+ ��πm = π, Xi = x
1159
+ �+
1160
+ E
1161
+
1162
+ E
1163
+
1164
+ n
1165
+
1166
+ j=i+1
1167
+ ℓ(Xj, Yj, �Yj)
1168
+ ���πm = π, Xi+1
1169
+ ������πm = π, Xi = x
1170
+
1171
+ (64)
1172
+ = ˜ℓ(π, x, ψm,i(π, x)) + E
1173
+ �Vm,i+1(π, Xi+1; ψn
1174
+ m)|πm = π, Xi = x
1175
+
1176
+ (65)
1177
1179
+ where the second term of (64) follows from the fact that Xi is conditionally independent of
1180
+ (Xn
1181
+ i+1, Y n
1182
+ i+1, �Y n
1183
+ i+1) given (πm, Xi+1), which is a consequence of the assumption that the offline-
1184
+ learned estimators are Markov and the specification of the joint distribution of (Zm, Xn, Y n, �Y n)
1185
+ in the setup of the offline learning problem, and can be seen from Fig. 2. Then,
1186
+ Vm,i(π, x; ψn
1187
+ m) ≥ ˜ℓ(π, x, ψm,i(π, x)) + E
1188
+ �V ∗
1189
+ m,i+1(π, Xi+1)|πm = π, Xi = x
1190
+
1191
+ (66)
1192
+ = ˜ℓ(π, x, ψm,i(π, x)) + E
1193
+ �V ∗
1194
+ m,i+1(π, Xi+1)|πm = π, Xi = x, �Yi = ψm,i(π, x)
1195
+
1196
+ (67)
1197
+ = ˜ℓ(π, x, ψm,i(π, x)) + E
1198
+ �V ∗
1199
+ m,i+1(π, Xi+1)|Xi = x, �Yi = ψm,i(π, x)
1200
+
1201
+ (68)
1202
+ = Q∗
1203
+ m,i(π, x, ψm,i(π, x))
1204
+ (69)
1205
+ ≥ V ∗
1206
+ m,i(π, x)
1207
+ (70)
1208
+ where (66) follows from the inductive assumption; (67) follows from the fact that �Yi is determined
1209
+ given πm = π and Xi = x; (68) follows from the fact that Xi+1 is independent of πm given
1210
+ (Xi, �Yi); and the final inequality with the equality condition follow from the definitions of V ∗
1211
+ m,i
1212
+ and ψ∗
1213
+ m,i in (17) and (19).
1214
+ This proves the second claim.
1215
+ H
1216
+ Proof of Theorem 4
1217
+ For each i = 1, . . . , n, we have
1218
+ E
1219
+ �ℓ(Xi, Yi, �Yi)
1220
+ ��Zi−1, �Y i−1, Xi
1221
+
1222
+ =
1223
+
1224
+ Y
1225
+ PYi|Zi−1,�Y i−1,Xi(dy)ℓ(Xi, y, �Yi)
1226
+ (71)
1227
+ =
1228
+
1229
+ W
1230
+
1231
+ Y
1232
+ PW|Zi−1,�Y i−1,Xi(dw)PYi|Zi−1,�Y i−1,Xi,W=w(dy)ℓ(Xi, y, �Yi)
1233
+ (72)
1234
+ =
1235
+
1236
+ W
1237
+
1238
+ Y
1239
+ πi(dw)PY |X,W (dy|Xi, w)ℓ(Xi, y, �Yi)
1240
+ (73)
1241
+ =˜ℓ(πi, Xi, �Yi),
1242
+ (74)
1243
+ where (71) is due to the fact that Xi and �Yi are determined by (Zi−1, �Y i−1, Xi); and (73) follows
1244
+ from the fact that W is conditionally independent of ( �Y i−1, Xi) given Zi−1 as a consequence of
1245
+ Lemma 6, and the fact that Yi is conditionally independent of (Zi−1, �Y i−1) given (Xi, W). With
1246
+ the above equality and the fact that
1247
+ E
1248
+
1249
+ n
1250
+
1251
+ i=1
1252
+ ℓ(Xi, Yi, �Yi)
1253
+
1254
+ =
1255
+ n
1256
+
1257
+ i=1
1258
+ E
1259
+ �E[ℓ(Xi, Yi, �Yi)|Zi−1, �Y i−1, Xi]
1260
+ �,
1261
+ (75)
1262
+ we obtain (24).
1263
1265
+ I
1266
+ Proof of Lemma 7
1267
+ We first show that πi+1 can be determined by (πi, Xi, Yi). To see it, we express πi+1 as
1268
+ PW|Zi = PW,Zi|Zi−1/PZi|Zi−1
1269
+ (76)
1270
+ = PW|Zi−1PXi|W,Zi−1PYi|Xi,W,Zi−1/PZi|Zi−1
1271
+ (77)
1272
+ = πiPXi|Xi−1,�Yi−1PYi|Xi,W /PZi|Zi−1
1273
+ (78)
1274
+ =
1275
+ πiPYi|Xi,W
1276
+
1277
+ W πi(dw′)PYi|Xi,W=w′
1278
+ (79)
1279
+ where (78) follows from the facts that 1) �Yi−1 is determined by Zi−1, and Xi is conditionally
1280
+ independent of (W, Zi−2, Yi−1) given (Xi−1, �Yi−1); and 2) Yi is conditionally independent of Zi−1
1281
+ given (Xi, W). It follows that πi+1 can be written as
1282
+ πi+1 = f(πi, Xi, Yi)
1283
+ (80)
1284
+ for a function f that maps
1285
+ �πi(·), Xi, Yi
1286
+ � to πi+1(·) ∝ πi(·)PY |X,W (Yi|Xi, ·).
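+ For instance, with a binary parameter the update (79)-(80) reduces to a one-line Bayes rule; the
+ numbers below are purely illustrative (assuming, say, P(Yi = 1|Xi, w) equal to 0.2 under one candidate
+ parameter and 0.8 under the other):
+ import numpy as np
+ p_y1 = np.array([0.2, 0.8])                   # P(Y_i = 1 | X_i, W = w) for the two candidates
+ pi_i = np.array([0.5, 0.5])                   # current belief over the two candidates
+ y_i = 1                                       # observed quantity in round i
+ like = p_y1 if y_i == 1 else 1.0 - p_y1
+ pi_next = pi_i * like / (pi_i * like).sum()   # (79): proportional to pi_i * P(Y_i | X_i, .)
+ # pi_next == array([0.2, 0.8])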
1287
+ With (80), for any online-learned estimation strategy ψn, any Borel sets A ⊂ ∆ and B ⊂ X, and
1288
+ any realization of (πi, Xi, �Y i), we have
1289
+ P
1290
+ �(πi+1, Xi+1) ∈ A × B
1291
+ ��πi, Xi, �Y i�
1292
+ =
1293
+
1294
+ Y
1295
+ P
1296
+ �dyi
1297
+ ��πi, Xi, �Y i�P
1298
+ �(πi+1, Xi+1) ∈ A × B
1299
+ ��πi, Xi, �Y i, Yi = yi
1300
+
1301
+ (81)
1302
+ =
1303
+
1304
+ Y
1305
+ P
1306
+ �dyi
1307
+ ��πi, Xi, �Y i�P
1308
+ �f(πi, Xi, yi) ∈ A]P
1309
+ �Xi+1 ∈ B
1310
+ ��Xi, �Yi
1311
+
1312
+ (82)
1313
+ =
1314
+
1315
+ Y
1316
+
1317
+ W
1318
+ P
1319
+ �dw
1320
+ ��πi, Xi, �Y i�P
1321
+ �dyi
1322
+ ��πi, Xi, �Y i, W = w
1323
+ �P
1324
+ �f(πi, Xi, yi) ∈ A]P
1325
+ �Xi+1 ∈ B
1326
+ ��Xi, �Yi
1327
+
1328
+ (83)
1329
+ =
1330
+
1331
+ Y
1332
+
1333
+ W
1334
+ πi(dw)PY |X,W (dyi|Xi, w)P[f(πi, Xi, yi) ∈ A]P
1335
+ �Xi+1 ∈ B|Xi, �Yi
1336
+ �,
1337
+ (84)
1338
+ where (82) follows from (80) and the fact that Xi+1 is conditionally independent of (Zi−1, Yi, �Y i−1)
1339
+ given (Xi, �Yi); and (84) follows from 1) Lemma 8 and the fact that W is conditionally independent
1340
+ of (Zi−1, Xi, �Y i) given Zi−1, as a consequence of Lemma 6, and 2) the fact that Yi is conditionally
1341
+ independent of (Zi−1, �Y i) given (Xi, W).
1342
+ This proves the Lemma 7, and we see that the right side of (28) only depends on (πi, Xi, �Yi).
1343
+ J
1344
+ Proof of Lemma 8
1345
+ Given a probability distribution p on V, let Up ≜ {u ∈ U : PV |U(·|u) = p}. Then, for any Borel sets
1346
+ A ∈ V and B ∈ T,
1347
+ P
1348
+ �V ∈ A
1349
+ ��PV |U(·|U) = p, T ∈ B
1350
+ � = P
1351
+ �V ∈ A, PV |U(·|U) = p, T ∈ B
1352
+
1353
+ P
1354
+ �PV |U(·|U) = p, T ∈ B
1355
+
1356
+ (85)
1357
+ =
1358
+
1359
+ Up PU(du)PV |U(A|u)PT|U(B|u)
1360
+
1361
+ Up PU(du)PT|U(B|u)
1362
+ (86)
1363
+ = p(A),
1364
+ (87)
1365
1367
+ where (86) follows from the definition of Up and the assumption that T and V are conditionally
1368
+ independent given U; and (87) follows from the fact that PV |U(A|u) = p(A) for all u ∈ Up.
1369
+ K
1370
+ Proof of Lemma 9
1371
+ The inference loss of ψn can be written as
1373
+ J(ψn) = E
1374
+ � n−1
1375
+
1376
+ i=1
1377
+ ˜ℓ
1378
+ �(πi, Xi), �Yi
1379
+ ��
1380
+ + E
1381
+ �˜ℓ
1382
+ �(πn, Xn), ψn(Zn−1, �Y n−1, Xn)
1383
+ ��.
1384
+ (88)
1385
+ Since the first expectation in (88) does not depend on ψn, it suffices to show that there exists a
1386
+ Markov online-learned estimator ¯ψn : ∆ × X → �Y, such that
1387
+ E
1388
+ �˜ℓ
1389
+ �(πn, Xn), ¯ψn(πn, Xn)
1390
+ �� ≤ E
1391
+ �˜ℓ
1392
+ �(πn, Xn), ψn(Zn−1, �Y n−1, Xn)
1393
+ ��.
1394
+ (89)
1395
+ The existence of such an estimator is guaranteed by Lemma 3, as (πn, Xn) is a function of
1396
+ (Zn−1, �Y n−1, Xn).
1397
+ L
1398
+ Proof of Lemma 10
1399
+ The inference loss of the given (ψ1, . . . , ψi−1, ¯ψi) is
1400
+ J(ψ1, . . . , ψi−1, ¯ψi) = E
1401
+ � i−2
1402
+
1403
+ j=1
1404
+ ˜ℓ
1405
+ �(πj, Xj), �Yj
1406
+ ��
1407
+ +
1408
+ E
1409
+ �˜ℓ
1410
+ �(πi−1, Xi−1), �Yi−1
1411
+ ��+
1412
+ E
1413
+ �˜ℓ
1414
+ �(πi, Xi), ¯ψi(πi, Xi)
1415
+ ��.
1416
+ (90)
1417
+ Since the first expectation in (90) does not depend on ψi−1, it suffices to show that there exists a
1418
+ Markov online-learned estimator ¯ψi−1 : ∆ × X → �Y, such that
1419
+ E
1420
+ �˜ℓ
1421
+ �(πi−1, Xi−1), ¯ψi−1(πi−1, Xi−1)
1422
+ �� + E
1423
+ �˜ℓ
1424
+ �(πi, ¯Xi), ¯ψi(πi, ¯Xi)
1425
+ ��
1426
+ ≤E
1427
+ �˜ℓ
1428
+ �(πi−1, Xi−1), �Yi−1
1429
+ �� + E
1430
+ �˜ℓ
1431
+ �(πi, Xi), ¯ψi(πi, Xi)
1432
+ ��,
1433
+ (91)
1434
+ where ¯Xi on the left side is the observation in the ith round when the Markov estimator ¯ψi−1 is
1435
+ used in the (i − 1)th round. To get around the dependence of Xi on ψi−1, we write the second
1436
+ expectation on the right side of (91) as
1437
+ E
1438
+ �E
1439
+ �˜ℓ
1440
+ �(πi, Xi), ¯ψi(πi, Xi)
1441
+ ���πi−1, Xi−1, �Yi−1
1442
+ ��
1443
+ (92)
1444
+ and notice that the conditional expectation E
1445
+ �˜ℓ
1446
+ �(πi, Xi), ¯ψi(πi, Xi)
1447
+ ���πi−1, Xi−1, �Yi−1
1448
+ � does not de-
1449
+ pend on ψi−1. This is because the conditional distribution of (πi, Xi) given (πi−1, Xi−1, �Yi−1) is
1450
+ solely determined by the probability transition kernels PYi−1|Xi−1,W and P Xi|Xi−1,�Yi−1, as shown in
1451
+ the proof of Lemma 7 stating that (πi, Xi)n
1452
+ i=1 is a controlled Markov chain driven by �Y n. It follows
1453
1455
+ that the right side of (91) can be written as
1456
+ E
1457
+ �˜ℓ
1458
+ �(πi−1, Xi−1), �Yi−1
1459
+ � + E
1460
+ �˜ℓ
1461
+ �(πi, Xi), ¯ψi(πi, Xi)
1462
+ ���πi−1, Xi−1, �Yi−1
1463
+ ��
1464
+ =E
1465
+ �g
1466
+ �πi−1, Xi−1, �Yi−1
1467
+ ��
1468
+ (93)
1469
+ =E
1470
+ �g
1471
+ �πi−1, Xi−1, ψi−1(Zi−2, �Y i−2, Xi−1)
1472
+ ��
1473
+ (94)
1474
+ for a function g that does not depend on ψi−1. Since (πi−1, Xi−1) is a function of (Zi−2, �Y i−2, Xi−1),
1475
+ it follows from Lemma 3 that there exists a learned estimator ¯ψi−1 : ∆ × X → �Y, such that
1476
+ E
1477
+ �g
1478
+ �πi−1, Xi−1, ψi−1(Zi−2, �Y i−2, Xi−1)
1479
+ ��
1480
+ (95)
1481
+ ≥E
1482
+ �g
1483
+ �πi−1, Xi−1, ¯ψi−1(πi−1, Xi−1)
1484
+ ��
1485
+ (96)
1486
+ =E
1487
+ �˜ℓ
1488
+ �(πi−1, Xi−1), ¯ψi−1(πi−1, Xi−1)
1489
+ �+
1490
+ E
1491
+ �˜ℓ
1492
+ �(πi, ¯Xi), ¯ψi(πi, ¯Xi)
1493
+ ���πi−1, Xi−1, ¯ψi−1(πi−1, Xi−1)
1494
+ ��
1495
+ (97)
1496
+ =E
1497
+ �˜ℓ
1498
+ �(πi−1, Xi−1), ¯ψi−1(πi−1, Xi−1)
1499
+ �� + E
1500
+ �˜ℓ
1501
+ �(πi, ¯Xi), ¯ψi(πi, ¯Xi)
1502
+ ��,
1503
+ (98)
1504
+ which proves (91) and the claim.
1505
+ M
1506
+ Proof of Theorem 5
1507
+ Picking an optimal online-learned estimation strategy ψn, we can first replace its last estimator by
1508
+ a Markov one that preserves the optimality of the strategy, which is guaranteed by Lemma 9. Then,
1509
+ for i = n, . . . , 2, we can repeatedly replace the (i − 1)th estimator by a Markov one that preserves
1510
+ the optimality of the previous strategy, which is guaranteed by Lemma 10 and the additive structure
1511
+ of the inference loss as in (26). Finally we obtain an online-learned estimation strategy consisting
1512
+ of Markov online-learned estimators that achieves the same inference loss as the originally picked
1513
+ online-learned estimation strategy.
1514
+ N
1515
+ Proof of Theorem 6
1516
+ The first claim stating that the online-learned estimation strategy (ψ∗
1517
+ 1, . . . , ψ∗
1518
+ n) achieves the minimum
1519
+ in (23) follows from the equivalence between (23) and the MDP in (32), and from the well-known
1520
+ optimality of the solution derived from dynamic programming to MDP.
1521
+ The second claim can be proved via backward induction. Consider an arbitrary Markov online-
1522
+ learned estimation strategy ψn with ψi : ∆ × X → Y, based on which the learned estimates are
1523
+ made. For any pair (i, j) such that 1 ≤ i ≤ j ≤ n,
1524
+ E
1525
+ �ℓ(Xj, Yj, �Yj)
1526
+ ��πi, Xi
1527
+
1528
+ =E
1529
+ �E[ℓ(Xj, Yj, �Yj)|πj, Xj, πi, Xi]
1530
+ ��πi, Xi
1531
+
1532
+ (99)
1533
+ =E
1534
+ � �
1535
+ W
1536
+ P(dw|πj, Xj, πi, Xi)
1537
+
1538
+ Y
1539
+ P(dyj|πj, Xj, πi, Xi, W = w)ℓ(Xj, yj, �Yj)
1540
+ ���πi, Xi
1541
+
1542
+ (100)
1543
+ =E
1544
+ � �
1545
+ W
1546
+
1547
+ Y
1548
+ πj(dw)PY |X,W (dyj|Xj, w)ℓ(Xj, yj, �Yj)
1549
+ ���πi, Xi
1550
+
1551
+ (101)
1552
+ =E
1553
+ �˜ℓ(πj, Xj, �Yj)
1554
+ ��πi, Xi
1555
+
1556
+ (102)
1557
1559
+ where (100) follows from the fact that �Yj is determined by (πj, Xj); (101) follows from 1) Lemma 8
1560
+ and the fact that W is conditionally independent of (Zi−1, Xi, Xj) given Zj−1, and 2) Yj is
1561
+ conditionally independent of Zj−1 given (Xj, W); and (102) follows from the definition of ˜ℓ in (8).
1562
+ With the above identity, the loss-to-go defined in (36) can be rewritten as
1563
+ Vi(π, x; ψn) = E
1564
+
1565
+ n
1566
+
1567
+ j=i
1568
+ ˜ℓ(πj, Xj, �Yj)
1569
+ ���πi = π, Xi = x
1570
+
1571
+ ,
1572
+ i = 1, . . . , n.
1573
+ (103)
1574
+ Now we can proceed with proving the second claim via backward induction.
1575
+ • In the final round, for all π ∈ ∆ and x ∈ X,
1576
+ Vn(π, x; ψn) = ˜ℓ(π, x, ψn(π, x))
1577
+ (104)
1578
+ ≥ V ∗
1579
+ n (π, x),
1580
+ (105)
1581
+ where (104) is due to (102) with i = j = n; and (105) is due to the definition of V ∗
1582
+ n in (33), while
1583
+ the equality holds if ψn(π, x) = ψ∗
1584
+ n(π, x).
1585
+ • For i = n − 1, . . . , 1, suppose (37) holds in the (i + 1)th round. We first show a self-recursive
1586
+ expression of Vi(π, x; ψn):
1587
+ Vi(π, x; ψn)
1588
+ = E
1589
+
1590
+ n
1591
+
1592
+ j=i
1593
+ ˜ℓ(πj, Xj, �Yj)
1594
+ ���πi = π, Xi = x
1595
+
1596
+ (106)
1597
+ = E[˜ℓ(πi, Xi, �Yi)|πi = π, Xi = x] + E
1598
+
1599
+ n
1600
+
1601
+ j=i+1
1602
+ ˜ℓ(πj, Xj, �Yj)
1603
+ ���πi = π, Xi = x
1604
+
1605
+ (107)
1606
+ = ˜ℓ(π, x, ψi(π, x)) + E
1607
+
1608
+ E
1609
+
1610
+ n
1611
+
1612
+ j=i+1
1613
+ ˜ℓ(πj, Xj, �Yj)
1614
+ ���πi+1, Xi+1, πi = π, Xi = x
1615
+ ������πi = π, Xi = x
1616
+
1617
+ (108)
1618
+ = ˜ℓ(π, x, ψi(π, x)) + E
1619
+
1620
+ E
1621
+
1622
+ n
1623
+
1624
+ j=i+1
1625
+ ˜ℓ(πj, Xj, �Yj)
1626
+ ���πi+1, Xi+1
1627
+ ������πi = π, Xi = x
1628
+
1629
+ (109)
1630
+ = ˜ℓ(π, x, ψi(π, x)) + E
1631
+ �Vi+1(πi+1, Xi+1; ψn)|πi = π, Xi = x
1632
+
1633
+ (110)
1634
+ where the second term of (109) follows from the fact that �Yi+1 is determined by (πi+1, Xi+1),
1635
+ and the fact that (πj, Xj)n
1636
+ j=i+1 is conditionally independent of (πi, Xi) given (πi+1, Xi+1, �Yi+1)
1637
+ as guaranteed by Lemma 7. Then,
1638
+ Vi(π, x; ψn) ≥ ˜ℓ(π, x, ψi(π, x)) + E
1639
+ �V ∗
1640
+ i+1(πi+1, Xi+1)|πi = π, Xi = x
1641
+
1642
+ (111)
1643
+ = ˜ℓ(π, x, ψi(π, x)) + E
1644
+ �V ∗
1645
+ i+1(πi+1, Xi+1)|πi = π, Xi = x, �Yi = ψi(π, x)
1646
+
1647
+ (112)
1648
+ = Q∗
1649
+ i (π, x, ψi(π, x))
1650
+ (113)
1651
+ ≥ V ∗
1652
+ i (π, x)
1653
+ (114)
1654
+ where (111) follows from the inductive assumption; (112) follows from the fact that �Yi is
1655
+ determined given (πi, Xi); (113) follows from the definition of Q∗
1656
+ i in (34); and the final inequality
1657
+ with the equality condition follow from the definitions of V ∗
1658
+ i and ψ∗
1659
+ i in (33) and (35).
1660
+ This proves the second claim.
1661
1663
+ Acknowledgement
1664
+ The authors would like to thank Prof. Maxim Raginsky for the encouragement of looking into
1665
+ dynamic aspects of statistical problems, and Prof. Lav Varshney for helpful discussions on this work.
1666
+ References
1667
+ [1] A. Xu, “Dynamic inference,” arXiv 2111.14746, 2021.
1668
+ [2] D. B. Grimes, D. R. Rashid, and R. P. Rao, “Learning nonparametric models for probabilistic
1669
+ imitation,” In Advances in Neural Information Processing Systems, 2006.
1670
+ [3] P. Englert, A. Paraschos, J. Peters, and M. P. Deisenroth, “Probabilistic model-based imitation
1671
+ learning,” Adaptive Behavior, 2013.
1672
+ [4] F. Torabi, G. Warnell, and P. Stone, “Behavioral cloning from observation,” in International
1673
+ Joint Conference on Artificial Intelligence, 2018.
1674
+ [5] A. Feldbaum, “Dual control theory, Parts I and II,” Automation and Remote Control, vol. 21,
1675
+ 1967.
1676
+ [6] M. Strens, “A Bayesian framework for reinforcement learning.” in Proceedings of the 17th
1677
+ International Conference on Machine Learning, pp. 943–950, 2000.
1678
+ [7] P. Poupart, N. Vlassis, J. Hoey, and K. Regan, “An analytic solution to discrete Bayesian
1679
+ reinforcement learning,” International Conference on Machine Learning, 2006.
1680
+ [8] M. Ghavamzadeh, S. Mannor, J. Pineau, and A. Tamar, Bayesian Reinforcement Learning: A
1681
+ Survey.
1682
+ Now Foundations and Trends, 2015.
1683
+ [9] S. Ross, G. J. Gordon, and D. Bagnell, “A reduction of imitation learning and structured
1684
+ prediction to no-regret online learning,” In International Conference on Artificial Intelligence
1685
+ and Statistics, 2011.
1686
+ [10] D. J. Foster, S. M. Kakade, J. Qian, and A. Rakhlin, “The statistical complexity of interactive
1687
+ decision making,” arXiv:2112.13487, 2022.
1688
+ [11] H. Zhong, W. Xiong, S. Zheng, L. Wang, Z. Wang, Z. Yang, and T. Zhang, “A posterior
1689
+ sampling framework for interactive decision making,” arXiv:2211.01962, 2022.
1690
+ [12] K. Bhatia and K. Sridharan, “Online learning with dynamics: A minimax perspective,”
1691
+ arXiv:2012.01705, 2020.
1692
+ [13] H. Unbehauen, “Adaptive dual control systems: a survey,” in Proceedings of the IEEE 2000
1693
+ Adaptive Systems for Signal Processing, Communications, and Control Symposium, 2000, pp.
1694
+ 171–180.
1695
+ [14] M. O. G. Duff, “Optimal learning: Computational procedure for Bayes-adaptive Markov
1696
+ decision processes,” Ph.D. dissertation, University of Massachusetts, Amherst, 2002.
1697
1699
+ [15] M. Raginsky, Lecture notes for ECE 555 Control of Stochastic Systems, Spring 2019, University
1700
+ of Illinois at Urbana-Champaign, 2019.
1701
+ [16] T. Osa, J. Pajarinen, G. Neumann, J. A. Bagnell, P. Abbeel, and J. Peters, “An algorithmic
1702
+ perspective on imitation learning,” Foundations and Trends in Robotics, vol. 7, no. 1-2, 2018.
1703
+ [17] J. Choi and K. Kim, “Map inference for Bayesian inverse reinforcement learning,” In Advances
1704
+ in Neural Information Processing Systems, 2011.
1705
+ [18] D. Ramachandran and E. Amir, “Bayesian inverse reinforcement learning,” vol. 7, pp. 2586–2591,
1706
+ 2007.
1707
+ [19] R. Dearden, N. Friedman, and D. Andre, “Model based Bayesian exploration,” Uncertainty in
1708
+ Artificial Intelligence (UAI), vol. 15, pp. 150–159, 1999.
1709
+ [20] E. D. Klenske and P. Hennig, “Dual control for approximate Bayesian reinforcement learning,”
1710
+ J. Machine Learn. Res., 2016.
1711
+ [21] A. Guez, D. Silver, and P. Dayan, “Efficient Bayes-adaptive reinforcement learning using
1712
+ sample-based search,” Advances in Neural Information Processing Systems, 2012.
1713
+ [22] B. Michini and J. How, “Improving the efficiency of Bayesian inverse reinforcement learning,”
1714
+ IEEE International Conference on Robotics and Automation, 2012.
1715
1716
29AyT4oBgHgl3EQfPvbO/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
2NAyT4oBgHgl3EQfbve8/content/tmp_files/2301.00270v1.pdf.txt ADDED
@@ -0,0 +1,1966 @@
+ ULTRAPROP: Principled and Explainable Propagation on Large Graphs
+ Meng-Chieh Lee, Carnegie Mellon University, Pittsburgh, USA
+ Shubhranshu Shekhar, Carnegie Mellon University, Pittsburgh, USA
+ Jaemin Yoo, Carnegie Mellon University, Pittsburgh, USA
+ Christos Faloutsos, Carnegie Mellon University, Pittsburgh, USA
+ ABSTRACT
+ Given a large graph with few node labels, how can we (a) identify the mixed network-effect of the graph and (b) predict the unknown labels accurately and efficiently? This work proposes Network Effect Analysis (NEA) and ULTRAPROP, which are based on two insights: (a) the network-effect (NE) insight: a graph can exhibit not only one of homophily and heterophily, but also both or none in a label-wise manner, and (b) the neighbor-differentiation (ND) insight: neighbors have different degrees of influence on the target node based on the strength of connections.
+ NEA provides a statistical test to check whether a graph exhibits network-effect or not, and surprisingly discovers the absence of NE in many real-world graphs known to have heterophily. ULTRAPROP solves the node classification problem with notable advantages: (a) Accurate, thanks to the network-effect (NE) and neighbor-differentiation (ND) insights; (b) Explainable, precisely estimating the compatibility matrix; (c) Scalable, being linear with the input size and handling graphs with millions of nodes; and (d) Principled, with closed-form formula and theoretical guarantee. Applied on eight real-world graph datasets, ULTRAPROP outperforms top competitors in terms of accuracy and run time, requiring only stock CPU servers. On a large real-world graph with 1.6M nodes and 22.3M edges, ULTRAPROP achieves ≥ 9× speedup (12 minutes vs. 2 hours) compared to most competitors.
+ ACM Reference Format:
+ Meng-Chieh Lee, Shubhranshu Shekhar, Jaemin Yoo, and Christos Faloutsos. 2023. ULTRAPROP: Principled and Explainable Propagation on Large Graphs. In Under Submission. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
+ 1 INTRODUCTION
+ Given a large, undirected, and unweighted graph with few labeled nodes, how can we infer the labels of the remaining unlabeled nodes, often without node features? Node classification is often employed to infer labels on large real-world graphs, since manual labeling is expensive and time-consuming. For example, in social networks with millions of users, identifying even a fraction (say 5%) of users' groups is prohibitive, which limits the application of methods that assume a large fraction of labels is given. Moreover, node features are frequently missing in real-world graphs. For those methods that require node features in classification, they create the features based on the graph [9, 12, 13], such as using the one-hot encoding of node degree.
65
+ Previous works on node classification have two main limita-
66
+ tions. First, they ignore the complex network-effect of real-world
67
+ graphs and understand their characteristic as either homophily or
68
+ heterophily. The co-existing case of homophily and heterophily,
69
+ which we call X-ophily in this work, has been neglected. Sec-
70
+ ond, they either a) ignore the different influences of neighboring
71
+ nodes during inference or b) require extensive computation to
72
+ give dynamic weights to the adjacency matrix. In this work, we
73
+ address these two challenges and consider the dynamic and com-
74
+ plex relationships between neighboring nodes with two insights
75
+ network-effect and neighbor-differentiation for designing an ac-
76
+ curate and efficient approach for node classification.
77
+ NE (network-effect): The first goal is to analyze the network-
78
+ effect of a graph (i.e., homophily, heterophily, or any combination
79
+ which we call X-ophily) in a principled and class-conditional way.
80
+ That is, a single graph can have homophily and heterophily at
81
+ the same time between different pairs of classes. The challenge
82
+ is usually avoided in literature: inference-based methods assume
83
+ that the relationship is given by domain experts [10]; deep graph
84
+ models either assume homophily [16, 39] or misidentify graphs
85
+ having no NE as heterophily graphs [23, 41].
86
+ ND (neighbor-differentiation): The second goal is to approx-
87
+ imate different influence levels of neighboring nodes effectively.
88
+ Existing works require extensive computation to measure the
89
+ influence levels in node classification. For instance, HOLS [8]
90
+ solves ND by mining 𝑘−cliques, while listing all the instances
91
+ is time-consuming; Graph Attention Network (GAT) [35] learns
92
+ more than one relationship for each neighbor, while heavily rely-
93
+ ing on the node features.
94
+ We provide an informal definition of the problem:
95
+ INFORMAL PROBLEM 1.
96
+ • Given an undirected and unweighted graph
97
+ – with few labeled nodes,
98
+ – without node features,
99
+ • Infer the labels of all the remaining nodes
100
+ – accurately under any types of network effects,
101
+ – explaining the predictions to human experts,
102
+ – efficiently in large-scale graphs with scalability.
103
+ Our solutions: We propose Network Effect Analysis (NEA),
104
+ an algorithm to statistically test NE of a real-world graph with
105
+ only a few observed node labels. NEA analyzes the relationships
106
+ between all pairs of different classes in an efficient manner. In Figure 2, we show that surprisingly many large public datasets known as heterophily graphs do not have NE at all.
+ [Figure 1: three panels: (a) NE: Compatibility Matrix Estimation, (b) ND: Neighbor Differentiation, (c) Scalability. Graphics omitted in the text extraction.]
+ Figure 1: ULTRAPROP is Effective, Explainable, and Scalable. (a) Thanks to Network Effect Formula, ULTRAPROP explains the dataset by precisely estimating the compatibility matrix, observing both heterophily and homophily. (b) Thanks to "Emphasis" Matrix, ULTRAPROP predicts the label of the gray node X correctly, while LINBP fails. (c) ULTRAPROP is fast and scales linearly with the number of edges. See Introduction for more details.
172
+ We then propose ULTRAPROP, a principled approach using
173
+ both insights of NE and ND to conduct accurate node classifi-
174
+ cation on large graphs with explainability. The explainability is
175
+ built upon the combination of influential neighbors (ND) and the
176
+ compatibility matrix that we carefully and automatically estimate
177
+ (see Lemma 3). Figure 1 illustrates the advantages of ULTRA-
178
+ PROP. Figure 1a shows how ULTRAPROP provides explanation
179
+ by estimating a compatibility matrix from only 5% of node labels:
180
+ the interrelations of classes imply that the first half follows het-
181
+ erophily, while the other half follows homophily. Figure 1b shows
182
+ that ULTRAPROP predicts the different influences of neighbors
183
+ correctly by ND, where the central vertex X is closer to the red
184
+ nodes R1, R2, and R3 in the embedding space, as it participates
185
+ in a closely-knit community with them. Finally, Figure 1c shows
186
+ the linear scalability of ULTRAPROP with the number of edges.
187
+ It is 9× faster than most of the competitors, and requires only 12
188
+ minutes on a large real-world graph with over 22M edges.
189
+ In summary, the advantages of ULTRAPROP are
190
+ (1) Accurate, thanks to the precise estimation of the compati-
191
+ bility matrix, and the reliable measurement of the different
192
+ importance of neighbors,
193
+ (2) Explainable, interpreting the datasets with estimated com-
194
+ patibility matrices, which work for homophily, heterophily,
195
+ or any combination – X-ophily,
196
+ (3) Scalable, scaling linearly with the input size,
197
+ (4) Principled, providing a tight bound of convergence for
198
+ the random walks, and the closed-form formula for the
199
+ compatibility matrix (see Lemma 2 and 3).
200
+ Reproducibility: Our implemented source code and prepro-
201
+ cessed datasets will be published once the paper is accepted.
202
+ 2
203
+ BACKGROUND AND RELATED WORK
204
+ We introduce preliminaries, and related works on label propaga-
205
+ tion and node embedding. Table 1 presents qualitative compari-
206
+ son of state-of-the-art approaches against our proposed method
207
+ ULTRAPROP. No competitor fulfills all the specs in Table 1.
208
+ Notation. Let 𝐺 be an undirected and unweighted graph with
209
+ 𝑛 nodes and 𝑚 edges with 𝑨 as the adjacency matrix. 𝑨𝑖𝑗 = 1
210
+ indicates that nodes 𝑖, 𝑗 are connected by an edge. Each node 𝑖 has
211
+ a unique label 𝑙(𝑖) ∈ {1, 2, . . . ,𝑐}, where 𝑐 denotes the number
212
+ of classes. Let 𝑬 ∈ R𝑛×𝑐 be the initial belief matrix containing
213
+ the prior information, i.e., the labeled nodes. 𝑬𝑖𝑘 = 1 if 𝑙(𝑖) = 𝑘,
214
+ and the rest entries of the 𝑖𝑡ℎ row are filled up with zeros. For
215
+ the nodes without labels, all the entries corresponding to those
216
+ nodes are set to 1/𝑐. 𝑯 ∈ R𝑐×𝑐 is a row-normalized compatibility
217
+ matrix where 𝑯𝑘𝑙 denotes the relative influence of class 𝑙 on
218
+ class 𝑘. The residual of a matrix around 𝑘 is denoted as ˆ𝒀 and is
219
+ defined as ˆ𝒀 = 𝒀 −𝑘 × 1 where 𝒀 is centered 1 around 𝑘, and 1 is
220
+ matrix of ones.
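To make the notation concrete, here is a small sketch (illustrative only, not from the paper; NumPy-based, with hypothetical names) of building the initial belief matrix E and its residual centered around 1/c:

```python
import numpy as np

def initial_beliefs(labels, n, c):
    """labels: dict {node_id: class_id in 0..c-1} for the few labeled nodes."""
    E = np.full((n, c), 1.0 / c)     # unlabeled rows: uniform belief 1/c
    for i, k in labels.items():
        E[i] = 0.0
        E[i, k] = 1.0                # labeled rows: one-hot
    E_hat = E - 1.0 / c              # residual, i.e., E centered around k = 1/c
    return E, E_hat

# toy usage: 5 nodes, 2 classes, nodes 0 and 3 labeled
E, E_hat = initial_beliefs({0: 1, 3: 0}, n=5, c=2)
```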
221
+ 2.1
222
+ Label Propagation
223
+ Belief Propagation. Belief Propagation (BP) is a popular method
224
+ for label inference in graphs [10, 18, 28]. FABP [18] and LINBP
225
+ [10] accelerate BP by approximating the final belief assignment
226
+ from BP. In particular, LINBP approximates the final belief as:
227
+ \hat{B} = \hat{E} + A\,\hat{B}\,\hat{H}, \qquad (1)
229
+ where ˆ𝑩 is a residual final belief matrix, initialized with all zeros,
230
+ 𝑨 is the adjacency matrix. The compatibility matrix 𝑯 and initial
231
+ beliefs 𝑬 are centered around 1/𝑐 to ensure convergence.
232
+ Higher-Order Propagation Methods. HOLS [8] leverages
233
+ higher-order graph structures, i.e. 𝑘−cliques. It propagates the
234
+ labels by incorporating the weights from higher-order cliques.
235
+ However, mining cliques is computationally intensive, and pro-
236
+ hibitive for large graphs.
237
+ 2.2
238
+ Embedding Methods
239
+ Traditional Embedding Methods. Numerous embedding meth-
240
+ ods [5, 7, 29] have been proposed to capture neighborhood sim-
241
+ ilarity and role of nodes in the graph. Chen et al. [5] propose a
242
+ random walk based generalized embedding method to capture
243
+ non-linear relations among nodes. Similarly, Pixie [7] utilizes
244
+ localized random walk based on node features. Further, [29] intro-
245
+ duced a generalized method that derives the matrix closed forms
246
+ of different graph embedding methods.
247
+ 1A matrix “centered around” 𝑘 has all its entries close to 𝑘 and the average of the
248
+ entries is exactly 𝑘.
249
+
250
+ Table 1: ULTRAPROP matches all specs, while competitors miss one or more of the properties. Each property corresponds to a contribution in Introduction. '?' indicates that it is unclear from the original paper.
+ [Table 1 was flattened by the PDF-to-text extraction and the check marks are not recoverable. Methods: BP [10, 18], HOLS [8], General GNNs [16, 17], Attention GNNs [15, 35], Heterophily GNNs [2, 6], ULTRAPROP. Properties: Contr. (1) Handling NE, Contr. (1) Handling ND, Contr. (2) Explainable, Contr. (3) Scalable, Contr. (4) Principled.]
281
+ Deep Graph Models. Graph Convolutional Networks (GCN) [16]
282
+ employ approximate spectral convolutions to incorporate neigh-
283
+ borhood information. APPNP [17] utilizes personalized PageR-
284
+ ank to leverage the local information and a larger neighborhood.
285
+ To account for ND, Graph Attention Networks (GAT) [15, 35] al-
286
+ low for assigning importance weights to neighborhoods. However,
287
+ attention GNNs require node features, and need many learnable
288
+ parameters, making it infeasible for large graphs. MIXHOP [2]
289
+ makes no assumption of homophily, and mixes powers of the
290
+ adjacency matrix to incorporate more than 1-hop neighbors in
291
+ each layer. H2GCN [41] is built on three key designs to better
292
+ learn the structure of heterophily graphs; nevertheless, it requires
293
+ too much memory and thus is not able to handle large graphs.
294
+ GPR-GNN [6] allows the learnable weights to be negative during
295
+ propagation with Generalized PageRank. LINKX [23] introduces
296
+ multiple large heterophily datasets, but it is not applicable to
297
+ graphs without node features. [26] empirically evaluates the per-
298
+ formance of GNNs on small heterophily datasets (≤ 10K nodes).
299
+ However, most of the conclusions are made based on the evalua-
300
+ tions where the node features are used. While deep graph models
301
+ have been shown to be state-of-the-art methods, it relies on node
302
+ features and is not scalable without GPU. Further, it is hard to
303
+ supply explanations or provide theoretical analysis.
304
+ 3
305
+ PROPOSED METHOD PART I – “NEA”
306
+ Given a graph with few node labels, how can we identify what are
307
+ the classes that a node with a specific class connects to? In other
308
+ words, how can we find whether the graph exhibits X-ophily – ho-
309
+ mophily, heterophily, or even none? We propose Network Effect
310
+ Analysis (NEA), a statistical approach to identify the network-
311
+ effect (NE) in a graph. It leads to interesting discovery that many
312
+ widely used heterophily graphs exhibit no NE.
313
+ 3.1
314
+ Network Effect Analysis (NEA)
315
+ Previous works on identifying NE of a graph [23, 41] have two
316
+ main limitations. First, when a class connects to all existing
317
+ classes uniformly, they misunderstand this non-homophily class
318
+ as heterophily, which should be considered as having no NE. Sec-
319
+ ond, they require the labels of most nodes in a graph, even though
320
+ Data: Edges E and priors P
+ Result: p-value table F
+ /* edges with both nodes in priors */
+ 1  Extract E' such that (i, j) ∈ E and i, j ∈ P, for all (i, j) ∈ E';
+ 2  T ← O_{c×c};   // test statistic table
+ /* do χ² test for B times */
+ 3  for b1 = 1, ..., B do
+ 4      for c1 = 1, ..., c do
+ 5          for c2 = c1 + 1, ..., c do
+ 6              V ← O_{2×2};   // contingency table
+ 7              Shuffle(E');   // sampling
+ 8              for (i, j) ∈ E' do
+ 9                  if l(i) = c1 and l(j) = c1 then
+ 10                     V_{11} ← V_{11} + 2;
+ 11                 else if (l(i) = c1 and l(j) = c2) or
+ 12                         (l(i) = c2 and l(j) = c1) then
+ 13                     V_{21} ← V_{21} + 1;
+ 14                     V_{12} ← V_{12} + 1;
+ 15                 else if l(i) = c2 and l(j) = c2 then
+ 16                     V_{22} ← V_{22} + 2;
+ 17                 if Σ_{i=1}^{2} Σ_{j=1}^{2} V_{ij} > 250 then
+ 18                     Break;
+ 19                 end
+ 20             end
+ /* record statistics of class pairs */
+ 21             T_stat = χ²-Test-Statistic(V);
+ 22             T_{c1,c2} ← T_{c1,c2} + T_stat / B;
+ 23             T_{c2,c1} ← T_{c2,c1} + T_stat / B;
+ 24         end
+ 25     end
+ 26 end
+ 27 Compute p-value table F_{c×c} with average statistics in T;
+ 28 Return F;
+ Algorithm 1: Network Effect Analysis (NEA)
389
+ in most real-world node classification tasks only a few node la-
390
+ bels are observed. We propose NEA to address such limitations.
391
+ Before introducing NEA, we provide two propositions:
392
+ PROPOSITION 1. Given a graph and a class 𝑐𝑖, if the nodes
393
+ with class 𝑐𝑖 tend to connect uniformly to the nodes with all
394
+ classes 1, ...,𝑐 equally, then class 𝑐𝑖 has no NE.
395
+ PROPOSITION 2. If all classes 𝑐𝑖 = 1, ...,𝑐 in a graph have no
396
+ NE, then this graph has no NE.
397
+ We separate heterophily graphs from those with no NE by the
398
+ propositions. In heterophily graphs, the nodes of a specific class
399
+ are likely to be connected to the nodes of other classes, such as
400
+ in bipartite graphs that connect different classes of nodes. In this
401
+ case, knowing the label of a node gives meaningful information
402
+ about the labels about its neighbors. On the other hand, if a graph
403
+ has no NE, every node has equal probabilities for more than one
404
+ class even after we consider the structural information from its
405
+ neighbors, which is useless to infer its true label.
406
+ To analyze whether a specific class 𝑐𝑖 has NE or not, we use
407
+ 𝜒2 test to identify whether there exists a statistically significant
408
+ contingency between the classes. Given two classes 𝑐1 and 𝑐2, the
409
+
410
+ [Figure 2: six panels, each showing an edge-counting heatmap (left) and a p-value table (right) over class IDs. Panel titles: (a) "Genius": No NE; (b) "Penn94": No NE; (c) "Twitch": No NE; (d) "arXiv-Year": X-ophily with Weak NE; (e) "Patent-Year": Heterophily with Weak NE; (f) "Pokec-Gender": Heterophily with Strong NE. Graphics omitted in the text extraction.]
+ Figure 2: NEA discovers that real-world heterophily graphs do not necessarily have network-effect (NE). For each dataset, we report the edge counting on the left, and the p-value table output from NEA on the right. We have a case of X-ophily, e.g. in "arXiv-Year", class 1 is homophily, and the rest are heterophily.
602
+ input to the test is 2 × 2 contingency table with counts of edges
603
+ where nodes of each edge ∈ {𝑐1,𝑐2}.
604
+ NULL HYPOTHESIS 1. Edges are equally likely to exhibit homophily and heterophily.
606
+ Algorithm 1 presents the procedure for the proposed NEA.
607
+ A practical challenge is that if the numbers in the table are too
608
+ large, 𝑝-value becomes extremely small and meaningless [24].
609
+ However, sampling for only a single round can be unstable and
610
+ output very different results. To address this, we combine 𝑝-
611
+ values from different random sampling by Universal Inference
612
+ [38]. We firstly sample edges to add to the contingency table
613
+ until the frequency is above a specified threshold, and compute
614
+ the 𝜒2 test statistic for each class pair. Next, following Universal
615
+ Inference, we repeat the procedure for random samples of edges
616
+ for 𝐵 rounds and average the statistics. At last, we use the average
617
+ statistics to compute the 𝑝-value table 𝑭𝑐×𝑐 of 𝜒2 tests.
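To make the testing procedure concrete, the following is a rough sketch of the per-class-pair test with B-round averaging (an illustration under assumptions, not the authors' released code; the cap of 250 follows Algorithm 1, while the function name, the use of SciPy, and the sampling details are ours):

```python
import numpy as np
from scipy.stats import chi2

def pair_p_value(edges, labels, c1, c2, B=1000, cap=250, seed=0):
    """edges: list of (i, j) with both endpoints labeled; labels: dict node -> class."""
    rng = np.random.default_rng(seed)
    avg_stat = 0.0
    for _ in range(B):
        V = np.zeros((2, 2))                      # 2x2 contingency table
        for idx in rng.permutation(len(edges)):   # shuffled sampling of edges
            li, lj = labels[edges[idx][0]], labels[edges[idx][1]]
            if li == lj == c1:
                V[0, 0] += 2
            elif {li, lj} == {c1, c2}:
                V[0, 1] += 1
                V[1, 0] += 1
            elif li == lj == c2:
                V[1, 1] += 2
            if V.sum() > cap:                     # stop once the table is full enough
                break
        # Pearson chi-squared statistic for the 2x2 table (assumes no empty row/column)
        row, col, tot = V.sum(1, keepdims=True), V.sum(0, keepdims=True), V.sum()
        expected = row @ col / tot
        avg_stat += ((V - expected) ** 2 / expected).sum() / B
    return chi2.sf(avg_stat, df=1)                # averaged statistic -> p-value
```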
618
+ It is worth noting that, NEA is robust to the noisy edges, thanks
619
+ to the random sampling. It also works well given either a few or
620
+ many node labels. Given only a few observations, 𝜒2 test works
621
+ well enough when the frequency in the contingency table are only
622
+ at least 5; given many observations, the sampling and combining
623
+ trick ensures the correctness of 𝑝-value.
624
+ We give observations based on the result of NEA:
625
+ OBSERVATION 1. If a class accepts all the null hypotheses in
626
+ Algorithm 1, then this class has no NE.
627
+ We then extend Observation 1 to an extreme case:
628
+ OBSERVATION 2. If all classes in a graph obey Observation 1,
629
+ the node classification problem is unsolvable under our setting.
630
+ 3.2
631
+ Discoveries
632
+ For each dataset, we equally sample 5% of node labels and com-
633
+ pute the 𝑝-value table by Algorithm 1. This is because a) only
634
+ a few labels are observed in most node classification tasks, and
635
+ thus it is natural to make the same assumption in this analysis,
636
+ and b) our NEA can correctly analyze NE even from partial ob-
637
+ servations. We set 𝐵 = 1000 to output stable results. Based on
638
+ Observation 2, here is our surprising discovery:
639
+ DISCOVERY 1 (NO NE). “Genius”, “Penn94”, and “Twitch”
640
+ have no NE, exhibiting neither homophily nor heterophily.
641
+ “Genius” [22], “Penn94” [34], and “Twitch” [31] have been
642
+ widely used in previous works [21, 23, 25, 27, 36, 40]. In “Ge-
643
+ nius” (Figure 2a), we see that both classes 1 and 2 tend to connect
644
+ to class 1. This makes the class 2 indistinguishable by the graph
645
+ structure. NEA thus accepts the null hypothesis and identifies
646
+ that there exists no statistically significant difference. This means
647
+ that the edges have the same probabilities to be homophily and
648
+ heterophily. We can see a similar phenomenon in “Penn94” (Fig-
649
+ ure 2b). “Twitch” (Figure 2c) is not considered as a homophily
650
+ graph because the effect is too weak, where the scales on the
651
+ color bar are very close. However, it is not a heterophily graph
652
+ as well, where NEA correctly identifies that every class tends to
653
+ connect to both classes near-uniformly.
654
+ We further analyzed three more datasets:
655
+ DISCOVERY 2 (WEAK AND STRONG NE). “Arxiv” and
656
+ “Patent-Year” exhibit weak NE; and “Pokec-Gender” exhibits
657
+ strong NE.
658
+ The “arXiv-Year” and “Patent-Year” datasets (Figure 2d and 2e)
659
+ have weak NE, where one of the classes accepts more than one
660
+ null hypothesis. “Pokec-Gender” (Figure 2f) shows strong NE,
661
+ where the estimated 𝑝-value is 0.008. These three datasets will
662
+ later be used in our experiments.
663
+ 4
664
+ PROPOSED METHOD PART II –
665
+ ULTRAPROP
666
+ We propose ULTRAPROP, our approach for accurate node classifi-
667
+ cation. Algorithm 2 shows the algorithm of ULTRAPROP. In line
668
+ 1, given an adjacency matrix 𝑨 and rank 𝑑, we make “Emphasis”
669
+ Matrix 𝑨∗ (in Section 4.1) to handle the neighbor-differentiation
670
+ (ND). To handle network-effect (NE), we estimate the compati-
671
+ bility matrix ˆ𝑯∗ from 𝑨∗ in line 2 (in Section 4.2). In line 3 to 7,
672
+ we initialize and propagate the beliefs ˆ𝑩 iteratively through 𝑨∗
673
+ until they converge. In each iteration, we aggregate the beliefs of
674
+ neighbors in ˆ𝑩, weighted by the values in 𝑨∗. This aims to draw
675
+ attention to the neighbors that are more structurally important.
676
+
677
+ Data: Adjacency matrix A, initial belief Ê, priors P, and decomposition rank d
+ Result: Final belief B
+ 1  A* ← "Emphasis"-Matrix(A, d);
+ 2  Ĥ* ← Compatibility-Matrix-Estimation(A*, Ê, P);
+ /* propagation */
+ 3  B̂(0) ← O_{n×c}, t ← 0;
+ 4  while inferences changed and (Σ |B̂(t+1) − B̂(t)|) / (n·c) > 1 / lg(n·c) do
+ 5      B̂(t+1) ← Ê + f · A* B̂(t) Ĥ*;
+ 6      t ← t + 1;
+ 7  end
+ 8  Return B ← B̂(t) + 1/c;
+ Algorithm 2: ULTRAPROP
701
+ The interrelations between classes are handled by multiplying with the estimated compatibility matrix Ĥ*. We further include an early stopping criterion in line 4 for more efficient propagation.
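As a rough illustration of the propagation loop (lines 3 to 7 of Algorithm 2), the sketch below assumes a sparse weighted adjacency A_star, residual beliefs E_hat, an estimated compatibility matrix H_hat, and a scaling factor f supplied by the caller; "lg" is read as log base 2, and none of this is the official implementation:

```python
import numpy as np

def propagate(A_star, E_hat, H_hat, f, n, c, max_iter=100):
    """Iterate B_hat <- E_hat + f * A_star @ B_hat @ H_hat until the change is small."""
    B_hat = np.zeros((n, c))
    for _ in range(max_iter):
        B_new = E_hat + f * (A_star @ B_hat) @ H_hat
        done = np.abs(B_new - B_hat).sum() / (n * c) <= 1.0 / np.log2(n * c)
        B_hat = B_new
        if done:
            break
    return B_hat + 1.0 / c   # shift residual beliefs back to the probability scale
```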
704
+ 4.1
705
+ “Emphasis” Matrix
706
+ To incorporate the idea of ND, where neighbors have different
707
+ importances, we propose to replace the unweighted adjacency
708
+ matrix 𝑨 with a weighted one. The weight of edge (𝑖, 𝑗) reflects
709
+ the influence of node 𝑖 for 𝑗. We present an efficient solution to
710
+ weigh 𝑨 without using any node labels. It firstly embeds nodes
711
+ into structure-aware representations via random walks, and then
712
+ measures their similarities via distances in the embedding space.
713
+ Structure-Aware Node Representation. We represent nodes
714
+ in 𝑑-dimensional vector space efficiently using Singular Value
715
+ Decomposition (SVD) on the high-order proximity matrix of the
716
+ graph and capture information from pairwise connections. To
717
+ fast approximate the higher-order proximity matrix, we utilize
718
+ random walks described in Algorithm 3 from line 1 to 8. Given a
719
+ proximity matrix 𝑾 ′, 𝑾 ′
720
+ 𝑖𝑗 records the number of times we visit
721
+ node 𝑗 if we start a random walk from node 𝑖. Each neighbor has
722
+ the same probability of being visited in the unweighted graphs,
723
+ where only those structurally important neighbors are visited
724
+ more frequently.
725
+ To theoretically justify why it works, we prove that the neigh-
726
+ bor distribution for each node converges after a number of trials:
727
+ LEMMA 1 (CONVERGENCE OF REGULAR RANDOM WALKS).
728
+ With probability 1−𝛿, the error 𝜖 between the approximated distri-
729
+ bution and the true one for a node walking to its 1-hop neighbor
730
+ by a regular random walk of length 𝐿 with 𝑀 trials is less than
731
+ \epsilon \le \frac{\lceil (L-1)/2 \rceil}{L} \sqrt{\frac{\log(2/\delta)}{2LM}} \qquad (2)
737
+ PROOF. Omitted for brevity. Proof in Supplementary A.1.
738
+
739
+ To further make the estimation converge faster, we use non-
740
+ backtracking random walk. Given the start node 𝑠 and walk length
741
+ 𝐿, its function is defined as follows:
742
+ \mathcal{W}(s, L) = \left\{ (w_0 = s, \dots, w_L) \;\middle|\; w_l \in N(w_{l-1}) \ \forall l \in [1, L], \ w_{l-1} \neq w_{l+1} \ \forall l \in [1, L-1] \right\}, \qquad (3)
748
+ Data: Adjacency matrix 𝑨, number of trials 𝑀, number
749
+ of steps 𝐿, and dimension 𝑑
750
+ Result: Emphasis matrix 𝑨∗
751
+ 1 𝑾 ′ ← 𝑶𝑛×𝑛;
752
+ /* approximate proximity matrix by random walk
753
+ */
754
+ 2 for node 𝑖 in 𝐺 do
755
+ 3
756
+ for 𝑚 = 1, ..., 𝑀 do
757
+ 4
758
+ for 𝑗 ∈ W(𝑖, 𝐿) do
759
+ 5
760
+ 𝑾 ′
761
+ 𝑖𝑗 ← 𝑾 ′
762
+ 𝑖𝑗 + 1;
763
+ 6
764
+ end
765
+ 7
766
+ end
767
+ 8 end
768
+ /* masking, degree normalization and logarithm
769
+ */
770
+ 9 𝑾𝑛×𝑛 ← log (𝑫−1(𝑾 ′ ◦ 𝑨));
771
+ // proximity matrix
772
+ 10 𝑼𝑛×𝑑, 𝚺𝑑×𝑑, 𝑽𝑇
773
+ 𝑑×𝑛 ← SVD(𝑾,𝑑);
774
+ // embedding
775
+ 11 Weigh 𝑨∗
776
+ 𝑛×𝑛, where 𝑨∗
777
+ 𝑖𝑗 = S(𝑼𝑖, 𝑼 𝑗), ∀{𝑖, 𝑗|𝑨𝑖𝑗 = 1};
778
+ 12 Return 𝑨∗;
779
+ Algorithm 3: “Emphasis” Matrix
780
+ where 𝑁 (𝑖) denotes the neighbors of node 𝑖. Thus, with the same
781
+ 𝐿 and 𝑀, we improve Lemma 1 to have a tighter bound of 𝜖:
782
+ LEMMA 2 (CONVERGENCE OF NON-BACKTRACKING RAN-
783
+ DOM WALKS). With the same condition as in Lemma 1, the error
784
+ 𝜖 by a non-backtracking random walks is less than
785
+ \epsilon \le \frac{\lceil (L-1)/3 \rceil}{L} \sqrt{\frac{\log(2/\delta)}{2LM}} \qquad (4)
791
+ PROOF. Omitted for brevity. Proof in Supplementary A.1.
792
+
793
+ For example, when using regular random walks of length
794
+ 𝐿 = 4 with 𝑀 = 30 trials, the estimated error by Lemma 1 with
795
+ probability 95% is about 6.2%. Nevertheless, if we instead use
796
+ non-backtracking random walks, the error is reduced to 3.1%,
797
+ which is 2× lower than the one by regular walks, indicating that
798
+ the approximated distribution converges well to the true one.
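The two percentages above can be checked directly from the bounds in Lemmas 1 and 2; a quick numeric sketch (ours, not from the paper):

```python
import math

def walk_error_bound(L, M, delta, nonbacktracking=False):
    """Upper bound on epsilon from Lemma 1 (regular) or Lemma 2 (non-backtracking)."""
    step = 3 if nonbacktracking else 2
    return math.ceil((L - 1) / step) / L * math.sqrt(math.log(2 / delta) / (2 * L * M))

print(walk_error_bound(4, 30, 0.05))                        # ~0.062, i.e., 6.2%
print(walk_error_bound(4, 30, 0.05, nonbacktracking=True))  # ~0.031, i.e., 3.1%
```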
799
+ In Algorithm 3 line 9, an element-wise multiplication by 𝑨
800
+ is done to keep the approximation of 1-hop neighbor for each
801
+ node, which sufficiently supplies necessary information as well
802
+ as keeps the resulting matrix sparse. We use the inverse of the
803
+ degree matrix 𝑫−1 to reduce the influence of nodes with large de-
804
+ grees. This prevents them from dominating the pairwise distance
805
+ by containing more elements in their rows. The element-wise
806
+ logarithm aims to rescale the distribution in 𝑾, in order to en-
807
+ large the difference between smaller structures. We use SVD for
808
+ efficient rank-𝑑 decomposition of the sparse proximity matrix
809
+ 𝑾. We multiply the left-singular vectors 𝑼 by the corresponding
810
+ squared eigenvalues
811
+
812
+ 𝚺 to correct the scale.
813
+ Node Similarity. To estimate the node similarity, we compute
814
+ the distance of nodes in the embedding space. The intuition is that
815
+ the nodes that are closer in the embedding space should be better
816
+ connected with higher-order structures. Given the aforementioned
817
+ embedding 𝑼, the node similarity function S is:
818
+ S(U_i, U_j) = e^{-D(U_{ik}, U_{jk})}, \qquad (5)
820
+ where 𝑒 ≈ 2.718 denotes Euler’s number. Equation 5 is a universal
821
+ law proposed by Shepard [32], connecting the similarity with
822
+ distance via an exponential function. While the function D can
823
+
824
+ Under Submission, ,
825
+ Meng-Chieh Lee, Shubhranshu Shekhar, Jaemin Yoo, and Christos Faloutsos
826
+ be any distance metric, we use Euclidean because it is empirically
827
+ shown to work well. Negative exponential distribution is used
828
+ to bound the similarity from 0 to 1, which is close to 0 if the
829
+ distance is too large. Given 𝑨 and 𝑼, “Emphasis” Matrix 𝑨∗
830
+ with weighted edges estimated by S is defined in line 11. Since
831
+ S(𝑼𝑖, 𝑼 𝑗) = S(𝑼 𝑗, 𝑼𝑖), 𝑨∗ is still a symmetric matrix. This is a
832
+ convenient property, which is later used for the fast computation
833
+ of the spectral radius (see Lemma 4).
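Putting the pieces of this subsection together, here is a compact sketch of an "Emphasis"-Matrix-style construction (illustrative only: it uses plain random walks instead of non-backtracking ones, dense SVD instead of a sparse solver, hypothetical names, and our reading of the square-root scaling of the singular values):

```python
import numpy as np

def emphasis_weights(adj, d=32, M=30, L=4, seed=0):
    """adj: dict node_id (0..n-1) -> list of neighbors, undirected and unweighted.
    Returns dict (i, j) -> edge weight for every existing edge."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    W = np.zeros((n, n))
    for i in adj:                                    # approximate proximity by random walks
        for _ in range(M):
            w = i
            for _ in range(L):
                if not adj[w]:
                    break
                w = rng.choice(adj[w])
                W[i, w] += 1
    deg = np.array([max(len(adj[i]), 1) for i in range(n)], dtype=float)
    masked = np.zeros_like(W)
    for i in adj:                                    # keep 1-hop entries, normalize, log-rescale
        for j in adj[i]:
            if W[i, j] > 0:
                masked[i, j] = np.log(W[i, j] / deg[i])
    U, S, _ = np.linalg.svd(masked)                  # rank-d structure-aware embedding
    emb = U[:, :d] * np.sqrt(S[:d])
    weights = {}
    for i in adj:                                    # Shepard-style similarity on existing edges
        for j in adj[i]:
            weights[(i, j)] = np.exp(-np.linalg.norm(emb[i] - emb[j]))
    return weights
```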
834
+ 4.2
835
+ Compatibility Matrix Estimation
836
+ A compatibility matrix contains the class-wise strength of edges
837
+ and is important for properly inferring the node labels. In this
838
+ subsection, we show how to turn compatibility matrix estimation
839
+ into an optimization problem by introducing our closed-form
840
+ formula, which overcomes the defect of edge counting. We then
841
+ illustrate how we conquer several practical challenges to give a
842
+ precise and fast estimation.
843
+ Why NOT Edge Counting. The naive way to estimate com-
844
+ patibility matrix is via counting labeled edges. However, it is
845
+ inaccurate and has limitations: 1) rare labels will get neglected,
846
+ and 2) being noisy or biased due to few labeled nodes in real
847
+ graphs. The result is even more unreliable if the given labels are
848
+ imbalanced. Figure 3 is an example that edge counting fails if we
849
+ upsample 10× labels for only class 1. This occurs commonly in
850
+ practice, since we have only partial labels in node classification
851
+ tasks, and becomes fatal if the observed distribution is different
852
+ from the true one.
853
+ Closed-Form Formula. In Equation 1, if we initialize the final
854
+ belief with the initial one, and omit the addition of the initial
855
+ belief for the iterative propagation purpose, we have:
856
+ ˆ𝑩 = 𝑨ˆ𝑬 ˆ𝑯
857
+ (6)
858
+ Our goal is to estimate the compatibility matrix ˆ𝑯 of a given
859
+ graph, so that the difference between belief propagated by the
860
+ given priors and the final belief is minimized. To solve this, we
861
+ firstly derive the closed-form solution of Equation 6 based on our
862
+ proposed Network Effect Formula:
863
+ LEMMA 3 (NETWORK EFFECT FORMULA). Given adja-
864
+ cency matrix 𝑨 and initial and final beliefs ˆ𝑬 and ˆ𝑩, the closed-
865
+ form solution of vectorized compatibility matrix vec( ˆ𝑯) is:
866
+ vec( ˆ𝑯) = (𝑿𝑇 𝑿)−1𝑿𝑇𝒚,
867
+ (7)
868
+ where 𝑿 = 𝑰𝑐×𝑐 ⊗ (𝑨ˆ𝑬) and 𝒚 = vec( ˆ𝑩).
869
+ PROOF. Omitted for brevity. Proof in Supplementary A.2.
870
+
871
+ Although the final belief matrix ˆ𝑩 is not available before we
872
+ run actual propagation on the graph, we can replace it by 𝒚 =
873
+ vec( ˆ𝑬), and extract the ones that are corresponding to the priors
874
+ P. In other words, we change the problem into minimizing the
875
+ difference between initial belief of each node 𝑖 ∈ P by the initial
876
+ beliefs of its neighbors in the priors P, i.e., 𝑁 (𝑖) ∩ P. Intuitively,
877
+ neighbors should be able to estimate the belief for the node. The
878
+ optimization problem can then be formulated as follows:
879
+ \min_{\hat{H}} \sum_{i \in P} \sum_{u=1}^{c} \Big( \hat{E}_{iu} - \sum_{k=1}^{c} \sum_{j \in N(i) \cap P} \hat{E}_{jk} \hat{H}_{ku} \Big) \qquad (8)
892
+ With the help of Network Effect Formula, the optimization prob-
893
+ lem can then be solved by regression.
894
+ 1
895
+ 2
896
+ 3
897
+ 4
898
+ 5
899
+ 6
900
+ Class ID
901
+ 1
902
+ 2
903
+ 3
904
+ 4
905
+ 5
906
+ 6
907
+ Class ID
908
+ Edge Counting
909
+ 0.0
910
+ 0.2
911
+ 0.4
912
+ 0.6
913
+ 0.8
914
+ 1.0
915
+ (a) Balanced Prior
916
+ 1
917
+ 2
918
+ 3
919
+ 4
920
+ 5
921
+ 6
922
+ Class ID
923
+ 1
924
+ 2
925
+ 3
926
+ 4
927
+ 5
928
+ 6
929
+ Class ID
930
+ Edge Counting
931
+ 0.0
932
+ 0.2
933
+ 0.4
934
+ 0.6
935
+ 0.8
936
+ 1.0
937
+ (b) Imbalanced Prior
938
+ Figure 3:
939
+ Edge counting can not handle imbalanced case.
940
+ Class 1 is upsampled in this example.
941
+ Data: Emphasis Matrix 𝑨∗, initial belief ˆ𝑬, and priors P
942
+ Result: Estimated compatibility matrix ˆ𝑯∗
943
+ 1 𝒊 ← ∅;
944
+ // indices only related to priors
945
+ 2 for 𝑝 ∈ P do
946
+ 3
947
+ for 𝑗 = 1, ...,𝑐 do
948
+ 4
949
+ 𝒊 ← 𝒊 ∪ {𝑝 + (𝑗 − 1) ∗ 𝑐};
950
+ 5
951
+ end
952
+ 6 end
953
+ 7 𝑿 ← (𝑰𝑐×𝑐 ⊗ (𝑨∗ ˆ𝑬));
954
+ // feature matrix
955
+ 8 𝒚 ← vec( ˆ𝑬);
956
+ // target vector
957
+ 9 ˆ𝑯∗ ← 𝑅𝑖𝑑𝑔𝑒𝐶𝑉 (𝑿 [𝒊],𝒚[𝒊]);
958
+ 10 Return row-normalize(max ( ˆ𝑯∗, 0));
959
+ Algorithm 4: Compatibility Matrix Estimation
960
+ Practical Challenges and Solutions. Network Effect Formula
961
+ allows us to estimate the compatibility matrix by solving this op-
962
+ timization problem, but there still exists two practical challenges
963
+ that need to be addressed.
964
+ First, with few labels, it is difficult to properly separate them
965
+ into training and validation sets for the regression. We thus use
966
+ ridge regression with leave-one-out cross-validation (RidgeCV)
967
+ instead of the traditional linear regression. This allows us to fully
968
+ exploit the observations without having a bias caused by random
969
+ splits of training and validation sets. Moreover, the regularization
970
+ effect of ridge regression makes the compatibility matrix more
971
+ robust to noisy observations. It is noteworthy that the additional
972
+ computational cost of RidgeCV is negligible.
973
+ Next, the compatibility matrix estimated with the adjacency
974
+ matrix 𝑨 is easily interfered with by noisy neighbors, i.e., weakly-
975
+ connected pairs. To address this issue, we use our proposed “Em-
976
+ phasis” Matrix 𝑨∗ instead (see Section 4.1), to pay attention to
977
+ the labels of neighbors that are structurally important. Since the
978
+ rows of the estimated matrix 𝑯 do not sum to one in this ap-
979
+ proach, we filter out the negative values and normalize the sum
980
+ of each row to one. This is done safely, since the negative values
981
+ represent negligible relationships between nodes.
982
+ Algorithm. The overall process of estimation is shown in Al-
983
+ gorithm 4. We extract the indices that are corresponding to the
984
+ priors after the Kronecker product and vectorization in line 2 to
985
+ 7. The optimization is then conducted in line 8 to 10 to estimate
986
+ the compatibility matrix ˆ𝑯∗. The negative value filtering and row
987
+ normalization is done on line 11.
988
+
989
+ ULTRAPROP: Principled and Explainable Propagation on Large Graphs
990
+ Under Submission, ,
991
+ 4.3
992
+ Theoretical Analysis
993
+ Convergence Guarantee. To ensure the convergence of propa-
994
+ gation, we introduce a scaling factor multiplied to it during the
995
+ iterations. The exact convergence of ULTRAPROP is as follows:
996
+ LEMMA 4 (EXACT CONVERGENCE). The criterion for the
997
+ exact convergence of ULTRAPROP is:
998
+ ULTRAPROP exactly converges ⇔ 0 < 𝑓 <
999
+ 1
1000
+ 𝜌(𝑨∗) ,
1001
+ (9)
1002
+ where 𝜌(·) denotes the spectral radius of the given matrix.
1003
+ PROOF. Omitted for brevity. Proof in Supplementary A.3.
1004
+
1005
+ A smaller scaling factor leads to a faster convergence, never-
1006
+ theless, distorts the results. In ULTRAPROP, we recommend a
1007
+ large eigenvalue close to 1, setting 𝑓 = 0.9/𝜌(𝑨∗) as a reason-
1008
+ able default. Since 𝑨∗ is built to be symmetric and sparse (see
1009
+ Section 4.1), the computation of the spectral radius can be done
1010
+ efficiently.
1011
+ Complexity Analysis. ULTRAPROP uses sparse matrix repre-
1012
+ sentation of graphs. The time complexity is given as:
1013
+ LEMMA 5. ULTRAPROP scales linearly on the input size. the
1014
+ time complexity of ULTRAPROP is at most
1015
+ 𝑂(𝑚),
1016
+ (10)
1017
+ and the space complexity is at most
1018
+ 𝑂(max (𝑚,𝑛 · 𝐿 · 𝑀) + 𝑛 · 𝑐2).
1019
+ (11)
1020
+ PROOF. Omitted for brevity. Proof in Supplementary A.4.
1021
+
1022
+ 5
1023
+ EXPERIMENTS
1024
+ In this section, we aims to answer the following questions.
1025
+ Q1. Accuracy: How well does ULTRAPROP work on real-world
1026
+ graphs as compared to the baselines?
1027
+ Q2. Scalability: How does the running-time of ULTRAPROP
1028
+ scale w.r.t. graph size?
1029
+ Q3. Explainability: How to explain the results of ULTRAPROP?
1030
+ Experimental Setup
1031
+ Datasets. We focus on large graphs and include eight graph
1032
+ datasets with at least 22.5K nodes (details in Supplementary B.1)
1033
+ in our evaluation. The statistics of datasets are shown in Table 2
1034
+ and 3. For each dataset, we sample only a few node labels as
1035
+ initial beliefs. We do this for five times and report the average
1036
+ and standard deviation to omit the biases.
1037
+ “Synthetic” is the enlarged version of the graph shown in
1038
+ Figure 1, which contains both heterophily and homophily NE.
1039
+ Noisy edges are injected in the background, and the dense blocks
1040
+ are constructed by randomly generating higher-order structures.
1041
+ Baselines. We compare ULTRAPROP with five state-of-the-art
1042
+ baselines and separate them into four groups: General GNNs:
1043
+ GCN [16], and APPNP [17]. Heterophily GNN: MIXHOP [2],
1044
+ and GPR-GNN [6]. BP-based methods: HOLS [8]. Our pro-
1045
+ posed methods: ULTRAPROP-Hom and ULTRAPROP. ULTRA-
1046
+ PROP-Hom is ULTRAPROP using identity matrix as compatibility
1047
+ matrix, which assumes homophily and does not handle NE. The
1048
+ details of baselines are given in Supplementary B.2.
1049
+ Experimental Settings. For deep graph models, since we fo-
1050
+ cus on the graph without node features, the node degrees are
1051
+ transformed into one hot encoding and used as the node fea-
1052
+ tures, which is suggested and implemented by several studies
1053
+ (e.g. GraphSAGE and PyTorch Geometric) [9, 12, 13]. The de-
1054
+ tails of hyperparameters are given in Supplementary B.3. To give
1055
+ fair comparisons on run time, all the experiments are run on the
1056
+ same machine, which is a stock Linux server with 3.2GHz Intel
1057
+ Xeon CPU. In Section 5.2, we further investigate how much the
1058
+ extra cost is, if a more powerful and but more expensive machine
1059
+ is used.
1060
+ 5.1
1061
+ Q1 - Accuracy
1062
+ In Table 2 and 3, we report the accuracy and wall-clock time for
1063
+ each method. We highlight the top three from dark to light by
1064
+ ,
1065
+ and
1066
+ denoting the first, second and third place.
1067
+ OBSERVATION 3. ULTRAPROP wins on X-ophily, heterophily
1068
+ and homophily datasets.
1069
+ X-ophily and Heterophily. In Table 2, ULTRAPROP outper-
1070
+ forms all the competitors significantly by more than 34.4% and
1071
+ 12.8% accuracy on the “Synthetic” and “Pokec-Gender” datasets,
1072
+ respectively. These datasets have strong NE, thus ULTRAPROP
1073
+ boosts the accuracy owing to precise estimations of compatibility
1074
+ matrix. The success in “Synthetic” further demonstrates its ability
1075
+ to handle the dataset with X-ophily. Heterophily GNNs, namely
1076
+ MIXHOP and GPR-GNN, all fail to predict correctly, giving
1077
+ results close to random guessing. With homophily assumption,
1078
+ General GNNs and BP-based methods also perform poorly.
1079
+ Both “arXiv-Year” and “Patent-Year” datasets are shown to
1080
+ only have weak NE (in Section 3.2), thus resulting in relatively
1081
+ low accuracy for all methods compared with the other two datasets
1082
+ with strong NE. Even so, ULTRAPROP still outperforms the
1083
+ competitors by estimating a reasonable compatibility matrix. In
1084
+ “arXiv-Year”, ULTRAPROP receives the second place by running
1085
+ 74.6× faster than MIXHOP. In “Patent-Year”, only ULTRAPROP,
1086
+ APPNP and MIXHOP are able to give accuracy higher than
1087
+ random guessing, which is 26.1%.
1088
+ In the cases that ULTRAPROP is faster than ULTRAPROP-
1089
+ Hom is because of both the low cost of compatibility matrix
1090
+ estimation, and the lower spectral radius of ˆ𝑯∗, leading to a
1091
+ faster convergence while propagating.
1092
+ Homophily. In Table 3, ULTRAPROP-Hom outperforms all
1093
+ the competitors on two homophily datasets, namely “GitHub”
1094
+ and “Pokec-Locality”. ULTRAPROP performs similarly to UL-
1095
+ TRAPROP-Hom, indicating its generalizability to the homophily
1096
+ datasets by estimating near-identity matrices. In addition, ULTRA-
1097
+ PROP-Hom gives competitive results with HOLS on the other
1098
+ two homophily datasets “Facebook” and “arXiv-Category”, while
1099
+ being 84.9× and 5.7× faster than HOLS respectively. General
1100
+ GNNs rely heavily on node features for inference which explains
1101
+ their poor performance.
1102
+ OBSERVATION 4. Our optimizations makes difference.
1103
+ We evaluate the effect of different compatibility matrices – (i)
1104
+ ULTRAPROP-EC conducts edge counting on the labels of adja-
1105
+ cent nodes in the priors, instead of using our Network Effect
1106
+ Formula, and (ii) ULTRAPROP-A uses the adjacency matrix in-
1107
+ stead of “Emphasis” Matrix to estimate the compatibility matrix
1108
+
1109
+ Under Submission, ,
1110
+ Meng-Chieh Lee, Shubhranshu Shekhar, Jaemin Yoo, and Christos Faloutsos
1111
+ Table 2: ULTRAPROP wins on X-ophily and Heterophily datasets. Accuracy, running time, and speedup are reported. Win-
1112
+ ners and runner-ups in
1113
+ ,
1114
+ and
1115
+ .
1116
+ Dataset
1117
+ Synthetic
1118
+ Pokec-Gender
1119
+ arXiv-Year
1120
+ Patent-Year
1121
+ # of Nodes / Edges / Classes
1122
+ 1.2M / 34.0M / 6
1123
+ 1.6M / 22.3M / 2
1124
+ 169K / 1.2M / 5
1125
+ 1.3M / 4.3M / 5
1126
+ Label Fraction
1127
+ 4%
1128
+ 0.4%
1129
+ 4%
1130
+ 4%
1131
+ NE Strength
1132
+ Strong
1133
+ Strong
1134
+ Weak
1135
+ Weak
1136
+ NE Type
1137
+ X-ophily
1138
+ Heterophily
1139
+ X-ophily
1140
+ Heterophily
1141
+ Method
1142
+ Accuracy (%)
1143
+ Time (s)
1144
+ Speedup Accuracy (%)
1145
+ Time (s)
1146
+ Speedup Accuracy (%)
1147
+ Time (s)
1148
+ Speedup Accuracy (%)
1149
+ Time (s)
1150
+ Speedup
1151
+ GCN
1152
+ 16.7±0.0
1153
+ 3456
1154
+ 4.7×
1155
+ 51.8±0.1
1156
+ 2906
1157
+ 3.9×
1158
+ 35.3±0.1
1159
+ 132
1160
+ 3.3×
1161
+ 26.0±0.0
1162
+ 894
1163
+ 3.3×
1164
+ APPNP
1165
+ 18.6±1.1
1166
+ 7705
1167
+ 10.4×
1168
+ 50.9±0.3
1169
+ 6770
1170
+ 9.1×
1171
+ 33.5±0.2
1172
+ 423
1173
+ 10.6×
1174
+ 27.5±0.2
1175
+ 2050
1176
+ 7.6×
1177
+ MIXHOP
1178
+ 16.7±0.0
1179
+ 58391
1180
+ 79.0×
1181
+ 53.4±1.2
1182
+ 53871
1183
+ 72.7×
1184
+ 39.6±0.1
1185
+ 2983
1186
+ 74.6×
1187
+ 26.8±0.1
1188
+ 18787
1189
+ 70.1×
1190
+ GPR-GNN
1191
+ 18.9±1.2
1192
+ 7637
1193
+ 10.3×
1194
+ 50.7±0.2
1195
+ 6699
1196
+ 9.0×
1197
+ 30.1±1.4
1198
+ 400
1199
+ 10.0×
1200
+ 25.3±0.1
1201
+ 2034
1202
+ 7.6×
1203
+ HOLS
1204
+ 46.1±0.1
1205
+ 1672
1206
+ 2.3×
1207
+ 54.4±0.1
1208
+ 8552
1209
+ 11.5×
1210
+ 34.1±0.3
1211
+ 566
1212
+ 14.2×
1213
+ 23.6±0.0
1214
+ 510
1215
+ 1.9×
1216
+ ULTRAPROP-Hom
1217
+ 45.7±0.1
1218
+ 726
1219
+ 1.0×
1220
+ 56.9±0.1
1221
+ 736
1222
+ 1.0×
1223
+ 37.0±0.3
1224
+ 44
1225
+ 1.0×
1226
+ 24.1±0.0
1227
+ 316
1228
+ 1.2×
1229
+ ULTRAPROP
1230
+ 80.5±0.0
1231
+ 739
1232
+ 1.0×
1233
+ 67.2±0.1
1234
+ 742
1235
+ 1.0×
1236
+ 38.9±0.3
1237
+ 42
1238
+ 1.0×
1239
+ 28.6±0.1
1240
+ 268
1241
+ 1.0×
1242
+ Table 3: ULTRAPROP wins on Homophily datasets. Accuracy, running time, and speedup are reported. Winners and runner-
1243
+ ups in
1244
+ ,
1245
+ and
1246
+ .
1247
+ Dataset
1248
+ Facebook
1249
+ GitHub
1250
+ arXiv-Category
1251
+ Pokec-Locality
1252
+ # of Nodes / Edges / Classes
1253
+ 22.5K / 171K / 4
1254
+ 37.7K / 289K / 2
1255
+ 169K / 1.2M / 40
1256
+ 1.6M / 22.3M / 10
1257
+ Label Fraction
1258
+ 4%
1259
+ 4%
1260
+ 0.4%
1261
+ 0.4%
1262
+ Method
1263
+ Accuracy (%)
1264
+ Time (s)
1265
+ Speedup Accuracy (%)
1266
+ Time (s)
1267
+ Speedup Accuracy (%)
1268
+ Time (s)
1269
+ Speedup Accuracy (%)
1270
+ Time (s)
1271
+ Speedup
1272
+ GCN
1273
+ 67.0±0.8
1274
+ 12
1275
+ 3.0×
1276
+ 81.0±0.6
1277
+ 28
1278
+ 2.5×
1279
+ 24.5±0.6
1280
+ 209
1281
+ 1.7×
1282
+ 17.3±0.4
1283
+ 4002
1284
+ 3.3×
1285
+ APPNP
1286
+ 50.5±2.2
1287
+ 46
1288
+ 10.5×
1289
+ 74.2±0.0
1290
+ 73
1291
+ 6.6×
1292
+ 17.7±1.3
1293
+ 993
1294
+ 8.1×
1295
+ 16.8±1.7
1296
+ 11885
1297
+ 9.7×
1298
+ MIXHOP
1299
+ 69.2±0.7
1300
+ 296
1301
+ 73.5×
1302
+ 77.8±1.3
1303
+ 526
1304
+ 47.8×
1305
+ 23.6±0.5
1306
+ 3029
1307
+ 24.8×
1308
+ 16.9±0.3
1309
+ 52139
1310
+ 43.9×
1311
+ GPR-GNN
1312
+ 51.9±1.5
1313
+ 47
1314
+ 11.8×
1315
+ 74.1±0.1
1316
+ 75
1317
+ 6.8×
1318
+ 18.4±1.2
1319
+ 1016
1320
+ 8.3×
1321
+ 30.0±2.0
1322
+ 11959
1323
+ 9.7×
1324
+ HOLS
1325
+ 86.0±0.4
1326
+ 934
1327
+ 84.9×
1328
+ 80.8±0.5
1329
+ 126
1330
+ 11.5×
1331
+ 52.0±0.5
1332
+ 692
1333
+ 5.7×
1334
+ 63.7±0.3
1335
+ 8139
1336
+ 6.6×
1337
+ ULTRAPROP-Hom
1338
+ 84.7±0.5
1339
+ 4
1340
+ 1.0×
1341
+ 81.7±0.7
1342
+ 11
1343
+ 1.0×
1344
+ 49.5±1.2
1345
+ 124
1346
+ 1.0×
1347
+ 65.4±0.3
1348
+ 1270
1349
+ 1.0×
1350
+ ULTRAPROP
1351
+ 84.7±0.5
1352
+ 4
1353
+ 1.0×
1354
+ 81.7±0.7
1355
+ 11
1356
+ 1.0×
1357
+ 48.4±2.5
1358
+ 122
1359
+ 1.0×
1360
+ 64.6±1.0
1361
+ 1231
1362
+ 1.0×
1363
+ Table 4: Ablation Study: Estimating compatibility matrix by
1364
+ the proposed “Emphasis” Matrix is essential. Accuracy (%)
1365
+ is reported in the table.
1366
+ Datasets
1367
+ NE Strength
1368
+ ULTRAPROP-Hom ULTRAPROP-EC ULTRAPROP-A
1369
+ ULTRAPROP
1370
+ Synthetic
1371
+ Strong
1372
+ 77.7±0.0
1373
+ 68.0±0.1
1374
+ 77.4±0.0
1375
+ 80.5±0.0
1376
+ Pokec-Gender
1377
+ 56.9±0.1
1378
+ 64.9±0.2
1379
+ 64.8±0.2
1380
+ 67.2±0.1
1381
+ arXiv-Year (imba.)
1382
+ Weak
1383
+ 37.0±0.3
1384
+ 36.5±1.0
1385
+ 35.7±0.6
1386
+ 38.4±0.0
1387
+ Patent-Year (imba.)
1388
+ 24.1±0.0
1389
+ 24.0±0.9
1390
+ 28.7±0.1
1391
+ 28.7±0.0
1392
+ Table 5: ULTRAPROP is thrifty. AWS total dollar amount ($)
1393
+ is reported in the table. The blue and red fonts denote run-
1394
+ ning a single experiment by t3.small and p3.2xlarge, respec-
1395
+ tively. Accuracy (%) is reported in Table 2 and 3.
1396
+ Datasets
1397
+ ULTRAPROP
1398
+ GCN
1399
+ Pokec-Gender
1400
+ $ 0.28 (1.0×)
1401
+ $ 12.61 (45.0×)
1402
+ Pokec-Locality
1403
+ $ 0.47 (1.0×)
1404
+ $ 13.66 (29.1×)
1405
+ in Algorithm 4. To demonstrate effectiveness of our proposed
1406
+ estimation over edge counting, we upsample 5% labels to the
1407
+ class with the fewest labels in the datasets with weak NE, which
1408
+ are class 2 in “arXiv-Year” and class 1 in “Patent-Year”. We use
1409
+ the original labels for propagation in the imbalanced datasets.
1410
+ In Table 4, we find that ULTRAPROP outperforms all its vari-
1411
+ ants in four datasets. In the datasets with strong NE, ULTRAPROP
1412
+ shows its robustness to the structural noises and gives better re-
1413
+ sults. In the imbalanced datasets, while ULTRAPROP-EC brings
1414
+ its vulnerability to light, ULTRAPROP stays with high accuracy.
1415
+ This study highlights the importance of a precise compatibility
1416
+ matrix estimation, as well as forming it into an optimization
1417
+ problem by our Network Effect Formula as shown in Lemma 3.
1418
+ Furthermore, we compare ULTRAPROP with LINBP to dis-
1419
+ play its advantages in Figure 4. In Figure 4a, the accuracy gap
1420
+ between them indicates the necessity of precisely estimating the
1421
+ compatibility matrix. In figure 4b, owing to “Emphasis” Matrix,
1422
+ ULTRAPROP-Hom improves the accuracy in all homophily cases
1423
+ 100
1424
+ 101
1425
+ 102
1426
+ 103
1427
+ Run Time
1428
+ 0.3
1429
+ 0.4
1430
+ 0.5
1431
+ 0.6
1432
+ 0.7
1433
+ 0.8
1434
+ Accuracy
1435
+ LinBP
1436
+ UltraProp
1437
+ Synthetic
1438
+ Pokec-Gender
1439
+ arXiv-Year
1440
+ Patent-Year
1441
+ Facebook
1442
+ GitHub
1443
+ arXiv-Category
1444
+ Pokec-Locality
1445
+ (a) Run Time vs. Accuracy
1446
+ Synthetic
1447
+ Pokec-Gender
1448
+ arXiv-Year
1449
+ Patent-Year
1450
+ Facebook
1451
+ GitHub
1452
+ arXiv-Category
1453
+ Pokec-Locality
1454
+ 0.2
1455
+ 0.4
1456
+ 0.6
1457
+ 0.8
1458
+ Accruacy
1459
+ LinBP
1460
+ UltraProp-Hom
1461
+ UltraProp
1462
+ 1.8x
1463
+ (b) Accuracy
1464
+ Figure 4: Ablation Study: ULTRAPROP wins. It provides the
1465
+ best trade-off between accuracy and running time compared
1466
+ with LINBP.
1467
+ compared with LINBP; owing to both “Emphasis” Matrix and
1468
+ Network Effect Formula, ULTRAPROP improves the accuracy
1469
+ in all cases while adding negligible penalty on run time, provid-
1470
+ ing the best trade-off compared with LINBP. ULTRAPROP per-
1471
+ forming similarly to ULTRAPROP-Hom on homophily datasets,
1472
+ indicates that it correctly estimates near-identity matrices.
1473
+
1474
+ ULTRAPROP: Principled and Explainable Propagation on Large Graphs
1475
+ Under Submission, ,
1476
+ 5.2
1477
+ Q2 - Scalability
1478
+ We vary the edge number in “Pokec-Gender” and plot against
1479
+ the wall-clock running time for ULTRAPROP in Figure 1c, in-
1480
+ cluding both training and inference time. As there is no good
1481
+ way to sample the graph [19], and also it is prohibitive to use
1482
+ graph generator with million nodes, we try our best to ensure the
1483
+ connectivity by continuously removing the nodes in the graph,
1484
+ until the number of edges is no greater than the target. Note that
1485
+ ULTRAPROP scales linearly as expected from Lemma 5.
1486
+ Not only ULTRAPROP is scalable and linear, but it is also
1487
+ thrifty, achieving up to 45× savings in dollar cost. It requires only
1488
+ CPU, while comparable speeds by competitors, require GPUs.
1489
+ Table 5 shows the estimated cost, assuming that we use a small
1490
+ CPU machine for ULTRAPROP, and a GPU machine for GCN.
1491
+ Details of computation are provided in Supplementary B.4.
1492
+ 5.3
1493
+ Q3 - Explainability
1494
OBSERVATION 5. ULTRAPROP estimated the correct compatibility matrices.
Figure 5 illustrates that the compatibility matrices estimated by the Network Effect Formula are precise, and thus interpret the interrelations of classes extremely well. The interrelations in the estimated compatibility matrices are similar to those obtained by edge counting in Figure 2, while being more robust to noisy neighbors, namely weakly connected ones. For “Synthetic”, ULTRAPROP recovers the exact matrix that we used to generate the dataset. For “Pokec-Gender”, ULTRAPROP successfully estimates that people tend to connect to people of the opposite gender. This corresponds to the fact that people tend to have more opposite-gender interactions during their reproductive age [11]; the average ages of males and females in the dataset are 25.4 and 24.2, respectively. Although “arXiv-Year” and “Patent-Year” do not have strong NE, ULTRAPROP still gives estimated compatibility matrices that make sense in the real world, where papers and patents only cite the ones whose publication dates are relatively close to their own. We omit the results on the homophily datasets for brevity. In all cases, ULTRAPROP resulted in a near-identity compatibility matrix, as expected, supported by it giving similar results as ULTRAPROP-Hom, which uses the identity matrix as the compatibility matrix.
1517
6 CONCLUSIONS
1519
We first presented Network Effect Analysis (NEA) to identify whether a graph exhibits network effect or not, and surprisingly discovered its absence in many real-world graphs known to have heterophily. Next, we presented ULTRAPROP to solve node classification based on two insights, network-effect (NE) and neighbor-differentiation (ND), which has the following advantages:
(1) Accurate: thanks to the precise compatibility matrix estimation by NE, and to ND that weighs important neighbors.
(2) Explainable: it interprets the interrelations of classes with the estimated compatibility matrix.
(3) Scalable: it scales linearly with the input size.
(4) Principled: it provides provable guarantees (Lemmas 1, 2 and 4) and a closed-form solution (Lemma 3).
1532
Applied on real-world million-scale graph datasets with over 22M edges, ULTRAPROP only requires 12 minutes on a stock CPU machine, and outperforms recent baselines on accuracy as well as on speed (≥ 9×).
[Figure 5: ULTRAPROP is explainable. The estimated compatibility matrices are similar to the edge counting matrix (in Figure 2), while being robust to noise. Panels: (a) “Synthetic”: X-ophily with Strong NE; (b) “Pokec-Gender”: Heterophily with Strong NE; (c) “arXiv-Year”: X-ophily with Weak NE; (d) “Patent-Year”: Heterophily with Weak NE. Each panel plots the estimated compatibility matrix over Class ID × Class ID.]
1623
+ Reproducibility: Our implemented source code and prepro-
1624
+ cessed datasets will be published once the paper is accepted.
1625
+
1626
1628
+ REFERENCES
1629
+ [1] Nvidia rtx a6000 deep learning benchmarks. https://lambdalabs.com/blog/
1630
+ nvidia-rtx-a6000-benchmarks/.
1631
+ [2] S. Abu-El-Haija, B. Perozzi, A. Kapoor, N. Alipourfard, K. Lerman, H. Haru-
1632
+ tyunyan, G. Ver Steeg, and A. Galstyan. Mixhop: Higher-order graph convo-
1633
+ lutional architectures via sparsified neighborhood mixing. In ICML, pages
1634
+ 21–29, 2019.
1635
+ [3] N. Alon, I. Benjamini, E. Lubetzky, and S. Sodin. Non-backtracking random
1636
+ walks mix faster. Communications in Contemporary Mathematics, 9(04):585–
1637
+ 603, 2007.
1638
+ [4] D. S. Bernstein. Matrix Mathematics. Princeton University Press, 2009.
1639
+ [5] S. Chen, S. Niu, L. Akoglu, J. Kovacevic, and C. Faloutsos. Fast, warped
1640
+ graph embedding: Unifying framework and one-click algorithm.
1641
+ CoRR,
1642
+ abs/1702.05764, 2017.
1643
+ [6] E. Chien, J. Peng, P. Li, and O. Milenkovic. Adaptive universal generalized
1644
+ pagerank graph neural network. In ICLR, 2021.
1645
+ [7] C. Eksombatchai, P. Jindal, J. Z. Liu, Y. Liu, R. Sharma, C. Sugnet, M. Ulrich,
1646
+ and J. Leskovec. Pixie: A system for recommending 3+ billion items to 200+
1647
+ million users in real-time. In TheWebConf, pages 1775–1784, 2018.
1648
+ [8] D. Eswaran, S. Kumar, and C. Faloutsos. Higher-order label homogeneity and
1649
+ spreading in graphs. In TheWebConf, pages 2493–2499, 2020.
1650
+ [9] M. Fey and J. E. Lenssen. Fast graph representation learning with PyTorch
1651
+ Geometric. In ICLR Workshop on Representation Learning on Graphs and
1652
+ Manifolds, 2019.
1653
+ [10] W. Gatterbauer, S. Günnemann, D. Koutra, and C. Faloutsos. Linearized and
1654
+ single-pass belief propagation. PVLDB, 8(5):581–592, 2015.
1655
+ [11] A. Ghosh, D. Monsivais, K. Bhattacharya, R. I. Dunbar, and K. Kaski. Quanti-
1656
+ fying gender preferences in human social interactions using a large cellphone
1657
+ dataset. EPJ Data Science, 8(1):9, 2019.
1658
+ [12] W. L. Hamilton, R. Ying, and J. Leskovec. Inductive representation learning
1659
+ on large graphs. In NeurIPS, pages 1025–1035, 2017.
1660
+ [13] W. L. Hamilton, R. Ying, and J. Leskovec. Representation learning on graphs:
1661
+ Methods and applications. arXiv preprint arXiv:1709.05584, 2017.
1662
+ [14] W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and
1663
+ J. Leskovec. Open graph benchmark: Datasets for machine learning on graphs.
1664
+ Advances in neural information processing systems, 33:22118–22133, 2020.
1665
+ [15] D. Kim and A. Oh. How to find your friendly neighborhood: Graph attention
1666
+ design with self-supervision. In ICLR, 2020.
1667
+ [16] T. N. Kipf and M. Welling. Semi-supervised classification with graph convo-
1668
+ lutional networks. arXiv preprint arXiv:1609.02907, 2016.
1669
+ [17] J. Klicpera, A. Bojchevski, and S. Günnemann.
1670
+ Predict then propa-
1671
+ gate: Graph neural networks meet personalized pagerank. arXiv preprint
1672
+ arXiv:1810.05997, 2018.
1673
+ [18] D. Koutra, T. Ke, U. Kang, D. H. Chau, H. K. Pao, and C. Faloutsos. Unifying
1674
+ guilt-by-association approaches: Theorems and fast algorithms. In ECML/P-
1675
+ KDD (2), volume 6912 of Lecture Notes in Computer Science, pages 245–260.
1676
+ Springer, 2011.
1677
+ [19] J. Leskovec and C. Faloutsos. Sampling from large graphs. In KDD, pages
1678
+ 631–636. ACM, 2006.
1679
+ [20] J. Leskovec, J. Kleinberg, and C. Faloutsos. Graphs over time: Densification
1680
+ laws, shrinking diameters and possible explanations. In KDD, pages 177–187,
1681
+ 2005.
1682
+ [21] X. Li, R. Zhu, Y. Cheng, C. Shan, S. Luo, D. Li, and W. Qian. Finding global
1683
+ homophily in graph neural networks when meeting heterophily. arXiv preprint
1684
+ arXiv:2205.07308, 2022.
1685
+ [22] D. Lim and A. R. Benson. Expertise and dynamics within crowdsourced
1686
+ musical knowledge curation: A case study of the genius platform. arXiv
1687
+ preprint arXiv:2006.08108, 2020.
1688
+ [23] D. Lim, F. Hohne, X. Li, S. L. Huang, V. Gupta, O. Bhalerao, and S. N. Lim.
1689
+ Large scale learning on non-homophilous graphs: New benchmarks and strong
1690
+ simple methods. NeurIPS, 34, 2021.
1691
+ [24] M. Lin, H. C. Lucas Jr, and G. Shmueli. Research commentary—too big to
1692
+ fail: Large samples and the p-value problem. Information Systems Research,
1693
+ 24(4):906–917, 2013.
1694
+ [25] Y. Liu, X. Ao, F. Feng, and Q. He. Ud-gnn: Uncertainty-aware debiased
1695
+ training on semi-homophilous graphs. 2022.
1696
+ [26] Y. Ma, X. Liu, N. Shah, and J. Tang. Is homophily a necessity for graph neural
1697
+ networks? arXiv preprint arXiv:2106.06134, 2021.
1698
+ [27] J. Park, S. Yun, H. Park, J. Kang, J. Jeong, K.-M. Kim, J.-W. Ha, H. J. Kim,
1699
+ N. CLOVA, and N. A. LAB. Deformable graph transformer. arXiv preprint
1700
+ arXiv:2206.14337, 2022.
1701
+ [28] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible
1702
+ Inference. Elsevier, 2014.
1703
+ [29] J. Qiu, Y. Dong, H. Ma, J. Li, K. Wang, and J. Tang. Network embedding as
1704
+ matrix factorization: Unifying deepwalk, line, pte, and node2vec. In WSDM,
1705
+ pages 459–467, 2018.
1706
+ [30] B. Rozemberczki, C. Allen, and R. Sarkar. Multi-scale attributed node embed-
1707
+ ding, 2019.
1708
+ [31] B. Rozemberczki and R. Sarkar. Twitch gamers: a dataset for evaluating prox-
1709
+ imity preserving and structural role-based node embeddings. arXiv preprint
1710
+ arXiv:2101.03091, 2021.
1711
+ [32] R. N. Shepard. Toward a universal law of generalization for psychological
1712
+ science. Science, 237(4820):1317–1323, 1987.
1713
+ [33] L. Takac and M. Zabovsky. Data analysis in public social networks. In
1714
+ International Scientific Conference and International Workshop Present Day
1715
+ Trends of Innovations, volume 1, 2012.
1716
+ [34] A. L. Traud, P. J. Mucha, and M. A. Porter. Social structure of facebook net-
1717
+ works. Physica A: Statistical Mechanics and its Applications, 391(16):4165–
1718
+ 4180, 2012.
1719
+ [35] P. Veliˇckovi´c, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio.
1720
+ Graph attention networks. arXiv preprint arXiv:1710.10903, 2017.
1721
+ [36] H. Wang, J. Zhang, Q. Zhu, and W. Huang. Augmentation-free graph con-
1722
+ trastive learning. arXiv preprint arXiv:2204.04874, 2022.
1723
+ [37] K. Wang, Z. Shen, C. Huang, C.-H. Wu, Y. Dong, and A. Kanakia. Microsoft
1724
+ academic graph: When experts are not enough. Quantitative Science Studies,
1725
+ 1(1):396–413, 2020.
1726
+ [38] L. Wasserman, A. Ramdas, and S. Balakrishnan. Universal inference. Pro-
1727
+ ceedings of the National Academy of Sciences, 117(29):16880–16890, 2020.
1728
+ [39] F. Wu, A. Souza, T. Zhang, C. Fifty, T. Yu, and K. Weinberger. Simplifying
1729
+ graph convolutional networks. In ICML, pages 6861–6871. PMLR, 2019.
1730
+ [40] T. Xiao, Z. Chen, Z. Guo, Z. Zhuang, and S. Wang. Decoupled self-supervised
1731
+ learning for non-homophilous graphs. arXiv preprint arXiv:2206.03601, 2022.
1732
+ [41] J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra. Beyond
1733
+ homophily in graph neural networks: Current limitations and effective designs.
1734
+ Advances in Neural Information Processing Systems, 33:7793–7804, 2020.
1735
+
1736
1738
A PROOF
A.1 Proof of Lemma 1 and 2
1742
PROOF. For an L-step random walk sequence S with M trials, the sequence length |S| is LM. We define the random variable X denoting the probability that node i walks to its j-th neighbor:
    X = P(node i walks to N(i)_j) = ( Σ_{k=1..|S|} 1(N(i)_j = S_k) ) / |S| ,    (12)
where P denotes the probability and 1 denotes the indicator. With a regular random walk in a graph without self-loops, the random variable X is upper-bounded by ⌈(L−1)/2⌉ / L. We can thus apply Hoeffding's inequality:
    P( | μ̂_|S| − μ | ≥ ε ) ≤ 2 exp( −2L³Mε² / ⌈(L−1)/2⌉² ) ,    (13)
where μ̂_|S| denotes the sample mean of the given random variable and μ denotes its expectation. Letting δ = 2 exp( −2L³Mε² / ⌈(L−1)/2⌉² ), with probability 1 − δ the error ε is:
    ε = | μ̂_|S| − μ | ≤ ( ⌈(L−1)/2⌉ / L ) · √( log(2/δ) / (2LM) ) .    (14)
With the help of non-backtracking random walks [3], we can further shrink the upper bound of X to ⌈(L−1)/3⌉ / L. Now, letting δ = 2 exp( −2L³Mε² / ⌈(L−1)/3⌉² ), with probability 1 − δ the error ε improves to:
    ε = | μ̂_|S| − μ | ≤ ( ⌈(L−1)/3⌉ / L ) · √( log(2/δ) / (2LM) ) .    (15)
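As a quick numerical illustration of the bounds above (our own sketch, not part of the original proof), the following snippet evaluates Equations 14 and 15 for the walk length and trial count used in the experiments (L = 4, M = 10) and an example confidence level δ, showing the tighter constant obtained with non-backtracking walks:

import numpy as np

def error_bound(L, M, delta, denom):
    # denom = 2 for regular random walks (Eq. 14), 3 for non-backtracking walks (Eq. 15)
    return (np.ceil((L - 1) / denom) / L) * np.sqrt(np.log(2 / delta) / (2 * L * M))

L, M, delta = 4, 10, 0.05            # delta = 0.05 is an example value (assumption)
print(error_bound(L, M, delta, 2))   # regular walk: coefficient ceil(3/2)/4 = 0.5
print(error_bound(L, M, delta, 3))   # non-backtracking: coefficient ceil(3/3)/4 = 0.25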
1786
+
1787
A.2 Proof of Lemma 3
1789
PROOF. We begin by introducing two necessary notations. vec(·) denotes the vectorization operator:
    vec(X) = [X_11, · · · , X_m1, X_12, · · · , X_m2, · · · , X_mn]ᵀ ,    (16)
where X is an m × n matrix and X_ij denotes the element of X in the i-th row and the j-th column. Next, the Kronecker product of two given matrices X (m × n) and Y is the block matrix:
    X ⊗ Y = [ X_11·Y  X_12·Y  ⋯  X_1n·Y ;  X_21·Y  X_22·Y  ⋯  X_2n·Y ;  ⋮  ;  X_m1·Y  X_m2·Y  ⋯  X_mn·Y ] .    (17)
The idea of this proof is to reformulate the equation so that the final result follows from the closed-form solution of Linear Regression. We first state two well-known equations used in the proof. Given the features X and target y, the closed-form solution for the weights W of Linear Regression is:
    W = (XᵀX)⁻¹Xᵀy .    (18)
We also use the well-known property of the mixed Kronecker matrix-vector product [4]:
    vec(BVAᵀ) = (A ⊗ B)v ,    (19)
where the matrix V = vec⁻¹(v) is the result of applying the inverse of the vectorization operator to v.
To begin the derivation, we vectorize Equation 6 into:
    vec(B̂) = vec( (AÊ) Ĥ I_{c×c} ) ,    (20)
where I_{c×c} is a c × c identity matrix. The trick here, which is the key of this proof, is to multiply Ĥ by one more identity matrix. Therefore, we use Equation 19 to reformulate the equation into:
    vec(B̂) = ( I_{c×c} ⊗ (AÊ) ) vec(Ĥ) .    (21)
By letting X = I_{c×c} ⊗ (AÊ) and y = vec(B̂) in Equation 18, we can then derive the closed-form solution of the vectorized compatibility matrix as follows:
    vec(Ĥ) = (XᵀX)⁻¹Xᵀy .    (22)
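To make the closed form tangible, the following self-contained NumPy sketch (our own illustration, with small random placeholder matrices) verifies that solving Equation 22 recovers the compatibility matrix used to generate B̂:

import numpy as np

n, c = 6, 3                        # toy sizes (placeholders)
rng = np.random.default_rng(0)
AE = rng.random((n, c))            # stands in for A @ E_hat (aggregated neighbor beliefs)
H_true = rng.random((c, c))        # "ground-truth" compatibility matrix
B_hat = AE @ H_true                # Equation 6 (noise-free toy case)

X = np.kron(np.eye(c), AE)         # Equation 21: I_c kron (A E_hat)
y = B_hat.flatten(order="F")       # column-major vec(B_hat)

vec_H = np.linalg.solve(X.T @ X, X.T @ y)   # Equation 22 (normal equations)
H_est = vec_H.reshape(c, c, order="F")      # undo the vectorization
print(np.allclose(H_est, H_true))           # True: the estimate recovers H exactly here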
1842
+
1843
A.3 Proof of Lemma 4
PROOF. ULTRAPROP exactly converges if and only if ρ(A*)·ρ(Ĥ*) < 1. However, the compatibility matrix H* is row-normalized, so the largest eigenvalue ρ(H*) = 1 is a constant, and it is less than one after centering. Thus, the scaling factor f multiplied into the propagation (in Algorithm 2, line 5) should be in the range (0, 1/ρ(A*)) to meet the criterion of exact convergence.
1853
+
1854
A.4 Proof of Lemma 5
PROOF. In the neighbor-differentiation phase, for each random walk, each node visits at most L·M unique nodes, so the maximum number of non-zero elements in W is either n·L·M, if we have not walked through all the edges, or m otherwise. The SVD of W then takes O(d · max(m, n·L·M)). In the network-effect phase, the time complexity of Fisher's exact test is O(max(C)), where max(C) is a constant bounded by 500 in our algorithm. Therefore, network-effect analysis takes O(|e′| · c²). For the regression, since the c sets of parameters are independent, we can separate the problem into c tasks, each with c features and |p| samples. Thus the complexity can be reduced to O(|p| · c³), and the efficient leave-one-out cross-validation only needs to be done once. In the propagation phase, running t iterations of sparse matrix multiplication takes at most O(m + n) each. Thus, the time complexity is O(d · max(m, n·L·M) + |p| · c³ + m). In practice, c, |p| and t are usually small constants that are negligible, and m is usually much larger than them. Therefore, keeping only the dominating terms, the time complexity is approximately O(m).
W contains at most max(m, n·L·M) non-zero elements. The Kronecker product contains at most n·c² non-zero elements. B̂ and Ĥ contain at most n·c and c² non-zero elements, respectively. Thus, the space complexity is O(max(m, n·L·M) + n·c²).
1880
+
1881
B REPRODUCIBILITY
B.1 Datasets
1885
+ • “Pokec-Gender” [33] is an online social network in Slo-
1886
+ vakia. [23] re-labels the nodes by users’ genders instead.
1887
+ • “arXiv-Year” [14] is a citation network between all Com-
1888
+ puter Science arXiv papers. [23] re-labels the nodes by the
1889
+ posted years.
1890
+ • “Patent-Year” [20] is the patent citation network from
1891
+ 1980 to 1985. [23] re-labels the nodes by the application
1892
+ year, bucketized into five consecutive 3-year ranges.
1893
+ • “Synthetic” is a graph enlarged by the one in Figure 1. It
1894
+ contains both heterophily and homophily network-effect.
1895
+
1896
1898
Table 6: Hyperparameters for Deep Graph Models
Method      Hyperparameters
GCN         lr=0.01, wd=0.0005, hidden=16, dropout=0.5
APPNP       lr=0.002, wd=0.0005, hidden=64, dropout=0.5, K=10, alpha=0.1
MIXHOP      lr=0.01, wd=0.0005, cutoff=0.1, layers1=[200, 200, 200], layers2=[200, 200, 200]
GPR-GNN     lr=0.002, wd=0.0005, hidden=64, dropout=0.5, K=10, alpha=0.1
1909
+ Noisy edges are randomly injected in the background, and
1910
+ the dense blocks are constructed by randomly creating
1911
+ higher-order structures.
1912
+ • “Facebook” [30] is a page-to-page network of verified
1913
+ Facebook sites. Nodes are labeled by the categories such
1914
+ as politicians and companies.
1915
+ • “GitHub” [30] is a social network of developers in June
1916
+ 2019. Nodes are labeled as web or a machine learning
1917
+ developer.
1918
+ • “arXiv-Category” [37] is the same dataset as the arXiv-
1919
+ Year dataset. Nodes are labeled by the primary categories.
1920
+ • “Pokec-Locality” [33] is the same dataset as the Pokec-
1921
Gender dataset. Nodes are labeled by the users' localities.
1922
B.2 Baselines
1924
+ • GCN2 [16] is a well-known deep graph model, learning
1925
+ and aggregating the weights of two-hop neighbors.
1926
+ • APPNP4 [17] utilizes personalized PageRank to leverage
1927
+ the local information and a larger neighborhood.
1928
+ • MIXHOP3 [2] mixes powers of the adjacency matrix to
1929
+ incorporate more than 1-hop neighbors in each layer.
1930
+ • GPR-GNN4 [6] allows the learnable weights to be nega-
1931
+ tive during propagation with Generalized PageRank.
1932
• HOLS5 [8] is a label propagation method with attention, which increases the importance of a neighbor when the two nodes appear in the same motif.
1935
B.3 Hyperparameters
1937
For ULTRAPROP and ULTRAPROP-Hom, we use random walks of length 4 with 10 trials, except for the GitHub, arXiv-Category and Pokec-Locality datasets, where we use 30 trials. The decomposition rank is set to 128, which is empirically shown to be enough for the embedding tasks. The weights of HOLS for different motifs are set to be equal. For the deep graph models, under the setting that the given labels are very few, it is impossible to separate out a validation set. We therefore train them for a fixed number of epochs (i.e., 200 epochs), which is usually sufficient for them to converge. All the fully connected layers are replaced by their sparse versions in order to fit into memory. Both adjacency matrices and features are normalized and turned into sparse matrices if needed. For the other hyperparameters, we use the default settings given by the authors, and give the details in Table 6.
1951
+ 2https://github.com/tkipf/pygcn
1952
+ 3https://github.com/benedekrozemberczki/MixHop-and-N-GCN
1953
+ 4https://github.com/jianhao2016/GPRGNN
1954
+ 5https://github.com/dhivyaeswaran/hols
1955
B.4 Scalability
1957
We select AWS machines with specs comparable to the ones we use for the experiments. For the CPU machine, we select t3.small with a 3.3GHz CPU and 2GB RAM, which is faster than ours and costs $0.023 per hour. For the GPU machine, we select p3.2xlarge with a V100 GPU, which costs $3.06 per hour. According to [1], the V100 runs PyTorch at 0.89× the speed of the RTX A6000 GPU we use. The running times of GCN on “Pokec-Gender” and “Pokec-Locality” are 673 and 730 seconds, respectively. Using the provided information, the results in Table 5 can be computed.
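To make the recipe concrete, here is a small sketch of the cost arithmetic described above (our own illustration, not the authors' script); the ULTRAPROP run time is a placeholder to be taken from the reported run-time results:

CPU_PRICE_PER_HR = 0.023   # AWS t3.small
GPU_PRICE_PER_HR = 3.06    # AWS p3.2xlarge (V100)
A6000_TO_V100 = 0.89       # V100 runs PyTorch at ~0.89x the speed of the RTX A6000 [1]

def dollar_cost(seconds, price_per_hour):
    return seconds / 3600.0 * price_per_hour

# GCN on "Pokec-Gender": 673 s measured on the RTX A6000, rescaled to V100 speed.
gcn_cost = dollar_cost(673 / A6000_TO_V100, GPU_PRICE_PER_HR)

# ULTRAPROP: plug in its measured CPU run time (placeholder value, not a reported number).
ultraprop_seconds = 60.0
ultraprop_cost = dollar_cost(ultraprop_seconds, CPU_PRICE_PER_HR)

print(f"GCN (GPU): ${gcn_cost:.3f}   UltraProp (CPU): ${ultraprop_cost:.4f}")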
1966
+
2NAyT4oBgHgl3EQfbve8/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
3dE2T4oBgHgl3EQf6Ago/content/tmp_files/2301.04195v1.pdf.txt ADDED
@@ -0,0 +1,1198 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ORBIT: A Unified Simulation Framework for
2
+ Interactive Robot Learning Environments
3
+ Mayank Mittal1,2, Calvin Yu3, Qinxi Yu3, Jingzhou Liu1,3, Nikita Rudin1,2, David Hoeller1,2,
4
+ Jia Lin Yuan3, Pooria Poorsarvi Tehrani3, Ritvik Singh1,3, Yunrong Guo1, Hammad Mazhar1,
5
+ Ajay Mandlekar1, Buck Babich1, Gavriel State1, Marco Hutter2, Animesh Garg1,3
6
+ Fig. 1: ORBIT framework provides a large set of robots, sensors, rigid and deformable objects, motion generators, and teleoperation
7
+ interfaces. Through these, we aim to simplify the process of defining new and complex environments, thereby providing a common
8
+ platform for algorithmic research in robotics and robot learning.
9
+ Abstract— We present ORBIT, a unified and modular frame-
10
+ work for robot learning powered by NVIDIA Isaac Sim. It
11
+ offers a modular design to easily and efficiently create robotic
12
+ environments with photo-realistic scenes and fast and accurate
13
+ rigid and deformable body simulation. With ORBIT, we provide
14
+ a suite of benchmark tasks of varying difficulty– from single-
15
+ stage cabinet opening and cloth folding to multi-stage tasks
16
+ such as room reorganization. To support working with diverse
17
+ observations and action spaces, we include fixed-arm and
18
+ mobile manipulators with different physically-based sensors
19
+ and motion generators. ORBIT allows training reinforcement
20
+ learning policies and collecting large demonstration datasets
21
+ from hand-crafted or expert solutions in a matter of minutes
22
+ by leveraging GPU-based parallelization. In summary, we offer
23
+ an open-sourced framework that readily comes with 16 robotic
24
+ platforms, 4 sensor modalities, 10 motion generators, more than
25
+ 20 benchmark tasks, and wrappers to 4 learning libraries. With
26
+ this framework, we aim to support various research areas,
27
+ including representation learning, reinforcement learning, imi-
28
+ tation learning, and task and motion planning. We hope it helps
29
+ establish interdisciplinary collaborations in these communities,
30
+ and its modularity makes it easily extensible for more tasks
31
+ and applications in the future. For videos, documentation, and
32
+ code: https://isaac-orbit.github.io/.
33
+ I. INTRODUCTION
34
+ The recent surge in machine learning has led to a paradigm
35
+ shift in robotics research. Methods such as reinforcement
36
+ learning (RL) have shown incredible success in challenging
37
+ problems such as quadrupedal locomotion [1], [2], [3] and
38
+ in-hand manipulation [4], [5]. However, learning techniques
39
+ 1 NVIDIA, 2 ETH Z¨urich, 3 University of Toronto, Vector Institute.
40
41
+ require a wealth of training data, which is often challenging
42
+ and expensive to obtain at scale on a physical system. This
43
+ makes simulators an appealing alternative for developing
44
+ systems safely, efficiently, and cost-effectively.
45
+ An ideal robot simulation framework needs to provide fast
46
+ and accurate physics, high-fidelity sensor simulation, diverse
47
+ asset handling, and easy-to-use interfaces for integrating new
48
+ tasks and environments. However, existing frameworks often
49
+ make a trade-off between these aspects depending on their
50
+ target application. For instance, simulators designed mainly
51
+ for vision, such as Habitat [18] or ManipulaTHOR [16], offer
52
+ decent rendering but simplify low-level interaction intricacies
53
+ such as grasping. On the other hand, physics simulators for
54
+ robotics, such as IsaacGym [17] or Sapien [15], provide fast
55
+ and reasonably accurate rigid-body contact dynamics but do
56
+ not include physically-based rendering (PBR), deformable
57
+ objects simulation or ROS [19] support out-of-the-box.
58
+ In this work, we present ORBIT an open-source frame-
59
+ work, built on NVIDIA Isaac Sim [20], for intuitive designing
60
+ of environments and tasks for robot learning with photo-
61
+ realistic scenes and state-of-the-art physics simulation. Its
62
+ modular design supports various robotic applications, such as
63
+ reinforcement learning (RL), learning from demonstrations
64
+ (LfD), and motion planning. Through careful design of inter-
65
+ faces, we aim to support learning for a diverse range of robots
66
+ and tasks, allowing operation at different levels of observa-
67
+ tion (proprioception, images, pointclouds) and action spaces
68
+ (joint space, task space). To ensure high-simulation through-
69
+ put, we leverage hardware-accelerated robot simulation, and
70
+ include GPU implementations for motion generation and
71
72
+
73
TABLE I: Comparison between different simulation frameworks and ORBIT. The check (✓) and cross (X) denote presence or absence of the feature. In Robotic Platforms column, M stands for manipulator. In Scene Authoring column, G stands for game-based designing, M for mesh-scan scenes, and P for procedural-generation.
Name                  Physics Engine       Renderer
MetaWorld [6]         MuJoCo               OpenGL
RoboSuite [7]         MuJoCo               OpenGL, OptiX
DoorGym [8]           MuJoCo               Unity
DEDO [9]              Bullet               OpenGL
RLBench [10]          Bullet/ODE           OpenGL
iGibson [11]          Bullet               MeshRenderer
Habitat 2.0 [12]      Bullet               Magnum
SoftGym [13]          FleX                 OpenGL
ThreeDWorld [14]      PhysX 4/FleX/Obi     Unity3D
SAPIEN [15]           PhysX 4              OptiX, Kuafu
ManipulaTHOR [16]     PhysX 4              Unity
IsaacGymEnvs [17]     PhysX 5              Vulkan
ORBIT (ours)          PhysX 5              Omniverse RTX
[The remaining columns of Table I — CPU/GPU vectorization; rigid, cloth, soft, and fluid dynamics; PBR ray tracing, RGBD, semantic, LiDAR, and contact sensors; fixed-arm, mobile-manipulator, and legged platforms; and scene authoring — compare the per-framework check marks; per the surrounding text, ORBIT covers the full set, with scene authoring P, M, G.]
334
+ * ThreeDWorld supports simulation of rigid bodies and deformable bodies based on whether PhysX 4 or FleX/Obi is enabled respectively. Thus, it is limited in simulating interactions between rigid and deformable bodies.
335
+ observations processing. This allows training and evaluation
336
+ of a complete robotic system at scale, without abstracting
337
+ out low-level details in robot-environment interactions.
338
+ The release of ORBIT v1.0 features:
339
+ 1) models for three quadrupeds, seven robotic arms, four
340
+ grippers, two hands, and four mobile manipulators;
341
+ 2) a selection of CPU and GPU-based motion generators
342
+ implementations for each robot category, including pre-
343
+ trained locomotion policies, inverse kinematics, opera-
344
+ tional space control, and model predictive control;
345
+ 3) utilities for collecting human demonstrations using pe-
346
+ ripherals (keyboard, gamepad or 3D mouse), replaying
347
+ demonstration datasets, and utilizing them for learning;
348
+ 4) a suite of standardized tasks of varying complexity for
349
+ benchmark purposes. These include eleven rigid object
350
+ manipulation, thirteen deformable object manipulation,
351
+ and two locomotion environments. Within each task, we
352
+ allow switching robots, objects, and sensors easily.
353
In the remainder of the paper, we describe the underlying simulation choices (Sec. II), the framework's design decisions and abstractions (Sec. III), and its highlighted features (Sec. IV). We demonstrate the framework's applicability to different workflows (Sec. V) – particularly RL using various libraries, LfD with robomimic [21], motion planning [22], [23], and connection to physical robots for deployment.
360
+ II. RELATED WORK
361
+ Recent years have seen several simulation frameworks,
362
+ each specializing for particular robotic applications. In this
363
+ section, we highlight the design choices crucial for building
364
+ a unified simulation platform and how ORBIT compares to
365
+ other frameworks (also summarized in Table I).
366
+ a) Physics Engine: Increasing the complexity and re-
367
+ alism of physically simulated environments is essential for
368
+ advancing robotics research. This includes improving the
369
+ contact dynamics, having better collision handling for non-
370
+ convex geometries (such as threads), stable solvers for de-
371
+ formable bodies, and high simulation throughput.
372
+ Prior frameworks [7], [10] using MuJoCo [24] or Bul-
373
+ let [25] focus mainly on rigid object manipulation tasks.
374
+ Since their underlying physics engines are CPU-based, they
375
+ need CPU clusters to achieve massive parallelization [17]. On
376
+ the other hand, frameworks for deformable bodies [9], [13]
377
+ mainly employ Bullet [25] or FleX [26], which use particle-
378
+ based dynamics for soft bodies and cloth simulation. How-
379
+ ever, limited tooling exists in these frameworks compared
380
+ to those for rigid object tasks. ORBIT aims to bridge this
381
+ gap by providing a robotics framework that supports rigid
382
+ and deformable body simulation via PhysX SDK 5 [27]. In
383
+ contrast to other engines, it features GPU-based hardware
384
+ acceleration for high throughput, signed-distance field (SDF)
385
+ collision checking [28], and more stable solvers based on
386
+ finite elements for deformable body simulation.
387
+ b) Sensor simulation: Various existing frameworks [7],
388
+ [10], [17] use classic rasterization that limits the photo-
389
+ realism in the generated images. Recent techniques [29], [30]
390
+ simulate the interaction of rays with object’s textures in a
391
physically correct manner. These methods help capture fine visual properties such as transparency and reflection, and are thereby promising for bridging the sim-to-real visual domain gap.
394
+ While recent frameworks [15], [12], [16] include physically-
395
+ based renderers, they mainly support camera-based sen-
396
+ sors (RGB, depth). This is insufficient for certain mobile
397
+ robot applications that need range sensors, such as LiDARs.
398
+ Leveraging the ray-tracing technology in NVIDIA Isaac Sim,
399
+ ORBIT supports all these modalities and includes APIs to
400
+ obtain additional information such as semantic annotations.
401
+ c) Scene designing and asset handling: Frameworks
402
+ support scene creation procedurally [6], [7], [15], via mesh
403
+ scans [11], [12] or through game-engine style interfaces [31],
404
+ [14]. While mesh scans simplify generating large amounts of
405
+ scenes, they often suffer from geometric artifacts and lighting
406
+ problems. On the other hand, procedural generation allows
407
+ leveraging object datasets for diverse scenes. To not restrict
408
+ to either possibility, we facilitate scene designing by using
409
+ graphical interfaces and also providing tools for importing
410
+ different datasets [32], [33], [34].
411
+ Simulators are typically general-purpose and expose ac-
412
+ cess to various internal properties, often alienating non-
413
+ expert users due to a steep learning curve. ORBIT inherits
414
+ many utilities from the NVIDIA Omniverse and Isaac Sim
415
+ platforms, such as high-quality rendering, multi-format asset
416
+ import, ROS support, and domain randomization (DR) tools.
417
+ However, its contributions lie in the specialization of inter-
418
+ faces for robot learning that simplify environment designing
419
+ and facilitate transfer to a real robot. For instance, we provide
420
+ unified abstractions for different robot and object types, allow
421
+
422
+ Fig. 3: ORBIT’s abstractions comprise World, analogous to the real world, and Agent, the computation graph behind the embodied system.
423
+ The nodes in the agent’s graph can perform observation-based or action-based processing. Through a graph-cut over this computation
424
+ graph and specifying an extrinsic goal, it is feasible to design different tasks within the same World instance.
425
+ injecting actuator models into the simulation to assist in
426
+ sim-to-real transfer, and support various peripherals for data
427
+ collection. Overall, it provides a highly featured state-of-the-
428
+ art simulation framework (Table I) while preserving usability
429
+ through intuitive abstractions.
430
+ III. ORBIT: ABSTRACTIONS AND INTERFACES DESIGN
431
+ At a high level, the framework design comprises a world
432
+ and an agent, similar to the real world and the software
433
+ stack running on the robot. The agent receives raw observa-
434
+ tions from the world and computes the actions to apply on the
435
+ embodiment (robot). Typically in learning, it is assumed
436
+ that all the perception and motion generation occurs at the
437
+ same frequency. However, in the real world, that is rarely the
438
+ case: (1) different sensors tick at differing frequencies, (2)
439
+ depending on the control architecture, actions are applied at
440
+ different time-scales [35], and (3) various unmodeled sources
441
+ cause delays and noises in the real system. In ORBIT, we
442
+ carefully design the interfaces and abstractions to support
443
+ (1) and (2), and for (3), we include implementation of
444
+ different actuator and noise models as part of the robot
445
+ and sensors respectively.
446
+ a) World: Analogous to the real world, we define a
447
+ world where robots, sensors, and objects (static
448
+ or dynamic) exist on the same stage. The world can be de-
449
+ signed procedurally (script-based), via scanned meshes [33],
450
+ [32], through the game-based GUI of Isaac Sim, or a
451
+ combination of them, such as importing scanned meshes
452
+ and adding objects to it. This flexibility reaps the benefits
453
+ of 3D reconstructed meshes, which capture various archi-
454
+ tectural layouts, with game-based designing, that simplifies
455
+ the experience of creating and verifying the scene physics
456
+ properties by playing the simulation.
457
+ Robots are a crucial component of the world since
458
+ they serve as the embodiment for interaction. They consist
459
+ of an articulated system, sensors, and low-level controllers.
460
+ The robot class loads its model from USD files. It may
461
464
+ Fig. 4: Illustration of actuator groups for a legged mobile manipu-
465
+ lator. This allows decomposing a complex system into sub-groups
466
+ and defining specific transmission models for each of them flexibly.
467
+ have onboard sensors specified through the same USD file or
468
+ configuration files. The low-level controller processes input
469
+ actions through the configured actuator models and applies
470
+ desired joint position, velocity, or torque commands to the
471
+ simulator (as shown in Fig. 4). The actuator dynamics can be
472
+ modeled using first-principle from physics or be learned as
473
+ neural networks. This allows injection of real world actuator
474
+ characteristics into simulation thereby facilitating sim-to-real
475
+ transfer of control policies [36].
476
+ Sensors may exist both on the articulation (as part of the
477
+ robot) or externally (such as, third-person cameras). ORBIT
478
+ interface unifies different physics-based (range, force, and
479
+ contact sensor) and rendering-based (RGB, depth, normals)
480
+ sensors under a common interface. To simulate asynchronous
481
+ sensing and actuation, each sensor has an internal timer that
482
+ governs its operating frequency. The sensor only reads the
483
+ simulator buffers at the configured frequency. Between the
484
+ timesteps, the sensor returns the previously obtained values.
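To picture the throttling behavior just described, here is a small self-contained sketch (our own illustration, not ORBIT's actual sensor classes):

class ThrottledSensor:
    """Reads the simulation buffer only at its own frequency; between
    sensor ticks it returns the last cached measurement (illustrative sketch)."""

    def __init__(self, read_fn, update_hz, sim_dt):
        self.read_fn = read_fn              # callable that reads the simulator buffer
        self.update_period = 1.0 / update_hz
        self.sim_dt = sim_dt                # simulation step size in seconds
        self.timer = 0.0
        self.cached = None

    def update(self):
        self.timer += self.sim_dt
        if self.cached is None or self.timer >= self.update_period:
            self.cached = self.read_fn()    # fresh read at the configured frequency
            self.timer = 0.0
        return self.cached                  # stale value between sensor ticks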
485
+ Objects are passive entities in the world. While several
486
+ objects may exist in the scene, the user can define objects
487
+ of interest for a specified task and retrieve data/properties
488
+ only for them. Object properties mainly comprise visual and
489
+ collision meshes, textures, and physics materials. For any
490
+ given object, we support randomization of its textures and
491
+ physics properties, such as friction and joint parameters.
492
+
493
572
+ Fig. 5: Overview of features included in ORBIT. We provide models of different sensors, robotic platforms, objects from different
573
+ datasets, motion generators and teleoperation devices. Using RTX-accelerated ray-tracing, we can obtain high-fidelity images in real-time
574
+ for different modalities such as RGB, depth, surface normal, instance and semantic segmentation (pixel-wise and bounding boxes).
575
+ b) Agent: An agent refers to the decision-making
576
+ process (“intelligence”) guiding the embodied system. While
577
+ roboticists have embraced the modularity of ROS [19], most
578
+ robot learning frameworks often focus only on the environ-
579
+ ment definition. This practice requires code replication, &
580
+ adds friction to switching between different implementations.
581
+ Keeping modularity at its core, an agent in ORBIT com-
582
+ prises various nodes that formulate a computation graph
583
+ exchanging information between them. Broadly, we consider
584
+ nodes are of two types: 1) perception-based i.e., they process
585
+ inputs into another representation (such as RGB-D image to
586
+ point-cloud/TSDF), or 2) action-based i.e., they process in-
587
+ puts into action commands (such as task-level commands to
588
+ joint commands). Currently, the flow of information between
589
+ nodes happens synchronously via Python, which avoids the
590
+ data exchange overhead of service-client protocols.
591
+ c) Learning task and agent: Paradigms such as RL
592
+ require specification of a task, a world and may include
593
+ some computation nodes of the agent. The task logic
594
+ helps specify the goal for the agent, compute metrics (re-
595
+ wards/costs) to evaluate the agent’s performance, and manage
596
+ the episodic resets. With this component as a separate mod-
597
+ ule, it becomes feasible to use the same world definition
598
+ for different tasks, similar to learning in the real world,
599
+ where tasks are specified through extrinsic reward signals.
600
+ The task definition may also contain different nodes of the
601
+ agent. An intuitive way to formalize this is by considering
602
+ that learning for a particular node happens through a graph
603
+ cut on the agent’s computation graph.
604
+ To further concretize the design motivation, consider the
605
+ example of learning over task space instead of low-level joint
606
+ actions for lifting a cube [35]. In this case, the task-space
607
+ controller, such as inverse kinematics (IK), would typically
608
+ run at 50Hz, while the joint controller requires commands
609
+ at 1000 Hz. Although the task-space controller is a part of
610
+ the agent’s and not the world’s computation, it is possible
611
+ to encapsulate that into the task design. This functionality
612
+ easily allows switching between motion generators, such as
613
+ IK, operational-space control (OSC), or reactive planners.
614
+ IV. ORBIT: FEATURES
615
+ While various robotic benchmarks have been proposed [9],
616
+ [6], [10], the right choice of necessary and sufficient tasks to
617
+ demonstrate “intelligent” behaviors remains an open ques-
618
+ tion. Instead of being prescriptive about tasks, we provide
619
+ ORBIT as a platform to easily design new tasks. To facilitate
620
+ the same, we include a diverse set of supported robots,
621
+ peripheral devices, and motion generators and a large set
622
+ of tasks for rigid and soft object manipulation for essential
623
+ skills such as folding cloth, opening the dishwasher, and
624
+ screwing a nut into a bolt. Each task showcases aspects of
625
+ physics and renderer that we believe will facilitate answering
626
+ crucial research questions, such as building representations
627
+ for deformable object manipulation and learning skills that
628
+ generalize to different objects and robots.
629
+ a) Robots: We support 4 mobile platforms (one om-
630
+ nidirectional drive base and three quadrupeds), 7 robotic
631
+ arms (two 6-DoF and five 7-DoF), and 6 end-effectors (four
632
+ parallel-jaw grippers and two robotic hands). We provide
633
+ tools to compose different combinations of these articulations
634
+ into a complex robotic system such as a legged mobile
635
+ manipulator. This provides a large set of robot platforms,
636
+ each of which can be switched in the World.
637
+ b) I/O Devices: Devices define the interface to periph-
638
+ eral controllers that teleoperate the robot in real-time. The
639
+ interface reads the input commands from an I/O device and
640
+ parses them into control commands for subsequent nodes.
641
+ This helps not only in collecting demonstrations [21] but also
642
+ in debugging the task designs. Currently, we include support
643
+ for Keyboard, Gamepad (Xbox controller), and Spacemouse
644
+ from 3Dconnexion.
645
+ c) Motion Generators: Motion generators transform
646
+ high-level actions into lower-level commands by treating
647
+ input actions as reference tracking signals. For instance,
648
+ inverse kinematics (IK) [37] interprets commands as the
649
+ desired end-effector poses and computes the desired joint
650
+ positions. Employing these controllers, particularly in task
651
+ space, has shown to help sim-to-real transferability of robot
652
+ manipulation policies [7], [35].
653
+ With ORBIT, we include GPU-based implementations for:
654
+ differential IK [37], operational-space control [38] and joint-
655
+ level control. Additionally, we provide CPU implementa-
656
+ tion of state-of-the-art model-based planners such as RMP-
657
+ Flow [22] for fixed-arm manipulators and OCS2 [23] for
658
+ whole-body control of mobile manipulators. We also provide
659
+ pre-trained policies for legged locomotion [39] to facilitate
660
+ solving navigation tasks using base velocity commands.
661
+
662
704
Fig. 6: Demonstration of the designed tasks using hand-crafted state machines and task-space controllers. Leveraging recent advances
705
+ in physics engines, we support high-fidelity simulation of rigid and deformable objects. We include environments that allow switching
706
+ between robots, objects, observations, and action spaces through configuration files (Task videos).
707
+ d) Rigid-body Environments: For rigid-body environ-
708
+ ments, it is vital to have accurate contact physics, fast
709
+ collision checking, and articulate joints simulation. While
710
+ some of these tasks exist in prior works [6], [10], [28],
711
+ [39], we enhance them with our framework’s interfaces and
712
+ provide more variability using DR tools. We also extend ma-
713
+ nipulation tasks for fixed-arm robots to mobile manipulators.
714
For brevity, the environments are as follows:
715
+ 1) Reach - Track desired pose of the end-effector.
716
+ 2) Lift - Take an object to a desired position.
717
+ 3) Beat the Buzz - Displace a key around a pole
718
+ without touching the pole.
719
+ 4) Nut-Bolt - Tighten a nut on a given bolt.
720
+ 5) Cabinet - Open or close a cabinet (articulated object).
721
+ 6) Pyramid Stack - Stack blocks into pyramids.
722
+ 7) Hockey [10] - Shoot a puck into the net using a stick.
723
+ 8) Peg In Hole - Insert blocks into their holes.
724
+ 9) Jenga [10] - Remove and stack blocks into a tower.
725
+ 10) In-Hand Repose - Using dexterous robotic hands.
726
+ 11) Velocity Locomotion - Track a desired velocity
727
+ command via a legged robot on various terrains.
728
+ e) Deformable-body Environments:
729
+ Deformable ob-
730
+ jects have a high dimensional state and complex dynamics
731
+ which are difficult to capture succinctly for robot learning.
732
+ With ORBIT, we provide seventeen deformable objects assets
733
+ (such as toys and garments) with valid physics configurations
734
+ and methods to generate new assets (such as rectangular
735
cloth) procedurally. A concise list of the included environments is as follows:
737
+ 1) Cloth Lifting - Lift a cloth to a target position.
738
+ 2) Cloth Folding - Fold a cloth into a desired state.
739
+ 3) Cloth Spreading - Spread a cloth on a table.
740
+ 4) Cloth Dropping - Drop a cloth into a container.
741
+ 5) Flag Hoisting - Hoist a flag standing on a table.
742
+ 6) Soft Lifting - Lift a soft object to a target position.
743
+ 7) Soft Placing - Place a soft object on a shelf.
744
+ 8) Soft Stacking - Stack soft objects on each other.
745
+ 9) Soft Dropping - Drop soft objects into a container.
746
+ 10) Tower of Hanoi - Stack toruses around a pole.
747
+ 11) Rope Reshaping - Reshape a rope on a table.
748
+ 12) Fluid Pouring - Pour fluid into another container.
749
+ 13) Fluid Transport - Move a filled container without
750
+ causing any spillages.
751
It should be noted that the environments (1), (2), and (3) share the same World definition. They only differ in their task logic module, i.e., the associated reward, which is defined through configuration managers. This modularity allows code reuse and makes it easier to define a large set of tasks within the same World.
757
+ V. EXEMPLAR WORKFLOWS WITH ORBIT
758
ORBIT is a unified simulation infrastructure that provides both pre-built environments and easy-to-use interfaces that enable extensibility and customization. Owing to high-quality physics, sensor simulation, and rendering, ORBIT is useful for multiple robotics challenges in both perception and decision-making. We outline a subset of such use cases through exemplar workflows.
765
+ A. GPU-based Reinforcement Learning
766
+ We provide wrappers to different RL frameworks (rl-
767
+ games [40], RSL-rl [39], and stable-baselines-3 [41]). This
768
+ allows users to test their environments on a larger set of RL
769
+ algorithms and facilitate algorithmic developments in RL.
770
In Fig. 7, we show the training of Franka-Reach with PPO [42] using the different frameworks. Although we ensure the same parameter settings for PPO across the frameworks, we notice a difference in their learning performance and training time. Since RSL-rl and rl-games are optimized for GPU, we observe a training speed of 50,000-75,000 frames per second (FPS) with 2048 environments on an NVIDIA RTX3090. With stable-baselines3, we obtain 6,000-18,000 FPS.
778
+ We also demonstrate training results for different action
779
+ spaces in the Franka-Cabinet-Opening task, and var-
780
+ ious network architectures and domain randomizations (DR)
781
+ in the ShadowHand-Reposing task. In our testing, we
782
+ observed that simulation throughput for these environments
783
is on par with the ones in IsaacGymEnvs [17].
784
+ B. Teleoperation and Imitation Learning
785
+ Many manipulation tasks are computationally expensive
786
+ or beyond the reach of current RL algorithms. In these
787
+
788
[Figure 7 plot panels: (left) PPO on Franka-Reach with Stable-Baselines3, RL Games, and RSL RL; (middle) RSL-RL PPO on Franka-Cabinet-Opening with joint-position vs. joint-velocity actions; (right) RL-Games PPO on ShadowHand-Repose with full-state feed-forward, asymmetric actor-critic FF (with and without DR), and asymmetric AC-LSTM (with and without DR) variants. Axes: average return / consecutive successes vs. environment steps.]
836
+ Fig. 7: Franka-Reach is trained with joint position action space using PPO from Stable Baseline3, RL Games, and RSL RL.
837
+ Franka-Cabinet-Opening is trained with PPO using different controllers. ShadowHand-Repose for in-hand manipulation of
838
+ a cube is trained using variants of PPO with different randomizations, observations, and network types. We evaluate over five seeds and
839
+ plot the mean and one standard deviation of the average reward.
840
Fig. 8: Interactive grasp and motion planning demonstration using ORBIT. The World comprises objects for table-top manipulation.
841
+ The user can select an object from the GUI to grasp. This triggers an image-based grasp generator and allows previewing of the generated
842
+ grasps and the robot motion sequence. The user can then choose the grasp and execute the motion on the robot.
843
TABLE II: Evaluation of policies obtained from behavior cloning on the Franka-Block-Lift environment in the same setting (No Change), with changed goal states (G), changed initial states (I), and both changed (Both). We report the success rate and average trajectory length over 100 trials.
Algorithm   Eval. Setup   Average Traj. Len   Succ. Rate
BC          No Change     234                 1.00
BC          G             307                 0.89
BC          I             321                 0.47
BC          Both          324                 0.43
BC-RNN      No Change     249                 1.00
BC-RNN      G             251                 1.00
BC-RNN      I             286                 0.88
BC-RNN      Both          293                 0.87
878
scenarios, bootstrapping from user demonstrations provides a viable path to skill learning. ORBIT provides a data collection interface that is useful for interacting with the provided environments using I/O devices and collecting data similar to roboturk [43]. We also provide an interface to robomimic [21] for training imitation learning models.
As an example, we show LfD for the Franka-Block-Lift task. For each of the four settings of initial and desired object positions (fixed or random start and desired positions), we collect 2000 trajectories. Using these demonstrations, we train policies using Behavior Cloning (BC) and BC with an RNN policy (BC-RNN). We show the performance at test time on 100 trials in Table II.
905
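+ As a rough illustration of plain behavior cloning on such demonstrations (a generic PyTorch sketch under assumed observation and action dimensions, not the actual robomimic interface used here):
+ import torch
+ import torch.nn as nn
+
+ class BCPolicy(nn.Module):
+     """Simple MLP policy trained to imitate demonstrated actions."""
+     def __init__(self, obs_dim, act_dim, hidden=256):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.Linear(obs_dim, hidden), nn.ReLU(),
+             nn.Linear(hidden, hidden), nn.ReLU(),
+             nn.Linear(hidden, act_dim),
+         )
+     def forward(self, obs):
+         return self.net(obs)
+
+ def train_bc(policy, demo_loader, epochs=50, lr=1e-3):
+     opt = torch.optim.Adam(policy.parameters(), lr=lr)
+     for _ in range(epochs):
+         for obs, act in demo_loader:        # batches of (observation, expert action)
+             loss = nn.functional.mse_loss(policy(obs), act)
+             opt.zero_grad()
+             loss.backward()
+             opt.step()
+     return policy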
+ C. Motion planning
906
+ Motion planning is one of the well-studied domains
907
+ in robotics. The traditional Sense-Model-Plan-Act (SMPA)
908
+ methodology decomposes the complex problem of reasoning
909
+ and control into possible sub-components. ORBIT supports
910
+ doing this both procedurally and interactively via the GUI.
911
+ a) Hand-crafted policies: We create a state machine
912
+ for a given task to perform sequential planning as a separate
913
+ node in the agent. It provides the goal states for reaching a
914
+ target object, closing the gripper, interacting with the object,
915
+ and maneuvering to the next target position. We demonstrate
916
+ this paradigm for several tasks in Fig. 6. These hand-crafted
917
+ policies can also be utilized for collecting expert demonstra-
918
+ tions for challenging tasks such as cloth manipulation.
919
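+ A minimal sketch of such a pick-style state machine (the state names and goal-pose helpers are illustrative placeholders, not ORBIT's actual API):
+ from enum import Enum, auto
+
+ class State(Enum):
+     REACH = auto()
+     GRASP = auto()
+     INTERACT = auto()
+     MOVE_TO_TARGET = auto()
+
+ def state_machine_step(state, robot, task):
+     """Return (gripper command, end-effector goal, next state) for one step."""
+     if state == State.REACH:
+         goal = task.object_pose()
+         nxt = State.GRASP if robot.near(goal) else State.REACH
+         return "open", goal, nxt
+     if state == State.GRASP:
+         return "close", task.object_pose(), State.INTERACT
+     if state == State.INTERACT:
+         goal = task.interaction_pose()       # e.g. pull a handle or lift the object
+         nxt = State.MOVE_TO_TARGET if robot.near(goal) else State.INTERACT
+         return "close", goal, nxt
+     return "close", task.next_target_pose(), State.MOVE_TO_TARGET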
+ b) Interactive motion planning: We define a system of
920
+ nodes for grasp generation, teleoperation, task-space control,
921
+ and motion previewing (shown in Fig. 8). Through the GUI,
922
+ the user can select an object to grasp and view the possible
923
+ grasp poses and the robot motion sequences generated using
924
+ the RMP controller. After confirming the grasp pose, the
925
+ robot executes the motion and lifts the object. Following this,
926
+ the user obtains teleoperation control of the robot.
927
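+ One way to read that node pipeline, as a hedged pseudocode sketch (the node and method names below are illustrative placeholders, not ORBIT's interface):
+ def interactive_grasp_session(gui, grasp_generator, rmp_controller, robot):
+     """User-in-the-loop grasping: select, preview, execute, then hand over to teleop."""
+     target = gui.wait_for_object_selection()
+     grasps = grasp_generator.propose(target.rgbd_image())
+     gui.preview(grasps, rmp_controller.plan_preview(grasps))
+     chosen = gui.wait_for_grasp_confirmation(grasps)
+     robot.execute(rmp_controller.plan_to(chosen))
+     robot.close_gripper()
+     robot.execute(rmp_controller.lift())
+     gui.enable_teleoperation(robot)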
+ D. Deployment on real robot
928
+ Deploying an agent on a real robot faces various chal-
929
+ lenges, such as dealing with real-time control and safety con-
930
+ straints. Different data transport layers, such as ROSTCP [19]
931
+ or ZeroMQ (ZMQ) [44], exist for connecting a robotic stack
932
+
933
+ [Fig. 9 panels a.1, a.2, b.1, b.2]
+ Fig. 9: Using the simulator as a digital twin to compute and apply commands on the simulated and real robot via a ZMQ connection. a) Franka
940
+ Panda arm with Allegro hand lifting two objects at once (video). b) Franka Panda performing object avoidance using RMP (video).
941
+ Fig. 10: Deployment of an RL policy on ANYmal-D robot using
945
+ ROS connection (video). The policy is trained in simulation and
946
+ runs at 50 Hz while the actuator net functions at 200 Hz.
947
+ to a real platform. We showcase how these mechanisms can
948
+ be used with ORBIT to run policies on a real robot.
949
+ a) Using ZMQ: To maintain light-weight and efficient
+ communication, we use ZMQ to send joint
957
+ commands from ORBIT to a computer running the real-time
958
+ kernel for the Franka Emika robot. To abide by the real-time
959
+ safety constraints, we use a quintic interpolator to upsample
960
+ the 60 Hz joint commands from the simulator to 1000 Hz
961
+ for execution on the robot (shown in Fig. 9).
962
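+ A hedged sketch of this bridge, assuming a pyzmq PUB socket on the simulator side and a SciPy spline for the smooth upsampling (the endpoint, message layout, and boundary handling are illustrative simplifications of the actual quintic interpolator):
+ import time
+ import zmq
+ import numpy as np
+ from scipy.interpolate import make_interp_spline
+
+ # Simulator side: publish 60 Hz joint targets over ZMQ.
+ ctx = zmq.Context()
+ pub = ctx.socket(zmq.PUB)
+ pub.bind("tcp://*:5555")                     # illustrative endpoint
+
+ def publish_joint_command(q):
+     pub.send_pyobj({"t": time.time(), "q": np.asarray(q)})
+
+ # Real-time side: upsample buffered 60 Hz targets to a 1 kHz command stream.
+ def upsample_to_1khz(t_cmd, q_cmd):
+     """Fit a degree-5 spline through recent targets (needs at least six of
+     them) and resample it on a 1 ms grid."""
+     spline = make_interp_spline(t_cmd, q_cmd, k=5, axis=0)
+     t_fine = np.arange(t_cmd[0], t_cmd[-1], 0.001)
+     return t_fine, spline(t_fine)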
+ We run experiments on two configurations of the Franka
963
+ robot: one with the Franka Emika hand and the other with
964
+ an Allegro hand. For each configuration, we showcase three
965
+ tasks: 1) teleoperation using a Spacemouse device, 2) de-
966
+ ployment of a state machine, and 3) waypoint tracking with
967
+ obstacle avoidance. The modular nature of the agent makes
968
+ it easy to switch between different control architectures for
969
+ each task while using the same interface for the real robot.
970
+ b) Using ROS: A variety of existing robots come with
971
+ their ROS software stack. In this demonstration, we focus on
972
+ how policies trained using ORBIT can be exported and de-
973
+ ployed on a robotic platform, particularly for the quadrupedal
974
+ robot from ANYbotics, ANYmal-D.
975
+ We train a locomotion policy entirely in simulation using
976
+ an actuator network [36] for the legged base. To make the
977
+ policy robust, we randomize the base mass (22 ± 5 kg) and
978
+ add simulated random pushes. We use the contact reporter to
979
+ obtain the contact forces and use them in reward design. The
980
+ learned policy is deployed on the robot using the ANYmal
981
+ ROS stack (Fig. 10). This sim-to-real transfer indicates the
982
+ viability of the simulated contact dynamics and its suitability
983
+ for contact-rich tasks in ORBIT.
984
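+ A small sketch of the kind of randomization described above (sampling the base mass and scheduling random pushes; the environment hooks and push cadence are hypothetical, not ORBIT's actual API):
+ import numpy as np
+
+ def randomize_episode(env, rng=np.random.default_rng()):
+     """Domain randomization for the locomotion policy: base mass 22 +/- 5 kg
+     plus occasional pushes applied as base-velocity offsets."""
+     env.set_base_mass(rng.uniform(17.0, 27.0))            # 22 +/- 5 kg
+     env.schedule_pushes(
+         interval_s=rng.uniform(5.0, 10.0),                # assumed push cadence
+         velocity_offset=rng.uniform(-1.0, 1.0, size=2),   # lateral push magnitude
+     )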
+ VI. DISCUSSION
985
+ In this paper, we proposed ORBIT: a framework to sim-
986
+ plify environment design, enable easier task specifications
987
+ and lower the barrier to entry into robotics and robot learn-
988
+ ing. ORBIT builds on state-of-the-art physics and render-
989
+ ing engines, and provides interfaces to easily design novel
990
+ realistic environments comprising various robotic platforms
991
+ interacting with rigid and deformable objects, physics-based
992
+ sensor simulation and sensor noise models, and different
993
+ actuator models. We readily support a broad set of robotic
994
+ platforms, ranging from fixed-arm to legged mobile manip-
995
+ ulators, CPU and GPU-based motion generators, and object
996
+ datasets (such as YCB and Partnet-Mobility).
997
+ The breadth of environments possible, as demonstrated
998
+ in part in Sec. IV, makes ORBIT useful for a broad set of
999
+ research questions in robotics. Keeping modularity at its
1000
+ core, we demonstrated the framework’s extensibility to dif-
1001
+ ferent paradigms, including reinforcement learning, imitation
1002
+ learning, and motion planning. We also showcased the ability
1003
+ to interface the framework to the Franka Emika Panda robot
1004
+ via ZMQ-based message-passing and sim-to-real deployment
1005
+ of RL policies for quadrupedal locomotion.
1006
+ By open-sourcing this framework1, we aim to reduce the
1007
+ overhead for developing new applications and provide a
1008
+ unified platform for future robot learning research. While
1009
+ we continue improving and adding more features to the
1010
+ framework, we hope that researchers contribute to making
1011
+ it a one-stop solution for robotics research.
1012
+ VII. FUTURE WORK
1013
+ ORBIT can notably simulate physics at up to 125,000
1014
+ samples per second; however, camera rendering is currently
1015
+ bottlenecked to a total of 270 frames per second for ten
1016
+ cameras rendering 640×480 RGB images on an RTX 3090.
1017
+ While this number is comparable to other frameworks, we
1018
+ are actively improving it further by leveraging GPU-based
1019
+ acceleration for training for visuomotor policies.
1020
+ 1NVIDIA Isaac Sim is free with an individual license. ORBIT will be
1021
+ open-sourced, and available at https://isaac-orbit.github.io.
1022
+
+ Additionally, though our experiments showcase the fidelity
1039
+ of rigid-contact modeling, the accuracy of contacts in de-
1040
+ formable objects simulation is still unexplored. It is essential
1041
+ to note that until now, robot manipulation research in this
1042
+ domain has not relied on sim-to-real since existing solvers
1043
+ are typically fragile or slow. Using FEM-based solvers and
1044
+ physically-based rendering, we believe our framework will
1045
+ help answer these open questions in the future.
1046
+ ACKNOWLEDGMENT
1047
+ We thank Farbod Farshidian for helping with OCS2, Umid
1048
+ Targuliyev for assisting with imitation learning experiments,
1049
+ as well as Ossama Samir Ahmed, Lukasz Wawrzyniak, Avi
1050
+ Rudich, Bryan Peele, Nathan Ratliff, Milad Rakhsha, Vik-
1051
+ tor Makoviychuk, Jean-Francois Lafleche, Yashraj Narang,
1052
+ Miles Macklin, Liila Torabi, Philipp Reist, Adam Mora-
1053
+ vansky, and other members of the NVIDIA PhysX and
1054
+ Omniverse teams for their assistance with the simulator.
1055
+ REFERENCES
1056
+ [1] A. Kumar, Z. Fu, et al., “Rma: Rapid motor adaptation for legged
1057
+ robots,” Robotics: Science and Systems (RSS), 2021.
1058
+ [2] Z. Xie, X. Da, et al., “Glide: Generalizable quadrupedal locomotion
1059
+ in diverse environments with a centroidal model,” arXiv preprint
1060
+ arXiv:2104.09771, 2021.
1061
+ [3] T. Miki, J. Lee, et al., “Learning robust perceptive locomotion for
1062
+ quadrupedal robots in the wild,” Science Robotics, vol. 7, no. 62, 2022.
1063
+ [4] A. Allshire, M. Mittal, et al., “Transferring dexterous manipulation
1064
+ from gpu simulation to a remote real-world trifinger,” IEEE/RSJ
1065
+ International Conference on Intelligent Robots and Systems (IROS),
1066
+ 2022.
1067
+ [5] I. Akkaya, M. Andrychowicz, et al., “Solving rubik’s cube with a
1068
+ robot hand,” arXiv preprint arXiv:1910.07113, 2019.
1069
+ [6] T. Yu, D. Quillen, et al., “Meta-world: A benchmark and evaluation for
1070
+ multi-task and meta reinforcement learning,” in Conference on Robot
1071
+ Learning (CoRL).
1072
+ PMLR, 2020.
1073
+ [7] Y. Zhu, J. Wong, et al., “robosuite: A modular simulation
1083
+ framework and benchmark for robot learning,” in arXiv preprint
1084
+ arXiv:2009.12293, 2020.
1085
+ [8] Y. Urakami, A. Hodgkinson, et al., “Doorgym: A scalable door
1086
+ opening environment and baseline agent,” CoRR, vol. abs/1908.01887,
1087
+ 2019.
1088
+ [9] R. Antonova, P. Shi, et al., “Dynamic environments with deformable
1089
+ objects,” in Neural Information Processing Systems Datasets and
1090
+ Benchmarks Track (Round 2), 2021.
1091
+ [10] S. James, Z. Ma, et al., “Rlbench: The robot learning benchmark &
1092
+ learning environment,” IEEE Robotics and Automation Letters, 2020.
1093
+ [11] C. Li, F. Xia, et al., “igibson 2.0: Object-centric simulation for
1094
+ robot learning of everyday household tasks,” in Conference on Robot
1095
+ Learning (CoRL).
1096
+ PMLR, 2021.
1097
+ [12] A. Szot, A. Clegg, et al., “Habitat 2.0: Training home assistants to
1098
+ rearrange their habitat,” in Advances in Neural Information Processing
1099
+ Systems (NeurIPS), 2021.
1100
+ [13] X. Lin, Y. Wang, et al., “Softgym: Benchmarking deep reinforcement
1101
+ learning for deformable object manipulation,” in Conference on Robot
1102
+ Learning (CoRL), 2020.
1103
+ [14] C. Gan, J. Schwartz, et al., “Threedworld: A platform for interactive
1104
+ multi-modal physical simulation,” 2021.
1105
+ [15] F. Xiang, Y. Qin, et al., “Sapien: A simulated part-based interactive
1106
+ environment,” in IEEE/CVF Conference on Computer Vision and
1107
+ Pattern Recognition (CVPR), 2020.
1108
+ [16] K. Ehsani, W. Han, et al., “Manipulathor: A framework for visual
1109
+ object manipulation,” in IEEE/CVF Conference on Computer Vision
1110
+ and Pattern Recognition (CVPR), 2021.
1111
+ [17] V. Makoviychuk, L. Wawrzyniak, et al., “Isaac gym: High
1120
+ performance GPU based physics simulation for robot learning,” in
1121
+ Neural Information Processing Systems Datasets and Benchmarks
1122
+ Track (Round 2), 2021.
1123
+ [18] M. Savva, A. Kadian, et al., “Habitat: A Platform for Embodied AI
1124
+ Research,” in IEEE/CVF International Conference on Computer Vision
1125
+ (ICCV), 2019.
1126
+ [19] M. Quigley, K. Conley, et al., “Ros: an open-source robot operating
1127
+ system,” in ICRA workshop on open source software, vol. 3, 2009.
1128
+ [20] NVIDIA, “Nvidia isaac sim,” https://developer.nvidia.com/isaac-sim,
1129
+ May 2022.
1130
+ [21] A. Mandlekar, D. Xu, et al., “What matters in learning from offline
1131
+ human demonstrations for robot manipulation,” in Conference on
1132
+ Robot Learning (CoRL).
1133
+ PMLR, 2022.
1134
+ [22] C.-A. Cheng, M. Mukadam, et al., “Rmpflow: A geometric framework
1135
+ for generation of multitask motion policies,” IEEE Transactions on
1136
+ Automation Science and Engineering, vol. 18, no. 3, 2021.
1137
+ [23] M. Mittal, D. Hoeller, et al., “Articulated object interaction in unknown
1138
+ scenes with whole-body mobile manipulation,” IEEE/RSJ Interna-
1139
+ tional Conference on Intelligent Robots and Systems (IROS), 2022.
1140
+ [24] E. Todorov, T. Erez, and Y. Tassa, “Mujoco: A physics engine
1141
+ for model-based control,” in IEEE/RSJ International Conference on
1142
+ Intelligent Robots and Systems (IROS), 2012.
1143
+ [25] E. Coumans and Y. Bai, “Pybullet, a python module for physics sim-
1144
+ ulation for games, robotics and machine learning,” http://pybullet.org,
1145
+ 2016–2021.
1146
+ [26] NVIDIA, “Flex: A particle-based simulation library,” https://github.
1147
+ com/NVIDIAGameWorks/FleX, May 2022.
1148
+ [27] ——, “Nvidia physx sdk,” Mar 2022.
1149
+ [28] Y. Narang, K. Storey, et al., “Factory: Fast contact for robotic
1150
+ assembly,” Robotics: Science and Systems (RSS), 2022.
1151
+ [29] S. G. Parker, J. Bigler, et al., “Optix: A general purpose ray tracing
1152
+ engine,” ACM Transactions On Graphics, vol. 29, no. 4, 2010.
1153
+ [30] X. Zhang, R. Chen, et al., “Close the visual domain gap by
1154
+ physics-grounded active stereovision depth sensor simulation,” 2022.
1155
+ [31] S. Shah, D. Dey, et al., “Airsim: High-fidelity visual and physical
1156
+ simulation for autonomous vehicles,” in Field and Service Robotics,
1157
+ 2017.
1158
+ [32] J. Straub, T. Whelan, et al., “The replica dataset: A digital replica of
1159
+ indoor spaces,” arXiv preprint arXiv:1906.05797, 2019.
1160
+ [33] B. Calli, A. Walsman, et al., “Benchmarking in manipulation research:
1161
+ The ycb object and model set and benchmarking protocols,” arXiv
1162
+ preprint arXiv:1502.03143, 2015.
1163
+ [34] K. Mo, S. Zhu, et al., “PartNet: A large-scale benchmark for
1164
+ fine-grained and hierarchical part-level 3D object understanding,” in
1165
+ IEEE/CVF Conference on Computer Vision and Pattern Recognition
1166
+ (CVPR), June 2019.
1167
+ [35] R. Mart´ın-Mart´ın, M. Lee, et al., “Variable impedance control in end-
1168
+ effector space. an action space for reinforcement learning in contact
1169
+ rich tasks,” in IEEE/RSJ International Conference of Intelligent Robots
1170
+ and Systems (IROS), 2019.
1171
+ [36] J. Hwangbo, J. Lee, et al., “Learning agile and dynamic motor skills
1172
+ for legged robots,” Science Robotics, vol. 4, no. 26, 2019.
1173
+ [37] S. Buss, “Introduction to inverse kinematics with jacobian transpose,
1174
+ pseudoinverse and damped least squares methods,” IEEE Transactions
1175
+ in Robotics and Automation, vol. 17, 2004.
1176
+ [38] O. Khatib, “Inertial properties in robotic manipulation: An object-
1177
+ level framework,” The International Journal of Robotics Research,
1178
+ vol. 14, no. 1, 1995.
1179
+ [39] N. Rudin, D. Hoeller, et al., “Learning to walk in minutes using
1180
+ massively parallel deep reinforcement learning,” in Conference on
1181
+ Robot Learning (CoRL).
1182
+ PMLR, 2022.
1183
+ [40] D. Makoviichuk and V. Makoviychuk, “rl-games: A high-performance
1184
+ framework for reinforcement learning,” https://github.com/Denys88/rl
1185
+ games, May 2022.
1186
+ [41] A. Raffin, A. Hill, et al., “Stable-baselines3: Reliable reinforcement
1187
+ learning implementations,” Journal of Machine Learning Research,
1188
+ vol. 22, no. 268, 2021.
1189
+ [42] J. Schulman, F. Wolski, et al., “Proximal policy optimization algo-
1190
+ rithms,” arXiv preprint arXiv:1707.06347, 2017.
1191
+ [43] A. Mandlekar, Y. Zhu, et al., “Roboturk: A crowdsourcing platform
1192
+ for robotic skill learning through imitation,” in Conference on Robot
1193
+ Learning (CoRL).
1194
+ PMLR, 2018.
1195
+ [44] P. Hintjens, ZeroMQ: messaging for many applications.
1196
+ ” O’Reilly
1197
+ Media, Inc.”, 2013.
1198
+
3dE2T4oBgHgl3EQf6Ago/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
4NAzT4oBgHgl3EQfuv1g/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:258cf101628045db3eff2a936bffeefdeb2728e33c19453f7e8ec42fe0e7172c
3
+ size 12976173
4tAzT4oBgHgl3EQf9v5K/content/2301.01923v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:391e24169e622f1e62cc93101d0412a3f4f688469fbcab737275b760c9680219
3
+ size 2357224
4tAzT4oBgHgl3EQf9v5K/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:16d891deeda80ff239f9cc5b17765a22495a9a10efefec7feba84aeb775c8478
3
+ size 3538989
4tAzT4oBgHgl3EQf9v5K/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f71ba4145673cfa1b242d62f2fec9b941cb53d1b3f8da2b7c122fc73d916a284
3
+ size 105494
6NE1T4oBgHgl3EQfBQJe/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dc08484319c8f05ff818fee0a187b341aa14cdd542c14785582fa39159b58eb0
3
+ size 122068
6tA0T4oBgHgl3EQfOP83/content/tmp_files/2301.02157v1.pdf.txt ADDED
@@ -0,0 +1,970 @@
1
+ Astronomy & Astrophysics manuscript no. main
2
+ ©ESO 2023
3
+ January 6, 2023
4
+ Letter to the Editor
5
+ Asteroids’ reflectance from Gaia DR3:
6
+ Artificial reddening at near-UV wavelengths
7
+ F. Tinaut-Ruano1, 2, E. Tatsumi1, 2, 3, P. Tanga4, J. de León1, 2, M. Delbo4, F. De Angeli5, D. Morate1, 2, J. Licandro1, 2,
8
+ and L. Galluccio4
9
+ 1 Instituto de Astrofísica de Canarias (IAC), C/ Vía Láctea, s/n, E-38205, La Laguna, Spain
10
+ e-mail: [email protected]
11
+ 2 Department of Astrophysics, University of La Laguna, Tenerife, Spain
12
+ 3 Department of Earth and Planetary Science, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, 113-0033 Tokyo, Japan
13
+ 4 Université Côte d’Azur, Observatoire de la Côte d’Azur, CNRS, Laboratoire Lagrange, Bd de l’Observatoire, CS 34229, 06304
14
+ Nice Cedex 4, France
15
+ 5 Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
16
+ Received 04/10/2022; accepted 02/01/2023
17
+ ABSTRACT
18
+ Context. Observational and instrumental difficulties observing small bodies below 0.5 µm make this wavelength range poorly studied
19
+ compared with the visible and near-infrared. Furthermore, the suitability of many commonly used solar analogues, essential in the
20
+ computation of asteroid reflectances, is usually assessed only in visible wavelengths, while some of these objects show spectra that
21
+ are quite different from the spectrum of the Sun at wavelengths below 0.55 µm. Stars HD 28099 (Hyades 64) and HD 186427 (16
22
+ Cyg B) are two well-studied solar analogues that instead present spectra that are also very similar to the spectrum of the Sun in the
23
+ wavelength region between 0.36 and 0.55 µm.
24
+ Aims. We aim to assess the suitability in the near-ultraviolet (NUV) region of the solar analogues selected by the team responsible for
25
+ the asteroid reflectance included in Gaia Data Release 3 (DR3) and to suggest a correction (in the form of multiplicative factors) to
26
+ be applied to the Gaia DR3 asteroid reflectance spectra to account for the differences with respect to the solar analogue Hyades 64.
27
+ Methods. To compute the multiplicative factors, we calculated the ratio between the solar analogues used by Gaia DR3 and Hyades
28
+ 64, and then we averaged and binned this ratio in the same way as the asteroid spectra in Gaia DR3. We also compared both the
29
+ original and corrected Gaia asteroid spectra with observational data from the Eight Color Asteroid Survey (ECAS), one UV spectrum
30
+ obtained with the Hubble Space Telescope (HST) and a set of blue-visible spectra obtained with the 3.6m Telescopio Nazionale
31
+ Galileo (TNG). By means of this comparison, we quantified the goodness of the obtained correction.
32
+ Results. We find that the solar analogues selected for Gaia DR3 to compute the reflectance spectra of the asteroids of this data release
33
+ have a systematically redder spectral slope at wavelengths shorter than 0.55 µm than Hyades 64. We find that no correction is needed
34
+ in the red photometer (RP, between 0.7 and 1 µm), but a correction should be applied at wavelengths below 0.55 µm, that is in the
35
+ blue photometer (BP). After applying the correction, we find a better agreement between Gaia DR3 spectra, ECAS, HST, and our set
36
+ of ground-based observations with the TNG.
37
+ Conclusions. Correcting the near-UV part of the asteroid reflectance spectra is very important for proper comparisons with laboratory
38
+ spectra (minerals, meteorite samples, etc.) or to analyse quantitatively the UV absorption (which is particularly important to study
39
+ hydration in primitive asteroids). The spectral behaviour at wavelengths below 0.5 µm of the selected solar analogues should be fully
40
+ studied and taken into account for Gaia DR4.
41
+ Key words. Gaia – asteroids – Solar analogues – UV – spectra
42
+ 1. Introduction
43
+ Asteroid reflectance spectra and/or spectrophotometry pro-
44
+ vide(s) information on their surfaces’ composition and the pro-
45
+ cesses that modify their properties such as space weathering
46
+ (Reddy et al. 2015). Historically, the use of photoelectric detec-
47
+ tors (or photometers), which are more sensitive at bluer wave-
48
+ lengths (e.g. < 0.5 µm), and the development of the standard
49
+ UBV photometric system (Johnson & Morgan 1951) led to the
50
+ appearance of the first asteroid taxonomies in the 1970s (Zell-
51
+ ner 1973; Chapman et al. 1975), which contained information
52
+ at blue-visible wavelengths or what we call near-UV (NUV).
53
+ The introduction of the charge-coupled-devices (CCDs) in as-
54
+ tronomy in the 1990s and later on contributed to the ’loss’ of
55
+ NUV information, as CCDs were much less sensitive at those
56
+ wavelengths. Therefore, the large majority of the modern spec-
57
+ troscopic and spectrophotometric surveys cover the wavelength
58
+ range from ∼0.5 µm up to 2.5 µm. Nevertheless, there are some
59
+ exceptions. One of the first large surveys with information in the
60
+ NUV is the Eight Color Asteroid Survey (ECAS, Zellner et al. 1985).
61
+ In this survey, we can find the photometry in eight broad-band
62
+ filters between 0.34 to 1.04 µm for 589 minor planets, includ-
63
+ ing two filters below 0.45 µm. These observations were used to
64
+ develop a new taxonomy (see Tholen 1984). Other recent cat-
65
+ alogues, such as the Sloan Digital Sky Survey (SDSS) Moving
66
+ Objects catalogue (Ivezi´c et al. 2002), the Moving Objects Ob-
67
+ served from Javalambre (MOOJa) catalogue from the J-PLUS
68
+ survey (Morate et al. 2021), or the Solar System objects obser-
69
+ vations from the SkyMapper Southern Survey (Sergeyev et al.
70
+ 2022), also include photometry in five, 12, and six filters be-
75
+ tween 0.3 and 1.1 µm with 104,449, 3122, and 205,515 objects
76
+ observed, respectively. The new Gaia data release 3 (DR3 here-
77
+ after) catalogue, which was released in June 2022, offers mean
+ reflectance spectra for 60,518 objects, binned at 16 wavelengths
+ between 0.352 and 1.056 µm.
80
+ Even though some laboratory measurements suggest the po-
81
+ tential of the NUV absorption as a diagnostic region of hydrated
82
+ and ferric material (Gaffey & McCord 1979; Feierberg 1981;
83
+ Feierberg et al. 1985; Hiroi et al. 1996; Cloutis et al. 2011a,b;
84
+ Hendrix et al. 2016; Hiroi et al. 2021), a quantitative distribu-
85
+ tion of the NUV absorption among asteroids has not been dis-
86
+ cussed before (Tatsumi et al. 2022). The low sensitivity of
+ CCDs and the Sun's lower emission at NUV wavelengths make
88
+ observations difficult. Moreover, the Rayleigh scattering by the
89
+ atmosphere is stronger on shorter wavelengths, decreasing the
90
+ signal-to-noise ratio (S/N) for the NUV region observed from
91
+ the ground. To compute the reflectance spectra, we needed to di-
92
+ vide the measured spectra, wavelength by wavelength, by the
+ spectrum of the Sun. As it is impractical to observe the Sun with
94
+ the same instrument used to observe asteroids, we used solar
95
+ analogues (SAs), that is stars selected by their known similar
96
+ spectra to that of the Sun. As the large majority of the spec-
97
+ troscopic and spectrophotometric surveys cover the wavelength
98
+ range that goes from the visible to near-infrared (NIR), the most
99
+ commonly used SAs are well characterised at those wavelengths
100
+ but they can behave very differently in the NUV. This flux dif-
101
+ ference at bluer wavelengths can introduce systematic errors in
102
+ the asteroid reflectance spectra. A good example is the work by
103
+ de León et al. (2016), where they searched for the presence of
104
+ F-type asteroids in the Polana collisional family since the parent
105
+ body of the family, asteroid (142) Polana, was classified as an
106
+ F type. The authors obtained reflectance spectra in the NUV of
107
+ the members of the family, finding that the large majority were
108
+ classified as B types. As most of the observers, they used SAs
109
+ that were widely used by the community. Interestingly, after ob-
110
+ taining the asteroid reflectances again using only Hyades 64 as
111
+ the SA, Tatsumi et al. (2022) found that the large majority of
112
+ the observed members of the Polana family were indeed F types
113
+ and not B types. This evidences the importance of using ade-
114
+ quate SAs when observing in the NUV, and it has been the main
115
+ motivation for this work.
116
+ In this Letter, we present a comparison between the SAs se-
117
+ lected to compute the reflectance spectra in the frame of the data
118
+ processing of Gaia DR3 (Gaia Collaboration et al. 2022) and
119
+ Hyades 64. We analyse the results from this comparison and
120
+ propose a multiplicative correction that can be applied to the
121
+ archived asteroids’ reflectance spectra. We finally tested it by
122
+ comparing corrected Gaia reflectance spectra with ground-based
123
+ observations that have also been corrected against the same SA
124
+ (ECAS survey, TNG spectra) and with one observation with the
125
+ Hubble Space Telescope (HST).
126
+ 2. Sample
127
+ 2.1. Solar analogues in Gaia DR3
128
+ The Gaia DR3 catalogue (Gaia Collaboration et al. 2022) gives
129
+ access to internally and externally calibrated mean spectra for a
130
+ large subset of sources. Internally calibrated spectra refer to an
131
+ internal reference system that is homogeneous across all differ-
132
+ ent instrumental configurations, while externally calibrated spec-
133
+ tra are given in an absolute wavelength and flux scale (see De
134
+ Angeli et al. 2022; Montegriffo et al. 2022, for more details).
135
+ Epoch spectra (spectra derived from a single observation rather
136
+ than averaging many observations of the same source) are not
137
+ included in this release. For this Letter, we relied on internally
138
+ calibrated data when computing the correction for the Gaia re-
139
+ flectances to ensure consistency and to avoid artefacts that could
140
+ appear when dividing two externally calibrated spectra, as they
141
+ are polynomial fits.
142
+ To select the SAs, the Gaia team did a bibliographic search
143
+ and selected a list of stars that are widely used as solar analogues
144
+ for asteroid spectroscopy (Bus & Binzel 2002; Lazzaro et al.
145
+ 2004; Soubiran & Triaud 2004; Fornasier et al. 2007; Popescu
146
+ et al. 2014; Perna et al. 2018; Popescu et al. 2019; Lucas et al.
147
+ 2019). First of all, we note that the star identified as 16 Cygnus
148
+ B in Gaia Collaboration et al. (2022) is in fact 16 Cygnus A and
149
+ that the parameters in their Table C.1. correspond to those of 16
150
+ Cygnus A. Luckily enough, the spectrum of 16 Cygnus B was
151
+ also available in DR3. Among the referenced works, only Soubi-
152
+ ran & Triaud (2004) carried out a search for SAs by comparing
153
+ their spectra to that of the Sun down to 0.385 µm. The rest sim-
154
+ ply used G2V stars or cited previous works that presented SAs,
155
+ as in Hardorp (1978). In this later work, Hardorp selected SAs
156
+ by comparing their spectra with the spectrum of the Sun using
157
+ wavelengths down to 0.36 µm. He highlighted the variations that
158
+ can exist at NUV wavelengths even between stars of the same
159
+ spectral class.
160
+ 2.2. Asteroids in Gaia DR3
161
+ Among the Gaia DR3 products for Solar System objects (SSOs),
162
+ neither the internally nor the externally calibrated spectra are
163
+ available to the community, as is the case for the stars. This is due
164
+ to a specific choice of the Data Processing and Analysis Consor-
165
+ tium (DPAC) caused by the difficulty of calculating those quanti-
166
+ ties owing to the intrinsic variability and proper motion of SSOs.
167
+ Instead, for each SSO and each epoch, the nominal, pre-launch
168
+ dispersion function was used to convert pseudo-wavelengths to
169
+ physical wavelengths. The reflectance spectra were calculated by
170
+ dividing each epoch spectrum by the mean of the SAs selected
171
+ and then averaging over the set of epochs. After that, a set of
172
+ fixed wavelengths every 44 nm in the range between 374 and
173
+ 1034 nm was defined, with a set of bins centred at those wave-
174
+ lengths and with a size of 44 nm. For each bin (a total of 16 are
175
+ provided), a σ-clipping filter was applied and a weighted aver-
176
+ age using the inverse of the standard deviation as weight was
177
+ obtained. Finally, the reflectances were normalised to unity us-
178
+ ing the value at 550±25 nm. This final product is the only one
179
+ available in DR3.
180
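+ A compact sketch of that binning recipe (a paraphrase of the description above, not DPAC code; the 3-sigma clipping threshold is an assumed placeholder):
+ import numpy as np
+
+ def bin_reflectance(wl, refl, refl_err,
+                     centers=np.arange(0.374, 1.035, 0.044), width=0.044,
+                     clip_sigma=3.0):
+     """Average epoch reflectances into fixed 44 nm bins with sigma-clipping,
+     inverse-standard-deviation weights, and normalization at 0.55 um."""
+     binned = []
+     for c in centers:
+         sel = np.abs(wl - c) <= width / 2
+         r, e = refl[sel], refl_err[sel]
+         keep = np.abs(r - np.median(r)) <= clip_sigma * np.std(r)   # assumed clip
+         w = 1.0 / e[keep]
+         binned.append(np.sum(w * r[keep]) / np.sum(w))
+     binned = np.asarray(binned)
+     return binned / binned[np.argmin(np.abs(centers - 0.550))]      # unity at 550 nm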
+ 2.3. Hyades 64 & 16 Cyg B
181
+ As mentioned in Sect. 2.1, Hardorp (1978) concluded that
182
+ Hyades 64 and 16 Cyg B are two of the four stars that exhibit
183
+ ’almost indistinguishable’ NUV spectra (quoting the author’s
184
+ words) from the spectrum of the Sun. This was confirmed in
185
+ subsequent papers from the same author (Hardorp 1980a,b) and
186
+ from other researchers (Cayrel de Strobel 1996; Porto de Mello
187
+ & da Silva 1997; Farnham et al. 2000; Soubiran & Triaud 2004).
188
+ We used these two stars as a ’reference’ to compute the correc-
189
+ tion factor to be applied to the Gaia DR3 asteroids spectra, as
190
+ they are in the list of SAs selected by Gaia Collaboration et al.
191
+ (2022). The methodology is described in the following section.
192
+ We note that the obtained correction factor using Hyades 64 as
193
+ opposed to 16 Cygnus B differs by less than 0.5%. We, therefore,
194
+ Table 1. Multiplicative correction factors for Gaia asteroid binned spec-
198
+ tra. We include the wavelengths below 0.55 µm.
199
+ Wavelength (µm)   Correction factor
+ 0.374             1.07
+ 0.418             1.05
+ 0.462             1.02
+ 0.506             1.01
+ 0.550             1.00
+ decided to use Hyades 64, as it was the star that was used for
212
+ both the ECAS survey and our ground-based observations.
213
+ 3. Methodology
214
+ 3.1. Computing the correction factor: Internally calibrated
215
+ data
216
+ In order to compute a correction applicable to the Gaia DR3
217
+ reflectances, we proceeded as follows: first, using the internally
218
+ calibrated data, we computed the ratio between the Gaia sample
219
+ of SAs, as well as the mean spectrum of these SAs, and Hyades
220
+ 64 (Fig. 1). As we can observe in the right panel of Fig. 1, which
221
+ corresponds to the red photometer (RP), the deviation from the
222
+ unity of the ratio between Gaia’s mean SA and Hyades 64 (black
223
+ line) is always below 1%. Therefore, this mean spectrum can
224
+ confidently be used to obtain the reflectance spectra of asteroids
225
+ above 0.55 µm.
226
+ However, the situation in the blue photometer (BP) is
227
+ quite different. We can see in the left panel of Fig. 1 that the de-
228
+ viation from the unity of the above defined ratio can reach values
229
+ of up to 10%, indicating that the mean spectrum of the SAs used
230
+ in Gaia DR3 differs significantly from Hyades 64 at wavelengths
231
+ below 0.55 µm. The biggest effect when using this mean spec-
232
+ trum to obtain asteroids’ reflectance spectra is the introduction
233
+ of a systematic (and not real) positive slope, in particular in the
234
+ range between 0.4 and 0.55 µm, mimicking a drop in reflectance
235
+ below 0.55 µm. Furthermore, the division by this mean spec-
236
+ trum can also introduce a ’fake’ absorption around 0.38 µm. We
237
+ have quantified this spectral slope in two separate wavelength
238
+ ranges, trying to reproduce the observed behaviour of the ratio:
239
+ one slope between 0.4 to 0.55 µm, which we named S Blue, and
240
+ another one for wavelengths below 0.4 µm, named µm S UV. The
241
+ obtained values for the individual SAs used in Gaia DR3 (blue
242
+ stars), as well as for the mean spectrum (blue cross) are shown
243
+ in Fig. 2. For the mean spectrum of the SAs used in Gaia DR3,
244
+ we found that the introduced slopes are S Blue = -0.38 µm−1 and
245
+ S UV = 0.69 µm−1.
246
+ From this analysis, we conclude that a correction is needed
247
+ in the NUV wavelengths, that is below 0.55 µm. To arrive at
248
+ the multiplicative correction factors, we binned the ratio between
249
+ the mean spectra of SAs selected by the DPAC and Hyades 64,
250
+ using the same wavelengths and bin size as the ones adopted for
251
+ the asteroid reflectance spectra in the Gaia DR3 (see Sect. 2). In
252
+ this way, the users can easily correct the asteroid spectra at NUV
253
+ wavelengths. The obtained values are shown in Table 1.
254
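+ For illustration, applying these factors to a Gaia DR3 spectrum reduces to an element-wise multiplication of the four bluest bins (a minimal sketch; variable names are placeholders):
+ import numpy as np
+
+ # Table 1 factors for the Gaia DR3 bin centres below 0.55 um.
+ CORRECTION = {0.374: 1.07, 0.418: 1.05, 0.462: 1.02, 0.506: 1.01, 0.550: 1.00}
+
+ def correct_gaia_reflectance(wavelengths_um, reflectance):
+     """Multiply the NUV bins of a DR3 asteroid spectrum by the Table 1 factors."""
+     corrected = np.array(reflectance, dtype=float)
+     for i, wl in enumerate(wavelengths_um):
+         factor = CORRECTION.get(round(wl, 3))
+         if factor is not None:
+             corrected[i] *= factor
+     return corrected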
+ 3.2. Comparison of corrected reflectances with existing data
255
+ To correct the artificial slopes introduced by the use of the mean
256
+ Gaia SAs, we multiplied the binned asteroid reflectance spec-
257
+ tra below 0.55 µm by the corresponding correction factors. We
258
+ compared the corrected Gaia spectra with spectra or spectropho-
259
+ tometry of the same asteroids obtained using other facilities. As
260
+ a first step, we selected only those Gaia asteroid spectra with
261
+ a S/N > 160, as we detected a systematic decrease in spectral
262
+ slope values at blue wavelengths with decreasing S/N for objects
263
+ with an S/N smaller than 150. We then selected spectrophotomet-
264
+ ric data from the ECAS survey for asteroids that have more than
265
+ one observation, and NUV spectra obtained with the Telesco-
266
+ pio Nazionale Galileo (TNG) and previously published by Tat-
267
+ sumi et al. (2022). The resulting comparison dataset is shown in
268
+ Fig. A.1, where the red lines correspond to the original Gaia re-
269
+ flectances, black lines are the corrected ones, dark blue lines cor-
270
+ respond to ECAS data, and TNG spectra are shown in light blue.
271
+ As can be seen, the corrected reflectances are in better agree-
272
+ ment with the ECAS and TNG data than the original ones. We
273
+ also included the UV spectrum of asteroid (624) Hector down-
274
+ loaded from the ESA archive using the python package astro-
275
+ query.esa.hubble1. It was obtained with STIS at HST (Wong
276
+ et al. 2019). We converted the flux to reflectance using the spec-
277
+ trum of the Sun provided for the STIS instrument2. We note
278
+ that even after the correction, some asteroids show discrepancies
279
+ with the reference data. This is discussed in the next section.
280
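+ The flux-to-reflectance step for the HST spectrum amounts to dividing by the STIS solar reference interpolated onto the same wavelength grid and renormalizing at 0.55 µm (a sketch assuming astropy and the CALSPEC file cited in footnote 2):
+ import numpy as np
+ from astropy.io import fits
+
+ def flux_to_reflectance(wl_ast, flux_ast, sun_fits="sun_reference_stis_002.fits"):
+     """Divide the asteroid flux by the STIS solar spectrum, normalize at 0.55 um."""
+     with fits.open(sun_fits) as hdul:
+         sun = hdul[1].data                                    # wavelengths in Angstrom
+     sun_flux = np.interp(wl_ast, sun["WAVELENGTH"], sun["FLUX"])
+     refl = flux_ast / sun_flux
+     return refl / np.interp(5500.0, wl_ast, refl)             # unity at 0.55 um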
+ 4. Results and discussion
281
+ We have shown that the artificial slope introduced at blue
282
+ wavelengths in the Gaia DR3 asteroid data due to the selected
283
+ SAs is -0.38 µm−1 in the range between 0.4 to 0.55 µm and 0.69
284
+ µm−1 below 0.4 µm. Following Zellner et al. (1985), the b and v
285
+ filters of the ECAS survey have central effective wavelengths of
286
+ 0.437 and 0.550 µm, respectively. According to Tholen (1984),
287
+ the (b-v) colours of the mean F and B taxonomical classes are
288
+ -0.049 and -0.015 magnitudes, respectively. Transforming these
289
+ colours to relative reflectances results in 1.046 and 1.014, which
290
+ gives slopes of -0.407 and -0.124 µm−1 between 0.437 and 0.55
291
+ µm. Therefore, the difference between these computed slopes
292
+ for F and B types (-0.283 µm−1) is smaller than the artificial
293
+ slope introduced by the use of the mean SA of Gaia, implying
294
+ that unless we apply the correction proposed in this Letter,
295
+ asteroids can be easily misclassified as B types when actually
296
+ being F types (see the described example in the Introduction for
297
+ the case of members of the Polana family).
298
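+ As a quick numerical check of that arithmetic (10^(0.4 |b-v|) converts the colour into a reflectance relative to the v band, and the slope is taken over the 0.437-0.550 µm baseline):
+ def bv_colour_to_blue_slope(b_minus_v, wl_b=0.437, wl_v=0.550):
+     """Convert an ECAS (b - v) colour into a spectral slope over 0.437-0.550 um."""
+     r_b = 10 ** (-0.4 * b_minus_v)          # reflectance at b relative to v (= 1)
+     return (1.0 - r_b) / (wl_v - wl_b)
+
+ print(bv_colour_to_blue_slope(-0.049))      # mean F class: about -0.41 per um
+ print(bv_colour_to_blue_slope(-0.015))      # mean B class: about -0.12 per um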
+ To test and quantify the goodness of our proposed correction,
299
+ we computed the spectral slope between 0.437 and 0.55 µm for
300
+ the ECAS comparison dataset, and between 0.418 and 0.55 for
301
+ Gaia original and corrected spectra. In Fig. 3 we plotted the
302
+ difference between those slopes. After applying our correction
303
+ factor, we could see that the large majority (148 out of 152) of
304
+ the asteroids have more similar slopes to those of ECAS.
305
+ Nevertheless, our correction has limitations. First, we were
306
+ testing its goodness over space-based observations using
307
+ ground-based observations. For wavelengths down to 0.3 µm,
308
+ ground-based observations present some difficulties, mainly due
309
+ to the atmospheric absorption and the lower sensitivity of the
310
+ detectors. Furthermore, Gaia observations at those wavelengths
311
+ also have other artifacts that we do not fully understand, such
312
+ as the detected strong decrease in the spectral slope below S/N
313
+ 150. Another point to consider when comparing asteroid spectra
314
+ observed in different epochs is the effect of the different viewing
315
+ geometries. This difference in the viewing geometry, and thus, in
316
+ 1 https://astroquery.readthedocs.io/en/latest/esa/
317
+ hubble/hubble.html
318
+ 2 https://archive.stsci.edu/hlsps/reference-atlases/
319
+ cdbs/current_calspec/sun_reference_stis_002.fits
320
+ Article number, page 3 of 13
321
+
322
+ A&A proofs: manuscript no. main
323
+ 0.35
324
+ 0.40
325
+ 0.45
326
+ 0.50
327
+ 0.55
328
+ 0.60
329
+ Wavelength [ m]
330
+ 0.95
331
+ 1.00
332
+ 1.05
333
+ 1.10
334
+ 1.15
335
+ Counts relative to Hyades 64
336
+ BP
337
+ 0.7
338
+ 0.8
339
+ 0.9
340
+ 1.0
341
+ Wavelength [ m]
342
+ RP
343
+ HD060234
344
+ HD123758
345
+ 16 Cyg B(A)1
346
+ HD6400
347
+ HD220022
348
+ HD220764
349
+ HD016640
350
+ HD292561
351
+ HD100044
352
+ HD155415
353
+ SA110-361
354
+ HD182081
355
+ HD144585
356
+ HD146233
357
+ HD138159
358
+ HD139287
359
+ HD020926
360
+ HD154424
361
+ HD202282
362
+ 16 Cyg B
363
+ mean
364
+ Fig. 1. Ratio between the internally calibrated spectra of each of the Gaia SAs and Hyades 64 in the blue photometer (BP, left panel) and the red
365
+ photometer (RP, right panel). We also plotted the ratio of the mean Gaia SA and Hyades 64 (black solid line) and the binned version of this ratio
366
+ at the wavelengths provided for SSO in Gaia DR3 (black dots).
367
+ 1 We note that the star identified as 16 Cygnus B in Gaia Collaboration et al. (2022) is in fact 16 Cyg A (see the main text for more details).
368
+ 1.0
369
+ 0.5
370
+ 0.0
371
+ 0.5
372
+ 1.0
373
+ 1.5
374
+ 2.0
375
+ 2.5
376
+ SUV [ m
377
+ 1]
378
+ 0.8
379
+ 0.6
380
+ 0.4
381
+ 0.2
382
+ 0.0
383
+ SBlue [ m
384
+ 1]
385
+ Gaia SAs
386
+ Gaia mean
387
+ Fig. 2. Slopes introduced by each of the SAs in the Gaia sample (blue
388
+ stars) and their mean (blue cross), compared to Hyades 64. We note that
389
+ S Blue was computed in the 0.4–0.55 µm range, while S UV was computed
390
+ using wavelengths below 0.4 µm.
391
+ the phase angle, causes a change in the spectral slope known as
392
+ phase reddening or phase coloring (Alvarez-Candal et al. 2022).
393
+ This effect has not been well studied at blue wavelengths. Still,
394
+ even in the event that we were able to correct it, Gaia’s spectra
395
+ are averaged over different epochs, and the information on
396
+ the phase angle values is not provided.
397
+ [Fig. 3 scatter plot: ECAS slope − original Gaia slope (1/µm) against ECAS slope − corrected Gaia slope (1/µm), both axes 0.0–1.0]
+ Fig. 3. Difference between the blue slope for ECAS and for Gaia origi-
412
+ nal data (x-axis) and corrected data (y-axis) in the comparison sample.
413
+ 5. Conclusions
414
+ We have found that the use of the SAs selected to compute the
415
+ reflectance spectra of the asteroids in Gaia DR3 introduces an
416
+ artificial reddening in the spectral slope below 0.5 µm, that is
417
+ an artificial drop in reflectance. By comparing those SAs with
418
+ Hyades 64, one of the best characterised SAs at NUV wave-
419
+ lengths, we obtain multiplicative correction factors for each of
420
+ the reflectance wavelengths below 0.55 µm (a total of four) that
421
+ can be applied to the asteroids’ reflectance spectra in Gaia DR3.
422
+ By applying this correction, we found a better agreement be-
423
+ tween the Gaia spectra and other data sources such as ECAS.
424
+
+ The behaviour of the SAs in the red wavelengths is in agree-
428
+ ment with Hyades 64 within 1%. This was somehow expected,
429
+ as the majority of the SAs used by the Gaia team were previ-
430
+ ously tested and widely used by the community to obtain visible
431
+ reflectance spectra of asteroids, typically beyond 0.45–0.5 µm.
432
+ Correcting the NUV part of the asteroid reflectance spectra is
433
+ fundamental to study the presence of the UV absorption, which
434
+ has been associated with hydration in primitive asteroids, or to
435
+ discriminate between B and F types, which are two taxonom-
436
+ ical classes that have proven to have very distinct polarimetric
437
+ properties. The NUV region has not yet been fully exploited for
438
+ asteroids and, in this way, Gaia spectra constitute a major step
439
+ forward in our understanding of these wavelengths.
440
+ Acknowledgements. FTR, JdL, ET, DM, and JL acknowledge support from the
441
+ Agencia Estatal de Investigación del Ministerio de Ciencia e Innovación (AEI-
442
+ MCINN) under the grant ’Hydrated Minerals and Organic Compounds in Prim-
443
+ itive Asteroids’ with reference PID2020-120464GB-100.
444
+ FTR also acknowledges the support from the COST Action and the ESA
445
+ Archival Visitor Programme.
446
+ DM acknowledges support from the ESA P3NEOI programme (AO/1-
447
+ 9591/18/D/MRP).
448
+ This work has made use of data from the European Space Agency (ESA)
449
+ mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia
450
+ Data Processing and Analysis Consortium (DPAC, https://www.cosmos.
451
+ esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been pro-
452
+ vided by national institutions, in particular, the institutions participating in the
453
+ Gaia Multilateral Agreement.
454
+ The work of MD is supported by the CNES and by the project Origins of the
455
+ French National Research Agency (ANR-18-CE31-0014).
456
+ F. De Angeli is supported by the United Kingdom Space Agency (UKSA)
457
+ through the grants ST/X00158X/1 and ST/W002469/1.
458
+ References
459
+ Alvarez-Candal, A., Jimenez Corral, S., & Colazo, M. 2022, A&A, 667, A81
460
+ Bus, S. J. & Binzel, R. P. 2002, Icarus, 158, 106
461
+ Cayrel de Strobel, G. 1996, A&A Rev., 7, 243
462
+ Chapman, C. R., Morrison, D., & Zellner, B. 1975, Icarus, 25, 104
463
+ Cloutis, E. A., Hiroi, T., Gaffey, M. J., Alexander, C. M. O. D., & Mann, P.
464
+ 2011a, Icarus, 212, 180
465
+ Cloutis, E. A., Hudon, P., Hiroi, T., Gaffey, M. J., & Mann, P. 2011b, Icarus, 216,
466
+ 309
467
+ De Angeli, F., Weiler, M., Montegriffo, P., et al. 2022, arXiv e-prints,
468
+ arXiv:2206.06143
469
+ de León, J., Pinilla-Alonso, N., Delbo, M., et al. 2016, Icarus, 266, 57
470
+ Farnham, T. L., Schleicher, D. G., & A’Hearn, M. F. 2000, Icarus, 147, 180
471
+ Feierberg, M. A. 1981, PhD thesis, University of Arizona
472
+ Feierberg, M. A., Lebofsky, L. A., & Tholen, D. J. 1985, Icarus, 63, 183
473
+ Fornasier, S., Dotto, E., Hainaut, O., et al. 2007, Icarus, 190, 622
474
+ Gaffey, M. J. & McCord, T. B. 1979, in Asteroids, ed. T. Gehrels & M. S.
475
+ Matthews, 688–723
476
+ Gaia Collaboration, Galluccio, L., Delbo, M., et al. 2022, arXiv e-prints,
477
+ arXiv:2206.12174
478
+ Hardorp, J. 1978, A&A, 63, 383
479
+ Hardorp, J. 1980a, A&A, 88, 334
480
+ Hardorp, J. 1980b, A&A, 91, 221
481
+ Hendrix, A. R., Vilas, F., & Li, J.-Y. 2016, Meteorit. Planet. Sci., 51, 105
482
+ Hiroi, T., Kaiden, H., Imae, N., et al. 2021, Polar Science, 29, 100723
483
+ Hiroi, T., Zolensky, M. E., Pieters, C. M., & Lipschutz, M. E. 1996, Meteorit.
484
+ Planet. Sci., 31, 321
485
+ Ivezi´c, Ž., Lupton, R. H., Juri´c, M., et al. 2002, AJ, 124, 2943
486
+ Johnson, H. L. & Morgan, W. W. 1951, ApJ, 114, 522
487
+ Lazzaro, D., Angeli, C. A., Carvano, J. M., et al. 2004, Icarus, 172, 179
488
+ Lucas, M. P., Emery, J. P., MacLennan, E. M., et al. 2019, Icarus, 322, 227
489
+ Montegriffo, P., De Angeli, F., Andrae, R., et al. 2022, arXiv e-prints,
490
+ arXiv:2206.06205
491
+ Morate, D., Marcio Carvano, J., Alvarez-Candal, A., et al. 2021, A&A, 655, A47
492
+ Perna, D., Barucci, M. A., Fulchignoni, M., et al. 2018, Planet. Space Sci., 157,
493
+ 82
494
+ Popescu, M., Birlan, M., Nedelcu, D. A., Vaubaillon, J., & Cristescu, C. P. 2014,
495
+ A&A, 572, A106
496
+ Popescu, M., Vaduvescu, O., de León, J., et al. 2019, A&A, 627, A124
497
+ Porto de Mello, G. F. & da Silva, L. 1997, ApJ, 482, L89
498
+ Reddy, V., Dunn, T. L., Thomas, C. A., Moskovitz, N. A., & Burbine, T. H. 2015,
499
+ in Asteroids IV, 43–63
500
+ Sergeyev, A. V., Carry, B., Onken, C. A., et al. 2022, A&A, 658, A109
501
+ Soubiran, C. & Triaud, A. 2004, A&A, 418, 1089
502
+ Tatsumi, E., Tinaut-Ruano, F., de León, J., Popescu, M., & Licandro, J. 2022,
503
+ A&A, 664, A107
504
+ Tholen, D. J. 1984, PhD thesis, University of Arizona, Tucson
505
+ Wong, I., Brown, M. E., Blacksberg, J., Ehlmann, B. L., & Mahjoub, A. 2019,
506
+ AJ, 157, 161
507
+ Zellner, B. 1973, in Bulletin of the American Astronomical Society, Vol. 5, 388
508
+ Zellner, B., Tholen, D. J., & Tedesco, E. F. 1985, Icarus, 61, 355
509
+
+ Appendix A: Comparison figures
513
+ In this appendix, we show the spectra of asteroids that
514
+ have at least two observations from the ground from ECAS (dark
515
+ blue) or from TNG (light blue) and also have available spectra in
516
+ Gaia DR3. We plotted the original (red) and the corrected (black)
517
+ version of the Gaia spectra together. For asteroid 624 (Hector),
518
+ we also added an observation from HST (further information can
519
+ be found in the main text).
520
+
+ [Fig. A.1 panels, page 7: relative reflectance (≈0.8–1.0) vs. wavelength (0.35–0.55 µm) for asteroids 1, 2, 3, 4, 6, 7, 8, 9, 10, 12, 14, 16, 18, 21, 23, 24, 27, 29, 37, 38, 39, 42, 44, 45; legend: ECAS, original Gaia, corrected Gaia]
+ Fig. A.1. Comparison between ground-based observations from the Eight Color Asteroid Survey (ECAS, dark blue line), TNG observations (light blue
586
+ line), original Gaia data (red line), and corrected data (black line). We also included a UV spectrum of asteroid (624) downloaded from the ESA
587
+ archive and obtained with the instrument STIS, on board the Hubble Space Telescope (HST).
588
+
+ [Fig. A.1, continued on pages 8–13: the same per-asteroid panels of relative reflectance vs. wavelength (0.35–0.55 µm) for the remaining asteroids of the comparison sample, with panel labels running from 46 up to 1754; legends: TNG, ECAS, original Gaia, corrected Gaia, plus HST for asteroid 624]
+
6tA0T4oBgHgl3EQfOP83/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
7tE1T4oBgHgl3EQfnQST/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:338501dd7e0f17f2ba12ff904862f2e06b620b7dcf4cbc7166628aa030a95404
3
+ size 10092589
89E2T4oBgHgl3EQfQAYH/content/tmp_files/2301.03764v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
89E2T4oBgHgl3EQfQAYH/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
8dE3T4oBgHgl3EQfRwni/content/2301.04426v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0b95015f99ccd2a6fbed5dc4c87c079e54e6f73fb755889f53816ea5f8823665
3
+ size 1131400
8dE3T4oBgHgl3EQfRwni/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bb16790356dd0ca2d4d4c7633da7525bae3d87cfd78455ff7da7ceded5d813a5
3
+ size 3538989
8dE3T4oBgHgl3EQfRwni/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e5199a131479a3fd9e3b7973e1f39eeed2b6705859649fc8722eb49c8b015d93
3
+ size 128688
99FQT4oBgHgl3EQfJzVJ/content/tmp_files/2301.13257v1.pdf.txt ADDED
@@ -0,0 +1,1689 @@
 
 
 
 
1
+ arXiv:2301.13257v1 [math.RA] 30 Jan 2023
2
+ CONDITION NUMBERS OF HESSENBERG COMPANION MATRICES
3
+ MICHAEL COX, KEVIN N. VANDER MEULEN, ADAM VAN TUYL, AND JOSEPH VOSKAMP
4
+ Abstract. The Fiedler matrices are a large class of companion matrices that include the
5
+ well-known Frobenius companion matrix. The Fiedler matrices are part of a larger class
6
+ of companion matrices that can be characterized with a Hessenberg form. In this paper,
7
+ we demonstrate that the Hessenberg form of the Fiedler companion matrices provides a
8
+ straight-forward way to compare the condition numbers of these matrices. We also show
9
+ that there are other companion matrices which can provide a much smaller condition num-
10
+ ber than any Fiedler companion matrix. We finish by exploring the condition number of a
11
+ class of matrices obtained from perturbing a Frobenius companion matrix while preserving
12
+ the characteristic polynomial.
13
+ 1. Introduction
14
+ The Frobenius companion matrix is a template that provides a matrix with a prescribed
15
+ characteristic polynomial. More recently, it was discovered that the Frobenius companion
16
+ matrix belongs to a larger class of Fiedler companion matrices [5], which in turn is a
17
+ subset of the intercyclic companion matrices [4]. Other recent templates include nonsparse
18
+ companion matrices [2] and generalized companion matrices [6].
19
+ The Frobenius companion matrix is employed in algorithms that use matrix methods to
20
+ determine roots of polynomials, but this matrix is not always well-conditioned [3]. Recent
21
+ work [3] has explored under what circumstances other Fiedler companion matrices can have
22
+ a better condition number than the Frobenius matrix, with respect to the Frobenius norm.
23
+ After covering background details in Section 2, we use a Hessenberg characterization of the
24
+ Fiedler companion matrices in Section 3 to provide a concise argument for the condition
25
+ number of a Fielder companion matrix. The characterization allows us to avoid dealing
26
+ with the particular permutation in Fiedler’s construction of companion matrices [5], as well
27
+ as associated concepts around consecutions and inversions developed in [3]. In Section 4,
28
+ we provide some examples of non-Fiedler companion matrices that demonstrate that there
29
+ are intercyclic companion matrices that have a smaller condition number than any Fiedler
30
+ companion matrix for some specific polynomials. In Section 5, we provide a method for con-
31
+ structing a generalized companion matrix that, in some cases, can improve on the condition
32
+ number of any Fiedler companion matrix.
33
+ Date: February 1, 2023.
34
+ 2010 Mathematics Subject Classification. 15A12, 15B99.
35
+ Key words and phrases. companion matrix, Fiedler companion matrix, condition number, generalized
36
+ companion matrix.
37
+ Research of Vander Meulen was supported in part by NSERC Discovery Grant 2022-05137.
38
+ Research of Van Tuyl was supported in part by NSERC Discovery Grant 2019-05412.
39
+ Research of Voskamp was supported in part by NSERC USRA 504279.
40
+ \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -c_0 & -c_1 & -c_2 & -c_3 \end{bmatrix}, \quad
+ \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & -c_3 & 1 & 0 \\ 0 & -c_2 & 0 & 1 \\ -c_0 & -c_1 & 0 & 0 \end{bmatrix}, \quad
+ \begin{bmatrix} 0 & 1 & 0 & 0 \\ -c_2 & -c_3 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -c_0 & -c_1 & 0 & 0 \end{bmatrix}.
+ Figure 1. Some 4 × 4 unit sparse companion matrices.
105
+ \begin{bmatrix} 0 & 1 & 0 & 0 \\ -c_2 & 0 & 1 & 0 \\ -c_1 + c_3c_2 & 0 & -c_3 & 1 \\ -c_0 & 0 & 0 & 0 \end{bmatrix}, \quad
+ \begin{bmatrix} -c_3 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -c_1 + c_3c_2 & -c_2 & 0 & 1 \\ -c_0 & 0 & 0 & 0 \end{bmatrix}, \quad
+ \begin{bmatrix} -c_3 & 1 & 0 & 0 \\ -c_2 + a & 0 & 1 & 0 \\ -c_1 + ac_3 & -a & 0 & 1 \\ -c_0 & 0 & 0 & 0 \end{bmatrix}.
+ Figure 2. Some 4 × 4 companion matrices.
166
+ 2. Technical definitions and background
167
+ In this section we recall the relevant background on companion matrices and condition
168
+ numbers that will be required throughout the paper.
169
+ Let n ≥ 2 be an integer and p(x) = xn + cn−1xn−1 + cn−2xn−2 + · · · + c0. A compan-
170
+ ion matrix to p(x) is an n × n matrix A over R[c0, . . . , cn−1] such that the characteristic
171
+ polynomial of A is p(x). A unit sparse companion matrix to p(x) is a companion matrix A
172
+ that has n − 1 entries equal to one, n variable entries −c0, . . . , −cn−1, and the remaining
173
+ n2 − 2n + 1 entries equal to zero. The unit sparse companion matrix of the form
174
+ \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 & 0 \\ 0 & 0 & 1 & \cdots & 0 & 0 \\ 0 & 0 & 0 & \cdots & 1 & 0 \\ \vdots & \vdots & \vdots & \cdots & 0 & 1 \\ -c_0 & -c_1 & -c_2 & \cdots & -c_{n-2} & -c_{n-1} \end{bmatrix}
208
+ is called the Frobenius companion matrix of p(x). Sparse companion matrices have also
209
+ been called intercyclic companion matrices due to the structure of the digraph associated
210
+ with the matrix (see [7] and [4] for details).
211
+ The matrices in Figure 1 are examples of unit sparse companion matrices to p(x) =
212
+ x4 + c3x3 + c2x2 + c1x + c0. The first matrix in Figure 1 is a Frobenius companion matrix.
213
+ The matrices in Figure 2 are also companion matrices to p(x), but they are not unit sparse
214
+ since not every nonzero variable entry is the negative of a single coefficient of p(x). Note
215
+ that in the last matrix, the value of a can be any real number; when a = 0, then this matrix
216
+ becomes a unit sparse companion matrix.
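+ To make the template concrete, here is a minimal numerical sketch (ours, not part of the
+ original text) that builds the 4 × 4 Frobenius companion matrix with numpy and checks that
+ its characteristic polynomial recovers the prescribed coefficients; the values are purely illustrative.
+ import numpy as np
+ c = [2.0, -1.0, 3.0, 0.5]                # illustrative c0, c1, c2, c3
+ n = len(c)
+ F = np.zeros((n, n))
+ F[:-1, 1:] = np.eye(n - 1)               # superdiagonal of ones
+ F[-1, :] = [-ci for ci in c]             # last row: -c0, ..., -c_{n-1}
+ assert np.allclose(np.poly(F), [1.0] + c[::-1])   # char. poly is x^4 + c3 x^3 + ... + c0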
217
+ Since matrix transposition and permutation similarity does not affect the characteristic
218
+ polynomial, nor the set of nonzero entries in a matrix, we call two companion matrices
219
+ equivalent if one can be obtained from the other via transposition and/or permutation
220
+ similarity.
221
+ The matrices in Figure 3 are equivalent to the 4 × 4 Frobenius companion
222
+ matrix. Note that if A and B are equivalent matrices, then the multiset of entries in any
223
+
224
+ \begin{bmatrix} -c_3 & 1 & 0 & 0 \\ -c_2 & 0 & 1 & 0 \\ -c_1 & 0 & 0 & 1 \\ -c_0 & 0 & 0 & 0 \end{bmatrix}, \quad
+ \begin{bmatrix} -c_3 & -c_2 & -c_1 & -c_0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \quad
+ \begin{bmatrix} 0 & 0 & 0 & -c_0 \\ 1 & 0 & 0 & -c_1 \\ 0 & 1 & 0 & -c_2 \\ 0 & 0 & 1 & -c_3 \end{bmatrix}.
+ Figure 3. Some companion matrices equivalent to the 4 × 4 Frobenius companion matrix.
288
+ row of A is exactly the multiset of entries of some row or column of B. No two matrices
289
+ from Figures 1 and 2 are equivalent (assuming a ̸= 0).
290
+ Fiedler [5] introduced a class of companion matrices that are constructed as a product
291
+ of certain block diagonal matrices. In particular, let F0 be a diagonal matrix with diagonal
292
+ entries (1, . . . , 1, −c0) and for k = 1, . . . , n − 1, let
293
+ Fk =
294
+
295
+
296
+ In−k−1
297
+ O
298
+ O
299
+ O
300
+ Tk
301
+ O
302
+ O
303
+ O
304
+ Ik−1
305
+
306
+  with Tk =
307
+
308
+ −ck
309
+ 1
310
+ 1
311
+ 0
312
+
313
+ .
314
+ Fiedler showed (see [5, Theorem 2.3]) that the product of these n matrices, in any or-
315
+ der, will produce a companion matrix of p(x) = xn + cn−1xn−1 + cn−2xn−2 + · · · + c0.
316
+ Consequently, given any permutation σ = (σ0, σ1, . . . , σn−1) of {0, 1, 2, . . . , n − 1}, we say
317
+ that Fσ = Fσ0Fσ1 · · · Fσn−1 is a Fiedler companion matrix. The Frobenius companion ma-
318
+ trix is a Fiedler companion matrix since the Frobenius companion matrix is equivalent to
319
+ F0F1 · · · Fn−1, as noted in [5].
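+ The construction is easy to experiment with numerically; the following sketch (an illustration
+ of ours, not code from [5]) forms the factors F_k for a quartic and checks that every ordering
+ of the product yields the same characteristic polynomial.
+ import numpy as np
+ from itertools import permutations
+ def fiedler_factor(k, c):
+     n = len(c)
+     F = np.eye(n)
+     if k == 0:
+         F[n - 1, n - 1] = -c[0]          # F_0 = diag(1, ..., 1, -c0)
+     else:
+         i = n - k - 1                    # T_k sits in rows/columns i, i+1
+         F[i:i + 2, i:i + 2] = [[-c[k], 1.0], [1.0, 0.0]]
+     return F
+ c = [2.0, -1.0, 3.0, 0.5]                # illustrative c0, ..., c3
+ for sigma in permutations(range(len(c))):
+     M = np.linalg.multi_dot([fiedler_factor(k, c) for k in sigma])
+     assert np.allclose(np.poly(M), [1.0] + c[::-1])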
320
+ In [4] it was demonstrated that every unit sparse companion matrix is equivalent to a
321
+ unit lower Hessenberg matrix, as summarized in Theorem 2.1. Note that, for 0 ≤ k ≤ n−1,
322
+ the k-th subdiagonal of a matrix A = [aij] consists of the entries {ak+1,1, ak+2,2, . . . , an,n−k}.
323
+ The 0-th subdiagonal is usually called the main diagonal of a matrix.
324
+ Theorem 2.1. [4, Corollary 4.3] Let p(x) = xn + cn−1xn−1 + cn−2xn−2 + · · · + c1x + c0 be
325
+ a polynomial over R with n ≥ 2. Then A is an n × n unit sparse companion matrix to p(x)
326
+ if and only if A is equivalent to a unit lower Hessenberg matrix
327
+ (1)  C = \begin{bmatrix} \mathbf{0} & I_m & O \\ R & \begin{matrix} I_{n-m-1} \\ \mathbf{0}^T \end{matrix} \end{bmatrix}
339
+ for some (n − m) × (m + 1) matrix R with m(n − 1 − m) zero entries, such that C has
340
+ −cn−1−k on its k-th subdiagonal, for 0 ≤ k ≤ n − 1.
341
+ Note that in (1), the unit lower Hessenberg matrix C always has Cn,1 = −c0 and R1,m+1 =
342
+ −cn−1. Given this Hessenberg characterization of the unit sparse companion matrices, one
343
+ can deduce the corresponding inverse matrix if c0 ̸= 0.
344
+ Lemma 2.2. [7, Section 7] Let p(x) = xn + cn−1xn−1 + cn−2xn−2 + · · · + c1x + c0 be a
345
+ polynomial over R with n ≥ 2.
346
+ Suppose that C is a unit lower Hessenberg companion
347
+ matrix to p(x) as in (1). Assuming c0 ≠ 0, if
+ C = \begin{bmatrix} \mathbf{0} & I_m & O \\ u & H & I_{n-m-1} \\ -c_0 & y^T & \mathbf{0}^T \end{bmatrix}, \text{ for some } u, y, H, \text{ then } C^{-1} = \begin{bmatrix} \tfrac{1}{c_0}y^T & \mathbf{0}^T & -\tfrac{1}{c_0} \\ I_m & O & \mathbf{0} \\ -\tfrac{1}{c_0}uy^T - H & I_{n-m-1} & \tfrac{1}{c_0}u \end{bmatrix}.
382
+ Throughout this paper, we use the Frobenius norm of an n × n matrix A = [ai,j] given
383
+ by
384
+ ||A|| = \sqrt{\textstyle\sum_{i,j} a_{i,j}^2}.
389
+ Remark 2.3. If A and B are both unit sparse companion matrices to the same polyno-
390
+ mial p(x), then it follows that ||A|| = ||B|| since A and B have exactly the same entries.
391
+ Furthermore, if A = PBP T for some permutation matrix P, then A−1 and B−1 also have
392
+ the same entries, and hence ||A−1|| = ||B−1||.
393
+ The condition number of A, denoted κ(A), is defined to be
394
+ κ(A) = ||A|| · ||A−1||.
395
+ Remark 2.3 implies the following lemma.
396
+ Lemma 2.4. If A and B are equivalent companion matrices, then κ(A) = κ(B).
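+ In code, the condition number used throughout is simply the product of the two Frobenius
+ norms; a small helper of ours (assuming A is invertible) reads:
+ import numpy as np
+ def kappa(A):
+     return np.linalg.norm(A, 'fro') * np.linalg.norm(np.linalg.inv(A), 'fro')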
397
+ 3. Condition numbers of Fiedler matrices via the Hessenberg
398
+ characterization
399
+ The condition numbers of Fiedler companion matrices were first calculated by de Ter´an,
400
+ Dopico, and P´erez [3, Theorem 4.1]. In this section we demonstrate how a characterization
401
+ of Fiedler companion matrices via unit lower Hessenberg matrices, as given by Eastman,
402
+ et al. [4], provides an efficient way to obtain the condition numbers for Fiedler companion
403
+ matrices.
404
+ Our approach avoids the use of the consecution-inversion structure sequence,
405
+ described in [3, Definition 2.3], which was used in the original computation of these numbers.
406
+ The following theorem gives a characterization of the Fiedler companion matrices in
+ terms of unit lower Hessenberg matrices.
408
+ Theorem 3.1. [4, Corollary 4.4] If p(x) = xn + cn−1xn−1 + · · · + c1x + c0 is a polynomial
409
+ over R with n ≥ 2, then F is an n × n Fiedler companion matrix to p(x) if and only if F is
410
+ equivalent to a unit lower Hessenberg matrix as in (1) with the additional property that if
411
+ −ck is in position (i, j) then −ck+1 is in position (i − 1, j) or (i, j + 1) for 1 ≤ k ≤ n − 1.
412
+ An alternative way to describe the unit lower Hessenberg matrix in Theorem 3.1 is to say
413
+ that the variable entries of R in (1) form a lattice-path from the bottom-left corner to the
414
+ top-right corner of R. The first two matrices in Figure 1 are examples of Fiedler companion
415
+ matrices since the variable entries of R form a lattice-path. The last matrix in Figure 1 is
416
+ not a Fiedler companion matrix.
417
+ If F is a Fiedler companion matrix, the initial step size of F is the number of coefficients
418
+ other than c0 in the row or column containing both c0 and c1. The first matrix in Figure 1
419
+ has initial step size three and the second matrix in Figure 1 has initial step size one.
420
+
421
423
+ Remark 3.2. Note that equivalent matrices have the same initial step size since transpo-
424
+ sitions and permutation equivalence does not change the number of coefficients in the row
425
+ or column containing c0 and c1.
426
+ Using Theorem 3.1 and Lemma 2.2, one can describe the nonzero entries of the inverse
427
+ of a Fiedler companion matrix:
428
+ Lemma 3.3. [3, Theorem 3.2] Let p(x) = xn + cn−1xn−1 + cn−2xn−2 + · · · + c1x + c0 be
429
+ a polynomial over R with n ≥ 2 and c0 ̸= 0. Let F be a Fiedler companion matrix to p(x)
430
+ with an initial step size t. Then
431
+ (1) F −1 has t + 1 entries equal to −1/c0, −c1/c0, . . . , −ct/c0,
435
+ (2) F −1 has n − 1 − t entries equal to ct+1, ct+2, . . . , cn−1,
436
+ (3) F −1 has n − 1 entries equal to 1, and
437
+ (4) the remaining entries of F −1 are 0.
438
+ Proof. Since F is a companion matrix to p(x), by Theorem 2.1, the matrix F is equivalent
439
+ to a lower Hessenberg matrix C of the form (1). Since F and C are equivalent, it follows
440
+ that the matrices F −1 and C−1 are equivalent, so it suffices to show that the matrix C−1
441
+ satisfies conditions (1) − (4).
442
+ Since F is a Fiedler companion matrix, Theorem 3.1 implies that c1 is either directly
+ above c0 in C or directly to the right of c0. If c1 is to the right of c0 in C, then all other entries
+ in the column containing c0 are zero. Alternatively, if c1 is above c0, all entries to the right
+ of c0 in C are zero.
+ Lemma 2.2, which gives us the inverse of a unit lower Hessenberg matrix, applies to the
+ matrix C. By our above observation, the vector u or the vector y must be the zero vector.
+ Without loss of generality, let yT be zero, which means that −(1/c0)uyT − H = −H. If the
+ initial step size of F is t, then there will be t nonzero elements in u, and it will have the
+ form
+ u = (0, \dots, 0, -c_t, \dots, -c_1)^T.
+ By Lemma 2.2 the inverse of the matrix C then has the form
+ (2)  C^{-1} = \begin{bmatrix} \mathbf{0}^T & \mathbf{0}^T & -\tfrac{1}{c_0} \\ I_m & O & \mathbf{0} \\ -H & I_{n-m-1} & \tfrac{1}{c_0}u \end{bmatrix}.
+ From (2), we can describe the entries of C−1: m + (n − m − 1) = n − 1 entries are 1 (coming
+ from the submatrices Im and In−m−1); ct+1, . . . , cn−1, which all belong to the submatrix
+ −H; the entry −1/c0 from the top-right corner; and the entries −c1/c0, . . . , −ct/c0 from the term
+ (1/c0)u. Moreover, the rest of the entries of C−1 are zero. We have now shown that C−1, and
+ hence F −1, has the desired properties.
495
+
496
+ Remark 3.4. Lemma 3.3 mimics [3, Theorem 3.2]. As observed in [7], the initial step size
497
+ of a Fiedler companion matrix is equal to the number of initial consecutions or inversions
498
+ of the permutation associated with the Fiedler companion matrix, as defined in [3].
499
+ We can now compute the condition number for any Fiedler companion matrix. This
500
+ result first appeared in [3], but we can avoid the formal analysis of the permutation that
501
+ was used to construct the Fiedler companion matrix, as well as the associated concepts of
502
+ consecution and inversion of a permutation.
503
+ Theorem 3.5. [3, Theorem 4.1] Let p(x) = xn + cn−1xn−1 + cn−2xn−2 + · · · + c1x + c0 be
504
+ a polynomial over R with n ≥ 2 and c0 ̸= 0. Let F be a Fiedler companion matrix to p(x)
505
+ with an initial step size t. Then
506
+ κ(F)^2 = ||F||^2 · ( (n − 1) + (1 + |c_1|^2 + · · · + |c_t|^2)/|c_0|^2 + |c_{t+1}|^2 + · · · + |c_{n−1}|^2 ),
513
+ with
514
+ ||F||2 = (n − 1) + |c0|2 + |c1|2 + · · · + |cn−1|2.
515
+ Proof. This result follows from the fact that F is a unit sparse companion matrix (so it
516
+ contains n − 1 entries equal to 1 and the entries −c0, . . . , −cn−1), and Lemma 3.3, which
517
+ describes the entries of F −1.
518
+
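+ As a sanity check (our own numerical sketch, with illustrative coefficients), the formula can be
+ compared against a direct computation for the Frobenius companion matrix, whose initial step
+ size is t = n − 1:
+ import numpy as np
+ c = [2.0, -1.0, 3.0, 0.5]                                  # c0, ..., c3 with c0 != 0
+ n = len(c)
+ F = np.zeros((n, n)); F[:-1, 1:] = np.eye(n - 1); F[-1, :] = [-x for x in c]
+ lhs = (np.linalg.norm(F, 'fro') * np.linalg.norm(np.linalg.inv(F), 'fro')) ** 2
+ rhs = ((n - 1) + sum(x * x for x in c)) * ((n - 1) + (1 + sum(x * x for x in c[1:])) / c[0] ** 2)
+ assert np.isclose(lhs, rhs)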
519
+ Because the condition number κ(F) of a Fiedler companion matrix F depends only upon
520
+ the initial step size and not the permutation σ, we can derive the following corollary.
521
+ Corollary 3.6. [3, Corollary 4.3] Let p(x) = xn + cn−1xn−1 + cn−2xn−2 + · · · + c1x + c0 be
522
+ a polynomial over R with n ≥ 2 and c0 ̸= 0. Let A and B be Fiedler companion matrices
523
+ to the polynomial p(x). If the initial step size of both A and B is t, then κ(A) = κ(B).
524
+ Since condition numbers of Fiedler companion matrices depend on the initial step size,
525
+ let
526
+ St = {F | F is a Fiedler companion matrix to p(x) with initial step size t},
527
+ and define κ(t) = κ(F) for F ∈ St. We can now recover a result of [3] that allows us to
528
+ compare the condition numbers of Fiedler matrices while again avoiding any reference to
529
+ the permutation σ used to define a Fiedler matrix.
530
+ Corollary 3.7. [3, Corollary 4.5] Let p(x) = xn + cn−1xn−1 + cn−2xn−2 + · · · + c1x + c0 be
531
+ a polynomial over R with n ≥ 2 and c0 ̸= 0. Then
532
+ (1) if |c0| < 1, then κ(1) ≤ κ(2) ≤ · · · ≤ κ(n − 1);
533
+ (2) if |c0| = 1, then κ(1) = κ(2) = · · · = κ(n − 1); and
534
+ (3) if |c0| > 1, then κ(1) ≥ κ(2) ≥ · · · ≥ κ(n − 1).
535
+ Proof. Note that by Corollary 3.6, κ(A) is the same for all A ∈ St, so κ(t) is well-defined.
536
+ The conclusions follow from Theorem 3.5.
537
+
538
+
539
541
+ One of our new results is to compare the condition number of a Fiedler companion
+ matrix of p(x) to the condition number of other companion matrices of p(x). In particular,
+ if a Fiedler companion matrix F has a smaller condition number than another companion
+ matrix C to the same polynomial p(x), then the ratio κ(C)/κ(F) can be bounded. This result is
546
+ similar in spirit to [3, Theorem 4.12].
547
+ Theorem 3.8. Let p(x) = xn + cn−1xn−1 + · · · + c1x + c0 be a polynomial over R with
548
+ n ≥ 2, and c0 ≠ 0. Let F be a Fiedler companion matrix to p(x). Further, suppose C is
+ any companion matrix to p(x) whose lower Hessenberg form is
+ C = \begin{bmatrix} \mathbf{0} & I_m & O \\ u_C & H_C & I_{n-m-1} \\ -c_0 & y_C^T & \mathbf{0}^T \end{bmatrix}
+ such that either u_C or y_C^T is the zero vector. If κ(F) ≤ κ(C), then
+ 1 ≤ κ(C)/κ(F) ≤ κ(F).
569
+ Proof. The conclusion that 1 ≤ κ(C)/κ(F) is immediate from the hypothesis that κ(F) ≤ κ(C).
+ By Theorem 3.1 and Lemma 2.4, we can assume F is in unit lower Hessenberg form. As
+ such, let
+ F = \begin{bmatrix} \mathbf{0} & I_l & O \\ u_F & H_F & I_{n-l-1} \\ -c_0 & y_F^T & \mathbf{0}^T \end{bmatrix}
+ and let t be the initial step size of F. We want to show that
+ ( ||C|| · ||C^{-1}|| ) / ( ||F|| · ||F^{-1}|| ) ≤ ||F|| · ||F^{-1}||.
+ Since C and F are unit sparse companion matrices, ||C|| = ||F||. It suffices to show that
+ ||C^{-1}|| ≤ ||F|| · ||F^{-1}||^2.
+ Using equivalence, we may assume without loss of generality that u_C = 0. By Lemma 2.2,
+ C^{-1} = \begin{bmatrix} \tfrac{1}{c_0} y_C^T & \mathbf{0}^T & -\tfrac{1}{c_0} \\ I_m & O & \mathbf{0} \\ -H_C & I_{n-m-1} & \mathbf{0} \end{bmatrix}
+ since u_C = 0. Then
+ (3)  ||C^{-1}||^2 = (n − 1) + (1/c_0)^2 + \sum_{c_i \in y_C^T} |c_i/c_0|^2 + \sum_{c_k \in H_C} |c_k|^2,
+ where c ∈ H (resp. c ∈ y) means −c is an entry in H (resp. y). On the other hand, using
+ Lemma 3.3,
+ (4)  ||F||^2 · ||F^{-1}||^4 = \Big( (n − 1) + \sum_{i=0}^{n-1} |c_i|^2 \Big) \Big( (n − 1) + (1/c_0)^2 + \sum_{i=1}^{t} |c_i/c_0|^2 + \sum_{j=t+1}^{n-1} |c_j|^2 \Big)^2.
+ We want to show that ||C^{-1}|| ≤ ||F|| · ||F^{-1}||^2, which is equivalent to showing that ||C^{-1}||^2 ≤
+ ||F||^2 · ||F^{-1}||^4. To do this, for each of the four different summands in (3), we show that
+ there exist distinct terms in ||F||^2 · ||F^{-1}||^4 that are greater than or equal to the summand.
+ Here we rely on the fact that there are no negative summands in (4).
+ Partially expanding out (4), we have
+ ||F||^2 · ||F^{-1}||^4 = (n − 1)^3 + (n − 1)^2 (1/c_0)^2 + (n − 1)\Big(\sum_{i=0}^{n-1}|c_i|^2\Big)(1/c_0)^2 + (n − 1)^2 \sum_{j=0}^{n-1} |c_j|^2 + other non-negative terms.
+ Consequently,
+ ||C^{-1}||^2 = (n − 1) + (1/c_0)^2 + \sum_{c_i \in y_C^T} |c_i/c_0|^2 + \sum_{c_k \in H_C} |c_k|^2
+ ≤ (n − 1)^3 + (n − 1)^2 (1/c_0)^2 + (n − 1)\Big(\sum_{i=0}^{n-1}|c_i|^2\Big)(1/c_0)^2 + (n − 1)^2 \sum_{j=0}^{n-1} |c_j|^2
+ ≤ ||F||^2 · ||F^{-1}||^4.
728
+ ||F||2 · ||F −1||4.
729
+
730
+ 4. Striped Companion Matrices
731
+ In this section we explore a particular class of companion matrices known as striped
732
+ companion matrices, which were introduced in [4].
733
+ A striped companion matrix to a
734
+ polynomial p(x) = xn + cn−1xn−1 + · · · + c1x + c0 has the property that the coefficients
735
+ −c0, −c1, . . . , −cn−1 form horizontal stripes in the matrix. In particular, if t = (t1, t2, . . . , tr)
736
+ is an ordered r-tuple of positive integers with t1 +t2 +· · ·+tr = n, and t1 ≥ ti for 2 ≤ i ≤ n,
737
+ then we define the striped companion matrix Sn(t) to be the companion matrix of unit Hes-
738
+ senberg form
739
+ (5)  S_n(t) = \begin{bmatrix} \mathbf{0} & I_{t_1-1} & O \\ R & \begin{matrix} I_{n-t_1} \\ \mathbf{0}^T \end{matrix} \end{bmatrix}
751
+ with the (n − t1 + 1) × t1 matrix R having r nonzero rows and with the ith nonzero row of
752
+ R having ti variables in the first ti positions and ti − 1 zero rows immediately above it in
753
+
754
+ R, for 1 < i ≤ r. Note that this implies the first row of R is a nonzero row with t1 leading
+ nonzero entries. For example,
+ S_7(3, 2, 2) = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ -c_4 & -c_5 & -c_6 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ -c_2 & -c_3 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ -c_0 & -c_1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, \quad
+ S_8(3, 3, 2) = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ -c_5 & -c_6 & -c_7 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ -c_2 & -c_3 & -c_4 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ -c_0 & -c_1 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.
882
+ As the next theorem shows, in some cases the striped companion matrices can have a
+ better condition number than a Fiedler companion matrix.
884
+ Theorem 4.1. Suppose n = k(m + 1) for some positive k, m ∈ Z and p(x) = xn +
885
+ cn−1xn−1 + · · · + c1x + c0 with c0 = 1, c1, . . . , cn−1 ∈ R. There exists a striped companion
886
+ matrix S = Sn(k, . . . , k) for p(x) such that κ(S) ≤ κ(F) for every Fiedler companion matrix
887
+ F if and only if
888
+ (6)  \sum_{j=1}^{m} \Big( \sum_{i=1}^{k-1} |c_i c_{jk} − c_{jk+i}|^2 \Big) ≤ \sum_{j=1}^{m} \Big( \sum_{i=1}^{k-1} |c_{jk+i}|^2 \Big).
907
+ Proof. Let S = Sk(m+1)(k, . . . , k), and let F be a Fiedler companion matrix. Since ||S|| =
908
+ ||F|| as noted in Remark 2.3, it suffices to show that ||S−1|| ≤ ||F −1|| if and only if equation
909
+ (6) holds. By Lemma 2.2,
910
+ S−1 =
911
+
912
+ 
913
+ −c1
914
+ −c2
915
+ . . .
916
+ −ck−1
917
+ 0T
918
+ −1
919
+ Ik−1
920
+ O
921
+ 0
922
+ −c1cmk + cmk+1
923
+ −c2cmk + cmk+2
924
+ . . .
925
+ −ck−1cmk + c(m+1)k−1
926
+ 0
927
+ 0
928
+ . . .
929
+ 0
930
+ ...
931
+ ...
932
+ . . .
933
+ ...
934
+ ...
935
+ ...
936
+ . . .
937
+ ...
938
+ 0
939
+ 0
940
+ . . .
941
+ 0
942
+ −c1c2k + c2k+1
943
+ −c2c2k + c2k+2
944
+ . . .
945
+ −ck−1c2k + c3k−1
946
+ 0
947
+ 0
948
+ . . .
949
+ 0
950
+ ...
951
+ ...
952
+ . . .
953
+ ...
954
+ 0
955
+ 0
956
+ . . .
957
+ 0
958
+ −c1ck + ck+1
959
+ −c2ck + ck+2
960
+ . . .
961
+ −ck−1ck + c2k−1
962
+ 0
963
+ 0
964
+ . . .
965
+ 0
966
+ ...
967
+ ...
968
+ . . .
969
+ ...
970
+ 0
971
+ 0
972
+ . . .
973
+ 0
974
+ Imk
975
+ −cmk
976
+ 0
977
+ ...
978
+ ...
979
+ 0
980
+ −c2k
981
+ 0
982
+ ...
983
+ 0
984
+ −ck
985
+ 0
986
+ ...
987
+ 0
988
+
989
+ 
990
+ .
991
+ Thus
992
+ ||S−1||2 = n +
993
+ k−1
994
+
995
+ j=1
996
+ |cj|2 +
997
+ m
998
+
999
+ j=1
1000
+ |cjk|2 +
1001
+ m
1002
+
1003
+ j=1
1004
+ �k−1
1005
+
1006
+ i=1
1007
+ |cicjk − cjk+i|2
1008
+
1009
+ .
1010
+
1011
+ 10
1012
+ MICHAEL COX, KEVIN N. VANDER MEULEN, ADAM VAN TUYL, AND JOSEPH VOSKAMP
1013
+ By Theorem 3.5,
1014
+ ||F −1||2 = n +
1015
+ k−1
1016
+
1017
+ j=1
1018
+ |cj|2 +
1019
+ m
1020
+
1021
+ j=1
1022
+ |cjk|2 +
1023
+ m
1024
+
1025
+ j=1
1026
+ �k−1
1027
+
1028
+ i=1
1029
+ |cjk+i|2
1030
+
1031
+ .
1032
+ Therefore κ(S) ≤ κ(F) if and only if
1033
+ m
1034
+
1035
+ j=1
1036
+ �k−1
1037
+
1038
+ i=1
1039
+ |cicjk − cjk+i|2
1040
+
1041
+
1042
+ m
1043
+
1044
+ j=1
1045
+ �k−1
1046
+
1047
+ i=1
1048
+ |cjk+i|2
1049
+
1050
+ .
1051
+
1052
+ We can deduce the following corollary.
1053
+ Corollary 4.2. Suppose n = k(m + 1) for some m, k ∈ Z and p(x) = xn + cn−1xn−1 +
1054
+ · · · + c1x + c0 with c0 = 1, c1, . . . , cn−1 ∈ R. Suppose F is any Fiedler companion matrix
1055
+ for p(x). If
1056
+ |cicjk − cjk+i| ≤ |cjk+i|, for 1 ≤ j ≤ m and 1 ≤ i ≤ k − 1,
1057
+ then there exists a striped companion matrix S = Sn(k, . . . , k), such that κ(S) ≤ κ(F).
1058
+ Example 4.3. Let
1059
+ p(x) = x9 + 8x8 + 6x7 + 2x6 + 5x5 + 8x4 + 3x3 + 3x2 + 2x + 1.
1060
+ Note that the inequalities in Corollary 4.2 hold. Let F be any Fiedler companion to p(x)
1061
+ and consider the striped companion matrix S = S9(3, 3, 3), i.e.,
1062
+ S_9(3, 3, 3) = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ -2 & -6 & -8 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ -3 & -8 & -5 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ -1 & -2 & -3 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.
+ Then ||S|| = ||F|| = \sqrt{224}, but κ(S) = \sqrt{224}\,\sqrt{63} < κ(F) = \sqrt{224}\,\sqrt{224}.
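+ The comparison in Example 4.3 is easy to reproduce numerically; the sketch below (ours,
+ transcribing the matrices above) builds S = S9(3, 3, 3) and the Frobenius companion matrix F
+ of the same p(x) and confirms the shared characteristic polynomial and the smaller condition
+ number of S.
+ import numpy as np
+ coeffs = [1, 8, 6, 2, 5, 8, 3, 3, 2, 1]                    # x^9 + 8x^8 + ... + 2x + 1
+ c = coeffs[::-1][:-1]                                      # c0, ..., c8
+ n = 9
+ F = np.zeros((n, n)); F[:-1, 1:] = np.eye(n - 1); F[-1, :] = [-x for x in c]
+ S = np.zeros((n, n)); S[np.arange(n - 1), np.arange(1, n)] = 1.0   # superdiagonal of ones
+ S[2, :3] = [-2, -6, -8]                                    # -c6, -c7, -c8
+ S[5, :3] = [-3, -8, -5]                                    # -c3, -c4, -c5
+ S[8, :3] = [-1, -2, -3]                                    # -c0, -c1, -c2
+ kappa = lambda A: np.linalg.norm(A, 'fro') * np.linalg.norm(np.linalg.inv(A), 'fro')
+ assert np.allclose(np.poly(S), coeffs) and np.allclose(np.poly(F), coeffs)
+ assert kappa(S) < kappa(F)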
1160
+ One extreme example of how the inequalities in Corollary 4.2 can be met is if c0 = 1 and
1161
+ the striped companion matrix in line (5) has rank(R) = 1. In this case, the inequalities are
1162
+ trivially met as described in the following corollary. A more general result can be developed
1163
+ for striped companion matrices with differing stripe lengths; e.g., see [1, Section 4.2].
1164
+ Corollary 4.4. Given p(x) = xn + cn−1xn−1 + cn−2xn−2 + · · · + c1x + c0 with c0 = 1, and
1165
+ c1, . . . , cn−1 ∈ R, let S be a striped companion matrix to the polynomial p(x). If
1166
+ S = \begin{bmatrix} \mathbf{0} & I_m & O \\ R & \begin{matrix} I_{n-m-1} \\ \mathbf{0}^T \end{matrix} \end{bmatrix}
1176
+ with rank(R) = 1, then κ(S) ≤ κ(F) for any Fiedler companion matrix F.
1177
+
1178
1180
+ Proof. This result follows from Corollary 4.2 by observing that |cicjk − cjk+i| = 0 for all
1181
+ 1 ≤ j ≤ m and 1 ≤ i ≤ k − 1, if and only if rank(R) = 1. In particular, rank(R) = 1
1182
+ if and only if every 2 × 2 submatrix of R has determinant zero, which is true if and only if
+ |c_i c_{jk} − c_{jk+i}| = 0 for 1 ≤ j ≤ m and 1 ≤ i ≤ k − 1. Note that we are using the fact that
+ \begin{bmatrix} -c_{jk} & -c_{jk+i} \\ -c_0 & -c_i \end{bmatrix}
+ is a 2 × 2 submatrix of R and c0 = 1.
1191
+
1192
+ Example 4.5. Let b, k ∈ R and consider the polynomial p(x) = x6 + (bk3)x5 + (bk2)x4 +
1193
+ (bk2)x3 + (bk)x2 + kx + 1. If
1194
+ S = S6(2, 2, 2) =
1195
+
1196
+ 
1197
+ 0
1198
+ 1
1199
+ 0
1200
+ 0
1201
+ 0
1202
+ 0
1203
+ −bk2
1204
+ −bk3
1205
+ 1
1206
+ 0
1207
+ 0
1208
+ 0
1209
+ 0
1210
+ 0
1211
+ 0
1212
+ 1
1213
+ 0
1214
+ 0
1215
+ −bk
1216
+ −bk2
1217
+ 0
1218
+ 0
1219
+ 1
1220
+ 0
1221
+ 0
1222
+ 0
1223
+ 0
1224
+ 0
1225
+ 0
1226
+ 1
1227
+ −1
1228
+ −k
1229
+ 0
1230
+ 0
1231
+ 0
1232
+ 0
1233
+
1234
+ 
1235
+ and F is any Fiedler companion matrix for p(x), then
1236
+ �κ(F)
1237
+ κ(S)
1238
+ �2
1239
+ = b2k6 + b2k4 + b2k4 + b2k2 + k2 + 6
1240
+ b2k4 + b2k2 + k2 + 6
1241
+ .
1242
+ In this case, for sufficiently large k,
1243
+ κ(F)
1244
+ κ(S) ≈ k
1245
+ demonstrating a significantly better condition number for S compared to any Fiedler com-
1246
+ panion matrix.
1247
+ As shown in Corollary 4.4, if the rank of the submatrix R in the striped companion matrix
1248
+ S has rank(R) = 1, then the inequality κ(S) ≤ κ(F) holds for any Fiedler companion matrix
1249
+ F. Note that in the striped companion matrix given in Example 4.5, the corresponding
1250
+ submatrix R has rank one. Observe also that we can write p(x) has
1251
+ p(x) = q(x) + (bk)x2q(x) + (bk2)x4q(x) + x6 with q(x) = 1 + kx.
1252
+ This generalizes: if the matrix S in Corollary 4.4 has rank(R) = 1, then p(x) = xn+q(x)f(x)
1253
+ for some polynomial q(x) with deg(q(x)) = m and deg(f(x)) = n − m − 1. Moreover,
1254
+ Corollary 4.4 can be improved by giving an estimate on κ(F)/κ(S) for any Fiedler companion
1256
+ matrix F.
1257
+ Theorem 4.6. Suppose n = k(m + 1) and p(x) = q(x) + b1xkq(x) + b2x2kq(x) + · · · +
1258
+ bmxmkq(x) + x(m+1)k with
1259
+ q(x) = ak−1xk−1 + ak−2xk−2 + · · · + a1x + 1.
1260
+ Let S = Sn(k, k, . . . , k) and F be any Fiedler companion matrix to p(x). If (b2
1261
+ 1 + · · · + b2
1262
+ m)
1263
+ is sufficiently large, then
1264
+ �κ(F)
1265
+ κ(S)
1266
+ �2
1267
+ ≈ (a2
1268
+ 1 + · · · + a2
1269
+ k−1 + 1),
1270
+
1271
+ 12
1272
+ MICHAEL COX, KEVIN N. VANDER MEULEN, ADAM VAN TUYL, AND JOSEPH VOSKAMP
1273
+ or if (a2
1274
+ 1 + · · · + a2
1275
+ k−1) is sufficiently large, then
1276
+ �κ(F)
1277
+ κ(S)
1278
+ �2
1279
+ ≈ (1 + b2
1280
+ 1 + · · · + b2
1281
+ m).
1282
+ Proof. By Remark 2.3, κ(F )
1283
+ κ(S) = ||F −1||
1284
+ ||S−1||. By Lemma 2.2,
1285
+ ||S−1||2 = a2
1286
+ 1 + · · · + a2
1287
+ k−1 + b2
1288
+ 1 + · · · + b2
1289
+ m + n.
1290
+ By Theorem 3.5 we can determine that
1291
+ ||F −1||2 = (1 + b2
1292
+ 1 + · · · + b2
1293
+ m)(a2
1294
+ 1 + · · · + a2
1295
+ k−1) + (b2
1296
+ 1 + · · · + b2
1297
+ m) + n.
1298
+ Therefore,
1299
+ �κ(F)
1300
+ κ(S)
1301
+ �2
1302
+ = (1 + b2
1303
+ 1 + · · · + b2
1304
+ m)(a2
1305
+ 1 + · · · + a2
1306
+ k−1) + (b2
1307
+ 1 + · · · + b2
1308
+ m) + n
1309
+ (a2
1310
+ 1 + · · · + a2
1311
+ k−1) + (b2
1312
+ 1 + · · · + b2m) + n
1313
+ and the result follows.
1314
+
1315
+ 5. Generalized companion matrices: a case study
1316
+ In the previous sections, we focused on the condition numbers of unit sparse companion
1317
+ matrices. In this section, we initiate an investigation into the condition numbers of a family
1318
+ of matrices that are not companion matrices, but have properties similar to companion
1319
+ matrices. To date, there appears to be little work done on this approach, so the work in
1320
+ this section can be seen as providing a proof-of-concept for future projects. These results can
1321
+ also be viewed in the broader context of developing the properties of generalized companion
1322
+ matrices (e.g., see [4, 6]). Roughly speaking, given a polynomial p(x) = xn + cn−1xn−1 +
1323
+ · · ·+c1x1 +c0, a generalized companion matrix A is a matrix whose entries are polynomials
1324
+ in the c0, . . . , cn and whose characteristic polynomial is p(x). See [6] for more explicit detail.
1325
+ Instead of considering the general case, we focus on a particular family of matrices and
1326
+ their condition numbers. This case study shows that the condition numbers can improve
1327
+ on those of Frobenius (or Fiedler) companion matrices under some extra hypotheses.
1328
+ We now define our special family. Let p(x) = xn+cn−1xn−1+· · ·+c1x+c0 be a polynomial
1329
+ over R with n ≥ 2 and let a ∈ R be any real number. Fix an integer ℓ ∈ {3, . . . , n − 2} and
1330
+ let
1331
+ aT
1332
+ =
1333
+ (−cn−1, −cn−2, . . . , −cℓ+1) and bT = (−cℓ−2, −cℓ−3, . . . , −c1).
1334
+ Then let
1335
+ (7)
1336
+ Mn(a, ℓ) =
1337
+
1338
+ 
1339
+ a
1340
+ In−ℓ−1
1341
+ O
1342
+ O
1343
+ −cℓ + a
1344
+ W
1345
+ I2
1346
+ O
1347
+ −cℓ−1 + acn−1
1348
+ b
1349
+ O
1350
+ O
1351
+ Iℓ−2
1352
+ −c0
1353
+ O
1354
+ O
1355
+ O
1356
+
1357
+ 
1358
+ .
1359
+
1360
+ CONDITION NUMBERS OF HESSENBERG COMPANION MATRICES
1361
+ 13
1362
+
1363
+ 
1364
+ −c6
1365
+ 1
1366
+ 0
1367
+ 0
1368
+ 0
1369
+ 0
1370
+ 0
1371
+ −c5
1372
+ 0
1373
+ 1
1374
+ 0
1375
+ 0
1376
+ 0
1377
+ 0
1378
+ −c4 + a
1379
+ 0
1380
+ 0
1381
+ 1
1382
+ 0
1383
+ 0
1384
+ 0
1385
+ −c3 + ac6
1386
+ −a
1387
+ 0
1388
+ 0
1389
+ 1
1390
+ 0
1391
+ 0
1392
+ −c2
1393
+ 0
1394
+ 0
1395
+ 0
1396
+ 0
1397
+ 1
1398
+ 0
1399
+ −c1
1400
+ 0
1401
+ 0
1402
+ 0
1403
+ 0
1404
+ 0
1405
+ 1
1406
+ −c0
1407
+ 0
1408
+ 0
1409
+ 0
1410
+ 0
1411
+ 0
1412
+ 0
1413
+
1414
+ 
1415
+ Figure 4. The matrix M7(a, 4)
1416
+ where W is a 2 × (n − ℓ − 1) matrix having W2,1 = −a and zeroes in every other entry.
1417
+ Informally, the matrix Mn(a, ℓ) is constructed by starting with the Frobenius companion
1418
+ matrix which has all the coefficients of p(x) in the first column. Then we fix a row that is
1419
+ neither the top row nor one of the bottom two rows (this corresponds to picking the ℓ), and
1420
+ then adding a to cℓ in the (n − ℓ)-th row, and −a in the column to the right and one below.
1421
+ We then also add acn−1 to the first entry in the (n − ℓ + 1)-th row. Note that when a = 0,
1422
+ Mn(0, ℓ) is equivalent to the Frobenius companion matrix. We can thus view Mn(a, ℓ) as a
1423
+ perturbation of the Frobenius companion matrix when a ̸= 0. As an example, the matrix
1424
+ M7(a, 4) is given in Figure 4.
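+ A quick way to see that the perturbation is harmless is to build the matrix directly; the
+ sketch below (ours, with illustrative coefficients and a) assembles M7(a, 4) exactly as in
+ Figure 4 and checks that its characteristic polynomial is still p(x).
+ import numpy as np
+ c = [1.0, 2.0, -1.0, 3.0, 0.5, -2.0, 4.0]                  # illustrative c0, ..., c6
+ a, ell, n = 1.5, 4, len(c)
+ M = np.zeros((n, n))
+ M[:-1, 1:] = np.eye(n - 1)                                 # superdiagonal of ones
+ M[:, 0] = [-c[n - 1 - i] for i in range(n)]                # first column: -c6, ..., -c0
+ M[n - ell - 1, 0] += a                                     # -c_l      ->  -c_l + a
+ M[n - ell, 0] += a * c[n - 1]                              # -c_{l-1}  ->  -c_{l-1} + a c_{n-1}
+ M[n - ell, 1] = -a                                         # the extra -a entry
+ assert np.allclose(np.poly(M), [1.0] + c[::-1])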
1425
+ We wish to compare the condition number of Mn(a, ℓ) with the Frobenius (and Fiedler)
1426
+ companion matrices. In some cases our new matrix Mn(a, ℓ) can provide us with a smaller
1427
+ condition number. The next lemma gives the inverse of Mn(a, ℓ) and shows that the char-
1428
+ acteristic polynomial of Mn(a, ℓ) is p(x).
1429
+ Lemma 5.1. Let p(x) = xn + cn−1xn−1 + cn−2xn−2 + · · · + c1x + c0 be a polynomial over
1430
+ R, with n ≥ 2 and c0 ̸= 0. Let a ∈ R and ℓ ∈ {3, . . . , n − 2}, and let M = Mn(a, ℓ) be
1431
+ constructed from p(x) as above. Then
1432
+ (i) the characteristic polynomial of M is p(x), and
1433
+ (ii) if c0 ̸= 0, then
1434
+ M^{-1} = \frac{1}{c_0} \begin{bmatrix} \mathbf{0}^T & \mathbf{0}^T & \mathbf{0}^T & -1 \\ c_0 I_{n-\ell-1} & O & O & \mathbf{a} \\ -c_0 W & c_0 I_2 & O & \begin{matrix} -c_\ell + a \\ -c_{\ell-1} \end{matrix} \\ O & O & c_0 I_{\ell-2} & \mathbf{b} \end{bmatrix}.
1458
+ Proof. (i) We employ the fact that the determinant of a matrix is a linear function of its
1459
+ rows. In particular, if M = Mn(a, ℓ), we observe that row n − ℓ of xIn − M can be written
1460
+
1461
+ 14
1462
+ MICHAEL COX, KEVIN N. VANDER MEULEN, ADAM VAN TUYL, AND JOSEPH VOSKAMP
1463
+ as u + av for some vectors u and v such that u is not a function of a. Row n − ℓ + 1 of
1464
+ xIn − M can also be written in a similar manner. Let k = n − ℓ. Thus applying linearity to
1465
+ row k gives us
1466
+ det(xIn − M) = det
1467
+
1468
+
1469
+
1470
+
1471
+
1472
+
1473
+
1474
+ xIn −
1475
+
1476
+ 
1477
+ a
1478
+ In−ℓ−1
1479
+ O
1480
+ O
1481
+ −cℓ
1482
+ W
1483
+ I2
1484
+ O
1485
+ −cℓ−1 + acn−1
1486
+ b
1487
+ O
1488
+ O
1489
+ Iℓ−2
1490
+ −c0
1491
+ O
1492
+ O
1493
+ O
1494
+
1495
+ 
1496
+
1497
+
1498
+
1499
+
1500
+
1501
+
1502
+
1503
+ + a(−1)xℓ.
1504
+ (8)
1505
+ Note that the term a(−1)xℓ in (8) comes from computing the determinant of the matrix A′
1506
+ formed by replacing the k-th row of the matrix xIn−M with the row
1507
+
1508
+ −a
1509
+ 0
1510
+ · · ·
1511
+ 0
1512
+
1513
+ . Do-
1514
+ ing a row expansion along the k-th row of A′, the determinant of A′ is (−1)k+1(−a)det(A′′)
1515
+ where A′′ is a block lower diagonal matrix with diagonal blocks D1 and D2. Furthermore,
1516
+ D1 is a (k−1)×(k−1) lower triangular matrix with −1 on all the diagonal entries, and D2 is
1517
+ a ℓ × ℓ upper triangular matrix with x on all the diagonal entries. So det(A′′) = (−1)k−1xℓ,
1518
+ and hence det(A′) = (−1)k+1(−a)(−1)k−1xℓ = (−a)xℓ, as desired.
1519
+ We now apply linearity to row k + 1 in the matrix that appears on the right-hand side
1520
+ of (8); in particular, a similar argument shows that the right-hand side (8) is equal to
1521
+ (9)
1522
+ det
1523
+
1524
+
1525
+
1526
+
1527
+
1528
+
1529
+
1530
+ xI −
1531
+
1532
+ 
1533
+ a
1534
+ Ik−1
1535
+ O
1536
+ O
1537
+ −cℓ
1538
+ O
1539
+ I2
1540
+ O
1541
+ −cℓ−1
1542
+ b
1543
+ O
1544
+ O
1545
+ Iℓ−2
1546
+ −c0
1547
+ O
1548
+ O
1549
+ O
1550
+
1551
+ 
1552
+
1553
+
1554
+
1555
+
1556
+
1557
+
1558
+
1559
+ +a(−1)xℓ +acn−1(−1)xℓ−1 +a(x+cn−1)xℓ−1.
1560
+ Note that the first summand in (9) is the characteristic polynomial of a Frobenius companion
1561
+ matrix of p(x), and hence is p(x). Thus, (9) reduces to
1562
+ p(x) + a(−1)xℓ + acn−1(−1)xℓ−1 + a(x + cn−1)xℓ−1 = p(x).
1563
+ (ii) A direct multiplication will show that the given matrix is the inverse M.
1564
+
1565
+ Because both Mn(a, ℓ) and its inverse are known, we are able to compute its condition
1566
+ number. In the next lemma, instead of providing the general formula, we compute the
1567
+ condition number under the extra assumption that c0 = 1 in the polynomial p(x).
1568
+ Lemma 5.2. Let p(x) = xn + cn−1xn−1 + cn−2xn−2 + · · · + c1x + c0 be a polynomial over
1569
+ R, with n ≥ 2, and suppose that c0 = 1.
1570
+ Let a ∈ R and ℓ ∈ {3, . . . , n − 2}, and let
1571
+ M = Mn(a, ℓ). Then
1572
+ κ(M)^2 = ( v + a^2 + (a − c_ℓ)^2 + (ac_{n−1} − c_{ℓ−1})^2 ) ( v + a^2 + (a − c_ℓ)^2 + c_{ℓ−1}^2 + 1 )
+ with
+ v = n − c_{ℓ−1}^2 − c_ℓ^2 + \sum_{i=1}^{n-1} c_i^2.
1587
+
1588
1590
+ The next result illustrates the desired proof-of-concept. In particular, the result shows
1591
+ that in special cases, the condition number of the matrix Mn(a, ℓ), which has properties
1592
+ similar to a companion matrix, has a condition number smaller than any Fiedler compan-
1593
+ ion matrix. Although the scope of this result is limited, it does suggest that generalized
1594
+ companion matrices, and in particular perturbations of the Frobenius companion matrix,
1595
+ can provide better condition numbers in some cases.
1596
+ Theorem 5.3. Let n ≥ 2, and fix ℓ ∈ {3, . . . , n − 2} and t ∈ R. Set
1597
+ p(x) = xn + txn−1 + txℓ + t2xℓ−1 + 1.
1598
+ Let M = Mn(t, ℓ). Then, for any Fiedler companion matrix F of p(x),
+ κ(F)^2 / κ(M)^2 = (n + 2t^2 + t^4)^2 / ( (n + 2t^2)(n + 1 + 2t^2 + t^4) ).
+ In particular, for t sufficiently large, κ(F)/κ(M) ≈ t/\sqrt{2}.
+ Proof. By Lemma 5.2,
+ κ(M)^2 = ( 1 + t^2 + (n − 1) + a^2 + (a − t)^2 + (at − t^2)^2 ) ( 1 + t^2 + (n − 1) + a^2 + (a − t)^2 + t^4 + 1 ).
+ Setting a = t gives κ(M)^2 = (n + 2t^2)(n + 1 + 2t^2 + t^4). We use Theorem 3.5 to compute
+ κ(F)^2. Note that since c0 = 1, κ(F) is independent of the initial step size of F. Hence
+ κ(F) = (n − 1) + 1 + t^4 + t^2 + t^2 = n + 2t^2 + t^4.
+ Thus we have
+ κ(F)^2 / κ(M)^2 = (n + 2t^2 + t^4)^2 / ( (n + 2t^2)(n + 1 + 2t^2 + t^4) ).
+ The right-hand side grows like t^2/2 as t → ∞, which implies the final statement.
1628
+
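+ The improvement is easy to observe numerically; a sketch of ours (choosing n = 8, l = 4 and
+ t = 2 purely for illustration) builds M = M_n(t, l) and the Frobenius companion matrix F for
+ p(x) = x^8 + t x^7 + t x^4 + t^2 x^3 + 1 and confirms the smaller condition number of M:
+ import numpy as np
+ n, ell, t = 8, 4, 2.0
+ c = [0.0] * n; c[0], c[ell], c[ell - 1], c[n - 1] = 1.0, t, t * t, t   # c0, ..., c_{n-1}
+ F = np.zeros((n, n)); F[:-1, 1:] = np.eye(n - 1); F[-1, :] = [-x for x in c]
+ M = np.zeros((n, n)); M[:-1, 1:] = np.eye(n - 1)
+ M[:, 0] = [-c[n - 1 - i] for i in range(n)]
+ M[n - ell - 1, 0] += t; M[n - ell, 0] += t * c[n - 1]; M[n - ell, 1] = -t
+ kappa = lambda A: np.linalg.norm(A, 'fro') * np.linalg.norm(np.linalg.inv(A), 'fro')
+ assert np.allclose(np.poly(M), np.poly(F)) and kappa(M) < kappa(F)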
1629
+ The following result gives another case where we can make a matrix with smaller condi-
1630
+ tion number than any other Fiedler companion matrix, providing additional evidence that
1631
+ generalized companion matrices may be of interest.
1632
+ Theorem 5.4. Let n ≥ 2, and fix ℓ ∈ {3, . . . , n−2}. Let p(x) = xn+cn−1xn−1+· · ·+c1x+c0
1633
+ with c0 = 1, and (cℓcn−1)2 < 2cℓ−1cℓcn−1 − 1. Let M = Mn(cℓ, ℓ). Then κ(M) < κ(F) for
1634
+ every Fiedler companion matrix F of p(x).
1635
+ Proof. Let v = n − c_ℓ^2 − c_{ℓ−1}^2 + \sum_{i=1}^{n-1} c_i^2. Because c0 = 1, by Theorem 3.5 all Fiedler
+ companion matrices F have condition number
+ κ(F) = v + c_ℓ^2 + c_{ℓ−1}^2.
+ By Lemma 5.2, with a = cℓ,
+ κ(M)^2 = ( v + c_ℓ^2 + (c_ℓ c_{n−1} − c_{ℓ−1})^2 ) ( v + c_ℓ^2 + c_{ℓ−1}^2 + 1 )
+ = ( v + c_ℓ^2 + c_{ℓ−1}^2 + (c_ℓ c_{n−1})^2 − 2c_{ℓ−1} c_ℓ c_{n−1} ) ( v + c_ℓ^2 + c_{ℓ−1}^2 + 1 ).
+ If we set w = v + c_ℓ^2 + c_{ℓ−1}^2, then κ(M)^2 = (w − y)(w + 1) with y = 2c_{ℓ−1} c_ℓ c_{n−1} − (c_ℓ c_{n−1})^2.
+ But y > 1 by hypothesis, thus κ(M)^2 < w^2 = κ(F)^2.
1662
+
1663
+
1664
1666
+ References
1667
+ [1] M. Cox, On conditions numbers of companion matrices, M.Sc. Thesis, McMaster University, 2018.
1668
+ [2] L. Deaett, J. Fischer, C. Garnett, K.N. Vander Meulen, Non-sparse companion matrices, Electron. J.
1669
+ Linear Algebra 35 (2019) 223–247.
1670
+ [3] F. de Ter´an, F.M. Dopico, J. P´erez, Condition numbers for inversion of Fiedler companion matrices,
1671
+ Linear Algebra Appl. 439 (2013) 944–981.
1672
+ [4] B. Eastman, I.J. Kim, B.L. Shader, K.N. Vander Meulen, Companion matrix patterns, Linear Algebra
1673
+ Appl. 463 (2014) 255–272.
1674
+ [5] M. Fiedler, A note on companion matrices, Linear Algebra Appl. 372 (2003) 325–331.
1675
+ [6] C. Garnett, B.L. Shader, C.L. Shader, P. van den Driessche, Characterization of a family of generalized
1676
+ companion matrices, Linear Algebra Appl. 498 (2016) 360–365.
1677
+ [7] K.N. Vander Meulen, T. Vanderwoerd, Bounds on polynomial roots using intercyclic companion matri-
1678
+ ces, Linear Algebra Appl. 539 (2018) 94–116.
1679
+ Unit 202 - 133 Herkimer Street, Hamilton, ON, L8P 2H3, Canada
1680
+ Email address: [email protected]
1681
+ Department of Mathematics, Redeemer University College, Ancaster, ON, L9K 1J4, Canada
1682
+ Email address: [email protected]
1683
+ Department of Mathematics and Statistics, McMaster University, Hamilton, ON, L8S 4L8,
1684
+ Canada
1685
+ Email address: [email protected]
1686
+ Department of Mathematics and Statistics, McMaster University, Hamilton, ON, L8S 4L8,
1687
+ Canada
1688
+ Email address: [email protected]
1689
+
99FQT4oBgHgl3EQfJzVJ/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
A9E4T4oBgHgl3EQfEwz_/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2f8252ee7f53af47fd959bd7df7cf56530a773e2f93f6859f07e4e026fce1305
3
+ size 2293805
AdFJT4oBgHgl3EQfrS3C/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:50d8e2e93a66d89cd274a0c620aff2d3fdfa7f8e809bc41c1cfc16829439bcbe
3
+ size 4784173
B9AyT4oBgHgl3EQfePhg/content/2301.00317v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:27b890d87de035d8fca966400609c6cf5934f7cabee90f4fd1ed08996b307264
3
+ size 816868
B9AyT4oBgHgl3EQfePhg/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c4bf8c72c1571c4e2cf81c58910bf01b1bf27972dc300086b556c997a5c1d9e9
3
+ size 3342381
B9AyT4oBgHgl3EQfePhg/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5fe1c91e508107cd2383df7edc286038b2a6b36ed1c22b74f294f34335b18120
3
+ size 135803
CNE1T4oBgHgl3EQf9wYh/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:830c9fd07a17f0ab2342467e6153241fbb3411254bded7ec958f18d85f90d030
3
+ size 130885
CtE1T4oBgHgl3EQfpwVv/content/2301.03335v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:68092070903e5ff1b82555745a460a806e2191cd4ed1139a8d08731bcf764016
3
+ size 7913756
D9E0T4oBgHgl3EQfQgCE/content/2301.02194v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cec1045b47f9e2c7c3bb662020bc2aacd29081cfa2a2a2d333cb22d48c1035c5
3
+ size 265699
D9E0T4oBgHgl3EQfQgCE/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4b4f17aab1c4dd674e7d280585b0a198abb562a28094f7dd661ddaf3b5d4ef79
3
+ size 139302
DNE2T4oBgHgl3EQf9Qm6/content/2301.04227v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aea673be1071cbafb4a3c3aabed734c05ffcffe8dc696aaf1ff561e176070fde
3
+ size 393704
DNE2T4oBgHgl3EQf9Qm6/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e188addb3b7ead97656e7edba8972e7dd68b1a8744276eda8189f6dd4307e9c8
3
+ size 274656
DtE0T4oBgHgl3EQfQgBB/content/2301.02193v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:861c515bdf98bac5c44473fc0f1977e171e42d4a40690f11ddfd3ed5c35731a7
3
+ size 2640281