jackkuo commited on
Commit
2c1e1a5
·
verified ·
1 Parent(s): 68c6a8d

Add files using upload-large-folder tool

Browse files
This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50) hide show
  1. -dA0T4oBgHgl3EQfPP9i/content/tmp_files/2301.02172v1.pdf.txt +1533 -0
  2. -dA0T4oBgHgl3EQfPP9i/content/tmp_files/load_file.txt +0 -0
  3. -tA0T4oBgHgl3EQfPP-n/content/tmp_files/2301.02173v1.pdf.txt +1701 -0
  4. -tA0T4oBgHgl3EQfPP-n/content/tmp_files/load_file.txt +0 -0
  5. .gitattributes +40 -0
  6. 09E0T4oBgHgl3EQfuQEC/content/2301.02601v1.pdf +3 -0
  7. 09E0T4oBgHgl3EQfuQEC/vector_store/index.pkl +3 -0
  8. 0NFLT4oBgHgl3EQfoi-S/content/2301.12132v1.pdf +3 -0
  9. 0NFLT4oBgHgl3EQfoi-S/vector_store/index.pkl +3 -0
  10. 0tE2T4oBgHgl3EQf4wij/content/tmp_files/2301.04184v1.pdf.txt +1544 -0
  11. 0tE2T4oBgHgl3EQf4wij/content/tmp_files/load_file.txt +0 -0
  12. 19FST4oBgHgl3EQfXDjl/content/2301.13783v1.pdf +3 -0
  13. 19FST4oBgHgl3EQfXDjl/vector_store/index.pkl +3 -0
  14. 1tE1T4oBgHgl3EQflQSM/content/2301.03283v1.pdf +3 -0
  15. 3dFLT4oBgHgl3EQfry9C/content/tmp_files/2301.12145v1.pdf.txt +1429 -0
  16. 3dFLT4oBgHgl3EQfry9C/content/tmp_files/load_file.txt +0 -0
  17. 4dAzT4oBgHgl3EQfffz3/content/2301.01455v1.pdf +3 -0
  18. 4dAzT4oBgHgl3EQfffz3/vector_store/index.faiss +3 -0
  19. 4dAzT4oBgHgl3EQfffz3/vector_store/index.pkl +3 -0
  20. 5NE0T4oBgHgl3EQfegCm/content/tmp_files/2301.02392v1.pdf.txt +1592 -0
  21. 5NE0T4oBgHgl3EQfegCm/content/tmp_files/load_file.txt +0 -0
  22. 5tE1T4oBgHgl3EQfBAK1/vector_store/index.faiss +3 -0
  23. 6tAyT4oBgHgl3EQfcvc9/content/2301.00288v1.pdf +3 -0
  24. 6tAyT4oBgHgl3EQfcvc9/vector_store/index.pkl +3 -0
  25. 6tFAT4oBgHgl3EQfnx0W/content/2301.08630v1.pdf +3 -0
  26. 8NE3T4oBgHgl3EQfqQrl/content/tmp_files/2301.04651v1.pdf.txt +557 -0
  27. 8NE3T4oBgHgl3EQfqQrl/content/tmp_files/load_file.txt +397 -0
  28. AtE2T4oBgHgl3EQf8QmS/vector_store/index.faiss +3 -0
  29. B9AyT4oBgHgl3EQf4PrK/content/tmp_files/2301.00784v1.pdf.txt +1414 -0
  30. B9AyT4oBgHgl3EQf4PrK/content/tmp_files/load_file.txt +0 -0
  31. CdE4T4oBgHgl3EQfeQ2g/content/tmp_files/2301.05098v1.pdf.txt +0 -0
  32. CdE4T4oBgHgl3EQfeQ2g/content/tmp_files/load_file.txt +0 -0
  33. E9FJT4oBgHgl3EQfCizA/vector_store/index.pkl +3 -0
  34. ENE2T4oBgHgl3EQfSgdx/vector_store/index.pkl +3 -0
  35. HNE1T4oBgHgl3EQfrQW4/vector_store/index.faiss +3 -0
  36. HNE1T4oBgHgl3EQfrQW4/vector_store/index.pkl +3 -0
  37. HdFLT4oBgHgl3EQfIC9G/content/tmp_files/2301.11998v1.pdf.txt +1923 -0
  38. HdFLT4oBgHgl3EQfIC9G/content/tmp_files/load_file.txt +0 -0
  39. I9AyT4oBgHgl3EQf5_o0/content/tmp_files/2301.00813v1.pdf.txt +1464 -0
  40. I9AyT4oBgHgl3EQf5_o0/content/tmp_files/load_file.txt +0 -0
  41. ItAzT4oBgHgl3EQfjv0x/content/tmp_files/2301.01520v1.pdf.txt +680 -0
  42. ItAzT4oBgHgl3EQfjv0x/content/tmp_files/load_file.txt +435 -0
  43. JNAzT4oBgHgl3EQfVPyV/content/2301.01281v1.pdf +3 -0
  44. JdAzT4oBgHgl3EQfyP4s/content/2301.01749v1.pdf +3 -0
  45. JdAzT4oBgHgl3EQfyP4s/vector_store/index.pkl +3 -0
  46. KNE1T4oBgHgl3EQfYgRo/content/tmp_files/2301.03139v1.pdf.txt +0 -0
  47. KNE1T4oBgHgl3EQfYgRo/content/tmp_files/load_file.txt +0 -0
  48. MtAzT4oBgHgl3EQfy_6c/content/tmp_files/2301.01762v1.pdf.txt +2466 -0
  49. MtAzT4oBgHgl3EQfy_6c/content/tmp_files/load_file.txt +0 -0
  50. NNE1T4oBgHgl3EQftgVE/vector_store/index.pkl +3 -0
-dA0T4oBgHgl3EQfPP9i/content/tmp_files/2301.02172v1.pdf.txt ADDED
@@ -0,0 +1,1533 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ OPTIMISED MORSE TRANSFORM OF A GAUSSIAN PROCESS
2
+ FEATURE SPACE
3
+ Fabio E. A. Albertani∗† , Alex J. W. Thom∗
4
+ January 5, 2023
5
+ ABSTRACT
6
+ Morse projections are well-known in chemistry and allow one, within a Morse potential approximation,
7
+ to redefine the potential in a simple quadratic form. The latter, being a non-linear transform, is also
8
+ very helpful for machine learning methods as they improve the performance of models by projecting
9
+ the feature space onto more well-suited coordinates. Usually, the Morse projection parameters are
10
+ taken from numerical benchmarks. We investigate the effect of changing these parameters latter
11
+ on the model learning, as well as using the machine learning method itself to make the parameters
12
+ decision. We find that learning is not necessarily improved by the latter and that general Morse
13
+ projections are extremely susceptible to changes in the training data.
14
+ 1
15
+ Introduction
16
+ Machine learning, as in many fields of science, has rev-
17
+ olutionised the way theoretical chemists approach the
18
+ interpolation of molecular properties. The many methods
19
+ encompassed by the machine learning framework pro-
20
+ vide tools to construct models of the former with great
21
+ accuracy1–3. A particular method that has seen success is
22
+ the Gaussian process (GP) framework which has seen ex-
23
+ tensive publications in machine learning potential energy
24
+ surface applications4–12.
25
+ The representation of the molecular geometry is an essen-
26
+ tial part of the ML building process and has seen many
27
+ “solutions” spring up through the years13. When using
28
+ global or local descriptors of the atomic configuration
29
+ of the system to build a “feature space”, one often uses
30
+ the internuclear distances as an underlying coordinates.
31
+ The latter are often transformed to improve the accu-
32
+ racy of models since ML models do not perform equally
33
+ when the training data is projected onto different feature
34
+ spaces. One known projection of the feature space is
35
+ the Morse transform of the internuclear distances which
36
+ often improves one’s ability to learn the surface14.
37
+ Given the ability of a GPs to learn the underlying pattern
38
+ of the target function15,16, it is interesting to consider a
39
+ GP which can change the underlying function in its opti-
40
+ misation. This is done by making the distance fed to the
41
+ kernel (see next section) transform with GP hyperparam-
42
+ eters within the kernel itself.
43
+ Many more feature space transformations could be con-
44
+ sidered (these are also not restricted to transformations
45
+ based on internuclear distances) but we will here discuss
46
+ the effect of the added “transformation hyperparameters”
47
+ on the GP optimisation process.
48
+ 2
49
+ Gaussian Processes
50
+ A Gaussian process is a machine learning regression
51
+ method and is defined as a collection of random vari-
52
+ ables, any finite number of which have a joint Gaussian
53
+ distribution16. An essential part of a GP model is its
54
+ kernel function which defines, over a feature space (the
55
+ input space of the GP), a measure of similarity.
56
+ There are many possible kernel functions one can defined,
57
+ as they only need to adhere to a few simple rules16. We
58
+ use here the Matérn class kernel multiplied by a constant
59
+ kernel (CK) and summed with a White Kernel (WK) to
60
+ model noise. The covariance between two vectors over
61
+ the feature space, X and X′ here, is given by
62
+ K(X, X′) = σ2 21−ν
63
+ Γ(ν)
64
+
65
+
66
+ 2ν d
67
+ ρ
68
+ �ν
69
+
70
+
71
+
72
+ 2ν d
73
+ ρ
74
+
75
+ + λ2
76
+ (1)
77
+ ∗ Yusuf Hamied Department of Chemistry, University of Cambridge, Cambridge, Lensfield Road, CB2 1EW
78
79
+ arXiv:2301.02172v1 [physics.chem-ph] 5 Jan 2023
80
+
81
+ Optimised Morse transform of a Gaussian process feature space
82
+ where Γ is the gamma function, Kν is the modified
83
+ Bessel function of the second kind of degree ν, ρ are
84
+ length scales and d is the Euclidean distance in feature
85
+ space |X − X′|. The ν parameter is not optimised and
86
+ defines the smoothness of the kernel: a GP with a Matérn
87
+ kernel of parameter ν = n + 0.5 is n-times differen-
88
+ tiable↓. We also explore an infinitely smooth version of
89
+ the Matérn kernel with ν → ∞, commonly known as the
90
+ radial basis function (RBF) kernel.
91
+ At a set of query points, forming a matrix Xp of size
92
+ Np × Nfeatures, a GP model predicts a Gaussian distri-
93
+ bution with a mean (sometimes called the latent func-
94
+ tion), here denoted y(Xp), and a variance, here denoted
95
+ ∆(Xp), which is associated to the model confidence. For
96
+ a set of prediction points, Xp, the predicted distribution
97
+ are given by16:
98
+ y(Xp) = Kpt K−1
99
+ tt y
100
+ ∆(Xp) = Kpp − Kpt K−1
101
+ tt Ktp
102
+ (2)
103
+ where the kernel matrices are subscripted with the ma-
104
+ trices they evaluate (p for query points and t for train-
105
+ ing) and the ijth element of the matrix Knm is given
106
+ by K(Xn,i, Xm,j). A common metric, used by the ML
107
+ community, to define the confidence in predictions is the
108
+ ∆95% confidence interval which is given as y ± 2∆ for
109
+ GPs.
110
+ GPs are optimised by finding the most suited hyperpa-
111
+ rameters for its kernel. Using a Bayesian approach, one
112
+ finds the latter by maximising the log-marginal likelihood
113
+ (LML) defined as16
114
+ LML = −1
115
+ 2yTK−1
116
+ tt y − 1
117
+ 2log|Ktt| − n
118
+ 2 log(2π)
119
+ (3)
120
+ where Ktt, as before, is the covariance matrix of the
121
+ training set to itself. The terms on the LHS of equa-
122
+ tion 3 can be understood as a fit, a regularisation and a
123
+ normalisation term respectively.
124
+ Practically, the maximisation is done by minimising
125
+ −LML but we will use the term LML as the surface
126
+ we minimise and the term “minimum” as a set of hy-
127
+ perparameters corresponding to a model selected by the
128
+ GP.
129
+ The LML exploration is done with the GMIN suite17–19
130
+ which allows to give a full description of the minima,
131
+ both global and local, of the surface as well as their
132
+ connectivity. In order to visualise the surfaces, we use
133
+ disconnectivity graphs20–22 which represent, on a −LML
134
+ vertical scale, minima as vertical lines connected by tran-
135
+ sition states shown by connecting those lines.
136
+ 3
137
+ Methodology
138
+ If one takes the coordinates to be specified as a vector, X,
139
+ of N(N − 1)/2 internuclear distances, then the Morse
140
+ transformed coordinates form a vector defined as
141
+ T (X; M = {α, X0}) :
142
+ RN(N−1)/2
143
+ → RN(N−1)/2
144
+ Xi
145
+ �→ exp
146
+
147
+ − (Xi − X0)/α
148
+
149
+ (4)
150
+ where α is the Morse parameter and X0 is the Morse
151
+ shift parameter. In order to simplify the notation, we will
152
+ write the Morse transformed vector XJ as T XJ.
153
+ The reasoning behind this transform is that an analytical
154
+ Morse potential becomes quadratic when projected onto
155
+ the coordinates T X. If one considers non-analytical po-
156
+ tentials, one expects the potentials to closer to quadratic
157
+ in T X than X. The simpler PES is better described by a
158
+ GP since the length scale of the problem becomes more
159
+ “unique”. Despite some specific bonds having chemically
160
+ derived optimal Morse parameters, there is not always a
161
+ straight-forward way to select those parameters. There
162
+ are two ways of optimising those parameters: a numeri-
163
+ cal optimisation to reduce the error on a testing set (the
164
+ traditional “best-fit” approach) and a Bayesian approach
165
+ with a Morse hyperparameter. As one does not want the
166
+ number of hyperparameters to be too large and optimise
167
+ in a very large space, we will set Morse hyparparameters
168
+ to be equal for all feature dimensions.
169
+ Taking a basic RBF kernel16, on a Morse transformed
170
+ feature space, the kernel is evaluated as
171
+ K( ˜XA = T XA, ˜XB = T XB; ρ) =
172
+ exp
173
+
174
+ − 1
175
+ 2( ˜XA − ˜XB)TP( ˜XA − ˜XB)
176
+
177
+ where P = In
178
+
179
+ �����
180
+ ρ−2
181
+ 1
182
+ ρ−2
183
+ 2
184
+ ...
185
+ ρ−2
186
+ n
187
+
188
+ �����
189
+ (5)
190
+ where we now used the matrix notation of the RBF ker-
191
+ nel and where ρi are the length scales along each feature
192
+ dimension. We use here the ˜XJ notation to differentiate
193
+ the fixed Morse projection of the kernel input (the Morse
194
+ parameters do not appear in the evaluation of the ker-
195
+ nel in the LHS of equation 5) from the projection taken
196
+ within the kernel itself, like in equation 6.
197
+ Instead of equation 5, one can use the internuclear dis-
198
+ tances, X as an input and optimise the Morse parameters
199
+ ↓ For example, with ν = 2.5, one ensures that the GP latent function is physical since both the atomic forces (first derivative) and
200
+ atomic Hessians (second derivative) are smooth w.r.t. geometrical changes
201
+ 2
202
+
203
+ Optimised Morse transform of a Gaussian process feature space
204
+ inside a “MorseRBF” kernel which is evaluated as
205
+ K(XA, XB; M, ρ) =
206
+ exp
207
+
208
+ − 1
209
+ 2(T XA − T XB)T P (T XA − T XB)
210
+
211
+ ̸≡ exp
212
+
213
+ − 1
214
+ 2( ˜XA − ˜XB)T P ( ˜XA − ˜XB)
215
+
216
+ (6)
217
+ where P is the same matrix as the one in equation 5. In
218
+ the MorseRBF approach, the Morse parameters are hy-
219
+ perparameters of the kernel alongside the length scales.
220
+ The interesting approach of the kernel is that it does not,
221
+ as is common practice, optimise to the Morse parameters
222
+ that minimises the error on the testing set, the “best-fit”
223
+ approach, but instead uses a Bayesian approach and a
224
+ “statistically relevant” set of Morse parameters.
225
+ Regarding the X0 parameter, one can see in figure 1-2
226
+ that the covariance function drops very quickly for data
227
+ points in the region X < X0 which could potentially lead
228
+ to a loss of information. With very compressed kernel
229
+ length scales around X < X0, data will not affect the
230
+ latent function. Learning was also performed for those
231
+ surfaces but were not considered further in this study.
232
+ The derivatives of the kernel with respect to each hyper-
233
+ parameter can be obtained analytically. The derivatives
234
+ with respect to the length scales are not affected by the
235
+ Morse transform and are equivalent to simply changing
236
+ the feature space in the derivatives of the standard RBF
237
+ kernel. The derivative with respect to the Morse parame-
238
+ ter can also be analytical obtained and is given by
239
+ ∂αK(XA, XB) =
240
+
241
+ − 1
242
+ 2∂α
243
+ ��T XA − T XB
244
+ �T�
245
+ P
246
+ �T XA − T XB
247
+ ��
248
+ K(XA, XB)+
249
+
250
+ − 1
251
+ 2
252
+ �T XA − T XB
253
+ �T P
254
+ ∂α
255
+ ��T XA − T XB
256
+ ���
257
+ K(XA, XB)
258
+ =
259
+ �� XA
260
+ 2α2
261
+ T XA − XB
262
+ 2α2
263
+ T XB
264
+ �T P
265
+ �T XA − T XB
266
+ ��
267
+ K(XA, XB)+
268
+ ��T XA − T XB
269
+ �T P
270
+ � XA
271
+ 2α2
272
+ T XA − XB
273
+ 2α2
274
+ T XB
275
+ ��
276
+ K(XA, XB)
277
+ (7)
278
+ where T XI are Morse transformed vectors of the original
279
+ XI data points (to simplify the notation we did not write
280
+ the dependency on α and ρ).
281
+ In a very similar manner to the MorseRBF kernel, one
282
+ can define a MorseMatérn kernel starting from equation
283
+ 1 and, despite analytical definitions of the gradient of the
284
+ kernel with respect to the Morse hyperparameter being
285
+ quite complicated, one can use numerical gradients and
286
+ optimise the Morse transformed kernel.
287
+ To understand the effect of the Morse kernels, we
288
+ compare the shape of the kernel functions in the non-
289
+ transformed space. Figure 1-2 shows a Matérn and a
290
+ MorseMatérn, projected back to the non-transformed di-
291
+ mension, to give a better insight.
292
+ −1
293
+ 0
294
+ 1
295
+ 2
296
+ 3
297
+ X
298
+ 0.0
299
+ 0.2
300
+ 0.4
301
+ 0.6
302
+ 0.8
303
+ 1.0
304
+ K(XA, XB)
305
+ X0 = 0
306
+ −1
307
+ 0
308
+ 1
309
+ 2
310
+ 3
311
+ X
312
+ 0.0
313
+ 0.2
314
+ 0.4
315
+ 0.6
316
+ 0.8
317
+ 1.0
318
+ K(XA, XB)
319
+ X0 = 1
320
+ −1
321
+ 0
322
+ 1
323
+ 2
324
+ 3
325
+ X
326
+ 0.0
327
+ 0.2
328
+ 0.4
329
+ 0.6
330
+ 0.8
331
+ 1.0
332
+ K(XA, XB)
333
+ X0 = 2
334
+ Figure 1: Covariance of the Matérn (ν = 2.5) kernel
335
+ (black lines) compared to the MorseMatérn kernel (red
336
+ lines), projected back onto X, for different X0 and with
337
+ α = 2.0. The covariance is quite unsymmetrical an the
338
+ forward influence is greater than the backward influence
339
+ since the transform expands the dataset at large X values.
340
+ The X0 parameter dampens the strong “elongation” of the
341
+ covariance at small X values and also strongly contracts
342
+ the covariance extent at X< X0 where the exponent in
343
+ equation 4 becomes positive.
344
+ −1
345
+ 0
346
+ 1
347
+ 2
348
+ 3
349
+ X
350
+ 0.0
351
+ 0.2
352
+ 0.4
353
+ 0.6
354
+ 0.8
355
+ 1.0
356
+ K(XA, XB)
357
+ X0 = 0
358
+ −1
359
+ 0
360
+ 1
361
+ 2
362
+ 3
363
+ X
364
+ 0.0
365
+ 0.2
366
+ 0.4
367
+ 0.6
368
+ 0.8
369
+ 1.0
370
+ K(XA, XB)
371
+ X0 = 1
372
+ −1
373
+ 0
374
+ 1
375
+ 2
376
+ 3
377
+ X
378
+ 0.0
379
+ 0.2
380
+ 0.4
381
+ 0.6
382
+ 0.8
383
+ 1.0
384
+ K(XA, XB)
385
+ X0 = 2
386
+ Figure 2: Covariance of the Matérn (ν = 2.5) kernel
387
+ (black lines) compared to the MorseMatérn kernel (red
388
+ lines), projected back onto X, for a larger α = 5.0. As
389
+ opposed to figure 1, the effect of the X0 does not affect
390
+ the covariance as much. The widening seen from the
391
+ previous figure is just a consequence of the length scale
392
+ hyperparameter being equal since one cannot associate
393
+ the Morse transformed length scale to the linear space
394
+ one.
395
+ Morse kernels allow the covariance to be unsymmetrical
396
+ in the internuclear space, which affects the correlation of
397
+ data in a very particular way over the feature space. As
398
+ shown on figure 1, the forward and backward correlation
399
+ differ and the extent of that effect depends greatly on the
400
+ α parameter. This increased flexibility of the kernel in
401
+ the model optimisation, allows to control the long range
402
+ effect of the training data for PES modelling.
403
+ 3
404
+
405
+ Optimised Morse transform of a Gaussian process feature space
406
+ 4
407
+ Results
408
+ We use a training set of 48 water geometries calculated
409
+ UHF/aug-cc-pVDZ energies, sampled from a Boltzmann
410
+ distribution using the Metropolis–Hastings algorithm
411
+ with data up to 0.3 Ha above the equilibrium energy,
412
+ using the Q-Chem software23. Firstly, the training data is
413
+ projected on the 3 internuclear distances (ID) and Morse
414
+ transformed according to equation 4 to create the feature
415
+ space of the GPs. Secondly, the optimisable transforma-
416
+ tion using GPs with both the MorseRBF kernel and the
417
+ MorseMatérn class of kernel, we use the twice differen-
418
+ tiable kernel with ν = 2.5. Finally, in order to assess
419
+ the performance of each latent function, we define the
420
+ MAE of predictions on a testing set also sampled from a
421
+ Boltzmann distribution with data up to 0.2 Ha above the
422
+ equilibrium energy.
423
+ The Bayesian approach optimises the LML(θ) which
424
+ only includes the training data. This is very different
425
+ from optimising the MAE(θ) which only includes the
426
+ testing data (we do not explore this surface here). Since
427
+ we use GMIN and explore the whole LML landscape, one
428
+ can combine those approaches and rank local minima of
429
+ the LML surface with their respective MAEs. One then
430
+ selects the minimum which has the lowest error. This
431
+ gives an hybrid approach which optimises the MAE(θ |
432
+ ∂ LML(θ) = 0).
433
+ The “best-fit” approach, given we use a single Morse
434
+ parameter, is a 1D minimisation of the MAE(α). One
435
+ also selects, for each GP trained with a different Morse
436
+ parameter, the LML minimum with the lowest MAE. We
437
+ first look at the results of the Matérn (ν = 2.5) kernel.
438
+ As mentioned before, the better performing minimum
439
+ of the LML is not always the global minimum. For the
440
+ Matérn (ν = 2.5) kernel, this is seen in figure 3: from
441
+ α = 2.2 it is a worse performing model that is lower on
442
+ the LML surface while the best performing minimum can
443
+ be followed. Even though there is no guarantee that fol-
444
+ lowing the better performing minimum on the LML as α
445
+ changes ensures selection of the best model, it also seem
446
+ unlikely that an eventually better performing minimum
447
+ at a different and larger α could not be followed back to
448
+ smaller Morse parameters where it disappears (with the
449
+ exception of α → 0).
450
+ 2
451
+ 5
452
+ 10
453
+ α
454
+ 2.5
455
+ 5.0
456
+ 7.5
457
+ MAE [mHa]
458
+ 5
459
+ 10
460
+ α
461
+ 0
462
+ 1
463
+ log(ρ0) = log(ρ1)
464
+ 0
465
+ 1
466
+ log(ρ2)
467
+ 1
468
+ 5
469
+ 10
470
+ α
471
+ Figure 3: Optimised hyperparameters of the Matérn
472
+ (ν = 2.5) kernel along the lengths scales representing
473
+ the Morse transformed O-H distances (ρ0 = ρ1) and
474
+ the Morse transformed H-H distance (ρ2) for different
475
+ minima as well as the MAE (of each respective minima)
476
+ on a test set. The blue-green dots represent the lower of
477
+ the minima on the LML while the red-yellow dots are
478
+ the second lowest minimum. The trajectory highlighted
479
+ in black represents the models with the lowest MAE in
480
+ both panels. Around α = 2.0, the two modes of selec-
481
+ tion yield different models (the grey area is plotted to aid
482
+ clarity of the switch between the two regimes).
483
+ The overall behaviour of the trajectories in hyperparam-
484
+ eters space of both LML minima represented in panel
485
+ (b) of figure 3 is expected. As α increases, the length
486
+ scales shorten. A larger Morse parameter compresses the
487
+ training data, shortening the distance between data. As
488
+ a consequence, a constant length scale would flatten the
489
+ GP latent function. This is prevented by the data term
490
+ of the LML, which causes the minima to move towards
491
+ shorter length scales. Moreover, the minima trajectory
492
+ can be observed to be almost linear towards larger Morse
493
+ parameters as the transform of equation 4 becomes itself
494
+ more linear since, in the limit of infinite α, the transform
495
+ is linear:
496
+ Xi �→ lim
497
+ α→∞ exp(−Xi/α) =
498
+ lim
499
+ α→∞
500
+
501
+ 1 − Xi
502
+ α + X2
503
+ i
504
+ 2α2 + . . .
505
+
506
+ ≃ 1 − Xi
507
+ α
508
+ (8)
509
+ This opens the question of redefining length scales to re-
510
+ flect the change in α (for example as ρi → ρi/α) hyper-
511
+ parameter. This does not seem to affect the optimisation
512
+ process and will not be considered further.
513
+ Despite figure 3 showing only two minima, the GP has
514
+ multiple minima on the LML surface. However, only two
515
+ minima provided PES models with low MAEs. Figure
516
+ 4 shows the disconnectivity graphs of the LML to show
517
+ the complexity of the surface for the Matérn (ν = 2.5)
518
+ kernel. It is surprising that the variations are so large
519
+ despite the training data being unchanged.
520
+ 4
521
+
522
+ Optimised Morse transform of a Gaussian process feature space
523
+ Disconnectivity graphs show some surprising variations
524
+ that are sometimes a simple consequence of TSs being
525
+ very flat and hard to capture. This leads to the latter disap-
526
+ pearing after small changes to the LML space which has
527
+ strong consequences on the network of minima that can
528
+ be shown. This is the case of the graphs for α = 0.2 and
529
+ α = 0.4 in figure 4 where the latter finds TSs between
530
+ LML minima more easily.
531
+ epsilon
532
+ 11
533
+ 12
534
+ 19
535
+ 0.2
536
+ epsilon
537
+ 2
538
+ 3
539
+ 4
540
+ 6
541
+ 7
542
+ 8
543
+ 9
544
+ 0.4
545
+ epsilon
546
+ 4
547
+ 5
548
+ 6
549
+ 7
550
+ 1.2
551
+ epsilon
552
+ 5
553
+ 7
554
+ 1.4
555
+ epsilon
556
+ 4
557
+ 6
558
+ 2.2
559
+ epsilon
560
+ 5
561
+ 7
562
+ 2.6
563
+ epsilon
564
+ 10
565
+ 11
566
+ 2.8
567
+ epsilon
568
+ 6
569
+ 7
570
+ 15
571
+ 3.0
572
+ epsilon
573
+ 7
574
+ 8
575
+ 9
576
+ 3.2
577
+ epsilon
578
+ 6
579
+ 7
580
+ 3.6
581
+ epsilon
582
+ 1
583
+ 8
584
+ 9
585
+ 4.0
586
+ epsilon
587
+ 1
588
+ 6
589
+ 4.4
590
+ epsilon
591
+ 1
592
+ 6
593
+ 7
594
+ 11
595
+ 4.6
596
+ epsilon
597
+ 2
598
+ 10
599
+ 4.8
600
+ epsilon
601
+ 7
602
+ 8
603
+ 5.0
604
+ epsilon
605
+ 9
606
+ 10
607
+ 11
608
+ 5.8
609
+ epsilon
610
+ 3
611
+ 4
612
+ 7.4
613
+ epsilon
614
+ 2
615
+ 3
616
+ 7.8
617
+ epsilon
618
+ 10
619
+ 11
620
+ 12
621
+ 14
622
+ 8.2
623
+ epsilon
624
+ 1
625
+ 7
626
+ 9.0
627
+ epsilon
628
+ 5
629
+ 6
630
+ 9
631
+ 9.8
632
+ epsilon
633
+ 1
634
+ 4
635
+ 10.2
636
+ epsilon
637
+ 2
638
+ 3
639
+ 10.6
640
+ Figure 4: LMLs disconnectivity graphs for the Matérn
641
+ (ν = 2.5) kernels. Labels underneath each graph denote
642
+ the α parameter for the corresponding GP LML.
643
+ The minimum on the LML with the lowest MAE is al-
644
+ ways shown on the graphs and, despite its connectivity to
645
+ other LML minima changing, is easy to follow. GPs con-
646
+ verge towards a “good” model when the Morse parameter
647
+ is large enough and all latent function for GPs with α > 1
648
+ perform similarly, as shown by the plateau in figure 3.
649
+ The latent functions, given in figure 5, are also similar.
650
+ The PES models are shown for the GP with a Morse
651
+ parameter of α = 2.0 (as it has been used extensively in
652
+ the previous chapter) and for comparison purposes for
653
+ the GP with a Morse parameter of α = 5.0 where the
654
+ MAE of the best model is reaching a “plateau”.
655
+ MAE: 1.58 mHa
656
+ 0.25
657
+ 0.50
658
+ 0.75
659
+ exp(−rO-H1/2.0)
660
+ 0.25
661
+ 0.50
662
+ 0.75
663
+ exp(−rO-H2/2.0)
664
+ -76.2
665
+ -76.0
666
+ -75.8
667
+ -75.6
668
+ -75.4
669
+ -75.2
670
+ MAE: 1.47 mHa
671
+ 0.75
672
+ exp(−rO-H1/5.0)
673
+ 0.75
674
+ exp(−rO-H2/5.0)
675
+ -76.2
676
+ -76.0
677
+ -75.8
678
+ -75.6
679
+ -75.4
680
+ -75.2
681
+ Figure 5: Resulting PES, projected on the Morse trans-
682
+ formed O-H nuclear distances, for Matérn kernels trained
683
+ on Morse transformed spaces with parameters α =2.0
684
+ (higher graph) and α = 5.0 (lower graph). The magenta
685
+ lines are isovalue contours of the kernel function where
686
+ the covariance to the highlighted point is equal nσ2/4
687
+ for n = 3, 2, 1 where σ is the amplitude hyperparameter
688
+ of equation 1.
689
+ The RBF kernel seem to have a much less stable LML
690
+ landscapes (see figure 6). However, the lowest MAE(θ)
691
+ models is always found to be the lowest minimum on the
692
+ LML(θ). As opposed to the Matérn kernel, the optimal
693
+ Morse parameter value to minimise the MAE is more
694
+ distinct (see figure 8).
695
+ The Gaussian process for the optimal value seem to cor-
696
+ respond to a value where the training data is not too
697
+ compressed and does not allow the RBF kernel to over fit.
698
+ Despite the MAE being similar to the Matérn kernel best
699
+ performing models, the length scales are shorter and the
700
+ latent function, displayed on figure 7, shows that the RBF
701
+ model predictions are only reliable close to the training
702
+ data and do not “carry” any of the information to longer
703
+ bond lengths, like the Matérn kernel does.
704
+ 5
705
+
706
+ Optimised Morse transform of a Gaussian process feature space
707
+ epsilon
708
+ 15
709
+ 16
710
+ 17
711
+ 0.4
712
+ epsilon
713
+ 9
714
+ 11
715
+ 0.6
716
+ epsilon
717
+ 5
718
+ 12
719
+ 0.8
720
+ epsilon
721
+ 4
722
+ 11
723
+ 1.2
724
+ epsilon
725
+ 6
726
+ 13
727
+ 17
728
+ 1.4
729
+ epsilon
730
+ 17
731
+ 19
732
+ 1.6
733
+ epsilon
734
+ 5
735
+ 12
736
+ 16
737
+ 1.8
738
+ epsilon
739
+ 19
740
+ 24
741
+ 25
742
+ 26
743
+ 118
744
+ 2.2
745
+ epsilon
746
+ 22
747
+ 23
748
+ 2.4
749
+ epsilon
750
+ 1
751
+ 4
752
+ 2.6
753
+ epsilon
754
+ 28
755
+ 29
756
+ 41
757
+ 48
758
+ 2.8
759
+ epsilon
760
+ 13
761
+ 16
762
+ 3.0
763
+ epsilon
764
+ 13
765
+ 18
766
+ 3.2
767
+ epsilon
768
+ 19
769
+ 20
770
+ 22
771
+ 3.6
772
+ epsilon
773
+ 5
774
+ 17
775
+ 3.8
776
+ epsilon
777
+ 13
778
+ 16
779
+ 4.0
780
+ epsilon
781
+ 13
782
+ 18
783
+ 4.4
784
+ epsilon
785
+ 13
786
+ 16
787
+ 4.6
788
+ epsilon
789
+ 5
790
+ 13
791
+ 5.0
792
+ epsilon
793
+ 7
794
+ 11
795
+ 12
796
+ 6.2
797
+ epsilon
798
+ 1
799
+ 3
800
+ 6
801
+ 7
802
+ 8
803
+ 9
804
+ 7.0
805
+ epsilon
806
+ 1
807
+ 4
808
+ 8
809
+ 10
810
+ 11
811
+ 12
812
+ 15
813
+ 16
814
+ 17
815
+ 18
816
+ 7.4
817
+ epsilon
818
+ 4
819
+ 6
820
+ 7.8
821
+ epsilon
822
+ 5
823
+ 12
824
+ 13
825
+ 8.2
826
+ epsilon
827
+ 1
828
+ 4
829
+ 7
830
+ 9
831
+ 10
832
+ 11
833
+ 8.6
834
+ epsilon
835
+ 7
836
+ 9
837
+ 9.4
838
+ epsilon
839
+ 1
840
+ 8
841
+ 10
842
+ 11
843
+ 12
844
+ 9.8
845
+ epsilon
846
+ 7
847
+ 9
848
+ 10
849
+ 10.2
850
+ epsilon
851
+ 8
852
+ 9
853
+ 10.6
854
+ Figure 6: LMLs disconnectivity graphs for the RBF ker-
855
+ nels. Again, the labels underneath each graph denote the
856
+ α parameter for the corresponding GP LML.
857
+ Compared to the graphs of the Matérn kernel in figure
858
+ 4, the graphs for the RBF kernel in figure 6 show more
859
+ minima and show stronger changes. This is due to the
860
+ tendency for the RBF kernel to over fit data giving a more
861
+ complex LML landscape in the region with short length
862
+ scales. The TSs are also harder to optimise, in these short
863
+ length scale regions, making the graphs rapidly changing.
864
+ The most performant GP is found for α = 0.8 and one
865
+ can see, in figure 8, that its MAE is similar to the best
866
+ Matérn GPs. The resulting latent function is shown in
867
+ figure 7 alongside the latent function for the GP trained
868
+ with the Morse parameter set to α = 2.0 to compare with
869
+ the model of the Matérn kernel in figure 5. One can see
870
+ that, for α = 2.0, despite similar MAEs, the RBF kernel
871
+ is more “local” and does not predict a meaningful PES at
872
+ longer bond lengths.
873
+ MAE: 1.60 mHa
874
+ 0.25
875
+ 0.50
876
+ exp(−rO-H1/0.8)
877
+ 0.25
878
+ 0.50
879
+ exp(−rO-H2/0.8)
880
+ -76.2
881
+ -76.0
882
+ -75.8
883
+ -75.6
884
+ -75.4
885
+ -75.2
886
+ MAE: 3.52 mHa
887
+ 0.25
888
+ 0.50
889
+ 0.75
890
+ exp(−rO-H1/2.0)
891
+ 0.25
892
+ 0.50
893
+ 0.75
894
+ exp(−rO-H2/2.0)
895
+ -76.2
896
+ -76.0
897
+ -75.8
898
+ -75.6
899
+ -75.4
900
+ -75.2
901
+ Figure 7: Resulting PES, projected on the Morse trans-
902
+ formed O-H nuclear distances, for RBF kernels trained
903
+ on Morse transformed spaces with parameters α = 0.8
904
+ (higher graph) and α = 2.0 (lower graph) respectively.
905
+ These correspond to the minimum along the MAE plot
906
+ in figure 3 for the RBF and the Matérn kernels. The ma-
907
+ genta lines are isovalue contours of the kernel function.
908
+ In figure 7, as before, the contours represent an isocon-
909
+ tour of the kernel from a given sample↓. One can see
910
+ that despite the surfaces covering the same geometry
911
+ stretches, for the larger α, the optimised length scales
912
+ is much shorter and only allows a sample to span “ in-
913
+ fluence” over a small part of the considered space. This
914
+ leads to partial over fitting of the training data and a more
915
+ complicated PES model.
916
+ To summarise both the MAE optimisation of the GPs
917
+ trained with RBF and Matérn kernels, we plot the MAE
918
+ curves against the Morse parameter. This is the curve
919
+ that one minimises in the “best-fit” approach and leads
920
+ to selecting α = 0.8 for the RBF kernel and a larger
921
+ ↓ The first contour is where the covariance function evaluates to 0.75σ2, where σ is the amplitude hyperparameter, while the second
922
+ one correspond to 0.5σ2.
923
+ 6
924
+
925
+ Optimised Morse transform of a Gaussian process feature space
926
+ α > 2.0 for the Matérn kernel. The latter produces a
927
+ monotonically decreasing line which indicates that the
928
+ optimal Morse transform is a linear transform (since the
929
+ limit of α → ∞ reduces the Morse transform to the latter,
930
+ as explained in equation 8). This is an indication that it
931
+ is not optimal , for the Matérn kernel, to do the transfor-
932
+ mation and that the initial internuclear distances produce
933
+ a better feature space to learn on.
934
+ 1
935
+ 5
936
+ 10
937
+ α
938
+ 2.5
939
+ 5.0
940
+ 7.5
941
+ MAE [mHa]
942
+ Figure 8: MAE curves for the RBF (blue) and Matérn
943
+ (red) kernels against the Morse parameter. One can see
944
+ that the Matérn curve does not show a minimum and thus
945
+ indicates the best transform is the linear transform, i.e.
946
+ the limit of the Morse transform when α → ∞.
947
+ 5
948
+ Optimisable Morse Kernels
949
+ We now explore the optimisation of GP with the same
950
+ training data projected on the internuclear distances that
951
+ are Morse transformed in the kernel, for example as given
952
+ by equation 6 for the MorseRBF kernel. As usual the
953
+ kernels are scaled by an optimisable CK and have an
954
+ added optimisable noise given by a WK. The additional
955
+ hyperparameter, α, means we approach the Morse pa-
956
+ rameter optimisation in a fully Bayesian manner through
957
+ the LML minimisation. As mentioned before this does
958
+ remove the testing set in the optimisation and only the
959
+ training data affects its optimisation.
960
+ For the MorseRBF, multiple minima on the LML are
961
+ obtained and, when ranked with their respective MAEs,
962
+ the best GP models are found to be in the region of
963
+ 0.5 < α < 1.0. This is in accordance with the MAE
964
+ curve, as seen in figure 8, of the GPs trained with standard
965
+ RBF kernels and fixed Morse transforms. As expected,
966
+ the MorseRBF GP best models latent function are very
967
+ similar to the RBF GP latent function with small α param-
968
+ eters↓: the lowest minima on the LML for the MorseRBF
969
+ is shown in figure 9.
970
+ 0.5
971
+ 1.5
972
+ 2.5
973
+ rO-H1 [˚A]
974
+ 0.5
975
+ 1.5
976
+ 2.5
977
+ rO-H2 [˚A]
978
+ -76.2
979
+ -76.0
980
+ -75.8
981
+ -75.6
982
+ -75.4
983
+ -75.2
984
+ 0.5
985
+ 1.5
986
+ 2.5
987
+ rO-H1 [˚A]
988
+ 0.5
989
+ 1.5
990
+ 2.5
991
+ rO-H2 [˚A]
992
+ -76.2
993
+ -76.0
994
+ -75.8
995
+ -75.6
996
+ -75.4
997
+ -75.2
998
+ Figure 9: Latent functions of GPs trained with a standard
999
+ RBF kernel (higher graph) and a fixed Morse parameter
1000
+ close to the one exhibited by the lowest LML minimum
1001
+ of the other GP, trained with a MorseRBF kernel (lower
1002
+ graph). The change in length scale (shown by the kernel
1003
+ isovalue contour extending further out) is simply a conse-
1004
+ quence of the small difference in α value and the models
1005
+ are essentially the same.
1006
+ Obtaining MorseRBF models that resembles the RBF
1007
+ ones is important as it tells us that the added hyperpa-
1008
+ rameter dimension creates a convex LML hypersurface↓
1009
+ that can be optimised. The other hyperparameters of the
1010
+ kernel are quite close to the ones of the kernel that does
1011
+ not include the transformation when one fixes the latter
1012
+ with the parameters found by the Morse kernel.
1013
+ To summarise, the Morse kernels do optimise to lower
1014
+ values which agree better with the optimal MAE(α) for
1015
+ the RBF kernel but not for the Matérn kernel. Figure 10
1016
+ shows the disparity between the two Morse kernel ability
1017
+ to replicate the “best-fit” approach..
1018
+ 1
1019
+ 5
1020
+ 10
1021
+ α
1022
+ 2.5
1023
+ 5.0
1024
+ 7.5
1025
+ MAE [mHa]
1026
+ Figure 10: MAE curves for the RBF (blue) and Matérn
1027
+ (red) kernels against the Morse parameter. The dots rep-
1028
+ resent the optimised Morse hyperparameter of the Morse
1029
+ kernels (blue for MorseRBF and red for MorseMatérn)
1030
+ with grey line to aid clarity.
1031
+ 6
1032
+ Changing the Training Data
1033
+ Since the feature space is optimised differently with re-
1034
+ spect to the selected training data, we will consider the
1035
+ effect of adding data to the previously discussed models.
1036
+ We still use the MAE of the final GP model but we use
1037
+ two different testing sets. The new set is also taken from a
1038
+ ↓ The MorseRBF kernel is equal to a RBF kernel and a fixed Morse projection with the optimal Morse hyperparameter.
1039
+ ↓ It should be made clear again that we are technically talking about the −LML surface which we are optimising. On the true LML
1040
+ surface this would be concave.
1041
+ 7
1042
+
1043
+ Optimised Morse transform of a Gaussian process feature space
1044
+ Boltzmann distibution but at a higher temperature which
1045
+ allows data to be sampled 0.4 Ha above the equilibrium
1046
+ energy.
1047
+ The training data is changed by adding data sampled
1048
+ from NM clusters↓. Two things are interesting to fol-
1049
+ low: the effect of increasing the size of the dataset on
1050
+ the MAE(α) curves as well as the progression of LML-
1051
+ optimised Morse hyperparameters.
1052
+ 2
1053
+ 5
1054
+ 10
1055
+ α
1056
+ 0
1057
+ 2
1058
+ 4
1059
+ MAElow [mHa]
1060
+ 8
1061
+ 16
1062
+ MAEhigh [mHa]
1063
+ (a) N = 29
1064
+ 2
1065
+ 5
1066
+ 10
1067
+ α
1068
+ 0
1069
+ 2
1070
+ 4
1071
+ MAElow [mHa]
1072
+ 8
1073
+ 16
1074
+ MAEhigh [mHa]
1075
+ (b) N = 39
1076
+ 2
1077
+ 5
1078
+ 10
1079
+ α
1080
+ 0
1081
+ 2
1082
+ 4
1083
+ MAElow [mHa]
1084
+ 8
1085
+ 16
1086
+ MAEhigh [mHa]
1087
+ (c) N = 49
1088
+ 2
1089
+ 5
1090
+ 10
1091
+ α
1092
+ 0
1093
+ 2
1094
+ 4
1095
+ MAElow [mHa]
1096
+ 8
1097
+ 16
1098
+ MAEhigh [mHa]
1099
+ (d) N = 59
1100
+ 2
1101
+ 5
1102
+ 10
1103
+ α
1104
+ 0
1105
+ 2
1106
+ 4
1107
+ MAElow [mHa]
1108
+ 8
1109
+ 16
1110
+ MAEhigh [mHa]
1111
+ (e) N = 69
1112
+ 2
1113
+ 5
1114
+ 10
1115
+ α
1116
+ 0
1117
+ 2
1118
+ 4
1119
+ MAElow [mHa]
1120
+ 8
1121
+ 16
1122
+ MAEhigh [mHa]
1123
+ (f) N = 79
1124
+ Figure 11:
1125
+ Different MAE(α) curves for different
1126
+ datasets. The two colours represent the MAE on different
1127
+ testing sets↓ to see the dependency of the minimum of
1128
+ the MAE(α) with respect to the chosen testing set. There
1129
+ is no clear choice for an α, although there seem to be
1130
+ a preference for small α values in the RBF kernel, un-
1131
+ til training data becomes rather large and most Morse
1132
+ parameters perform equally.
1133
+ A first observation that can be made from the curves,
1134
+ in figure 11, is that a different testing set can lead to
1135
+ a different optimal Morse parameter. A second impor-
1136
+ tant aspect is that small changes to the training set (in
1137
+ this case adding training data) can importantly alter the
1138
+ curves. For the latter, it is a surprising result since the
1139
+ new training data does not differ from the original train-
1140
+ ing data in terms of what it describes. The new sampled
1141
+ training data does not allow the GP to understand new
1142
+ patterns in the target function, which were not seen in
1143
+ the original set. One could expect that consequently the
1144
+ changes in the training data would not affect the relative
1145
+ performance of GPs with different Morse parameters.
1146
+ As data is added to the original training set, the MAE
1147
+ curves tend to flatten and stop exhibiting a clear mini-
1148
+ mum. The optimal Morse parameter is not well defined
1149
+ and GPs, despite having different projections on their
1150
+ feature spaces, perform similarly in terms of MAE. The
1151
+ sparsity of training data is reduced, which makes the fea-
1152
+ ture space less relevant: dense training data is likely to
1153
+ perform well in any way it is projected.
1154
+ Consequently, instead of additional data making the min-
1155
+ imum of the MAE(α) more and more distinct, one sees
1156
+ the minimum disappearing.
1157
+ 0.2
1158
+ 0.5
1159
+ ρ0 = ρ1
1160
+ 0.2
1161
+ 0.5
1162
+ ρ2
1163
+ 5
1164
+ 15
1165
+ 25
1166
+ 35
1167
+ 45
1168
+ RBF
1169
+ 0.5
1170
+ 1.5
1171
+ ρ0 = ρ1
1172
+ 0.5
1173
+ 1.5
1174
+ ρ2
1175
+ 5
1176
+ 15
1177
+ 25
1178
+ 35
1179
+ 45
1180
+ Matérn (ν = 2.5)
1181
+ Figure 12: Trajectories of optimal models in hyperpa-
1182
+ rameter space for different datasets (each dataset is rep-
1183
+ resented by a different colour and the dots follow their
1184
+ respective best minima on the LML landscapes). The
1185
+ end of the trajectory (where the label is given) is going
1186
+ towards larger Morse parameters and each trajectory has
1187
+ points, since the progression is smooth, ordered for in-
1188
+ creasing α along itself. The colourbar is given as greys
1189
+ since it has shared by all training sets.
1190
+ In figure 12, each progression in hyperparameter space
1191
+ seem to have an initial curved trajectory followed by a
1192
+ linear trajectory. This linear regime was already observed
1193
+ in figure 3 as a consequence of the Morse transform limit
1194
+ (see equation 8). If one considers the end of the trajectory
1195
+ of the Matérn GPs, one sees that the optimised length
1196
+ scales of GPs with the same Morse projection increases
1197
+ with the training set size. The trend is less clear for the
1198
+ RBF GPs where trajectories are not as well-behaved.
1199
+ Some latent functions of the Matérn GPs are plotted in
1200
+ figure 13. Despite the length scale growing larger, the
1201
+ PESs seem to strongly “oscillate”, as if they had a short
1202
+ length scale. This is particularly seen away from the data
1203
+ as seen in panel (e) and (f).
1204
+ MAE: 3.06 mHa
1205
+ MAE: 0.73 mHa
1206
+ MAE: 0.39 mHa
1207
+ 0.25
1208
+ 0.75
1209
+ exp(−rO−H1/2)
1210
+ 0.25
1211
+ 0.75
1212
+ exp(−rO−H2/2)
1213
+ -76.2
1214
+ -76.0
1215
+ -75.8
1216
+ -75.6
1217
+ -75.4
1218
+ -75.2
1219
+ (a) N = 29
1220
+ 0.25
1221
+ 0.75
1222
+ exp(−rO−H1/2)
1223
+ 0.25
1224
+ 0.75
1225
+ exp(−rO−H2/2)
1226
+ -76.2
1227
+ -76.0
1228
+ -75.8
1229
+ -75.6
1230
+ -75.4
1231
+ -75.2
1232
+ (b) N = 49
1233
+ 0.25
1234
+ 0.75
1235
+ exp(−rO−H1/2)
1236
+ 0.25
1237
+ 0.75
1238
+ exp(−rO−H2/2)
1239
+ -76.2
1240
+ -76.0
1241
+ -75.8
1242
+ -75.6
1243
+ -75.4
1244
+ -75.2
1245
+ (c) N = 69
1246
+ Figure 13: Resulting PES, projected on the Morse trans-
1247
+ formed O-H internuclear distances, for Matérn kernels
1248
+ trained on Morse transformed spaces with parameters
1249
+ α =2.0 for different training dataset sizes.
1250
+ If one thinks of the optimisation in a Bayesian sense, one
1251
+ would expect the opposite where minima on the LML
1252
+ become more well-defined as new data, if it still agrees
1253
+ ↓ There is no overlap of the two training set. The additional data is added incrementally to the original training data with batches of
1254
+ 5 random samples drawn from the NM clusters.
1255
+ 8
1256
+
1257
+ 9.0
1258
+ 6.0
1259
+ Q
1260
+ 3.0Optimised Morse transform of a Gaussian process feature space
1261
+ with the hyperparameters of the model of that particular
1262
+ minimum, is added.
1263
+ 6.1
1264
+ Optimisable Morse kernels for changing data
1265
+ Gaussian processes trained with a MorseMatérn (ν =
1266
+ 2.5) kernel and a MorseRBF kernel are trained on the
1267
+ same datasets and compared to the MAE(α) trends of
1268
+ figure 11. These results do not use the full GMIN imple-
1269
+ mentation and use a basic sklearn24 L-BFGS approach.
1270
+ All reported minima have projected gradients converged
1271
+ to 10−2, which is not as tight a convergence criterion as
1272
+ LML minima that were found using the GMIN imple-
1273
+ mentation. Table 1 summaries the results for both the
1274
+ Morse-transformed kernels with the training data ranging
1275
+ from the initial set to the fully “merged” one in incre-
1276
+ ments of 5 data points.
1277
+ N
1278
+ MorseRBF: α
1279
+ MorseRBF: ρ0 = ρ1
1280
+ MorseMatérn: α
1281
+ MorseMatérn: ρ0 = ρ1
1282
+ 29
1283
+ 0.534
1284
+ 0.41
1285
+ 0.580
1286
+ 2.08
1287
+ 34
1288
+ 0.774
1289
+ 0.62
1290
+ 0.960
1291
+ 0.17
1292
+ 39
1293
+ 0.804
1294
+ 0.39
1295
+ 0.804
1296
+ 0.86
1297
+ 44
1298
+ 0.937
1299
+ 0.22
1300
+ 0.514
1301
+ 3.05
1302
+ 49
1303
+ 0.802
1304
+ 0.18
1305
+ 0.539
1306
+ 0.88
1307
+ 54
1308
+ 0.806
1309
+ 0.12
1310
+ 0.569
1311
+ 0.55
1312
+ 59
1313
+ 0.440
1314
+ 0.42
1315
+ 0.993
1316
+ 2.40
1317
+ 64
1318
+ 0.391
1319
+ 0.32
1320
+ 0.622
1321
+ 2.27
1322
+ 69
1323
+ 0.949
1324
+ 0.23
1325
+ 0.448
1326
+ 0.15
1327
+ 74
1328
+ 0.750
1329
+ 0.50
1330
+ 0.525
1331
+ 0.72
1332
+ 79
1333
+ 0.251
1334
+ 0.12
1335
+ 0.721
1336
+ 0.30
1337
+ Table 1: Summary of Morse parameters and length scales for optimised Morse kernels with an increasing size of training
1338
+ set (number of samples given by N). When the Morse kernel performs better than its “standard” kernel counterpart, the
1339
+ values are written in green. One can see that the improvement is very rarely seen for the RBF kernel side and only seen
1340
+ about half the time for the kernel Matérn.
1341
+ There does not seem to be a smooth transition as data is
1342
+ added in terms of an optimal α and there does not seem
1343
+ to a consistently better method for choosing the Morse
1344
+ parameters in the Bayesian GP framework.
1345
+ Since N, the size of the training set, is not a continu-
1346
+ ous variable that changes the LML smoothly, it is not
1347
+ necessary that the change in α is smooth. One expects
1348
+ the N parameter to produce smooth changes of the LML
1349
+ minima. It is clear that MorseMatérn kernels, like the
1350
+ MorseRBF kernels, tend to favour small α values when
1351
+ optimised through the LML but it is hard to understand
1352
+ the reason for the progression seen in the tables above.
1353
+ These optimal values are not similar to the usual values
1354
+ used for Morse projections in the literature but it does
1355
+ not in terms of MAE discredit those choices.
1356
+ 7
1357
+ Conclusion
1358
+ Optimising, with a Bayesian approach, the feature space
1359
+ of the GP to produce more performant latent functions,
1360
+ in terms of MAE, is not straight forward. The LML is
1361
+ made more complex by the additional DOF and there is
1362
+ a strong correlation between the hyperparameters. Differ-
1363
+ ent transforms might not necessarily suffer from this but,
1364
+ for the Morse transform, the relation between the Morse
1365
+ parameter and the length scales is evident.
1366
+ For the Morse transform, since the limit of its parameters
1367
+ tending to a certain value (α → ∞ here) give a linear
1368
+ transform, optimising the transform parameters can also
1369
+ inform us on the “usefulness” of the transform. The
1370
+ curves of figure 8, for example, show that the transform
1371
+ 9
1372
+
1373
+ Optimised Morse transform of a Gaussian process feature space
1374
+ is actually producing less performant GP models for this
1375
+ system.
1376
+ The “best-fit” approach also produced some interesting
1377
+ results regarding the choice of testing set to produce the
1378
+ MAE curves one minimises. A target function is as-
1379
+ sumed to have an optimal Morse parameter to project
1380
+ it to a “simpler” surface↓. However, a different testing
1381
+ set can significantly affect the result of the minimisation
1382
+ (this cannot be interpreted in the Bayesian approach since
1383
+ there is no testing set in the LML minimisation). This
1384
+ should not be the case if the testing set is “complete”,
1385
+ in the sense that new samples are drawn from the same
1386
+ distribution. It will be the case if the distribution change↓
1387
+ which suggests that one should use testing data that is
1388
+ suitable for the intended use of the GP model.
1389
+ Acknowledgments
1390
+ I would like to thank the Royal Society for funding as
1391
+ well as the Wales group of the University of Cambridge
1392
+ for providing access to the GMIN suite17. Moreover, I
1393
+ would like to thank Angelos Michaelides and Albert Par-
1394
+ tay Bartòk for fruitful discussion during my PhD viva
1395
+ that improved this work.
1396
+ References
1397
+ (1)
1398
+ J. Behler, The Journal of Chemical Physics, 2016,
1399
+ 145, 170901.
1400
+ (2)
1401
+ F. Noé, A. Tkatchenko, K.-R. Müller and C.
1402
+ Clementi, Annual Review of Physical Chemistry,
1403
+ 2020, 71, 361–390.
1404
+ (3)
1405
+ V. L. Deringer, A. P. Bartók, N. Bernstein, D. M.
1406
+ Wilkins, M. Ceriotti and G. Csányi, Chemical
1407
+ Reviews, 2021, 121, PMID: 34398616, 10073–
1408
+ 10141.
1409
+ (4)
1410
+ A. P. Bartók and G. Csányi, International Journal
1411
+ of Quantum Chemistry, 2015, 115, 1051–1057.
1412
+ (5)
1413
+ S. R. Sourish Das and R. Sambasivan, Computing
1414
+ Research Repositry, 2015, abs/1509.05142, 1–17.
1415
+ (6)
1416
+ A. J. Cresswell, R. J. Wheatley, R. D. Wilkinson
1417
+ and R. S. Graham, Faraday Discussions, 2016,
1418
+ 192, 415–436.
1419
+ (7)
1420
+ J. Cui and R. V. Krems, Journal of Physics B:
1421
+ Atomic, Molecular and Optical Physics, 2016, 49,
1422
+ 224001.
1423
+ (8)
1424
+ E. Uteva, R. S. Graham, R. D. Wilkinson and R. J.
1425
+ Wheatley, The Journal of Chemical Physics, 2017,
1426
+ 147, 161706.
1427
+ (9)
1428
+ B. Kolb, P. Marshall, B. Zhao, B. Jiang and H.
1429
+ Guo, The Journal of Physical Chemistry A, 2017,
1430
+ 121, PMID: 28287725, 2552–2557.
1431
+ (10)
1432
+ E. Uteva, R. S. Graham, R. D. Wilkinson and R. J.
1433
+ Wheatley, The Journal of Chemical Physics, 2018,
1434
+ 149, 174114.
1435
+ (11)
1436
+ D. Dragoni, T. D. Daff, G. Csányi and N. Marzari,
1437
+ Physical Review Materials, 2018, 2, 1–16.
1438
+ (12)
1439
+ J. Dai and R. V. Krems, Journal of Chemical The-
1440
+ ory and Computation, 2020, 16, 1386–1395.
1441
+ (13)
1442
+ M. F. Langer, A. Goeßmann and M. Rupp, arXiv,
1443
+ 2020.
1444
+ (14)
1445
+ C. Qu, Q. Yu and J. M. Bowman, Annual Review
1446
+ of Physical Chemistry, 2018, 69, 151–175.
1447
+ (15)
1448
+ J. Sacks, S. B. Schiller and W. J. Welch, Techno-
1449
+ metrics, 1989, 31, 41–47.
1450
+ (16)
1451
+ C. E. Rasmussen and C. K. I. Williams, Gaus-
1452
+ sian Processes for Machine Learning (Adaptive
1453
+ Computation and Machine Learning), 2005.
1454
+ (17)
1455
+ D. J. Wales, GMIN: A program for finding global
1456
+ minima and calculating thermodynamic properties
1457
+ from basin-sampling.
1458
+ (18)
1459
+ D. J. Wales, OPTIM: A program for optimising
1460
+ geometries and calculating pathways.
1461
+ (19)
1462
+ D. J. Wales, PATHSAMPLE: A program for gen-
1463
+ erating connected stationary point databases and
1464
+ extracting global kinetics.
1465
+ (20)
1466
+ O. M. Becker and M. Karplus, The Journal of
1467
+ Chemical Physics, 1997, 106, 1495–1517.
1468
+ (21)
1469
+ D. J. Wales, M. A. Miller and T. R. Walsh, Nature,
1470
+ 1998, 394, 758–760.
1471
+ (22)
1472
+ M. Miller, D. J. Wales and V. de Souza, disconnec-
1473
+ tionDPS: A program for creating disconnectivity
1474
+ graphs.
1475
+ (23)
1476
+ Y. Shao, Z. Gan, E. Epifanovsky, A. T. Gilbert,
1477
+ M. Wormit, J. Kussmann, A. W. Lange, A. Behn,
1478
+ J. Deng, X. Feng, D. Ghosh, M. Goldey, P. R.
1479
+ Horn, L. D. Jacobson, I. Kaliman, R. Z. Khali-
1480
+ ullin, T. Ku?, A. Landau, J. Liu, E. I. Proynov,
1481
+ Y. M. Rhee, R. M. Richard, M. A. Rohrdanz, R. P.
1482
+ Steele, E. J. Sundstrom, H. L. W. III, P. M. Zim-
1483
+ merman, D. Zuev, B. Albrecht, E. Alguire, B.
1484
+ Austin, G. J. O. Beran, Y. A. Bernard, E. Berquist,
1485
+ K. Brandhorst, K. B. Bravaya, S. T. Brown, D.
1486
+ Casanova, C.-M. Chang, Y. Chen, S. H. Chien,
1487
+ K. D. Closser, D. L. Crittenden, M. Diedenhofen,
1488
+ R. A. D. Jr., H. Do, A. D. Dutoi, R. G. Edgar, S.
1489
+ Fatehi, L. Fusti-Molnar, A. Ghysels, A. Golubeva-
1490
+ Zadorozhnaya, J. Gomes, M. W. Hanson-Heine,
1491
+ P. H. Harbach, A. W. Hauser, E. G. Hohenstein,
1492
+ Z. C. Holden, T.-C. Jagau, H. Ji, B. Kaduk, K.
1493
+ Khistyaev, J. Kim, J. Kim, R. A. King, P. Klun-
1494
+ zinger, D. Kosenkov, T. Kowalczyk, C. M. Krauter,
1495
+ K. U. Lao, A. D. Laurent, K. V. Lawler, S. V.
1496
+ Levchenko, C. Y. Lin, F. Liu, E. Livshits, R. C.
1497
+ Lochan, A. Luenser, P. Manohar, S. F. Manzer,
1498
+ S.-P. Mao, N. Mardirossian, A. V. Marenich, S. A.
1499
+ ↓ This is not guaranteed to a GP easier to train on that surface.
1500
+ ↓ In the results presented in figure 11, it was the temperature of the Boltzmann distribution that changed between the distributions.
1501
+ 10
1502
+
1503
+ Optimised Morse transform of a Gaussian process feature space
1504
+ Maurer, N. J. Mayhall, E. Neuscamman, C. M.
1505
+ Oana, R. Olivares-Amaya, D. P. O?Neill, J. A.
1506
+ Parkhill, T. M. Perrine, R. Peverati, A. Prociuk,
1507
+ D. R. Rehn, E. Rosta, N. J. Russ, S. M. Sharada,
1508
+ S. Sharma, D. W. Small, A. Sodt, T. Stein, D.
1509
+ Stück, Y.-C. Su, A. J. Thom, T. Tsuchimochi, V.
1510
+ Vanovschi, L. Vogt, O. Vydrov, T. Wang, M. A.
1511
+ Watson, J. Wenzel, A. White, C. F. Williams, J.
1512
+ Yang, S. Yeganeh, S. R. Yost, Z.-Q. You, I. Y.
1513
+ Zhang, X. Zhang, Y. Zhao, B. R. Brooks, G. K.
1514
+ Chan, D. M. Chipman, C. J. Cramer, W. A. G. III,
1515
+ M. S. Gordon, W. J. Hehre, A. Klamt, H. F. S.
1516
+ III, M. W. Schmidt, C. D. Sherrill, D. G. Truhlar,
1517
+ A. Warshel, X. Xu, A. Aspuru-Guzik, R. Baer,
1518
+ A. T. Bell, N. A. Besley, J.-D. Chai, A. Dreuw,
1519
+ B. D. Dunietz, T. R. Furlani, S. R. Gwaltney, C.-P.
1520
+ Hsu, Y. Jung, J. Kong, D. S. Lambrecht, W. Liang,
1521
+ C. Ochsenfeld, V. A. Rassolov, L. V. Slipchenko,
1522
+ J. E. Subotnik, T. V. Voorhis, J. M. Herbert, A. I.
1523
+ Krylov, P. M. Gill and M. Head-Gordon, Molecu-
1524
+ lar Physics, 2015, 113, 184–215.
1525
+ (24)
1526
+ F. Pedregosa, G. Varoquaux, A. Gramfort, V.
1527
+ Michel, B. Thirion, O. Grisel, M. Blondel, P. Pret-
1528
+ tenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A.
1529
+ Passos, D. Cournapeau, M. Brucher, M. Perrot
1530
+ and E. Duchesnay, Journal of Machine Learning
1531
+ Research, 2011, 12, 2825–2830.
1532
+ 11
1533
+
-dA0T4oBgHgl3EQfPP9i/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
-tA0T4oBgHgl3EQfPP-n/content/tmp_files/2301.02173v1.pdf.txt ADDED
@@ -0,0 +1,1701 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ arXiv:2301.02173v1 [nlin.AO] 22 Dec 2022
2
+ Reconstruction of Phase Dynamics from Macroscopic Observations Based on Linear
3
+ and Nonlinear Response Theories
4
+ Yoshiyuki Y. Yamaguchi1∗ and Yu Terada2,3,4†
5
+ 1Department of Applied Mathematics and Physics,
6
+ Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan
7
+ 2Neurobiology Section, Division of Biological Sciences,
8
+ University of California San Diego, La Jolla, CA 92093, United States of America
9
+ 3Institute for Physics of Intelligence, Department of Physics Graduate School of Science,
10
+ The University of Tokyo 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
11
+ 4Laboratory for Neural Computation and Adaptation,
12
+ RIKEN Center for Brain Science, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
13
+ We propose a novel method to reconstruct phase dynamics equations from responses in macro-
14
+ scopic variables to weak inputs. Developing linear and nonlinear response theories in coupled phase-
15
+ oscillators, we derive formulae which connect the responses with the system parameters including
16
+ the time delay in interactions. We examine our method by applying it to two phase models, one
17
+ of which describes a mean-field network of the Hodgkin–Huxley type neurons with a nonzero time
18
+ delay.
19
+ The method does not require much invasiveness nor microscopic observations, and these
20
+ advantages highlight its broad applicability in various fields.
21
+ Rhythmical phenomena have been ubiquitously ob-
22
+ served in nature as well as in engineering systems and
23
+ attracted a wide spectrum of interests [1–3].
24
+ Specific
25
+ rhythmical dynamics are believed to play crucial func-
26
+ tional roles in information processing of the brain [4, 5].
27
+ Theoretical analysis have contributed to understanding
28
+ the nature of interacting rhythmical systems. One signif-
29
+ icant success in theoretical researches is the phase reduc-
30
+ tion, which reduces a high-dimensional rhythmic dynam-
31
+ ical system to a one-dimensional phase-oscillator system
32
+ by eliminating the other nonessential degrees of freedom
33
+ [6–8].
34
+ In this framework, a collective system of inter-
35
+ acting units is described by a coupled phase-oscillator
36
+ system, which consists of the natural frequency distribu-
37
+ tion, coupling function, and time delay in interactions.
38
+ A dynamical system behind an observed rhythmic phe-
39
+ nomenon in the real world is mostly, however, unknown,
40
+ while the knowledge helps to profoundly understand, pre-
41
+ dict, and control it. This means high demand to specify
42
+ the underlying coupled phase-oscillator system.
43
+ As the reconstruction is a central issue in coupled
44
+ phase-oscillator systems, many works have proposed re-
45
+ construction methods [9–18]. However, there are mainly
46
+ two rooms that should be addressed. The first is the as-
47
+ sumption of accessibility to individual elements. The pre-
48
+ vious works assume that time series of almost all elements
49
+ are available, which implausible in some situations. For
50
+ example, with electroencephalogram or functional mag-
51
+ netic response imaging signals, we can obtain only meso-
52
+ scopic or macroscopic activity of the nervous systems.
53
+ The second is the inference of the time delay. The exis-
54
+ tence of the time delay is in principle inevitable in real
55
+ systems, and can drastically change dynamics [19, 20]. It
56
57
58
+ is therefore a next step to develop a method that can be
59
+ implemented with unknown interaction delay.
60
+ Here, we utilize the linear response theory for coupled
61
+ phase-oscillator systems [21–23] with the aid of a nonlin-
62
+ ear response theory. We apply weak external forces into
63
+ a system, and observe asymptotic responses of order pa-
64
+ rameters, which are macroscopic variables. We note that
65
+ it does not require time series of individual elements and
66
+ that the time delay is tractable. Further, applied external
67
+ forces are assumed substantially weak, since we focus on
68
+ a regime where the linear response theory is valid. This
69
+ assumption brings another advantage that our approach
70
+ possesses, because strong inputs into a system may cause
71
+ an undesirable change in states of a system. The essen-
72
+ tial assumptions on models are that the system has the
73
+ mean-field, all-to-all homogeneous interactions and that
74
+ the system lies in the nonsynchronized state.
75
+ For the
76
+ first assumption, it is worth remarking that the all-to-
77
+ all interaction may not be extremely special, because the
78
+ criticality in the small-world network [24] belongs to the
79
+ universality class of the all-to-all interaction [25, 26]. The
80
+ mean-field analysis employed here could be extended by
81
+ assuming statistics in couplings [27, 28]. The second as-
82
+ sumption comes from the effectiveness of linear response
83
+ theory developed in [23] and here.
84
+ Based on the phase reduction [29] and following the
85
+ first assumption, we describe the underlying coupled
86
+ phase-oscillator system by
87
+ dθj
88
+ dt = ωj + 1
89
+ N
90
+ N
91
+
92
+ k=1
93
+ Γ (θj(t) − θk(t − τ)) + H(θj(t), t; ωex).
94
+ (1)
95
+ The variable θj(t) represents the phase of the jth oscilla-
96
+ tor at time t, the constant ωj is the natural frequency
97
+ following the natural frequency distribution g(ω), the
98
+ function Γ represents the coupling function, the constant
99
+ τ is the time delay for the coupling.
100
+ The function H
101
+
102
+ 2
103
+ represents the external force and the constant ωex is its
104
+ frequency. The system parameters g(ω), Γ, and τ are
105
+ intrinsically determined but unknown, and we will infer
106
+ them from observation of responses to the external force
107
+ H by varying the controllable frequency ωex. The cou-
108
+ pling function Γ(θ) is 2π-periodic and is expanded into
109
+ the Fourier series as
110
+ Γ (θ) = −
111
+
112
+
113
+ m=1
114
+ Km sin (mθ + αm) ,
115
+ (2)
116
+ where Km is the coupling strength and αm is the phase-
117
+ lag parameter for the mth Fourier component of Γ(θ).
118
+ We here apply the external force as
119
+ H (θ, t; ωex) = −Θ(t)
120
+
121
+
122
+ m=1
123
+ hm sin [m (θ − ωext)] ,
124
+ (3)
125
+ where hm is the amplitude of the mth mode. The func-
126
+ tion Θ(t) is the unit step function: The external force is
127
+ off for t < 0 and kicks in at t = 0.
128
+ The dynamics (A1) are described in the limit N → ∞
129
+ by the equation of continuity [30] governing F(θ, ω, t),
130
+ which is the probability density function at the time t
131
+ and normalized as
132
+ � ∞
133
+ −∞ dω
134
+ � 2π
135
+ 0
136
+ dθ F(θ, ω, t) = 1.
137
+ The
138
+ nonsynchronized state specified as F0(ω) = g(ω)/(2π),
139
+ which corresponds to the uniform distribution over θ, is
140
+ a stationary solution to the equation of continuity. The
141
+ order parameters, whose responses we observe, are de-
142
+ fined by [31]
143
+ zn(t) =
144
+ � ∞
145
+ −∞
146
+
147
+ � 2π
148
+ 0
149
+ dθ einθF(θ, ω, t).
150
+ (4)
151
+ Assuming that the external force h = (h1, h2, · · · ) is
152
+ sufficiently small, we perturbatively analyze the equation
153
+ of continuity by using the Fourier transform in θ and
154
+ the Laplace transform in t. Supposing that F0 is stable,
155
+ we obtain the asymptotic evolution of zn(t) in the lin-
156
+ ear regime as e−inωextzn(t)
157
+ t→∞
158
+ −−−→ χn(ωex)hn + O(∥h∥2),
159
+ where we suppose n > 0 hereafter [23]. Smallness of h
160
+ ensures that observation of e−inωextzn provides a good
161
+ approximation of χn(ωex)hn.
162
+ Moreover, if we apply
163
+ hm (m > 0) and observe e−inωextzn (n ̸= m), then
164
+ we have a nonlinear response of order O(∥h∥2).
165
+ Our
166
+ goal is to obtain formulae that allow to reconstruct τ,
167
+ Km’s, αm’s, and g(ω) from observation date of {χn(ωex)}
168
+ and nonlinear responses for a set of external frequency,
169
+ ωex ∈ {ω1
170
+ ex, · · · , ωS
171
+ ex}, where ω1
172
+ ex < · · · < ωS
173
+ ex. We call
174
+ a sampling reliable, if the range ωS
175
+ ex − ω1
176
+ ex is sufficiently
177
+ large and the gaps ωi+1
178
+ ex
179
+ − ωi
180
+ ex are sufficiently small.
181
+ The susceptibility χn(ωex) of the linear response reads
182
+ [32]
183
+ χn(ωex) =
184
+ G(ωex)
185
+ 2 − Ln(ωex)G(ωex)
186
+ (n > 0),
187
+ (5)
188
+ where Ln(ωex)
189
+ =
190
+ Kne−i(αn+nωexτ)
191
+ and G(ωex)
192
+ =
193
+ πg(ωex) + i PV
194
+ � ∞
195
+ −∞ dω g(ω)/(ω − ωex). The symbol PV
196
+ indicates the Cauchy principal value. We remark that
197
+ G(ωex) does not depend on the mode number n. Thanks
198
+ to this independence, once we obtain one of Lm’s, say Ln,
199
+ the other coefficients are obtained thought the relation
200
+ Lm(ωex) − Ln(ωex) =
201
+ 1
202
+ χn(ωex) −
203
+ 1
204
+ χm(ωex).
205
+ (6)
206
+ This is the key relation in our method.
207
+ An obtained
208
+ Lm infers the natural frequency distribution g(ω) from
209
+ observation of the susceptibility χm(ωex) as
210
+ g(ω) = 1
211
+ π Re G(ω) = 1
212
+ π Re
213
+
214
+ 2χm(ω)
215
+ 1 + Lm(ω)χm(ω)
216
+
217
+ .
218
+ (7)
219
+ Our method is twofold: inference of τ (Procedure-1)
220
+ and the others (Procedure-2).
221
+ The latter is further
222
+ decomposed into the two cases of τ > 0 (Procedure-
223
+ 2A) and τ = 0 (Procedure-2B).
224
+ Procedure-1 performs a finite Fourier transform
225
+ Lmn(t) =
226
+ 1
227
+ ωSex − ω1ex
228
+ � ωS
229
+ ex
230
+ ω1
231
+ ex
232
+ [Lm(ωex) − Ln(ωex)]eiωextdωex.
233
+ (8)
234
+ If the sampling of ωex is perfectly reliable so as to repro-
235
+ duce the integral of (8) in the limit ωS
236
+ ex − ω1
237
+ ex → ∞, we
238
+ have Lmn(t)
239
+ ωS
240
+ ex−ω1
241
+ ex→∞
242
+ −−−−−−−−→ Kme−iαmδt,mτ −Kne−iαnδt,nτ,
243
+ where δt,t′ is the Kronecker delta. The absolute value
244
+ |Lmn(t)| has one (τ = 0) or two (τ ̸= 0) peaks at t = mτ
245
+ and t = nτ, and the peak positions infer the time delay τ.
246
+ An actual sampling induces two types of errors from the
247
+ above limit: One comes from boundedness of ωS
248
+ ex − ω1
249
+ ex,
250
+ and the other from finiteness of the sample number. The
251
+ latter type concerns errors of the numerical integration.
252
+ Nevertheless, large peaks appear at t = mτ and t = nτ if
253
+ the sampling is sufficiently reliable, and Km and Kn are
254
+ sufficiently large comparing with the errors.
255
+ Procedure-2A
256
+ uses
257
+ the
258
+ relation
259
+ Lmn(mτ)
260
+ =
261
+ Kme−iαm under a reliable sampling of ωex to infer Km
262
+ and αnm. They with τ give the factor Lm(ω), and the
263
+ natural frequency distribution g(ω) is inferred by (7). We
264
+ remark that we solely used linear responses up to this
265
+ procedure.
266
+ Procedure-2B is for τ = 0, since the peak at t = 0
267
+ mixes the modes m and n, Lmn(0) = Kme−iαm −
268
+ Kne−iαn.
269
+ The linear equations for Kme−iαm (m =
270
+ 1, 2, 3) obtained from L12(0), L13(0), and L23(0), for in-
271
+ stance, are degenerate. We thus use a nonlinear response
272
+ to infer, for example, L1:
273
+ z2 in O(∥h∥2) can be ob-
274
+ served by applying the external force in the first mode
275
+ h = (h1, 0, 0, · · · ) as e−i2ωextz2(t)
276
+ t→∞
277
+ −−−→ χ11
278
+ 2 (ωex)h2
279
+ 1. The
280
+ nonlinear response coefficient is theoretically obtained as
281
+ [32]
282
+ χ11
283
+ 2 (ωex) =
284
+ 2iG′(ωex)
285
+ [2 − L2G(ωex)][2 − L1G(ωex)]2 ,
286
+ (9)
287
+ where G′(ωex) is the derivative of G(ωex) with respect
288
+ to ωex. Solving (9) we have one expression of G′(ωex).
289
+
290
+ 3
291
+ TABLE I. True and inferred parameter values of Model-1
292
+ and Model-2. The inferred values are given for each sample
293
+ set. NI means noninferred values, because there is no clear
294
+ peak around t = 3τ in neither |L34| nor |L35|. Procedure-1
295
+ implies that K4 should be sufficiently small from absence of
296
+ clear peak of |L45(t)| [see Fig. 1(d)].
297
+ Model-1 τ
298
+ K1
299
+ α1
300
+ K2
301
+ α2
302
+ K3
303
+ α3
304
+ Truth
305
+ 2
306
+ 1.379 0.7884 0.568 -3.0316 0.154 -0.7546
307
+ Ω50
308
+ 1
309
+ 1.987
310
+ 1.383 0.820
311
+ 0.596 -3.016
312
+ 0.153 -0.864
313
+ Ω25
314
+ 1
315
+ 1.995
316
+ 1.381 0.793
317
+ 0.582 -3.111
318
+ NI
319
+ NI
320
+ Model-2 τ
321
+ K1
322
+ α1
323
+ K2
324
+ α2
325
+ Truth
326
+ 0
327
+ 1
328
+ 1
329
+ 0
330
+ 0
331
+ Ω81
332
+ 2
333
+ 0.001
334
+ 0.958 1.001
335
+ 0.044 -2.119
336
+ Ω41
337
+ 2
338
+ -0.001 1.063 0.497
339
+ 0.521 -0.706
340
+ We independently have another expression of G′(ωex)
341
+ through solving (A31) by G and derivating it. The com-
342
+ bination of the above two expressions of G′(ωex) gives
343
+ L1 = K1e−iα1 =
344
+ 2χ11
345
+ 2 (ωex)
346
+ iχ2(ωex)χ′
347
+ 1(ωex) −
348
+ 1
349
+ χ1(ωex)
350
+ (10)
351
+ for τ = 0 [32]. We take the average over S estimated
352
+ values of L1 from ω1
353
+ ex, · · · , ωS
354
+ ex.
355
+ The other coefficients
356
+ Lm (m > 1) are estimated from (6) by taking the av-
357
+ erage. We remark that Procedure-2B is also applica-
358
+ ble for τ > 0, where L1 is obtained as a solution to a
359
+ quadratic equation. However, Procedure-2A provides
360
+ higher performance in inference for a nonzero time-delay
361
+ case as compared in an application [32].
362
+ By employing the theory developed above, we tackle
363
+ a reconstruction problem in two models: Model-1 has
364
+ a delay, that is, τ > 0 and Procedure-2A is applied,
365
+ while Model-2 does not and Procedure-2B is in use.
366
+ Their system parameters are arranged in Table I. Nu-
367
+ merical simulations of (A1) are performed in the use of
368
+ the second-order Runge-Kutta algorithm with the time
369
+ step ∆t = 0.01. Responses of order parameters are ob-
370
+ tained as the average in the time interval (50, 150]. The
371
+ number of oscillators is N = 105. All the numerical sim-
372
+ ulations are performed by activating only one mode in
373
+ h with strength 0.1: hm = 0.1 and hn = 0 (n ̸= m)
374
+ for the mth mode. This strength is sufficiently small for
375
+ the linear response but sufficiently large for overcoming
376
+ finite-size fluctuation of order O(1/
377
+
378
+ N) by the second-
379
+ order response of order O(∥h∥2).
380
+ Model-1 is motivated by neurobiological systems and
381
+ is connected directly to a network of the Hodgkin–Huxley
382
+ neurons. As in [33, 34], the Fourier components of the
383
+ modes m (m ≥ 4) are zero.
384
+ The time delay is set as
385
+ τ = 2, which is compatible with experimental observa-
386
+ tions [35]. Taking another experimental observation [36]
387
+ into account, we assume the log-normal natural frequency
388
+ distribution
389
+ g1(ω) =
390
+ 1
391
+ ω
392
+
393
+ 2πσ2
394
+ 1
395
+ exp
396
+
397
+ −(ln ω − µ1)2
398
+ 2σ2
399
+ 1
400
+
401
+ (11)
402
+ 0
403
+ 0.2
404
+ 0.4
405
+ 0.6
406
+ 0.8
407
+ 1
408
+ 1.2
409
+ 1.4
410
+ 0
411
+ 2
412
+ 4
413
+ 6
414
+ 8
415
+ 10
416
+ (a)
417
+ 0
418
+ 0.1
419
+ 0.2
420
+ 0.3
421
+ 0.4
422
+ 0.5
423
+ 0.6
424
+ 0.7
425
+ 0.8
426
+ 0
427
+ 2
428
+ 4
429
+ 6
430
+ 8
431
+ 10
432
+ (b)
433
+ 0
434
+ 0.05
435
+ 0.1
436
+ 0.15
437
+ 0.2
438
+ 0
439
+ 2
440
+ 4
441
+ 6
442
+ 8
443
+ 10
444
+ (c)
445
+ 0
446
+ 0.05
447
+ 0.1
448
+ 0.15
449
+ 0.2
450
+ 0
451
+ 2
452
+ 4
453
+ 6
454
+ 8
455
+ 10
456
+ (d)
457
+ |L1n(t)|
458
+ t
459
+ |L2n(t)|
460
+ t
461
+ |L3n(t)|
462
+ t
463
+ |L4n(t)|
464
+ t
465
+ FIG. 1. Procedure-1 in Model-1. |Lmn(t)| (8) computed
466
+ from the sample set Ω50
467
+ 1 . (a) m = 1 and n ∈ {2, 3, 4, 5}. (b)
468
+ m = 2 and n ∈ {3, 4, 5}. (c) m = 3 and n ∈ {4, 5}. (c) m = 4
469
+ and n ∈ {5}. The lines are n = 2 (purple chain), n = 3 (green
470
+ broken), n = 4 (blue dotted), and n = 5 (orange solid). The
471
+ vertical dashed black lines mark the inferred time-delay mτ,
472
+ and the horizontal solid black lines the inferred Km.
473
+ −1
474
+ −0.5
475
+ 0
476
+ 0.5
477
+ 1
478
+ 1.5
479
+ 2
480
+ −3
481
+ −2
482
+ −1
483
+ 0
484
+ 1
485
+ 2
486
+ 3
487
+ (a)
488
+ 0
489
+ 0.02
490
+ 0.04
491
+ 0.06
492
+ 0.08
493
+ 0.1
494
+ 0.12
495
+ 0.14
496
+ 0.16
497
+ 0
498
+ 2
499
+ 4
500
+ 6
501
+ 8
502
+ 10
503
+ (b)
504
+ Γ1(θ)
505
+ θ
506
+ Truth
507
+ Ω50
508
+ 1
509
+ Ω25
510
+ 1
511
+ g1(ω)
512
+ ω
513
+ Truth
514
+ From L1
515
+ From L2
516
+ From L3
517
+ FIG. 2.
518
+ Comparison between the truth (purple solid line)
519
+ and the inference in Model-1 having τ > 0. (a) The coupling
520
+ function Γ1(θ). The sample sets are Ω50
521
+ 1
522
+ (green broken line)
523
+ and Ω25
524
+ 1
525
+ (blue chain line). (b) The natural frequency distri-
526
+ bution g1(ω) (11) obtained from the inferred L1 (green filled
527
+ circles), L2 (blue open circles), and L3 (orange triangles) by
528
+ (7). The sample set is Ω50
529
+ 1 .
530
+ with µ1 = ln 5 and σ1 = 1. The external frequency is
531
+ sampled from the interval [0.2, 10] with the step ∆ωex =
532
+ 0.2 for the sample set Ω50
533
+ 1
534
+ (S = 50), and ∆ωex = 0.4
535
+ for the set Ω25
536
+ 1
537
+ (S = 25). We start from Procedure-
538
+ 1. We approximately compute Lmn(t) (8) by using the
539
+ midpoint algorithm, where a sampling point ωi
540
+ ex is the
541
+ midpoint. Absolute values |Lmn(t)| for the set Ω50
542
+ 1
543
+ are
544
+ reported in Fig. 1. We obtain the estimate τ = 1.987
545
+ by taking the average over the largest peak positions for
546
+ the pairs (m, n) = (3, 4) and (m′, n′) (m′ = 1, 2; n′ =
547
+ m′ + 1, · · · , 5). A graph should have two large peaks at
548
+ t = mτ and t = nτ, but some peaks are not visible in
549
+ Fig. 1. No clear peak at t = nτ implies that Kn is smaller
550
+ than the error level. Indeed, no clear peak of |L45(t)| in
551
+ Fig. 1(d) is consistent with K4 = K5 = 0. Procedure-
552
+ 2A infers the coefficients Lm’s from the value of Lmn(t)
553
+ at the peak position, where the above mentioned pairs
554
+
555
+ 4
556
+ 0
557
+ 0.2
558
+ 0.4
559
+ 0.6
560
+ 0.8
561
+ 1
562
+ 1.2
563
+ −10 −8 −6 −4 −2
564
+ 0
565
+ 2
566
+ 4
567
+ 6
568
+ 8
569
+ 10
570
+ (a)
571
+ −8
572
+ −6
573
+ −4
574
+ −2
575
+ 0
576
+ 2
577
+ 4
578
+ 6
579
+ 8
580
+ −4
581
+ −3
582
+ −2
583
+ −1
584
+ 0
585
+ 1
586
+ 2
587
+ 3
588
+ 4
589
+ (b)
590
+ |L12(t)|
591
+ t
592
+ ReL1, ImL1
593
+ ωex
594
+ ReL1
595
+ ImL1
596
+ 0.5165
597
+ -0.8068
598
+ FIG. 3.
599
+ Model-2.
600
+ (a) Procedure-1.
601
+ The peak position
602
+ is τ = 0.001 and the peak height is 1.014. (b) Procedure-
603
+ 2B to infer L1 by (10) for each external frequency ωex. The
604
+ real part ReLm (purple filled circles) and the imaginary part
605
+ ImLm (green open circles). The purple and green horizontal
606
+ solid lines mark the averaged values. The sample set is Ω81
607
+ 2 .
608
+ are in use to take the average. Performing the same pro-
609
+ cedure but using the set Ω25
610
+ 1 , we obtain another set of
611
+ inferences. The inferences are compared with the true
612
+ values in Table I. The coupling function Γ1(θ) is directly
613
+ obtained from Lm’s, and the natural frequency distribu-
614
+ tion g1(ω) is inferred through the relation (7). They are
615
+ in good agreement with the true ones for the set Ω50
616
+ 1
617
+ as
618
+ exhibited in Fig. 2. Increasing the number of samples im-
619
+ proves the inference, because the sampling set becomes
620
+ more reliable.
621
+ Model-2 is the Sakaguchi–Kuramoto model [37] which
622
+ is specified by the parameter set (K1, α1) = (1, 1) and the
623
+ other Fourier modes are zero. To demonstrate the ability
624
+ of the proposed method for general natural frequency dis-
625
+ tributions, a nonunimodal and asymmetric natural fre-
626
+ quency distribution is assumed as
627
+ g2(ω) = ae−(x−µ2)2/(2σ2
628
+ 2) + (1 − a)e−(x+µ2)2/(2σ2
629
+ 2)
630
+
631
+
632
+ ,
633
+ (12)
634
+ where a = 0.8, µ2 = 2, and σ2 = 1. The external fre-
635
+ quency is sampled from [−4, 4] with the step ∆ωex = 0.1
636
+ for the sample set Ω81
637
+ 2
638
+ (S = 81) and ∆ωex = 0.2 for the
639
+ set Ω41
640
+ 2 (S = 41). To compute the derivative χ′
641
+ 1(ωex), we
642
+ use the central difference except for the head and the end
643
+ points, namely ω1
644
+ ex and ωS
645
+ ex, for which the forward and
646
+ backward differences are in use, respectively.
647
+ From now on, we concentrate on inferences of L1 and
648
+ L2. Procedure-1 confirms that |L12(t)| has a large peak
649
+ at t = 0.001 [see Fig. 3(a)], and hence we conclude no
650
+ time-delay, τ = 0. The peak height 1.014 corresponds to
651
+ |K1e−iα1 − K2e−iα2|, and the fact K2 = 0 implies that
652
+ the peak height approximately infers the value of K1 = 1.
653
+ However, we do not know the value of K2 a priori, and we
654
+ cannot determine K1 yet. We thus use Procedure-2B,
655
+ (10), for inferring L1, and (6) for L2. They are obtained
656
+ as functions of ωex, and L1(ωex) is reported in Fig. 3(b).
657
+ We determine the inferred values of the constants L1 and
658
+ L2 by taking the average over ωex, and the constants
659
+ Km and αm (m = 1, 2) from the averaged Lm.
660
+ The
661
+ inferred values are arranged in Table I. The set Ω81
662
+ 2 infers
663
+ good values, while the set Ω41
664
+ 2
665
+ does not provide good
666
+ −1.5
667
+ −1
668
+ −0.5
669
+ 0
670
+ 0.5
671
+ 1
672
+ −3
673
+ −2
674
+ −1
675
+ 0
676
+ 1
677
+ 2
678
+ 3
679
+ (a)
680
+ 0
681
+ 0.05
682
+ 0.1
683
+ 0.15
684
+ 0.2
685
+ 0.25
686
+ 0.3
687
+ 0.35
688
+ −4
689
+ −3
690
+ −2
691
+ −1
692
+ 0
693
+ 1
694
+ 2
695
+ 3
696
+ 4
697
+ (b)
698
+ Γ2(θ)
699
+ θ
700
+ Truth
701
+ Ω81
702
+ 2
703
+ Ω41
704
+ 2
705
+ g2(ω)
706
+ ω
707
+ Truth
708
+ From L1
709
+ From L2
710
+ FIG. 4.
711
+ Comparison between the truth (purple solid line)
712
+ and the inference in Model-2 having τ = 0. (a) The cou-
713
+ pling function Γ2(θ). The sample sets are Ω81
714
+ 2
715
+ (green broken
716
+ line) and Ω41
717
+ 2
718
+ (blue chain line). (b) The natural frequency
719
+ distribution g2(ω) (12) obtained from the inferred L1 (green
720
+ filled circles) and L2 (blue open circles) through (7).
721
+ The
722
+ sample set is Ω81
723
+ 2 .
724
+ inferences, due to the lack of precision in computation of
725
+ the derivative χ′
726
+ 1(ωex). The inferred coupling function Γ2
727
+ and the natural frequency distribution g2(ω) agree with
728
+ the true ones as reported in Fig. 4.
729
+ In summary, we proposed a method to reconstruct the
730
+ underlying coupled phase-oscillator model of a collective
731
+ rhythmic system by observing responses in order param-
732
+ eters to a weak external force with varying its frequency.
733
+ Non-invasivity is respected due to weakness of the exter-
734
+ nal force, and we do not need to know activity of indi-
735
+ vidual elements of the system. The proposed method is
736
+ examined through numerical simulations in two models.
737
+ The unknown system parameters including the time de-
738
+ lay in interactions have been successfully inferred, when
739
+ the sampling of the external frequency lies on a suffi-
740
+ ciently large range with sufficiently small gaps. Finally,
741
+ we remark on potential directions of development: ex-
742
+ tensions to synchronized states, to noisy systems, and to
743
+ network systems.
744
+ Y.Y.Y. acknowledges the support of JSPS KAKENHI
745
+ Grants No. 16K05472 and No. 21K03402. Y.T. is sup-
746
+ ported by the Special Postdoctoral Research Program at
747
+ RIKEN and JSPS KAKENHI Grant No. 19K20365.
748
+
749
+ 5
750
+ Appendix A: Linear and nonlinear response theories
751
+ 1.
752
+ Equations to analyze
753
+ We consider the equation of motion
754
+ dθj
755
+ dt = ωj + 1
756
+ N
757
+ N
758
+
759
+ k=1
760
+ Γ (θj(t) − θk(t − τ)) + H(θj, t; ωex),
761
+ (j = 1, · · · , N).
762
+ (A1)
763
+ The variable θj is the phase of the jth phase-oscillator. The natural frequency ωj follows the natural frequency
764
+ distribution g(ω). The function Γ is the coupling function and the constant τ is the time delay. We assume that the
765
+ external force H is sufficiently small, i.e. ∥H∥ ≪ 1, where ∥H∥ is a certain norm of the function H. Dynamics of
766
+ (A1) are described in the limit N → ∞ by the equation of continuity
767
+ ∂F
768
+ ∂t + ∂
769
+ ∂θ {[ω + v[F] + H(θ, t; ωex)] F} = 0,
770
+ (A2)
771
+ where
772
+ v[F](θ, t; τ) =
773
+ � ∞
774
+ −∞
775
+
776
+ � 2π
777
+ 0
778
+ dθ Γ(θ − θ′)F(θ′, ω, t − τ).
779
+ (A3)
780
+ Suppose that the nonsynchronized state F0(ω) = g(ω)/(2π) is stable stationary under H ≡ 0. We expand F around
781
+ F0 as
782
+ F(θ, ω, t) = F0(ω) + f (1)(θ, ω, t) + f (2)(θ, ω, t) + · · · ,
783
+ (A4)
784
+ where f (k) = O(∥H∥k). Substituting the expansion (A4) into the equation of continuity (A2), we have
785
+ ∂f (1)
786
+ ∂t
787
+ + ∂
788
+ ∂θ
789
+
790
+ ωf (1) +
791
+
792
+ v[f (1)] + H
793
+
794
+ F0
795
+
796
+ = 0
797
+ (A5)
798
+ in the order of O(∥H∥), and
799
+ ∂f (2)
800
+ ∂t
801
+ + ∂
802
+ ∂θ
803
+
804
+ ωf (2) + v[f (2)]F0 +
805
+
806
+ v[f (1)] + H
807
+
808
+ f (1)�
809
+ = 0
810
+ (A6)
811
+ in the order of O(∥H∥2). We analyze (A5) and (A6) through the Fourier series expansion in θ and the Laplace
812
+ transform in t.
813
+ 2.
814
+ Fourier series expansion
815
+ The coupling function Γ, the external force H, and the perturbations f (k) are 2π-periodic functions with respect
816
+ to θ, and they are expanded into the Fourier series as
817
+ Γ (θ) = −
818
+
819
+
820
+ m=1
821
+ Km sin (mθ + αm) = −
822
+
823
+ n̸=0
824
+ Γneinθ,
825
+ (A7)
826
+ H (θ, t; ωex) = −Θ(t)
827
+
828
+
829
+ m=1
830
+ hm sin [m (θ − ωext)] = −
831
+
832
+ n̸=0
833
+ einθHn(t; ωex),
834
+ (A8)
835
+ and
836
+ f (k)(θ, ω, t) =
837
+
838
+ n̸=0
839
+ einθf (k)
840
+ n (ω, t).
841
+ (A9)
842
+
843
+ 6
844
+ Here, we have the relations
845
+ Γn = iKn
846
+ 2 eiαn,
847
+ Γ−n = Γ∗
848
+ n
849
+ (n > 0)
850
+ (A10)
851
+ and
852
+ Hn(t; ωex) = ihn
853
+ 2 Θ(t)e−inωext,
854
+ H−n = H∗
855
+ n
856
+ (n > 0)
857
+ (A11)
858
+ where the superscript ∗ represents the complex conjugate. We assume that Γ0 = 0, since it is renormalized into ω, in
859
+ other words, into a shift of the natural frequency distribution g(ω). Note that there is no external force of the zeroth
860
+ mode: H0 ≡ 0. The order parameter functionals zn[f]’s are defined by
861
+ zn[f](t) =
862
+ � ∞
863
+ −∞
864
+
865
+ � 2π
866
+ 0
867
+ dθ einθf(θ, ω, t) = 2π
868
+ � ∞
869
+ −∞
870
+ f−n(ω, t).
871
+ (A12)
872
+ The Fourier series expansions give
873
+ ∂f (1)
874
+ n
875
+ ∂t
876
+ + in
877
+
878
+ ωf (1)
879
+ n
880
+ +
881
+
882
+ Γnz(1)
883
+ −n(t − τ) + Hn
884
+
885
+ F0
886
+
887
+ = 0
888
+ (A13)
889
+ in O(∥H∥) and
890
+ ∂f (2)
891
+ n
892
+ ∂t
893
+ + in
894
+
895
+ ωf (2)
896
+ n
897
+ + Γnz(2)
898
+ −n(t − τ)F0 + N (2)
899
+ n
900
+
901
+ = 0
902
+ (A14)
903
+ in O(∥H∥2). The symbol z(k)
904
+ −n(t) = z−n[f (k)](t) was introduced to simplify the notation. The second-order nonlinear
905
+ term N (2)
906
+ n
907
+ is defined by
908
+ N (2)
909
+ n (ω, t) =
910
+
911
+ m
912
+
913
+ Γmz(1)
914
+ −m(t − τ) + Hm(t)
915
+
916
+ f (1)
917
+ n−m(ω, t).
918
+ (A15)
919
+ 3.
920
+ Laplace transform
921
+ From now on, the Laplace transform of a function is indicated by the upper hat symbol. For an arbitrary analytic
922
+ function ϕ(t), the Laplace transform is defined by
923
+ �ϕ(s) =
924
+ � ∞
925
+ 0
926
+ e−stϕ(t)dt,
927
+ Re(s) > 0,
928
+ (A16)
929
+ where the domain Re(s) > 0 is introduced to ensure the convergence of integral. The perturbation f is zero at
930
+ t = 0, since F0 is stable stationary and no external force is applied in t < 0. We hence have the Laplace transformed
931
+ equations as
932
+ (s + inω) �f (1)
933
+ n
934
+ + in
935
+
936
+ Γne−sτ �z(1)
937
+ −n + �Hn
938
+
939
+ F0 = 0
940
+ (A17)
941
+ in O(∥H∥) and
942
+ (s + inω) �f (2)
943
+ n
944
+ + in
945
+
946
+ Γne−sτ �z(2)
947
+ −nF0 + �
948
+ N (2)
949
+ n
950
+
951
+ = 0
952
+ (A18)
953
+ in O(∥H∥2).
954
+ 4.
955
+ Linear response : O(∥H∥)
956
+ The equation (A17) is solved algebraically. Dividing s + inω, multiplying by 2π, and integrating over ω, we have
957
+ �z(1)
958
+ −n(s) = −
959
+ �Hn(s)
960
+ Λn(s) In(s),
961
+ Re(s) > 0.
962
+ (A19)
963
+
964
+ 7
965
+ where the spectrum function Λn(s) (n ̸= 0) is
966
+ Λn(s) = 1 + Γne−sτIn(s),
967
+ Re(s) > 0.
968
+ (A20)
969
+ and the integral In(s) is
970
+ In(s) =
971
+ � ∞
972
+ −∞
973
+ g(ω)
974
+ ω − is/n,
975
+ Re(s) > 0.
976
+ (A21)
977
+ The domain Re(s) > 0 comes from the domain of the Laplace transform (A16).
978
+ In(s), and Λn(s) and z(1)
979
+ −n(s) accordingly, are analytically continued to the whole complex s plane as follows. The
980
+ integrand of In(s) has the singularity at ω = is/n, which is located on the upper (lower) half of the complex ω plane
981
+ for Re(s) > 0 and n > 0 (n < 0). Moving the singularity to the other half, we smoothly modify the integral contour,
982
+ the real axis, so as to avoid the singularity. As a result, the residue is added, because the modified contour, denoted
983
+ by L, encloses the singularity entirely for Re(s) < 0 and half for Re(s) = 0. The continued integral In(s) is therefore
984
+ In(s) =
985
+
986
+ L
987
+ g(ω)
988
+ ω − is/ndω =
989
+
990
+
991
+
992
+
993
+
994
+
995
+
996
+
997
+ ���
998
+
999
+
1000
+
1001
+
1002
+
1003
+
1004
+ � ∞
1005
+ −∞
1006
+ g(ω)
1007
+ ω − is/ndω
1008
+ (Re(s) > 0)
1009
+ PV
1010
+ � ∞
1011
+ −∞
1012
+ g(ω)
1013
+ ω − is/ndω + sgn(n)iπg(is/n) (Re(s) = 0)
1014
+ � ∞
1015
+ −∞
1016
+ g(ω)
1017
+ ω − is/ndω + sgn(n)i2πg(is/n)
1018
+ (Re(s) < 0)
1019
+ (A22)
1020
+ where PV represents the Cauchy principal value, and sgn(n) is the sign of n representing the direction of the integral
1021
+ counter enclosing the singularity.
1022
+ Temporal evolution of z(1)
1023
+ −n(t) is obtained by performing the inverse Laplace transform as
1024
+ z(1)
1025
+ −n(t) =
1026
+ 1
1027
+ 2πi
1028
+ � σ+i∞
1029
+ σ−i∞
1030
+ est�z(1)
1031
+ −n(s)ds,
1032
+ (A23)
1033
+ where σ ∈ R is larger than the real parts of any singularities of �z(1)
1034
+ −n(s). The continuation of �z(1)
1035
+ −n(s) permits us to use
1036
+ the residue theorem by adding the half-circle lying in left-half of the complex s plane; The inverse Laplace transform
1037
+ picks up the singularity of �z(1)
1038
+ −n(s). The asymptotic behavior is determined by the pole of �z−n(s) which has the largest
1039
+ real part. Since we assumed that the reference state F0 is stable, all the roots of Λn(s) are in the region Re(s) < 0,
1040
+ which induce the Landau damping. The asymptotic behavior is hence determined by the poles of �Hn(s) and �H−n(s),
1041
+ which are
1042
+ �Hn(s) = ihn
1043
+ 2
1044
+ 1
1045
+ s + inωex
1046
+ ,
1047
+ �H−n(s) = −ihn
1048
+ 2
1049
+ 1
1050
+ s − inωex
1051
+ ,
1052
+ (n > 0).
1053
+ (A24)
1054
+ The continued integrals In(s) at the poles are
1055
+ In(−inωex) = iG∗(ωex),
1056
+ I−n(inωex) = −iG(ωex),
1057
+ (n > 0)
1058
+ (A25)
1059
+ where
1060
+ G(ωex) = πg(ωex) + iPV
1061
+ � ∞
1062
+ −∞
1063
+ g(ω)
1064
+ ω − ωex
1065
+ dω.
1066
+ (A26)
1067
+ The spectrum functions at the poles are
1068
+ Λn(−inωex) = 1
1069
+ 2 [2 − L∗
1070
+ nG∗(ωex)] ,
1071
+ Λ−n(inωex) = 1
1072
+ 2 [2 − LnG(ωex)] ,
1073
+ (n > 0)
1074
+ (A27)
1075
+ where Ln = Kne−i(αn+nωexτ).
1076
+ Putting all together, the asymptotic temporal evolution is for n > 0 is
1077
+ z(1)
1078
+ −n(t)
1079
+ t→∞
1080
+ −−−→ e−inωext
1081
+ G∗(ωex)
1082
+ 2 − L∗nG∗(ωex)hn,
1083
+ z(1)
1084
+ n (t)
1085
+ t→∞
1086
+ −−−→ einωext
1087
+ G(ωex)
1088
+ 2 − LnG(ωex)hn.
1089
+ (A28)
1090
+
1091
+ 8
1092
+ The susceptibility χm
1093
+ n (ωex) defined by
1094
+ e−inωextz(1)
1095
+ n (t)
1096
+ t→∞
1097
+ −−−→
1098
+
1099
+ m
1100
+ χm
1101
+ n (ωex)hm + O(∥H∥2),
1102
+ einωextz(1)
1103
+ −n(t)
1104
+ t→∞
1105
+ −−−→
1106
+
1107
+ m
1108
+ χ−m
1109
+ −n (ωex)h−m + O(∥H∥2),
1110
+ (n > 0)
1111
+ (A29)
1112
+ is hence
1113
+ χm
1114
+ n (ωex) = χn(ωex)δnm,
1115
+ χ−m
1116
+ −n (ωex) = χ−n(ωex)δnm,
1117
+ (n > 0),
1118
+ (A30)
1119
+ where
1120
+ χn(ωex) =
1121
+ G(ωex)
1122
+ 2 − LnG(ωex),
1123
+ χ−n(ωex) =
1124
+ G∗(ωex)
1125
+ 2 − L∗nG∗(ωex),
1126
+ (n > 0).
1127
+ (A31)
1128
+ 5.
1129
+ Nonlinear response : O(∥H∥2)
1130
+ The same way as O(∥H∥) gives the Laplace transform �z(2)
1131
+ −n(s) as
1132
+ �z(2)
1133
+ −n(s) = −2π
1134
+ Λn(s)
1135
+ � ∞
1136
+ −∞
1137
+
1138
+ N (2)
1139
+ n (ω, s)
1140
+ ω − is/n dω.
1141
+ (A32)
1142
+ We need the Laplace transform of products, which appear in �
1143
+ N (2)
1144
+ n .
1145
+ a.
1146
+ Laplace transform of a product function
1147
+ For analytic functions f(t) and g(t), we have the relation
1148
+
1149
+ fg(s) =
1150
+ 1
1151
+ 2πi
1152
+ � σg+i∞
1153
+ σg−i∞
1154
+ �f(s − s′)�g(s′)ds′,
1155
+ (A33)
1156
+ where σg ∈ R is larger than the real parts of any singularities of �g(s). A proof of (A33) is straightforward. We denote
1157
+ the inverse Laplace transforms of �f(s) and �g(s) as
1158
+ f(t) =
1159
+ 1
1160
+ 2πi
1161
+ � σf +i∞
1162
+ σf −i∞
1163
+ es1t �f(s1)ds1,
1164
+ (A34)
1165
+ where σf ∈ R is larger than the real parts of any singularities of �f(s), and
1166
+ g(t) =
1167
+ 1
1168
+ 2πi
1169
+ � σg+i∞
1170
+ σg−i∞
1171
+ es2t�g(s2)ds2.
1172
+ (A35)
1173
+ Changing the variables as (s, s′) = (s1 + s2, s2), the product function (fg)(t) is expressed as
1174
+ (fg)(t) =
1175
+ 1
1176
+ 2πi
1177
+ � σf +σg+i∞
1178
+ σf +σg−i∞
1179
+ ds est
1180
+
1181
+ 1
1182
+ 2πi
1183
+ � σg+i∞
1184
+ σg−i∞
1185
+ ds′ �f(s − s′)�g(s′)
1186
+
1187
+ .
1188
+ (A36)
1189
+ The integral over s is the inverse Laplace transform of the inside of the square brackets, and hence we have the relation
1190
+ (A33).
1191
+ We note that we pick up the singularities of �g only in the integral with respect to s′. Let a be a pole of �f(s), and
1192
+ b of �g(s). By the definitions, we have Re(a) < σ1 and Re(b) < σ2. The convolution yields a pole of �f which lies on
1193
+ the right-side of the line Re(s′) = σg, since s′ = s − a = σf + σg − a > σg. Therefore, this singularity is not enclosed
1194
+ by the integral counter, which consists of the line Re(s′) = σg and the left half-circle passing through the point at
1195
+ infinity on the left-half complex s′ plane.
1196
+
1197
+ 9
1198
+ b.
1199
+ Convolution in �
1200
+ N (2)
1201
+ n
1202
+ Let us denote
1203
+ Vm(t) = Γmz(1)
1204
+ −m(t − τ) + Hm(t),
1205
+ (A37)
1206
+ which rewrite the nonlinear term N (2)
1207
+ n
1208
+ into
1209
+ N (2)
1210
+ n (ω, t) =
1211
+
1212
+ m
1213
+ Vm(t)f (1)
1214
+ n−m(ω, t).
1215
+ (A38)
1216
+ The Laplace transform �z(2)
1217
+ −n(s) is expressed as
1218
+ �z(2)
1219
+ −n(s) = −2π
1220
+ Λn(s)
1221
+
1222
+ m
1223
+ � ∞
1224
+ −∞
1225
+ L[Vmf (1)
1226
+ n−m](s)
1227
+ ω − is/n
1228
+ dω,
1229
+ (A39)
1230
+ where L represents the Laplace transform operator.
1231
+ The Laplace transform of Vm is
1232
+ �Vm(s) = Γme−sτ �z(1)
1233
+ −m(s) + �Hm(s) =
1234
+ �Hm(s)
1235
+ Λm(s) ,
1236
+ (A40)
1237
+ where we used (A19) and (A20). The Laplace transform �f (1)
1238
+ m (ω, s) is then from (A17)
1239
+ �f (1)
1240
+ m (ω, s) = −
1241
+ F0(ω)
1242
+ ω − is/m
1243
+ �Hm(s)
1244
+ Λm(s) .
1245
+ (A41)
1246
+ The Laplace transform of Vmf (1)
1247
+ n−m is
1248
+ L[Vmf (1)
1249
+ n−m](s) =
1250
+ 1
1251
+ 2πi
1252
+ � σ2+i∞
1253
+ σ2−i∞
1254
+ �Hm(s′)
1255
+ Λm(s′)
1256
+ F0(ω)
1257
+ ω − i s−s′
1258
+ n���m
1259
+ �Hn−m(s − s′)
1260
+ Λn−m(s − s′) ds′.
1261
+ (A42)
1262
+ Remembering the note at the end of Sec. A 5 a and keeping in mind that we are interested in the asymptotic temporal
1263
+ evolution, we pick up the pole of �Hm(s′) which is at s′ = −imωex. The principal part of the Laplace transform is
1264
+ then
1265
+ PPL[Vmf (1)
1266
+ n−m](s) =
1267
+ Res( �Hm)
1268
+ Λm(−imωex)
1269
+ �Hn−m(s + imωex)
1270
+ Λn−m(s + imωex)
1271
+ F0(ω)
1272
+ ω − i s+imωex
1273
+ n−m
1274
+ ,
1275
+ (A43)
1276
+ where PP represents the principal part surviving in the limit t → ∞, and Res( �Hm) = sgn(m)ihm/2 is the residue of
1277
+ �Hm. Substituting the above expression into (A44), we have
1278
+ PP�z(2)
1279
+ −n(s) =
1280
+ −1
1281
+ Λn(s)
1282
+
1283
+ m
1284
+ Res( �Hm)
1285
+ Λm(−imωex)
1286
+ �Hn−m(s + imωex)
1287
+ Λn−m(s + imωex) Tn,m(s),
1288
+ (A44)
1289
+ where
1290
+ Tn,m(s) =
1291
+
1292
+ L
1293
+ g(ω)
1294
+
1295
+ ω − i s+imωex
1296
+ n−m
1297
+ � �
1298
+ ω − i s
1299
+ n
1300
+ �dω.
1301
+ (A45)
1302
+ We pick up the pole of �Hn−m(s + imωex), which is at s = −inωex, for the asymptotic temporal evolution. Then,
1303
+ einωextz(2)
1304
+ −n(t)
1305
+ t→∞
1306
+ −−−→
1307
+ −1
1308
+ Λn(−inωex)
1309
+
1310
+ m
1311
+ Res( �Hm)Res( �Hn−m)Tn,m(−inωex)
1312
+ Λm(−imωex)Λn−m(−i(n − m)ωex).
1313
+ (A46)
1314
+ We have to be careful for the value Tn,m(−inωex), because the integrand of Tn,m(−inωex) has the pole of order two
1315
+ at ω = ωex.
1316
+
1317
+ 10
1318
+ c.
1319
+ Nonlinear response coefficient
1320
+ From now on, we focus on the linear response of the mode 2 induced by the external force of the mode 1, i.e. h1 > 0
1321
+ and hl = 0 (l > 1). Setting n = 2 and m = 1 in (A46), we have
1322
+ e2iωextz(2)
1323
+ −2(t)
1324
+ t→∞
1325
+ −−−→
1326
+ T2,1(−2iωex)
1327
+ 4Λ2(−2iωex)[Λ1(−iωex)]2 h2
1328
+ 1.
1329
+ (A47)
1330
+ To obtain the value T2,1(−2iωex), we first perform the partial fraction decomposition as
1331
+ T2,1(s) =
1332
+ 2
1333
+ i(s + 2iωex) [I1(s + iωex) − I2(s)] .
1334
+ (A48)
1335
+ In the limit s → −2iω′
1336
+ ex (ω′
1337
+ ex ̸= ωex) from the upper-half s plane, we have
1338
+ T2,1(−2iω′
1339
+ ex) =
1340
+ i
1341
+ ω′ex − ωex
1342
+ [G∗(2ω′
1343
+ ex − ωex) − G∗(ω′
1344
+ ex)] .
1345
+ (A49)
1346
+ Further taking the limit ω′
1347
+ ex → ωex, we have
1348
+ T2,1(−2iωex) = i (G∗)′ (ωex).
1349
+ (A50)
1350
+ The asymptotic temporal evolution of z(2)
1351
+ 2 (t) is hence
1352
+ e−2iωextz(2)
1353
+ 2 (t)
1354
+ t→∞
1355
+ −−−→ χ11
1356
+ 2 (ωex)h2
1357
+ 1 + O(∥H∥3),
1358
+ (A51)
1359
+ where
1360
+ χ11
1361
+ 2 (ωex) =
1362
+ iG′(ωex)
1363
+ 4Λ∗
1364
+ 2(−2iωex)[Λ∗
1365
+ 1(−iωex)]2 .
1366
+ (A52)
1367
+ Substituting (A27) into the above expression, we have
1368
+ χ11
1369
+ 2 (ωex) =
1370
+ 2iG′(ωex)
1371
+ [2 − L2(ωex)G(ωex)][2 − L1(ωex)G(ωex)]2 = 2iG′(ωex)
1372
+ [G(ωex)]3 χ2(ωex)[χ1(ωex)]2,
1373
+ (A53)
1374
+ where we used (A31).
1375
+ Appendix B: Inference of L1
1376
+ The nonlinear response coefficient (A53) gives
1377
+ G′(ωex) =
1378
+ χ11
1379
+ 2 (ωex)[G(ωex)]3
1380
+ 2iχ2(ωex)[χ1(ωex)]2 .
1381
+ (B1)
1382
+ Another expression of G′(ωex) is obtained by solving (A31) by G(ωex) as
1383
+ G(ωex) =
1384
+ 2χn(ωex)
1385
+ 1 + Ln(ωex)χn(ωex)
1386
+ (B2)
1387
+ and derivating it with respect to ωex as
1388
+ G′(ωex) = 2χ′
1389
+ n[1 + Lnχn] − χn[Lnχn]′
1390
+ [1 + Lnχn]2
1391
+ = χ′
1392
+ n(ωex) + inτLn[χn(ωex)]2
1393
+ 2[χn(ωex)]2
1394
+ [G(ωex)]2.
1395
+ (B3)
1396
+ where we used the definition Ln = Kne−i(αn+nωexτ). The combination between (B1) and (B3) provides for n = 1
1397
+ G(ωex) = iχ2(ωex)[χ′
1398
+ 1(ωex) + iτL1[χ1(ωex)]2]
1399
+ χ11
1400
+ 2 (ωex)
1401
+ .
1402
+ (B4)
1403
+ This expression and (B2) for n = 1 give the equality
1404
+ 1 + L1(ωex)χ1(ωex)
1405
+ 2χ1(ωex)
1406
+ =
1407
+ χ11
1408
+ 2 (ωex)
1409
+ iχ2(ωex){χ′
1410
+ 1(ωex) + iτL1[χ1(ωex)]2}.
1411
+ (B5)
1412
+ This is the equation for determining L1.
1413
+
1414
+ 11
1415
+ 1.
1416
+ For τ = 0
1417
+ In particular, L1 is uniquely determined for τ = 0 as
1418
+ L1 = K1e−iα1 =
1419
+ 2χ11
1420
+ 2 (ωex)
1421
+ iχ2(ωex)χ′
1422
+ 1(ωex) −
1423
+ 1
1424
+ χ1(ωex).
1425
+ (B6)
1426
+ 2.
1427
+ For τ > 0
1428
+ We can infer L1 from the quadratic equation (B5) for τ > 0 as well as for τ = 0. The quadratic equation is rewritten
1429
+ into
1430
+ AL2
1431
+ 1 + BL1 + C = 0,
1432
+ (B7)
1433
+ where
1434
+ A(ωex) = iτ [χ1(ωex)]2
1435
+ χ′
1436
+ 1(ωex) ,
1437
+ B(ωex) = 1 + iτ χ1(ωex)
1438
+ χ′
1439
+ 1(ωex),
1440
+ C(ωex) =
1441
+ 1
1442
+ χ1(ωex) −
1443
+ 2χ11
1444
+ 2 (ωex)
1445
+ iχ2(ωex)χ′
1446
+ 1(ωex).
1447
+ (B8)
1448
+ We have the two solutions to (B7), and we select the solution
1449
+ L1(ωex) = − B(ωex)
1450
+ 2A(ωex)
1451
+
1452
+ 1 −
1453
+
1454
+ 1 − 4A(ωex)C(ωex)
1455
+ [B(ωex)]2
1456
+
1457
+ (B9)
1458
+ to have (B6) in the limit τ → 0, namely A → 0. The inferred L1 induces the other inferences of Lm’s through the
1459
+ relation
1460
+ Lm(ωex) − L1(ωex) =
1461
+ 1
1462
+ χ1(ωex) −
1463
+ 1
1464
+ χm(ωex)
1465
+ (m ≥ 2).
1466
+ (B10)
1467
+ The inferred parameter values are summarized in Table II for Model-1. The inferred coupling function Γ1(θ) and
1468
+ the natural frequency distribution g1(ω) are compared with the true ones in Fig. 5. We observe rather large errors in
1469
+ higher order modes in Γ1(θ), and precision is improved by truncating the Fourier series up to the mode-3. Moreover,
1470
+ the errors tend to decrease as the number of samples increases, and g1(ω) is well inferred irrespective of used modes.
1471
+ TABLE II. True and inferred parameter values of Model-1 from (B9) and (B10), by taking the average over ωex. The time
1472
+ delay τ is inferred by Procedure-1.
1473
+ Model-1 τ
1474
+ K1
1475
+ α1
1476
+ K2
1477
+ α2
1478
+ K3
1479
+ α3
1480
+ K4
1481
+ α4
1482
+ K5
1483
+ α5
1484
+ Truth
1485
+ 2
1486
+ 1.379 0.7884 0.568 -3.0316 0.154 -0.7546 0
1487
+
1488
+ 0
1489
+
1490
+ Ω50
1491
+ 1
1492
+ 1.987 1.215 0.925
1493
+ 0.683 -2.663
1494
+ 0.257 0.694
1495
+ 0.119 2.108 0.289 0.991
1496
+ ���25
1497
+ 1
1498
+ 1.995 0.857 0.806
1499
+ 0.956 -2.584
1500
+ 0.414 1.004
1501
+ 0.253 1.190 0.389 0.407
1502
+ [1] A.
1503
+ T.
1504
+ Winfree,
1505
+ The
1506
+ Geometry
1507
+ of
1508
+ Biological
1509
+ Time
1510
+ (Springer, New York, 2001).
1511
+ [2] S. H. Strogatz, Sync: How order emerges from chaos in
1512
+ the universe, nature, and daily life (Hyperion, New York,
1513
+ 2003).
1514
+ [3] A. Pikovsky, M. Rosenblum, and J. Kurths, Synchroniza-
1515
+ tion: a universal concept in nonlinear sciences (Cam-
1516
+ bridge University Press, Cambridge, 2001).
1517
+ [4] A. Palmigiano, T. Geisel, F. Wolf, and D. Battaglia,
1518
+ Flexible information routing by transient synchrony, Nat.
1519
+ Neurosci. 20, 1014-1022 (2017).
1520
+ [5] G. Buzs´aki and E.I. Moser, Memory, navigation and
1521
+ theta rhythm in the hippocampal-entorhinal system,
1522
+ Nat. Neurosci. 16, 130-138 (2013).
1523
+ [6] Y. Kuramoto, Chemical oscillations, waves, and turbu-
1524
+ lence (Dover, New York, 2003).
1525
+ [7] H. Nakao, Phase reduction approach to synchronisation
1526
+ of nonlinear oscillators, Contemp. Phys. 57, 188 (2016).
1527
+ [8] Y. Kuramoto and H. Nakao, On the concept of dynamical
1528
+ reduction: the case of coupled oscillators, Phil. Trans. R.
1529
+
1530
+ 12
1531
+ −1.5
1532
+ −1
1533
+ −0.5
1534
+ 0
1535
+ 0.5
1536
+ 1
1537
+ 1.5
1538
+ 2
1539
+ −3
1540
+ −2
1541
+ −1
1542
+ 0
1543
+ 1
1544
+ 2
1545
+ 3
1546
+ (a)
1547
+ −1.5
1548
+ −1
1549
+ −0.5
1550
+ 0
1551
+ 0.5
1552
+ 1
1553
+ 1.5
1554
+ 2
1555
+ −3
1556
+ −2
1557
+ −1
1558
+ 0
1559
+ 1
1560
+ 2
1561
+ 3
1562
+ (b)
1563
+ 0
1564
+ 0.02
1565
+ 0.04
1566
+ 0.06
1567
+ 0.08
1568
+ 0.1
1569
+ 0.12
1570
+ 0.14
1571
+ 0.16
1572
+ 0
1573
+ 2
1574
+ 4
1575
+ 6
1576
+ 8
1577
+ 10
1578
+ (c)
1579
+ Γ1(θ)
1580
+ θ
1581
+ Truth
1582
+ Sample set Ω50
1583
+ 1
1584
+ Sample set Ω25
1585
+ 1
1586
+ Γ1(θ)
1587
+ θ
1588
+ Truth
1589
+ Sample set Ω50
1590
+ 1 (up to mode-3)
1591
+ Sample set Ω25
1592
+ 1 (up to mode-3)
1593
+ g1(ω)
1594
+ ω
1595
+ Truth
1596
+ From L1
1597
+ From L2
1598
+ From L3
1599
+ From L4
1600
+ From L5
1601
+ FIG. 5. Comparison between the truth and the inference in Model-1 having τ > 0. (a) The coupling function Γ1(θ) produced
1602
+ from the sample set Ω50
1603
+ 1
1604
+ (green broken line), Ω25
1605
+ 1
1606
+ (blue chain line). (b) Same as (a) but the inferred Γ1(θ) are truncated up to
1607
+ the Fourier mode-3. (c) The natural frequency distribution g1(ω) obtained from the inferred L1 (green filled circles), L2 (blue
1608
+ open circles), L3 (orange triangles), L4 (yellow inverse triangles), and L5 (dark-blue diamonds). The sample set is Ω50
1609
+ 1 .
1610
+ Soc. A 377, 20190041 (2019).
1611
+ [9] R. F. Gal´an, G. B. Ermentrout, and N. N. Urban, Effi-
1612
+ cient estimation of phase-resetting curves in real neurons
1613
+ and its significance for neural-network modeling, Phys.
1614
+ Rev. Lett. 94, 158101 (2005).
1615
+ [10] J. Miyazaki and S. Kinoshita, Determination of a cou-
1616
+ pling function in multicoupled oscillators, Phys. Rev.
1617
+ Lett. 96, 194101 (2005).
1618
+ [11] I. T. Tokuda, S. Jain, I. Z. Kiss, and J. L. Hudson, Infer-
1619
+ ring phase equations from multivariate time series, Phys.
1620
+ Rev. Lett. 99, 064101 (2007).
1621
+ [12] B. Kralemann,
1622
+ L. Cimponeriu,
1623
+ M. Rosenblum,
1624
+ A.
1625
+ Pikovsky, and R. Mrowka, Uncovering interaction of cou-
1626
+ pled oscillators from data, Phys. Rev. E 76, 055201(R)
1627
+ (2007).
1628
+ [13] B. Kralemann,
1629
+ L. Cimponeriu,
1630
+ M. Rosenblum,
1631
+ A.
1632
+ Pikovsky, and R. Mrowka, Phase dynamics of coupled
1633
+ oscillators reconstructed from data, Phys. Rev. E 77,
1634
+ 066205 (2008).
1635
+ [14] W. D. Penny, V. Litvak, L. Fuentemilla, E. Duzel, and
1636
+ K. Friston, Dynamic Causal Models for phase coupling,
1637
+ J. Neurosci. Methods 183, 19 (2009).
1638
+ [15] T. Stankovski, A. Duggento. P. V. E. McClintock, and
1639
+ A Stefanovska, Inference of Time-Evolving Coupled Dy-
1640
+ namical Systems in the Presence of Noise, Phys. Rev.
1641
+ Lett. 109, 024101 (2012).
1642
+ [16] K. Ota and T. Aoyagi, Direct extraction of phase dynam-
1643
+ ics from fluctuating rhythmic data based on a Bayesian
1644
+ approach, arXiv: 1405.4126 (2014).
1645
+ [17] A. Pikovsky, Reconstruction of a random phase dynamics
1646
+ network from observations, Phys. Lett. A 382, 147 (2018).
1647
+ [18] F. Mori and H. Kori, Noninvasive inference methods for
1648
+ interaction and noise intensities of coupled oscillators us-
1649
+ ing only spike time data, Proc. Natl. Acad. Sci. 119,
1650
+ e2113620119 (2022).
1651
+ [19] M. K. S. Yeung and S. H. Strogatz, Time delay in the
1652
+ Kuramoto model of coupled oscillators, Phys. Rev. Lett.
1653
+ 82, 648 (1999).
1654
+ [20] E. Montbri´o, D. Paz´o, and J. Schmidt, Time delay in the
1655
+ Kuramoto model with bimodal frequency distribution,
1656
+ Phys. Rev. E 74, 056201 (2006).
1657
+ [21] H. Sakaguchi, Cooperative Phenomena in Coupled Oscil-
1658
+ lator Systems under External Fields, Prog. Theor. Phys.
1659
+ 79, 39 (1988).
1660
+ [22] H. Daido, Susceptibility of large populations of coupled
1661
+ oscillators, Phys. Rev. E 91 012925 (2015).
1662
+ [23] Y. Terada and Y. Y. Yamaguchi, Linear response the-
1663
+ ory for coupled phase oscillators with general coupling
1664
+ functions, J. Phys. A: Math. Theor. 53, 044001 (2020).
1665
+ [24] D. J. Watts and S. H. Strogatz, Collective dynamics of
1666
+ ‘small-world’ networks, Nature 393, 440 (1998).
1667
+ [25] H. Hong, M. Y. Choi, and B. J. Kim, Synchronization on
1668
+ small-world networks, Phys. Rev. E 65, 026139 (2002).
1669
+ [26] R. Yoneda, K. Harada, and Y. Y. Yamaguchi, Critical
1670
+ exponents in coupled phase-oscillator models on small-
1671
+ world networks, Phys. Rev. E 102, 062212 (2020).
1672
+ [27] H. Daido, Population Dynamics of Randomly Interacting
1673
+ Self-Oscillators. I: Tractable Models without Frustration,
1674
+ Prog. Theor. Phys. 77, 622 (1987).
1675
+ [28] T. Ichinomiya, Frequency synchronization in a random
1676
+ oscillator network, Phys. Rev. E 70, 026116 (2004).
1677
+ [29] F. C. Hoppensteadt and E.M. Izhikevich, Weakly con-
1678
+ nected neural networks (Springer, New York, 1997).
1679
+ [30] C. Lancellotti, On the Vlasov limit for systems of nonlin-
1680
+ early coupled oscillators without noise, Transport Theory
1681
+ and Statistical Physics 34, 523 (2005).
1682
+ [31] H. Daido, Order function and macroscopic mutual en-
1683
+ trainment in uniformly coupled limit-cycle oscillators,
1684
+ Prog. Theor. Phys. 88, 1213 (1992).
1685
+ [32] See the Supplementary Material at [URL].
1686
+ [33] D. Hansel, G. Mato, and C. Meunier, Phase Dynamics
1687
+ for Weakly Coupled Hodgkin-Huxley Neurons, Europhys.
1688
+ Lett. 23, 367 (1993).
1689
+ [34] D. Hansel, G. Mato, and C. Meunier, Synchrony in ex-
1690
+ citatory neural networks, Neural Comput. 7, 307-337
1691
+ (1995).
1692
+ [35] E. M. Izhikevich, Polychronization:
1693
+ computation with
1694
+ spikes, Neural Comput. 18 245 (2006).
1695
+ [36] G. Buzs´aki and K. Mizuseki, The log-dynamic brain: how
1696
+ skewed distributions affect network operations, Nat. Rev.
1697
+ Neurosci. 15, 264 (2014).
1698
+ [37] H. Sakaguchi and Y. Kuramoto, A soluble active rotater
1699
+ model showing phase transitions via mutual entertain-
1700
+ ment, Prog. Theor. Phys.. 76, 576 (1986).
1701
+
-tA0T4oBgHgl3EQfPP-n/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
.gitattributes CHANGED
@@ -5517,3 +5517,43 @@ jtAzT4oBgHgl3EQfpf1N/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -tex
5517
  RNA0T4oBgHgl3EQfDv8H/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5518
  k9FRT4oBgHgl3EQfYTew/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5519
  LdFRT4oBgHgl3EQf1zij/content/2301.13658v1.pdf filter=lfs diff=lfs merge=lfs -text
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
5517
  RNA0T4oBgHgl3EQfDv8H/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5518
  k9FRT4oBgHgl3EQfYTew/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5519
  LdFRT4oBgHgl3EQf1zij/content/2301.13658v1.pdf filter=lfs diff=lfs merge=lfs -text
5520
+ _9E1T4oBgHgl3EQfVAPl/content/2301.03098v1.pdf filter=lfs diff=lfs merge=lfs -text
5521
+ z9AzT4oBgHgl3EQfC_rr/content/2301.00970v1.pdf filter=lfs diff=lfs merge=lfs -text
5522
+ ydFST4oBgHgl3EQfTTiJ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5523
+ 5tE1T4oBgHgl3EQfBAK1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5524
+ _9FLT4oBgHgl3EQfwi-I/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5525
+ AtE2T4oBgHgl3EQf8QmS/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5526
+ V9E0T4oBgHgl3EQfVgDO/content/2301.02266v1.pdf filter=lfs diff=lfs merge=lfs -text
5527
+ ttAyT4oBgHgl3EQfmviR/content/2301.00477v1.pdf filter=lfs diff=lfs merge=lfs -text
5528
+ x9FRT4oBgHgl3EQfiTdH/content/2301.13586v1.pdf filter=lfs diff=lfs merge=lfs -text
5529
+ S9E0T4oBgHgl3EQfUwBZ/content/2301.02254v1.pdf filter=lfs diff=lfs merge=lfs -text
5530
+ 0NFLT4oBgHgl3EQfoi-S/content/2301.12132v1.pdf filter=lfs diff=lfs merge=lfs -text
5531
+ gNE5T4oBgHgl3EQfhg_G/content/2301.05642v1.pdf filter=lfs diff=lfs merge=lfs -text
5532
+ ddFIT4oBgHgl3EQfoCum/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5533
+ 1tE1T4oBgHgl3EQflQSM/content/2301.03283v1.pdf filter=lfs diff=lfs merge=lfs -text
5534
+ 6tFAT4oBgHgl3EQfnx0W/content/2301.08630v1.pdf filter=lfs diff=lfs merge=lfs -text
5535
+ JNAzT4oBgHgl3EQfVPyV/content/2301.01281v1.pdf filter=lfs diff=lfs merge=lfs -text
5536
+ ddFIT4oBgHgl3EQfoCum/content/2301.11317v1.pdf filter=lfs diff=lfs merge=lfs -text
5537
+ W9AyT4oBgHgl3EQfWPem/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5538
+ vNE_T4oBgHgl3EQf-hyD/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5539
+ UtA0T4oBgHgl3EQfEf_h/content/2301.02020v1.pdf filter=lfs diff=lfs merge=lfs -text
5540
+ HNE1T4oBgHgl3EQfrQW4/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5541
+ 09E0T4oBgHgl3EQfuQEC/content/2301.02601v1.pdf filter=lfs diff=lfs merge=lfs -text
5542
+ _tFRT4oBgHgl3EQftDeF/content/2301.13626v1.pdf filter=lfs diff=lfs merge=lfs -text
5543
+ JdAzT4oBgHgl3EQfyP4s/content/2301.01749v1.pdf filter=lfs diff=lfs merge=lfs -text
5544
+ 4dAzT4oBgHgl3EQfffz3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5545
+ yNFAT4oBgHgl3EQfAhzZ/content/2301.08399v1.pdf filter=lfs diff=lfs merge=lfs -text
5546
+ d9FJT4oBgHgl3EQf_y11/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5547
+ samples/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5548
+ U9AyT4oBgHgl3EQfV_cW/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5549
+ vNE_T4oBgHgl3EQf-hyD/content/2301.08387v1.pdf filter=lfs diff=lfs merge=lfs -text
5550
+ y9E5T4oBgHgl3EQfNw4V/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5551
+ S9E0T4oBgHgl3EQfUwBZ/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5552
+ e9A0T4oBgHgl3EQfHf82/content/2301.02061v1.pdf filter=lfs diff=lfs merge=lfs -text
5553
+ U9FKT4oBgHgl3EQfmC51/content/2301.11856v1.pdf filter=lfs diff=lfs merge=lfs -text
5554
+ 4dAzT4oBgHgl3EQfffz3/content/2301.01455v1.pdf filter=lfs diff=lfs merge=lfs -text
5555
+ wdAzT4oBgHgl3EQfQPvW/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text
5556
+ 19FST4oBgHgl3EQfXDjl/content/2301.13783v1.pdf filter=lfs diff=lfs merge=lfs -text
5557
+ r9E2T4oBgHgl3EQf1Qhf/content/2301.04149v1.pdf filter=lfs diff=lfs merge=lfs -text
5558
+ 6tAyT4oBgHgl3EQfcvc9/content/2301.00288v1.pdf filter=lfs diff=lfs merge=lfs -text
5559
+ Z9E1T4oBgHgl3EQfcwRV/content/2301.03187v1.pdf filter=lfs diff=lfs merge=lfs -text
09E0T4oBgHgl3EQfuQEC/content/2301.02601v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5ea858a266f2ff5e1d9669ea00fd39d3708c35f06ccd080d3e72ed6d70d181bc
3
+ size 764538
09E0T4oBgHgl3EQfuQEC/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c94cbfb7cbda79d5257db72b70260e9742dd1f8b15a05cae7d93054fdb77bd12
3
+ size 88694
0NFLT4oBgHgl3EQfoi-S/content/2301.12132v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a0457f94f70ad6f35c844e1055e858ec0097407af712b5a26e96c3102607eee1
3
+ size 743102
0NFLT4oBgHgl3EQfoi-S/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:19427f4c25fc8307d54329cd66d3388489233f2a5322f39b1b62d7d2e93fa633
3
+ size 182051
0tE2T4oBgHgl3EQf4wij/content/tmp_files/2301.04184v1.pdf.txt ADDED
@@ -0,0 +1,1544 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Electronic character of charge order in square planar low valence nickelates
2
+ Y. Shen,1, ∗ J. Sears,1 G. Fabbris,2 J. Li,3 J. Pelliciari,3 M. Mitrano,4 W. He,1 Junjie
3
+ Zhang,5, 6 J. F. Mitchell,5 V. Bisogni,3 M. R. Norman,5 S. Johnston,7, 8 and M. P. M. Dean1, †
4
+ 1Condensed Matter Physics and Materials Science Department,
5
+ Brookhaven National Laboratory, Upton, New York 11973, USA
6
+ 2Advanced Photon Source, Argonne National Laboratory, Lemont, Illinois 60439, USA
7
+ 3National Synchrotron Light Source II, Brookhaven National Laboratory, Upton, New York 11973, USA
8
+ 4Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA
9
+ 5Materials Science Division, Argonne National Laboratory, Lemont, Illinois 60439, USA
10
+ 6Institute of Crystal Materials, Shandong University, Jinan, Shandong 250100, China
11
+ 7Department of Physics and Astronomy, The University of Tennessee, Knoxville, Tennessee 37966, USA
12
+ 8Institute of Advanced Materials and Manufacturing, The University of Tennessee, Knoxville, Tennessee 37996, USA
13
+ (Dated: January 12, 2023)
14
+ Charge order is a central feature of the physics of cuprate superconductors and is known to arise
15
+ from a modulation of holes with primarily oxygen character. Low-valence nickelate superconductors
16
+ also host charge order, but the electronic character of this symmetry breaking is unsettled. Here,
17
+ using resonant inelastic x-ray scattering at the Ni L2-edge, we identify intertwined involvements of
18
+ Ni 3dx2−y2, 3d3z2−r2, and O 2pσ orbitals in the formation of diagonal charge order in an overdoped
19
+ low-valence nickelate La4Ni3O8. The Ni 3dx2−y2 orbitals, strongly hybridized with planar O 2pσ,
20
+ largely shape the spatial charge distribution and lead to Ni site-centered charge order. The 3d3z2−r2
21
+ orbitals play a small, but non-negligible role in the charge order as they hybridize with the rare-
22
+ earth 5d orbitals. Our results reveal that the low-energy physics and ground-state character of these
23
+ nickelates are more complex than those in cuprates.
24
+ I.
25
+ INTRODUCTION
26
+ One of the common threads linking different classes
27
+ of unconventional superconductors is their propensity to
28
+ host proximate competing orders such as charge and spin
29
+ stripes [1, 2].
30
+ For example, the cuprate superconduc-
31
+ tors exhibit diagonal (with respect to the Cu-O bonds)
32
+ spin stripes when underdoped [3–5], while Cu-O bond
33
+ oriented (parallel) charge order dominates the rest of the
34
+ phase diagram [6, 7]. The detection of superconductivity
35
+ and charge order in the square-planar low-valence fam-
36
+ ily of nickelates therefore presents a fascinating oppor-
37
+ tunity to study the degree of similarity between differ-
38
+ ent unconventional superconducting families [8–17]. In-
39
+ triguingly, different nickelates within the structural se-
40
+ ries of Rn+1NinO2n+2 (R stands for a rare earth and n is
41
+ the number of neighboring NiO2 layers) also host differ-
42
+ ent charge ordered phases. Underdoped materials with
43
+ n = ∞ and R = La, Nd exhibit parallel charge order
44
+ [15–17], whereas n = 3 material La4Ni3O8, which is ef-
45
+ fectively 1/3 overdoped, manifests diagonal charge order
46
+ [14]. Many researchers have emphasized that charge or-
47
+ der plays an important role in the physics of cuprates
48
+ [18–21].
49
+ In particular, there is good evidence showing
50
+ that charge/spin order is a fundamental feature of min-
51
+ imal Hubbard model descriptions of the cuprates [22–
52
+ 24].
53
+ Some researchers have suggested that charge and
54
+ spin order can intertwine with superconductivity to form
55
+ pair density waves [25, 26], or that dynamic charge/spin
56
57
58
+ fluctuations might promote superconductivity [27–29].
59
+ Others have associated charge order fluctuations with
60
+ the anomalous “strange metal” electronic transport in
61
+ cuprates [30].
62
+ Understanding the electronic states in-
63
+ volved in charge order formation is a prerequisite to test-
64
+ ing all these scenarios in low-valence nickelates and is also
65
+ important more generally for understanding charge order
66
+ as a prevalent feature of correlated quantum materials.
67
+ Here, we use Ni L2-edge RIXS to determine the elec-
68
+ tronic character of the charge order in La4Ni3O8. We
69
+ find that both the Ni 3dx2−y2 and 3d3z2−r2 orbitals are
70
+ involved in charge order formation.
71
+ The former con-
72
+ tributes most of the charge modulation while the latter
73
+ dominates the RIXS spectra in the post-edge regime and
74
+ so plays a less important role.
75
+ As the charge-transfer
76
+ energy of these nickelates is larger than that of cuprates
77
+ but comparable to the on-site Coulomb interaction, the
78
+ holes involved in the charge modulation reside predomi-
79
+ nately on Ni sites, despite an appreciable amount of holes
80
+ occupying the O orbitals. Our results indicate that the
81
+ low-energy electronic structure and charge order of low-
82
+ valence nickelates is largely shaped by hybridized 3dx2−y2
83
+ and planar O 2pσ orbitals, similar to cuprates, while
84
+ some differences exist due to the multi-band physics in-
85
+ troduced by Ni 3d3z2−r2 orbitals hybridized with rare-
86
+ earth 5d states.
87
+ II.
88
+ RESULTS
89
+ The La4Ni3O8 nickelate samples studied here were pre-
90
+ pared by reducing single crystals synthesized via the
91
+ floating zone method (see the Appendix A for details),
92
+ arXiv:2301.04184v1 [cond-mat.str-el] 10 Jan 2023
93
+
94
+ 2
95
+ tpd
96
+ tpp
97
+ Ni1
98
+ Ni2
99
+ Ni3
100
+ a
101
+ b
102
+ c
103
+ σ
104
+ Ni L2-edge
105
+
106
+ θ
107
+ (a)
108
+ Nickel
109
+ Oxygen
110
+ NiO2 plane
111
+ La4Ni3O8 sample
112
+ (b)
113
+ (c)
114
+ (d)
115
+ (e)
116
+ (f)
117
+ -0.2
118
+ -0.1
119
+ 0
120
+ 0.1
121
+ 0.2
122
+ Energy loss (eV)
123
+ 0.32
124
+ 0.36
125
+ 0.34
126
+ 0.32
127
+ 0.34
128
+ 0.32
129
+ 0.34
130
+ 0.32
131
+ 0.34
132
+ 0.32
133
+ 0.34
134
+ (H, H) (r.l.u.)
135
+ (H, H) (r.l.u.)
136
+ (H, H) (r.l.u.)
137
+ (H, H) (r.l.u.)
138
+ (H, H) (r.l.u.)
139
+ T = 40 K
140
+ T = 50 K
141
+ T = 70 K
142
+ T = 90 K
143
+ T = 110 K
144
+ ×10
145
+ 0
146
+ 25
147
+ 50
148
+ 75
149
+ 100
150
+ Intensity (arb. units)
151
+ (g)
152
+ (h)
153
+ 40 K
154
+ 50 K
155
+ 70 K
156
+ 90 K
157
+ 110 K
158
+ 0.32
159
+ 0.33
160
+ 0.34
161
+ 0.35
162
+ Q║=(H, H) (r.l.u.)
163
+ 40
164
+ 60
165
+ 80
166
+ 100 120
167
+ Temperature (K)
168
+ 0
169
+ 2
170
+ 4
171
+ 6
172
+ Intensity (arb. units)
173
+ FIG. 1. Charge order transition in La4Ni3O8. (a) Schematic of the Ni L2-edge RIXS experimental setup. A single NiO2 layer
174
+ is presented with stripes running vertically. A Ni3O10 cluster composed of Ni 3dx2−y2 and planar O 2pσ orbitals is embedded
175
+ in it, tracing the charge order motif, in which hole poor Ni1 and Ni3 sites, shown in red, flank the hole rich Ni2 site depicted
176
+ in purple. (b)–(f) RIXS intensity maps with σ polarized incident photons at the indicated temperatures obtained by changing
177
+ the in-plane sample angle θ. (g) Quasi-elastic-line amplitudes extracted from the data presented in (b)–(f) as a function of
178
+ in-plane momentum transfer in reciprocal lattice units (r.l.u.). The solid lines are fitting curves with pseudo-Voigt profiles. (h)
179
+ Temperature dependence of the fitted peak amplitudes. The bold gray line is a guide to the eye.
180
+ and will be indexed in terms of scattering vector Q =
181
+ (2π/a, 2π/a, 2π/c) with a = b = 3.97 ˚A, c = 26.092 ˚A.
182
+ As the n = 3 member of the low-valence nickelate family,
183
+ it possesses a trilayer structure with a nominal 3d8+2/3
184
+ valence.
185
+ This leads to a 1/3-hole self-doping with re-
186
+ spect to the undoped 3d9 state, putting it in the over-
187
+ doped regime of the phase diagram [13, 31]. It shares
188
+ the same structural motif as infinite-layer nickelates with
189
+ square-planar NiO2 layers stacked without apical oxy-
190
+ gens, leading to dominant Ni 3dx2−y2 character near
191
+ the Fermi energy. Although La4Ni3O8 has two inequiv-
192
+ alent NiO2 layers, they are expected to show similar
193
+ electronic structure as indicated by theoretical calcula-
194
+ tions [32, 33], which is further supported by the obser-
195
+ vation that the same charge order pattern is formed in
196
+ both layers [14]. We study their properties using Ni L2-
197
+ edge RIXS in order to avoid interference from the La
198
+ M4-edge, which overlaps the Ni L3-edge (see the Ap-
199
+ pendix B for details). As shown in Fig. 1(a), charge or-
200
+ der in La4Ni3O8 is quasi-two-dimensional in nature and
201
+ occurs at Q∥ = QCO = (1/3, 1/3), where a strong peak
202
+ is observed in the quasi-elastic region of the RIXS inten-
203
+ sity map at 40 K [see Fig. 1(b)]. The in-plane correlation
204
+ length is larger than 100 nm, which might be limited by
205
+ the sample mosaic, suggesting the long range nature of
206
+ the charge order [14]. This charge order peak persists up
207
+ to 90 K and disappears above 110 K, indicating a tran-
208
+ sition temperature of around 100 K [see Figs. 1(c)–(h)],
209
+ consistent with the reported charge order from hard x-ray
210
+ diffraction measurements [14]. No indication of charge
211
+ order is apparent in equivalent measurements of metallic
212
+ Pr4Ni3O8 samples prepared in the same way (Supple-
213
+ mental Material Sec. I [34]).
214
+ We begin by identifying the active electronic states
215
+ in La4Ni3O8 using x-ray spectroscopy. Figure 2(a) and
216
+ 2(b) show the L2-edge RIXS energy maps taken with
217
+ σ x-ray polarization in the ab-plane and π x-ray po-
218
+ larization approximately parallel to the c-axis, respec-
219
+ tively. The RIXS maps mainly comprise dd and charge-
220
+ transfer excitations that are predominantly localized and
221
+ resonate at the Ni L2-edge, and diagonal fluorescence
222
+ features (Supplemental Material Sec. II [34]). To distin-
223
+ guish among these contributions, we integrated the RIXS
224
+ spectra along the incident energy axis and show the re-
225
+ sult in Fig. 2(c). With σ polarization, the spectra above
226
+ 4 eV energy loss are dominated by mostly featureless flu-
227
+ orescence originating from particle-hole excitations that
228
+ can be understood from an itinerant framework involv-
229
+ ing transitions from extended electronic bands spanning
230
+ many unit cells [35]. Charge transfer excitations are also
231
+
232
+ 3
233
+ CT
234
+ dd
235
+ FL
236
+ dd
237
+ FL
238
+ (a)
239
+ (b)
240
+ 0
241
+ 1
242
+ 2
243
+ 3
244
+ 4
245
+ 5
246
+ 6
247
+ Energy loss (eV)
248
+ 0.00
249
+ 0.05
250
+ 0.10
251
+ 0.15
252
+ 0.20
253
+ 0.25
254
+ 0.00
255
+ 0.05
256
+ 0.10
257
+ 0.15
258
+ 0.20
259
+ 0.25
260
+ 0
261
+ 1
262
+ 2
263
+ 3
264
+ 4
265
+ 5
266
+ 6
267
+ Intensity (arb. units)
268
+ σ-pol.
269
+ �-pol.
270
+ 0
271
+ 2
272
+ 4
273
+ 6
274
+ 8
275
+ Energy loss (eV)
276
+ 0.0
277
+ 0.2
278
+ 0.4
279
+ 0.6
280
+ Intensity (arb. units)
281
+ 0.8
282
+ dd
283
+ CT
284
+ +
285
+ FL
286
+ (c)
287
+ σ-pol.
288
+ �-pol.
289
+ Integral over
290
+ incident energy
291
+ 0.00
292
+ 0.01
293
+ 0.02
294
+ 0.03
295
+ Intensity (arb. units)
296
+ 865
297
+ 870
298
+ 875
299
+ 880
300
+ Incident energy (eV)
301
+ CT
302
+ FL
303
+ (d)
304
+ 5.5 ≤ Eloss ≤ 6
305
+ σ-pol.
306
+ �-pol.
307
+ x0.1, CO
308
+ CO
309
+ 866
310
+ 868
311
+ 870
312
+ 872
313
+ 874
314
+ Incident energy (eV)
315
+ 866
316
+ 868
317
+ 870
318
+ 872
319
+ 874
320
+ Incident energy (eV)
321
+ -0.10
322
+ -0.05
323
+ 0.00
324
+ 0.05
325
+ 0.10
326
+ Energy loss (eV)
327
+ 0
328
+ 4
329
+ 8
330
+ 12
331
+ 16
332
+ -0.10
333
+ -0.05
334
+ 0.00
335
+ 0.05
336
+ 0.10
337
+ 0
338
+ 4
339
+ 8
340
+ 12
341
+ 16
342
+ Intensity (arb. units)
343
+ (e)
344
+ (f)
345
+ FIG. 2.
346
+ RIXS energy maps and the resonant behaviors of the charge order (CO) peak.
347
+ (a, b) RIXS intensity maps as a
348
+ function of incident photon energy with (a) σ x-ray polarization in the ab plane of the sample and (b) π x-ray polarization
349
+ approximately parallel to the c-axis. Several components can be identified: charge transfer excitations (CT), dd excitations (dd)
350
+ and constant-emission-energy fluorescence (FL). (c) Integral of the RIXS spectra along the incident energy axis. The dashed
351
+ lines are guides to the eye. (d) Incident energy dependence of the integrated RIXS spectra between 5.5 and 6 eV energy loss.
352
+ (e, f) RIXS intensity maps around the quasi-elastic regime with Q fixed at QCO. Note that the intensity in (e) is multiplied
353
+ by 0.1 for clarity in visualizing the signal.
354
+ visible above 4 eV but only at resonance. Below 4 eV,
355
+ prominent dd excitations emerge that dominate over the
356
+ featureless fluorescence (dashed lines). With π polariza-
357
+ tion, the fluorescence contributes most of the spectral
358
+ weight and the dd excitations are much weaker.
359
+ The
360
+ strong dichroism of dd excitations reflects the dominant
361
+ Ni 3dx2−y2 orbital character near the Fermi energy in
362
+ low-valence nickelates.
363
+ To further distinguish between charge-transfer excita-
364
+ tions and fluorescence, we inspect the RIXS spectra be-
365
+ tween 5.5 and 6 eV energy loss, well above the dd excita-
366
+ tion threshold. As shown in Fig. 2(d), the charge-transfer
367
+ excitations and fluorescence are separated in the incident
368
+ energy axis, with the former stronger in the σ polariza-
369
+ tion channel, indicating appreciable dx2−y2-pσ hybridiza-
370
+ tion where pσ indicates O orbitals that are parallel to the
371
+ Ni-O bonds. In contrast, the fluorescence is stronger in
372
+ the π polarization channel, suggesting that states involv-
373
+ ing Ni 3d3z2−r2 orbitals dominate the fluorescence for a
374
+ broad range of energy losses above ∼3 eV. The broadness
375
+ of these states is in contrast with cuprates, and suggests
376
+ that although the Ni 3d3z2−r2 orbitals are mostly oc-
377
+ cupied and localized, their unoccupied components are
378
+ hybridized with the rare earth 5d orbitals and thus con-
379
+ tribute to dispersive states. This conclusion is consistent
380
+ with density functional theory (DFT)+dynamical mean
381
+ field theory (DMFT) calculations [32], as well as RIXS
382
+ simulations for RNiO2 that studied the effect of switching
383
+ on and off the rare-earth hybridization [36]. Meanwhile,
384
+ the Ni 3dx2−y2 orbitals exhibit less hybridization with
385
+ the rare earth 5d orbitals and are more localized. Here,
386
+ since we are measuring at the Ni L edge and the Ni t2g
387
+ orbitals are expected to lie well below the Fermi energy,
388
+ we only consider Ni eg orbitals [34].
389
+ Based on the resonant behavior of the different states
390
+ identified, we now examine how the 3dx2−y2 and 3d3z2−r2
391
+ orbitals participate in the charge order. Figure 2(e) and
392
+ 2(f) show the RIXS energy maps around the quasi-elastic
393
+ regime at QCO, i.e. the resonant elastic x-ray scatter-
394
+ ing (REXS) signals.
395
+ The peak intensity strongly res-
396
+ onates at the Ni L2-edge in the σ polarization channel
397
+ [see Fig. 2(e)], confirming that the (1/3, 1/3) Bragg peak
398
+ in La4Ni3O8 involves a charge modulation and is not
399
+ purely structural. Surprisingly, the charge order peak in
400
+ the π polarization channel, although much weaker, res-
401
+ onates at the pre- and post-edge regimes but not at the
402
+ main edge [see Fig. 2(f)], distinct from that in cuprates
403
+ [37–39]. First, this observation indicates that both the
404
+ 3dx2−y2 and 3d3z2−r2 orbitals are involved in charge order
405
+ formation with the latter much less prominent. Second,
406
+
407
+ 4
408
+ (a)
409
+ 868
410
+ 870
411
+ 872
412
+ Incident energy (eV)
413
+ σ-pol.
414
+ (b)
415
+ 868
416
+ 870
417
+ 872
418
+ Incident energy (eV)
419
+ �-pol.
420
+ 0
421
+ 1
422
+ 2
423
+ 3
424
+ Energy loss (eV)
425
+ Intensity (arb. units)
426
+ 0
427
+ 0.25
428
+ FIG. 3. Low-energy electronic states in La4Ni3O8. Calcula-
429
+ tions of the RIXS energy maps at the Ni L2-edge for (a) σ and
430
+ (b) π incident x-ray polarization. The calculations reproduce
431
+ the experimental energy-scale and polarization of the dd exci-
432
+ tations evincing an appropriate minimal model for La4Ni3O8.
433
+ the charge order peak in the post-edge regime with π po-
434
+ larization suggests that the states far above the Fermi
435
+ energy also show charge modulation, which is mostly
436
+ contributed by 3d3z2−r2 orbitals. Considering that the
437
+ 3d3z2−r2 density of states in the post-edge regime is likely
438
+ caused by hybridization with the rare-earth 5d orbitals,
439
+ this indicates potential involvement of rare-earth orbitals
440
+ in the charge order formation. Similarly, the weak pre-
441
+ edge charge order peak with π polarization indicates that
442
+ the 3d3z2−r2 density of states near the Fermi energy is
443
+ nonzero but small.
444
+ Having established the involvement of Ni orbitals in
445
+ the charge order formation, now we look at the role of
446
+ oxygen states. To do this, we use exact diagonalization
447
+ (ED) methods which allow us to solve the resonant cross-
448
+ section and break down the contributions from different
449
+ states. Since the charge order is commensurate with a
450
+ period of three Ni sites and there is a strong hybridiza-
451
+ tion between the Ni and O orbitals, the smallest cluster
452
+ one can use to describe the charge-ordered state involves
453
+ three Ni-O plaquettes, which we label 1, 2, & 3.
454
+ We
455
+ choose a bond-oriented cluster, as illustrated in Fig. 1(a),
456
+ given that the Ni-O hopping dominates the kinetic en-
457
+ ergy.
458
+ In order to compute REXS we use the atomic
459
+ scattering factors from the cluster and add these am-
460
+ plitudes to simulate an effective two-dimensional NiO2
461
+ plane as shown in Fig. 1(a). The appropriate parame-
462
+ ters for this cluster, and in particular the charge-transfer
463
+ energy ∆ = 5.6 eV and the on-site Coulomb repulsion
464
+ Udd = 6.5 eV, have been empirically determined by prior
465
+ x-ray measurements of this material at the O K-edge
466
+ [40]. We use open boundary conditions and construct the
467
+ Hamiltonian in the hole language (see the Appendix C
468
+ for details).
469
+ Four holes are introduced to the cluster,
470
+ which is appropriate for the d9−1/3 electronic configu-
471
+ ration of La4Ni3O8. Without any additional constraints,
472
+ the holes will be evenly distributed among different NiO4
473
+ plaquettes with minimal charge disproportionation and
474
+ no symmetry breaking is expected. To realize the charge
475
+ order observed in La4Ni3O8, we manually introduce a
476
+ potential difference [41], ∆ϵd, for different Ni sites by
477
+ lowering the orbital energies of Ni2 by 2∆ϵd/3 and rais-
478
+ ing those of Ni1 and Ni3 by ∆ϵd/3. Based on the sim-
479
+ ilar magnetic exchange of charge ordered La4Ni3O8 and
480
+ metallic Pr4Ni3O8 [42], ∆ϵd must be significantly smaller
481
+ than the charge-transfer energy. Thus, we choose it to
482
+ be ∆ϵd = 0.8 eV while noting that apart from modu-
483
+ lating the intensity of the charge order peak, the results
484
+ are similar provided ∆ϵd is not made unfeasibly large
485
+ (Supplemental Material Fig. S5 [34]). This choice leads
486
+ to a charge disproportionation of ∆n = 0.32, which is
487
+ of a similar order of magnitude as that in cuprates [37].
488
+ This value is much smaller than the fully disproportion-
489
+ ate limit ∆n = 1, consistent with DFT calculations that
490
+ indicate a small charge modulation upon charge ordering
491
+ in this system [31]. When examining the electronic con-
492
+ figuration of the cluster, we find that the ground state is
493
+ a singlet, and the first excited state is a triplet, which is
494
+ around 70 meV above the ground state, consistent with
495
+ the magnetic excitations found in La4Ni3O8 [42].
496
+ Figure 3 shows the calculated Ni L2-edge RIXS en-
497
+ ergy maps with all the Ni 3d and O 2p orbitals included,
498
+ which qualitatively reproduce the localized dd excitations
499
+ observed experimentally. Note that the small cluster size
500
+ means that we can only capture a limited number of dis-
501
+ crete states. For this reason, fluorescence features are not
502
+ fully captured, which would require a continuous distri-
503
+ bution of states. This can be seen more clearly in the
504
+ π polarization channel where the fluorescence dominates
505
+ the spectra in experimental data [see Fig. 2(b)] but only
506
+ the weak dd excitations are present in our cluster calcu-
507
+ lations [see Fig. 3(b)].
508
+ Having verified the relevant parameters via the RIXS
509
+ maps, we computed the x-ray absorption spectrum
510
+ (XAS) and REXS response of La4Ni3O8 using a simi-
511
+ lar ED approach and identical parameters and plot the
512
+ results in Fig. 4 (Supplemental Material Sec. V [34]).
513
+ The charge disproportionation in the cluster implies a
514
+ REXS response at QCO. The predicted REXS resonance
515
+ shown in Fig. 4(c) nicely captures the main two peak
516
+ structure of the experimental REXS resonance shown in
517
+ Fig. 4(b). The same applies for the XAS as shown in
518
+ Fig. 4(a). In fact, the lineshape of the resonant profile of
519
+ the charge order peak is sensitive to the charge-transfer
520
+ energy, and neither the pure charge-transfer nor Mott-
521
+ Hubbard scenarios can describe the observed resonant
522
+ behaviors (Supplemental Material Fig. S6 [34]), demon-
523
+
524
+ 5
525
+ strating the mixed charge-transfer/Mott-Hubbard char-
526
+ acters of charge order in this material. To understand
527
+ the nature of the two resonant features, we projected
528
+ the wavefunctions of the RIXS intermediate states onto
529
+ the Fock basis which specifies the location of the holes.
530
+ Two main manifolds are seen for each Ni site. The first
531
+ manifold is primarily attributed to transitions resonant
532
+ with d10L0 states, where L stands for ligand holes on
533
+ the four oxygen σ orbitals surrounding the Ni site. The
534
+ second manifold is mainly resonant with d9L0 and d10L1
535
+ states caused by the doped holes, similar to the cuprates
536
+ [43, 44]. With nonzero ∆ϵd, the manifolds of different
537
+ Ni sites split along the incident energy axis, as shown in
538
+ Fig. 4(c). The successful description of the charge order
539
+ in La4Ni3O8 using our cluster model indicates that about
540
+ 70% of the holes participating in the charge modulation
541
+ are on Ni, with the remaining 30% on oxygen, as depicted
542
+ in the inset to Fig. 4(b).
543
+ III.
544
+ DISCUSSION
545
+ Our Ni-dominant charge order distribution is quite dif-
546
+ ferent from cuprates, in which the charge order has dom-
547
+ inant oxygen character [37, 45]. This difference mainly
548
+ arises from the larger charge transfer energy in nicke-
549
+ lates compared to cuprates. Another difference is that in
550
+ cuprates, the 3d3z2−r2 orbitals are strongly localized at
551
+ energies more than 1.5 eV away from the 3dx2−y2 orbitals
552
+ [46], and thus not involved in the low-energy physics. For
553
+ square-planar nickelates, our analysis of La4Ni3O8 indi-
554
+ cates that the 3d3z2−r2 density of states, though small, is
555
+ spread out over an extended energy range, likely due to
556
+ hybridization with the rare earth 5d orbitals. It should be
557
+ noted that although the 3d3z2−r2 orbital involvement in
558
+ the charge order formation is nonzero, its contribution is
559
+ much less than the hybridized 3dx2−y2 and 2pσ orbitals,
560
+ as indicated by the stronger charge order peak in the σ
561
+ polarization channel. These factors mean that minimal
562
+ theoretical models of charge order in nickelates must ex-
563
+ plicitly include both Ni and O states alongside strong
564
+ correlations.
565
+ Another result of our model is that the
566
+ doped sites in charge ordered nickelates are much closer
567
+ to a low-spin S = 0 state than to a high-spin S = 1 state,
568
+ unlike La2−xSrxNiO4, whose high-spin physics drives in-
569
+ sulating behavior across the vast majority of its phase
570
+ diagram [47].
571
+ Recently, RIXS measurements in infinite-layer nicke-
572
+ late films have discovered and studied charge order at
573
+ Q∥ = (1/3, 0) in undoped and underdoped samples [15–
574
+ 17], resembling the charge order in cuprates, but differing
575
+ from the diagonal charge order in La4Ni3O8. In terms
576
+ of these differing wavevectors, theoretical model studies
577
+ in the cuprates have shown that charge order at (Q, 0)
578
+ and (Q, Q) are close in energy, the eventual choice of the
579
+ charge order wavevector being sensitive to details of the
580
+ electronic structure and correlations [48, 49]. This idea is
581
+ supported by the experimental observation that the dop-
582
+ 868
583
+ 869
584
+ 870
585
+ 871
586
+ 872
587
+ 873
588
+ Incident energy (eV)
589
+ 2
590
+ 1
591
+ 0
592
+ Intensity (arb. units)
593
+ Ni1: d 10L0
594
+ Ni1: d 10L1
595
+ Ni1: d 9L0
596
+ Ni2: d 10L0
597
+ Ni2: d 10L1
598
+ Ni2: d 9L0
599
+ REXS calculation
600
+ (c)
601
+ 6
602
+ 4
603
+ 2
604
+ 0
605
+ Intensity (arb. units)
606
+ Ni1
607
+ Ni3
608
+ Ni2
609
+ O
610
+ σ-pol.
611
+ �-pol.
612
+ REXS data
613
+ (b)
614
+ (a)
615
+ 1.2
616
+ 0.8
617
+ 0.4
618
+ 0
619
+ Intensity (arb. units)
620
+ XAS, σ-pol.
621
+ Data
622
+ Ni1+Ni3
623
+ Ni2
624
+ Calculation
625
+ FIG. 4. Electronic character of charge order. (a) x-ray ab-
626
+ sorption spectrum (XAS) data at the Ni L2 edge in the σ
627
+ polarization channel along with the calculation results with
628
+ ∆ϵd = 0.8 eV. Note that Ni1 and Ni3 are symmetry-related.
629
+ (b) Fitted peak amplitudes of the quasi-elastic intensities pre-
630
+ sented in Fig. 2(e)&(f), representing the resonant behaviors of
631
+ the charge order peak. Inset is a schematic of the electronic
632
+ character of the charge order showing a dominant modula-
633
+ tion of Ni orbitals along with an appreciable modulation of
634
+ the oxygen orbitals.
635
+ (c) Simulation of the incident energy
636
+ dependence of the charge order peak intensity with σ inci-
637
+ dent polarization and ∆ϵd = 0.8 eV. The vertical bars are
638
+ weights of different configurations of the RIXS intermediate
639
+ states, the total height of which is normalized according to
640
+ the simulated charge order peak intensity of each state. The
641
+ accurate simulation of the Ni 3d and O 2p components of the
642
+ resonance verifies our model, which is used to extract the elec-
643
+ tronic character of the charge order illustrated in the inset to
644
+ panel (b).
645
+ ing dependent charge order wavevector varies in different
646
+ cuprate families [20], similar to what has been seen more
647
+ recently in the infinite-layer nickelates [15]. In view of
648
+ this, the difference in wavevector probably does not re-
649
+ flect a difference in the mechanisms at play in charge
650
+ order formation. It should, however, be noted that the
651
+ parallel charge order seen in infinite-layer materials oc-
652
+ curs at a lower hole concentration.
653
+ More information can be obtained by comparing the
654
+
655
+ 6
656
+ states involved in charge order formation for different
657
+ low-valence nickelates [15–17].
658
+ All these recent works
659
+ support an appreciable role for Ni in charge order forma-
660
+ tion. However, controversy exists regarding whether the
661
+ rare-earth-Ni hybridization is crucial for charge order for-
662
+ mation [16], or whether the charge modulation on rare-
663
+ earth states only plays a secondary parasitic role [15].
664
+ Our results support the latter scenario in La4Ni3O8. Re-
665
+ garding the involvement of oxygen states, we provide the
666
+ first spectroscopic modeling that allows this question to
667
+ be addressed quantitatively. We deduce a mixed charge-
668
+ transfer/Mott-Hubbard picture for the charge order and
669
+ 70%/30% split of Ni vs. O contributions to the charge
670
+ modulation. This contradicts some of the previous sug-
671
+ gestions for infinite-layer nickelates, which propose a neg-
672
+ ligible role for oxygen in charge order formation and that
673
+ in-plane and out-of-plane Ni states contribute roughly
674
+ equally [16].
675
+ These differences are puzzling consider-
676
+ ing that different members of the Rn+1NinO2n+2 family
677
+ share similar Ni-O bonding, magnetic exchange [42, 50],
678
+ superconducting transition temperatures [12, 13, 51, 52],
679
+ and calculated electronic structures [53]. Part of the chal-
680
+ lenge of making this comparison is that RIXS maps of
681
+ infinite-layer films, as well as their charge order proper-
682
+ ties, vary substantially between different samples of nom-
683
+ inally the same composition [15–17]. In this regard, our
684
+ quantitative spectroscopic analysis on single crystals is
685
+ valuable considering that these samples show more con-
686
+ sistent spectral properties than films of infinite layer ma-
687
+ terials [15–17].
688
+ IV.
689
+ CONCLUSION
690
+ In summary, we have used RIXS measurements at
691
+ the Ni L2-edge to study the character of the electronic
692
+ structure and charge order in the low-valence nickelate
693
+ La4Ni3O8. Our work is unique in providing a realistic
694
+ quantitative empirical model for charge order and vali-
695
+ dating it using Q-resolved spectroscopy at the charge or-
696
+ der wavevector. Different from cuprates where the spatial
697
+ charge modulation dominantly resides on ligand orbitals,
698
+ the charge order in La4Ni3O8 is mostly contributed by
699
+ the Ni sites due to the larger charge transfer energy in
700
+ low-valence nickelates. In addition to the dominant role
701
+ of in-plane Ni 3dx2−y2 and O 2pσ orbitals, the out-of-
702
+ plane Ni 3d3z2−r2 orbitals also participate in the charge
703
+ order, this being enabled by their hybridization with
704
+ the rare-earth 5d orbitals. Thus, our results reveal that
705
+ the overall low-energy physical properties of low-valence
706
+ nickelates are shaped by Ni 3dx2−y2 and O 2pσ orbitals,
707
+ while the detailed electronic structure is fine tuned by
708
+ Ni 3d3z2−r2 and rare-earth 5d orbitals. This reveals that
709
+ multi-orbital physics is crucial to low-valence nickelates,
710
+ indicating that several different ground states are close
711
+ in energy. This observation points to a more complex,
712
+ and perhaps an even richer, phenomenology than their
713
+ cuprate cousins, while charge order remains an intrinsic
714
+ character of these strongly correlated materials.
715
+ The RIXS data generated in this study have been de-
716
+ posited in the Zenodo database under accession code [to
717
+ be assigned].
718
+ ACKNOWLEDGMENTS
719
+ Work at Brookhaven and the University of Tennessee
720
+ (RIXS measurements and the interpretation and model
721
+ Hamiltonian calculations) was supported by the U.S. De-
722
+ partment of Energy, Office of Science, Office of Basic
723
+ Energy Sciences, under Award Number DE-SC0022311.
724
+ Work at Argonne was supported by the U.S. DOE, Of-
725
+ fice of Science, Basic Energy Sciences, Materials Science
726
+ and Engineering Division (nickelate sample synthesis and
727
+ first principles calculations).
728
+ Work performed at Har-
729
+ vard University (data interpretation and paper writing)
730
+ was supported by the US Department of Energy, Division
731
+ of Materials Science, under Contract No. DESC0012704.
732
+ This research used resources at the SIX beamline of the
733
+ National Synchrotron Light Source II, a U.S. DOE Office
734
+ of Science User Facility operated for the DOE Office of
735
+ Science by Brookhaven National Laboratory under Con-
736
+ tract No. DE-SC0012704.
737
+ Appendix A: Sample synthesis
738
+ Parent Ruddlesden-Popper La4Ni3O10 and Pr4Ni3O10
739
+ were prepared using the high-pressure optical floating
740
+ zone method. Sample reduction was performed by cleav-
741
+ ing small crystals from the boules and heating them
742
+ in a flowing H2/Ar gas mixture as described previously
743
+ [31]. We adopt the tetragonal notation with space group
744
+ I4/mmm and lattice constants of a = b = 3.97 ˚A,
745
+ c = 26.092 ˚A to describe reciprocal space.
746
+ Using this
747
+ notation, the samples had a c-axis surface normal. The
748
+ high quality of these samples is confirmed by prior stud-
749
+ ies [40, 42]. Single crystals of La4Ni3O8 are particularly
750
+ suitable for this study as they exhibit more consistent
751
+ XAS spectra and charge order properties than thin films
752
+ of infinite-layer nickelates [15–17]].
753
+ Appendix B: RIXS measurements
754
+ High-energy-resolution RIXS measurements were per-
755
+ formed at the SIX beamline at the NSLS-II. Although
756
+ the sample geometry and the energy of the Ni L2-edge
757
+ resonance limits reciprocal space access, charge order in
758
+ La4Ni3O8 has a c-axis correlation length of less than one
759
+ unit cell, which means that the charge order Bragg peaks
760
+ are accessible for a wide range of L values [14]. We chose
761
+ to measure at the Ni L2-edge instead of the L3 edge to
762
+ avoid contamination from the La M-edge which is very
763
+ close to the Ni L3-edge and can strongly distort the reso-
764
+ nant process [54]. In view of this, we fixed the spectrome-
765
+
766
+ 7
767
+ ter angle at its maximum value of 2Θ = 153◦ throughout
768
+ the measurements of the charge order peak. The sam-
769
+ ples were aligned with the crystalline [0, 0, L] and [H,
770
+ H, 0] directions lying in the horizontal scattering plane
771
+ to access the charge order peak with momentum transfer
772
+ QCO = (1/3, 1/3, L) where L ≈ 1.75. In this geometry,
773
+ the x-ray intensity is dominated by charge, rather than
774
+ spin, scattering (Supplemental Material Sec. IV [34]).
775
+ Spectra designed to study the charge order resonance
776
+ in the σ polarization channel, such as Fig. 2(e), were
777
+ taken with 24 meV energy resolution.
778
+ For the charge
779
+ order in the π polarization channel, such as Fig. 2(f),
780
+ a relaxed energy resolution of 32 meV was used to in-
781
+ crease throughput. Whenever the energy was changed,
782
+ the sample was rotated in order to remain at the same
783
+ in-plane scattering vector. In order to study the high-
784
+ energy features, as done in Fig. 2(a) and 2(b), the en-
785
+ ergy resolution was further relaxed to 48 meV and the
786
+ sample and spectrometer were slightly offset from the
787
+ diffraction condition with a sample angle of 14.3◦ and
788
+ a spectrometer angle of 2Θ = 147◦ to avoid saturating
789
+ the detector. Note that the strong elastic intensity over-
790
+ whelms the low-energy inelastic signals such as that from
791
+ the magnetic excitations studied previously [42]. Data
792
+ collected with different energy-resolution configurations
793
+ were normalized by the dd excitations measured with the
794
+ same sample geometry.
795
+ Upon illumination by very strong elastic scattering
796
+ from charge order, a weak periodic error was identified in
797
+ the spectrometer grating which created the weak feature
798
+ in the energy gain side of Fig. 2(a). This was confirmed
799
+ by measuring reference elastic scattering.
800
+ Appendix C: Exact diagonalization calculations
801
+ The RIXS spectra and REXS responses presented here
802
+ were calculated using the Kramers-Heisenberg formula in
803
+ the dipole approximation through the EDRIXS software
804
+ [55, 56]. The eigenstates for the initial/final and interme-
805
+ diate states are obtained from exact diagonalization of a
806
+ Ni3O10 cluster with four holes and open boundary con-
807
+ ditions. To fully take into account the many-body and
808
+ multi-orbital effects, we explicitly include the Coulomb
809
+ interactions and nearest-neighbor inter-atomic hoppings
810
+ in our model, and construct the Hamiltonian in hole lan-
811
+ guage. We use the same parameters as those used in the
812
+ O K-edge calculations which are proved to well describe
813
+ the RIXS data [40].
814
+ By doing so, the charge-transfer
815
+ energy ∆ is set to 5.6 eV and the on-site Coulomb repul-
816
+ sion to 6.5 eV, locating the material in the mixed charge-
817
+ transfer/Mott-Hubbard regime of the Zaanen-Sawatzky-
818
+ Allen (ZSA) scheme. We also include the spin-orbit cou-
819
+ pling for the Ni 3d electrons, which is very small and is
820
+ expected to play a minimal role. For simplicity, the scat-
821
+ tering angle 2Θ is kept at 150◦ and the sample angle is
822
+ fixed to θ = 15◦.
823
+ The total RIXS scattering amplitude is calculated via
824
+ F =
825
+
826
+ i
827
+ FieiQ·ri
828
+ (C1)
829
+ where Fi and ri are the scattering amplitude and position
830
+ of each Ni site, respectively. The charge order peak was
831
+ then calculated by combining the atomic scattering am-
832
+ plitudes with the phases appropriate for tiling the cluster
833
+ into the NiO2 plane as shown in Fig. 1(a).
834
+ [1] M. R. Norman, The challenge of unconventional super-
835
+ conductivity, Science 332, 196 (2011).
836
+ [2] D. J. Scalapino, A common thread: The pairing inter-
837
+ action for unconventional superconductors, Reviews of
838
+ Modern Physics 84, 1383 (2012).
839
+ [3] K. Yamada, C. H. Lee, K. Kurahashi, J. Wada, S. Waki-
840
+ moto, S. Ueki, H. Kimura, Y. Endoh, S. Hosoya, G. Shi-
841
+ rane, R. J. Birgeneau, M. Greven, M. A. Kastner, and
842
+ Y. J. Kim, Doping dependence of the spatially modulated
843
+ dynamical spin correlations and the superconducting-
844
+ transition temperature in La2−xSrxCuO4, Physical Re-
845
+ view B 57, 6165 (1998).
846
+ [4] S. Wakimoto, J. M. Tranquada, T. Ono, K. M. Kojima,
847
+ S. Uchida, S. H. Lee, P. M. Gehring, and R. J. Birgeneau,
848
+ Diagonal static spin correlation in the low-temperature
849
+ orthorhombic
850
+ Pccn
851
+ phase
852
+ of
853
+ La1.55Nd0.4Sr0.05CuO4,
854
+ Physical Review B 64, 174505 (2001).
855
+ [5] S. R. Dunsiger, Y. Zhao, Z. Yamani, W. J. L. Buy-
856
+ ers, H. A. Dabkowska, and B. D. Gaulin, Incommen-
857
+ surate spin ordering and fluctuations in underdoped
858
+ La2−xBaxCuO4, Physical Review B 77, 224410 (2008).
859
+ [6] R. Arpaia, S. Caprara, R. Fumagalli, G. De Vecchi,
860
+ Y. Y. Peng, E. Andersson, D. Betto, G. M. De Luca,
861
+ N. B. Brookes, F. Lombardi, M. Salluzzo, L. Braicovich,
862
+ C. Di Castro, M. Grilli, and G. Ghiringhelli, Dynamical
863
+ charge density fluctuations pervading the phase diagram
864
+ of a Cu-based high-Tc superconductor, Science 365, 906
865
+ (2019).
866
+ [7] H. Miao, G. Fabbris, R. J. Koch, D. G. Mazzone, C. S.
867
+ Nelson, R. Acevedo-Esteves, G. D. Gu, Y. Li, T. Yil-
868
+ imaz, K. Kaznatcheev, E. Vescovo, M. Oda, T. Kuro-
869
+ sawa, N. Momono, T. Assefa, I. K. Robinson, E. S. Bozin,
870
+ J. M. Tranquada, P. D. Johnson, and M. P. M. Dean,
871
+ Charge density waves in cuprate superconductors beyond
872
+ the critical doping, npj Quantum Materials 6, 31 (2021).
873
+ [8] D. Li, B. Y. Wang, K. Lee, S. P. Harvey, M. Osada, B. H.
874
+
875
+ 8
876
+ Goodge, L. F. Kourkoutis, and H. Y. Hwang, Supercon-
877
+ ducting dome in Nd1−xSrxNiO2 infinite layer films, Phys-
878
+ ical Review Letters 125, 027001 (2020).
879
+ [9] S. Zeng, C. S. Tang, X. Yin, C. Li, M. Li, Z. Huang,
880
+ J. Hu, W. Liu, G. J. Omar, H. Jani, Z. S. Lim, K. Han,
881
+ D. Wan, P. Yang, S. J. Pennycook, A. T. S. Wee, and
882
+ A. Ariando, Phase diagram and superconducting dome of
883
+ infinite-layer Nd1−xSrxNiO2 thin films, Physical Review
884
+ Letters 125, 147003 (2020).
885
+ [10] M. Osada, B. Y. Wang, K. Lee, D. Li, and H. Y. Hwang,
886
+ Phase diagram of infinite layer praseodymium nickelate
887
+ Pr1−xSrxNiO2 thin films, Physical Review Materials 4,
888
+ 121801 (2020).
889
+ [11] M. Osada, B. Y. Wang, B. H. Goodge, S. P. Harvey,
890
+ K. Lee, D. Li, L. F. Kourkoutis, and H. Y. Hwang, Nick-
891
+ elate superconductivity without rare-earth magnetism:
892
+ (La,Sr)NiO2, Advanced Materials 33, 2104083 (2021).
893
+ [12] S. Zeng, C. Li, E. Chow Lin, Y. Cao, Z. Zhang,
894
+ S. Tang Chi, X. Yin, S. Lim Zhi, J. Hu, P. Yang, and
895
+ A. Ariando, Superconductivity in infinite-layer nickelate
896
+ La1−xCaxNiO2 thin films, Science Advances 8, eabl9927
897
+ (2022).
898
+ [13] G. A. Pan, D. Ferenc Segedin, H. LaBollita, Q. Song,
899
+ E. M. Nica, B. H. Goodge, A. T. Pierce, S. Doyle, S. No-
900
+ vakov, D. C´ordova Carrizales, et al., Superconductivity
901
+ in a quintuple-layer square-planar nickelate, Nature Ma-
902
+ terials 21, 160 (2022).
903
+ [14] J. Zhang, Y.-S. Chen, D. Phelan, H. Zheng, M. R. Nor-
904
+ man, and J. F. Mitchell, Stacked charge stripes in the
905
+ quasi-2D trilayer nickelate La4Ni3O8, Proceedings of the
906
+ National Academy of Sciences 113, 8945 (2016).
907
+ [15] M. Rossi, M. Osada, J. Choi, S. Agrestini, D. Jost,
908
+ Y. Lee, H. Lu, B. Y. Wang, K. Lee, A. Nag, et al., A
909
+ broken translational symmetry state in an infinite-layer
910
+ nickelate, Nature Physics 18, 869–873 (2022).
911
+ [16] C. C. Tam, J. Choi, X. Ding, S. Agrestini, A. Nag,
912
+ B. Huang, H. Luo, M. Garc´ıa-Fern´andez, L. Qiao, and K.-
913
+ J. Zhou, Charge density waves in infinite-layer NdNiO2
914
+ nickelates, Nature Materials 10.1038/s41563-022-01330-1
915
+ (2022).
916
+ [17] G. Krieger, L. Martinelli, S. Zeng, L. E. Chow, K. Kum-
917
+ mer, R. Arpaia, M. Moretti Sala, N. B. Brookes, A. Ar-
918
+ iando,
919
+ N. Viart,
920
+ M. Salluzzo,
921
+ G. Ghiringhelli, and
922
+ D. Preziosi, Charge and spin order dichotomy in NdNiO2
923
+ driven by the capping layer, Physical Review Letters 129,
924
+ 027002 (2022).
925
+ [18] J. M. Tranquada, Spins, stripes, and superconductivity in
926
+ hole-doped cuprates, AIP Conference Proceedings 1550,
927
+ 114 (2013).
928
+ [19] R. Comin and A. Damascelli, Resonant x-ray scattering
929
+ studies of charge order in cuprates, Annual Review of
930
+ Condensed Matter Physics 7, 369 (2016).
931
+ [20] A. Frano, S. Blanco-Canosa, B. Keimer, and R. J. Birge-
932
+ neau, Charge ordering in superconducting copper oxides,
933
+ Journal of Physics: Condensed Matter 32, 374005 (2020).
934
+ [21] J. M. Tranquada, M. P. M. Dean, and Q. Li, Supercon-
935
+ ductivity from charge order in cuprates, Journal of the
936
+ Physical Society of Japan 90, 111002 (2021).
937
+ [22] E. W. Huang, C. B. Mendl, S. Liu, S. Johnston, H.-C.
938
+ Jiang, B. Moritz, and T. P. Devereaux, Numerical evi-
939
+ dence of fluctuating stripes in the normal state of high-Tc
940
+ cuprate superconductors, Science 358, 1161 (2017).
941
+ [23] B.-X. Zheng, C.-M. Chung, P. Corboz, G. Ehlers, M.-P.
942
+ Qin, R. M. Noack, H. Shi, S. R. White, S. Zhang, and
943
+ G. K.-L. Chan, Stripe order in the underdoped region of
944
+ the two-dimensional Hubbard model, Science 358, 1155
945
+ (2017).
946
+ [24] P. Mai, S. Karakuzu, G. Balduzzi, S. Johnston, and T. A.
947
+ Maier, Intertwined spin, charge, and pair correlations in
948
+ the two-dimensional Hubbard model in the thermody-
949
+ namic limit, Proceedings of the National Academy of Sci-
950
+ ences 119, e2112806119 (2022).
951
+ [25] Q. Li, M. H¨ucker, G. D. Gu, A. M. Tsvelik, and J. M.
952
+ Tranquada, Two-dimensional superconducting fluctua-
953
+ tions in stripe-ordered La1.875Ba0.125CuO4, Physical Re-
954
+ view Letters 99, 067001 (2007).
955
+ [26] E. Berg,
956
+ E. Fradkin,
957
+ E.-A. Kim,
958
+ S. A. Kivelson,
959
+ V. Oganesyan, J. M. Tranquada, and S. C. Zhang,
960
+ Dynamical layer decoupling in a stripe-ordered high-
961
+ Tc superconductor, Physical Review Letters 99, 127003
962
+ (2007).
963
+ [27] V. J. Emery, S. A. Kivelson, and O. Zachar, Spin-gap
964
+ proximity effect mechanism of high-temperature super-
965
+ conductivity, Physical Review B 56, 6120 (1997).
966
+ [28] S. A. Kivelson, E. Fradkin, and V. J. Emery, Electronic
967
+ liquid-crystal phases of a doped Mott insulator, Nature
968
+ 393, 550 (1998).
969
+ [29] E. Fradkin, S. A. Kivelson, and J. M. Tranquada, Theory
970
+ of intertwined orders in high temperature superconduc-
971
+ tors, Reviews of Modern Physics 87, 457 (2015).
972
+ [30] C. Castellani, C. Di Castro, and M. Grilli, Singular quasi-
973
+ particle scattering in the proximity of charge instabilities,
974
+ Physical Review Letters. 75, 4650 (1995).
975
+ [31] J. Zhang, A. Botana, J. Freeland, D. Phelan, H. Zheng,
976
+ V. Pardo, M. Norman, and J. Mitchell, Large orbital po-
977
+ larization in a metallic square-planar nickelate, Nature
978
+ Physics 13, 864 (2017).
979
+ [32] J. Karp, A. Hampel, M. Zingl, A. S. Botana, H. Park,
980
+ M. R. Norman, and A. J. Millis, Comparative many-body
981
+ study of Pr4Ni3O8 and NdNiO2, Physical Review B 102,
982
+ 245130 (2020).
983
+ [33] E. M. Nica, J. Krishna, R. Yu, Q. Si, A. S. Botana, and
984
+ O. Erten, Theoretical investigation of superconductivity
985
+ in trilayer square-planar nickelates, Physical Review B
986
+ 102, 020504(R) (2020).
987
+ [34] See Supplemental Material at [URL will be inserted by
988
+ publisher] for measurements of Pr4Ni3O8, discussion of
989
+ the RIXS process, consideration of spin order, and fur-
990
+ ther calculations. This also includes reference [57–59].
991
+ [35] G. Fabbris, D. Meyers, L. Xu, V. M. Katukuri, L. Hozoi,
992
+ X. Liu, Z.-Y. Chen, J. Okamoto, T. Schmitt, A. Uldry,
993
+ B. Delley, G. D. Gu, D. Prabhakaran, A. T. Boothroyd,
994
+ J. van den Brink, D. J. Huang, and M. P. M. Dean, Dop-
995
+ ing dependence of collective spin and orbital excitations
996
+ in the spin-1 quantum antiferromagnet La2−xSrxNiO4
997
+ observed by x rays, Physical Review Lett. 118, 156402
998
+ (2017).
999
+ [36] K. Higashi, M. Winder, J. Kuneˇs, and A. Hariki, Core-
1000
+ level x-ray spectroscopy of infinite-layer nickelate: LDA+
1001
+ DMFT study, Physical Review X 11, 041009 (2021).
1002
+ [37] P. Abbamonte, A. Rusydi, S. Smadici, G. D. Gu, G. A.
1003
+ Sawatzky, and D. L. Feng, Spatially modulated ‘Mot-
1004
+ tness’ in La2−xBaxCuO4, Nature Physics 1, 155 (2005).
1005
+ [38] Y.
1006
+ Y.
1007
+ Peng,
1008
+ R.
1009
+ Fumagalli,
1010
+ Y.
1011
+ Ding,
1012
+ M.
1013
+ Minola,
1014
+ S. Caprara, D. Betto, M. Bluschke, G. M. De Luca,
1015
+ K. Kummer, E. Lefran¸cois, M. Salluzzo, H. Suzuki,
1016
+ M. Le Tacon, X. J. Zhou, N. B. Brookes, B. Keimer,
1017
+ L. Braicovich, M. Grilli, and G. Ghiringhelli, Re-entrant
1018
+
1019
+ 9
1020
+ charge order in overdoped (Bi,Pb)2.12Sr1.88CuO6+δ out-
1021
+ side the pseudogap regime, Nature Materials 17, 697
1022
+ (2018).
1023
+ [39] J. Li, A. Nag, J. Pelliciari, H. Robarts, A. Walters,
1024
+ M. Garcia-Fernandez, H. Eisaki, D. Song, H. Ding,
1025
+ S. Johnston, R. Comin, and K.-J. Zhou, Multiorbital
1026
+ charge-density wave excitations and concomitant phonon
1027
+ anomalies in Bi2Sr2LaCuO6+δ, Proceedings of the Na-
1028
+ tional Academy of Sciences 117, 16219 (2020).
1029
+ [40] Y. Shen, J. Sears, G. Fabbris, J. Li, J. Pelliciari, I. Jar-
1030
+ rige, X. He, I. Boˇzovi´c, M. Mitrano, J. Zhang, J. F.
1031
+ Mitchell, A. S. Botana, V. Bisogni, M. R. Norman,
1032
+ S. Johnston, and M. P. M. Dean, Role of oxygen states
1033
+ in the low valence nickelate La4Ni3O8, Physical Review
1034
+ X 12, 011055 (2022).
1035
+ [41] A. J. Achkar, F. He, R. Sutarto, J. Geck, H. Zhang, Y.-
1036
+ J. Kim, and D. G. Hawthorn, Resonant x-ray scattering
1037
+ measurements of a spatial modulation of the Cu 3d and
1038
+ O 2p energies in stripe-ordered cuprate superconductors,
1039
+ Physical Review Letters 110, 017001 (2013).
1040
+ [42] J. Q. Lin, P. Villar Arribi, G. Fabbris, A. S. Botana,
1041
+ D. Meyers, H. Miao, Y. Shen, D. G. Mazzone, J. Feng,
1042
+ S. G. Chiuzb˘aian, A. Nag, A. C. Walters, M. Garc´ıa-
1043
+ Fern´andez, K.-J. Zhou, J. Pelliciari, I. Jarrige, J. W.
1044
+ Freeland, J. Zhang, J. F. Mitchell, V. Bisogni, X. Liu,
1045
+ M. R. Norman, and M. P. M. Dean, Strong superex-
1046
+ change in a d9−δ nickelate revealed by resonant inelastic
1047
+ x-ray scattering, Physical Review Letters 126, 087001
1048
+ (2021).
1049
+ [43] C. T. Chen, L. H. Tjeng, J. Kwo, H. L. Kao, P. Rudolf,
1050
+ F. Sette, and R. M. Fleming, Out-of-plane orbital charac-
1051
+ ters of intrinsic and doped holes in La2−xSrxCuO4, Phys-
1052
+ ical Review Letters 68, 2543 (1992).
1053
+ [44] M. Schneider, R. S. Unger, R. Mitdank, R. M˘uller,
1054
+ A. Krapf, S. Rogaschewski, H. Dwelk, C. Janowitz,
1055
+ and R. Manzke, Evolution of the density of states
1056
+ at the Fermi level of Bi2−yPbySr2−xLaxCuO6+δ and
1057
+ Bi2Sr2−xLaxCuO6+δ cuprates with hole doping, Physi-
1058
+ cal Review B 72, 014504 (2005).
1059
+ [45] F. C. Zhang and T. M. Rice, Effective Hamiltonian for
1060
+ the superconducting Cu oxides, Physical Review B 37,
1061
+ 3759 (1988).
1062
+ [46] M. Moretti Sala, V. Bisogni, C. Aruta, G. Balestrino,
1063
+ H. Berger, N. B. Brookes, G. M. d. Luca, D. Di Castro,
1064
+ M. Grioni, M. Guarise, P. G. Medaglia, F. Miletto Gra-
1065
+ nozio, M. Minola, P. Perna, M. Radovic, M. Salluzzo,
1066
+ T. Schmitt, K. J. Zhou, L. Braicovich, and G. Ghir-
1067
+ inghelli, Energy and symmetry of dd excitations in un-
1068
+ doped layered cuprates measured by Cu L3 resonant
1069
+ inelastic x-ray scattering, New Journal of Physics 13,
1070
+ 043026 (2011).
1071
+ [47] H. Ulbrich and M. Braden, Neutron scattering studies on
1072
+ stripe phases in non-cuprate materials, Physica C: Su-
1073
+ perconductivity 481, 31 (2012), stripes and Electronic
1074
+ Liquid Crystals in Strongly Correlated Materials.
1075
+ [48] A. Melikyan and M. R. Norman, Symmetry of the charge
1076
+ density wave in cuprates, Physical Review B 89, 024507
1077
+ (2014).
1078
+ [49] P. Corboz, T. M. Rice, and M. Troyer, Competing states
1079
+ in the t-J model:
1080
+ Uniform d-wave state versus stripe
1081
+ state, Physical Review Letters 113, 046402 (2014).
1082
+ [50] H. Lu, M. Rossi, A. Nag, M. Osada, D. F. Li, K. Lee,
1083
+ B. Y. Wang, M. Garcia-Fernandez, S. Agrestini, Z. X.
1084
+ Shen, E. M. Been, B. Moritz, T. P. Devereaux, J. Zaa-
1085
+ nen, H. Y. Hwang, K.-J. Zhou, and W. S. Lee, Magnetic
1086
+ excitations in infinite-layer nickelates, Science 373, 213
1087
+ (2021).
1088
+ [51] D. Li, K. Lee, B. Y. Wang, M. Osada, S. Crossley, H. R.
1089
+ Lee, Y. Cui, Y. Hikita, and H. Y. Hwang, Supercon-
1090
+ ductivity in an infinite-layer nickelate, Nature 572, 624
1091
+ (2019).
1092
+ [52] M. Osada, B. Y. Wang, B. H. Goodge, K. Lee, H. Yoon,
1093
+ K. Sakuma, D. Li, M. Miura, L. F. Kourkoutis, and H. Y.
1094
+ Hwang, A superconducting praseodymium nickelate with
1095
+ infinite layer structure, Nano Letters 20, 5735 (2020).
1096
+ [53] Low valence nickelate electronic structure are rather sim-
1097
+ ilar provided they are compared at similar effective dop-
1098
+ ings [32, 60].
1099
+ [54] C. Sch¨ußler-Langeheine, J. Schlappa, A. Tanaka, Z. Hu,
1100
+ C.
1101
+ F.
1102
+ Chang,
1103
+ E.
1104
+ Schierle,
1105
+ M.
1106
+ Benomar,
1107
+ H.
1108
+ Ott,
1109
+ E. Weschke, G. Kaindl, O. Friedt, G. A. Sawatzky, H.-J.
1110
+ Lin, C. T. Chen, M. Braden, and L. H. Tjeng, Spec-
1111
+ troscopy of stripe order in La1.8Sr0.2NiO4 using resonant
1112
+ soft x-ray diffraction, Physical Review Letters 95, 156402
1113
+ (2005).
1114
+ [55] EDRIXS
1115
+ website,
1116
+ https://github.com/NSLS-II/
1117
+ edrixs, accessed: 2022-05-19.
1118
+ [56] Y. Wang, G. Fabbris, M. Dean, and G. Kotliar, EDRIXS:
1119
+ An open source toolkit for simulating spectra of resonant
1120
+ inelastic x-ray scattering, Computer Physics Communi-
1121
+ cations 243, 151 (2019).
1122
+ [57] M. W. Haverkort, M. Zwierzycki, and O. K. Ander-
1123
+ sen, Multiplet ligand-field theory using wannier orbitals,
1124
+ Physical Review B 85, 165113 (2012).
1125
+ [58] M. W. Haverkort, Theory of resonant inelastic x-ray scat-
1126
+ tering by collective magnetic excitations, Physical Re-
1127
+ view Letters 105, 167404 (2010).
1128
+ [59] A. J. Achkar, R. Sutarto, X. Mao, F. He, A. Frano,
1129
+ S.
1130
+ Blanco-Canosa,
1131
+ M.
1132
+ Le
1133
+ Tacon,
1134
+ G.
1135
+ Ghiringhelli,
1136
+ L. Braicovich, M. Minola, M. Moretti Sala, C. Mazzoli,
1137
+ R. Liang, D. A. Bonn, W. N. Hardy, B. Keimer, G. A.
1138
+ Sawatzky, and D. G. Hawthorn, Distinct charge orders in
1139
+ the planes and chains of ortho-III-ordered YBa2Cu3O6+δ
1140
+ superconductors identified by resonant elastic x-ray scat-
1141
+ tering, Physical Review Letters 109, 167001 (2012).
1142
+ [60] H. LaBollita and A. S. Botana, Electronic structure and
1143
+ magnetic properties of higher-order layered nickelates:
1144
+ Lan+1NinO2n+2(n = 4 − 6), Physical Review B 104,
1145
+ 035148 (2021).
1146
+
1147
+ Supplemental Material: Electronic character of charge order in square planar low
1148
+ valence nickelates
1149
+ Y. Shen,∗ J. Sears, G. Fabbris, J. Li, J. Pelliciari, M. Mitrano, W. He, Junjie
1150
+ Zhang, J. F. Mitchell, V. Bisogni, M. R. Norman, S. Johnston, and M. P. M. Dean†
1151
+ (Dated: January 12, 2023)
1152
+ I.
1153
+ ABSENCE OF DIAGONAL CHARGE ORDER IN Pr4Ni3O8
1154
+ To confirm the absence of diagonal charge order in metallic Pr4Ni3O8 [1], we performed resonant inelastic x-ray
1155
+ scattering (RIXS) measurements near Q∥ = (1/3, 1/3) in Pr4Ni3O8. Figure S1 shows the RIXS spectra in the quasi-
1156
+ elastic regime with σ polarized incident photons. No superlattice peaks are found but only background evolving
1157
+ smoothly with the in-plane sample angle θ, which is primarily caused by the self-absorption effect. Note that despite
1158
+ the absence of long-range or short-range stripe order indicated here, stripe related spin fluctuations are distinguished
1159
+ in the inelastic regime [2].
1160
+ 0.2
1161
+ 0.1
1162
+ 0.0
1163
+ 0.1
1164
+ 0.2
1165
+ Energy loss (eV)
1166
+ (a)
1167
+ 10
1168
+ 15
1169
+ 20
1170
+ 25
1171
+ 30
1172
+ (deg)
1173
+ 0.0
1174
+ 0.2
1175
+ 0.4
1176
+ 0.6
1177
+ Intensity (arb. units)
1178
+ (b)
1179
+ 0
1180
+ 2
1181
+ 4
1182
+ 6
1183
+ 8
1184
+ 10
1185
+ Intensity (arb. units)
1186
+ FIG. S1. Absence of charge order in Pr4Ni3O8 at 40 K. (a) RIXS intensity map around Q∥ = (1/3, 1/3) in the quasi-elastic
1187
+ regime at the Ni L3-edge. The experimental configuration is the same as that for La4Ni3O8. (b) Quasi-elastic amplitudes
1188
+ extracted from (a).
1189
1190
1191
+ arXiv:2301.04184v1 [cond-mat.str-el] 10 Jan 2023
1192
+
1193
+ 2
1194
+ II.
1195
+ RIXS PROCESS FOR DIFFERENT EXCITATIONS
1196
+ Here we discuss the RIXS process for different excitations. Due to the presence of the strong core-hole potential in
1197
+ the RIXS intermediate states, the electron that is excited from the core level is constrained to a few unit cells near the
1198
+ Ni site where the x-ray absorption takes place. This effect competes with the kinetic energy of the electron and leads
1199
+ to intertwined excitations in the RIXS spectra. In a simplified picture, the orbital states during the RIXS process can
1200
+ be divided into three categories based on how they are affected by the core-hole potential [see Fig. S2(a)]. The first
1201
+ one involves the Ni 3d orbitals that are strongly localized at the core-hole site. The second one involves the ligand
1202
+ orbitals that surround the Ni site and strongly hybridize with the Ni 3d orbitals. They are largely localized but could
1203
+ show a finite bandwidth. The third one involves continuous electronic bands that are mostly unperturbed by the
1204
+ core-hole potential and behave itinerantly with an appreciable bandwidth. The localized Ni 3d orbitals can hybridize
1205
+ with the continuous bands in an orbital dependent fashion. At the Ni L-edge, the core electron is predominantly
1206
+ excited to the unoccupied localized Ni 3d orbitals [see Fig. S2(b)]. During the photon emission process, either an
1207
+ electron from another 3d orbital deexcites to fill the core hole, leading to dd multiplet excitations [see Fig. S2(d)], or
1208
+ an electron from the ligand orbitals hops to the Ni site, resulting in charge-transfer excitations [Fig. S2(e)]. Since the
1209
+ ligand orbitals normally lie at a lower energy, the charge-transfer excitations usually occur with a larger energy loss
1210
+ than the dd excitations and are much weaker at the Ni L-edge as they are made possible through hybridization. In the
1211
+ post-edge regime, the core electron is excited to the unoccupied states in the continuous bands through hybridization
1212
+ in the intermediate state [see Fig. S2(c)], and during the photon emission an electron below the Fermi level deexcites
1213
+ to fill the core hole, leading to the fluorescence [see Fig. S2(f)]. As the deexciting process is dominated by electrons
1214
+ near the Fermi energy, fluorescence tends to present a constant emission photon energy. Note that at the Ni L-edge
1215
+ RIXS process, contributions from rare earth and oxygen states are seen via their hybridization with atomic Ni 3d
1216
+ orbitals. In real materials, there are no clear boundaries between the localized orbitals and continuous bands. Thus,
1217
+ different excitations are also intertwined but the weights are quite different, which helps us distinguish them in the
1218
+ RIXS spectra.
1219
+ electron
1220
+ hole
1221
+ Ni site
1222
+ Core level
1223
+ Multiplets
1224
+ Ligand
1225
+ Continuous bands
1226
+ Energy
1227
+ Distance
1228
+ Fermi
1229
+ level
1230
+ Final states
1231
+ Intermediate states
1232
+ fluorescence
1233
+ Post-edge
1234
+ Charge-transfer excitations
1235
+ dd excitations
1236
+ At edge
1237
+ (a)
1238
+ (b)
1239
+ (d)
1240
+ (e)
1241
+ (c)
1242
+ (f)
1243
+ FIG. S2. RIXS process for different excitations. (a) Legend for each symbol. (b, c) Photon absorption process and corresponding
1244
+ intermediate states. (d)–(f) Photon emission process and corresponding final states. For the fluorescence excitation scenario,
1245
+ the multiplets are not well defined so they are replaced by dashed lines.
1246
+
1247
+ 3
1248
+ 0
1249
+ 30
1250
+ 60
1251
+ 90
1252
+ 120
1253
+ 150
1254
+ 180
1255
+ (deg)
1256
+ 0
1257
+ 1
1258
+ 2
1259
+ 3
1260
+ 4
1261
+ Absorption ratio for vs polarization
1262
+ d3z2
1263
+ r2
1264
+ dxz/dyz
1265
+ dx2
1266
+ y2/dxy
1267
+ FIG. S3. Intensity ratio for dipole absorption between π and σ polarization channels as a function of sample angle θ. The
1268
+ vertical dashed line indicates the experimental configuration we used for charge order measurements while the horizontal dotted
1269
+ line denotes a unit ratio.
1270
+ III.
1271
+ POLARIZATION DEPENDENCE OF FLUORESCENCE
1272
+ The polarization dependence of dd excitations in cuprates and nickelates has been widely discussed [3, 4]. Here,
1273
+ we focus on fluorescence to show how the orbital information can be extracted by comparing the RIXS intensity in
1274
+ different polarization channels. Since we are measuring at the Ni L edge, the RIXS signal can only arise from either
1275
+ Ni orbitals or Ni states hybridized with other orbitals. For the fluorescence features, the photon emission process
1276
+ is quite similar, with electrons from the crystalline environment surrounding the Ni site deexciting to fill the core
1277
+ hole. Hence, the main intensity difference between these two polarization channels comes from the photon absorption
1278
+ process, the cross-section of which can be simulated in the dipole approximation. Fig. S3 presents the x-ray dipole
1279
+ absorption intensity ratio between π and σ polarization channels as a function of sample angle θ. For the experimental
1280
+ configuration we used (vertical dashed line), the biggest contribution to the π over σ polarization intensity ratio is
1281
+ the Ni 3d3z2−r2 orbitals while 3dx2−y2 and 3dxy contribute equally to σ over π polarization intensity ratio. Since the
1282
+ 3dx2−y2 orbitals dominate near the Fermi level and are expected to show stronger hybridization with oxygen orbitals
1283
+ [5], 3dxy orbitals are expected to play a less important role. Moreover, the t2g states do not make any significant
1284
+ contribution to the unoccupied states. Thus, we focus on Ni eg orbitals during the discussion in the main text, which
1285
+ are the subject of most of the debates over the appropriate theoretical models.
1286
+ IV.
1287
+ MINIMAL CONTRIBUTION OF SPIN ORDER TO THE REXS SIGNAL
1288
+ In La4Ni3O8, spin order takes place concomitantly with the charge order and shares the same Q∥. Hence we need
1289
+ to invoke cross-section considerations to separate the possible contribution of charge and spin order [6].
1290
+ With π incident x-ray polarization, charge order contributes to the measured signal in the π-π′ scattering channel
1291
+ while the spin order is responsible for the π-σ′ channel. The resonant elastic x-ray scattering (REXS) intensity ratio
1292
+ between these channels can be estimated by (ki · kf)2/(ϵi × ϵ′
1293
+ f · M)2 = cos2 2Θ/ sin2 θ ≈ 11.8, where ki (kf) is the
1294
+ initial (final) x-ray wavevector, ϵi (ϵ′
1295
+ f) is the initial (final) x-ray polarization, and M is the spin direction, which is
1296
+ parallel to the c-axis in this case. Based on this formula we can see that the REXS signal with π incident polarization
1297
+ is dominantly of charge order origin.
1298
+ Regarding the spin order contribution with σ incident x-ray polarization, we can compare the peak intensity with
1299
+ grazing-in and grazing-out conditions. Since the charge order composes the σ-σ′ channel, its intensity is expected to
1300
+ be the same in these two geometries. For spin order signal that is only observable in the σ-π′ channel, the intensity
1301
+
1302
+ 4
1303
+ 0.32
1304
+ 0.33
1305
+ 0.34
1306
+ 0.35
1307
+ (H, H) (r.l.u.)
1308
+ 0
1309
+ 2
1310
+ 4
1311
+ 6
1312
+ 8
1313
+ Intensity (arb. units)
1314
+ grazing-in
1315
+ 0.32
1316
+ 0.33
1317
+ 0.34
1318
+ 0.35
1319
+ grazing-out
1320
+ FIG. S4. Comparison of the superlattice peak intensity with grazing-in and grazing-out conditions. The scattering angle 2Θ
1321
+ was fixed to 153◦ and the data were collected in σ polarization channel at 40 K. The solid lines are guides to the eye. Both
1322
+ peaks are found to have essentially the same intensity, which confirms that the peak arises from charge, rather than spin, order.
1323
+ TABLE S1. Full list of parameters used for the ED calculations. The on-site orbital energies, hopping integrals and Coulomb
1324
+ interactions are kept the same as those used in the O K-edge calculations [7], and Vpdπ = −Vpdσ/2, Vppπ = −Vppσ/4. The
1325
+ potential difference, ∆ϵd, only applies to the Ni 3d orbitals. Note that the crystal field splitting that is instead used in a Ni
1326
+ atomic model is a combination of point charge potential and orbital hybridization, which can be estimated through ligand field
1327
+ theory [5]. The resulting effective crystal field splitting gives 10Dq = 0.971, ∆eg = 1.041, ∆t2g = 0.342 eV, which are of a
1328
+ similar energy scale as the dd excitations observed in the RIXS measurements. ζi and ζn are spin-orbit coupling parameters of
1329
+ the Ni 3d electrons for the initial and intermediate states, respectively, and ζc is the spin-orbit coupling strength for the Ni 2p
1330
+ core electrons. The core-hole lifetime is set to be 0.6 eV. All parameters are in units of eV.
1331
+ On-site orbital energies
1332
+ Hopping integrals
1333
+ ϵdx2−y2
1334
+ ϵd3z2−r2
1335
+ ϵdxy
1336
+ ϵdxz/yz
1337
+ ϵpσ
1338
+ ϵpπ/pz
1339
+ Vpdσ
1340
+ Vppσ
1341
+ 0
1342
+ 0.2
1343
+ 0.1
1344
+ 0.3
1345
+ 5.6
1346
+ 6.1
1347
+ 1.57
1348
+ 0.6
1349
+ Spin-orbit coupling
1350
+ On-site Coulomb interactions
1351
+ ζi
1352
+ ζn
1353
+ ζc
1354
+ F 0
1355
+ dd
1356
+ F 2
1357
+ dd
1358
+ F 4
1359
+ dd
1360
+ F 0
1361
+ pp
1362
+ F 2
1363
+ pp
1364
+ 0.083
1365
+ 0.102
1366
+ 11.507
1367
+ 5.58
1368
+ 6.89
1369
+ 4.31
1370
+ 3.3
1371
+ 5
1372
+ Inter-site Coulomb interactions
1373
+ Core-hole potential
1374
+ F 0
1375
+ dp
1376
+ F 2
1377
+ dp
1378
+ G1
1379
+ dp
1380
+ G3
1381
+ dp
1382
+ F 0
1383
+ dp
1384
+ F 2
1385
+ dp
1386
+ G1
1387
+ dp
1388
+ G3
1389
+ dp
1390
+ 1
1391
+ 0
1392
+ 0
1393
+ 0
1394
+ 7.869
1395
+ 5.405
1396
+ 4.051
1397
+ 2.304
1398
+ ratio between grazing-in and grazing-out conditions is (kf, grazing−in ·M)2/(kf, grazing−out ·M)2 ≈ 5.6, indicating that
1399
+ the spin order signal should be strongly suppressed with grazing-out condition. Figure S4 shows the Q dependence
1400
+ of the superlattice peak with both conditions, which are comparable with each other, proving that the superlattice
1401
+ peak observed with σ incident x-ray polarization is also dominantly of charge order origin.
1402
+ V.
1403
+ CHARGE ORDER IN ED CALCULATIONS.
1404
+ We use cluster ED to study the charge order in the low-valence nickelate La4Ni3O8. The full list of the parameters
1405
+ used is presented in Table S1. The validity of our cluster model and parameters has been verified by calculating
1406
+ the RIXS energy maps and confirming that they capture the main features of the measurements as shown in the
1407
+ main text. In the calculations, we include all the Ni 3d and O 2p orbitals, which leads to a large Hilbert space and
1408
+ correspondingly only a limited number of states can be solved for. Fortunately, the accessible energy range covers
1409
+ the dd excitations so that we can make a direct comparison with the experimental data. The calculated results are
1410
+ broadened using a Gaussian profile with a full width at half maximum of 0.3 eV and are shown in Fig. 3 of the main
1411
+ text.
1412
+ To fully explore the charge order character in the ED calculations, we need to cover a large incident energy range
1413
+ but only the ground state is needed to calculate the REXS signals. Thus, we only include the Ni 3dx2−y2 and O 2pσ
1414
+
1415
+ 5
1416
+ 0.00
1417
+ 0.25
1418
+ 0.50
1419
+ 0.75
1420
+ 1.00
1421
+ 1.25
1422
+ Hole occupation
1423
+ (a)
1424
+ Ni1
1425
+ Ni1L
1426
+ Ni2
1427
+ Ni2L
1428
+ 0.0
1429
+ 0.4
1430
+ 0.8
1431
+ 1.2
1432
+ 1.6
1433
+ 2.0
1434
+ d (eV)
1435
+ 0.00
1436
+ 0.25
1437
+ 0.50
1438
+ 0.75
1439
+ Charge disproportionation
1440
+ (b)
1441
+ (Ni2+Ni2L)-(Ni1+Ni1L)
1442
+ 10
1443
+ 12
1444
+ 14
1445
+ 16
1446
+ 18
1447
+ Incident energy (eV)
1448
+ 0.0
1449
+ 0.5
1450
+ 1.0
1451
+ 1.5
1452
+ 2.0
1453
+ Intensity (arb. units)
1454
+ (c)
1455
+ REXS@QCO, -pol.
1456
+ d=0.0
1457
+ d=0.1
1458
+ d=0.2
1459
+ d=0.3
1460
+ d=0.4
1461
+ d=0.5
1462
+ d=0.6
1463
+ d=0.7
1464
+ d=0.8
1465
+ d=0.9
1466
+ d=1.0
1467
+ FIG. S5. The emergence of charge order by introducing the potential difference term ∆ϵd. (a) Hole occupations of different
1468
+ sites as a function of ∆ϵd. Ni1L stands for the ligand orbitals for Ni1 (the surrounding four oxygens). Correspondingly, one
1469
+ oxygen is shared by Ni1L and Ni2L. (b) Charge disproportionation defined as the hole occupation difference of Ni1+Ni1L and
1470
+ Ni2+Ni2L. (c) Calculated REXS signals at QCO with different ∆ϵd. All the calculations are performed with ∆ = 5.6 eV.
1471
+ orbitals during the calculations of the charge order, which dominate the ground state, so that a tractable basis size
1472
+ is realized. To trigger charge order in the Ni3O10 cluster, we introduce a potential difference ∆ϵd as described in the
1473
+ main text. In a microscopic model like we use here, the onsite energy shift and charge occupation are intrinsically
1474
+ coupled, which is different from a phenomenological model where these two factors can be tuned independently [8, 9].
1475
+ As shown in Fig. S5, when ∆ϵd is zero, the hole occupations on different Ni sites are almost the same while the hole
1476
+ occupations of ligand orbitals are slightly imbalanced since Ni2 shares oxygens with both Ni1 and Ni3, leading to a
1477
+ small charge disproportionation. With increasing ∆ϵd, the charge imbalances on both the Ni and ligand orbitals are
1478
+ enhanced with the former much more prominent, indicating that most of the spatial charge modulation resides on
1479
+ the Ni sites, leading to a Ni site-centered charge order. Correspondingly, a charge-order peak emerge in the REXS
1480
+ calculations, the intensity of which increases with increasing charge disproportionation while the lineshape only evolves
1481
+ by a little.
1482
+ After testing the effect of ∆ϵd, here we compare results with different charge-transfer energy ∆ in addition to the
1483
+ calculated results presented in the main text. As shown in Fig. S6, in the charge-transfer regime (∆ ≪ Udd), a
1484
+ sharp resonant peak is obtained, resembling the experimental observations in cuprates. With increasing ∆, the REXS
1485
+ lineshape evolves correspondingly. In the Mott-Hubbard limit (∆ ≫ Udd), the charge order peak becomes broader and
1486
+ shows multiple peak features. Compared with the data presented in the main text, we conclude that a charge-transfer
1487
+ energy with an intermediate strength (∆ ≈ Udd) matches the experimental results the best.
1488
+ [1] Junjie Zhang, A.S. Botana, J.W. Freeland, D. Phelan, Hong Zheng, V. Pardo, M.R. Norman, and J.F. Mitchell, “Large
1489
+ orbital polarization in a metallic square-planar nickelate,” Nature Physics 13, 864–869 (2017).
1490
+ [2] J. Q. Lin, P. Villar Arribi, G. Fabbris, A. S. Botana, D. Meyers, H. Miao, Y. Shen, D. G. Mazzone, J. Feng, S. G. Chiuzb˘aian,
1491
+ A. Nag, A. C. Walters, M. Garc´ıa-Fern´andez, Ke-Jin Zhou, J. Pelliciari, I. Jarrige, J. W. Freeland, Junjie Zhang, J. F.
1492
+ Mitchell, V. Bisogni, X. Liu, M. R. Norman, and M. P. M. Dean, “Strong superexchange in a d9−δ nickelate revealed by
1493
+ resonant inelastic x-ray scattering,” Physical Review Letters 126, 087001 (2021).
1494
+ [3] M. Rossi, H. Lu, A. Nag, D. Li, M. Osada, K. Lee, B. Y. Wang, S. Agrestini, M. Garcia-Fernandez, J. J. Kas, Y.-D. Chuang,
1495
+ Z. X. Shen, H. Y. Hwang, B. Moritz, Ke-Jin Zhou, T. P. Devereaux, and W. S. Lee, “Orbital and spin character of doped
1496
+ carriers in infinite-layer nickelates,” Physical Review B 104, L220505 (2021).
1497
+ [4] M. Moretti Sala, V. Bisogni, C. Aruta, G. Balestrino, H. Berger, N. B. Brookes, G. M. de Luca, D. Di Castro, M. Grioni,
1498
+
1499
+ 6
1500
+ 10
1501
+ 12
1502
+ 14
1503
+ 16
1504
+ 18
1505
+ Incident energy (eV)
1506
+ 0
1507
+ 1
1508
+ 2
1509
+ 3
1510
+ 4
1511
+ 5
1512
+ Intensity (arb. units)
1513
+ REXS@QCO, -pol.
1514
+ =2.5
1515
+ =3.5
1516
+ =4.5
1517
+ =5.6
1518
+ =6.7
1519
+ =7.8
1520
+ =8.9
1521
+ FIG. S6.
1522
+ Calculated REXS signals at the charge order wavevector QCO with different charge-transfer energy ∆.
1523
+ All the
1524
+ calculations are performed with ∆ϵd = 0.8 eV and U = 6.5 eV.
1525
+ M. Guarise, P. G. Medaglia, F. Miletto Granozio, M. Minola, P. Perna, M. Radovic, M. Salluzzo, T. Schmitt, K. J. Zhou,
1526
+ L. Braicovich, and G. Ghiringhelli, “Energy and symmetry of dd excitations in undoped layered cuprates measured by Cu
1527
+ L3 resonant inelastic x-ray scattering,” New Journal of Physics 13, 043026 (2011).
1528
+ [5] M. W. Haverkort, M. Zwierzycki,
1529
+ and O. K. Andersen, “Multiplet ligand-field theory using wannier orbitals,” Physical
1530
+ Review B 85, 165113 (2012).
1531
+ [6] M. W. Haverkort, “Theory of resonant inelastic x-ray scattering by collective magnetic excitations,” Physical Review Letters
1532
+ 105, 167404 (2010).
1533
+ [7] Y. Shen, J. Sears, G. Fabbris, J. Li, J. Pelliciari, I. Jarrige, Xi He, I. Boˇzovi´c, M. Mitrano, Junjie Zhang, J. F. Mitchell,
1534
+ A. S. Botana, V. Bisogni, M. R. Norman, S. Johnston,
1535
+ and M. P. M. Dean, “Role of oxygen states in the low valence
1536
+ nickelate La4Ni3O8,” Physical Review X 12, 011055 (2022).
1537
+ [8] A. J. Achkar, R. Sutarto, X. Mao, F. He, A. Frano, S. Blanco-Canosa, M. Le Tacon, G. Ghiringhelli, L. Braicovich,
1538
+ M. Minola, M. Moretti Sala, C. Mazzoli, Ruixing Liang, D. A. Bonn, W. N. Hardy, B. Keimer, G. A. Sawatzky, and D. G.
1539
+ Hawthorn, “Distinct charge orders in the planes and chains of ortho-III-ordered YBa2Cu3O6+δ superconductors identified
1540
+ by resonant elastic x-ray scattering,” Physical Review Letters 109, 167001 (2012).
1541
+ [9] A. J. Achkar, F. He, R. Sutarto, J. Geck, H. Zhang, Y.-J. Kim, and D. G. Hawthorn, “Resonant x-ray scattering measure-
1542
+ ments of a spatial modulation of the Cu 3d and O 2p energies in stripe-ordered cuprate superconductors,” Physical Review
1543
+ Letters 110, 017001 (2013).
1544
+
0tE2T4oBgHgl3EQf4wij/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
19FST4oBgHgl3EQfXDjl/content/2301.13783v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8d3fd07f8e2d21998835aa082cd5d302af6ffb8f9674f3cd7661e12a2918ff95
3
+ size 438785
19FST4oBgHgl3EQfXDjl/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:40341d1f0e0141777634750d5f3266d1620e07a68c719eadc8335ada48c73220
3
+ size 136742
1tE1T4oBgHgl3EQflQSM/content/2301.03283v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a0df2e08638b3d22aa0c8a4b3230aeb1e29ca61581f20591dede8412e0a5d777
3
+ size 1183734
3dFLT4oBgHgl3EQfry9C/content/tmp_files/2301.12145v1.pdf.txt ADDED
@@ -0,0 +1,1429 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ arXiv:2301.12145v1 [math.PR] 28 Jan 2023
2
+ Normal approximation of subgraph counts in
3
+ the random-connection model
4
+ Qingwei Liu∗
5
+ Nicolas Privault†
6
+ Division of Mathematical Sciences
7
+ School of Physical and Mathematical Sciences
8
+ Nanyang Technological University
9
+ 21 Nanyang Link, Singapore 637371
10
+ January 31, 2023
11
+ Abstract
12
+ This paper derives normal approximation results for subgraph counts written as
13
+ multiparameter stochastic integrals in a random-connection model based on a Pois-
14
+ son point process. By combinatorial arguments we express the cumulants of general
15
+ subgraph counts using sums over connected partition diagrams, after cancellation of
16
+ terms obtained by M¨obius inversion. Using the Statuleviˇcius condition, we deduce con-
17
+ vergence rates in the Kolmogorov distance by studying the growth of subgraph count
18
+ cumulants as the intensity of the underlying Poisson point process tends to infinity.
19
+ Our analysis covers general subgraphs in the dilute and full random graph regimes,
20
+ and tree-like subgraphs in the sparse random graph regime.
21
+ Keywords: Random-connection model, subgraph count, normal approximation, Kolmogorov
22
+ distance, cumulant method, Poisson point process, random graphs.
23
+ Mathematics Subject Classification: 60F05, 60D05, 05C80, 60G55.
24
+ 1
25
+ Introduction
26
+ This paper treats the asymptotic behavior of random subgraph counts in the random-
27
+ connection model (RCM), which is used to model physical systems in e.g. wireless networks,
28
+ complex networks, and statistical mechanics. Our approach relies on the study of cumulant
29
+ growth rates as the intensity of the underlying Poisson point process tends to infinity.
30
+ The distributional approximation of subgraph counts has attracted significant interest
31
+ in the random graph literature.
32
+ In [Ruc88], conditions for the asymptotic normality of
33
34
35
+ 1
36
+
37
+ renormalized subgraph counts have been obtained in the Erd˝os-R´enyi random graph model
38
+ [ER59, Gil59]. Those results have been made more precise in [BKR89] by the derivation
39
+ of convergence rates in the Wasserstein distance via Stein’s method. They have also been
40
+ strengthened in [KRT17] using the Kolmogorov distance in the case of triangle counts, and
41
+ in [PS20] in the case of general subgraphs G. The case of triangles has also been treated in
42
+ [R¨ol22] by the Stein-Tikhomirov method, which has been extended to general subgraphs in
43
+ [ER21]. In [Kho08], the counts of line (X-model) and cycles (Y -model) in discrete Erd˝os-
44
+ R´enyi models have been analyzed via the asymptotic behavior of their cumulants. In compar-
45
+ ison with [Kho08], we derive Kolmogorov convergence rates and our results are not restricted
46
+ to line and cycle graphs, as they cover more general subgraphs.
47
+ The random connection-model is a natural generalization of the Erd˝os-R´enyi random
48
+ graph in which vertices are randomly located and can be connected with position-dependent
49
+ probabilities. Studying the random-connection model and obtaining normal approximation
50
+ error bounds is more difficult due to the additional layer of complexity coming from the
51
+ randomness of vertex locations. In [LNS21], a central limit theorem and Kolmogorov con-
52
+ vergence rates have been presented for the number of components isomorphic to a given
53
+ finite connected graph in the random-connection model, together with a study of first mo-
54
+ ments and covariances. Recently, a Central Limit Theorem has been derived in [CT22] for
55
+ the counts of induced subgraphs in the random-connection model under certain stabilization
56
+ and moment conditions.
57
+ In this paper, we derive normal approximation rates under a relatively mild condition
58
+ on the connection function of the random-connection model, by deriving growth rates of
59
+ cumulants written as sums over connected partitions. To the best of our knowledge, this is
60
+ the first time that the normal approximation of subgraph counts with convergence rates is
61
+ established in the random-connection model. Furthermore, various random graph regimes
62
+ are discussed.
63
+ A number of probabilistic conclusions can be derived from the behavior of cumulants
64
+ of random variables using the Statuleviˇcius condition, including convergence rates in the
65
+ Kolmogorov distance and moderate deviation principles, see [SS91], [DE13], [DJS22]. In
66
+ [GT18a, GT18b], this method has been used to derive concentration inequalities, normal
67
+ approximation with error bounds, and moderate deviation principles for random polytopes.
68
+ Given µ a finite diffuse measure on Rd, we consider a random-connection model based on
69
+ an underlying Poisson point process Ξ on Rd with intensity of the form λµ(dx), in which any
70
+ 2
71
+
72
+ two vertices x, y in Ξ are connected with the probability Hλ(x, y) := cλH(x, y) ∈ [0, 1], where
73
+ Hλ is the connection function of the model. Here, we investigate the limiting behavior of the
74
+ count NG of a given subgraph G as the intensity λ of the underlying Poisson point process
75
+ on Rd tends to infinity. To this end, we use the combinatorics of the cumulants κn(NG)
76
+ based on moment expressions obtained in [Pri19] for multiparameter stochastic integrals in
77
+ the random-connection model.
78
+ Using partition diagrams and dependency graph arguments, we start by showing in
79
+ Proposition 3.3 that the (virtual) cumulants of a random functional admitting a certain con-
80
+ nectedness factorization property (3.1) can be expressed as sums over connected partition
81
+ diagrams, generalizing Lemma 2 in [MM91]. A related result has been obtained in [Jan19]
82
+ in the particular case of two-parameter Poisson stochastic integrals, in relation to cluster
83
+ expansions for Gibbs point processes in statistical mechanics. In Proposition 4.3, we apply
84
+ Proposition 3.3 to express the cumulants of multiparameter stochastic integrals, for which
85
+ this factorization property can be checked from the moment formulas for multiparameter
86
+ stochastic integrals computed in Proposition 4.1.
87
+ Such expressions allow us to determine the dominant terms in the growth of cumulants
88
+ as the intensity λ of the underlying point process tends to infinity, by estimating the counts
89
+ of vertices and edges in connected partition diagrams as in [Kho08]. We work under a mild
90
+ condition (6.3) which is satisfied by e.g. any translation-invariant continuous connection
91
+ function H : Rd × Rd → [0, 1] non vanishing at 0, such as the Rayleigh connection function
92
+ given by H(x, y) = e−β∥x−y∥2, x, y ∈ Rd, for some β > 0.
93
+ For our analysis of cumulant behavior we identify the leading terms in the sum (5.3)
94
+ over connected partition diagrams. When G is a connected graph with r := |V (G)| vertices,
95
+ satisfying Assumption 6.1 in the dilute regime (6.1) where λ−1 ≪ cλ ≤ 1, the dominant
96
+ terms correspond to connected partition diagrams with the highest number of blocks, as
97
+ found in [Pri22] in the case of k-hop counting in the one-dimensional random-connection
98
+ model. In Theorem 6.1 this yields the cumulant bounds
99
+ (n − 1)!cn|E(G)|
100
+ λ
101
+ (K1λ)1+(r−1)n ≤ κn(NG) ≤ n!r−1cn|E(G)|
102
+ λ
103
+ (K2λ)1+(r−1)n,
104
+ λ > 0,
105
+ for some constants K1, K2 > 0 independent of λ, n ≥ 1. From the Statuleviˇcius condition
106
+ (A.1) below, see [RSS78, DJS22], we deduce the Kolmogorov distance bound
107
+ sup
108
+ x∈R
109
+ ��P
110
+ � �
111
+ NG ≤ x
112
+
113
+ − P(Z ≤ x)
114
+ �� ≤
115
+ C
116
+ λ1/(4r−6) ,
117
+ λ → ∞,
118
+ 3
119
+
120
+ see Corollary 7.1, and a moderate deviation principle by Theorem 1.1 of [DE13].
121
+ In the sparse regime (6.2) where cλ ≤ λ−α for some α ≥ 1, the maximal rate λα−(α−1)r is
122
+ attained for G a tree-like graph, and in Theorem 6.2 we obtain the cumulant bounds
123
+ (K1)(r−1)nλα−(α−1)r ≤ κn(NG) ≤ n!r−1(K2)(r−1)nλα−(α−1)r,
124
+ λ > 0,
125
+ if G is a tree, and
126
+ (K1)rλr−α|E(G)| ≤ κn(NG) ≤ n!r−1(K2)(r−1)nλr−α|E(G)|,
127
+ λ > 0,
128
+ if G is a not a tree, such as e.g.
129
+ a cycle graph.
130
+ As a consequence of the Statuleviˇcius
131
+ condition (A.1), when G is a tree we find the Kolmogorov distance bound
132
+ sup
133
+ x∈R
134
+ ��P
135
+ � �
136
+ NG ≤ x
137
+
138
+ − P(Z ≤ x)
139
+ �� ≤ Cλ−(α−(α−1)r)/(4r−6),
140
+ λ → ∞,
141
+ provided that 1 ≤ α < r/(r − 1), see Corollary 7.2.
142
+ Convergence rates in the Kolmogorov distances may be improved into classical Berry-
143
+ Esseen rates when the connection function H(x, y) is {0, 1}-valued, e.g.
144
+ in disk models
145
+ as in [Pri22], by representing subgraph counts as multiple Poisson stochastic integrals and
146
+ using the fourth moment theorem for U-statistics and sums of multiple stochastic integrals
147
+ Corollary 4.10 in [ET14], see also Theorem 3 in [LRR16] or Theorem 6.3 in [PS22] for
148
+ Hoeffding decompositions. In the general case where H(x, y) is [0, 1]-valued this method no
149
+ longer applies, this is why we rely on the Statuleviˇcius condition which in turn may yield
150
+ suboptimal convergence rates.
151
+ The paper is organized as follows. Sections 2 and 3 introduce the preliminary frame-
152
+ work and notations on connected partition diagrams and combinatorics of virtual cumulants
153
+ that will be used for the expression of cumulants of multiparameter stochastic integrals in
154
+ Section 4 and for subgraph counts in Section 5. Those expressions are applied in Section 6
155
+ to derive cumulant growth rates in the random-connection model, with application to Kol-
156
+ mogorov rates in subgraph counting via the Statuleviˇcius condition in Section 7.
157
+ 2
158
+ Set partitions and diagram connectivity
159
+ Given η a finite set, we denote by Π(η) the collection of its set partitions, and we let |σ|
160
+ denote the number of blocks in any partition σ ∈ Π(η). Given ρ, σ two set partitions, we
161
+ 4
162
+
163
+ say that σ is coarser than ρ, or that ρ is finer than σ, and we write ρ ⪯ σ, if every block
164
+ in σ is a combination of blocks in ρ. We also denote by ρ ∨ σ the finest partition which is
165
+ coarser than ρ and σ, and by ρ∧σ the coarsest partition that is finer than ρ and σ. We let �0
166
+ be the finest partition, which is made of a single element in each block, and we let �1 be the
167
+ coarsest (one-block) partition. In general, given any graph G we denote by V (G) the set of
168
+ its vertices, and by E(G) the set of its edges.
169
+ Our study of cumulants and moments of functionals of random fields relies on partition
170
+ diagrams, see [MM91, Kho08, PT11] and references therein for additional background. In
171
+ what follows we let [n] := {1, 2, . . . , n} for n ≥ 1.
172
+ Definition 2.1 Let n, r ≥ 1.
173
+ 1. Given η ⊂ [n] we let Π(η × [r]) denote the set of all partitions of the set
174
+ η × [r] :=
175
+
176
+ (k, l) : k ∈ η, l = 1, . . . , r
177
+
178
+ .
179
+ 2. We also let πη := (πi)i∈η ∈ Π(η × [r]) denote the partition made of the |η| blocks of size r
180
+ given by
181
+ πk := {(k, 1), . . . , (k, r)},
182
+ k ∈ η.
183
+ Next, we introduce the definition of partition diagrams.
184
+ Definition 2.2 Let n, r ≥ 1. Given η ⊂ [n] and ρ ∈ Π(η × [r]) a partition of η × [r], we
185
+ denote by Γ(ρ, πη) the diagram, or graphical representation of the partition ρ, constructed
186
+ by:
187
+ 1. arranging the elements of η × [r] into an array of |η| rows and r columns, and
188
+ 2. adding edges connecting neighbors within a same block in ρ.
189
+ In addition, we say that the partition diagram Γ(ρ, π) is connected when ρ ∨ πη = �1.
190
+ For example, taking η := {2, 3, 5, 8, 10}, given the partitions
191
+ ρ =
192
+
193
+ {(2, 1), (3, 1), (3, 2), (3, 3)}, {(2, 2), (2, 3), (2, 4), (3, 4)}, {(5, 1)}, {(5, 2), (8, 2)},
194
+ {(5, 3)}, {(5, 4), (8, 3)}, {(8, 1), (10, 1)}, {(8, 4)}, {(10, 2), (10, 3), (10, 4)}
195
+
196
+ and
197
+ σ =
198
+
199
+ {(2, 1), (3, 1)}, {(2, 2)}, {(2, 3), (3, 4)}, {(2, 4)}, {(3, 2), (5, 2), (8, 2)},
200
+ 5
201
+
202
+ {(3, 3), (5, 4), (8, 3), (10, 2)}, {(5, 1)}, {(5, 3)}, {(8, 1), (10, 1)}, {(8, 4)}, {(10, 3)}, {(10, 4)}
203
+
204
+ ,
205
+ of η × [4], Figure 1−a) presents an example of a non-connected partition diagram Γ(ρ, π),
206
+ and Figure 1−b) presents an example of a connected partition diagram Γ(σ, π),
207
+ 2
208
+ 3
209
+ 5
210
+ 8
211
+ 10
212
+ 1
213
+ 2
214
+ 3
215
+ 4
216
+ (a) Non-connected partition diagram Γ(ρ, π).
217
+ 2
218
+ 3
219
+ 5
220
+ 8
221
+ 10
222
+ 1
223
+ 2
224
+ 3
225
+ 4
226
+ (b) Connected partition diagram Γ(σ, π).
227
+ Figure 1: Two examples of partition diagrams with η = {2, 3, 5, 8, 10}, n = 10, r = 4.
228
+ Note that the above notion of connected partition diagram is distinct from that of irreducible
229
+ partition, see, e.g., [BOR85].
230
+ Definition 2.3 Let n ≥ 1, G a connected graph with |V (G)| = r ≥ 1 vertices, and consider
231
+ G1, . . . , Gn copies of G respectively built on π1, . . . , πn. Let also ρ ∈ Π(η × [r]) be a partition
232
+ of η × [r].
233
+ 1. We let �ρG be the multigraph constructed on the blocks of ρ by adding an edge between two
234
+ blocks ρ1, ρ2 of the partition ρ whenever there exists (k, l1) ∈ ρ1 and (k, l2) ∈ ρ2 such that
235
+ (l1, l2) is an edge in Gk.
236
+ 2. We let ρG be the graph constructed on the blocks of ρ by removing redundant edges in �ρG,
237
+ so that at most one edge remains between any two blocks ρ1, ρ2 ∈ ρ.
238
+ Figure 2-b) presents an illustration of the multigraph �ρG and graph ρG on the blocks of ρ
239
+ when G is the line graph {(1, 2), (2, 4), (3, 4)} on {1, 2, 3, 4}.
240
+ 6
241
+
242
+ 1
243
+ 2
244
+ 3
245
+ 4
246
+ 5
247
+ 1
248
+ 2
249
+ 3
250
+ 4
251
+ (a) Diagram Γ(ρ, π) and multigraph �ρG in blue.
252
+ 1
253
+ 2
254
+ 3
255
+ 4
256
+ 5
257
+ 1
258
+ 2
259
+ 3
260
+ 4
261
+ (b) Diagram Γ(ρ, π) and graph ρG in red.
262
+ Figure 2: Diagram and graphs G, ρG, �ρG with n = 5, r = 4.
263
+ Definition 2.4 Let n, r ≥ 1, and let ρ ∈ Π([n] × [r]) be a partition of [n] × [r].
264
+ 1. For b ⊂ [n], we let ρb ⊂ ρ be defined as
265
+ ρb := {c ∈ ρ : c ⊂ b × [r]}.
266
+ 2. Given η ⊂ [n] we split any partition ρ of η × [r] into the equivalence classes deduced from
267
+ the connected components of the diagram ρG, as
268
+ ρ =
269
+
270
+ b×[r]∈ρ∨π
271
+ b⊂[n]
272
+ ρb,
273
+ (2.1)
274
+ As an example, in Figure 3-a), when b = {1, 2} we have
275
+ ρ{1,2} =
276
+
277
+ {(1, 1), (2, 1), (2, 2), (2, 3)}, {(1, 2), (1, 3), (1, 4), (2, 4)}
278
+
279
+ ,
280
+ and the partition (2.1) is illustrated in Figure 3-b) with b1 = {1, 2} and b2 = {3, 4, 5}.
281
+ 7
282
+
283
+ 1
284
+ 2
285
+ 3
286
+ 4
287
+ 5
288
+ 1
289
+ 2
290
+ 3
291
+ 4
292
+ ρ{1,2}
293
+ (a) Diagram Γ(ρ, π) and block ρ{1,2}.
294
+ 1
295
+ 2
296
+ 3
297
+ 4
298
+ 5
299
+ 1
300
+ 2
301
+ 3
302
+ 4
303
+ ρb1
304
+ ρb2
305
+ (b) Splitting {ρb1, ρb2} of ρ according to ρG.
306
+ Figure 3: Splitting of the partition ρ with ρ ∨ π = {π1 ∪ π2, π3 ∪ π4 ∪ π5} and n = 5, r = 4.
307
+ Definition 2.5 Let n, r ≥ 1. Given σ ∈ Π([n]) a partition of [n], we let Πσ([n]×[r]) denote
308
+ the set of partitions ρ of [n] × [r] such that
309
+ ρ ∨ π = {b × [r] : b ∈ σ},
310
+ and we partition Π([n] × [r]) as
311
+ Π([n] × [r]) =
312
+
313
+ σ∈Π([n])
314
+ Πσ([n] × [r]).
315
+ (2.2)
316
+ We note that given η ⊂ [n], the set Π�1(η × [r]) consists of the partitions ρ of η × [r] for
317
+ which the diagram ρG is connected, as in Figure 4. In what follows, we also will use non-flat
318
+ partition diagrams Γ(ρ, π) such that ρ ∧ π = �0, see Chapter 4 of [PT11] and Figure 4.
319
+ 1
320
+ 2
321
+ 3
322
+ 4
323
+ 5
324
+ 1
325
+ 2
326
+ 3
327
+ 4
328
+ (a) Diagram Γ(ρ, π) and multigraph �ρG in blue.
329
+ 1
330
+ 2
331
+ 3
332
+ 4
333
+ 5
334
+ 1
335
+ 2
336
+ 3
337
+ 4
338
+ (b) Diagram Γ(ρ, π) and graph ρG in red.
339
+ Figure 4: Connected non-flat partition diagram with G a cycle graph and n = 5, r = 4.
340
+ 8
341
+
342
+ Lemma 2.6 a) Let n, r ≥ 1. The cardinality of the set
343
+ C(n, r) := {ρ ∈ Π�1([n] × [r]) : ρ ∧ π = �0}
344
+ of connected non-flat partition diagrams on [n] × [r] satisfies
345
+ |C(n, r)| ≤ n!r−1rn−1r!n−1,
346
+ n ≥ 1.
347
+ (2.3)
348
+ b) Let n ≥ 1 and r ≥ 2. The cardinality of the set
349
+ Mn := {ρ ∈ C(n, r) : |ρ| = 1 + (r − 1)n}
350
+ of maximal connected non-flat partition diagrams on [n] × [r] satisfies
351
+ ((r − 1)r)n−1(n − 1)! ≤ |Mn| ≤ ((r − 1)r)n−1n!2,
352
+ n ≥ 1.
353
+ (2.4)
354
+ Proof.
355
+ a) We have |C(1, r)| = 1. Given a connected partition diagram Γ(ρ, π) in C(n+1, r),
356
+ we construct a connected undirected graph �ρ on [n + 1] as in Figure 5-a), and note that
357
+ �ρ contains a spanning tree ρ, see e.g. Theorem 4.2.3 in [BR12], as shown in Figure 5-b).
358
+ In addition, the tree ρ has at most r leaves, because after removing any of root of ρ, the
359
+ remaining partition can be reconnected using no more than r vertices from the root. Then,
360
+ starting for any leaf in the tree ρ, ρ must be made from a connected partition diagram
361
+ in C(n, r), completed by a choice of at most (n + 1)r−1r! allocations of r − 1 vertices into
362
+ existing or new blocks. Indeed, note that at least one out of r vertices in the leaf is used for
363
+ an existing connection.
364
+ 1
365
+ 2
366
+ 3
367
+ 4
368
+ 5
369
+ 1
370
+ 2
371
+ 3
372
+ 4
373
+ (a) Diagram Γ(ρ, π) and graph �ρ.
374
+ 1
375
+ 2
376
+ 3
377
+ 4
378
+ 5
379
+ 1
380
+ 2
381
+ 3
382
+ 4
383
+ (b) Diagram Γ(ρ, π) and spanning tree ρ.
384
+ Figure 5: Example of graph �ρ and its spanning tree subgraph.
385
+ 9
386
+
387
+ This yields the induction inequality
388
+ |C(n + 1, r)| ≤ r(n + 1)r−1r!|C(n, r)|,
389
+ from which we conclude to (2.3).
390
+ b) Proceeding similarly to part (a), we have |M1| = 1 and the recursion
391
+ r × (1 + (r − 1)n) × |Mn| ≤ |Mn+1| ≤ (n + 1)r × (1 + (r − 1)n) × |Mn|,
392
+ n ≥ 1,
393
+ which yields
394
+ ((r − 1)r)n−1
395
+ n−1
396
+
397
+ i=1
398
+
399
+ i +
400
+ 1
401
+ r − 1
402
+
403
+ ≤ |Mn| ≤ n!((r − 1)r)n−1
404
+ n−1
405
+
406
+ i=1
407
+
408
+ i +
409
+ 1
410
+ r − 1
411
+
412
+ ,
413
+ n ≥ 1,
414
+ from which (2.4) follows.
415
+
416
+ 3
417
+ Virtual cumulants
418
+ The following definition uses the concept of independence of a virtual field with respect to
419
+ graph connectedness, see Relation (17) in [MM91, p. 34].
420
+ Definition 3.1 Let n, r ≥ 1. We say that a mapping F defined on partitions of [n] × [r]
421
+ admits the connectedness factorization property if it decomposes according to the partition
422
+ (2.1) as
423
+ F(ρ) =
424
+
425
+ b×[r]∈ρ∨π
426
+ F(ρb),
427
+ ρ ∈ Π([n] × [r]).
428
+ (3.1)
429
+ In what follows, given F a mapping defined on the partitions of [n] × [r], we will use the
430
+ M¨obius transform �F of F, defined as
431
+ �F(η) :=
432
+
433
+ ρ∈Π(η×[r])
434
+ F(ρ),
435
+ η ⊂ [n],
436
+ with �F(∅) := 0, see [Rot64] and § 2.5 of [PT11]. We refer to [MM91, p. 33] for the following
437
+ definition.
438
+ Definition 3.2 Let n, r ≥ 1. The virtual cumulant G of a mapping F on �
439
+ η⊂[n] Π(η × [r])
440
+ is defined by letting CF(η) := �F(η) when |η| = 1, and then recursively by
441
+ CF(η) := �F(η) −
442
+
443
+ σ∈Π(η)
444
+ |σ|≥2
445
+
446
+ b∈σ
447
+ CF(b),
448
+ η ⊂ [n],
449
+ |η| ≥ 2.
450
+ (3.2)
451
+ 10
452
+
453
+ In the particular case r = 1, we note that when (X1, . . . , Xn) is a sequence of random
454
+ variables, letting
455
+ F(ρ) := E
456
+ ��
457
+ b∈ρ
458
+
459
+ i∈b
460
+ Xi
461
+
462
+ = E
463
+ � n
464
+
465
+ i=1
466
+ Xi
467
+
468
+ ,
469
+ Relation (3.2) shows that
470
+ CF(η) =
471
+
472
+ σ∈Π[η]
473
+ (−1)|σ|−1(|σ| − 1)!
474
+
475
+ b∈σ
476
+ F({b}) =
477
+
478
+ σ∈Π[η]
479
+ (−1)|σ|−1(|σ| − 1)!
480
+
481
+ b∈ρ
482
+ E
483
+ ��
484
+ i∈b
485
+ Xi
486
+
487
+ ,
488
+ coincides with the actual joint cumulant of (Xi)t∈η, η ⊂ [n].
489
+ The following proposition is an extension of the classical Lemma 2 in [MM91, p. 34], see
490
+ also Lemma 3.1 in [Kho08].
491
+ Proposition 3.3 Let n, r ≥ 1. Let F be a mapping defined on �
492
+ η⊂[n] Π(η × [r]) and admit-
493
+ ting the connectedness factorization property (3.1). Then, for η ⊂ [n] with η ̸= ∅, the virtual
494
+ cumulant of F is given by the sum
495
+ CF(η) =
496
+
497
+ σ∈Π�1(η×[r])
498
+ (connected)
499
+ F(σ)
500
+ (3.3)
501
+ over connected partition diagrams on η × [r].
502
+ Proof.
503
+ The claim is true when |η| = 1. Assume that it is true for all η ⊂ [n] for some n ≥ 1,
504
+ and let η be such that |η| = n + 1. By (2.2) and (3.1), we have
505
+ �F(η)
506
+ =
507
+
508
+ ρ∈Π(η×[r])
509
+ F(ρ)
510
+ =
511
+
512
+ σ∈Π(η)
513
+
514
+ ρ∈Πσ(η×[r])
515
+ F(ρ)
516
+ =
517
+
518
+ σ∈Π(η)
519
+
520
+ ρ∈Πσ(η×[r])
521
+
522
+ b∈σ
523
+ F(ρb)
524
+ =
525
+
526
+ σ∈Π(η)
527
+
528
+ b∈σ
529
+
530
+ ρ∈Π�1(b×[r])
531
+ (connected)
532
+ F(ρ)
533
+ =
534
+
535
+ ρ∈Π�1(η×[r])
536
+ (connected)
537
+ F(ρ) +
538
+
539
+ σ∈Π(η)
540
+ |σ|≥2
541
+
542
+ b∈σ
543
+ CF(b),
544
+ where the last equality follows from the induction hypothesis (3.3) when |η| ≤ n. The proof
545
+ is completed by subtracting the last term on both sides.
546
+
547
+ 11
548
+
549
+ In the particular case r = 1, we note that when (X1, . . . , Xn) is a sequence of independent
550
+ random variables, the functional
551
+ F(ρ) := E
552
+ ��
553
+ b∈ρ
554
+
555
+ i∈b
556
+ Xi
557
+
558
+ =
559
+
560
+ i∈[n]
561
+ E[Xi]
562
+ satisfies the connectedness factorization property (3.1), and Proposition 3.3 recovers the
563
+ vanishing of the joint cumulants of (Xi)i∈η when |η| ≥ 2, as the set Π�1(η × [1]) of connected
564
+ partition diagrams on η × [1] is empty in this case.
565
+ 4
566
+ Cumulants of multiparameter stochastic integrals
567
+ Consider a Poisson point process Ξ on Rd, d ≥ 1, with intensity measure Λ on Rd, constructed
568
+ on the space
569
+ Ω =
570
+
571
+ ω = {xi}i∈I ⊂ Rd : #(A ∩ ω) < ∞ for all compact A ∈ B(Rd)
572
+
573
+ of locally finite configurations on Rd, whose elements ω ∈ Ω are identified with the Radon
574
+ point measures ω =
575
+
576
+ x∈ω
577
+ ǫx, where ǫx denotes the Dirac measure at x ∈ Rd.
578
+ By [LP18,
579
+ Corollary 6.5], almost every element ω of Ω can be represented as ω = {Vi}1≤i≤N, where
580
+ (Vi)i≥1 is a random sequence in Rd and a N ∪ {∞}-valued random variable N.
581
+ In this section, using sums over partitions we express the moments of the multiparameter
582
+ stochastic integral
583
+
584
+ V1,...,Vr∈Ξ
585
+ uG(V1, . . . , Vr) =
586
+
587
+ (Rd)r uG(x1, . . . , xr)ω(dx1) · · · ω(dxr),
588
+ (4.1)
589
+ where uG(x1, . . . , xr) is a measurable process of the form
590
+ uG(x1, . . . , xr) :=
591
+
592
+ (i,j)∈E(G)
593
+ vi,j(xi, xj),
594
+ and vi,j(x, y), (i, j) ∈ E(G), are random processes v(x, y) independent of the underlying
595
+ Poisson point process Ξ. The next proposition is a consequence of Proposition 2 in [Pri19],
596
+ which relies on Proposition 3.1 of [Pri12] and Lemma 2.1 of [BRSW17].
597
+ Proposition 4.1 Let n ≥ 1 and r ≥ 2. The n-th moment of the multiparameter stochastic
598
+ integral (4.1) is given by the summation
599
+
600
+ ρ∈Π([n]×[r])
601
+
602
+ (Rd)|ρ| E
603
+
604
+
605
+ n
606
+
607
+ k=1
608
+
609
+ (i,j)∈E(Gk)
610
+ v
611
+
612
+
613
+ k,i, xρ
614
+ k,j
615
+
616
+
617
+
618
+
619
+ η∈V (ρG)
620
+ Λ(dxη),
621
+ (4.2)
622
+ 12
623
+
624
+ where we let xρ
625
+ k,l := xη whenever (k, l) ∈ η, for ρ ∈ Π([n] × [r]) and η ∈ ρ.
626
+ The next proposition rewrites the product in (4.2) as a product on the edges of the graph ρG
627
+ similarly to Proposition 4 of [Pri19] when v(x, y) vanishes on the diagonal, and it generalizes
628
+ Proposition 2.4 of [Jan19] from two-parameter Poisson stochastic integrals to multiparameter
629
+ integrals of higher orders.
630
+ Proposition 4.2 Let n ≥ 1, r ≥ 2, and assume that the process v(x, y) vanishes on diag-
631
+ onals, i.e. v(x, x) = 0, x ∈ Rd. Then, the n-th moment of the multiparameter stochastic
632
+ integral (4.1) is given by the summation
633
+
634
+ ρ∈Π([n]×[r])
635
+ ρ∧π=�0
636
+ (non−flat)
637
+
638
+ (Rd)|ρ|
639
+
640
+ (η1,η2)∈E(ρG)
641
+ E
642
+
643
+ v(xη1, xη2)m(η1,η2)�
644
+
645
+ η∈V (ρG)
646
+ Λ(dxη),
647
+ over connected non-flat diagrams, where m(η1, η2) represents the multiplicity of the edge
648
+ (η1, η2) in the multigraph �ρG.
649
+ The next proposition is a consequence of Propositions 3.3 and 4.2, and it also extends
650
+ Proposition 2.5 of [Jan19] from the two-parameter case to the multiparameter case. Note
651
+ that in our setting, the two-parameter case only applies to the edge counting.
652
+ Proposition 4.3 Let n ≥ 1, r ≥ 2, and assume that the process v(x, y) vanishes on diag-
653
+ onals, i.e. v(x, x) = 0, x ∈ Rd. Then, the n-th cumulant of the multiparameter stochastic
654
+ integral (4.1) is given by the summation
655
+
656
+ ρ∈Π�1([n]×[r])
657
+ ρ∧π=�0
658
+ (non−flat connected)
659
+
660
+ (Rd)|ρ|
661
+
662
+ (η1,η2)∈E(ρG)
663
+ E
664
+
665
+ v(xη1, xη2)m(η1,η2)�
666
+
667
+ η∈V (ρG)
668
+ Λ(dxη)
669
+ (4.3)
670
+ over connected non-flat partition diagrams.
671
+ Proof.
672
+ The functional
673
+ F(ρ) :=
674
+
675
+ ρ∈Π([n]×[r])
676
+ ρ∧π=�0
677
+ (non−flat)
678
+
679
+ (Rd)|ρ|
680
+
681
+ (η1,η2)∈E(ρG)
682
+ E
683
+
684
+ v(xη1, xη2)m(η1,η2)�
685
+
686
+ η∈V (ρG)
687
+ Λ(dxη)
688
+ satisfies the connectedness factorization property (3.1), as for σ = b × [r] ∈ ρ ∨ π and
689
+ σ′ = b′ ×[r] ∈ ρ∨π with b ̸= b′, the variables (xη)η∈ρb are distinct from the variables (xη)η∈ρb′
690
+ in the above integration. Hence, (4.3) follows from Proposition 3.3.
691
+
692
+ 13
693
+
694
+ 5
695
+ Cumulants of subgraph counts
696
+ Let H : Rd × Rd → [0, 1] denote a measurable connection function such that
697
+ 0 <
698
+
699
+ Rd H(x, y)Λ(dx) < ∞,
700
+ for all y ∈ R. Given ω ∈ Ω, for any x, y ∈ ω with x ̸= y, an edge connecting x and y
701
+ is added with probability H(x, y), independently of the other pairs, and in this case we
702
+ write x ↔ y. The resulting random graph, together with the point process Ξ, is called the
703
+ random-connection model and denoted by GH(Ξ).
704
+ In the case where the connection function H is given by H(x, y) := 1{∥x−y∥≤R} for some
705
+ R > 0, the resulting graph is completely determined by the geometric of the underlying
706
+ point process Ξ, and is called a random geometric graph, which is included as a special case
707
+ in this paper.
708
+ Given G a connected graph with |V (G)| = r vertices, we denote NG the count of sub-
709
+ graphs isomorphic to G in the random-connection model GH(Ξ), which can be represented
710
+ as the multiparameter stochastic integral
711
+ NG :=
712
+
713
+ V1,...,Vr∈Ξ
714
+
715
+ (i,j)∈E(G)
716
+ 1{Vi↔Vj} =
717
+
718
+ (Rd)r
719
+
720
+ (i,j)∈E(G)
721
+ 1{xi↔xj} ω(dx1) · · ·ω(dxr),
722
+ up to division by the number of automorphisms of G. Here, we have 1{Vi↔Vj} = 1 or 0
723
+ depending whether Vi and Vj are connected or not by an edge in GH(Ξ), with
724
+ 1{x↔x} = 0,
725
+ x ∈ Rd.
726
+ (5.1)
727
+ The following result is a direct consequence of Proposition 4.3 by taking v(x, y) := 1{x↔y}
728
+ in (4.3) and by using non-flat partition diagrams Γ(ρ, π) such that ρ ∧ π = �0, to take into
729
+ account condition (5.1).
730
+ Proposition 5.1 Let n ≥ 1 and r ≥ 2. The moments and cumulants of NG are given by
731
+ the summation
732
+ E[(NG)n] =
733
+
734
+ ρ∈Π([n]×[r])
735
+ ρ∧π=�0
736
+ (non−flat)
737
+
738
+ (Rd)|��|
739
+
740
+
741
+ (η1,η2)∈E(ρG)
742
+ H(xη1, xη2)
743
+
744
+
745
+ η∈V (ρG)
746
+ Λ(dxη),
747
+ (5.2)
748
+ over non-flat partition diagrams, and by the summation
749
+ κn(NG) =
750
+
751
+ ρ∈Π�1([n]×[r])
752
+ ρ∧π=�0
753
+ (non−flat connected)
754
+
755
+ (Rd)|ρ|
756
+
757
+
758
+ (η1,η2)∈E(ρG)
759
+ H(xη1, xη2)
760
+
761
+
762
+ η∈V (ρG)
763
+ Λ(dxη),
764
+ (5.3)
765
+ 14
766
+
767
+ over connected non-flat partition diagrams.
768
+ Proof.
769
+ Relations (5.2)-(5.3) are consequence of Proposition 4.3, after taking vi,j(xi, xj) :=
770
+ 1{xi↔xj}, (i, j) ∈ E(G). The summations are restricted to non-flat partition diagrams due
771
+ to condition (5.1) as in Section 2 of [Pri19].
772
+
773
+ 6
774
+ Asymptotic growth of subgraph count cumulants
775
+ We assume that the intensity measure of the Poisson point process Ξ on Rd has the form
776
+ Λλ(dx) = λµ(dx),
777
+ λ > 0,
778
+ where µ is a finite diffuse measure on Rd. We investigate the asymptotic behaviour of the
779
+ cumulants κn(NG) as the intensity λ tends to infinity, as a consequence of the partition
780
+ diagram representation of cumulant. For this, we consider the subgraph count in GH(Ξ)
781
+ obtained by replacing H(x, y) with Hλ(x, y) := cλH(x, y), in which case every term in (5.3)
782
+ contributes a factor c|E(ρG)|
783
+ λ
784
+ λ|V (ρG)|.
785
+ In what follows, given two positive functions f and g on (1, ∞) we write f(λ) ≪ g(λ) if
786
+ limλ→∞ g(λ)/f(λ) = ∞, and we consider the following regimes.
787
+ • Dilute regime: for some constant K > 0 we have
788
+ 1
789
+ λ ≪ cλ ≤ K,
790
+ λ → ∞.
791
+ (6.1)
792
+ • Sparse regime: for some constants K > 0 and α ≥ 1 we have
793
+ cλ ≤ K
794
+ λα,
795
+ λ → ∞.
796
+ (6.2)
797
+ In case cλ = K for all λ > 0 we also say that we are in the full random graph regime, and
798
+ in the sequel we take K = 1 for simplicity.
799
+ Assumption 6.1 Let r ≥ 2. There exist two constants c, C > 0 such that for any connected
800
+ non-flat partition diagram Γ(ρ, π), ρ ∈ Π�1([n] × [r]), n ≥ 1, we have
801
+ c|E(ρG)|C|V (ρG)| ≤
802
+
803
+ Rd · · ·
804
+
805
+ Rd
806
+
807
+
808
+ (i,j)∈E(ρG)
809
+ H(xi, xj)
810
+
811
+
812
+ k∈V (ρG)
813
+ µ(dxk).
814
+ (6.3)
815
+ 15
816
+
817
+ We note that (6.3) is satisfied by e.g. any translation-invariant continuous kernel function
818
+ H : Rd×Rd → [0, 1] non vanishing at 0, including the standard Rayleigh connection function
819
+ given by H(x, y) = e−β∥x−y∥2, x, y ∈ Rd, for some β > 0. Indeed, for those kernels there
820
+ exists c > 0 and a Borel set B ⊂ Rd such that µ(B) > 0 and
821
+ H(x, y) = H(x − y, 0) ≥ c1B(x)1B(y),
822
+ x, y ∈ Rd,
823
+ hence
824
+ c|E(ρG)|(µ(B))|V (ρG)|
825
+ =
826
+ c|E(ρG)|
827
+
828
+ B
829
+ · · ·
830
+
831
+ B
832
+
833
+ k∈V (ρG)
834
+ µ(dxk)
835
+
836
+
837
+ Rd · · ·
838
+
839
+ Rd
840
+
841
+
842
+ (i,j)∈E(ρG)
843
+ H(xi, xj)
844
+
845
+
846
+ k∈V (ρG)
847
+ µ(dxk).
848
+ In what follows, we consider the centered and normalized subgraph count cumulants defined
849
+ as
850
+
851
+ NG := NG − κ1(NG)
852
+
853
+ κ2(NG)
854
+ ,
855
+ n ≥ 1.
856
+ The following result shows that for n ≥ 3 the normalized cumulant κn( �NG) tends to zero
857
+ in (6.5), hence �
858
+ NG converges in distribution to the normal distribution by Theorem 1 in
859
+ [Jan88].
860
+ Theorem 6.1 (Dilute regime) Let r ≥ 2 and consider G a connected graph with |V (G)| =
861
+ r vertices, satisfying Assumption 6.1 in the dilute regime (6.1). We have the cumulant bounds
862
+ (n − 1)!cn|E(G)|
863
+ λ
864
+ (K1λ)1+(r−1)n ≤ κn(NG) ≤ n!r−1cn|E(G)|
865
+ λ
866
+ (K2λ)1+(r−1)n
867
+ (6.4)
868
+ for some constants K1, K2 > 0 independent of λ, n ≥ 1, and
869
+ ��κn
870
+ � �NG
871
+ ��� ≤ n!r−1(Kλ)−(n/2−1),
872
+ λ ≥ 1,
873
+ n ≥ 2,
874
+ (6.5)
875
+ where K > 0 is a constant independent of λ > 0 and n ≥ 1.
876
+ Proof.
877
+ We identify the leading terms in the sum (5.3) over connected partition diagrams,
878
+ knowing that every vertex in ρG contributes a factor λ, and that every edge contributes a
879
+ factor cλ, therefore every summand in (5.3) contributes a factor c|E(ρG)|
880
+ λ
881
+ λ|V (ρG)|.
882
+ Modifying ρ ∈ Π�1([n] × [r]) by splitting a block in two means adding a vertex to ρG, and
883
+ therefore a adding factor λ to the corresponding term in (5.3). At the same time, this entails
884
+ 16
885
+
886
+ no loss of edge but possibly the addition of an edge to ρG, which results into an additional
887
+ factor cλ with λcλ ≫ 1 by (6.1). Hence, the leading terms in (5.3) are those associated with
888
+ the connected partition diagrams Γ(ρ, π) having the highest block count, i.e. which have
889
+ 1 + (r − 1)n blocks, see Figure 6 for a sample of such partition diagram.
890
+ 1
891
+ 2
892
+ 3
893
+ 4
894
+ 5
895
+ 1
896
+ 2
897
+ 3
898
+ 4
899
+ (a) Diagram Γ(ρ, π) and graph �ρG in blue.
900
+ 1
901
+ 2
902
+ 3
903
+ 4
904
+ 5
905
+ 1
906
+ 2
907
+ 3
908
+ 4
909
+ (b) Diagram Γ(ρ, π) and graph ρG in red.
910
+ Figure 6: Example of maximal connected partition diagram with n = 5 and r = 4.
911
+ We note that any maximal partition ρ satisfies |E(ρG)| = n × |E(G)|, as can be checked in
912
+ Figure 6. Therefore, by (2.3)-(2.4), (5.3) and (6.3), we obtain
913
+ cn|E(G)|C1+(r−1)ncn|E(G)|
914
+ λ
915
+ ((r − 1)r)n−1(n − 1)!λ1+(r−1)n
916
+
917
+ λ1+(r−1)ncn|E(G)|
918
+ λ
919
+
920
+ ρ∈Mn
921
+
922
+ (Rd)1+(r−1)n
923
+
924
+
925
+ (η1,η2)∈E(ρG)
926
+ Hλ(xη1, xη2)
927
+
928
+
929
+ η∈V (ρG)
930
+ µ(dxη),
931
+
932
+ κn(NG)
933
+
934
+ n!r−1rn−1r!n−1(µ(Rd))1+(r−1)ncn|E(G)|
935
+ λ
936
+ λ1+(r−1)n,
937
+ which yields (6.4). Regarding (6.5), we have, for n ≥ 2,
938
+ ��κn( �NG)
939
+ �� ≤
940
+ n!r−1cn|E(G)|
941
+ λ
942
+ (K2λ)1+(r−1)n
943
+
944
+ (2 − 1)!c2|E(G)|
945
+ λ
946
+ (K1λ)1+2(r−1)�n/2 = K2
947
+ �(K2/K1)r−1
948
+ √K1
949
+ �n
950
+ n!r−1λ−(n/2−1).
951
+
952
+ The following result yields a positive cumulant growth of order α − (α − 1)r > 0 in (6.6) for
953
+ trees in the sparse regime with α ∈ [1, r/(r − 1)), while in the case of non-tree graphs such
954
+ as cycle graphs the growth rate r − α|E(G)| ≤ (1 − α)r ≤ 0 is negative or zero in (6.8) and
955
+ (6.10). In addition, the normalized cumulant κn( �NG) tends to zero for n ≥ 3 in (6.7) only
956
+ 17
957
+
958
+ when G is a tree, in which case �
959
+ NG converges in distribution to the normal distribution by
960
+ Theorem 1 in [Jan88]. We note that when α = 1, (6.7) is consistent with (6.5).
961
+ Theorem 6.2 (Sparse regime) Let G be a connected graph with |V (G)| = r vertices,
962
+ r ≥ 2, satisfying Assumption 6.1 in the sparse regime (6.2).
963
+ a) If G is a tree, i.e. |E(G)| = r − 1, we have the cumulant bounds
964
+ (K1)(r−1)nλα−(α−1)r ≤ κn(NG) ≤ n!r−1(K2)(r−1)nλα−(α−1)r,
965
+ (6.6)
966
+ for some constants K1 > 0, K2 > 1 independent of λ, n ≥ 1, and
967
+ ��κn( �
968
+ NG)
969
+ �� ≤ (K3)nn!r−1λ−(α−(α−1)r)(n/2−1),
970
+ λ ≥ 1,
971
+ n ≥ 2,
972
+ (6.7)
973
+ where K3 := (K2/K1)r−1.
974
+ b) If G is not a tree, i.e. |E(G)| ≥ r, we have the cumulant bounds
975
+ (K1)rλr−α|E(G)| ≤ κn(NG) ≤ n!r−1(K2)(r−1)nλr−α|E(G)|,
976
+ (6.8)
977
+ for some constants K1 > 0, K2 > 1 independent of λ, n ≥ 1, and
978
+ ��κn( �
979
+ NG)
980
+ �� ≤ n!r−1(K3)nλ(α|E(G)|−r)(n/2−1),
981
+ λ ≥ 1,
982
+ n ≥ 2,
983
+ (6.9)
984
+ for some K3 > 0.
985
+ c) If G is a cycle, i.e. |E(G)| = r, we have the cumulant bounds
986
+ (K1)rλ−(α−1)r ≤ κn(NG) ≤ n!r−1(K2)(r−1)nλ−(α−1)r,
987
+ (6.10)
988
+ for some constants K1 > 0, K2 > 1 independent of λ, n ≥ 1, and
989
+ ��κn( �
990
+ NG)
991
+ �� ≤ n!r−1(K3)nλ(α−1)(n/2−1)r,
992
+ λ ≥ 1,
993
+ n ≥ 2,
994
+ (6.11)
995
+ for some K3 > 0.
996
+ Proof.
997
+ In the sparse regime (6.2), every edge in the graph ρG contributes a power λ−α and
998
+ every vertex contributes a power λ, hence every term in (5.3) contributes a power
999
+ λ|V (ρG)|−α|E(ρG)| = λα−(α−1)|V (ρG)|+(|V (ρG)|−|E(ρG)|−1)α ≤ λα−(α−1)|V (ρG)|
1000
+ (6.12)
1001
+ 18
1002
+
1003
+ since |V (ρG)| − |E(ρG)| − 1 ≤ 0. In addition, for any connected partition diagram Γ(ρ, π)
1004
+ with ρ ∈ Π�1([n] × [r]), we have
1005
+ r ≤ |V (ρG)| ≤ 1 + (r − 1)n.
1006
+ a) When G is a tree and the graph ρG is also a tree, i.e. |V (ρG)| − |E(ρG)| − 1 = 0, the
1007
+ maximal order λα−(α−1)|V (ρG)| is attained in (6.12), see Figure 7 for an example.
1008
+ 1
1009
+ 2
1010
+ 3
1011
+ 4
1012
+ 5
1013
+ 1
1014
+ 2
1015
+ 3
1016
+ 4
1017
+ (a) Diagram Γ(ρ, π) and multigraph �ρG in blue.
1018
+ 1
1019
+ 2
1020
+ 3
1021
+ 4
1022
+ 5
1023
+ 1
1024
+ 2
1025
+ 3
1026
+ 4
1027
+ (b) Diagram Γ(σ, π) and graph ρG in red.
1028
+ Figure 7: Example of connected partition diagram with ρG a tree and n = 5, r = 4.
1029
+ In this case, the corresponding term in (5.3) contributes a power
1030
+ λ|V (ρG)|−α|E(ρG)| = λα−(α−1)|V (ρG)|,
1031
+ λ ≥ 1.
1032
+ In this case, since |V (ρG)| ≥ r and α ≥ 1, the optimal rate λα−(α−1)r is attained by the
1033
+ partition diagrams Γ(ρ, π) such that |V (ρG)| = r, as illustrated in Figure 8.
1034
+ 1
1035
+ 2
1036
+ 3
1037
+ 4
1038
+ 5
1039
+ 1
1040
+ 2
1041
+ 3
1042
+ 4
1043
+ (a) Diagram Γ(ρ, π) and multigraph �ρG in blue.
1044
+ 1
1045
+ 2
1046
+ 3
1047
+ 4
1048
+ 5
1049
+ 1
1050
+ 2
1051
+ 3
1052
+ 4
1053
+ (b) Diagram Γ(ρ, π) and graph ρG in red.
1054
+ Figure 8: Tree diagram ρG with G a tree with |V (ρG)| = r and n = 5, r = 4.
1055
+ 19
1056
+
1057
+ We conclude to (6.6) using Lemma 2.6 as in the proof of Theorem 6.1, by upper bounding
1058
+ the count of connected partitions from (2.3). Regarding (6.7), we have
1059
+ ��κn( �
1060
+ NG)
1061
+ ��
1062
+
1063
+ n!r−1K(r−1)n
1064
+ 2
1065
+ ((K1)2(r−1)λα−(α−1)r)n/2λα−(α−1)r
1066
+ =
1067
+ �K2
1068
+ K1
1069
+ �(r−1)n
1070
+ n!r−1λ−(α−(α−1)r)(n/2−1),
1071
+ n ≥ 2.
1072
+ b) When G is not a tree it contains at least one cycle, and for any partition ρ ∈ Π�1([n] × [r])
1073
+ the same holds for the graph ρG. In this case, the highest order contribution in (5.3) is
1074
+ attained by connected non-flat partition diagrams Γ(ρ, π), ρ ∈ Π�1([n] × [r]), such that ρG
1075
+ has |V (ρG)| = r vertices, and their contribution is given by a power of order λr−α|E(G)|. An
1076
+ example of such partition diagram ρ is given in Figure 9, with G a cycle.
1077
+ 1
1078
+ 2
1079
+ 3
1080
+ 1
1081
+ 2
1082
+ 3
1083
+ 4
1084
+ (a) Diagram Γ(ρ, π) and multigraph �ρG in blue.
1085
+ 1
1086
+ 2
1087
+ 3
1088
+ 1
1089
+ 2
1090
+ 3
1091
+ 4
1092
+ (b) Diagram Γ(σ, π) and graph ρG in red.
1093
+ Figure 9: Cycle diagram ρG with G a cycle graph and n = 5, r = 4.
1094
+ Indeed, in order to remain non flat, the partition diagram ρ can only be modified into a
1095
+ partition diagram σ by splitting a block of ρG in two, which entails the addition of a number
1096
+ q of edges, q ≥ 1, resulting into an additional factor λ1−qα ≤ 1 that may only lower the order
1097
+ of the contribution, see Figures 10-13 for an example where G is a graph with one cycle.
1098
+ 1
1099
+ 2
1100
+ 3
1101
+ 1
1102
+ 2
1103
+ 3
1104
+ 4
1105
+ (a) Diagram Γ(ρ, π) with order λ4−4α.
1106
+ 1
1107
+ 2
1108
+ 3
1109
+ 1
1110
+ 2
1111
+ 3
1112
+ 4
1113
+ (b) Diagram Γ(σ, π) with order λ5−5α = λ4−4αλ−(α−1).
1114
+ Figure 10: Splitting of a vertex with addition of one edge and n = 3, r = 4.
1115
+ 20
1116
+
1117
+ 1
1118
+ 2
1119
+ 3
1120
+ 1
1121
+ 2
1122
+ 3
1123
+ 4
1124
+ (a) Diagram Γ(ρ, π) with order λ4−4α.
1125
+ 1
1126
+ 2
1127
+ 3
1128
+ 1
1129
+ 2
1130
+ 3
1131
+ 4
1132
+ (b) Diagram Γ(σ, π) with order λ5−6α = λ4−4αλ1−2α.
1133
+ Figure 11: Splitting of a vertex with addition of three edges and n = 3, r = 4.
1134
+ 1
1135
+ 2
1136
+ 3
1137
+ 1
1138
+ 2
1139
+ 3
1140
+ 4
1141
+ (a) Diagram Γ(ρ, π) with order λ4−4α.
1142
+ 1
1143
+ 2
1144
+ 3
1145
+ 1
1146
+ 2
1147
+ 3
1148
+ 4
1149
+ (b) Diagram Γ(σ, π) with order λ5−6α = λ4−4αλ1−2α.
1150
+ Figure 12: Splitting of a vertex with addition of two edges and n = 3, r = 4.
1151
+ 1
1152
+ 2
1153
+ 3
1154
+ 1
1155
+ 2
1156
+ 3
1157
+ 4
1158
+ (a) Diagram Γ(ρ, π) with order λ4−4α.
1159
+ 1
1160
+ 2
1161
+ 3
1162
+ 1
1163
+ 2
1164
+ 3
1165
+ 4
1166
+ (b) Diagram Γ(σ, π) with order λ5−6α = λ4−4αλ1−2α.
1167
+ Figure 13: Splitting of a vertex with addition of two edges and n = 3, r = 4.
1168
+ When G is a triangle with n = 2 and r = 3, the above procedure can be reversed by first
1169
+ merging a vertex and then gluing edges, see Figure 14, which results into “overlapping” all
1170
+ copies of the graph G.
1171
+ 21
1172
+
1173
+ (a) Merging one vertex.
1174
+ (b) Gluing one edge.
1175
+ (c) Gluing three edges.
1176
+ Figure 14: Diagram patterns with G a triangle and n = 2, r = 3.
1177
+ As in part-(b) above, we lower bound κn(NG) using a single partition, and we upper bound
1178
+ using the total count of connected non-flat partition diagrams using Lemma 2.6-b) to obtain
1179
+ (6.8). Regarding (6.9), we have
1180
+ ��κn( �
1181
+ NG)
1182
+ �� ≤ n!r−1(K2)(r−1)nλr−α|E(G)|
1183
+ ((K1)rλr−α|E(G)|)n/2
1184
+ = n!r−1(K2)(r−1)n
1185
+ (K1)nr/2 λ−(r−α|E(G)|)(n/2−1),
1186
+ n ≥ 2.
1187
+ c) is a direct consequence of part b) above.
1188
+
1189
+ 7
1190
+ Asymptotic normality of subgraph counts
1191
+ The cumulant bound (6.5) shows that the centered and normalized subgraph count �
1192
+ NG
1193
+ satisfies the Statuleviˇcius condition (A.1) below, see [RSS78, DJS22], with γ := r − 2. As a
1194
+ consequence, we have the following result, in which the Berry-Esseen rate is obtained when
1195
+ r = 2.
1196
+ Corollary 7.1 (Dilute regime) Let G be a connected graph with |V (G)| = r vertices,
1197
+ r ≥ 2, satisfying Assumption 6.1 in the dilute regime (6.1).
1198
+ We have the Kolmogorov
1199
+ distance bound
1200
+ sup
1201
+ x∈R
1202
+ ��P
1203
+ � �
1204
+ NG ≤ x
1205
+
1206
+ − P(Z ≤ x)
1207
+ �� ≤ Cλ−1/(4r−6),
1208
+ (7.1)
1209
+ with rate 1/(4r − 6) as λ tends to infinity, where C > 0 depends only on H and G.
1210
+ In addition, by Theorem 1.1 of [DE13], �NG satisfies a moderate deviation principle with
1211
+ speed a2
1212
+ λ = o(λ1/(2r−3)) and rate function x2/2, see Lemma A.1-iii) in appendix. The cu-
1213
+ mulant bounds (6.7), (6.9), (6.11) show that the centered and normalized subgraph count
1214
+
1215
+ NG satisfies the Statuleviˇcius condition (A.1) below, see [RSS78, DJS22], with γ := r − 2.
1216
+ As a consequence, we have the following result, in which (7.2) is consistent with (7.1) when
1217
+ α = 1.
1218
+ 22
1219
+
1220
+ Corollary 7.2 (Sparse regime) Let G be a tree with |V (G)| = r ≥ 2 vertices, satisfying
1221
+ Assumption 6.1 in the sparse regime (6.2) with α ∈ [1, r/(r − 1)). We have the Kolmogorov
1222
+ distance bound
1223
+ sup
1224
+ x∈R
1225
+ ��P
1226
+ � �
1227
+ NG ≤ x
1228
+
1229
+ − P(Z ≤ x)
1230
+ �� ≤ Cλ−(α−(α−1)r)/(4r−6),
1231
+ (7.2)
1232
+ as λ tends to infinity, where C > 0 depends only on H and G.
1233
+ In addition, by Theorem 1.1 of [DE13], �NG satisfies a moderate deviation principle with
1234
+ speed a2
1235
+ λ = o(λ(α−(α−1)r)/(2r−3)) and rate function x2/2, see Lemma A.1-iii) in appendix.
1236
+ Remark 7.3 We note that up to division by 2r − 3, the rate in (7.2) is consistent with
1237
+ the rate (α − (α − 1)r)/2 obtained for the counting of trees in the Erd˝os-R´enyi graph, cf.
1238
+ Corollary 4.10 of [PS20].
1239
+ Remark 7.4 Since (α|E(G)| − r)(n/2 − 1) ≥ (α − 1)(n/2 − 1)r ≥ 0, no significant Kol-
1240
+ mogorov bounds are derived from (6.9) and (6.11) for cycle and other non-tree graphs in the
1241
+ sparse regime, which is consistent with Corollaries 4.8-4.9 of [PS20].
1242
+ A
1243
+ Appendix
1244
+ The following results are summarized from the “main lemmas” in Chapter 2 of [SS91] and
1245
+ [DE13], and are tailored to our RCM applications. We let Φ denote the cumulative distri-
1246
+ bution function of the standard normal distribution.
1247
+ Lemma A.1 Let (Xλ)λ≥1 be a family of random variables with mean zero and unit variance
1248
+ for all λ > 0. Suppose that for all λ ≥ 1, all moments of the random variable Xλ exist and
1249
+ that the cumulants of Xλ satisfy
1250
+ |κj(Xλ)| ≤ (j!)1+γ
1251
+ (∆λ)j−2,
1252
+ j ≥ 3,
1253
+ (A.1)
1254
+ where γ ≥ 0 is a constant not depending on λ, while ∆λ ∈ (0, ∞) may depend on λ. Then,
1255
+ the following assertions hold.
1256
+ i) (Kolmogorov bound). One has
1257
+ sup
1258
+ x∈R
1259
+ |P(Xλ ≤ x) − Φ(x)| ≤
1260
+ C
1261
+ (∆λ)1/(1+2γ) ,
1262
+ (A.2)
1263
+ for some constant C only depending on γ, see [SS91, Corollary 2.1] and [DJS22, Theo-
1264
+ rem 2.4].
1265
+ 23
1266
+
1267
+ ii) (Concentration inequality). For any x ≥ 0 and sufficiently large λ,
1268
+ P(|Xλ| ≥ x) ≤ 2 exp
1269
+
1270
+ −1
1271
+ 4 min
1272
+
1273
+ x2
1274
+ 21/(1+γ), (x∆λ)1/(1+γ)
1275
+ ��
1276
+ .
1277
+ (A.3)
1278
+ See the corollary to [SS91, Lemma 2.4].
1279
+ iii) (Moderate deviation principle). Let (aλ)λ>0 be a sequence of real numbers tending to
1280
+ infinity, and such that
1281
+ lim
1282
+ λ→∞
1283
+
1284
+ (∆λ)1/(1+2γ) = 0.
1285
+ Then, (a−1
1286
+ λ Xλ)λ>0 satisfies a moderate deviation principle with speed a2
1287
+ λ and rate func-
1288
+ tion x2/2, see [DE13, Theorem 1.1].
1289
+ iv) (Normal approximation with Cram´er corrections). There exists a constant c > 0 such
1290
+ that for all λ ≥ 1 and x ∈ (0, c(∆λ)1/(1+2γ)) we have
1291
+ P(Xλ ≥ x)
1292
+ 1 − Φ(x)
1293
+ =
1294
+
1295
+ 1 + O
1296
+
1297
+ x + 1
1298
+ (∆λ)1/(1+2γ)
1299
+ ��
1300
+ exp
1301
+ �˜L(x)
1302
+
1303
+ ,
1304
+ P(Xλ ≤ −x)
1305
+ Φ(−x)
1306
+ =
1307
+
1308
+ 1 + O
1309
+
1310
+ x + 1
1311
+ (∆λ)1/(1+2γ)
1312
+ ��
1313
+ exp
1314
+ �˜L(−x)
1315
+
1316
+ ,
1317
+ where ˜L(x) is related to the Cram´er-Petrov series, see [SS91, Lemma 2.3].
1318
+ References
1319
+ [BKR89]
1320
+ A.D. Barbour, M. Karo´nski, and A. Ruci´nski. A central limit theorem for decomposable random
1321
+ variables with applications to random graphs. J. Combin. Theory Ser. B, 47(2):125–145, 1989.
1322
+ [BOR85]
1323
+ E.A. Bender, A.M. Odlyzko, and L.B. Richmond. The asymptotic number of irreducible parti-
1324
+ tions. European J. Combin., 6(1):1–6, 1985.
1325
+ [BR12]
1326
+ R. Balakrishnan and K. Ranganathan. A textbook of graph theory. Universitext. Springer, New
1327
+ York, second edition, 2012.
1328
+ [BRSW17] K. Bogdan, J. Rosi´nski, G. Serafin, and L. Wojciechowski. L´evy systems and moment formulas
1329
+ for mixed Poisson integrals. In Stochastic analysis and related topics, volume 72 of Progr. Probab.,
1330
+ pages 139–164. Birkh¨auser/Springer, Cham, 2017.
1331
+ [CT22]
1332
+ V. H. Can and K. D. Trinh. Random connection models in the thermodynamic regime: central
1333
+ limit theorems for add-one cost stabilizing functionals. Electron. J. Probab., 27:1–40, 2022.
1334
+ [DE13]
1335
+ H. D¨oring and P. Eichelsbacher. Moderate deviations via cumulants. J. Theoret. Probab., 26:360–
1336
+ 385, 2013.
1337
+ [DJS22]
1338
+ H. D¨oring, S. Jansen, and K. Schubert. The method of cumulants for the normal approximation.
1339
+ Probab. Surv., 19:185–270, 2022.
1340
+ [ER59]
1341
+ P. Erd˝os and A. R´enyi. On random graphs. I. Publ. Math. Debrecen, 6:290–297, 1959.
1342
+ [ER21]
1343
+ P. Eichelsbacher and B. Rednoß. Kolmogorov bounds for decomposable random variables and
1344
+ subgraph counting by the Stein-Tikhomirov method. Preprint arXiv:2107.03775, 2021.
1345
+ 24
1346
+
1347
+ [ET14]
1348
+ P. Eichelsbacher and C. Th¨ale. New Berry-Esseen bounds for non-linear functionals of Poisson
1349
+ random measures. Electron. J. Probab., 19:no. 102, 25, 2014.
1350
+ [Gil59]
1351
+ E.N. Gilbert. Random graphs. Ann. Math. Statist, 30(4):1141–1144, 1959.
1352
+ [GT18a]
1353
+ J. Grote and C. Th¨ale. Concentration and moderate deviations for Poisson polytopes and poly-
1354
+ hedra. Bernoulli, 24:2811–2841, 2018.
1355
+ [GT18b]
1356
+ J. Grote and C. Th¨ale. Gaussian polytopes: a cumulant-based approach. Journal of Complexity,
1357
+ 47:1–41, 2018.
1358
+ [Jan88]
1359
+ S. Janson. Normal convergence by higher semiinvariants with applications to sums of dependent
1360
+ random variables and random graphs. Ann. Probab., 16(1):305–312, 1988.
1361
+ [Jan19]
1362
+ S. Jansen. Cluster expansions for Gibbs point processes. Adv. in Appl. Probab., 51(4):1129–1178,
1363
+ 2019.
1364
+ [Kho08]
1365
+ O. Khorunzhiy. On connected diagrams and cumulants of Erd˝os-R´enyi matrix models. Comm.
1366
+ Math. Phys., 282:209–238, 2008.
1367
+ [KRT17]
1368
+ K. Krokowski, A. Reichenbachs, and C. Th¨ale. Discrete Malliavin-Stein method: Berry-Esseen
1369
+ bounds for random graphs and percolation. Ann. Probab., 45(2):1071–1109, 2017.
1370
+ [LNS21]
1371
+ G. Last, F. Nestmann, and M. Schulte. The random connection model and functions of edge-
1372
+ marked Poisson processes: second order properties and normal approximation.
1373
+ Ann. Appl.
1374
+ Probab., 31(1):128–168, 2021.
1375
+ [LP18]
1376
+ G. Last and M.D. Penrose. Lectures on the Poisson process, volume 7 of Institute of Mathematical
1377
+ Statistics Textbooks. Cambridge University Press, Cambridge, 2018.
1378
+ [LRR16]
1379
+ R. Lachi`eze-Rey and M. Reitzner. U-statistics in stochastic geometry. In G. Peccati and M. Re-
1380
+ itzner, editors, Stochastic Analysis for Poisson Point Processes: Malliavin Calculus, Wiener-Itˆo
1381
+ Chaos Expansions and Stochastic Geometry, volume 7 of Bocconi & Springer Series, pages 229–
1382
+ 253. Springer, Berlin, 2016.
1383
+ [MM91]
1384
+ V.A. Malyshev and R.A. Minlos. Gibbs random fields, volume 44 of Mathematics and its Appli-
1385
+ cations (Soviet Series). Kluwer Academic Publishers Group, Dordrecht, 1991.
1386
+ [Pri12]
1387
+ N. Privault. Moments of Poisson stochastic integrals with random integrands. Probability and
1388
+ Mathematical Statistics, 32(2):227–239, 2012.
1389
+ [Pri19]
1390
+ N. Privault.
1391
+ Moments of k-hop counts in the random-connection model.
1392
+ J. Appl. Probab.,
1393
+ 56(4):1106–1121, 2019.
1394
+ [Pri22]
1395
+ N. Privault. Asymptotic analysis of k-hop connectivity in the 1D unit disk random graph model.
1396
+ Preprint arXiv:2203.14535, 40 pages, 2022.
1397
+ [PS20]
1398
+ N. Privault and G. Serafin. Normal approximation for sums of discrete U-statistics - application
1399
+ to Kolmogorov bounds in random subgraph counting. Bernoulli, 26(1):587–615, 2020.
1400
+ [PS22]
1401
+ N. Privault and G. Serafin. Berry-Esseen bounds for functionals of independent random variables.
1402
+ Electron. J. Probab., 27:1–37, 2022.
1403
+ [PT11]
1404
+ G. Peccati and M. Taqqu. Wiener Chaos: Moments, Cumulants and Diagrams: A survey with
1405
+ Computer Implementation. Bocconi & Springer Series. Springer, 2011.
1406
+ [R¨ol22]
1407
+ A. R¨ollin.
1408
+ Kolmogorov bounds for the normal approximation of the number of triangles in
1409
+ the Erd˝os-R´enyi random graph.
1410
+ Probability in the Engineering and Informational Sciences,
1411
+ 36(3):747–773, 2022.
1412
+ [Rot64]
1413
+ G.-C. Rota. On the foundations of combinatorial theory. I. Theory of M¨obius functions. Z.
1414
+ Wahrscheinlichkeitstheorie und Verw. Gebiete, 2:340–368, 1964.
1415
+ [RSS78]
1416
+ R. Rudzkis, L. Saulis, and V.A. Statuljaviˇcus. A general lemma on probabilities of large devia-
1417
+ tions. Litovsk. Mat. Sb., 18(2):99–116, 217, 1978.
1418
+ 25
1419
+
1420
+ [Ruc88]
1421
+ A. Ruci´nski.
1422
+ When are small subgraphs of a random graph normally distributed?
1423
+ Probab.
1424
+ Theory Related Fields, 78:1–10, 1988.
1425
+ [SS91]
1426
+ L. Saulis and V.A. Statuleviˇcius. Limit theorems for large deviations, volume 73 of Mathematics
1427
+ and its Applications (Soviet Series). Kluwer Academic Publishers Group, Dordrecht, 1991.
1428
+ 26
1429
+
3dFLT4oBgHgl3EQfry9C/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
4dAzT4oBgHgl3EQfffz3/content/2301.01455v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ed3d2fc7428b661dc615feae5e7505c74ac577aec516c89c34ab6b019fa1c63d
3
+ size 2453675
4dAzT4oBgHgl3EQfffz3/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8c0bcbf1b3c9ecace5c762602d6632feb83f5382b5032fb0ed9a56279ba1c52d
3
+ size 720941
4dAzT4oBgHgl3EQfffz3/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:af8bc173bcdc92f4590e7222ebcf6b0dc1d072269dc05da720543764fd209419
3
+ size 30918
5NE0T4oBgHgl3EQfegCm/content/tmp_files/2301.02392v1.pdf.txt ADDED
@@ -0,0 +1,1592 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ arXiv:2301.02392v1 [physics.plasm-ph] 6 Jan 2023
2
+ Moment-Fourier approach to ion parallel fluid closures and
3
+ transport for a toroidally confined plasma
4
+ Jeong-Young Ji,∗ Eric D. Held, and J. Andrew Spencer
5
+ Department of Physics, Utah State University, Logan, Utah 84322, USA
6
+ Yong-Su Na
7
+ Department of Nuclear Engineering,
8
+ Seoul National University, Seoul 08826, South Korea
9
+ Abstract
10
+ A general method of solving the drift kinetic equation is developed for an axisymmetric magnetic
11
+ field. Expanding a distribution function in general moments a set of ordinary differential equa-
12
+ tions are obtained.
13
+ Successively expanding the moments and magnetic-field involved quantities
14
+ in Fourier series, a set of linear algebraic equations is obtained. The set of full (Maxwellian and
15
+ non-Maxwellian) moment equations is solved to express the density, temperature, and flow veloc-
16
+ ity perturbations in terms of radial gradients of equilibrium pressure and temperature. Closure
17
+ relations that connect parallel heat flux density and viscosity to the radial gradients and parallel
18
+ gradients of temperature and flow velocity, are also obtained by solving the non-Maxwellian mo-
19
+ ment equations. The closure relations combined with the linearized fluid equations reproduce the
20
+ same solution obtained directly from the full moment equations. The method can be generalized
21
+ to derive closures and transport for an electron-ion plasma and a multi-ion plasma in a general
22
+ magnetic field.
23
24
+ 1
25
+
26
+ I.
27
+ INTRODUCTION
28
+ For magnetically confined plasmas, neoclassical transport theory describes particle, heat,
29
+ and momentum transport of a steady-state plasma due to Coulomb collisions in an inhomo-
30
+ geneous magnetic field [1–7]. The neoclassical transport is obtained by solving the first order
31
+ drift kinetic equation [8, 9] assuming a zeroth order background distribution (see Ref. [10, 11]
32
+ for reviews). Due to difficulty in treating the integro-differential collision operator in veloc-
33
+ ity space, modified collision operators have been adopted for analytical work. Numerical
34
+ work may adopt the Landau (Fokker-Planck) collision operator with desired accuracy by in-
35
+ creasing velocity space resolution. Numerous transport codes have been developed to solve
36
+ the continuum drift kinetic equation with a modified [12, 13] or an exact Landau collision
37
+ operator [14–19].
38
+ For describing a macroscopic state of a tokamak plasma, the fluid variables are of primary
39
+ importance and solving fluid equations instead of the kinetic equation may be sufficient. Due
40
+ to significantly lower dimensionality of position space compared to phase space, numerically
41
+ solving fluid equations has a great advantage over solving the kinetic equation [20–24]. The
42
+ key issue is to obtain proper closures to capture desired physics effects. Even though the
43
+ heat flux density is derived in neoclassical transport theory, it cannot serve as one of closures
44
+ for the temperature equation because it is derived from the fluid equations, and hence,
45
+ expressed in terms of the zeroth-order density and temperature instead of the (first-order)
46
+ fluid variables whose evolution equations are to be closed. That is, the heat flux derived
47
+ from the divergence free condition plays no role for the divergence term in the temperature
48
+ equation.
49
+ In this work, we introduce an analytic method to solve the drift kinetic equation to obtain
50
+ closures and transport. For a magnetized plasma, the parallel moment equations are derived
51
+ in Ref. [25]. One advantage of the moment approach is the availability of the exact collisional
52
+ moments of the linearized Landau operator [26]. The moment-based collision operator can
53
+ be utilized for the linear and nonlinear gyrokinetic Coulomb collision operator [27–29]. For
54
+ slab geometry where the magnetic field strength does not change along a magnetic field
55
+ line, the drift-kinetic equation can be converted to a linear system of ordinary differential
56
+ equations with constant coefficients. This linear system can be analytically solved for the
57
+ 2
58
+
59
+ parallel moments using the eigenvector method [30].
60
+ On the other hand, for an inhomogeneous magnetic field of a tokamak, the drift kinetic
61
+ equations becomes a linear system of ordinary differential equations with varying coefficients.
62
+ This means that the eigenvector method used in the integral closure [30] does not work. For
63
+ a system of linear differential equations with varying coefficients, we can Fourier-expand
64
+ the varying coefficients and moments to build a system of linear algebraic equations. While
65
+ truncation both in the moments and Fourier modes is inevitable, the solution of the truncated
66
+ system is equivalent to that of the drift kinetic equation when convergence is achieved by
67
+ increasing the number of moments and Fourier modes. The solution moments can then be
68
+ used to construct the distribution function that is the solution of the drift kinetic equation.
69
+ Therefore the moment solution can be used for benchmarking numerous fluid and kinetic
70
+ codes.
71
+ In Sec. II, we present the parallel moment equations which are equivalent to the first order
72
+ drift kinetic equation. In Sec. III, we use the Fourier expansion to solve the general moment
73
+ equations for fluid quantities in Fourier series.
74
+ The convergent solution is presented as
75
+ the numbers of moments and Fourier modes increase. In Sec. IV, we derive closures and
76
+ incorporate them into fluid equations to reproduce the fluid quantities. In Sec.V, we conclude
77
+ and discuss possible extensions of the work to more general plasmas.
78
+ II.
79
+ DRIFT KINETIC EQUATION AND MOMENT EQUATIONS
80
+ In standard neoclassical transport theory (see Ref. [11] for a general review), drift kinetic
81
+ equations are solved for ion and electron transport. An analytic solution can be obtained
82
+ for an axisymmetric magnetic field
83
+ B = I∇ζ + ∇ζ × ∇ψ
84
+ (1)
85
+ where 2πψ is the poloidal flux, 2πI/µ0 is the poloidal current, µ0 is the magnetic perme-
86
+ ability, and ζ is the toroidal angle.
87
+ For simplicity, we assume a circular magnetic field
88
+ B =
89
+ B0
90
+ 1 + ǫ cos θ
91
+ (2)
92
+ 3
93
+
94
+ where θ is the poloidal angle, B0 is a constant reference field, ǫ = r/R0 is the inverse aspect
95
+ ratio, and R0 and r respectively are the major and minor radii of a circular-shape flux
96
+ surface.
97
+ For ion transport, the ion-electron collisions are often ignored and the reduced ion drift
98
+ kinetic equation for the first-order distribution function f1 becomes
99
+ v∥∂∥(f1 − F) = C(f1)
100
+ (3)
101
+ with
102
+ F = −Iv∥
103
+
104
+ df0
105
+ dψ = −Iv∥
106
+
107
+ �d ln p0
108
+
109
+ +
110
+
111
+ s2 − 5
112
+ 2
113
+ � d ln T0
114
+
115
+
116
+ f0
117
+ (4)
118
+ and
119
+ f0(ψ, w) =
120
+ n0(ψ)
121
+ [2πmT0(ψ)]3/2e−w/T0(ψ) =
122
+ n0
123
+ π3/2v3
124
+ 0
125
+ e−s2
126
+ (5)
127
+ in the (ψ, θ, w = mv2/2, µ = mv2
128
+ ⊥/2B) coordinates, where ∂∥ = b·∇ = (B/B)·∇, v∥ = b·v,
129
+ Ω = qB/m, v0 =
130
+
131
+ 2T0/m, and s = v/v0. Note that flux surfaces can be labeled by the
132
+ lowest-order density n0, temperature T0, or pressure p0 = n0T0. The collision operator is a
133
+ Landau operator linearized with respect to a static Maxwellian distribution function f0,
134
+ C(f1) = C(f1, f0) + C(f0, f1).
135
+ (6)
136
+ One difficulty of solving the kinetic equation (3) is in treating the collision operator, an
137
+ integro-differential operator in velocity space. In standard analytical neoclassical theory,
138
+ the Landau operator is often approximated as the Lorentz pitch-angle scattering operator
139
+ with an additional momentum restoring term for an analytical treatment. In the moment
140
+ approach, the linearized collision operator can be analytically calculated and explicitly rep-
141
+ resented by a matrix of collision coefficients. In this work, we solve a system of parallel
142
+ moment equations introduced in Ref. [25, 26]. The moment equations can also be derived
143
+ from the drift kinetic equation as shown below.
144
+ In the moment method of this work, a gyro-averaged distribution function f1 is expanded
145
+ as
146
+ f1 = f0
147
+
148
+ l,k
149
+ ˆP lk ˆ
150
+ Mlk
151
+ (7)
152
+ 4
153
+
154
+ with orthonormal polynomials
155
+ ˆP lk =
156
+ 1
157
+ √¯σlk
158
+ P lk =
159
+ 1
160
+ √¯σlk
161
+ slPl(v∥/v)L(l+1/2)
162
+ k
163
+ (s2),
164
+ where Pl is a Legendre polynomial, L(l+1/2)
165
+ k
166
+ is an associated Laguerre (Sonine) polynomial,
167
+ and the normalization constants are
168
+ ¯σlk = ¯σlλlk, ¯σl =
169
+ 1
170
+ 2l + 1, λlk = (l + k + 1/2)!
171
+ k!(1/2)!
172
+ .
173
+ (8)
174
+ Several lowest order moments of f1 are:
175
+ ˆ
176
+ M00 = n1/n0 (density),
177
+ ˆ
178
+ M01 = −
179
+
180
+ 3/2T1/T0
181
+ (temperature),
182
+ ˆ
183
+ M10 =
184
+
185
+ 2u/v0 (parallel flow velocity u = V1∥),
186
+ ˆ
187
+ M11 = −
188
+
189
+ 4/5h∥/v0p0
190
+ (parallel heat flux density), and ˆ
191
+ M20 =
192
+
193
+ 3/4π∥/p0 (parallel viscosity), where p0 = n0T0.
194
+ The neoclassical thermodynamic drive term can also be expanded as
195
+ v∥∂∥F =v0∂∥ ln B
196
+ B/B0
197
+ f0
198
+ ��
199
+ 2 ˆP 00 − 2
200
+
201
+ 2
202
+ 3
203
+ ˆP 01 + 1
204
+
205
+ 3
206
+ ˆP 20
207
+
208
+ ˆp0,ψ
209
+ +
210
+
211
+ −5
212
+
213
+ 2
214
+ 3
215
+ ˆP 01 + 2
216
+
217
+ 10
218
+ 3
219
+ ˆP 02 + 1
220
+
221
+ 3
222
+ ˆP 20 −
223
+
224
+ 7
225
+ 6
226
+ ˆP 21
227
+
228
+ ˆT0,ψ
229
+
230
+ ,
231
+ (9)
232
+ where
233
+ ˆp0,ψ =
234
+ I
235
+ qv0B0n0
236
+ dp0
237
+ dψ ,
238
+ (10)
239
+ ˆT0,ψ =
240
+ I
241
+ qv0B0
242
+ dT0
243
+ dψ .
244
+ (11)
245
+ Taking the ˆP jp moment of Eq. (3) yields
246
+
247
+ lk
248
+ ψjp,lk∂∥ ˆ
249
+ Mlk + ψjp,lk
250
+ B
251
+ (∂∥ ln B) ˆ
252
+ Mlk = 1
253
+ λC
254
+ cjp,lk ˆ
255
+ Mlk + ∂∥ ln B
256
+ B/B0
257
+
258
+ gjp
259
+ p ˆp0,ψ + gjp
260
+ T ˆT0,ψ
261
+
262
+ , (12)
263
+ where λC = v0τii (the ion mean free path). Note that eliminating (j, p) = (0, 0), (0, 1), and
264
+ (1,0) moment equations from Eq. (12) yields a set of closure moment equations, similar to
265
+ the closure moment equations in slab geometry Ref. [31]. The constant coefficients ψjp,lk,
266
+ ψjp,lk
267
+ B
268
+ , and cjp,lk are defined by
269
+
270
+ d3vv∥ ˆP jp ˆP lkf0 = n0v0ψjp,lk,
271
+ (13)
272
+
273
+ d3vv∥ ˆP jp(∂∥ ˆP lk)f0 = n0v0(∂∥ ln B)ψjp,lk
274
+ B
275
+ ,
276
+ (14)
277
+ 5
278
+
279
+
280
+ d3v ˆP jpC(f0 ˆP lk) = n0
281
+ τii
282
+ cjp,lk = n0
283
+ τii
284
+ δjlcj
285
+ pk.
286
+ (15)
287
+ The nonvanishing gjp in Eq. (9) are
288
+ g0,0
289
+ p
290
+ = 2, g0,1
291
+ p
292
+ = −2
293
+
294
+ 2
295
+ 3, g2,0
296
+ p
297
+ = 1
298
+
299
+ 3
300
+ (16)
301
+ and
302
+ g0,1
303
+ T
304
+ = −5
305
+
306
+ 2
307
+ 3, g0,2
308
+ T
309
+ = 2
310
+
311
+ 10
312
+ 3 , g2,0
313
+ T
314
+ = 1
315
+
316
+ 3
317
+ , g2,1
318
+ T
319
+ = −
320
+
321
+ 7
322
+ 6.
323
+ (17)
324
+ Noting that ψjp,lk = δj,j±1ψj±
325
+ lk , ψjp,j+1,k
326
+ B
327
+ = −(j + 2)ψjp,j+1,k/2, and ψjp,j−1,k
328
+ B
329
+ = (j −
330
+ 1)ψjp,j−1,k/2 (see Ref. [25]) and defining
331
+ ∂j+
332
+
333
+ = ∂∥ − j + 2
334
+ 2
335
+ ∂∥ ln B,
336
+ ∂j−
337
+
338
+ = ∂∥ + j − 1
339
+ 2
340
+ ∂∥ ln B,
341
+ (18)
342
+ we can combine the ψ and ψB terms to rewrite Eq. (12) as
343
+
344
+ k
345
+ ψj−
346
+ pk ∂j−
347
+
348
+ ˆ
349
+ Mj−1,k +
350
+
351
+ k
352
+ ψj+
353
+ pk ∂j+
354
+
355
+ ˆ
356
+ Mj+1,k = 1
357
+ λC
358
+
359
+ k
360
+ cj
361
+ pk ˆ
362
+ Mjk + ∂∥ ln B
363
+ B/B0
364
+
365
+ gjp
366
+ p ˆpψ + gjp
367
+ T ˆTψ
368
+
369
+ . (19)
370
+ Although Eq. (12) for j = 0, 1, · · · , L − 1 and k = 0, 1, · · · , K − 1 is a truncated system,
371
+ there exist L and K such that the solution does not change when increasing the number of
372
+ moments higher than L and K. In other words, there exists a convergent solution of Eq. (12)
373
+ which can be considered as a solution of Eq. (3). Therefore Eq. (12) for the truncated set
374
+ of moments is quantitatively equivalent to Eq. (3).
375
+ III.
376
+ FOURIER METHOD OF SOLVING MOMENT EQUATIONS
377
+ In the axisymmetric magnetic field (1), physical quantities on a flux surface depends on θ
378
+ only. Using ∂∥ = (B · ∇θ/B)∂/∂θ = (Bθ/B)∂θ and dividing Eq. (12) by Bθ/B yields a
379
+ system of ordinary differential equations
380
+
381
+ lk
382
+ ψjp,lk∂θ ˆ
383
+ Mlk + ψjp,lk
384
+ B
385
+ (∂θ ln B) ˆ
386
+ Mlk =
387
+ B
388
+ BθλC
389
+ cjp,lk ˆ
390
+ Mlk + ∂θ ln B
391
+ B/B0
392
+
393
+ gjp
394
+ p ˆpψ + gjp
395
+ t ˆTψ
396
+
397
+ . (20)
398
+ 6
399
+
400
+ Since the coefficient ∂θ ln B is θ-dependent, the eigenvector method used in deriving integral
401
+ closures [30] does not work. Instead, we adopt the Fourier method to convert the system of
402
+ differential equations to a system of algebraic equations. Note that Eq. (20) forms a linear
403
+ system of ordinary differential equations for the parallel moments
404
+ ˆ
405
+ Mlk and the Fourier
406
+ expansion of coefficients, moments, and drive terms will convert the differential system to a
407
+ linear algebraic system.
408
+ In the Fourier method, all physical quantities are expanded in Fourier series. For A = ˆ
409
+ Mlk(θ)
410
+ and ∂θ ln B/(B/B0),
411
+ A(θ) = A(0)+A(1−) sin θ+A(1+) cos θ+A(2−) sin 2θ+A(2+) cos 2θ+· · · =
412
+
413
+ m
414
+ A(m)ϕ(m), (21)
415
+ with Fourier modes
416
+ ϕ(0) = 1, ϕ(1) = ϕ(1−) = sin θ, ϕ(2) = ϕ(1+) = cos θ, · · · ,
417
+ ϕ(2n−1) = ϕ(n−) = sin nθ, ϕ(2n) = ϕ(n+) = cos nθ, · · ·
418
+ (22)
419
+ where the Fourier index is denoted in the parentheses. The Fourier coefficient for A(θ) can
420
+ be obtained by
421
+ A(m) =
422
+ 1
423
+ σ(m)
424
+
425
+ dθϕ(m)A(θ),
426
+ (23)
427
+ where σ(0) = 2π and σ(m) = π for m > 0. The derivative ∂θ and the θ-dependent coefficients
428
+ in Eq. (20) become matrices in Fourier representation. For O = ∂θ, ∂θ ln B, and B/BθλC,
429
+ the Fourier matrix elements O(i,j) are obtained by
430
+ O(i,j) =
431
+ 1
432
+ σ(i)
433
+
434
+ dθϕ(i)Oϕ(j),
435
+ (24)
436
+ and the Fourier representation of O ˆ
437
+ Mlk becomes
438
+
439
+ O ˆ
440
+ Mlk�
441
+ (i) =
442
+ 1
443
+ σ(i)
444
+
445
+ dθϕ(i)O
446
+
447
+ j
448
+ ˆ
449
+ Mlk
450
+ (j)ϕ(j) =
451
+
452
+ j
453
+ O(i,j) ˆ
454
+ Mlk
455
+ (j).
456
+ (25)
457
+ Then the (m)th Fourier component of Eq. (20) becomes a system of algebraic equations
458
+ ψjp,lk (∂θ)(m,n) ˆ
459
+ Mlk
460
+ (n) + ψjp,lk
461
+ B
462
+ (∂θ ln B)(m,n) ˆ
463
+ Mlk
464
+ (n) =
465
+ cjp,lk
466
+
467
+ B
468
+ BθλC
469
+
470
+ (m,n)
471
+ ˆ
472
+ Mlk
473
+ (n) +
474
+ �∂θ ln B
475
+ B/B0
476
+
477
+ (m)
478
+
479
+ gjp
480
+ p ˆp0,ψ + gjp
481
+ T ˆT0,ψ
482
+
483
+ ,
484
+ (26)
485
+ 7
486
+
487
+ -0.05
488
+ 0
489
+ 0.05
490
+ -0.05
491
+ 0
492
+ 0.05
493
+ -3
494
+ -2
495
+ -1
496
+ 0
497
+ 1
498
+ 2
499
+ 3
500
+ 0.3
501
+ 0.4
502
+ 0.5
503
+ 0.6
504
+ 0.7
505
+ Figure 1. First-order density, temperature, and parallel flow velocity for ǫ = 0.1, K0 = 100, nF = 4,
506
+ and for LK = 10×20 (red, dotted), 20×40 (green, dash-dotted), 40×80 (blue solid), and 80×160
507
+ (cyan, dashed). The ratios n1/n0, T1/T0, and u/v0 are plotted in units of ˆT0,ψ.
508
+ where summation over l, k, and n is implied. The system of algebraic equations can be
509
+ written in matrix form,
510
+ �ψ∂θ�
511
+
512
+ ˆ
513
+ M
514
+
515
+ +�ψB∂θ ln B�
516
+
517
+ ˆ
518
+ M
519
+
520
+ =
521
+
522
+ cB/BθλC
523
+ � �
524
+ ˆ
525
+ M
526
+
527
+ +
528
+
529
+ (gpˆp0 + gT ˆT0)(B0/B)(∂θ ln B)
530
+
531
+ , (27)
532
+ where �ψ∂θ� = [ψ] ⊗ (∂θ)F, �ψB∂θ ln B� = [ψB] ⊗ (∂θ ln B)F, and
533
+
534
+ cB/BθλC
535
+
536
+ = [c] ⊗
537
+
538
+ B/BθλC
539
+
540
+ F with ⊗ denoting a tensor product of two matrices. The ith row and jth column
541
+ of a Fourier matrix (O)F is O(i,j), and the dimension of the linear system is N = LKF =
542
+ (the number of Legendre polynomials)(the number of Laguerre polynomials)(the number of
543
+ Fourier modes).
544
+ 8
545
+
546
+ -0.05
547
+ 0
548
+ 0.05
549
+ -0.05
550
+ 0
551
+ 0.05
552
+ -3
553
+ -2
554
+ -1
555
+ 0
556
+ 1
557
+ 2
558
+ 3
559
+ 0.2
560
+ 0.4
561
+ 0.6
562
+ 0.8
563
+ Figure 2. First-order density, temperature, and parallel flow velocity for ǫ = 0.1, K0 = 100, LK =
564
+ 40 × 80, and for nF = 1 (red, dotted), 2 (green, dash-dotted), 4 (blue solid), and 7 (cyan, dashed).
565
+ The ratios n1/n0, T1/T0, and u/v0 are plotted in units of ˆT0,ψ.
566
+ The solution
567
+
568
+ ˆ
569
+ M
570
+
571
+ can be obtained by inverting or singular-value-decomposing the matrix,
572
+
573
+ ˆ
574
+ M
575
+
576
+ =
577
+
578
+ �ψ∂θ� + �ψB∂θ ln B� −
579
+
580
+ cB/BθλC
581
+ ��−1
582
+ ns
583
+
584
+ (gpˆp0,ψ + gT ˆT0,ψ)(B0/B)(∂θ ln B)
585
+
586
+ , (28)
587
+ where the subscript ‘ns’ denotes the nonsingular part of the matrix. It is found that elimi-
588
+ nating n(0) and T(0) components makes the matrix nonsingular [see also remarks in relation
589
+ to Eqs. (48) and (50)]. Then the Fourier components of the first order fluid quantities can
590
+ 9
591
+
592
+ -0.05
593
+ 0
594
+ 0.05
595
+ -0.05
596
+ 0
597
+ 0.05
598
+ -3
599
+ -2
600
+ -1
601
+ 0
602
+ 1
603
+ 2
604
+ 3
605
+ 0.3
606
+ 0.4
607
+ 0.5
608
+ 0.6
609
+ 0.7
610
+ Figure 3. First-order density, temperature, and parallel flow velocity for ǫ = 0.3, K0 = 100, nF = 4,
611
+ and for LK = 10×20 (red, dotted), 20×40 (green, dash-dotted), 40×80 (blue solid), and 80×160
612
+ (cyan, dashed).
613
+ be read from the solution
614
+
615
+ ˆ
616
+ M
617
+
618
+ ,
619
+ N = ˆp0,ψNp0 + ˆT0,ψNT0,
620
+ T = ˆp0,ψTp0 + ˆT0,ψTT0,
621
+ (29)
622
+ U = ˆp0,ψUp0 + ˆT0,ψUT0,
623
+ where N = (ˆn)F = (n1/n0)F, T = ( ˆT)F = (T1/T0)F, U = (ˆu)F = (u/v0)F, Nα, Tβ, and Uβ
624
+ (β = p0, T0) are column vectors of Fourier components. With the Fourier components, the
625
+ first-order fluid quantities can be constructed from Eq. (21). For example, the density due
626
+ to ˆp0,ψ and ˆT0,ψ, respectively, are ˆn = �
627
+ m Np0
628
+ (m)ϕ(m)ˆp0,ψ and ˆn = �
629
+ m NT0
630
+ (m)ϕ(m) ˆT0,ψ, where
631
+
632
+ (m) is the (m)th Fourier component of the column vector Nβ.
633
+ 10
634
+
635
+ -0.05
636
+ 0
637
+ 0.05
638
+ -0.05
639
+ 0
640
+ 0.05
641
+ -3
642
+ -2
643
+ -1
644
+ 0
645
+ 1
646
+ 2
647
+ 3
648
+ 0.2
649
+ 0.4
650
+ 0.6
651
+ 0.8
652
+ Figure 4. First-order density, temperature, and parallel flow velocity for ǫ = 0.3, K0 = 100, N =
653
+ LK = 40 × 80, and for nF = 1 (red, dotted), 5 (green, dash-dotted), 9 (blue solid), and 13 (cyan,
654
+ dashed). The ratios n1/n0, T1/T0, and u/v0 are plotted in units of ˆT0,ψ.
655
+ The inverse collisionality of the system is characterized by a Knudsen number, the ratio of
656
+ the mean free path to the gradient scale length. Defining a basic Knudsen number for a
657
+ tokamak K0 = B/BθλC, the effective Knudsen number would be roughly K0∂θ ln B ∼ mK0
658
+ where m is the typical Fourier mode of the system.
659
+ Although the solution (28) can be
660
+ obtained for an arbitrary axisymmetric magnetic field, circular magnetic fields [see Eq. (2)]
661
+ are considered in this work. For the circular magnetic field (2), the basic Knudsen number
662
+ is given by K0 ∼ λC/qR0 where q is the safety factor and the Fourier mode m is determined
663
+ by the inverse aspect ratio ǫ = r/R0. In general, the effective Knudsen number increases as
664
+ λC and ǫ increase.
665
+ 11
666
+
667
+ Figure 5. The first-order distribution function f1 at θ = −π/3 in the s⊥-s∥ plane for ǫ = 0.3
668
+ and K0 = 100. The white dashed lines indicate the passing/trapped boundary. The ratio f1/f0 is
669
+ plotted in units of ˆp0,ψ in (a), (c), and (d) and in units of ˆT0,ψ in (b).
670
+ Figure 6. The first-order distribution function f1 at θ = π/3 on the s⊥-s∥ plane for ǫ = 0.3 and
671
+ K0 = 100. The white dashed lines indicate the passing/trapped boundary. The ratio f1/f0 is plotted
672
+ in units of ˆp0,ψ in (a), (c), and (d) and in units of ˆT0,ψ in (b).
673
+ 12
674
+
675
+ 0.2
676
+ 0.1
677
+ 0
678
+ -0.1
679
+ -0.2
680
+ 0.1
681
+ 0.05
682
+ 0
683
+ -0.05
684
+ -0.13
685
+ 3
686
+ 0.15
687
+ 2.5
688
+ 2.5
689
+ 0.1
690
+ 2
691
+ 0.05
692
+ 2
693
+ 1.5
694
+ 0
695
+ 1.5
696
+ 1
697
+ 0.05
698
+ 1
699
+ -0.1
700
+ 0.5
701
+ 0.5
702
+ -0.15
703
+ 0
704
+ 0
705
+ -3
706
+ -2
707
+ -1
708
+ 0
709
+ 1
710
+ 2
711
+ 3
712
+ -3
713
+ -2
714
+ -1
715
+ 0
716
+ 1
717
+ 2
718
+ 3
719
+ s1
720
+ (c) fi/ fo due to dpo /db and dTo /dab for To,b = po,b
721
+ (d) fi/ fo due to dpo /db and dTo/db for To, = 0.3po,b
722
+ 3
723
+ 3
724
+ 2.5
725
+ 0.05
726
+ 2.5
727
+ 2
728
+ 2
729
+ 0
730
+ 1.5
731
+ 1.5
732
+ 1
733
+ 1
734
+ -0.05
735
+ 0.5
736
+ 0.5
737
+ 0
738
+ -3
739
+ -2
740
+ -1
741
+ 0
742
+ 1
743
+ 2
744
+ 3
745
+ -3
746
+ -2
747
+ -1
748
+ 0
749
+ 1
750
+ 2
751
+ 3
752
+ s1(a) fi/ fo due to dpo/db
753
+ (b) fi/ fo due to dTo/db0.2
754
+ 0.1
755
+ 0
756
+ -0.1
757
+ -0.2
758
+ 0.1
759
+ 0.05
760
+ 0
761
+ -0.05
762
+ -0.13
763
+ 3
764
+ 0.15
765
+ 2.5
766
+ 2.5
767
+ 0.1
768
+ 2
769
+ 0.05
770
+ 2
771
+ 1.5
772
+ 0
773
+ 1.5
774
+ 1
775
+ 0.05
776
+ 1
777
+ -0.1
778
+ 0.5
779
+ 0.5
780
+ -0.15
781
+ 0
782
+ 0
783
+ -3
784
+ -2
785
+ -1
786
+ 0
787
+ 1
788
+ 2
789
+ 3
790
+ -3
791
+ -2
792
+ -1
793
+ 0
794
+ 1
795
+ 2
796
+ 3
797
+ s1
798
+ (c) fi/fo due to dpo /db and dTo /dab for To,b = po,b
799
+ (d) fi/ fo due to dpo /db and dTo/db for To, = 0.3po,b
800
+ 3
801
+ 3
802
+ 2.5
803
+ 2.5
804
+ 0.05
805
+ 2
806
+ 2
807
+ 1.5
808
+ 1.5
809
+ 0
810
+ 1
811
+ 1
812
+ 0.5
813
+ -0.05
814
+ 0.5
815
+ 0
816
+ 0
817
+ -3
818
+ -2
819
+ -1
820
+ 0
821
+ 1
822
+ 2
823
+ 3
824
+ -3
825
+ -2
826
+ -1
827
+ 0
828
+ 1
829
+ 2
830
+ 3
831
+ s1Figure 7. The first-order distribution function f1 at s = 0.7 on the θ-µ plane for ǫ = 0.3 and
832
+ K0 = 100.
833
+ The white dashed line indicates the passing/trapped boundary.
834
+ The ratio f1/f0 is
835
+ plotted in units of ˆp0,ψ in (a), (c), and (d) and in units of ˆT0,ψ in (b).
836
+ The solution responding to the radial pressure gradient dp0/dψ shows that Np0 = 0, Tp0 = 0,
837
+ and Up0 = −(1, 0, ǫ, · · · )T = −(B0/B)F. This means that the ˆp0,ψ drive contributes only to
838
+ the flow velocity as ˆu = −ˆp0,ψB0/B + γuB/B0, consistent with the continuity equation
839
+ ∇ · (n0V1) = 0. Here γu is an integration constant that can be determined by temperature
840
+ and flow velocity equations. It turns out that γu is proportional to ˆT0,ψ as verified from the
841
+ solution and as discussed in Sec. IV.
842
+ For the solution responding to the radial temperature gradient dT0/dψ, the density, tem-
843
+ perature, and parallel flow velocity are shown in Fig. 1 in the case of ǫ = 0.1, K0 = 100, and
844
+ nF = 4 (F = 2nF + 1 = 9). A convergence study increases the number of moments to show
845
+ that the LK = 40 × 80 moment solution converges and can be considered practically exact.
846
+ Note that the polynomials ˆP lk in Eq. (7) form a complete set. The necessary number of
847
+ moments for convergence increases as K0 increases. A convergence study that increases the
848
+ 13
849
+
850
+ 0.2
851
+ 0.15
852
+ 0.1
853
+ 0.05
854
+ 0
855
+ 0.01
856
+ 0
857
+ -0.01
858
+ 0.02
859
+ -0.03
860
+ -0.04
861
+ -0.05
862
+ -0.06bm /odm on ann of/lf (e)
863
+ @n /on o ann of/lc (a)
864
+ 0
865
+ 0.6
866
+ 0.6
867
+ -0.02
868
+ To/ Bol
869
+ 0.5
870
+ 0.5
871
+ 0.04
872
+ 0.4
873
+ 0.4
874
+ [units of
875
+ -0.06
876
+ JOS
877
+ 0.3
878
+ [units
879
+ -0.08
880
+ 0.3
881
+ 0.2
882
+ 0.1
883
+ 0.2
884
+ 0.1
885
+ -0.12
886
+ 0.1
887
+ 0
888
+ 0.14
889
+ 0
890
+ -3
891
+ -2
892
+ -1
893
+ 0
894
+ 1
895
+ 2
896
+ 3
897
+ -3
898
+ -2
899
+ -1
900
+ 0
901
+ 1
902
+ 2
903
+ 3
904
+ 0
905
+ 0
906
+ (c) fi/ fo due to dpo /dab and dTo/db for To,b = po,b
907
+ (d) fi/ fo due to dpo/dab and dTo/db for To, = 0.3po,b
908
+ 0.1
909
+ 0.6
910
+ 0.6
911
+ 0.08
912
+ 0.5
913
+ 0.5
914
+ 0.06
915
+ 0.4
916
+ 0.4
917
+ 0.04
918
+ JO
919
+ JO
920
+ units
921
+ [units
922
+ 0.3
923
+ 0.02
924
+ 0.3
925
+ 0.2
926
+ 0
927
+ 0.2
928
+ 0.1
929
+ 0.02
930
+ 0.1
931
+ -0.04
932
+ 0
933
+ 0
934
+ -3
935
+ -2
936
+ -1
937
+ 0
938
+ 1
939
+ 2
940
+ 3
941
+ -3
942
+ -2
943
+ -1
944
+ 0
945
+ 1
946
+ 2
947
+ 3
948
+ 0
949
+ 0number of Fourier modes from 1 to 7 (see Figure 2) shows that the nF = 4 mode solution
950
+ converges and may be considered to be very accurate. The necessary number of Fourier
951
+ modes for convergence increases as ǫ increases.
952
+ Figures 3 and 4 show the density, temperature, and parallel flow velocity for ǫ = 0.3, a
953
+ larger inverse aspect ratio, and K0 = 100. The LK = 40 × 80 moment solution, while not as
954
+ accurate as in the ǫ = 0.1 case, is still very accurate for practical use, and the LK = 80×160
955
+ solution is expected to be accurate. This is because ǫ = 0.3 requires more Fourier modes
956
+ than ǫ = 0.1 for an accurate expansion of the magnetic field. Higher Fourier modes make the
957
+ effective Knudsen number larger. The necessary number of Fourier modes for convergence
958
+ is nF = 13.
959
+ The moment solution can be used to construct the distribution function that is a solution
960
+ of the kinetic equation (3). Since all fluid quantities relevant to physical observables involve
961
+ several lowest order of moments, the reconstruction of the distribution function from the
962
+ moments may be redundant. Nevertheless, the distribution function itself is important for
963
+ understanding the kinetic behavior of a plasma. In the moment expansion, the high order
964
+ moments near truncation of the moment expansion could be inaccurate and may adversely
965
+ affect the convergence of the distribution function. However we find that those moments
966
+ near truncation are several orders smaller than the fluid moments, making the truncation
967
+ errors ignorable once the convergence is achieved. Figures 5 and 6 show the distribution
968
+ functions constructed from the moment solution on the s⊥-s∥ plane at θ = −π/3 and π/3,
969
+ respectively. Figure 7 shows the distribution function at s = 0.7 on the θ-µ plane.
970
+ IV.
971
+ FLUID EQUATIONS AND CLOSURES
972
+ In neoclassical transport theory, one solves Eq. (3) to express f1 in terms of f0 (or F) and take
973
+ moments of the solution f1 to express u in terms of dp0/dψ and dT0/dψ. These expressions
974
+ can be directly obtained by solving Eq. (12). In this section we derive closure relations
975
+ that can be used for closing and advancing (nonlinear) fluid equations for density, flow
976
+ velocity, and temperature. They can also be incorporated into linearized fluid equations to
977
+ reproduce the expressions of n1, T1 and u that are obtained in Sec. III. Although the closures
978
+ 14
979
+
980
+ are represented in the Fourier basis, the formalism developed here can be applied to any
981
+ basis such as a finite element basis or finite difference basis in numerical methods.
982
+ The linearized fluid equations for n1, u, and T1 can be obtained from the original fluid
983
+ equations with n = n0 + n1, T = T0 + T1, V = ub + b × ∇p0/n0qB, h = h∥b + 5p0b ×
984
+ ∇T0/2qB, and π = (3π∥/2)(bb − b2I/3) where b = B/B.
985
+ They are equivalent to the
986
+ {P 00, mv0P 10, −T0P 01} moments of Eq. (3) and can be read from Eq. (20) for (j, p) = (0, 0),
987
+ (1, 0), and (0, 1):
988
+ ∂0+
989
+ θ ˆu = 2ˆp0,ψ
990
+ ∂θ ln B
991
+ B/B0
992
+ ,
993
+ (30)
994
+ ∂0+
995
+ θ ˆu + ∂0+
996
+ θ ˆh = (2ˆp0,ψ + 5 ˆT0,ψ)∂θ ln B
997
+ B/B0
998
+ ,
999
+ (31)
1000
+ ∂1−
1001
+ θ ˆn + ∂1−
1002
+ θ
1003
+ ˆT + ∂1+
1004
+ θ ˆπ = 0,
1005
+ (32)
1006
+ where ˆu = u/v0, ˆh = h∥/v0p0, ˆπ = π∥/p0, and ∂l±
1007
+ θ
1008
+ is defined by Eq. (18) with ∂∥ replaced by
1009
+ ∂θ. For this fluid system to be closed, closure quantities ˆh and ˆπ should relate to first-order
1010
+ (ˆn, ˆu, and ˆT) and equilibrium (ˆp0,ψ and ˆT0,ψ) fluid quantities.
1011
+ In order to obtain the closure relations, the rows corresponding to fluid equations need to
1012
+ be removed from Eq. (20). Then the corresponding columns appear as drives (sources) [gθ]
1013
+ in the system:
1014
+ [ψ′]
1015
+
1016
+ ∂θ ˆ
1017
+ M′�
1018
+ +[ψ′
1019
+ B] (∂θ ln B)
1020
+
1021
+ ˆ
1022
+ M′�
1023
+ =
1024
+ B
1025
+ BθλC
1026
+ [c′]
1027
+
1028
+ ˆ
1029
+ M′�
1030
+ +[gθ]+ ∂θ ln B
1031
+ B/B0
1032
+ ��
1033
+ g′
1034
+ p
1035
+
1036
+ ˆp0,ψ + [g′
1037
+ T] ˆT0,ψ
1038
+
1039
+ ,
1040
+ (33)
1041
+ where ′ denotes the removal of fluid columns and rows. For example,
1042
+
1043
+ ˆ
1044
+ M′�
1045
+ is a column vec-
1046
+ tor ( ˆ
1047
+ M0,2, · · · ˆ
1048
+ M0,K+1, ˆ
1049
+ M1,1, · · · , ˆ
1050
+ M1,K, ˆ
1051
+ M2,0, · · · , ˆ
1052
+ M2,K−1, · · · , ˆ
1053
+ ML−1,0, · · · , ˆ
1054
+ ML−1,K−1). The
1055
+ nonvanishing elements of [gθ] are
1056
+ g1,1
1057
+ θ
1058
+ =
1059
+
1060
+ 5
1061
+ 2 ∂θ ˆT,
1062
+ (34)
1063
+ g2,0
1064
+ θ
1065
+ = −
1066
+
1067
+ 3
1068
+ 2 Wθ, Wθ = 4
1069
+ 3∂2−
1070
+ ∥ ˆu.
1071
+ (35)
1072
+ From Fourier representation of Eq. (33),
1073
+ �ψ′∂θ�
1074
+
1075
+ ˆ
1076
+ M′�
1077
+ +�ψ′
1078
+ B∂θ ln B�
1079
+
1080
+ ˆ
1081
+ M′�
1082
+ =
1083
+
1084
+ cB/BθλC
1085
+ � �
1086
+ ˆ
1087
+ M′�
1088
+ +�gθ�+
1089
+
1090
+ (g′
1091
+ pˆp0 + g′
1092
+ T ˆT0)(B0/B)(∂θ ln B)
1093
+
1094
+ ,
1095
+ 15
1096
+
1097
+ (36)
1098
+ the solution can be obtained,
1099
+
1100
+ ˆ
1101
+ M′�
1102
+ =
1103
+
1104
+ �ψ′∂θ� + �ψ′
1105
+ B∂θ ln B� −
1106
+
1107
+ cB/BθλC
1108
+ ��−1 �
1109
+ gθ + (gpˆp0,ψ + gT ˆT0,ψ)(B0/B)(∂∥ ln B)
1110
+
1111
+ .
1112
+ (37)
1113
+ Fourier components of closures ˆh = −
1114
+
1115
+ 5 ˆ
1116
+ M1,1/2 and ˆπ = 2 ˆ
1117
+ M2,0/
1118
+
1119
+ 3 can be read from the
1120
+ solution and expressed in terms of ˆp0,ψ, and ˆT0,ψ, ˆT, and ˆu:
1121
+ H = ˆp0,ψHp0 + ˆT0,ψHT0 + KhhDT + KhπW,
1122
+ (38)
1123
+ S = ˆp0,ψSp0 + ˆT0,ψST0 + KπhDT + KππW,
1124
+ (39)
1125
+ where H = (ˆh)F, S = (ˆπ)F, and W = (Wθ)F = (4/3)D2−U ≡ DWU, Hβ, and Sβ (β = p0, T0)
1126
+ are column vectors, and D = (∂θ)F , Dl± = (∂l±
1127
+ θ )F, and Kαβ (α, β = h, π) are matrices. Here a
1128
+ column vector Hβ and Sβ connects the closures h∥ and π∥ to a radial gradient of zeroth-order
1129
+ pressure (β = p0) or temperature (β = T0), and a matrix Kαβ connects closures α = h and
1130
+ π to a parallel gradient of first-order temperature (β = h) or parallel flow velocity (β = π).
1131
+ The closures in the position space can be constructed from the solution vector, for example,
1132
+ ˆh(θ) = �
1133
+ i ϕ(i){Hp0
1134
+ (i)ˆp0,ψ +HT0
1135
+ (i) ˆT0,ψ +�
1136
+ j[Khh
1137
+ (i,j)(DT)(j) +Khπ
1138
+ (i,j)W(j)]ϕ(j)}, where Hβ
1139
+ (i) is the (i)th
1140
+ Fourier component of the column vector Hβ and Kαβ
1141
+ (i,j) is the (i)th row and (j)th column
1142
+ of the matrix Kαβ. Figures 8 and 9, respectively, show the parallel heat flux density and
1143
+ viscosity due to ˆp0,ψ, ˆT0,ψ, and several Fourier modes of ∂θ ˆT and Wθ. As the Fourier mode
1144
+ of the thermodynamic drives increases, the contribution to the closure quantity decreases.
1145
+ By combining closure relations with the time-independent, linear fluid equations, we can
1146
+ reproduce the fluid variables of Sec. III. Using (B0/B)∂θ ln B = −∂θ(B0/B) and eliminating
1147
+ Eq. (30) from Eq. (31), we write the Fourier representation of Eqs. (30)-(32),
1148
+ D0+U = −2ˆp0,ψDB−1,
1149
+ (40)
1150
+ D0+H = −5 ˆT0,ψDB−1,
1151
+ (41)
1152
+ DN + DT + D1+S = 0,
1153
+ (42)
1154
+ 16
1155
+
1156
+ -2
1157
+ -1.5
1158
+ -1
1159
+ -0.5
1160
+ 0
1161
+ -4
1162
+ -2
1163
+ 0
1164
+ 2
1165
+ -0.2
1166
+ -0.1
1167
+ 0
1168
+ 0.1
1169
+ 0.2
1170
+ -2
1171
+ 0
1172
+ 2
1173
+ 4
1174
+ -0.5
1175
+ 0
1176
+ 0.5
1177
+ 1
1178
+ Figure 8.
1179
+ Parallel heat flux density due to (a) dp0/dψ and dT0/dψ, (b) (∂θ ˆT)(m+) cos mθ, (c)
1180
+ (∂θ ˆT)(m−) sin mθ, (d) (Wθ)(m+) cos mθ, and (e) (Wθ)(m−) sin mθ.
1181
+ The dimensionless heat flux,
1182
+ h∥/v0p0, is plotted in units of (a) ˆp0,ψ and ˆT0,ψ, (b) (∂θ ˆT)(m+), (c) (∂θ ˆT)(m−), (d) (Wθ)(m+), and
1183
+ (e) (Wθ)(m−).
1184
+ 17
1185
+
1186
+ -0.02
1187
+ -0.01
1188
+ 0
1189
+ 0.01
1190
+ 0.02
1191
+ -0.1
1192
+ -0.05
1193
+ 0
1194
+ 0.05
1195
+ 0.1
1196
+ -1
1197
+ 0
1198
+ 1
1199
+ 2
1200
+ -80
1201
+ -60
1202
+ -40
1203
+ -20
1204
+ 0
1205
+ -0.2
1206
+ 0
1207
+ 0.2
1208
+ Figure 9.
1209
+ Parallel viscosity due to (a) dp0/dψ and dT0/dψ,
1210
+ (b) (∂θ ˆT)(m+) cos mθ,
1211
+ (c)
1212
+ (∂θ ˆT)(m−) sin mθ, (d) (Wθ)(m+) cos mθ, and (e) (Wθ)(m−) sin mθ. The dimensionless viscosity π∥/p0
1213
+ is plotted in units of (a) ˆp0,ψ and ˆT0,ψ, (b) (∂θ ˆT)(m+), (c) (∂θ ˆT)(m−), (d) (Wθ)(m+), and (e)
1214
+ (Wθ)(m−).
1215
+ 18
1216
+
1217
+ where B−1 = (B0/B)F. Then we combine with closures (38) and (39) to write
1218
+ L
1219
+
1220
+
1221
+
1222
+
1223
+
1224
+ N
1225
+ T
1226
+ U
1227
+
1228
+
1229
+
1230
+
1231
+  = Rp0 ˆp0,ψ + RT0 ˆT0,ψ.
1232
+ (43)
1233
+ where
1234
+ L =
1235
+
1236
+
1237
+
1238
+
1239
+
1240
+ 0
1241
+ 0
1242
+ D0+
1243
+ 0
1244
+ D0+KhhD
1245
+ D0+KhπDW
1246
+ D D + D1+KπhD D1+KππDW
1247
+
1248
+
1249
+
1250
+
1251
+  ,
1252
+ (44)
1253
+ Rp0 = −
1254
+
1255
+
1256
+
1257
+
1258
+
1259
+ 2DB−1
1260
+ D0+Hp0
1261
+ D1+Sp0
1262
+
1263
+
1264
+
1265
+
1266
+  , RT0 = −
1267
+
1268
+
1269
+
1270
+
1271
+
1272
+ 0
1273
+ 5DB−1
1274
+ D1+ST0
1275
+
1276
+
1277
+
1278
+
1279
+  .
1280
+ (45)
1281
+ Using the singular value decomposition, we can invert the nonsingular part of L and obtain
1282
+ the solution vector (N, T, U) in terms of ˆp0,ψ and ˆT0,ψ.
1283
+ The solution vector reproduces
1284
+ Eq. (29) with the column vector (Nβ, Tβ, Uβ) = (L−1
1285
+ ns ) Rβ for β = p0 and T0.
1286
+ Now we discuss how to obtain the parallel flow velocity and heat flux density when not
1287
+ using the singular value decomposition but instead, analytically calculating the integration
1288
+ constants. From Eqs. (40) and (41), we have
1289
+ U = −ˆp0,ψB−1 + γuB,
1290
+ (46)
1291
+ H = −5
1292
+ 2
1293
+ ˆT0,ψB−1 + γhB,
1294
+ (47)
1295
+ where γu and γh are expansion coefficients for the null space of D0+ (D0+B = 0), and
1296
+ B = (B/B0)F. Combining Eq. (38) with (47), we have
1297
+ DT = γuFu + γhFh + ˆp0,ψFp + ˆT0,ψFT,
1298
+ (48)
1299
+ where
1300
+ Fu = −Khh,−1KhπDWB,
1301
+ Fh = Khh,−1B,
1302
+ Fp = −Khh,−1 �
1303
+ Hp0 − KhπDWB−1
1304
+
1305
+ ,
1306
+ FT = −Khh,−1
1307
+
1308
+ HT0 + 5
1309
+ 2B−1
1310
+
1311
+ ,
1312
+ (49)
1313
+ 19
1314
+
1315
+ Combining Eq. (39) with Eq. (42) and using Eqs. (46) and (48), we have
1316
+ DN + DT = γuGu + γhGh + ˆp0,ψGp + ˆT0,ψGT
1317
+ (50)
1318
+ where
1319
+ Gu = −D1+ �
1320
+ KπhFu + KππDWB
1321
+
1322
+ ,
1323
+ Gh = −D1+KπhFh,
1324
+ Gp = −D1+ �
1325
+ Sp0 + KπhFp − KππDWB−1
1326
+
1327
+ ,
1328
+ GT = −D1+ �
1329
+ ST0 + KπhFT�
1330
+ .
1331
+ (51)
1332
+ The temperature and density can be obtained by inverting the nonsingular part of D in
1333
+ Eqs. (48) and (50). The null space of D is spanned by [ϕ(0)]F, which corresponds to the
1334
+ constant term in the Fourier series. Since the lowest-order density (n0) and temperature
1335
+ (T0) are constant, we set n(0) = 0 and T(0) = 0 without loss of generality. From the first row
1336
+ corresponding to the constant (0) Fourier mode,
1337
+ 0 = γuFu
1338
+ (0) + γhFh
1339
+ (0) + ˆp0,ψFp
1340
+ (0) + ˆT0,ψFT
1341
+ (0),
1342
+ (52)
1343
+ 0 = γuGu
1344
+ (0) + γhGh
1345
+ (0) + ˆp0,ψGp
1346
+ (0) + ˆT0,ψGT
1347
+ (0),
1348
+ (53)
1349
+ we can determine the integration constants γu and γh,
1350
+
1351
+  γu
1352
+ γh
1353
+
1354
+  = −
1355
+
1356
+  Fu
1357
+ (0) Fh
1358
+ (0)
1359
+ Gu
1360
+ (0) Gh
1361
+ (0)
1362
+
1363
+
1364
+ −1 
1365
+  Fp
1366
+ (0) FT
1367
+ (0)
1368
+ Gp
1369
+ (0) GT
1370
+ (0)
1371
+
1372
+
1373
+
1374
+  ˆp0,ψ
1375
+ ˆT0,ψ
1376
+
1377
+  .
1378
+ (54)
1379
+ Then Eqs. (46) and (47) with the constants obtained in Eq. (54) agree with the corresponding
1380
+ column vectors of the solution (28). Note that the heat flux obtained here is not a closure
1381
+ and satisfies ∇ · h = 0.
1382
+ Before concluding this section, a few remarks are in order. First, Eqs. (40) and (41) are
1383
+ equivalent to ∇ · (n0V1) = 0 and ∇ · h = 0. Inserting the lowest order solutions V1⊥ =
1384
+ (1/qB2)B × ∇p0 and h⊥ = (5p0/2qB2)B × ∇T0 obtained from ∇p0 − n0qV1 × B/m =
1385
+ 0 and (5/2)p0∇T0 − qh × B = 0, one can derive ˆu = −ˆp0,ψB0/B + γuB/B0 and ˆh =
1386
+ −5 ˆT0,ψB0/2B + γhB/B0 where γu and γh are integration constants. Second, Fp and Gp
1387
+ vanish when ion-electron collisions are ignored. By setting f1 = g + F, Eq. (3) becomes
1388
+ v∥∂∥g = C(g) + C(F). Note that the ˆp0,ψ term in C(F) = C(F, f 0) + C(f 0, F) vanishes due
1389
+ 20
1390
+
1391
+ to momentum conservation and does not affect g. Therefore the term ˆp0,ψ contributes only
1392
+ to the flow velocity moment of f1 and hence Fp in Eq. (48) and Gp in Eq. (50) must vanish.
1393
+ Third, in the closure calculation, the ˆp0,ψ drive appears in Wθ of the viscosity equation and
1394
+ affects closure quantities. However, the ˆp0,ψ term in V1∥ of Wθ exactly cancels the ˆp0,ψ term
1395
+ in V1⊥ of Wθ making n1 and T1 independent of the ˆp0,ψ drive. Fourth, for an electron-ion
1396
+ plasma (a, b) = (e, i) and (i, e), the ˆpa0,ψ and ˆpb0,ψ drives do not vanish in the collision
1397
+ operator C(Fa, f 0
1398
+ b ) + C(f 0
1399
+ a, Fb) for the ga equation and do affect ga unless V1a∥ = V1b∥.
1400
+ V.
1401
+ CONCLUSION AND FUTURE WORK
1402
+ We have demonstrated how to solve the drift kinetic equation using the general moment
1403
+ equations to obtain transport and closure relations.
1404
+ Using the moment-Fourier method
1405
+ developed here, one can directly solve a full set of parallel moment equations equivalent to
1406
+ the drift kinetic equation for fluid variables (density, flow velocity, and temperature) and/or
1407
+ fluxes (particle flux, electric current, heat flux, etc.). The solution moments can be used to
1408
+ construct the distribution function that is the solution of the drift kinetic equation. One can
1409
+ also solve the non-Maxwellian moment equations to express parallel closures in terms of fluid
1410
+ variables. The closures can be combined with linearized fluid equations to reproduce the
1411
+ fluid variables and/or fluxes obtained from the full set of parallel moment equations. More
1412
+ importantly, the closures can be utilized to advance a system of fluid equations in numerical
1413
+ simulations with nonlinear terms kept when nonlinear effects are significant. Note that the
1414
+ drift kinetic equation yields only linearized fluid equations by nature, e.g. Eqs. (30)-(32),
1415
+ and hence cannot capture the nonlinear effects.
1416
+ While the formalism developed here is only applied in the case of a single component plasma
1417
+ in a circular axisymmetric magnetic field, it can be generalized to a multi-component plasma
1418
+ in a tokamak with arbitrarily shaped nested flux surfaces. As long as the magnetic field
1419
+ is Fourier-expandable, the moment-Fourier approach developed here is applicable. For a
1420
+ multi-component plasma, the collisional heating and friction terms, respectively, will modify
1421
+ Eqs. (31) and (32). The collision terms introduce couplings of temperatures and flow veloc-
1422
+ ities between unlike species and, as a result, the dp0/dψ term will affect all other fluid and
1423
+ closure moments as remarked at the end of Sec. IV. Although ion-electron collisions in the
1424
+ 21
1425
+
1426
+ ion theory are ignored based on the small-mass-ratio approximation in the existing theories
1427
+ (including this work), the momentum and energy conservations require those terms in the
1428
+ ion fluid equations. These effects can be investigated by solving coupled moment equations
1429
+ with the Fourier method. The transport and closure relations for an electron-ion plasma
1430
+ will be presented in the near future.
1431
+ The moment-Fourier method developed here is applicable to a plasma with an arbitrary
1432
+ Knudsen number in a general magnetic field, as long as convergence can be achieved by
1433
+ increasing the number of moments and Fourier modes.
1434
+ In the high-collisionality limit,
1435
+ B/BθλC ≪ 1, the closure coefficients Kαβ in Eqs. (38) and (39) reproduce the corresponding
1436
+ Braginskii closure coefficient [32, 33]. In the small inverse aspect ratio limit, ǫ ≪ 1, the Kαβ
1437
+ reproduce the corresponding integral closure [31]. In principle, the moment-Fourier solutions
1438
+ are practically exact once convergence is achieved. The necessary numbers of moments and
1439
+ Fourier modes, respectively, increase as the Knudsen number and the inverse aspect ratio
1440
+ increase. In practice, the moment approach is limited by the accuracy of the inverse matrix
1441
+ in Eqs. (28) and (37). For low collisionality nFK0 ≳ 104, the required matrix dimension
1442
+ for convergence is LKF ≳ 106, and the inverse matrix becomes inaccurate due to a large
1443
+ condition number, even with the exact null space eliminated in the case of Eq. (28). For low
1444
+ collisionality, the drift kinetic equation may be solved numerically. However, in the collision-
1445
+ less limit, we find that the drift kinetic equation should be solved analytically for accurate
1446
+ closure and transport relations. The results in the collisionless limit will be presented in the
1447
+ near future, too. It is also notable that the finite element basis used in Refs. [18] and [19]
1448
+ makes the convergence faster than the Legendre polynomial basis.
1449
+ Since the computational effort to calculate the convergent closures is tremendous when
1450
+ the effective collisionality is low, it may be impractical to compute the closures during a
1451
+ fluid simulation. For practical applications, we plan to develop explicit formulas of closures
1452
+ which can be expressed in terms of magnetic field parameters, ǫ for a circular geometry
1453
+ or Fourier components for a general magnetic field. The explicit expressions of closures
1454
+ can be developed for practical values of ǫ ≲ 0.4 (at the edge of the ITER tokamak) and
1455
+ nFK0 ≲ 104 (at the core of ITER). Once the closures have been obtained for the magnetic field
1456
+ parameters, they can be conveniently used without time-consuming moment calculations.
1457
+ Furthermore, calculating γu in Eq. (46) will be performed for general ǫ and collisionality of
1458
+ 22
1459
+
1460
+ interest for a quantitative analysis of convergence depending on the number of moments and
1461
+ Fourier modes.
1462
+ DATA AVAILABILITY STATEMENT
1463
+ The data that support the findings of this study are available upon request from the authors.
1464
+ ACKNOWLEDGMENTS
1465
+ The research was supported by the U.S. DOE under Grant Nos. DE-SC0022048 and DE-
1466
+ FG02-04ER54746 and by National R&D Program through the National Research Foundation
1467
+ of Korea (NRF) funded by Ministry of Science and ICT (2021M3F7A1084419).
1468
+ [1] A. A. Galeev and R. Z. Sagdeev, Soviet Physics JETP 26, 233 (1968).
1469
+ [2] M. N. Rosenbluth, R. D. Hazeltine, and F. L. Hinton, Phys. Fluids 15, 116 (1972).
1470
+ [3] R. D. Hazeltine, F. L. Hinton, and M. N. Rosenbluth, The Physics of Fluids 16, 1645 (1973),
1471
+ https://aip.scitation.org/doi/pdf/10.1063/1.1694191.
1472
+ [4] F. L. Hinton and R. D. Hazeltine, Rev. Mod. Phys. 48, 239 (1976).
1473
+ [5] S. P. Hirshman and D. J. Sigmar, Nucl. Fusion 21, 1079 (1981).
1474
+ [6] C.
1475
+ S.
1476
+ Chang
1477
+ and
1478
+ F.
1479
+ L.
1480
+ Hinton,
1481
+ The Physics of Fluids 25, 1493 (1982),
1482
+ https://aip.scitation.org/doi/pdf/10.1063/1.863934.
1483
+ [7] M. Taguchi, Plasma Physics and Controlled Fusion 30, 1897 (1988).
1484
+ [8] R. D. Hazeltine, Plasma Phys. 15, 77 (1973).
1485
+ [9] R. D. Hazeltine and J. D. Meiss, Plasma Confinement (Dover Pub., Inc., New York, 2003).
1486
+ [10] R. Balescu, Transport Processes in Plasmas (North-Holland, Amsterdam, 1988) vols. 1 and 2.
1487
+ [11] P. Helander and D. J. Sigmar, Collisional Transport in Magnetized Plasmas (Cambridge Uni-
1488
+ versity Press, Cambridge, 2002).
1489
+ [12] E. A. Belli and J. Candy, Plasma Physics and Controlled Fusion 50, 095010 (2008).
1490
+ 23
1491
+
1492
+ [13] E. A. Belli and J. Candy, Plasma Physics and Controlled Fusion 51, 075018 (2009).
1493
+ [14] E. A. Belli and J. Candy, Plasma Physics and Controlled Fusion 54, 015015 (2011).
1494
+ [15] M. Landreman and D. R. Ernst, Plasma Physics and Controlled Fusion 54, 115006 (2012).
1495
+ [16] M. Landreman and D. R. Ernst, Journal of Computational Physics 243, 130 (2013).
1496
+ [17] E.
1497
+ D.
1498
+ Held,
1499
+ S.
1500
+ E.
1501
+ Kruger,
1502
+ J.-Y.
1503
+ Ji,
1504
+ E.
1505
+ A.
1506
+ Belli,
1507
+ and
1508
+ B.
1509
+ C.
1510
+ Lyons,
1511
+ Physics of Plasmas 22, 032511 (2015), https://doi.org/10.1063/1.4914165.
1512
+ [18] J.
1513
+ R.
1514
+ Jepson,
1515
+ C.
1516
+ C.
1517
+ Hegna,
1518
+ E.
1519
+ D.
1520
+ Held,
1521
+ J.
1522
+ A.
1523
+ Spencer,
1524
+ and
1525
+ B.
1526
+ C.
1527
+ Lyons,
1528
+ Physics of Plasmas 28, 082503 (2021), https://doi.org/10.1063/5.0054978.
1529
+ [19] J.
1530
+ A.
1531
+ Spencer,
1532
+ B.
1533
+ Adair,
1534
+ E.
1535
+ D.
1536
+ Held,
1537
+ J.-Y.
1538
+ Ji,
1539
+ and
1540
+ J.
1541
+ R.
1542
+ Jepson,
1543
+ Journal of Computational Physics 450, 110862 (2022).
1544
+ [20] C. Sovinec, A. Glasser, T. Gianakon, D. Barnes, R. Nebel, S. Kruger, D. Schnack, S. Plimpton,
1545
+ A. Tarditi, and M. Chu, Journal of Computational Physics 195, 355 (2004).
1546
+ [21] S. C. Jardin, N. Ferraro, X. Luo, J. Chen, J. Breslau, K. E. Jansen,
1547
+ and M. S. Shephard,
1548
+ Journal of Physics: Conference Series 125, 012044 (2008).
1549
+ [22] J.
1550
+ Breslau,
1551
+ N.
1552
+ Ferraro,
1553
+ and
1554
+ S.
1555
+ Jardin,
1556
+ Physics of Plasmas 16, 092503 (2009),
1557
+ https://doi.org/10.1063/1.3224035.
1558
+ [23] B.
1559
+ Dudson,
1560
+ M.
1561
+ Umansky,
1562
+ X.
1563
+ Xu,
1564
+ P.
1565
+ Snyder,
1566
+ and
1567
+ H.
1568
+ Wilson,
1569
+ Computer Physics Communications 180, 1467 (2009).
1570
+ [24] M. Hoelzl, G. Huijsmans, S. Pamela, M. Bécoulet, E. Nardon, F. Artola, B. Nkonga, C. Atana-
1571
+ siu, V. Bandaru, A. Bhole, D. Bonfiglio, A. Cathey, O. Czarny, A. Dvornova, T. Fehér, A. Fil,
1572
+ E. Franck, S. Futatani, M. Gruca, H. Guillard, J. Haverkort, I. Holod, D. Hu, S. Kim,
1573
+ S. Korving, L. Kos, I. Krebs, L. Kripner, G. Latu, F. Liu, P. Merkel, D. Meshcheriakov,
1574
+ V. Mitterauer, S. Mochalskyy, J. Morales, R. Nies, N. Nikulsin, F. Orain, J. Pratt, R. Ra-
1575
+ masamy, P. Ramet, C. Reux, K. Särkimäki, N. Schwarz, P. S. Verma, S. Smith, C. Sommariva,
1576
+ E. Strumberger, D. van Vugt, M. Verbeek, E. Westerhof, F. Wieschollek,
1577
+ and J. Zielinski,
1578
+ Nuclear Fusion 61, 065001 (2021).
1579
+ [25] J.-Y. Ji and E. D. Held, Phys. Plasmas 21, 042102 (2014).
1580
+ [26] J.-Y. Ji and E. D. Held, Phys. Plasmas 13, 102103 (2006).
1581
+ [27] J.-Y. Ji and E. D. Held, Phys. Plasmas 16, 102108 (2009).
1582
+ [28] R. Jorge, P. Ricci, and N. F. Loureiro, Journal of Plasma Physics 83, 905830606 (2017).
1583
+ [29] R. Jorge, B. J. Frei, and P. Ricci, Journal of Plasma Physics 85, 905850604 (2019).
1584
+ 24
1585
+
1586
+ [30] J.-Y. Ji, E. D. Held, and C. R. Sovinec, Phys. Plasmas 16, 022312 (2009).
1587
+ [31] J.-Y. Ji, H. Q. Lee, and E. D. Held, Phys. Plasmas 24, 022127 (2017).
1588
+ [32] S. I. Braginskii, in Reviews of Plasma Physics, Vol. 1, edited by M. A. Leontovich (Consultants
1589
+ Bureau, New York, 1965) p. 205.
1590
+ [33] J.-Y. Ji and E. D. Held, Phys. Plasmas 22, 062114 (2015).
1591
+ 25
1592
+
5NE0T4oBgHgl3EQfegCm/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
5tE1T4oBgHgl3EQfBAK1/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dd67f4064744cd135e9fa3d728f2e0056b74357ca230618c641329fe4d3d20d2
3
+ size 4259885
6tAyT4oBgHgl3EQfcvc9/content/2301.00288v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c01c2bc4caa48a9f6d5e9edaf554e49dabdf9fb915a912aae9198de02c6b03c8
3
+ size 376798
6tAyT4oBgHgl3EQfcvc9/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0851bf3944a6b475c9c3b54ee59fb28e6ec940ce4bd998875f05cdec87c39279
3
+ size 190729
6tFAT4oBgHgl3EQfnx0W/content/2301.08630v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:605703411986816c42638de3793fd2d51a11dec4dbe54c492fd00f9966de127d
3
+ size 657729
8NE3T4oBgHgl3EQfqQrl/content/tmp_files/2301.04651v1.pdf.txt ADDED
@@ -0,0 +1,557 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ PHOTONIC SPATIAL-EULER ISING MACHINE FOR SOLVING
2
+ 20000-NODE MAX-CUT PROBLEM ∗
3
+ Xin Ye, Wenjia Zhang, Shaomeng Wang, Xiaoxuan Yang, Zuyuan He
4
+ State Key Laboratory of Advanced Optical Communication Systems and Networks
5
+ Shanghai Jiao Tong University
6
+ Shanghai 200240, China
7
+ {Wenjia Zhang}[email protected]
8
+ ABSTRACT
9
+ To tackle challenging combinatorial optimization problems, analog computing machines based on
10
+ the nature-inspired Ising model are attracting increasing attentions in order to disruptively overcome
11
+ the impending limitations on conventional electronic computers. Photonic spatial Ising machine has
12
+ become an unique and primitive solution with all-to-all connections to solve large-scale Max-cut
13
+ problems. However, spin configuration and flipping requires two independent sets of spatial light
14
+ modulators (SLMs) for amplitude and phase modulation, which will lead to tremendous engineering
15
+ difficulty of optical alignment and coupling. We report a novel quadrature photonic spatial-Euler
16
+ Ising machine to realize large-scale and flexible spin-interaction configuration and spin-flip in a
17
+ single spatial light modulator, and develop a noise enhancement approach by adding digital white
18
+ noise onto detected optical signals. We experimentally show that such proposal accelerates solving
19
+ (un)weighted, (non)fully connected, 20736-node Max-cut problems, which offers obvious advantages
20
+ over simulation and heuristic algorithm results in digital computers.
21
+ 1
22
+ Introduction
23
+ Complex systems related research has progressed at a rapid pace due to high-throughput data acquisition techniques
24
+ [1, 2, 3]. Contrarily, comprehensive processing and optimization of big data with complex structures and correlations is
25
+ a prerequisite for the vast applications and spectacular advancement in bioinformatics [4, 5], pharmaceutical medicine
26
+ [6, 7], finance [8, 9], cryptography [10, 11], and artificial intelligence (AI) [12, 13]. Therefore, powerful mathematical
27
+ models and hardware processors are critically utilised to analyse high-dimensional data sets and complex systems. The
28
+ Ising model, depicting Markov chains of interacting binary units, is a typical model used to study complex systems
29
+ [14, 15]. Various artificial Ising machines developed based on this model accelerate conventional electronic computers
30
+ in performing optimization tasks involving non-deterministic polynomial time (NP)-hard problems and combinatorial
31
+ optimisation tasks, such as the Max-cut, protein folding, number partition and travelling salesman problem(TSP)
32
+ [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36].
33
+ Among these Ising solutions, the photonic Ising machine, by leveraging light interference to emulate spin interaction in
34
+ ferromagnets, offers substantial benefits of high connectivity and speed in ground state search [37]. Recently, there
35
+ propose various innovative photonic constructions for Ising model, such as optical coherent Ising machines (CIM)
36
+ [19, 24, 20, 21, 22, 23], photonic recurrent Ising sampler (PRIS) [34, 35], and spatial photonic Ising machines (SPIM)
37
+ [28, 29, 30, 31, 32, 33, 38]. These proposals, originated from Ising model given by H = − �
38
+ <l,k> Jl,kxlxk where
39
+ Jl,k is the interaction between spins and spin binary state xl ∈ {1, −1}, are designed to search for ground state of
40
+ Ising model with the minimum Hamiltonian by either iterative sampling or directly evolving the ensemble energy
41
+ regarding the established mapping of a particular combinatorial problem. Although the coherent Ising machines
42
+ performs comparably to the quantum annealing, it lacks the advantages of parallel processing in optical computing since
43
+ it requires an extremely long fiber cavity to simulate spins through temporal multiplexing [19]. Chip-level photonic
44
+ Ising samplers are embedded with specialised heuristic method to provide sample solutions to the ground state of Ising
45
+
46
+ arXiv:2301.04651v1 [cs.ET] 11 Jan 2023
47
+
48
+ Figure 1: Architecture of the quadrature photonic spatial-Euler Ising machine. (a)The schematic and principle
49
+ of Euler-SIM. (b) Images of the initial and target intensity. The white bar corresponds to the length of 20 µm. (c) Initial
50
+ and final phase masks encoding on SLM in one experiment.
51
+ models, but currently fail to scale up [34] and heuristic algorithms are difficult to converge into optimum point for
52
+ a large-scale problem. In contrast, spatial photonic Ising machines encoding the spins as a phase matrix in spatial
53
+ light modulators (SLMs), can implement spin scales up to tens of thousands [28, 31]. This approach, using spatial
54
+ Fourier transformation as basic building block, can be expressed by H = − �
55
+ <l,k> εlεkxlxk, which indicates that the
56
+ interaction coefficient Jl,k is set by the amplitude modulation εl and εk. This scheme is compatible with an Ising model
57
+ with fully connected interactions (or an equivalent quadratic unconstrained binary optimization (QUBO) problem) due
58
+ to its high connectivity and scalability [31].
59
+ However, the proposed Ising machine still need external spatial amplitude modulator and thereby spin configuration
60
+ and flipping will require two independent sets of spatial light modulators (SLMs) for amplitude and phase modulation,
61
+ which will lead to tremendous engineering difficulty of optical alignment and coupling [31]. In our previous work, we
62
+ proposed quadrature spatial Ising machine to provide flexibility for interaction configuration by introducing spatial
63
+ spins interference with quadrature phase design [32, 39]. However, the proposed Ising machine still need external
64
+ spatial amplitude modulator and thereby spin configuration and flipping will require two independent sets of spatial
65
+ light modulators (SLMs) for amplitude and phase modulation, which will lead to tremendous engineering difficulty of
66
+ optical alignment and coupling.
67
+ In this paper, we propose a novel quadrature photonic spatial-Euler Ising machine (Euler-SIM) where intensity
68
+ modulation is performed based on Euler’s Formula by extending quadrature phase configuration. To estimate the
69
+ performance of Euler-SIM, we conduct experiments and simulations on the Max-cut problem with over 20000 nodes.
70
+ The max cut value in experiment is improved by 32% over simulation results and 34% over Sahni-Gonzales (SG)
71
+ algorithm with a hundredfold speedup. The results demonstrate the superiority of our structure in terms of result
72
+ yield and speed of solving NP-hard problems beyond the traditional von Neumann processor. Furthermore, we also
73
+ investigate noise enhancement approach through experiments, finding that up to 8% performance enhancement by
74
+ adding external Gaussian white noise on the detected optical amplitude.
75
+ 2
76
+ Principle of quadrature photonic spatial-Euler Ising machine
77
+ Fig.1(a) shows the architecture design of Euler-SIM. An extended coherent light source shines on the SLM screen.
78
+ The phase mask of SLM is configured by four parts to encode both the interaction coefficients and the spin states. In
79
+ this case, a spin with amplitude information will consist of four parts ei(φl−αl),ei(θl−βl),ei(φl+αl),ei(θl+βl). On the
80
+ one hand, the spin state xl is encoded by the modulated phase φl ∈ {0, π}, and the corresponding yl is encoded by
81
+ the quadrature phase θl ∈ { π
82
+ 2 , 3π
83
+ 2 }. There satisfies a specific transformation relation between y and x determined by
84
+ the interaction matrix, y = Ax [32]. On the other hand, arbitrary amplitudes scaled down to the range (−1, 1) can be
85
+ converted into phase. According to the corollary to Euler’s Formula, the cosine functions can be interpreted as weighted
86
+ sums of the exponential functions,as
87
+ cos αl = ℜ(eiαl) = eiαl + e−iαl
88
+ 2
89
+ (1)
90
+ 2
91
+
92
+ Initial I(u,v)
93
+ Initial Phase Mask
94
+ ei(t-βi)
95
+ 20μm
96
+ ei(pi+an)
97
+ ntyi
98
+ Target I(u,v)
99
+ Final Phase Mask
100
+ H=
101
+ (EIEXiXkNnyiyk)
102
+ ,k
103
+ 1x13
104
+ 15
105
+ 20μm
106
+ feedbackFigure 2: Experimental and simulation results of Max-cut problem. (a) Graph division of a 100-node Max-cut
107
+ problem obtained by Euler-SIM. (b) Graph division of a 100-node Max-cut problem obtained with the SG algorithm.
108
+ (c) Experimental searching for max cut value of 20736 nodes. (d) Simulated searching for max cut value of 20736
109
+ nodes. (e) Experimental and simulation results for Max-cut problems with graph densities of [0.5, 1.0].
110
+ Thus, phase and amplitude information can then be encoded simultaneously according to extra phases αl, as
111
+ εlxl = 1
112
+ 2[ei(φl−αl) + ei(φl+αl)]
113
+ (2)
114
+ The modulated wave is passed through a lens to achieve spatial Fourier Transform and result in a superimposed effect at
115
+ the centre intensity
116
+ I(0, 0) = (xT ε + yT η)(εT x + ηT y)
117
+ (3)
118
+ which followed an opposite trend to the corresponding Hamiltonian
119
+ H = −
120
+
121
+ <l,k>
122
+ (εlεkxlxk ± ηlηkylyk)
123
+ (4)
124
+ Therefore, we can search for the ground state of Ising model by maximising the central light intensity during the
125
+ experiment.
126
+ Based on the above architecture design, we construct the experimental setup. An incident beam with λ = 632.8nm is
127
+ injected into the beam expander with a rectangular aperture to produce a 12.5 × 7.1 mm2 rectangular light spot, which
128
+ completely covers the phase-only reflective SLM (HOLOEYE LETO-3-CFS-127) plane to activate all the 1920×1080
129
+ pixels with a pixel pitch of 6.4 µm. Here, a full-coverage spot is essential to maximise pixel utilisation while minimizing
130
+ modulation-related pixel alignment issues. In the rear, a lens (focal length f = 150 mm) is arranged to perform a
131
+ two-dimensional Fourier transform on the beam with modulated wavefront. Finally, we probe the field intensity by the
132
+ charge-coupled device (CCD) camera on the back focal plane of the lens. The 2758×2208 sensor in the CCD (QSI 600)
133
+ has 800 kHz read rate and 16 bit digital resolution, providing extremely high resolution. The intensity images are loaded
134
+ into the central processing unit (CPU) for subsequent calculations in the electrical domain and feed back. Fig.1(b)
135
+ 3
136
+
137
+ (a)
138
+ (b)
139
+ (c)
140
+ ×104
141
+ ×108
142
+ .285
143
+ 2
144
+ 1.8
145
+ 3.28
146
+ 1.6
147
+ Distance
148
+ Euclidean Distance
149
+ 1.4
150
+ 3.275
151
+ Cut Value
152
+ 1.2
153
+ Value
154
+ Euclidean J
155
+ 3.27
156
+ 1
157
+ 0.8
158
+ 3.265
159
+ 0.6
160
+ 0.4
161
+ 3.26
162
+ 0.2
163
+ Euler-SIM
164
+ SG algorithm
165
+ 3.255
166
+ 0
167
+ (100 nodes)
168
+ (100 nodes)
169
+ 1
170
+ 11
171
+ 21
172
+ 31
173
+ 41
174
+ 51
175
+ 61
176
+ 71
177
+ 81
178
+ 91
179
+ Iteration
180
+ (d)
181
+ (e)
182
+ ×108
183
+ ×108
184
+ ×108
185
+ 5
186
+ 1.2
187
+ 4.5
188
+ 2
189
+ 4
190
+ Hamiltonian
191
+ 3.5
192
+ 0.8
193
+ 1.5
194
+ Hamiltonian
195
+ Cut Value
196
+ Value,
197
+
198
+ 2.5
199
+ Cut
200
+ 2
201
+ Experimental results
202
+ 0.4
203
+ 1.5
204
+ Simulated results
205
+ 0.5
206
+ 1
207
+ 0.2
208
+ SG algorithm
209
+ 0.5
210
+ 0
211
+ 0
212
+ 0
213
+ 1
214
+ 11
215
+ 21
216
+ 31
217
+ 41
218
+ 51
219
+ 61
220
+ 71
221
+ 81
222
+ 91
223
+ 0.5
224
+ 0.6
225
+ 0.7
226
+ 0.8
227
+ 0.9
228
+ 1
229
+ Iteration
230
+ Graph DensityFigure 3: Experimental and simulation results of Max-cut problem with added digital noise. (a) Initial image
231
+ acquired by CCD. (b) Gaussian white noise matrices with noise level of 0.1. (c) Polluted image for computation. (d)
232
+ Simulated results of Max-cut problems with added digital noise of 0.02 0.08. The black dashed line represents the
233
+ unnoised results as a reference line. (e) Experimental results for Max-cut problems with added digital noise. The ideal
234
+ noise levels related to different graph densities are marked at the top of the orange bars.
235
+ exhibits the images of the initial detected intensityI and target intensityIT , which is focused by a uniform beam without
236
+ any modulation. Here, we calculate the Euclidean distance ∥IT − I∥2 as a cost function of the simulated annealing (SA)
237
+ algorithm, thus generating a new phase mask to refresh SLM screen. And the initial and final phase masks describing
238
+ the spin states are illustrated in Fig.1(c). This procedure is continuously cycled to govern the Hamiltonian evolution
239
+ until the system stabilises to the ground state.
240
+ 3
241
+ Experiments and Discussion
242
+ 3.1
243
+ Experimental performances and numerical simulations
244
+ The Max-cut problem, requiring to find the cut of the given graph into two subsets with the maximum value of their
245
+ connecting weighted edges, can be formulated into an equivalent Ising model without local fields [40]. An unweighted
246
+ and all-to-all connection max-cut problem make it easier by assuming Jl,k taking values of ±1, whereas many NP
247
+ problems can only be converted into weighted sparse max-cut problems for solution [41]. Our proposed scheme
248
+ perfectly implements the mapping of the latter. For each cut, the cut value is denoted as
249
+ W = 1
250
+ 2
251
+
252
+ <l,k>
253
+ wl,k(1 − xlxk)
254
+ (5)
255
+ where wl,k is the weight between the l-th vertex and the k-th vertex. The related Hamiltonian we use is H =
256
+
257
+ <l,k> wl,kxlxk and the weight can be expressed as
258
+ wl,k = cos αl cos αk ± cos βl cos βk
259
+ (6)
260
+ Thus, we can maximise the cut value by looking for the minimum Hamiltonian.
261
+ Given that it is too complex to be solved precisely with a large scale problem as the exact solvers generally fail with
262
+ 1000 vertexes [34, 42]. Before the experiments we need to perform a reference calculation on the conventional electrical
263
+ computing platform. Usually, the Goemans-Williamson SDP (GW-SDP) algorithm is one of the most popular methods
264
+ to solve the Max-cut problem with a guarantee of solution quality. However, it fails to solve large-scale problems owing
265
+ to the inordinately long time consumption [43]. Therefore, for large instances, we choose to employ another classical
266
+ greedy heuristic algorithm called the Sahni-Gonzales (SG) method, which is known to find approximate solutions
267
+ to large Max-cut problems in polynomial time, comparable to the GW-SDP [24, 36]. Using this method, a set of
268
+ 4
269
+
270
+ (a)
271
+ (b)
272
+ (c)
273
+ (e)
274
+ ×108
275
+ 1.85
276
+ 0.8
277
+ 0.08
278
+ 0.6
279
+ 1.8
280
+ Spontaneous Noise Only
281
+ 0.4
282
+ 0.07
283
+ 0.2
284
+ 1.75
285
+ Added Digital Noise
286
+ (d)
287
+ ×108
288
+ 1.4
289
+ 1.7
290
+ -0.02
291
+ 0.04
292
+ -0
293
+ 1.35
294
+ lue
295
+ 1.65
296
+ -0.03
297
+ 0.04
298
+ Val
299
+ 1.3
300
+ Value
301
+ 0.06
302
+ 0.08
303
+ 0.06
304
+ 1.25
305
+ 1.55
306
+ 0.05
307
+ 0.07
308
+ 1.2
309
+ 1.5
310
+ 1.15
311
+ 1.45
312
+ 1.1
313
+ 1.4
314
+ 0.5
315
+ 0.6
316
+ 0.7
317
+ 0.8
318
+ 0.9
319
+ 1
320
+ 0.5
321
+ 0.6
322
+ 0.7
323
+ 0.8
324
+ 0.9
325
+ 1
326
+ Graph Density
327
+ Graph DensityTable 1: Performance comparison between Euler-SIM and other Ising machines for solving Max-cut problems.
328
+ Ising machine
329
+ Implementation
330
+ Problem Type
331
+ Problem Scale
332
+ Time to Resolution
333
+ 8-FPGA SB machine [44]
334
+ Easy
335
+ All-to-all,Weighted
336
+ 16,384-node
337
+ 1.2 ms
338
+ PRIS [34]
339
+ Easy
340
+ All-to-all,Unweighted
341
+ 100-node
342
+ 63 ns per-step
343
+ CIM with DOPO [24]
344
+ Very hard
345
+ All-to-all,Unweighted
346
+ 100000-node
347
+ 785 µs
348
+ CIM with OEPO [45]
349
+ Hard
350
+ Sparse,Unweighted
351
+ 56-node
352
+ 4.5 µs
353
+ D-wave 2000Q [46, 36]
354
+ Very hard
355
+ Sparse,Weighted
356
+ 2500-node
357
+ > 104 s (for 55 nodes)
358
+ Euler-SIM
359
+ Easy
360
+ All-to-all,Weighted
361
+ 20000-node
362
+ 325 s
363
+ 20736-node Max-cut problems with varies graph densities is implemented on CPUs (Intel i9-13900K, 5.8 GHz) to
364
+ derive the max cut values, consuming 11 hours on average.
365
+ The division of 20736 points is too convoluted to be plotted. We present the division of the 100-node Max-cut problem
366
+ solved by the Euler-SIM in Fig. 2 (a) and the SG algorithm in Fig. 2 (b), respectively. Fig. 2 (c) plots the results of five
367
+ experiments on the weighted Max-cut problem with 20736 fully connected nodes. During the 100 iterations, the cut
368
+ value increases as the Euclidean distance decreases and stabilizes at around 1.759 × 108, with a 122 times speedup
369
+ compared to the SG algorithm.
370
+ Furthermore, we carried out five simulations of the same problem in MATLAB to approximate the operation of the
371
+ photonic Ising machine. As shown in Fig. 2 (d), the cut value remarkably increases to 1.076 × 108 as the Hamiltonian
372
+ of the Ising model converges rapidly. An interesting finding is that the simulation results are inferior to the experimental
373
+ results. Finally, we extended our experiments and simulations for the Max-cut problem with graph densities of 0.5-1.0
374
+ compared with the SG algorithm. The results are shown as Fig. 2 (e), which statistically demonstrates that our
375
+ Euler-SIM offers compelling advantages for handling large-scale Max-cut problem that outweighs electronic computers,
376
+ in comparison with both simulation results and SG algorithm. The experimental max cut values exceed the SG algorithm
377
+ by an average of 34% and achieve a maximum of 49% with graph density of 1.0, which precisely captures the inherent
378
+ advantage of fully connected systems. Additionally, the experimental results routinely outperform the simulated results
379
+ by roughly 32%. The reasons for this occurrence will be discussed in the next section.
380
+ 3.2
381
+ Noise enhancement approach
382
+ The detection susceptible to noise may cause some uncertainty in experiments and we speculate that it is the discrepancy
383
+ that makes it easier to jump out of the local optimum and fit better with the SA algorithm, resulting in a better solution.
384
+ In fact, several related works have reported that noise-accelerated or noise-enhanced photonic Ising machines can be
385
+ used to solve large-scale combinatorial optimization problems [29, 35]. Considering that the spontaneous noise of the
386
+ system is challenging to gauge, we develop a noise enhancement approach by adding digital white noise onto detected
387
+ optical signals. More specifically, we generate a group of white noise matrices with different variances and add to
388
+ the CCD acquired images (after normalization) separately to provide a group of polluted images for computation and
389
+ feedback, as shown in Fig. 3(a)-(c).
390
+ Since large noise can blur the image and prevent the algorithm from converging, we pre-simulate to clarify a suitable
391
+ range of noise levels, defined as variance. The noise level is eventually lowered to less than 0.1, ensuring the correct
392
+ execution of the algorithm to obtain a feasible solution. And the simulated results in Fig. 3(d) show that Gaussian noise
393
+ with a noise level of 0.02 to 0.03 may enhance the outcomes, which will be more striking for higher graph densities, up
394
+ to 2.7%. In subsequent experiments shown as Fig. 3(e), we find that the added digital noise do improve the experimental
395
+ results with an average increase of 2.9%. Similar to the numerical simulation results, a more significant improvement
396
+ still appears in the larger graph density, up to 8.6%. Note that the ideal noise level fluctuates with the graph density
397
+ rather than in a fixed threshold, which is different from numerical simulations.
398
+ 3.3
399
+ Discussion on the Euler-SIM performance
400
+ We also compare the performance of the Euler-SIM in solving Max-Cut problems to other Ising machines in Table
401
+ 1. We evaluated relevant metrics, such as implementation, problem type and scale, time to resolution (or speed) and
402
+ obtained the following conclusions:
403
+ 1. Efficiently solving large-scale Max-cut problems. Compared to most solutions [44, 34, 45, 46], we comfortably
404
+ solve Max-cut problems with size over 20,000, approaching the highest reported record so far [24]. In fact, we
405
+ take the adjacent 10×10 pixels as an operation unit for the same encoding to ensure the consistency of the
406
+ 5
407
+
408
+ Ising system in our experiments, thus do not maximise the use of all pixel points. With further optimisation of
409
+ the alignment and detection capabilities, it is feasible to scale up the problem hundredfold.
410
+ 2. Flexible mapping of (non)fully connected Max-cut problems with arbitrary amplitude. Considering experi-
411
+ mental setups, many schemes prefer to demonstrate the process of solving the benchmark sparse unweighted
412
+ Max-cut problem [34, 24, 45]. Obviously, being unweighted reduces the complexity, and fully connected
413
+ problems are of more practical value and harder to implement than sparse ones [44]. As a result, many designs
414
+ take great efforts to achieve fully connection. Quantum annealers sacrifice scale, and CIMs also address this
415
+ deficiency by various schemes [15]. And achieving weighted is even harder. However, it is where SIM excel
416
+ and our design further magnifies this advantage by liberal switching between fully and non-fully connected,
417
+ weighted and unweighted problems.
418
+ 3. Simple and cost-effective experimental construction. Large power consumption and high costs are required
419
+ by quantum annealers because of the cryogenic environment. Even CIMs impose rigorous experimental
420
+ requirements. Fiber oscillators of tens of kilometres are applied to keep optical loss and optical gain within
421
+ thresholds and thus guarantee spin-to-spin coupling, bringing fairly large roundtrip loss [15]. In contrast, our
422
+ approach based on a simple SLM is superior in terms of experimental cost and manoeuvrability.
423
+ Despite the fact that different Ising machines demonstrate their respective attractions in tackling Max-cut problems, such
424
+ as ultra-large scale [24], ultra-high speed [34], high stability [45], and arbitrary Max-cut problem mapping [44, 34], our
425
+ design, by adopting a more economical experimental architecture, achieves the magnitude adjacent to the largest scale
426
+ and free mapping of (non)fully connected, (un)weighted Max-cut problems, which has greater practical implications for
427
+ solving NP-hard problems. Although the design leaves much to be desired in terms of computational speed, which
428
+ is constrained by the optoelectronic transmission of data and the refresh frequency of the SLM, it still exhibits speed
429
+ advantages over electrical computation and even quantum annealing.
430
+ 4
431
+ Conclusion
432
+ In summary, our proposed Euler-SIM utilises Euler’s Formula to achieve amplitude-phase integrated modulation
433
+ and solves the Max-cut problem with 20,736 nodes. The experimental results present around 32% max cut value
434
+ improvement over simulations and 34% over SG algorithm running on the electronic computer, validating better
435
+ optimization performance and fast speed of the optical computing paradigm. Additionally, it is also a noise-friendly
436
+ Ising machine that not only exhibits a large tolerance for system noise, but even rationalizes noise as a potential boost
437
+ to system performance. Thus, this Euler-SIM will become a large-scale optical stochastic computing architecture for
438
+ solving optimization problems of various complex systems.
439
+ Acknowledgments
440
+ This work is supported by the National Key Research and Development Program of China under Grant
441
+ 2019YFB1802903, National Natural Science Foundation of China under Grant 62175146 and 62235011 and Major Key
442
+ Project of PCL (PCL2021A14).
443
+ References
444
+ [1] Giorgio Parisi. Complex systems: a physicist’s viewpoint. arXiv preprint cond-mat/0205297, 2002.
445
+ [2] Marc Mézard, Giorgio Parisi, and Miguel Angel Virasoro. Spin glass theory and beyond: An Introduction to the
446
+ Replica Method and Its Applications, volume 9. World Scientific Publishing Company, 1987.
447
+ [3] Miguel Aguilera, S Amin Moosavi, and Hideaki Shimazaki. A unifying framework for mean-field theories of
448
+ asymmetric kinetic ising systems. Nature communications, 12(1):1–12, 2021.
449
+ [4] Qinghua Liu, Liman Wang, Anthony G Frutos, Anne E Condon, Robert M Corn, and Lloyd M Smith. Dna
450
+ computing on surfaces. Nature, 403(6766):175–179, 2000.
451
+ [5] Andrea Degasperi, Dirk Fey, and Boris N Kholodenko. Performance of objective functions and optimisation
452
+ procedures for parameter estimation in system biology models. NPJ systems biology and applications, 3(1):1–9,
453
+ 2017.
454
+ [6] Lars-Petter Granan. The ising model applied on chronification of pain. Pain Medicine, 17(1):5–9, 2016.
455
+ [7] Douglas B Kitchen, Hélène Decornez, John R Furr, and Jürgen Bajorath. Docking and scoring in virtual screening
456
+ for drug discovery: methods and applications. Nature reviews Drug discovery, 3(11):935–949, 2004.
457
+ 6
458
+
459
+ [8] Rosario N Mantegna and H Eugene Stanley. Introduction to econophysics: correlations and complexity in finance.
460
+ Cambridge university press, 1999.
461
+ [9] Stan Van Hoesel and Rudolf Müller. Optimization in electronic markets: examples in combinatorial auctions.
462
+ Netnomics, 3(1):23–33, 2001.
463
+ [10] Baonan Wang, Feng Hu, Haonan Yao, and Chao Wang. Prime factorization algorithm based on parameter
464
+ optimization of ising model. Scientific reports, 10(1):1–10, 2020.
465
+ [11] Andrew D King, Juan Carrasquilla, Jack Raymond, Isil Ozfidan, Evgeny Andriyash, Andrew Berkley, Mauricio
466
+ Reis, Trevor Lanting, Richard Harris, Fabio Altomare, et al.
467
+ Observation of topological phenomena in a
468
+ programmable lattice of 1,800 qubits. Nature, 560(7719):456–460, 2018.
469
+ [12] Masanao Yamaoka, Chihiro Yoshimura, Masato Hayashi, Takuya Okuyama, Hidetaka Aoki, and Hiroyuki Mizuno.
470
+ Ising computer. Hitachi Review, 65(6):157, 2016.
471
+ [13] Qiang Zhang, Dandan Deng, Wenting Dai, Jixin Li, and Xinwen Jin. Optimization of culture conditions for
472
+ differentiation of melon based on artificial neural network and genetic algorithm. Scientific Reports, 10(1):1–8,
473
+ 2020.
474
+ [14] Andrew Lucas. Ising formulations of many np problems. Frontiers in physics, page 5, 2014.
475
+ [15] Naeimeh Mohseni, Peter L McMahon, and Tim Byrnes. Ising machines as hardware solvers of combinatorial
476
+ optimization problems. Nature Reviews Physics, 4(6):363–379, 2022.
477
+ [16] Kihwan Kim, M-S Chang, Simcha Korenblit, Rajibul Islam, Emily E Edwards, James K Freericks, G-D Lin,
478
+ L-M Duan, and Christopher Monroe. Quantum simulation of frustrated ising spins with trapped ions. Nature,
479
+ 465(7298):590–593, 2010.
480
+ [17] Colin D Bruzewicz, John Chiaverini, Robert McConnell, and Jeremy M Sage. Trapped-ion quantum computing:
481
+ Progress and challenges. Applied Physics Reviews, 6(2):021314, 2019.
482
+ [18] DL—Bixby Applegate and V RE—Chvátal. Cook, wj: The traveling salesman problem: A computational study,
483
+ 2007.
484
+ [19] Fuxi Cai, Suhas Kumar, Thomas Van Vaerenbergh, Xia Sheng, Rui Liu, Can Li, Zhan Liu, Martin Foltin, Shimeng
485
+ Yu, Qiangfei Xia, et al. Power-efficient combinatorial optimization using intrinsic noise in memristor hopfield
486
+ neural networks. Nature Electronics, 3(7):409–418, 2020.
487
+ [20] Masoud Babaeian, Dan T Nguyen, Veysi Demir, Mehmetcan Akbulut, Pierre-A Blanche, Yushi Kaneda, Saikat
488
+ Guha, Mark A Neifeld, and N Peyghambarian. A single shot coherent ising machine based on a network of
489
+ injection-locked multicore fiber lasers. Nature communications, 10(1):1–11, 2019.
490
+ [21] Takahiro Inagaki, Yoshitaka Haribara, Koji Igarashi, Tomohiro Sonobe, Shuhei Tamate, Toshimori Honjo, Alireza
491
+ Marandi, Peter L McMahon, Takeshi Umeki, Koji Enbutsu, et al. A coherent ising machine for 2000-node
492
+ optimization problems. Science, 354(6312):603–606, 2016.
493
+ [22] Peter L McMahon, Alireza Marandi, Yoshitaka Haribara, Ryan Hamerly, Carsten Langrock, Shuhei Tamate,
494
+ Takahiro Inagaki, Hiroki Takesue, Shoko Utsunomiya, Kazuyuki Aihara, et al. A fully programmable 100-spin
495
+ coherent ising machine with all-to-all connections. Science, 354(6312):614–617, 2016.
496
+ [23] Ryan Hamerly, Takahiro Inagaki, Peter L McMahon, Davide Venturelli, Alireza Marandi, Tatsuhiro Onodera,
497
+ Edwin Ng, Carsten Langrock, Kensuke Inaba, Toshimori Honjo, et al. Experimental investigation of performance
498
+ differences between coherent ising machines and a quantum annealer. Science advances, 5(5):eaau0823, 2019.
499
+ [24] Toshimori Honjo, Tomohiro Sonobe, Kensuke Inaba, Takahiro Inagaki, Takuya Ikuta, Yasuhiro Yamada, Takushi
500
+ Kazama, Koji Enbutsu, Takeshi Umeki, Ryoichi Kasahara, et al. 100,000-spin coherent ising machine. Science
501
+ advances, 7(40):eabh0952, 2021.
502
+ [25] Chihiro Yoshimura, Masato Hayashi, Takuya Okuyama, and Masanao Yamaoka. Implementation and evaluation
503
+ of fpga-based annealing processor for ising model by use of resource sharing. International Journal of Networking
504
+ and Computing, 7(2):154–172, 2017.
505
+ [26] Igor Gershenzon, Geva Arwas, Sagie Gadasi, Chene Tradonsky, Asher Friesem, Oren Raz, and Nir Davidson. Exact
506
+ mapping between a laser network loss rate and the classical xy hamiltonian by laser loss control. Nanophotonics,
507
+ 9(13):4117–4126, 2020.
508
+ [27] Peter L McMahon, Alireza Marandi, Yoshitaka Haribara, Ryan Hamerly, Carsten Langrock, Shuhei Tamate,
509
+ Takahiro Inagaki, Hiroki Takesue, Shoko Utsunomiya, Kazuyuki Aihara, et al. A fully programmable 100-spin
510
+ coherent ising machine with all-to-all connections. Science, 354(6312):614–617, 2016.
511
+ [28] D Pierangeli, G Marcucci, and C Conti. Large-scale photonic ising machine by spatial light modulation. Physical
512
+ review letters, 122(21):213902, 2019.
513
+ 7
514
+
515
+ [29] Davide Pierangeli, Giulia Marcucci, Daniel Brunner, and Claudio Conti. Noise-enhanced spatial-photonic ising
516
+ machine. Nanophotonics, 9(13):4109–4116, 2020.
517
+ [30] Davide Pierangeli, Giulia Marcucci, and Claudio Conti. Adiabatic evolution on a spatial-photonic ising machine.
518
+ Optica, 7(11):1535–1543, 2020.
519
+ [31] Yisheng Fang, Junyi Huang, and Zhichao Ruan. Experimental observation of phase transitions in spatial photonic
520
+ ising machine. Physical Review Letters, 127(4):043902, 2021.
521
+ [32] Wenchen Sun, Wenjia Zhang, Yuanyuan Liu, Qingwen Liu, and Zuyuan He. Quadrature photonic spatial ising
522
+ machine. Optics Letters, 47(6):1498–1501, 2022.
523
+ [33] Jiayi Ouyang, Yuxuan Liao, Zhiyao Ma, Deyang Kong, Xue Feng, Xiang Zhang, Xiaowen Dong, Kaiyu Cui,
524
+ Fang Liu, Wei Zhang, et al. An on-demand photonic ising machine with simplified hamiltonian calculation by
525
+ phase-encoding and intensity detection. arXiv preprint arXiv:2207.05072, 2022.
526
+ [34] Charles Roques-Carmes, Yichen Shen, Cristian Zanoci, Mihika Prabhu, Fadi Atieh, Li Jing, Tena Dubˇcek, Chenkai
527
+ Mao, Miles R Johnson, Vladimir ˇCeperi´c, et al. Heuristic recurrent algorithms for photonic ising machines. Nature
528
+ communications, 11(1):1–8, 2020.
529
+ [35] Mihika Prabhu, Charles Roques-Carmes, Yichen Shen, Nicholas Harris, Li Jing, Jacques Carolan, Ryan Hamerly,
530
+ Tom Baehr-Jones, Michael Hochberg, Vladimir ˇCeperi´c, et al. Accelerating recurrent ising machines in photonic
531
+ integrated circuits. Optica, 7(5):551–558, 2020.
532
+ [36] Suryendy Dutta, Abhishek Khanna, AS Assoa, Hanjong Paik, Darrell G Schlom, Zoltán Toroczkai, Arijit
533
+ Raychowdhury, and Suman Datta. An ising hamiltonian solver based on coupled stochastic phase-transition
534
+ nano-oscillators. Nature Electronics, 4(7):502–512, 2021.
535
+ [37] Ernst Ising. Beitrag zur theorie des ferro-und paramagnetismus. PhD thesis, Grefe & Tiedemann, 1924.
536
+ [38] Santosh Kumar, He Zhang, and Yu-Ping Huang. Large-scale ising emulation with four body interaction and
537
+ all-to-all connections. Communications Physics, 3(1):1–9, 2020.
538
+ [39] Wenchen Sun. In OFC 2022, page M2G.4, 2022.
539
+ [40] Michel X Goemans and David P Williamson.
540
+ Improved approximation algorithms for maximum cut and
541
+ satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145, 1995.
542
+ [41] Michael R Garey. A guide to the theory of np-completeness. Computers and intractability, 1979.
543
+ [42] Sera Kahruman, Elif Kolotoglu, Sergiy Butenko, and Illya V Hicks. On greedy construction heuristics for the
544
+ max-cut problem. International Journal of Computational Science and Engineering, 3(3):211–218, 2007.
545
+ [43] Takuya Okuyama, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Masanao Yamaoka. Binary optimization by
546
+ momentum annealing. Physical Review E, 100(1):012111, 2019.
547
+ [44] Kosuke Tatsumura, Masaya Yamasaki, and Hayato Goto. Scaling out ising machines using a multi-chip architecture
548
+ for simulated bifurcation. Nature Electronics, 4(3):208–217, 2021.
549
+ [45] Qizhuang Cen, Hao Ding, Tengfei Hao, Shanhong Guan, Zhiqiang Qin, Jiaming Lyu, Wei Li, Ninghua Zhu, Kun
550
+ Xu, Yitang Dai, et al. Large-scale coherent ising machine based on optoelectronic parametric oscillator. Light:
551
+ Science & Applications, 11(1):1–10, 2022.
552
+ [46] D-Wave Systems Inc.
553
+ Hybrid solvers for quadratic optimization.
554
+ https://www.dwavesys.com/media/
555
+ soxph512/hybrid-solvers-for-quadratic-optimization.pdf, April 2022.
556
+ 8
557
+
8NE3T4oBgHgl3EQfqQrl/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,397 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf,len=396
2
+ page_content='PHOTONIC SPATIAL-EULER ISING MACHINE FOR SOLVING 20000-NODE MAX-CUT PROBLEM ∗ Xin Ye, Wenjia Zhang, Shaomeng Wang, Xiaoxuan Yang, Zuyuan He State Key Laboratory of Advanced Optical Communication Systems and Networks Shanghai Jiao Tong University Shanghai 200240, China {Wenjia Zhang}wenjia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
3
+ page_content='zhang@sjtu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
4
+ page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
5
+ page_content='cn ABSTRACT To tackle challenging combinatorial optimization problems, analog computing machines based on the nature-inspired Ising model are attracting increasing attentions in order to disruptively overcome the impending limitations on conventional electronic computers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
6
+ page_content=' Photonic spatial Ising machine has become an unique and primitive solution with all-to-all connections to solve large-scale Max-cut problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
7
+ page_content=' However, spin configuration and flipping requires two independent sets of spatial light modulators (SLMs) for amplitude and phase modulation, which will lead to tremendous engineering difficulty of optical alignment and coupling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
8
+ page_content=' We report a novel quadrature photonic spatial-Euler Ising machine to realize large-scale and flexible spin-interaction configuration and spin-flip in a single spatial light modulator, and develop a noise enhancement approach by adding digital white noise onto detected optical signals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
9
+ page_content=' We experimentally show that such proposal accelerates solving (un)weighted, (non)fully connected, 20736-node Max-cut problems, which offers obvious advantages over simulation and heuristic algorithm results in digital computers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
10
+ page_content=' 1 Introduction Complex systems related research has progressed at a rapid pace due to high-throughput data acquisition techniques [1, 2, 3].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
11
+ page_content=' Contrarily, comprehensive processing and optimization of big data with complex structures and correlations is a prerequisite for the vast applications and spectacular advancement in bioinformatics [4, 5], pharmaceutical medicine [6, 7], finance [8, 9], cryptography [10, 11], and artificial intelligence (AI) [12, 13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
12
+ page_content=' Therefore, powerful mathematical models and hardware processors are critically utilised to analyse high-dimensional data sets and complex systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
13
+ page_content=' The Ising model, depicting Markov chains of interacting binary units, is a typical model used to study complex systems [14, 15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
14
+ page_content=' Various artificial Ising machines developed based on this model accelerate conventional electronic computers in performing optimization tasks involving non-deterministic polynomial time (NP)-hard problems and combinatorial optimisation tasks, such as the Max-cut, protein folding, number partition and travelling salesman problem(TSP) [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
15
+ page_content=' Among these Ising solutions, the photonic Ising machine, by leveraging light interference to emulate spin interaction in ferromagnets, offers substantial benefits of high connectivity and speed in ground state search [37].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
16
+ page_content=' Recently, there propose various innovative photonic constructions for Ising model, such as optical coherent Ising machines (CIM) [19, 24, 20, 21, 22, 23], photonic recurrent Ising sampler (PRIS) [34, 35], and spatial photonic Ising machines (SPIM) [28, 29, 30, 31, 32, 33, 38].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
17
+ page_content=' These proposals, originated from Ising model given by H = − � <l,k> Jl,kxlxk where Jl,k is the interaction between spins and spin binary state xl ∈ {1, −1}, are designed to search for ground state of Ising model with the minimum Hamiltonian by either iterative sampling or directly evolving the ensemble energy regarding the established mapping of a particular combinatorial problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
18
+ page_content=' Although the coherent Ising machines performs comparably to the quantum annealing, it lacks the advantages of parallel processing in optical computing since it requires an extremely long fiber cavity to simulate spins through temporal multiplexing [19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
19
+ page_content=' Chip-level photonic Ising samplers are embedded with specialised heuristic method to provide sample solutions to the ground state of Ising ∗ arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
20
+ page_content='04651v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
21
+ page_content='ET] 11 Jan 2023 Figure 1: Architecture of the quadrature photonic spatial-Euler Ising machine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
22
+ page_content=' (a)The schematic and principle of Euler-SIM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
23
+ page_content=' (b) Images of the initial and target intensity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
24
+ page_content=' The white bar corresponds to the length of 20 µm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
25
+ page_content=' (c) Initial and final phase masks encoding on SLM in one experiment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
26
+ page_content=' models, but currently fail to scale up [34] and heuristic algorithms are difficult to converge into optimum point for a large-scale problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
27
+ page_content=' In contrast, spatial photonic Ising machines encoding the spins as a phase matrix in spatial light modulators (SLMs), can implement spin scales up to tens of thousands [28, 31].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
28
+ page_content=' This approach, using spatial Fourier transformation as basic building block, can be expressed by H = − � <l,k> εlεkxlxk, which indicates that the interaction coefficient Jl,k is set by the amplitude modulation εl and εk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
29
+ page_content=' This scheme is compatible with an Ising model with fully connected interactions (or an equivalent quadratic unconstrained binary optimization (QUBO) problem) due to its high connectivity and scalability [31].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
30
+ page_content=' However, the proposed Ising machine still need external spatial amplitude modulator and thereby spin configuration and flipping will require two independent sets of spatial light modulators (SLMs) for amplitude and phase modulation, which will lead to tremendous engineering difficulty of optical alignment and coupling [31].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
31
+ page_content=' In our previous work, we proposed quadrature spatial Ising machine to provide flexibility for interaction configuration by introducing spatial spins interference with quadrature phase design [32, 39].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
32
+ page_content=' However, the proposed Ising machine still need external spatial amplitude modulator and thereby spin configuration and flipping will require two independent sets of spatial light modulators (SLMs) for amplitude and phase modulation, which will lead to tremendous engineering difficulty of optical alignment and coupling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
33
+ page_content=' In this paper, we propose a novel quadrature photonic spatial-Euler Ising machine (Euler-SIM) where intensity modulation is performed based on Euler’s Formula by extending quadrature phase configuration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
34
+ page_content=' To estimate the performance of Euler-SIM, we conduct experiments and simulations on the Max-cut problem with over 20000 nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
35
+ page_content=' The max cut value in experiment is improved by 32% over simulation results and 34% over Sahni-Gonzales (SG) algorithm with a hundredfold speedup.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
36
+ page_content=' The results demonstrate the superiority of our structure in terms of result yield and speed of solving NP-hard problems beyond the traditional von Neumann processor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
37
+ page_content=' Furthermore, we also investigate noise enhancement approach through experiments, finding that up to 8% performance enhancement by adding external Gaussian white noise on the detected optical amplitude.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
38
+ page_content=' 2 Principle of quadrature photonic spatial-Euler Ising machine Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
39
+ page_content='1(a) shows the architecture design of Euler-SIM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
40
+ page_content=' An extended coherent light source shines on the SLM screen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
41
+ page_content=' The phase mask of SLM is configured by four parts to encode both the interaction coefficients and the spin states.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
42
+ page_content=' In this case, a spin with amplitude information will consist of four parts ei(φl−αl),ei(θl−βl),ei(φl+αl),ei(θl+βl).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
43
+ page_content=' On the one hand, the spin state xl is encoded by the modulated phase φl ∈ {0, π}, and the corresponding yl is encoded by the quadrature phase θl ∈ { π 2 , 3π 2 }.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
44
+ page_content=' There satisfies a specific transformation relation between y and x determined by the interaction matrix, y = Ax [32].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
45
+ page_content=' On the other hand, arbitrary amplitudes scaled down to the range (−1, 1) can be converted into phase.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
46
+ page_content=' According to the corollary to Euler’s Formula, the cosine functions can be interpreted as weighted sums of the exponential functions,as cos αl = ℜ(eiαl) = eiαl + e−iαl 2 (1) 2 Initial I(u,v) Initial Phase Mask ei(t-βi) 20μm ei(pi+an) ntyi Target I(u,v) Final Phase Mask H= (EIEXiXkNnyiyk) ,k 1x13 15 20μm feedbackFigure 2: Experimental and simulation results of Max-cut problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
47
+ page_content=' (a) Graph division of a 100-node Max-cut problem obtained by Euler-SIM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
48
+ page_content=' (b) Graph division of a 100-node Max-cut problem obtained with the SG algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
49
+ page_content=' (c) Experimental searching for max cut value of 20736 nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
50
+ page_content=' (d) Simulated searching for max cut value of 20736 nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
51
+ page_content=' (e) Experimental and simulation results for Max-cut problems with graph densities of [0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
52
+ page_content='5, 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
53
+ page_content='0].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
54
+ page_content=' Thus,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
55
+ page_content=' phase and amplitude information can then be encoded simultaneously according to extra phases αl,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
56
+ page_content=' as εlxl = 1 2[ei(φl−αl) + ei(φl+αl)] (2) The modulated wave is passed through a lens to achieve spatial Fourier Transform and result in a superimposed effect at the centre intensity I(0,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
57
+ page_content=' 0) = (xT ε + yT η)(εT x + ηT y) (3) which followed an opposite trend to the corresponding Hamiltonian H = − � <l,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
58
+ page_content='k> (εlεkxlxk ± ηlηkylyk) (4) Therefore,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
59
+ page_content=' we can search for the ground state of Ising model by maximising the central light intensity during the experiment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
60
+ page_content=' Based on the above architecture design, we construct the experimental setup.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
61
+ page_content=' An incident beam with λ = 632.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
62
+ page_content='8nm is injected into the beam expander with a rectangular aperture to produce a 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
63
+ page_content='5 × 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
64
+ page_content='1 mm2 rectangular light spot, which completely covers the phase-only reflective SLM (HOLOEYE LETO-3-CFS-127) plane to activate all the 1920×1080 pixels with a pixel pitch of 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
65
+ page_content='4 µm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
66
+ page_content=' Here, a full-coverage spot is essential to maximise pixel utilisation while minimizing modulation-related pixel alignment issues.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
67
+ page_content=' In the rear, a lens (focal length f = 150 mm) is arranged to perform a two-dimensional Fourier transform on the beam with modulated wavefront.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
68
+ page_content=' Finally, we probe the field intensity by the charge-coupled device (CCD) camera on the back focal plane of the lens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
69
+ page_content=' The 2758×2208 sensor in the CCD (QSI 600) has 800 kHz read rate and 16 bit digital resolution, providing extremely high resolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
70
+ page_content=' The intensity images are loaded into the central processing unit (CPU) for subsequent calculations in the electrical domain and feed back.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
71
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
72
+ page_content='1(b) 3 (a) (b) (c) ×104 ×108 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
73
+ page_content='285 2 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
74
+ page_content='8 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
75
+ page_content='28 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
76
+ page_content='6 Distance Euclidean Distance 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
77
+ page_content='4 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
78
+ page_content='275 Cut Value 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
79
+ page_content='2 Value Euclidean J 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
80
+ page_content='27 1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
81
+ page_content='8 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
82
+ page_content='265 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
83
+ page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
84
+ page_content='4 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
85
+ page_content='26 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
86
+ page_content='2 Euler-SIM SG algorithm 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
87
+ page_content='255 0 (100 nodes) (100 nodes) 1 11 21 31 41 51 61 71 81 91 Iteration (d) (e) ×108 ×108 ×108 5 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
88
+ page_content='2 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
89
+ page_content='5 2 4 Hamiltonian 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
90
+ page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
91
+ page_content='8 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
92
+ page_content='5 Hamiltonian Cut Value Value, 3 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
93
+ page_content='5 Cut 2 Experimental results 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
94
+ page_content='4 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
95
+ page_content='5 Simulated results 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
96
+ page_content='5 1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
97
+ page_content='2 SG algorithm 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
98
+ page_content='5 0 0 0 1 11 21 31 41 51 61 71 81 91 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
99
+ page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
100
+ page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
101
+ page_content='7 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
102
+ page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
103
+ page_content='9 1 Iteration Graph DensityFigure 3: Experimental and simulation results of Max-cut problem with added digital noise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
104
+ page_content=' (a) Initial image acquired by CCD.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
105
+ page_content=' (b) Gaussian white noise matrices with noise level of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
106
+ page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
107
+ page_content=' (c) Polluted image for computation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
108
+ page_content=' (d) Simulated results of Max-cut problems with added digital noise of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
109
+ page_content='02 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
110
+ page_content='08.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
111
+ page_content=' The black dashed line represents the unnoised results as a reference line.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
112
+ page_content=' (e) Experimental results for Max-cut problems with added digital noise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
113
+ page_content=' The ideal noise levels related to different graph densities are marked at the top of the orange bars.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
114
+ page_content=' exhibits the images of the initial detected intensityI and target intensityIT , which is focused by a uniform beam without any modulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
115
+ page_content=' Here, we calculate the Euclidean distance ∥IT − I∥2 as a cost function of the simulated annealing (SA) algorithm, thus generating a new phase mask to refresh SLM screen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
116
+ page_content=' And the initial and final phase masks describing the spin states are illustrated in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
117
+ page_content='1(c).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
118
+ page_content=' This procedure is continuously cycled to govern the Hamiltonian evolution until the system stabilises to the ground state.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
119
+ page_content=' 3 Experiments and Discussion 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
120
+ page_content='1 Experimental performances and numerical simulations The Max-cut problem, requiring to find the cut of the given graph into two subsets with the maximum value of their connecting weighted edges, can be formulated into an equivalent Ising model without local fields [40].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
121
+ page_content=' An unweighted and all-to-all connection max-cut problem make it easier by assuming Jl,k taking values of ±1, whereas many NP problems can only be converted into weighted sparse max-cut problems for solution [41].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
122
+ page_content=' Our proposed scheme perfectly implements the mapping of the latter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
123
+ page_content=' For each cut, the cut value is denoted as W = 1 2 � <l,k> wl,k(1 − xlxk) (5) where wl,k is the weight between the l-th vertex and the k-th vertex.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
124
+ page_content=' The related Hamiltonian we use is H = � <l,k> wl,kxlxk and the weight can be expressed as wl,k = cos αl cos αk ± cos βl cos βk (6) Thus, we can maximise the cut value by looking for the minimum Hamiltonian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
125
+ page_content=' Given that it is too complex to be solved precisely with a large scale problem as the exact solvers generally fail with 1000 vertexes [34, 42].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
126
+ page_content=' Before the experiments we need to perform a reference calculation on the conventional electrical computing platform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
127
+ page_content=' Usually, the Goemans-Williamson SDP (GW-SDP) algorithm is one of the most popular methods to solve the Max-cut problem with a guarantee of solution quality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
128
+ page_content=' However, it fails to solve large-scale problems owing to the inordinately long time consumption [43].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
129
+ page_content=' Therefore, for large instances, we choose to employ another classical greedy heuristic algorithm called the Sahni-Gonzales (SG) method, which is known to find approximate solutions to large Max-cut problems in polynomial time, comparable to the GW-SDP [24, 36].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
130
+ page_content=' Using this method, a set of 4 (a) (b) (c) (e) ×108 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
131
+ page_content='85 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
132
+ page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
133
+ page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
134
+ page_content='6 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
135
+ page_content='8 Spontaneous Noise Only 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
136
+ page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
137
+ page_content='07 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
138
+ page_content='2 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
139
+ page_content='75 Added Digital Noise (d) ×108 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
140
+ page_content='4 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
141
+ page_content='7 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
142
+ page_content='02 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
143
+ page_content='04 0 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
144
+ page_content='35 lue 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
145
+ page_content='65 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
146
+ page_content='03 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
147
+ page_content='04 Val 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
148
+ page_content='3 Value 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
149
+ page_content='06 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
150
+ page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
151
+ page_content='06 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
152
+ page_content='25 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
153
+ page_content='55 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
154
+ page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
155
+ page_content='07 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
156
+ page_content='2 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
157
+ page_content='5 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
158
+ page_content='15 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
159
+ page_content='45 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
160
+ page_content='1 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
161
+ page_content='4 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
162
+ page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
163
+ page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
164
+ page_content='7 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
165
+ page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
166
+ page_content='9 1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
167
+ page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
168
+ page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
169
+ page_content='7 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
170
+ page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
171
+ page_content='9 1 Graph Density Graph DensityTable 1: Performance comparison between Euler-SIM and other Ising machines for solving Max-cut problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
172
+ page_content=' Ising machine Implementation Problem Type Problem Scale Time to Resolution 8-FPGA SB machine [44] Easy All-to-all,Weighted 16,384-node 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
173
+ page_content='2 ms PRIS [34] Easy All-to-all,Unweighted 100-node 63 ns per-step CIM with DOPO [24] Very hard All-to-all,Unweighted 100000-node 785 µs CIM with OEPO [45] Hard Sparse,Unweighted 56-node 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
174
+ page_content='5 µs D-wave 2000Q [46, 36] Very hard Sparse,Weighted 2500-node > 104 s (for 55 nodes) Euler-SIM Easy All-to-all,Weighted 20000-node 325 s 20736-node Max-cut problems with varies graph densities is implemented on CPUs (Intel i9-13900K, 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
175
+ page_content='8 GHz) to derive the max cut values, consuming 11 hours on average.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
176
+ page_content=' The division of 20736 points is too convoluted to be plotted.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
177
+ page_content=' We present the division of the 100-node Max-cut problem solved by the Euler-SIM in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
178
+ page_content=' 2 (a) and the SG algorithm in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
179
+ page_content=' 2 (b), respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
180
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
181
+ page_content=' 2 (c) plots the results of five experiments on the weighted Max-cut problem with 20736 fully connected nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
182
+ page_content=' During the 100 iterations, the cut value increases as the Euclidean distance decreases and stabilizes at around 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
183
+ page_content='759 × 108, with a 122 times speedup compared to the SG algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
184
+ page_content=' Furthermore, we carried out five simulations of the same problem in MATLAB to approximate the operation of the photonic Ising machine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
185
+ page_content=' As shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
186
+ page_content=' 2 (d), the cut value remarkably increases to 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
187
+ page_content='076 × 108 as the Hamiltonian of the Ising model converges rapidly.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
188
+ page_content=' An interesting finding is that the simulation results are inferior to the experimental results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
189
+ page_content=' Finally, we extended our experiments and simulations for the Max-cut problem with graph densities of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
190
+ page_content='5-1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
191
+ page_content='0 compared with the SG algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
192
+ page_content=' The results are shown as Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
193
+ page_content=' 2 (e), which statistically demonstrates that our Euler-SIM offers compelling advantages for handling large-scale Max-cut problem that outweighs electronic computers, in comparison with both simulation results and SG algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
194
+ page_content=' The experimental max cut values exceed the SG algorithm by an average of 34% and achieve a maximum of 49% with graph density of 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
195
+ page_content='0, which precisely captures the inherent advantage of fully connected systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
196
+ page_content=' Additionally, the experimental results routinely outperform the simulated results by roughly 32%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
197
+ page_content=' The reasons for this occurrence will be discussed in the next section.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
198
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
199
+ page_content='2 Noise enhancement approach The detection susceptible to noise may cause some uncertainty in experiments and we speculate that it is the discrepancy that makes it easier to jump out of the local optimum and fit better with the SA algorithm, resulting in a better solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
200
+ page_content=' In fact, several related works have reported that noise-accelerated or noise-enhanced photonic Ising machines can be used to solve large-scale combinatorial optimization problems [29, 35].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
201
+ page_content=' Considering that the spontaneous noise of the system is challenging to gauge, we develop a noise enhancement approach by adding digital white noise onto detected optical signals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
202
+ page_content=' More specifically, we generate a group of white noise matrices with different variances and add to the CCD acquired images (after normalization) separately to provide a group of polluted images for computation and feedback, as shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
203
+ page_content=' 3(a)-(c).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
204
+ page_content=' Since large noise can blur the image and prevent the algorithm from converging, we pre-simulate to clarify a suitable range of noise levels, defined as variance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
205
+ page_content=' The noise level is eventually lowered to less than 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
206
+ page_content='1, ensuring the correct execution of the algorithm to obtain a feasible solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
207
+ page_content=' And the simulated results in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
208
+ page_content=' 3(d) show that Gaussian noise with a noise level of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
209
+ page_content='02 to 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
210
+ page_content='03 may enhance the outcomes, which will be more striking for higher graph densities, up to 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
211
+ page_content='7%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
212
+ page_content=' In subsequent experiments shown as Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
213
+ page_content=' 3(e), we find that the added digital noise do improve the experimental results with an average increase of 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
214
+ page_content='9%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
215
+ page_content=' Similar to the numerical simulation results, a more significant improvement still appears in the larger graph density, up to 8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
216
+ page_content='6%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
217
+ page_content=' Note that the ideal noise level fluctuates with the graph density rather than in a fixed threshold, which is different from numerical simulations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
218
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
219
+ page_content='3 Discussion on the Euler-SIM performance We also compare the performance of the Euler-SIM in solving Max-Cut problems to other Ising machines in Table 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
220
+ page_content=' We evaluated relevant metrics, such as implementation, problem type and scale, time to resolution (or speed) and obtained the following conclusions: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
221
+ page_content=' Efficiently solving large-scale Max-cut problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
222
+ page_content=' Compared to most solutions [44, 34, 45, 46], we comfortably solve Max-cut problems with size over 20,000, approaching the highest reported record so far [24].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
223
+ page_content=' In fact, we take the adjacent 10×10 pixels as an operation unit for the same encoding to ensure the consistency of the 5 Ising system in our experiments, thus do not maximise the use of all pixel points.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
224
+ page_content=' With further optimisation of the alignment and detection capabilities, it is feasible to scale up the problem hundredfold.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
225
+ page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
226
+ page_content=' Flexible mapping of (non)fully connected Max-cut problems with arbitrary amplitude.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
227
+ page_content=' Considering experi- mental setups, many schemes prefer to demonstrate the process of solving the benchmark sparse unweighted Max-cut problem [34, 24, 45].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
228
+ page_content=' Obviously, being unweighted reduces the complexity, and fully connected problems are of more practical value and harder to implement than sparse ones [44].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
229
+ page_content=' As a result, many designs take great efforts to achieve fully connection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
230
+ page_content=' Quantum annealers sacrifice scale, and CIMs also address this deficiency by various schemes [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
231
+ page_content=' And achieving weighted is even harder.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
232
+ page_content=' However, it is where SIM excel and our design further magnifies this advantage by liberal switching between fully and non-fully connected, weighted and unweighted problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
233
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
234
+ page_content=' Simple and cost-effective experimental construction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
235
+ page_content=' Large power consumption and high costs are required by quantum annealers because of the cryogenic environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
236
+ page_content=' Even CIMs impose rigorous experimental requirements.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
237
+ page_content=' Fiber oscillators of tens of kilometres are applied to keep optical loss and optical gain within thresholds and thus guarantee spin-to-spin coupling, bringing fairly large roundtrip loss [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
238
+ page_content=' In contrast, our approach based on a simple SLM is superior in terms of experimental cost and manoeuvrability.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
239
+ page_content=' Despite the fact that different Ising machines demonstrate their respective attractions in tackling Max-cut problems,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
240
+ page_content=' such as ultra-large scale [24],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
241
+ page_content=' ultra-high speed [34],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
242
+ page_content=' high stability [45],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
243
+ page_content=' and arbitrary Max-cut problem mapping [44,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
244
+ page_content=' 34],' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
245
+ page_content=' our design,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
246
+ page_content=' by adopting a more economical experimental architecture,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
247
+ page_content=' achieves the magnitude adjacent to the largest scale and free mapping of (non)fully connected,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
248
+ page_content=' (un)weighted Max-cut problems,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
249
+ page_content=' which has greater practical implications for solving NP-hard problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
250
+ page_content=' Although the design leaves much to be desired in terms of computational speed, which is constrained by the optoelectronic transmission of data and the refresh frequency of the SLM, it still exhibits speed advantages over electrical computation and even quantum annealing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
251
+ page_content=' 4 Conclusion In summary, our proposed Euler-SIM utilises Euler’s Formula to achieve amplitude-phase integrated modulation and solves the Max-cut problem with 20,736 nodes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
252
+ page_content=' The experimental results present around 32% max cut value improvement over simulations and 34% over SG algorithm running on the electronic computer, validating better optimization performance and fast speed of the optical computing paradigm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
253
+ page_content=' Additionally, it is also a noise-friendly Ising machine that not only exhibits a large tolerance for system noise, but even rationalizes noise as a potential boost to system performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
254
+ page_content=' Thus, this Euler-SIM will become a large-scale optical stochastic computing architecture for solving optimization problems of various complex systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
255
+ page_content=' Acknowledgments This work is supported by the National Key Research and Development Program of China under Grant 2019YFB1802903, National Natural Science Foundation of China under Grant 62175146 and 62235011 and Major Key Project of PCL (PCL2021A14).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
256
+ page_content=' References [1] Giorgio Parisi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
257
+ page_content=' Complex systems: a physicist’s viewpoint.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
258
+ page_content=' arXiv preprint cond-mat/0205297, 2002.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
259
+ page_content=' [2] Marc Mézard, Giorgio Parisi, and Miguel Angel Virasoro.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
260
+ page_content=' Spin glass theory and beyond: An Introduction to the Replica Method and Its Applications, volume 9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
261
+ page_content=' World Scientific Publishing Company, 1987.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
262
+ page_content=' [3] Miguel Aguilera, S Amin Moosavi, and Hideaki Shimazaki.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
263
+ page_content=' A unifying framework for mean-field theories of asymmetric kinetic ising systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
264
+ page_content=' Nature communications, 12(1):1–12, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
265
+ page_content=' [4] Qinghua Liu, Liman Wang, Anthony G Frutos, Anne E Condon, Robert M Corn, and Lloyd M Smith.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
266
+ page_content=' Dna computing on surfaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
267
+ page_content=' Nature, 403(6766):175–179, 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
268
+ page_content=' [5] Andrea Degasperi, Dirk Fey, and Boris N Kholodenko.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
269
+ page_content=' Performance of objective functions and optimisation procedures for parameter estimation in system biology models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
270
+ page_content=' NPJ systems biology and applications, 3(1):1–9, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
271
+ page_content=' [6] Lars-Petter Granan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
272
+ page_content=' The ising model applied on chronification of pain.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
273
+ page_content=' Pain Medicine, 17(1):5–9, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
274
+ page_content=' [7] Douglas B Kitchen, Hélène Decornez, John R Furr, and Jürgen Bajorath.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
275
+ page_content=' Docking and scoring in virtual screening for drug discovery: methods and applications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
276
+ page_content=' Nature reviews Drug discovery, 3(11):935–949, 2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
277
+ page_content=' 6 [8] Rosario N Mantegna and H Eugene Stanley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
278
+ page_content=' Introduction to econophysics: correlations and complexity in finance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
279
+ page_content=' Cambridge university press, 1999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
280
+ page_content=' [9] Stan Van Hoesel and Rudolf Müller.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
281
+ page_content=' Optimization in electronic markets: examples in combinatorial auctions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
282
+ page_content=' Netnomics, 3(1):23–33, 2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
283
+ page_content=' [10] Baonan Wang, Feng Hu, Haonan Yao, and Chao Wang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
284
+ page_content=' Prime factorization algorithm based on parameter optimization of ising model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
285
+ page_content=' Scientific reports, 10(1):1–10, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
286
+ page_content=' [11] Andrew D King, Juan Carrasquilla, Jack Raymond, Isil Ozfidan, Evgeny Andriyash, Andrew Berkley, Mauricio Reis, Trevor Lanting, Richard Harris, Fabio Altomare, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
287
+ page_content=' Observation of topological phenomena in a programmable lattice of 1,800 qubits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
288
+ page_content=' Nature, 560(7719):456–460, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
289
+ page_content=' [12] Masanao Yamaoka, Chihiro Yoshimura, Masato Hayashi, Takuya Okuyama, Hidetaka Aoki, and Hiroyuki Mizuno.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
290
+ page_content=' Ising computer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
291
+ page_content=' Hitachi Review, 65(6):157, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
292
+ page_content=' [13] Qiang Zhang, Dandan Deng, Wenting Dai, Jixin Li, and Xinwen Jin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
293
+ page_content=' Optimization of culture conditions for differentiation of melon based on artificial neural network and genetic algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
294
+ page_content=' Scientific Reports, 10(1):1–8, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
295
+ page_content=' [14] Andrew Lucas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
296
+ page_content=' Ising formulations of many np problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
297
+ page_content=' Frontiers in physics, page 5, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
298
+ page_content=' [15] Naeimeh Mohseni, Peter L McMahon, and Tim Byrnes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
299
+ page_content=' Ising machines as hardware solvers of combinatorial optimization problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
300
+ page_content=' Nature Reviews Physics, 4(6):363–379, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
301
+ page_content=' [16] Kihwan Kim, M-S Chang, Simcha Korenblit, Rajibul Islam, Emily E Edwards, James K Freericks, G-D Lin, L-M Duan, and Christopher Monroe.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
302
+ page_content=' Quantum simulation of frustrated ising spins with trapped ions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
303
+ page_content=' Nature, 465(7298):590–593, 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
304
+ page_content=' [17] Colin D Bruzewicz, John Chiaverini, Robert McConnell, and Jeremy M Sage.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
305
+ page_content=' Trapped-ion quantum computing: Progress and challenges.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
306
+ page_content=' Applied Physics Reviews, 6(2):021314, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
307
+ page_content=' [18] DL—Bixby Applegate and V RE—Chvátal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
308
+ page_content=' Cook, wj: The traveling salesman problem: A computational study, 2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
309
+ page_content=' [19] Fuxi Cai, Suhas Kumar, Thomas Van Vaerenbergh, Xia Sheng, Rui Liu, Can Li, Zhan Liu, Martin Foltin, Shimeng Yu, Qiangfei Xia, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
310
+ page_content=' Power-efficient combinatorial optimization using intrinsic noise in memristor hopfield neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
311
+ page_content=' Nature Electronics, 3(7):409–418, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
312
+ page_content=' [20] Masoud Babaeian, Dan T Nguyen, Veysi Demir, Mehmetcan Akbulut, Pierre-A Blanche, Yushi Kaneda, Saikat Guha, Mark A Neifeld, and N Peyghambarian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
313
+ page_content=' A single shot coherent ising machine based on a network of injection-locked multicore fiber lasers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
314
+ page_content=' Nature communications, 10(1):1–11, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
315
+ page_content=' [21] Takahiro Inagaki, Yoshitaka Haribara, Koji Igarashi, Tomohiro Sonobe, Shuhei Tamate, Toshimori Honjo, Alireza Marandi, Peter L McMahon, Takeshi Umeki, Koji Enbutsu, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
316
+ page_content=' A coherent ising machine for 2000-node optimization problems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
317
+ page_content=' Science, 354(6312):603–606, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
318
+ page_content=' [22] Peter L McMahon, Alireza Marandi, Yoshitaka Haribara, Ryan Hamerly, Carsten Langrock, Shuhei Tamate, Takahiro Inagaki, Hiroki Takesue, Shoko Utsunomiya, Kazuyuki Aihara, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
319
+ page_content=' A fully programmable 100-spin coherent ising machine with all-to-all connections.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
320
+ page_content=' Science, 354(6312):614–617, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
321
+ page_content=' [23] Ryan Hamerly, Takahiro Inagaki, Peter L McMahon, Davide Venturelli, Alireza Marandi, Tatsuhiro Onodera, Edwin Ng, Carsten Langrock, Kensuke Inaba, Toshimori Honjo, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
322
+ page_content=' Experimental investigation of performance differences between coherent ising machines and a quantum annealer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
323
+ page_content=' Science advances, 5(5):eaau0823, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
324
+ page_content=' [24] Toshimori Honjo, Tomohiro Sonobe, Kensuke Inaba, Takahiro Inagaki, Takuya Ikuta, Yasuhiro Yamada, Takushi Kazama, Koji Enbutsu, Takeshi Umeki, Ryoichi Kasahara, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
325
+ page_content=' 100,000-spin coherent ising machine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
326
+ page_content=' Science advances, 7(40):eabh0952, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
327
+ page_content=' [25] Chihiro Yoshimura, Masato Hayashi, Takuya Okuyama, and Masanao Yamaoka.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
328
+ page_content=' Implementation and evaluation of fpga-based annealing processor for ising model by use of resource sharing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
329
+ page_content=' International Journal of Networking and Computing, 7(2):154–172, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
330
+ page_content=' [26] Igor Gershenzon, Geva Arwas, Sagie Gadasi, Chene Tradonsky, Asher Friesem, Oren Raz, and Nir Davidson.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
331
+ page_content=' Exact mapping between a laser network loss rate and the classical xy hamiltonian by laser loss control.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
332
+ page_content=' Nanophotonics, 9(13):4117–4126, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
333
+ page_content=' [27] Peter L McMahon, Alireza Marandi, Yoshitaka Haribara, Ryan Hamerly, Carsten Langrock, Shuhei Tamate, Takahiro Inagaki, Hiroki Takesue, Shoko Utsunomiya, Kazuyuki Aihara, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
334
+ page_content=' A fully programmable 100-spin coherent ising machine with all-to-all connections.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
335
+ page_content=' Science, 354(6312):614–617, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
336
+ page_content=' [28] D Pierangeli, G Marcucci, and C Conti.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
337
+ page_content=' Large-scale photonic ising machine by spatial light modulation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
338
+ page_content=' Physical review letters, 122(21):213902, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
339
+ page_content=' 7 [29] Davide Pierangeli, Giulia Marcucci, Daniel Brunner, and Claudio Conti.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
340
+ page_content=' Noise-enhanced spatial-photonic ising machine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
341
+ page_content=' Nanophotonics, 9(13):4109–4116, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
342
+ page_content=' [30] Davide Pierangeli, Giulia Marcucci, and Claudio Conti.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
343
+ page_content=' Adiabatic evolution on a spatial-photonic ising machine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
344
+ page_content=' Optica, 7(11):1535–1543, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
345
+ page_content=' [31] Yisheng Fang, Junyi Huang, and Zhichao Ruan.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
346
+ page_content=' Experimental observation of phase transitions in spatial photonic ising machine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
347
+ page_content=' Physical Review Letters, 127(4):043902, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
348
+ page_content=' [32] Wenchen Sun, Wenjia Zhang, Yuanyuan Liu, Qingwen Liu, and Zuyuan He.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
349
+ page_content=' Quadrature photonic spatial ising machine.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
350
+ page_content=' Optics Letters, 47(6):1498–1501, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
351
+ page_content=' [33] Jiayi Ouyang, Yuxuan Liao, Zhiyao Ma, Deyang Kong, Xue Feng, Xiang Zhang, Xiaowen Dong, Kaiyu Cui, Fang Liu, Wei Zhang, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
352
+ page_content=' An on-demand photonic ising machine with simplified hamiltonian calculation by phase-encoding and intensity detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
353
+ page_content=' arXiv preprint arXiv:2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
354
+ page_content='05072, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
355
+ page_content=' [34] Charles Roques-Carmes, Yichen Shen, Cristian Zanoci, Mihika Prabhu, Fadi Atieh, Li Jing, Tena Dubˇcek, Chenkai Mao, Miles R Johnson, Vladimir ˇCeperi´c, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
356
+ page_content=' Heuristic recurrent algorithms for photonic ising machines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
357
+ page_content=' Nature communications, 11(1):1–8, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
358
+ page_content=' [35] Mihika Prabhu, Charles Roques-Carmes, Yichen Shen, Nicholas Harris, Li Jing, Jacques Carolan, Ryan Hamerly, Tom Baehr-Jones, Michael Hochberg, Vladimir ˇCeperi´c, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
359
+ page_content=' Accelerating recurrent ising machines in photonic integrated circuits.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
360
+ page_content=' Optica, 7(5):551–558, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
361
+ page_content=' [36] Suryendy Dutta, Abhishek Khanna, AS Assoa, Hanjong Paik, Darrell G Schlom, Zoltán Toroczkai, Arijit Raychowdhury, and Suman Datta.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
362
+ page_content=' An ising hamiltonian solver based on coupled stochastic phase-transition nano-oscillators.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
363
+ page_content=' Nature Electronics, 4(7):502–512, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
364
+ page_content=' [37] Ernst Ising.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
365
+ page_content=' Beitrag zur theorie des ferro-und paramagnetismus.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
366
+ page_content=' PhD thesis, Grefe & Tiedemann, 1924.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
367
+ page_content=' [38] Santosh Kumar, He Zhang, and Yu-Ping Huang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
368
+ page_content=' Large-scale ising emulation with four body interaction and all-to-all connections.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
369
+ page_content=' Communications Physics, 3(1):1–9, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
370
+ page_content=' [39] Wenchen Sun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
371
+ page_content=' In OFC 2022, page M2G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
372
+ page_content='4, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
373
+ page_content=' [40] Michel X Goemans and David P Williamson.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
374
+ page_content=' Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
375
+ page_content=' Journal of the ACM (JACM), 42(6):1115–1145, 1995.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
376
+ page_content=' [41] Michael R Garey.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
377
+ page_content=' A guide to the theory of np-completeness.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
378
+ page_content=' Computers and intractability, 1979.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
379
+ page_content=' [42] Sera Kahruman, Elif Kolotoglu, Sergiy Butenko, and Illya V Hicks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
380
+ page_content=' On greedy construction heuristics for the max-cut problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
381
+ page_content=' International Journal of Computational Science and Engineering, 3(3):211–218, 2007.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
382
+ page_content=' [43] Takuya Okuyama, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Masanao Yamaoka.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
383
+ page_content=' Binary optimization by momentum annealing.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
384
+ page_content=' Physical Review E, 100(1):012111, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
385
+ page_content=' [44] Kosuke Tatsumura, Masaya Yamasaki, and Hayato Goto.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
386
+ page_content=' Scaling out ising machines using a multi-chip architecture for simulated bifurcation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
387
+ page_content=' Nature Electronics, 4(3):208–217, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
388
+ page_content=' [45] Qizhuang Cen, Hao Ding, Tengfei Hao, Shanhong Guan, Zhiqiang Qin, Jiaming Lyu, Wei Li, Ninghua Zhu, Kun Xu, Yitang Dai, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
389
+ page_content=' Large-scale coherent ising machine based on optoelectronic parametric oscillator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
390
+ page_content=' Light: Science & Applications, 11(1):1–10, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
391
+ page_content=' [46] D-Wave Systems Inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
392
+ page_content=' Hybrid solvers for quadratic optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
393
+ page_content=' https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
394
+ page_content='dwavesys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
395
+ page_content='com/media/ soxph512/hybrid-solvers-for-quadratic-optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
396
+ page_content='pdf, April 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
397
+ page_content=' 8' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/8NE3T4oBgHgl3EQfqQrl/content/2301.04651v1.pdf'}
AtE2T4oBgHgl3EQf8QmS/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74907552d3f735ec39cd3d6ffc84b7cfc4a39b6115231b672ce2b5608665f29a
3
+ size 1572909
B9AyT4oBgHgl3EQf4PrK/content/tmp_files/2301.00784v1.pdf.txt ADDED
@@ -0,0 +1,1414 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ arXiv:2301.00784v1 [math.RT] 2 Jan 2023
2
+ ON A DETERMINANT FORMULA FOR SOME REAL REGULAR
3
+ REPRESENTATIONS
4
+ LÉA BITTMANN
5
+ Abstract. We interpret a formula established by Lapid-Mínguez on real regular rep-
6
+ resentations of GLn over a local non-archimedean field as a matrix determinant. We
7
+ use the Lewis Carroll determinant identity to prove new relations between real regular
8
+ representations. Through quantum affine Schur-Weyl duality, these relations generalize
9
+ Mukhin-Young’s Extended T -systems, for representations of the quantum affine algebra
10
+ Uqppslkq, which are themselves generalizations of the celebrated T -system relations.
11
+ Contents
12
+ 1.
13
+ Introduction
14
+ 1
15
+ 2.
16
+ Preliminaries
17
+ 3
18
+ 3.
19
+ Good segments
20
+ 7
21
+ 4.
22
+ Determinant formula
23
+ 10
24
+ 5.
25
+ Extended T-system formula
26
+ 11
27
+ 6.
28
+ Relation to quantum affine algebras representations
29
+ 17
30
+ Appendix A.
31
+ Ferrers boards
32
+ 19
33
+ References
34
+ 20
35
+ 1. Introduction
36
+ The context of this work is the representation theory of GLnpFq (where F is a non-
37
+ archimedean local field), or equivalently of the type A quantum affine algebra Uqppslkq
38
+ (where q P Cˆ is not a root of unity). Indeed, through Chari-Pressley’s quantum affine
39
+ Schur-Weyl duality [CP95], the category of complex smooth finite-length representations of
40
+ GLnpFq is equivalent to the category of (level n) finite-dimensional Uqppslkq-modules, when
41
+ k ě n. Since both contexts are equivalent, we will work with the category C of GLnpFq
42
+ representations in most of this paper. Both these categories have been intensively (and
43
+ independently) studied, but some important natural questions remain open.
44
+ The normalized parabolic induction, denoted by ˆ, endows this category with a ring
45
+ category structure, and its Grothendieck group R with a ring structure. Irreducible rep-
46
+ resentations in the category C have been classified by Zelevinsky [Zel80] using multiseg-
47
+ ments (formal sums of segments). For m “ ∆1 ` ∆2 ` ¨ ¨ ¨ ` ∆N a multisegment, the
48
+ corresponding irreducible representation Zpmq is obtained as the unique irreducible sub-
49
+ representation of the standard representation ζpmq :“ Zp∆1q ˆ Zp∆2q ˆ ¨ ¨ ¨ ˆ Zp∆Nq. The
50
+ classes of the irreducible representations and the standard representations form two bases
51
+ of the Grothendieck ring R, the change of basis matrix between them is unitriangular, with
52
+ coefficients which can be expressed in terms of Kazhdan-Lusztig polynomials (see [Zel81],
53
+ [CG97]). A similar story was established for finite-dimensional representations of Uqppslkq
54
+ (see the work of Nakajima [Nak01]). This gives an algorithm to compute the classes of
55
+ 1
56
+
57
+ 2
58
+ LÉA BITTMANN
59
+ the simple representations from the classes of the standard representations. However, in
60
+ practice the actual computation of the coefficients can be very difficult.
61
+ For some specific classes of irreducible representations, remarkable formulas have been
62
+ established to compute their classes as linear combinations of classes of standard represen-
63
+ tations. The work of Tadić [Tad95], and then Chenevier-Renard [CR08], established such
64
+ a formula for Speh representations. Cleverly, this formula can be seen as the computation
65
+ of the determinant of a matrix, and it was then proved using the Lewis Carroll identity
66
+ (also called Dodgson’s rule of determinant). In [LM14], Lapid-Mínguez generalized Tadić’s
67
+ formula to a larger class of representations called ladder representations. Then, in [LM18]
68
+ the same authors established an even more general formula (see (4.1) below), for regular
69
+ representations which are real - Zpmq such that Zpmq ˆ Zpmq is irreducible.
70
+ Furthermore, in [LM14], Lapid-Mínguez used the Lewis Carroll identity to obtain a
71
+ remarkable relation between the classes of some of these ladder representations.
72
+ For
73
+ Zpmq “ Zp∆1 ` ¨ ¨ ¨ ` ∆N) a ladder representation, we have the following relation in
74
+ R [LM14, Corollary 12]:
75
+ (1.1)
76
+ Zp∆1 ` ¨ ¨ ¨ ` ∆N´1q ˆ Zp∆2 ` ¨ ¨ ¨ ` ∆Nq “ Zpmq ˆ Zp∆2 ` ¨ ¨ ¨ ` ∆N´1q
77
+ ` Zpm1q ˆ Zpm2q,
78
+ where Zpm1q, Zpm2q are also ladders (see Theorem 5.5). Through quantum affine Schur-
79
+ Weyl duality, relation (1.1) has been established independently by Mukhin-Young in [MY12,
80
+ Theorem 4.1] for representations of the quantum affine algebra Uqppslkq, under the name
81
+ Extended T-systems.
82
+ The extended T-systems are generalizations of the famous T-system relations, which
83
+ are sets of recurrence relations of crucial importance in the study of certain integrable
84
+ systems (see review [KNS11]).
85
+ For representations of quantum affine algebras, the T-
86
+ systems are relations in the Grothendieck ring R between classes of Speh representations
87
+ (called Kirillov-Reshetikhin modules there).
88
+ These relations were proved in all simply-
89
+ laced types (A, D or E) by Nakajima [Nak01] and in all types by Hernandez [Her06].
90
+ Additionally, the T-systems, and their extended version, can be interpreted as short exact
91
+ sequences between irreducible finite-dimensional Uqppgq-modules.
92
+ More recently, the T-systems gained a new interpretation as exchange relations in a
93
+ Fomin-Zelevinsky cluster algebra [FZ02]. Indeed, in [HL16] Hernandez-Leclerc proved this
94
+ interpretation of T-systems as cluster transformations and used it to the prove that the
95
+ Grothendieck ring of the category of finite-dimensional Uqppgq-modules (in all Dynkin types)
96
+ had the structure of a cluster algebra. Note that Duan-Li-Luo obtained in [DLL19] another
97
+ generalization of the T-systems, different from Mukhin-Young extended T-systems, which
98
+ they also interpreted as exchange relations in the cluster algebra structure.
99
+ In the present work, we establish formulas generalizing the extended T-systems of
100
+ Mukhin-Young, for some real regular representations. Regular representations have a per-
101
+ mutation associated to them and in [LM18], Lapid-Mínguez gave a sufficient condition for
102
+ a regular representation to be real, as a pattern avoidance condition on the permutation
103
+ associated to the representation. We show, using the notion of Ferres boards and the work
104
+ of Sjostrand [Sjo07] that under the same pattern avoidance condition, Lapid-Mínguez’s
105
+ formula [LM18, Theorem 1.2 (9)] can be written as a matrix determinant. Our relations
106
+ are then obtained using some choice of Lewis Carroll identities. As our main result, we
107
+ prove the following (Theorem 5.1 and Corollary 5.3): for Zpmq “ Zp∆1 ` ¨ ¨ ¨ ` ∆N) a
108
+ regular representation such that the associated permutation σ avoids the patterns 3412
109
+
110
+ ON A DETERMINANT FORMULA FOR SOME REAL REGULAR REPRESENTATIONS
111
+ 3
112
+ and 4231, we have the following relations in R ((5.1) and (5.2)):
113
+ Zpmz∆Nq ˆ Zpmz∆σpNqq “ Zpmq ˆ Zpmz∆N, ∆σpNqq ` Zpm1
114
+ 1q ˆ Zpm2
115
+ 1q,
116
+ (1.2)
117
+ Zpmz∆1q ˆ Zpmz∆σp1qq “ Zpmq ˆ Zpmz∆1, ∆σp1qq ` Zpm1
118
+ 2q ˆ Zpm2
119
+ 2q,
120
+ (1.3)
121
+ where m1
122
+ 1, m2
123
+ 1, m1
124
+ 2 and m2
125
+ 2 are real regular representations.
126
+ As part of Theorem 5.1 and Corollary 5.3, we also prove that, as the extended T-systems,
127
+ these relations correspond to a decomposition of a module of length 2, i.e. the two terms in
128
+ the right hand side of (1.2) and (1.3) are irreducible representations. We prove this using
129
+ Lapid-Mínguez’s [LM16] combinatorial irreducibility criteria, as well as a newly introduced
130
+ notion of good segments in a mutlisegment, which enables us to prove by induction that
131
+ some parabolic induction of irreducible representations are irreducible.
132
+ The paper is organized as follows. We start with some reminders about segments, multi-
133
+ segments, p-adic representations of GLnpFq and the Zelevinsky classification in Section 2.
134
+ We also recall Lapid-Mínguez’s [LM16] irreducibility criteria for a parabolic induction of
135
+ two representations, using socles and cosocles. In Section 3, we introduce the notion of
136
+ good segments and use it to obtain some combinatorial criteria to prove that certain para-
137
+ bolic inductions Zp∆q ˆ Zpmq, where Zpmq is a regular representation are irreducible. We
138
+ also prove an existence result for good segments (Proposition 3.7). In particular, we obtain
139
+ that every regular representation whose permutation avoids the patterns 3412 and 4231
140
+ has at least two good segments, from which we can recover that such representations are
141
+ real. In Section 4, we use the notion of Ferres boards and results from Sjostrand [Sjo07] and
142
+ Chepuri–Sherman-Bennett [CSB21] to write existing relations as determinants of matrices.
143
+ The main result is stated and then proved in Section 5, in which we also give examples.
144
+ Finally, in Section 6 we translate our results to the context of quantum affine algebra
145
+ representations, and give some perspective, in particular in relation to cluster algebras.
146
+ Acknowledgements. We would like to thank Alberto Mínguez for providing inspiration
147
+ for this work.
148
+ The author was partially supported by the European Research Council
149
+ (ERC) under the European Union’s Horizon 2020 research and innovation programme
150
+ under grant agreement No 948885 and by the Royal Society University Research Fellowship.
151
+ 2. Preliminaries
152
+ 2.1. Segments and multisegments.
153
+ Definition 2.1. A segment is a pair of integers a ď b P Z, denoted by ra; bs.
154
+ Let Seg denote the set of segments.
155
+ The extremities of the segment ∆ “ ra; bs P Seg are denoted by bp∆q “ a and ep∆q “ b.
156
+ We also write ÐÝ
157
+ ∆ “ ra ´ 1; b ´ 1s.
158
+ Definition 2.2. Two segments ∆ “ ra; bs and ∆1 “ rc; ds are linked if
159
+ a ă c
160
+ and
161
+ c ´ 1 ď b ă d,
162
+ or
163
+ c ă a
164
+ and
165
+ a ´ 1 ď d ă b.
166
+ In the first case, we say that ∆ precedes ∆1 and write ∆ ă ∆1.
167
+ Example 2.3. A few examples of linked and unlinked pairs of segments:
168
+ 1
169
+ 3
170
+ 4
171
+ 5
172
+ are linked.
173
+ 1
174
+ 2
175
+ 4
176
+ 5
177
+ are not linked.
178
+ 1
179
+ 4
180
+ 3
181
+ 5
182
+ are linked.
183
+ 1
184
+ 5
185
+ 2
186
+ 4
187
+ are not linked.
188
+
189
+ 4
190
+ LÉA BITTMANN
191
+ Definition 2.4. A multisegment m is a finite formal sum of segments of Seg (with possible
192
+ multiplicities), m “ ∆1 ` ∆2 ` ¨ ¨ ¨ ` ∆N. Let Mult denote the set of multisegments.
193
+ A sequence of segments p∆1, . . . , ∆Nq is said to be ordered if, for all 1 ď i ă j ď N, ∆i
194
+ does not precedes ∆j. If m P Mult, and p∆1, . . . , ∆Nq is an ordered sequence of segments
195
+ such that m “ ��1 ` ¨ ¨ ¨ ` ∆N, we say that p∆1, . . . , ∆Nq is an ordered form of m.
196
+ 2.2. Representations. Let F be a non-archimedean local field with a normalized absolute
197
+ value | ¨ | and let D be a finite-dimensional central division F-algebra. For n P Zě1, let
198
+ CpGLnq be the category of complex, smooth representations of GLnpDq of finite length
199
+ and IrrpGLnq the set of equivalence classes of irreducible objects of CpGLnq.
200
+ For πi P
201
+ CpGLniq, i “ 1, 2, denote by π1ˆπ2 P CpGLn1`n2q the representation which is parabolically
202
+ induced from π1 b π2. The parabolic induction endows the category À
203
+ Ně0 CpGLnq with
204
+ the structure of a tensor category.
205
+ For any supercuspidal representation ρ P Ť
206
+ nPZě0 IrrpGLnq, there exists a unique positive
207
+ real number sρ such that ρ|¨|sρˆρ is reducible. Let νρ “ |¨|sρ, we write ÝÑρ “ ρνρ, ÐÝρ “ ρν´1
208
+ ρ .
209
+ A cuspidal line is an equivalence class on Ť
210
+ nPZě0 IrrpGLnq for the equivalence relation given
211
+ by ρ „ ÝÑρ .
212
+ For a fixed cuspidal line L, consider CL the Serre ring subcategory of À
213
+ Ně0 CpGLnq
214
+ consisting of the representations whose supercuspidal support is contained in L. Then all
215
+ categories CL are equivalent as ring categories and the study of À
216
+ Ně0 CpGLnq amounts to
217
+ the study of one CL. From now on, we fix a cuspidal line, drop the subscript and consider
218
+ the category C, its set of equivalence classes of irreducible objects Irr and its Grothendieck
219
+ ring R.
220
+ For ∆ “ ra; bs P Seg, consider the induced representation
221
+ Ira; bs :“ ρνa
222
+ ρ ˆ ρνa`1
223
+ ρ
224
+ ˆ ¨ ¨ ¨ ˆ ρνb
225
+ ρ.
226
+ Definition 2.5. We consider the socle and cosocle of this representation:
227
+ Zra; bs :“ socpIra; bsq,
228
+ maximal semi-simple submodule,
229
+ Lra; bs :“ cospIra; bsq,
230
+ maximal semi-simple quotient.
231
+ The following is known (see for example [Zel80]).
232
+ Proposition 2.6. For ∆1, . . . , ∆N P Seg, Zp∆1qˆ¨ ¨ ¨ˆZp∆Nq (resp. Lp∆1qˆ¨ ¨ ¨ˆLp∆Nq)
233
+ is irreducible if and only if the segments ∆1, . . . , ∆N are pairwise unlinked.
234
+ For m P Mult and p∆1, . . . , ∆Nq an ordered form of m, define the standard module:
235
+ ζpmq :“ Zp∆1q ˆ ¨ ¨ ¨ ˆ Zp∆Nq.
236
+ From the previous proposition, ζpmq does not depend on the chosen order.
237
+ Theorem 2.7. [Zel80][Zelevinsky Classification] The map
238
+ m ÞÑ Zpmq :“ socpζpmqq,
239
+ defines a bijection
240
+ Mult „
241
+ ÝÑ Irr .
242
+ 2.3. Families of representations. We are interested in some particular families of rep-
243
+ resentations. Let Zpmq be an irreducible representation, with m “ ∆1 ` ¨ ¨ ¨ ` ∆N P Mult.
244
+ Definition 2.8. The irreducible representation Zpmq is a Speh representation if ∆i`1 “ ÐÝ
245
+ ∆i,
246
+ for all 1 ď i ď N ´ 1.
247
+
248
+ ON A DETERMINANT FORMULA FOR SOME REAL REGULAR REPRESENTATIONS
249
+ 5
250
+ Example 2.9. The representations corresponding to the multisegments
251
+ r3; 3s ` r2; 2s ` r1; 1s ` r0; 0s “
252
+ ‚0
253
+ ‚1
254
+ ‚2
255
+ ‚3
256
+ and
257
+ r2; 4s ` r1; 3s ` r0; 2s “
258
+ 4
259
+ 2
260
+ 1
261
+ 0
262
+ 3
263
+ 2
264
+ are Speh representations.
265
+ Definition 2.10. The irreducible representation Zpmq is a ladder representation if, for all
266
+ 1 ď i ď N ´ 1, ep∆i`1q ă ep∆iq and bp∆i`1q ă bp∆iq.
267
+ Example 2.11. All Speh representations are particular cases of ladder representations.
268
+ The representations corresponding to the multisegments
269
+ r2; 5s ` r1; 3s ` r0; 0s “
270
+ ‚0
271
+ 1
272
+ 2
273
+ 5
274
+ 3
275
+ ,
276
+ r4; 7s ` r1; 2s “
277
+ 4
278
+ 7
279
+ 2
280
+ 1
281
+ are ladder representations.
282
+ Definition 2.12. The irreducible representation Zpmq is a regular representation if, for
283
+ all 1 ď i ‰ j ď N, ep∆jq ‰ ep∆iq and bp∆jq ‰ bp∆iq. By extension, the multisegment m
284
+ is also called regular.
285
+ Example 2.13. All ladder representations are particular cases of regular representations.
286
+ The representation
287
+ Zpr1; 5s ` r0; 4s ` r2; 3sq “
288
+ 1
289
+ 5
290
+ 0
291
+ 4
292
+ 2
293
+ 3
294
+ is a regular representation.
295
+ Definition 2.14. If Zpmq is a regular representation, then one can define a corresponding
296
+ permutation σm as follows.
297
+ Write m “ ra1; b1s ` ra2; b2s ` ¨ ¨ ¨ ` raN; bNs, and assume
298
+ b1 ą b2 ą ¨ ¨ ¨ ą bN, then σm P SN is such that
299
+ aσmp1q ă aσmp2q ă ¨ ¨ ¨ ă aσmpNq.
300
+ Remark 2.15. If Zpmq is a ladder representation, then the associated permutation is w0,
301
+ the longest element of SN.
302
+ Definition 2.16. An irreducible representation π is said to be real if π ˆ π is also irre-
303
+ ducible.
304
+ Remark 2.17. Real representations are usually called square-irreducible representations in
305
+ this context, but we use real here, which is the terminology coming from the work of Kang-
306
+ Kashiwara-Kim-Oh [KKKO15] on representations of quantum affine algebras, where the
307
+ notion appeared in a crucial way (see Section 6.2).
308
+ The following is one of the main results of [LM18].
309
+ Theorem 2.18. The regular representation Zpmq is real if and only if there does not exists
310
+ a sequence 1 ď j1 ă ¨ ¨ ¨ ă jr ď N, r ě 4 such that if a1
311
+ i “ aji and b1
312
+ i “ bji, then either
313
+ a1
314
+ i`1 ă a1
315
+ i ď b1
316
+ i`1 ` 1, i “ 3, . . . , r ´ 1, a1
317
+ 3 ă a1
318
+ 1 ď b1
319
+ 3 ` 1, and a1
320
+ r ă a1
321
+ 2 ă a1
322
+ r´1,
323
+ or
324
+ a1
325
+ i`1 ă a1
326
+ i ď b1
327
+ i`1 ` 1, i “ 4, . . . , r ´ 1, a1
328
+ 4 ă a1
329
+ 2 ď b1
330
+ 4 ` 1, and a1
331
+ 3 ă a1
332
+ r ă a1
333
+ 1 ă a1
334
+ ℓ,
335
+
336
+ 6
337
+ LÉA BITTMANN
338
+ where ℓ “ 2 if r “ 4 and ℓ “ r ´ 1 otherwise.
339
+ If the permutation σm avoids the patterns 4231 and 3412, then the condition of Theo-
340
+ rem 2.18 is satisfied. We will call these representations pattern avoiding regular.
341
+ The same patterns avoidance condition correspond to the smoothness condition of the
342
+ Schubert variety Xσm (see [LS90]).
343
+ Note that in particular, all ladder representations are real.
344
+ 2.4. Irreducibility criteria. The following result will be much used.
345
+ Lemma 2.19. [MS14] Let π1 and π2 be irreducible representations, and π be a represen-
346
+ tation such that
347
+ (a) π is a subrepresentation of π1 ˆ π2,
348
+ (b) π is a quotient of π2 ˆ π1,
349
+ (c) π1 b π2 has multiplicity 1 in the Jordan-Hölder sequence of π1 ˆ π2,
350
+ Then π is irreducible.
351
+ Definition 2.20. Given π1 “ Zpm1q and π2 “ Zpm2q, we write LIpπ1, π2q (resp. RIpπ1, π2q)
352
+ for the condition
353
+ Zpm1 ` m2q “ socpπ1 ˆ π2q
354
+ (resp.
355
+ Zpm1 ` m2q “ cospπ1 ˆ π2qq.
356
+ Lemma 2.21. Let m be a multisegment and ∆ a segment. Then we have the following
357
+ equivalences:
358
+ LIpZp∆q, Zpmqq
359
+ ðñ
360
+ Zpm ` ∆q ãÑ Zp∆q ˆ Zpmq,
361
+ RIpZp∆q, Zpmqq
362
+ ðñ
363
+ Zpm ` ∆q ãÑ Zpmq ˆ Zp∆q.
364
+ Proof. The first statement follows from the fact that the segment representation Zp∆q is
365
+ a left multiplier (see [LM16, Definition 4.3]), thus Zp∆q ˆ Zpmq has a unique irreducible
366
+ submodule, which appears with multiplicity 1 in the Jordan-Hölder sequence of Zp∆q ˆ
367
+ Zpmq.
368
+ The second statement can be deduced from the first by the use of the contragredient,
369
+ or more precisely [LM16, Lemma 3.9].
370
+
371
+ Proposition 2.22. [LM16] π1 ˆ π2 is irreducible if and only if LIpπ1, π2q and RIpπ1, π2q.
372
+ In [LM16], Lapid-Minguez introduced a combinatorial setup in order to determine
373
+ whether the conditions RIpZp∆q, Zpmqq and LIpZp∆q, Zpmqq where satisfied, for ∆ P Seg
374
+ and m P Mult. Let us recall it here.
375
+ Write m “ ∆1 ` ¨ ¨ ¨ ` ∆N, and consider the sets
376
+ X∆,m “ ti | ∆ ă ∆iu ,
377
+ ˜X∆,m “ ti | ∆i ă ∆u ,
378
+ Y∆,m “
379
+ !
380
+ i | ÐÝ
381
+ ∆ ă ∆i
382
+ )
383
+ ,
384
+ ˜Y∆,m “
385
+ !
386
+ i | ÐÝ
387
+ ∆i ă ∆
388
+ )
389
+ .
390
+ Definition 2.23. Let LCp∆, mq be the condition that there exists an injective function
391
+ f : X∆,m Ñ Y∆,m such that for all 1 ď i ď N, ∆fpiq ă ∆i.
392
+ Let RCp∆, mq be the condition that there exists an injective function f : ˜X∆,m Ñ ˜Y∆,m
393
+ such that for all 1 ď i ď N, ∆i ă ∆fpiq.
394
+ The function of Definition 2.23 are called matching functions.
395
+ Proposition 2.24. [LM16] The conditions LCp∆, mq and LIpZp∆q, Zpmqq (resp. RCp∆, mq
396
+ and RIpZp∆q, Zpmqq) are equivalent.
397
+
398
+ ON A DETERMINANT FORMULA FOR SOME REAL REGULAR REPRESENTATIONS
399
+ 7
400
+ Combining this result with Proposition 2.22, we get the following.
401
+ Corollary 2.25. The parabolic induction Zp∆qˆZpmq is irreducible if and only if LCp∆, mq
402
+ and RCp∆, mq.
403
+ 3. Good segments
404
+ 3.1. Definition.
405
+ Definition 3.1. A segment ∆ in a multisegment m P Mult is called a good segment if
406
+ (i) Zp∆q ˆ Zpmq is irreducible.
407
+ (ii)
408
+ #
409
+ Zpmq ãÑ Zp∆q ˆ Zpm´q,
410
+ or Zpmq ãÑ Zpm´q ˆ Zp∆q,
411
+ , where m´ “ mzt∆u.
412
+ If the first (resp. second) subcase of (ii) is satisfied, ∆ is called a good left (resp. right)
413
+ segment of m.
414
+ Using Lemma 2.21 as well as Proposition 2.4, we have the following equivalences:
415
+ ∆ is a good left segment of m
416
+ ðñ
417
+ LCp∆, mq, RCp∆, mq,
418
+ and LCp∆, m´q.
419
+ ,
420
+ (3.1)
421
+ ∆ is a good right segment of m
422
+ ðñ
423
+ LCp∆, mq, RCp∆, mq,
424
+ and RCp∆, m´q.
425
+ (3.2)
426
+ 3.2. Combinatorial criteria.
427
+ Lemma 3.2. If m “ ∆1 ` ¨ ¨ ¨ ` ∆N is a multisegment and ∆0 is a segment such that
428
+ ∆0 ` m is a regular multisegment, then
429
+ LCp∆0, mq ô Ei, ∆0 ă ∆i,
430
+ RCp∆0, mq ô Ei, ∆i ă ∆0.
431
+ Proof. We will prove the first equivalence, the second being exactly analog.
432
+ From the
433
+ definition of the condition LC, the implication
434
+ LCp∆0, mq ð Ei, ∆0 ă ∆i
435
+ is clear.
436
+ Let i P Y∆0,m. If i R X∆0,m, then either bp∆0q “ bp∆iq or ep∆0q “ ep∆iq, which is
437
+ a contradiction. Thus Y∆0,m Ă X∆,m. Now, if X∆0,m ‰ H, then LCp∆0, mq can not be
438
+ satisfied (by Hall’s marriage theorem).
439
+
440
+ Lemma 3.3. If m “ ∆1 ` ¨ ¨ ¨ ` ∆N is an ordered regular multisegment with σ “ σm the
441
+ associated permutation, then for all 1 ď i ď N,
442
+ • the condition LCp∆i, mq is equivalent to σ´1 is strictly decreasing on X∆i,m,
443
+ • the condition RCp∆i, mq is equivalent to σ´1 is strictly decreasing on ˜X∆i,m.
444
+ Proof. As before, we only prove the first statement. Fix 1 ď i ď N. If X∆i,m “ H, the
445
+ equivalence is trivial.
446
+ Suppose X∆i,m ‰ H, then with the same reasoning as in the proof of Lemma 3.2,
447
+ Y∆i,m Ă X∆i,m Y tiu.
448
+ Suppose Y∆i,m “ X∆i,m Y tiu. Let X∆i,m “ tj1 ą j2 ą ¨ ¨ ¨ ą jmu. Then, since m is
449
+ ordered, ep∆j1q ă ep∆j2q ă ¨ ¨ ¨ ă ep∆jmq.
450
+
451
+ 8
452
+ LÉA BITTMANN
453
+ If σ´1 is strictly decreasing on X∆i,m, then bp∆j1q ă bp∆j2q ă ¨ ¨ ¨ ă bp∆jmq. Since all
454
+ jk P X∆i,m, we have ∆jℓ ă ∆jℓ`1 for all 1 ď ℓ ď m ´ 1. Thus the function
455
+ (3.3)
456
+ f :
457
+ X∆,m
458
+ Ñ Y∆,m,
459
+ j1
460
+ ÞÑ i,
461
+ jℓ`1
462
+ ÞÑ jℓ,
463
+ 1 ď ℓ ď m ´ 1,
464
+ is a matching function from X∆i,m to Y∆i,m. Thus LCp∆i, mq.
465
+ If Y∆i,m Ĺ X∆i,m Y tiu, then there exists j P X∆i,m such that bp∆jq “ ep∆iq ` 1, and
466
+ Y∆i,m “ pX∆i,mztjuq Y tiu. If σ´1 is strictly decreasing on X∆,m, then necessarily j “ jm
467
+ and the function f from (3.3) is a matching function from X∆i,m to Y∆i,m, as jm does not
468
+ appear in the image of f.
469
+ Conversely, suppose LCp∆i, mq and let f be a matching function from X∆i,m to Y∆i,m.
470
+ Necessarily, fpj1q “ i, as ∆i is the only segment considered which precedes ∆j1. Recur-
471
+ sively, we see that f is the function from (3.3). As it is a matching function, we deduce
472
+ that ∆jℓ ă ∆jℓ`1 for all 1 ď ℓ ď m ´ 1, and thus σ´1 is strictly decreasing on X∆i,m.
473
+
474
+ Remark 3.4. From Lemma 3.2, if m is a regular multisegment, for all 1 ď i ď k, LCp∆i, m´
475
+ ∆iq (resp. RCp∆i, m ´ ∆iq) is equivalent to the fact that ∆i precedes (resp. is preceded
476
+ by) no segment in m.
477
+ Combining with Lemma 3.3, we have the following equivalences:
478
+ ∆ is a good left segment of m
479
+ ðñ
480
+ ∆ precedes no other segment of m and ∆
481
+ forms a ladder with the segments which pre-
482
+ cede it.
483
+ ∆ is a good right segment of m
484
+ ðñ
485
+ ∆ is preceded by no other segment of m and
486
+ ∆ forms a ladder with the segments which are
487
+ preceded by it.
488
+ The following result is clear using this criteria.
489
+ Lemma 3.5. If m1 is a sub-multisegment of m and ∆ P m1 is a good segment for m, then
490
+ it is a good segment for m1.
491
+ Remark 3.6. Note that the converse is not true. For example, any segment ∆ is a good
492
+ segment for itself, but not necessarily a good segment for any multisegment containing it.
493
+ 3.3. Existence results.
494
+ Proposition 3.7. For N ě 2, let m “ ∆1 ` ∆2 ` ¨ ¨ ¨ ` ∆N be a regular multisegment
495
+ such that for all i, ∆i “ rai, bis with b1 ą b2 ą ¨ ¨ ¨ ą bN and the associated permutation
496
+ σ avoids the patterns 4231 and 3412, and π “ Zpmq is a prime irreducible representation.
497
+ Then
498
+ either ∆1
499
+ or
500
+ ∆σp1q,
501
+ and
502
+ either ∆N
503
+ or
504
+ ∆σpNq
505
+ correspond to good segments of m.
506
+ Moreover, if σpNq “ 1 (resp. σp1q “ N), then ∆N is a good right segment (resp. ∆1
507
+ is a good left segment) of m. If σpNq “ 1 and σp1q “ N then m is a ladder.
508
+ Proof. First of all, if σp1q “ 1, then ∆1 is not linked with any other segment of m, and it
509
+ is both a good left and a good right segment of m.
510
+ Let i0 “ σp1q and suppose i0 ą 1. Suppose neither ∆1 nor ∆i0 are good segments.
511
+ From Lemma 3.3, σ´1
512
+ m
513
+ is neither decreasing on X∆1,m nor on ˜X∆i0,m. We consider different
514
+ cases.
515
+ If there exists i ă j ă i0 such that ∆i ă ∆1 and ∆j ă ∆1, or ∆i0 ă ∆i and ∆i0 ă ∆j
516
+ and ∆j ć ∆j, the configuration is the following:
517
+
518
+ ON A DETERMINANT FORMULA FOR SOME REAL REGULAR REPRESENTATIONS
519
+ 9
520
+ ∆i0
521
+ ∆j
522
+ ∆i
523
+ ∆1
524
+ The pattern 4231 appears in this configura-
525
+ tion, which is a contradiction.
526
+ Otherwise, there exists at least one 1 ă i ă i0 such that ∆i0 ă ∆i and ∆i ć ∆1 and
527
+ one i0 ă j such that ∆j ă ∆1 and ∆j ć ∆i0. The configuration is the following:
528
+ ∆j
529
+ ∆i0
530
+ ∆i
531
+ ∆1
532
+ The pattern 3412 appears in this configura-
533
+ tion, which is a contradiction.
534
+ The proof for ∆N and ∆σpNq is exactly symmetric.
535
+ Now, suppose σpNq “ 1. We know that either ∆1 or ∆N is a good segment of m. If
536
+ ∆1 is a good segment and ∆N is not a good segment, then we are necessarily in the first
537
+ configuration drawn above, which features the pattern 4231. Similarly, if σp1q “ N, then
538
+ either ∆1 or ∆N is a good segment of m. The same pattern avoidance condition implies
539
+ that ∆1 is necessarily good.
540
+ If both σpNq “ 1 and σp1q “ N then any pair of segments between ∆1 and ∆N which
541
+ does not form a ladder would create a 4231 pattern. Thus m is a ladder.
542
+
543
+ This result has the following consequence.
544
+ Corollary 3.8. Let π “ Zpmq be a regular representation avoiding the patterns 4231 and
545
+ 3412, then π has at least two good segments.
546
+ Remark 3.9. This criteria allows us to recover the implication (which is established by
547
+ Theorem 2.18 [LM18]):
548
+ m avoids the patterns 4231 and 3412
549
+ ùñ
550
+ Zpmq is real.
551
+ This can be proved by induction on N, the number of segments in the multisegment m.
552
+ For completeness, let us detail the reasoning.
553
+ If N “ 1, then m “ ∆ is just a segment, and Zp∆q is real (for example as an application
554
+ of Proposition 2.6).
555
+ If N ě 2 and m avoids the patterns 4231 and 3412, then from Corollary 3.8, m has
556
+ at least one good segment ∆. Suppose without loss of generality that it is a good left
557
+ segment. Then
558
+ Zpmq ˆ Zpmq ãÑ Zpmq ˆ Zp∆q
559
+ looooooomooooooon
560
+ irreducible
561
+ ˆZpm´q,
562
+ ãÑ Zp∆q ˆ Zpmq ˆ Zpm´q ãÑ Zp∆q ˆ Zp∆q
563
+ looooooomooooooon
564
+ irreducible
565
+ ˆZpm´q ˆ Zpm´q.
566
+ However, m´ has N ´ 1 segments, and satisfy the pattern avoidance condition.
567
+ Thus
568
+ Zpm´q ˆ Zpm´q is irreducible by induction hypothesis.
569
+ Similarly, as Zpmq և Zpm´q ˆ Zp∆q,
570
+ Zpmq ˆ Zpmq և Zpm´q ˆ Zpm´q ˆ Zp∆q ˆ Zp∆q.
571
+ Then the irreducibility of Zpmq ˆ Zpmq is obtained through Lemma 2.19.
572
+ Notice we only used the existence of one good segment in the proof, although there is
573
+ two from Corollary 3.8.
574
+
575
+ 10
576
+ LÉA BITTMANN
577
+ 4. Determinant formula
578
+ 4.1. Alternate sum formula. One of the results of [LM18] is an alternate sum formula
579
+ for every regular real representation using standard representations. Let π “ Zpmq be a
580
+ regular real representation, with m “ ra1; b1s ` ¨ ¨ ¨ ` raN; bNs such that b1 ą ¨ ¨ ¨ ą bN.
581
+ In the Grothendieck ring,
582
+ (4.1)
583
+ π “
584
+ ÿ
585
+ σ1PSN
586
+ σ0ďσ1ďσ
587
+ sgnpσ1σqZpraσp1q; bσ1p1qsq ˆ Zpraσp2q; bσ1p2qsq ˆ ¨ ¨ ¨ ˆ ZpraσpNq; bσ1pNqsq,
588
+ where σ “ σm and for all i,
589
+ σ´1
590
+ 0 piq “ max
591
+
592
+ j ď xi | j R σ´1
593
+ 0 pti ` 1, . . . , Nuq
594
+ (
595
+ ,
596
+ with xi “ #tj | aj ď bi ` 1u.
597
+ Remark 4.1. The permutation σ0 satisfies
598
+ σ0 ď σ1
599
+ ô
600
+ @i P t1, . . . , Nu, aσpiq ď bσ1piq ` 1.
601
+ We deduce that equation (4.1) is equivalent to
602
+ (4.2)
603
+ π “
604
+ ÿ
605
+ σ1PSN
606
+ σ1ďσ
607
+ sgnpσ1σqZpraσp1q; bσ1p1qsq ˆ Zpraσp2q; bσ1p2qsq ˆ ¨ ¨ ¨ ˆ ZpraσpNq; bσ1pNqsq.
608
+ Indeed, for σ1 ą σ0, at least one of the Zpraσpiq, bσ1piqsq is not defined, and the term does
609
+ not contribute to the sum in (4.2).
610
+ For all i P t1, . . . , Nu, set a1
611
+ i “ aσpiq, then equation (4.2) can be rewritten
612
+ (4.3)
613
+ π “ sgnpσq
614
+ ÿ
615
+ σ1PrId,σs
616
+ sgnpσ1qZpra1
617
+ 1; bσ1p1qsq ˆ Zpra1
618
+ 2; bσ1p2qsq ˆ ¨ ¨ ¨ ˆ Zpra1
619
+ N; bσ1pNqsq,
620
+ where rId, σs denotes the Bruhat interval of permutations in SN lower than σ.
621
+ 4.2. Matrix determinant. Equation (4.3) is similar to the determinant of a matrix, with
622
+ some entries replaced by zeros to account for the missing permutations σ1. More precisely,
623
+ for σ1, σ2 permutations in SN, let
624
+ Γrσ1, σ2s :“ tpi, σpiqq | σ P rσ1, σ2s, 1 ď i ď Nu,
625
+ then permutations whose graph is contained in ΓrId, σs form the right convex hull, from
626
+ the work of Sjöstrand [Sjo07]. The following is obtained using [Sjo07, Theorem 4].
627
+ Proposition 4.2. [CSB21, Proposition 3.3] If the permutation σ P SN avoids the patterns
628
+ 4231 and 34121, and M “ pmi,jq1ďi,jďN is a square N ˆ N-matrix, then
629
+ (4.4)
630
+ detpM|ΓrId,σsq “
631
+ ÿ
632
+ σ1PrId,σs
633
+ sgnpσ1qm1,σ1p1qm2,σ1p2q ¨ ¨ ¨ mN,σ1pNq.
634
+ Remark 4.3. Using Ferrers boards (see Appendix A), the determinant in equation (4.4)
635
+ can be computed placing the coefficient mi,j in the box pi, jq of rNs2 if it is coloured and
636
+ 0 if it is not. Note that the dots are placed on the Zprai; bisq.
637
+ Combining Proposition 4.2 with (4.3), and assuming σ avoids the patterns 4231 and
638
+ 3412, we obtain the following:
639
+ (4.5)
640
+ π “ sgnpσq det
641
+ `
642
+ pZpa1
643
+ i; bjqq1ďi,jďN|ΓrId,σs
644
+ ˘
645
+ .
646
+ 1in [Sjo07, CSB21], the pattern avoidance condition is weaker, permutations are assumed to avoid the
647
+ patterns 4231, 35142, 42513, and 351624
648
+
649
+ ON A DETERMINANT FORMULA FOR SOME REAL REGULAR REPRESENTATIONS
650
+ 11
651
+ 4.3. Lewis Carroll’s identity. The following result is usually called the Lewis Carroll’s
652
+ identity or the Desnanot–Jacobi identity.
653
+ Proposition 4.4. For M a square N ˆ N-matrix, and A, B Ă t1, . . . , Nu, let MB
654
+ A be the
655
+ matrix obtained from M by removing all rows indexed by elements of A and all columns
656
+ indexed by elements of B. Then, for all 1 ď a ă a1 ď N and 1 ď b ă b1 ď N,
657
+ (4.6)
658
+ detpMq detpMb,b1
659
+ a,a1q “ detpMb
660
+ aq detpMb1
661
+ a1q ´ detpMb1
662
+ a q detpMb
663
+ a1q.
664
+ We can use Proposition 4.4 with equation (4.5) to write relations in the Grothendieck
665
+ ring R involving π. However, if M “
666
+ `
667
+ pZpa1
668
+ i; bjqq1ďi,jďN|ΓrId,σs
669
+ ˘
670
+ , the determinant of the
671
+ submatrix Mj
672
+ i does not necessarily realize (up to a sign) the class of an irreducible repre-
673
+ sentation Zpm1q in R, for all 1 ď i, j ď N.
674
+ Example 4.5. Let m “ r1; 4s`r0; 3s`r2; 2s, the corresponding permutation is the reflection
675
+ σ “ p12q P S3. The alternate sum formula for the class of the irreducible representation is
676
+ Zpmq “ Zpr1; 4sq ˆ Zpr0; 3sq ˆ Zpr2; 2sq ´ Zpr0; 4sq ˆ Zpr1; 3sq ˆ Zpr2; 2sq,
677
+ “ ´
678
+ ������
679
+ Zpr0; 4sq
680
+ Zpr0; 3sq
681
+ 0
682
+ Zpr1; 4sq
683
+ Zpr1; 3sq
684
+ 0
685
+ 0
686
+ 0
687
+ Zpr2; 2sq
688
+ ������
689
+ .
690
+ Let M be the above matrix, then
691
+ detpM1
692
+ 2 q “ Zpr0; 3sq ˆ Zpr2; 2sq “ Zpr0; 3s ` r2; 2sq
693
+ P R,
694
+ detpM1
695
+ 3 q “ 0 ‰ Zpm1q.
696
+ Nevertheless, it is possible to write explicit formulas in the Grothendieck ring R in some
697
+ interesting cases.
698
+ We will use the following key result.
699
+ Proposition 4.6. [CSB21, Proposition 4.17] Let σ be a permutation in SN avoiding the
700
+ patterns 4231 and 3412, and choose i P rNs. Let σi P SN´1 be the "flatten" permutation
701
+ obtained from σ by removing pi, σpiqq and shifting the remaining numbers appropriately.
702
+ Then for M a pN ´ 1q ˆ pN ´ 1q-matrix,
703
+ (4.7)
704
+ detpM|ΓrId,σsσpiq
705
+ i
706
+ q “ detpM|ΓrId,σisq.
707
+ Remark 4.7. Note for M a N ˆ N-matrix and for 1 ď i, j ď N,
708
+ `
709
+ M|ΓrId,σs
710
+ ˘j
711
+ i “ Mj
712
+ i |ΓrId,σsj
713
+ i .
714
+ 5. Extended T-system formula
715
+ Our main result is the following, which will be proven in Section 5.2 and 5.3.
716
+ Theorem 5.1. Let m “ ∆1`∆2`¨ ¨ ¨`∆N be a regular multisegment, such that b1 ą b2 ą
717
+ ¨ ¨ ¨ ą bN, where for all 1 ď i ď N, ∆i “ rai; bis. Assume the corresponding permutation σ
718
+ avoids the patterns 4231 and 3412, and that σpNq ‰ N. Let
719
+ I “
720
+ "
721
+ i | aN ď ai
722
+ bi ď bσpNq
723
+ *
724
+ “ ti1 ă i2 ă ¨ ¨ ¨ iru.
725
+ Then, we have the following relation, in the Grothendieck ring R:
726
+ (5.1)
727
+ Zpmz∆Nq ˆ Zpmz∆σpNqq “ Zpmq ˆ Zpmz∆N, ∆σpNqq ` Zpm1q ˆ Zpm2q,
728
+
729
+ 12
730
+ LÉA BITTMANN
731
+ where
732
+ m1 “
733
+ ÿ
734
+ jRI
735
+ ∆j `
736
+ r´1
737
+ ÿ
738
+ k“1
739
+ raik; bik`1s,
740
+ m2 “
741
+ ÿ
742
+ iRI
743
+ ∆i `
744
+ r´1
745
+ ÿ
746
+ k“1
747
+ raik`1; biks.
748
+ Moreover, the products in both terms on the right hand side of (5.1) are irreducible.
749
+ Remark 5.2.
750
+ (1) If σpNq “ N, then the segment ∆N is not linked to any other segment
751
+ of m. In that case
752
+ Zpmq “ Zpmz∆Nq ˆ Zp∆Nq.
753
+ (2) As σ avoids the pattern 4231, the segments ∆i, with i P I form a ladder.
754
+ Corollary 5.3. Let us assume the permutation σ avoids the patterns 4231 and 3412 and
755
+ satisfies σp1q ‰ 1. Let
756
+ J “
757
+ "
758
+ j | aj ď a1
759
+ bσp1q ď bj
760
+ *
761
+ “ tj1 ă j2 ă ¨ ¨ ¨ jsu.
762
+ The following relation in satisfied in the Grothendieck ring R:
763
+ (5.2)
764
+ Zpmz∆1q ˆ Zpmz∆σp1qq “ Zpmq ˆ Zpmz∆1, ∆σp1qq ` Zpm1q ˆ Zpm2q,
765
+ where
766
+ m1 “
767
+ ÿ
768
+ iRJ
769
+ ∆i `
770
+ s´1
771
+ ÿ
772
+ k“1
773
+ rajk; bjk`1s,
774
+ m2 “
775
+ ÿ
776
+ iRJ
777
+ ∆i `
778
+ s´1
779
+ ÿ
780
+ k“1
781
+ rajk`1; bjks.
782
+ Proof. The result is obtained by applying Theorem 5.1 to the irreducible representation
783
+ Zpm1q, with m1 “ r´bN; ´aNs ` ¨ ¨ ¨ ` r´b1; ´a1s.
784
+
785
+ Example 5.4.
786
+ (1) Let m “ r2; 3s`r0; 2s`r1; 1s. The corresponding regular represen-
787
+ tation Zpmq is real, since its associated permutation is σ “ 231. It has two good
788
+ right segments, which are r0; 2s and r1; 1s (∆σp1q and ∆3). Applying Theorem 5.1
789
+ gives the following relation:
790
+ Zpr2; 3s ` r0; 2sq ˆ Zpr0; 2s ` r1; 1sq “ Zpmq ˆ Zpr0; 2sq ` Zpr0; 2sq ˆ Zpr1; 3s ` r0; 2sq.
791
+ Note that Zpr0; 2s`r1; 1sq – Zpr0; 2sqˆZpr1; 1sq, and in this case the above relation
792
+ can be simplified by Zpr0; 2sq.
793
+ (2) Let m “ r1; 6s`r3; 5s`r0; 4s`r2; 3s. The corresponding regular representation Zpmq
794
+ is real, since its associated permutation is σ “ 3142. It has two good segments,
795
+ which are r1; 6s (left) and r2; 3s (right). Applying Theorem 5.1 gives the following
796
+ relation:
797
+ Zpr1; 6s ` r3; 5s ` r0; 4sq ˆ Zpr1; 6s ` r0; 4s ` r2; 3sq “ Zpmq ˆ Zpr1; 6s ` r0; 4sq
798
+ ` Zpr1; 6s ` r0; 4s ` r3; 3sq ˆ Zpr1; 6s ` r0; 4s ` r2; 5sq.
799
+ Whereas applying Corollary 5.3 gives the following relation:
800
+ Zpr3; 5s ` r0; 4s ` r2; 3sq ˆ Zpr1; 6s ` r3; 5s ` r2; 3sq “ Zpmq ˆ Zpr3; 5s ` r2; 3sq
801
+ ` Zpr3; 5s ` r2; 3s ` r1; 4sq ˆ Zpr3; 5s ` r2; 3s ` r0; 6sq.
802
+
803
+ ON A DETERMINANT FORMULA FOR SOME REAL REGULAR REPRESENTATIONS
804
+ 13
805
+ 5.1. Ladder case. If m is a ladder, then the corresponding permutation is the longest
806
+ permutation w0. Thus ΓrId, w0s “ rNs. In that case, the result of Theorem 5.1 is al-
807
+ ready known, as Corollary 12 of [LM14], or Theorem 4.1 in [MY12], in the language of
808
+ representations of quantum affine algebras.
809
+ Theorem 5.5. Let m “ ∆1 ` ¨ ¨ ¨ ` ∆N be a ladder multisegment, with ∆i “ rai; bis, then
810
+ (5.3)
811
+ Zp∆1 ` ¨ ¨ ¨ ` ∆N´1q ˆ Zp∆2 ` ¨ ¨ ¨ ` ∆Nq “ Zpmq ˆ Zp∆2 ` ¨ ¨ ¨ ` ∆N´1q
812
+ ` Zpra1; b2s ` ¨ ¨ ¨ ` raN´1; bNsq ˆ Zpra2; b1s ` ¨ ¨ ¨ ` raN; bN´1sq.
813
+ In this case, the result comes from the application of the Lewis Carroll identity (4.6)
814
+ to the matrix pZprai; bjsqq1ďi,jďN, on lines and columns 1 and N. However, in order to
815
+ understand better the general case, let us consider in more details the application of the
816
+ Lewis Carroll identity to the matrix M “ pZpra1
817
+ i; bjsqq1ďi,jďN (recall that a1
818
+ i “ aN´i`1).
819
+ One can look at what happens to the Ferrers boards (see Appendix A) in this case . The
820
+ permutation w0 is represented by an anti-diagonal, and the Ferrers board is the full grid.
821
+ Taking out row 1 and column N, one gets exactly the grid corresponding to the longest
822
+ element of SN´1.
823
+ S5 Q p15qp24q “
824
+
825
+
826
+
827
+
828
+
829
+ ÝÑ
830
+
831
+
832
+
833
+
834
+ “ p14qp23q P S4
835
+ As the signature of the longest permutation in SN is p´1qt N
836
+ 2 u, one has
837
+ p´1qt N´1
838
+ 2
839
+ u detpM1
840
+ Nq “ Zp∆1 ` ¨ ¨ ¨ ` ∆N´1q,
841
+ p´1qt N´1
842
+ 2
843
+ u detpMN
844
+ 1 q “ Zp∆2 ` ¨ ¨ ¨ ` ∆Nq,
845
+ p´1qt N
846
+ 2 u´1 detpM1,N
847
+ 1,N q “ Zp∆2 ` ¨ ¨ ¨ ` ∆N´1q.
848
+ Now, taking out row 1 and column 1, or row N and column N, one gets again the grid
849
+ corresponding to the longest element of SN´1, but the dots have moved. For example,
850
+ if one does a cyclic permutation of the columns by shifting them to the left and placing
851
+ column 1 at the end, then taking out row 1 and column 1 gives the same result as taking
852
+ out row 1 and column N in the shifted board.
853
+
854
+
855
+
856
+
857
+
858
+ ÝÑ
859
+
860
+
861
+
862
+
863
+
864
+
865
+
866
+
867
+
868
+
869
+ ÝÑ
870
+
871
+
872
+
873
+
874
+
875
+
876
+
877
+ The same operation can be applied to the matrix M. Note that the new dots are placed
878
+ on the coefficients Zpra1; b2sq, . . . , ZpraN´1; bNsq. The permutation of the columns does
879
+ not change the sign of the determinant because the columns on which the determinant is
880
+ computed are not permuted with respect to one another.
881
+ detpM1
882
+ 1 q “ detpshiftpMqN
883
+ 1 q “ p´1qt N´1
884
+ 2
885
+ uZpra1; b2s ` ¨ ¨ ¨ ` raN´1; bNsq.
886
+ Similarly,
887
+ detpMN
888
+ N q “ p´1qt N´1
889
+ 2
890
+ uZpra2; b1s ` ¨ ¨ ¨ ` raN; bN´1sq.
891
+
892
+ 14
893
+ LÉA BITTMANN
894
+ Finally, the Lewis Carroll identity (4.6) gives relation (5.1) of Theorem 5.5.
895
+ Moreover, the irreductibility of the terms Zpmq ˆ Zp∆2 ` ¨ ¨ ¨ ` ∆N´1q and Zpra1; b2s `
896
+ ¨ ¨ ¨ ` raN´1; bNsq ˆ Zpra2; b1s ` ¨ ¨ ¨ ` raN; bN´1sq is proven in [BLM13, Exemple 4.5].
897
+ 5.2. Proof of relation (5.1). Let us apply Proposition 4.4 (Lewis Carroll’s identity) to the
898
+ matrix ˜
899
+ M “ M|ΓrId,σs, where M “ ppZpa1
900
+ i; bjqq1ďi,jďNq, on rows σ´1pNq, N and columns
901
+ σpNq, N:
902
+ (5.4)
903
+ detp ˜
904
+ Mq detp ˜
905
+ MσpNq,N
906
+ σ´1pNq,Nq “ detp ˜
907
+ MσpNq
908
+ σ´1pNqq detp ˜
909
+ MN
910
+ N q ´ detp ˜
911
+ MN
912
+ σ´1pNqq detp ˜
913
+ MσpNq
914
+ N
915
+ q.
916
+ Using Proposition 4.6,
917
+ det
918
+ ´
919
+ ˜
920
+ MσpNq
921
+ N
922
+ ¯
923
+ “ det
924
+ ´
925
+ MσpNq
926
+ N
927
+ |ΓrId,σNs
928
+ ¯
929
+ ,
930
+ det
931
+ ´
932
+ ˜
933
+ MN
934
+ σ´1pNq
935
+ ¯
936
+ “ det
937
+ ´
938
+ MN
939
+ σ´1pNq|ΓrId,σσ´1pNqs
940
+ ¯
941
+ .
942
+ Since σN and σσ´1pNq satisfy the pattern avoidance condition, using (4.5), one has
943
+ det
944
+ ´
945
+ MσpNq
946
+ N
947
+ |ΓrId,σNs
948
+ ¯
949
+ “ sgnpσNqZp∆1 ` ¨ ¨ ¨ ` {
950
+ ∆σpNq ` ¨ ¨ ¨ ` ∆Nq,
951
+ det
952
+ ´
953
+ MN
954
+ σ´1pNq|ΓrId,σσ´1pNqs
955
+ ¯
956
+ “ sgnpσσ´1pNqqZp∆1 ` ∆2 ` ¨ ¨ ¨ ` ∆N´1q.
957
+ Note that for i P rNs, Mσpiq
958
+ i
959
+ is obtained by taking out the row and column containing the
960
+ coefficient Zpra1
961
+ i; bσpiqsq “ Zp∆σpiqq.
962
+ Similarly,
963
+ det
964
+ ´
965
+ ˜
966
+ MσpNq,N
967
+ σ´1pNq,N
968
+ ¯
969
+ “ sgnpσσ´1pNqN
970
+ qZp∆1 ` ¨ ¨ ¨ ` {
971
+ ∆σpNq ` ¨ ¨ ¨ ` ∆N´1q.
972
+ Let us now consider the coefficients detp ˜
973
+ MN
974
+ N q and detp ˜
975
+ MσpNq
976
+ σ´1pNqq. Using Lemma A.3, we
977
+ know that either the two columns σpNq and N or the two rows σ´1pNq and N of ˜
978
+ M are
979
+ similar (the zeros are at the same place). Assume that the two columns σpNq and N of ˜
980
+ M
981
+ are similar, meaning that there are as many zeros above the coefficient Zpra1
982
+ σ´1pNq; bσpNqsq
983
+ as above Zpra1
984
+ σ´1pNq; bNsq. Note that since σ avoids the pattern 4231, the dots in the lower
985
+ right corner of the Ferrers board form a ladder. In particular, one can apply the same
986
+ reasoning as in the ladder case.
987
+ Let us cyclically permute the columns σpNq, . . . , N in order to obtain column N in
988
+ position σpNq, and take the determinant of the minor shiftp ˜
989
+ MN
990
+ N q “ shiftp ˜
991
+ MqσpNq
992
+ N
993
+ instead
994
+ of the minor ˜
995
+ MN
996
+ N . Because theses columns have the same zero block in their upper part,
997
+ these determinants are equal. The resulting permutation is σN (same permutation as if
998
+ we had taken out row N and column σpNq).
999
+
1000
+
1001
+
1002
+
1003
+
1004
+ ÝÑ
1005
+
1006
+
1007
+
1008
+
1009
+
1010
+
1011
+
1012
+
1013
+ ÝÑ
1014
+
1015
+
1016
+
1017
+
1018
+
1019
+ For j R I, there are still dots placed on the Zpraj; bjsq, and the new dots are placed on
1020
+ the Zpraik`1; biksq, for 1 ď k ď r.
1021
+ Hence,
1022
+ detp ˜
1023
+ MN
1024
+ N q “ detpshiftp ˜
1025
+ MqσpNq
1026
+ N
1027
+ q “ sgnpσNqZpm2q.
1028
+
1029
+ ON A DETERMINANT FORMULA FOR SOME REAL REGULAR REPRESENTATIONS
1030
+ 15
1031
+ Similarly,
1032
+ detp ˜
1033
+ MσpNq
1034
+ σ´1pNqq “ sgnpσσ´1pNqqZpm1q.
1035
+ Let us now consider the signatures of the permutations, using the criteria of Lemma A.4:
1036
+ sgnpσq “
1037
+ ÿ
1038
+
1039
+ 7
1040
+ "
1041
+
1042
+
1043
+ *
1044
+ .
1045
+ Let us separate the grid (or matrix) in 4 blocks, A, B, C and the upper right being empty
1046
+ (or filled with zeros).
1047
+ p0q
1048
+ A
1049
+ B
1050
+ C
1051
+
1052
+ pN, σpNqq
1053
+
1054
+ pσ´1pNq, Nq
1055
+ Note that zone C contains r dots, where r is the cardinal of I. Going from σ to σN, we
1056
+ take out the dot on the last line. It is clear that all dots in zones A, B and C except the
1057
+ bottom one will have the same contribution to the sum ℓ. The difference is thus equal to
1058
+ the contribution of the bottom dot pN, σpNqq, which is r ´ 1. Hence
1059
+ sgnpσNq “ sgnpσqp´1qr´1.
1060
+ Now, going from σ to σσ´1pNq, we take out the dot pσ´1pNq, Nq. All dots is block A will
1061
+ still have the same contribution, but the dots in block B will count one less dot in their
1062
+ upper right corner. The remaining dots in block C will also count one less dot. Thus,
1063
+ sgnpσσ´1pNqq “ sgnpσqp´1qr´1`7B.
1064
+ Going from σ to σσ´1pNqN
1065
+ , we take out both these dots, and the signature of the resulting
1066
+ permutation is
1067
+ sgnpσσ´1pNqN
1068
+ q “ sgnpσqp´1q7B`1.
1069
+ Simplifying the signs in (5.4), we get the desired relation (5.1).
1070
+ Finally, in the case where it is not the two columns but the two rows σ´1pNq and N of
1071
+ ΓrId, σs which are identical, one can apply the same procedure of cyclically permuting the
1072
+ rows of the matrix ˜
1073
+ M. As a result,
1074
+ detp ˜
1075
+ MN
1076
+ N q “ sgnpσσ´1pNqqZpm2q,
1077
+ detp ˜
1078
+ MσpNq
1079
+ σ´1pNqq “ sgnpσNqZpm1q.
1080
+ But a symmetric reasoning on the signatures, we also get the desired relation, which
1081
+ concludes the proof of (5.1).
1082
+ 5.3. Proof of irreducibility.
1083
+ 5.3.1. Irreductibility of ZpmqˆZpmz∆N, ∆σpNqq: As in Remark 3.9, let us prove this result
1084
+ by induction on N ě 3, the number of segments in m. For N “ 3, assuming σp3q ‰ 3, then
1085
+ Zpmz∆3, ∆σp3qq “ Zp∆q, where ∆ is necessarily a good segment of m by Proposition 3.7.
1086
+ By definition, Zpmq ˆ Zp∆q is irreducible.
1087
+ Let N ě 4, from Proposition 3.7, as σ avoids the patterns 4231 and 3412, we know that
1088
+ either m is a ladder or it has at least one good segment ∆ which is different from ∆N and
1089
+ ∆σpNq. The ladder case has been considered in Section 5.1. Otherwise, using Lemma 3.5,
1090
+ we know that ∆ is also a good segment of m1 :“ mz∆N, ∆σpNq (on the same side).
1091
+
1092
+ 16
1093
+ LÉA BITTMANN
1094
+ We can assume without loss of generality that ∆ is a good left segment of m and m1.
1095
+ Then, as in the proof of Corollary 3.8,
1096
+ Zpmq ˆ Zpm1q ãÑ Zpmq ˆ Zp∆q
1097
+ looooooomooooooon
1098
+ irreducible
1099
+ ˆZpm1z∆q – Zp∆q ˆ Zpmq ˆ Zpm1z∆q
1100
+ ãÑ Zp∆q ˆ Zp∆q
1101
+ looooooomooooooon
1102
+ irreducible
1103
+ ˆZpmz∆q ˆ Zpm1z∆q.
1104
+ By induction, Zpmz∆q ˆ Zpm1z∆q is irreducible.
1105
+ Similarly,
1106
+ Zpmq ˆ Zpm1q և Zpmz∆q ˆ Zpm1z∆q ˆ Zp∆q ˆ Zp∆q.
1107
+ We conclude that Zpmq ˆ Zpm1q is irreducible by Lemma 2.19.
1108
+ 5.3.2. Irreducibility of Zpm1q ˆ Zpm2q: As before, we prove this by induction, this time on
1109
+ N ´ r, where r “ |J|. If r “ N, then m is a ladder, and the result was proven in [BLM13,
1110
+ Exemple 4.5]. If N ą r, then m is not a ladder. In that case, either ∆1 or ∆σp1q is a good
1111
+ segment of m and does not form a ladder with ∆N and ∆σpNq. This good segment is not
1112
+ one of the ∆ik and thus it is a segment of m1 and m2. Let us prove it is a common good
1113
+ segment of m1 and m2.
1114
+ Let us assume that σpNq ‰ 1 and that ∆1 is a good left segment of m. Clearly, ∆1 does
1115
+ not precede any segment of m1 or m2.
1116
+ Suppose ∆1 does not form a ladder with the segments which precedes it in m1. Thus
1117
+ there exists ∆, ∆1 such that ∆ ă ∆1, ∆1 ă ∆1 and ∆1 Ĺ ∆. As ∆1 is a good segment of
1118
+ m, necessarily exactly one of ∆, ∆1 is in m while the other has be shifted. If ∆1 P m, then
1119
+ ∆ “ raik; bik`1s for some k. In that case, ∆ik`1 ă ∆1 and ∆1 ć ∆ik`1, which contradicts
1120
+ the fact that ∆1 is a good segment of m.
1121
+ ∆1
1122
+ bik`1
1123
+ ∆1
1124
+ aik
1125
+ aik`1
1126
+ If ∆ “ rai; bis P m, then there is k such that ∆1 “ raik; bik`1s. If both bik ą bi and
1127
+ aik`1 ă ai then i P I, which contradicts the fact that ∆ has not been shifted.
1128
+ ∆1
1129
+
1130
+ bik`1
1131
+ aik
1132
+ aik`1
1133
+ bik
1134
+ aik
1135
+ If bik ă bi, then ∆ik, ∆, ∆1 do not form a ladder in m, if aik`1 ą ai then ∆, ∆ik`1, ∆1 do
1136
+ not form a ladder in m. In both cases, it contradicts the fact that ∆1 is a good segment of
1137
+ m.
1138
+ ∆1
1139
+
1140
+ bik`1
1141
+ aik
1142
+ bik
1143
+ aik
1144
+ ∆1
1145
+
1146
+ bik`1
1147
+ aik
1148
+ aik`1
1149
+ Hence by the criteria in section 3.2, ∆1 is good segment of m1. We show in a similar way
1150
+ that ∆1 is good segment of m2.
1151
+ Then,
1152
+ Zpm1q ˆ Zpm2q ãÑ Zp∆1q ˆ Zp∆1q
1153
+ loooooooomoooooooon
1154
+ irreducible
1155
+ ˆZpm1z∆1q ˆ Zpm2z∆1q.
1156
+
1157
+ ON A DETERMINANT FORMULA FOR SOME REAL REGULAR REPRESENTATIONS
1158
+ 17
1159
+ By induction, Zpm1z∆1q ˆ Zpm2z∆1q is irreducible.
1160
+ Similarly,
1161
+ Zpm1q ˆ Zpm2q և Zpm1z∆1q ˆ Zpm2z∆1q ˆ Zp∆1q ˆ Zp∆1q.
1162
+ We conclude that Zpm1q ˆ Zpm2q is irreducible by Lemma 2.19.
1163
+ 6. Relation to quantum affine algebras representations
1164
+ 6.1. Translation of results. As mentioned above, the result of Theorem 5.1 has a quan-
1165
+ tum affine analog through quantum affine Schur-Weyl duality. Indeed, when q is not a
1166
+ root of unity, Chari-Pressley [CP96] have established an equivalence of categories between
1167
+ the category of finite-dimensional representations of the affine Hecke algebra 9Hq2pnq and
1168
+ the category of (level n) finite-dimensional representations of the quantum affine algebra
1169
+ Uqppslkq, when k ě n. Moreover, through type theory (see for example [Hei11]), it is known
1170
+ that finite-dimensional representations of the affine Hecke algebra 9Hq2pnq are equivalent
1171
+ to finite length representations of GLnpFq.
1172
+ This equivalence is monoidal, in the sense that the parabolic induction of two represen-
1173
+ tations in C is translated into the tensor product of the corresponding Uqppslkq-modules.
1174
+ Instead of multisegments, finite-dimensional irreducible Uqppslkq-modules have been clas-
1175
+ sified [CP95] using Drinfeld polynomials, which correspond to their highest-weights. By a
1176
+ process similar to the reduction to cuspidal lines described in the beginning of Section 2.2,
1177
+ the study of the category of finite-dimensional Uqppslkq-modules amounts to the study of a
1178
+ skeleton Serre subcategory C , introduced by Hernandez-Leclerc (see [HL10, Section 3.7]),
1179
+ in relation to cluster algebras. Let R denote the Grothendieck ring of the monoidal cate-
1180
+ gory C .
1181
+ Simple objects in the category C are then parametrized, up to isomorphism, by mono-
1182
+ mials in the formal variables Yi,p, pi, pq P ˆI :“ tt1, . . . , k ´ 1u ˆ Z | i ` p ` 1 P 2Zu. The
1183
+ correspondence between segments and formal variables is as follows:
1184
+ (6.1)
1185
+ ra; bs
1186
+ ÞÑ
1187
+ Yb´a`1,´a´b,
1188
+ r1´i´p
1189
+ 2
1190
+ ; i´p´1
1191
+ 2
1192
+ s
1193
+ �
1194
+ Yi,p.
1195
+ Since we are using the Zelevinsky classification for the representations of GLnpFq, from
1196
+ now on irreducible Uqppslkq-modules will be denoted LpMq, with M their highest loop-
1197
+ weight, in the set of dominant loop-weights:
1198
+ ˆPℓ :“
1199
+ # N
1200
+ ź
1201
+ j“1
1202
+ Yij,pj | @ 1 ď j ď N, pij, pjq P ˆI
1203
+ +
1204
+ .
1205
+ Through this correspondence, ladder representations are usually called snake modules
1206
+ in the context of quantum affine algebras. For completeness, recall the definition of snakes
1207
+ modules by Mukhin-Young. For M “ śN
1208
+ j“1 Yij,pj P ˆPℓ, the simple module LpMq is a snake
1209
+ module if and only if, for all 1 ď j ď N,
1210
+ pj`1 ´ pj ě |ij`1 ´ ij| ` 2.
1211
+ It clearly translates to the definition of ladders, as in Definition 2.10. Note that a definition
1212
+ of snake modules for type B quantum affine algebras was also introduced by Mukhin-Young.
1213
+ Moreover, as stated above, Theorem 5.5 from [LM14] was previously established by
1214
+ Mukhin-Young in terms of snake modules.
1215
+
1216
+ 18
1217
+ LÉA BITTMANN
1218
+ For M “ śN
1219
+ j“1 Yij,pj P ˆPℓ such that LpMq is a snake module, we have the following
1220
+ relation, in the Grothendieck ring R [MY12, Theorem 4.1]:
1221
+ (6.2)
1222
+ «
1223
+ L
1224
+ ˜N´1
1225
+ ź
1226
+ j“1
1227
+ Yij,pj
1228
+ ¸ff
1229
+ ¨
1230
+ «
1231
+ L
1232
+ ˜ N
1233
+ ź
1234
+ j“2
1235
+ Yij,pj
1236
+ ¸ff
1237
+
1238
+ «
1239
+ L
1240
+ ˜N´1
1241
+ ź
1242
+ j“2
1243
+ Yij,pj
1244
+ ¸ff
1245
+ ¨ rLpMqs
1246
+ `
1247
+
1248
+ LpM1q
1249
+
1250
+ ¨
1251
+
1252
+ LpM2q
1253
+
1254
+ ,
1255
+ where M1, M2 are called the neighboring snakes of M, and correspond to m1 and m2 in
1256
+ this case. Note that relation (6.2) was established in [MY12] also in type B. Moreover, as
1257
+ in our result, both terms on the left hand side of (6.2) correspond to irreducible modules.
1258
+ For these reasons, our theorem 5.1 is a generalization of [MY12, Theorem 4.1], and we
1259
+ have established some new relations between irreducible representations of Uqppslkq.
1260
+ Example 6.1. Let us translate the relations obtained in Example 5.4:
1261
+ (1) For k ě 4, let M “ Y2,´5Y3,´2Y1,´2 P ˆPℓ. As before, the corresponding regular
1262
+ representation LpMq is real. Applying Theorem 5.1 gives the following relation:
1263
+ LpY2,´5Y3,´2q ¨ LpY3,´2Y1,´2q “ LpMq ¨ LpY3,´2q ` LpY3,´2q ˆ LpY3,´4Y3,´2q.
1264
+ (2) For k ě 7, let M “ Y6,´7Y3,´8Y5,´4Y2,´5 P ˆPℓ. The corresponding regular repre-
1265
+ sentation LpMq is real and applying Theorem 5.1 gives the following relation:
1266
+ LpY6,´7Y3,´8Y5,´4q ¨ LpY6,´7Y5,´4Y2,´5q “ LpMq ¨ LpY6,´7Y5,´4q
1267
+ ` LpY6,´7Y5,´4Y1,´6q ¨ LpY6,´7Y5,´4Y4,´7q.
1268
+ Whereas applying Corollary 5.3 gives the following relation:
1269
+ LpY3,´8Y5,´4Y2,´5q ¨ LpY6,´7Y3,´8Y2,´5q “ LpMq ¨ LpY3,´8Y2,´5q
1270
+ ` LpY3,´8Y2,´5Y4,´5q ¨ LpY3,´8Y2,´5Y7,´6q.
1271
+ Note that when k “ 7, the right hand side of the last relation simplifies as
1272
+ LpY3,´8Y2,´5Y4,´5q ¨ LpY3,´8Y2,´5q.
1273
+ 6.2. Relation to cluster algebras. In [HL16], Hernandez and Leclerc proved that the
1274
+ Grothendieck ring R had a cluster algebra structure for which the initial cluster variables
1275
+ are Kirillov-Reshetikhin modules (or Speh representations, as in Definition 2.8). Moreover,
1276
+ one of the key ingredients used for this result is the fact that the T-system relations (of
1277
+ which the Mukhin-Young extended T-systems are generalizations) correspond to exchange
1278
+ relations in the cluster algebra structure. The same authors also conjectured [HL16, Con-
1279
+ jecture 5.2] that the cluster variables were in bijection with the prime real simple modules.
1280
+ Part of this conjecture was proven by Kashiwara-Kim-Oh-Park in [KKOP21], where they
1281
+ proved that all cluster variables correspond to prime real simple modules.
1282
+ In [DLL19] Duan-Li-Luo proved that prime snake modules correspond to cluster vari-
1283
+ ables, thus proving Hernandez-Leclerc’s conjecture for snake modules, and for that purpose
1284
+ introduced new relations in the Grothendieck ring R, which they interpreted as exchange
1285
+ relations. However, it is unclear whether (some of) the Mukhin-Young extended T-systems
1286
+ can be interpreted as exchange relations.
1287
+ One of the motivations behind this work was to obtain more generalizations of the T-
1288
+ system relations, which could be interpreted as exchange relations. We conjecture that,
1289
+ equipped with more explicit relations such as (5.1) and (5.2), one could prove that all prime
1290
+ real regular representations (for which there exists the criterion of Theorem 2.18 [LM18])
1291
+ correspond to cluster variables.
1292
+
1293
+ ON A DETERMINANT FORMULA FOR SOME REAL REGULAR REPRESENTATIONS
1294
+ 19
1295
+ However, we already observe that not all relations (5.1) and (5.2) have the form of an
1296
+ exchange relation. For example, in the relation in Example 6.1 (1), one of the factors in
1297
+ the left hand side is not prime LpY3,´2Y1,´2q – LpY3,´2q¨LpY1,´2q. Thus the left hand side
1298
+ is a product of three prime irreducible modules, and the relation cannot be an exchange
1299
+ relation.
1300
+ Appendix A. Ferrers boards
1301
+ Permutations in SN can be represented by placing dots in an N ˆ N-grid.
1302
+ For all
1303
+ 1 ď i ď N, place a dot in the box pi, σpiqq. Then the set ΓrId, σs Ă rNs2 can be represented
1304
+ in the grid by colouring the boxes pi, σ1piqq, for σ1 ď σ.
1305
+ Example A.1. The grid corresponding to the permutation σ “ 152463 is
1306
+
1307
+
1308
+
1309
+
1310
+
1311
+
1312
+ Remark A.2. By the study of Sjöstrand [Sjo07], the set ΓrId, σs is a right-aligned Skew
1313
+ Ferrers board, in particular it is the union of the rectangles
1314
+
1315
+
1316
+ for all pairs of (not
1317
+ necessarily distinct) dots ���.
1318
+ Lemma A.3. The σ be a permutation in SN which avoids the pattern 3412, then either
1319
+ the columns σpNq and N or the rows σ´1pNq and N of ΓrId, σs are identical.
1320
+ Proof. If the columns σpNq and N of ΓrId, σs are different, then there is a dot above
1321
+ pσ´1pNq, Nq and to the right of pN, σpNqq.
1322
+
1323
+
1324
+
1325
+ pσ´1pNq, Nq
1326
+ pN, σpNqq
1327
+ Similarly, if the rows σ´1pNq and N are different, then there is a dot to the left of pN, σpNqq
1328
+ and below pσ´1pNq, Nq.
1329
+ Thus if both the columns σpNq and N and the rows σ´1pNq and N are different, then
1330
+ there a 3412 configuration, which is a contradiction.
1331
+
1332
+
1333
+
1334
+
1335
+
1336
+ The following is clear from the definition of the signature.
1337
+ Lemma A.4. The signature of the permutation σ is equal to p´1qℓ, where ℓ is the sum
1338
+ over all dots ‚ of the number of dots strictly above and to the right of ‚.
1339
+
1340
+ 20
1341
+ LÉA BITTMANN
1342
+ References
1343
+ [BLM13] I. Badulescu, E. Lapid, and A. Mínguez.
1344
+ Une condition suffisante pour
1345
+ l’irréductibilité d’une induite parabolique de GLpm, Dq.
1346
+ Ann. Inst. Fourier
1347
+ (Grenoble), 63(6):2239–2266, 2013.
1348
+ [CG97] N. Chriss and V. Ginzburg. Representation Theory and Complex Geometry.
1349
+ Birkhäuser Boston, MA, first edition, 1997.
1350
+ [CP95] V. Chari and A. Pressley. Quantum affine algebras and their representations. In
1351
+ Representations of groups (Banff, AB, 1994), volume 16 of CMS Conf. Proc.,
1352
+ pages 59–78. Amer. Math. Soc., Providence, RI, 1995.
1353
+ [CP96] V Chari and A Pressley. Quantum affine algebras and affine Hecke algebras.
1354
+ Pacific J. Math., 174(2):295–326, 1996.
1355
+ [CR08] G. Chenevier and D. Renard. Characters of Speh representations and Lewis-
1356
+ Caroll identity. Represent. Theory, 12:447–452, 2008.
1357
+ [CSB21] S. Chepuri and M. Sherman-Bennett. 1324- and 2143-avoiding kazhdan–lusztig
1358
+ immanants and k-positivity.
1359
+ Canadian Journal of Mathematics, page 1–31,
1360
+ 2021.
1361
+ [DLL19] B. Duan, J.-R. Li, and Y.-F. Luo. Cluster algebras and snake modules. Journal
1362
+ of Algebra, 519:325 – 377, 2019.
1363
+ [FZ02] S. Fomin and A. Zelevinsky. Cluster algebras. I. Foundations. J. Amer. Math.
1364
+ Soc., 15(2):497–529, 2002.
1365
+ [Hei11] V. Heiermann.
1366
+ Opérateurs d’entrelacement et algèbres de Hecke avec
1367
+ paramètres d’un groupe réductif p-adique: le cas des groupes classiques. Selecta
1368
+ Math. (N.S.), 17(3):713–756, 2011.
1369
+ [Her06] D. Hernandez. The Kirillov-Reshetikhin conjecture and solutions of T-systems.
1370
+ J. Reine Angew. Math., 596:63–87, 2006.
1371
+ [HL10] D. Hernandez and B. Leclerc. Cluster algebras and quantum affine algebras.
1372
+ Duke Math. J., 154(2):265–341, 2010.
1373
+ [HL16] D. Hernandez and B. Leclerc. A cluster algebra approach to q-characters of
1374
+ Kirillov-Reshetikhin modules. J. Eur. Math. Soc. (JEMS), 18(5):1113–1159,
1375
+ 2016.
1376
+ [KKKO15] S-J. Kang, M. Kashiwara, M. Kim, and S. Oh. Simplicity of heads and socles
1377
+ of tensor products. Compos. Math., 151(2):377–396, 2015.
1378
+ [KKOP21] M. Kashiwara, M. Kim, S.-J. Oh, and E. Park. Monoidal categorification and
1379
+ quantum affine algebras II. arXiv e-prints, page arXiv:2103.10067, March 2021.
1380
+ [KNS11] A. Kuniba1, T. Nakanishi, and J. Suzuki. T-systems and Y-systems in inte-
1381
+ grable systems. J. Phys. A: Math. Theor., 44, 2011.
1382
+ [LM14] E. Lapid and A. Mínguez. On a determinantal formula of Tadić. Amer. J.
1383
+ Math., 136(1):111–142, 2014.
1384
+ [LM16] E. Lapid and A. Mínguez. On parabolic induction on inner forms of the gen-
1385
+ eral linear group over a non-archimedean local field.
1386
+ Selecta Math. (N.S.),
1387
+ 22(4):2347–2400, 2016.
1388
+ [LM18] E. Lapid and A. Mínguez. Geometric conditions for ˝-irreducibility of certain
1389
+ representations of the general linear group over a non-archimedean local field.
1390
+ Adv. Math., 339:113–190, 2018.
1391
+ [LS90] V. Lakshmibai and B. Sandhya. Criterion for smoothness of Schubert varieties
1392
+ in Slpnq{B. Proc. Indian Acad. Sci. Math. Sci., 100(1):45–52, 1990.
1393
+ [MS14] A. Mínguez and V. Sécherre.
1394
+ Représentations lisses modulo ℓ de GLmpDq.
1395
+ Duke Math. J., 163(4):795–887, 2014.
1396
+
1397
+ ON A DETERMINANT FORMULA FOR SOME REAL REGULAR REPRESENTATIONS
1398
+ 21
1399
+ [MY12] E. Mukhin and C. A. S. Young. Extended T-systems. Selecta Math. (N.S.),
1400
+ 18(3):591–631, 2012.
1401
+ [Nak01] H. Nakajima. Quiver varieties and finite-dimensional representations of quan-
1402
+ tum affine algebras. J. Amer. Math. Soc., 14(1):145–238, 2001.
1403
+ [Sjo07] J. Sjostrand. Bruhat intervals as rooks on skew Ferrers boards. J. Combin.
1404
+ Theory Ser. A, 114(7):1182–1198, 2007.
1405
+ [Tad95] M. Tadić. On characters of irreducible unitary representations of general linear
1406
+ groups. Abh.Math.Semin.Univ.Hambg., 65:341–363, 1995.
1407
+ [Zel80] A. Zelevinsky. Induced representations of reductive p-adic groups. II. On irre-
1408
+ ducible representations of GLpnq. Ann. Sci. École Norm. Sup. (4), 13(2):165–
1409
+ 210, 1980.
1410
+ [Zel81] A. Zelevinsky. The p-adic analog of the Kazhdan-Lusztig hypothesis. Funct
1411
+ Anal Its Appl, 15:83–92, 1981.
1412
+ Léa Bittmann, Hodge Institute, University of Edinburgh, United Kingdom.
1413
+ Email address: [email protected]
1414
+
B9AyT4oBgHgl3EQf4PrK/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
CdE4T4oBgHgl3EQfeQ2g/content/tmp_files/2301.05098v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
CdE4T4oBgHgl3EQfeQ2g/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
E9FJT4oBgHgl3EQfCizA/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d1ae9e99a2bda0e5ac34eb8b8eb81e4dddd360cc7209e0993f06abaa40f16221
3
+ size 138057
ENE2T4oBgHgl3EQfSgdx/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0670453256e56b7b4cc96a3ad2856ae4eecc525ae5f981ff66bcb403b5d3d59d
3
+ size 73132
HNE1T4oBgHgl3EQfrQW4/vector_store/index.faiss ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b5ab9bb029202b93a53b808fb7822668d9ea4e2c7fe86fa2a3ff7183979b6435
3
+ size 4259885
HNE1T4oBgHgl3EQfrQW4/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ebb6166df2d8282d357852720da2b7b4845cd0095898ebd10f329d24ccab173e
3
+ size 156958
HdFLT4oBgHgl3EQfIC9G/content/tmp_files/2301.11998v1.pdf.txt ADDED
@@ -0,0 +1,1923 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Augmented Reality’s Potential for Identifying and
2
+ Mitigating Home Privacy Leaks
3
+ Stefany Cruz1, Logan Danek1, Shinan Liu2, Christopher Kraemer6, Zixin Wang3
4
+ Nick Feamster2, Danny Yuxing Huang4, Yaxing Yao5, Josiah Hester6
5
+ 1Northwestern University, 2University of Chicago, 3Zhejiang University
6
+ 4New York University, 5University of Maryland, Baltimore County, 6Georgia Institute of Technology
7
+ Abstract—Users face various privacy risks in smart homes, yet
8
+ there are limited ways for them to learn about the details of such
9
+ risks, such as the data practices of smart home devices and their
10
+ data flow. In this paper, we present Privacy Plumber, a system
11
+ that enables a user to inspect and explore the privacy “leaks” in
12
+ their home using an augmented reality tool. Privacy Plumber
13
+ allows the user to learn and understand the volume of data
14
+ leaving the home and how that data may affect a user’s privacy—
15
+ in the same physical context as the devices in question, because
16
+ we visualize the privacy leaks with augmented reality. Privacy
17
+ Plumber uses ARP spoofing to gather aggregate network traffic
18
+ information and presents it through an overlay on top of the
19
+ device in an smartphone app. The increased transparency aims to
20
+ help the user make privacy decisions and mend potential privacy
21
+ leaks, such as instruct Privacy Plumber on what devices to block,
22
+ on what schedule (i.e., turn off Alexa when sleeping), etc. Our
23
+ initial user study with six participants demonstrates participants’
24
+ increased awareness of privacy leaks in smart devices, which
25
+ further contributes to their privacy decisions (e.g., which devices
26
+ to block).
27
+ I.
28
+ INTRODUCTION
29
+ The increasing adoption of Internet-connected smart de-
30
+ vices has brought huge improvements to our lives. Yet, these
31
+ devices also raise significant privacy concerns from their users,
32
+ such as sensitive data collection [53], [51], data sharing [51],
33
+ and data misuse [22], [23], [27]. Literature has suggested
34
+ many types of privacy risks associated with smart devices.
35
+ For example, some seemingly innocent data, such as the
36
+ network traffic shapes and patterns of smart devices, may
37
+ reveal sensitive personal information, such as users’ daily
38
+ schedule, their gender, date of birth, social security number,
39
+ location, and behaviors [5], [3].
40
+ However, many risks are not obvious to users due to the
41
+ opaque nature of the data practices of smart devices; the
42
+ average users lack an understanding of how their data is
43
+ collected, processed, and shared [51], [50], [21]. Prior research
44
+ has proposed various ways to increase users’ awareness of the
45
+ data practices in smart homes, such as data dashboards, mobile
46
+ phone apps, ambient light and sounds, and so on [44], [15],
47
+ [9], [16]. Some other mechanisms (e.g., IoT Inspector [15])
48
+ focus on specific aspects of the data practices and present
49
+ network traffic data to users so that they can access first-
50
+ hand data of the data flow in/out of smart devices. Yet,
51
+ most mechanisms we know decouple such transparency from
52
+ the device themselves—i.e., users need to learn about the
53
+ data practices separately from the smart devices—making
54
+ the information less intuitive to consume, especially for the
55
+ average user. In addition, these mechanisms do not provide
56
+ users with the ability to take action if they notice unexpected
57
+ data practices (e.g., blocking the data from being sent out to
58
+ third parties).
59
+ In this paper, we focus on the data flow in and out
60
+ of smart devices. We build a proof-of-concept smartphone-
61
+ based augmented reality system called Privacy Plumber to
62
+ increase users’ awareness of the data flows of smart devices
63
+ and provide them with controls to block certain data flow if
64
+ needed. We focus on data flow rather than other aspects of
65
+ data practices (e.g., types of data being collected) mostly due to
66
+ practicality and feasibility reason, as we can reasonably capture
67
+ data flow and identify its source and destination using ARP
68
+ spoofing [15]. In addition, from the smart devices’ perspective,
69
+ these devices have multiple tiers of software, all of which entail
70
+ some type of tracking. Such tracking is generally embodied
71
+ in the data flow. We use augmented reality to visualize data
72
+ flows in the same physical environment as the devices in
73
+ question; this method could potentially help users establish
74
+ a connection between the devices and their data flows in the
75
+ same context. Users’ proper understanding of data flow may
76
+ help them understand the privacy implications of devices such
77
+ as smart TVs [28], voice assistants [15], children’s toys [10],
78
+ security cameras [24], [35], and smart light bulbs [8].
79
+ The development of Privacy Plumber is inspired by the
80
+ following three gaps in the literature. First, the data flows of
81
+ smart devices are opaque and not visible to users. Second,
82
+ existing tools to monitor network traffic of smart devices (e.g.,
83
+ IoT Inspector [15], open.Dash [9]) require a certain level of
84
+ technical knowledge to be able to interpret the results—not to
85
+ mention that the results are often decoupled from the physical
86
+ environment where the smart devices are situated. Oftentimes,
87
+ the results are presented on, for instance, dashboards on
88
+ computers or phones, where there is a disconnection between
89
+ the visualization of data flows and the smart devices that
90
+ create the data flow. Third, existing tools or mechanisms do
91
+ not provide users with the ability to control unnecessary or
92
+ unexpected data flows. With Privacy Plumber, we aim to bridge
93
+ the gaps and increase users’ awareness and control of the data
94
+ flow in smart devices.
95
+ arXiv:2301.11998v1 [cs.CR] 27 Jan 2023
96
+
97
+ Fig. 1: Privacy Plumber lets a user find and mitigate potential
98
+ privacy violations in the smart home. The figure shows a
99
+ user walking around the smart home and inspecting the traffic
100
+ and trackers coming out of a Samsung Smart Fridge using
101
+ the Augmented Reality enabled app. Furthermore, (not shown
102
+ in the picture above) users can use built-in, infrastructure-
103
+ free controls to limit traffic of devices to times of day—
104
+ without requiring any additional hardware or modifications
105
+ to the network. The graph shows the actual network traffic
106
+ as the user interacted with the Smart Fridge: A: turning on
107
+ the ice maker; B: browsing recipe; C: browsing goods; D:
108
+ interacting with the Bixby voice assistant of the fridge; E:
109
+ opening the fridge door; F: adding items to the shopping list.
110
+ During these interactions, the Smart Fridge communicated with
111
+ various advertising and tracking services, such as DoubleClick
112
+ and Tapad.
113
+ Privacy Plumber uses augmented reality (AR) techniques
114
+ and visualizes real-time network traffic flowing in and out
115
+ of smart devices through an overlay. It allows users to find
116
+ potential privacy leaks in their homes by pointing the AR-
117
+ based app at smart devices. As shown in Figure 1, the app adds
118
+ an overlay on top of the smart devices in which it displays
119
+ a real-time data flow based on the network traffic with the
120
+ necessary information for users to understand it. We chose
121
+ to use AR because, as privacy is highly contextual [32], it
122
+ can provide strong contextual connections between the actual
123
+ real-time privacy leaks, and the user actions (or inaction).
124
+ This allows the smartphone to function as a viewfinder into
125
+ the invisible world of data flow and identify potential privacy
126
+ violations. The smartphone application relies on a companion
127
+ software tool hosted on a laptop or desktop on the same home
128
+ network. This tool discovers smart devices in a user’s home,
129
+ intercepts their traffic via ARP spoofing [48], and analyzes the
130
+ data flow (e.g., what traffic is leaving the home over time)—
131
+ without requiring the user to modify their network settings
132
+ 0
133
+ 1
134
+ Nest Camera
135
+ Live streaming
136
+ 0
137
+ 2
138
+ Samsung Fridge
139
+ Door opening
140
+ Recipe browsing
141
+ 0
142
+ 10
143
+ 20
144
+ 30
145
+ 40
146
+ 50
147
+ 60
148
+ 70
149
+ 80
150
+ time [s]
151
+ 0
152
+ 10
153
+ Amazon Echo
154
+ Weather reporting
155
+ Radio playing
156
+ traffic [mbps]
157
+ Fig. 2: Outbound network traffic from various smart home IoT
158
+ devices: a Nest Camera, an Amazon Echo, and a Samsung
159
+ Smart Fridge. Traffic increases or provides a fingerprint for
160
+ many types of seemingly benign actions, creating a privacy
161
+ leak. Current systems do not provide real-time context or
162
+ ability to experiment with these devices, nor control their
163
+ leakage.
164
+ or install additional hardware. When users would like to take
165
+ action and block certain data flow, ARP-spoofing is used again
166
+ to jam specific devices’ traffic (thereby blocking the device)
167
+ at the time of day set by the user.
168
+ We build a proof-of-concept prototype and conducted a
169
+ pilot study with 6 participants in our lab to collect their
170
+ feedback on the prototype. Our initial findings have suggested
171
+ that Privacy Plumber helped participants understand the net-
172
+ work traffic, increased their awareness of potential privacy
173
+ violations, and helped them make more informed decisions
174
+ on how to handle IoT devices.
175
+ This paper makes three contributions. First, to the best of
176
+ our knowledge, Privacy Plumber is the first mechanism that
177
+ provides users with real-time information on the data flow of
178
+ their smart devices. This paper proves the possibility of using
179
+ AR-based technology as a viable option to increase users’
180
+ awareness of the data flows of smart devices. Second, our
181
+ initial evaluation shows promising results, indicating users’ po-
182
+ tential acceptance of these technologies. Third, we summarized
183
+ lessons learned from the pilot user study to inform the design
184
+ and development of future systems that aim to improve users’
185
+ awareness of data practices in smart homes.
186
+ II.
187
+ BACKGROUND AND RELATED WORK
188
+ In this section we discuss related work seeking to under-
189
+ stand or discover privacy leaks, and the tools that exist to
190
+ help users understand and mitigate them. Privacy Plumber is
191
+ meant to to provide a handheld and zero-cost inspection and
192
+ experimentation tool for privacy leaks of nearby smart devices
193
+ in the home, and a straightforward and low burden method for
194
+ mitigating those leaks.
195
+ A. Privacy Issues in Smart Home
196
+ Over the decades, privacy issues have been deeply dis-
197
+ closed in smart home, such as transparency of data collection,
198
+ data sharing, and accessibility [20], [50], [49], [16], [53], [30],
199
+ [50]. Some smart home devices have always-on sensors that
200
+ capture users’ offline activities in their homes and transmit
201
+ relevant information outside of the home, especially for cloud
202
+ services run by device manufacturers [6].
203
+ 2
204
+
205
+ 02:006Privacy
206
+ PLumbe
207
+ X
208
+ LiveDeviceTraffic
209
+ 720
210
+ 540
211
+ 360
212
+ 180
213
+ 10
214
+ CurrentDeviceTraffic:984.3bytesinthelast10seconds
215
+ Sendingdatato17differentdestinations,including2
216
+ advertisingservice(s)
217
+ Thisdataisequivalentto536.9wordsoftextor0.5
218
+ picturesperminute
219
+ DeviceDetails&Control>DoubleClick
220
+ pedel
221
+ DoubleClick
222
+ Tapad
223
+ traffic [mbps]
224
+ 6
225
+ A
226
+ E
227
+ C
228
+ 4
229
+ B
230
+ 2
231
+ 0
232
+ 10
233
+ 20
234
+ 30
235
+ 40
236
+ 50
237
+ 60
238
+ 70
239
+ 80
240
+ time [s]In the meantime, users are concerned about leaks of
241
+ sensitive information [23], [51], [25], such as visual and
242
+ auditory information which they see as private [23], [25].
243
+ Thus, users have a strong desire to protect themselves against
244
+ such recordings being accessed without their permission [30],
245
+ [19]. However, some information users perceived as not very
246
+ sensitive also lead to privacy leaks. For example, the home
247
+ temperature could be used to determine whether a house is
248
+ occupied or not, as a precursor to burglary [20].
249
+ In fact, smart devices give off digital exhaust which can
250
+ be used by third parties including a user’s Internet Service
251
+ Provider, advertisers, device manufacturers, and others, to
252
+ fingerprint activities and get sensitive information. Shown in
253
+ Figure 2 is the network traffic and trackers of various smart
254
+ home devices.This network traffic forms the basis of most
255
+ leaks.
256
+ B. Tools for Enhancing Smart Home Privacy
257
+ Most related to Privacy Plumber are tools that watch or
258
+ monitor network traffic in the home and provide something
259
+ of use to the user, whether visualization and information,
260
+ education, or a mechanism for control.
261
+ Sophisticated, technically literate users can use systems that
262
+ block advertising and tracking domains (e.g., PiHole [38] and
263
+ pfSense [34]), but these methods are bespoke and often require
264
+ additional or dedicated hardware (e.g., Raspberry Pi for Pi-
265
+ Hole, and a supporting custom router for pfSense). Other tools
266
+ have provided insight into what might be exposed from web-
267
+ browsing activities, including WiFi privacy Ticker [11], but do
268
+ not consider or scale to the new problems of connected devices
269
+ with physical sensors and abilities in a space. Aretha [40]
270
+ explores this tool space and proposed (but did not deploy)
271
+ a simple firewall-based control mechanism. Aretha presents
272
+ data in aggregate instead of contextually and in real-time.
273
+ None of these techniques investigate a range of IoT devices,
274
+ usually constrained by studies with participants in their own
275
+ homes, in a time when smart home adoption is low (Aretha
276
+ had three participants, and only one had more than a phone,
277
+ tablet, and Alexa). None of these techniques develop a scalable
278
+ (no additional hardware required) way to interpret privacy
279
+ leaks and control them. Emerging smart devices are highly
280
+ contextual and location sensitive, an Alexa in the bedroom
281
+ versus the kitchen has different privacy exposure (i.e. the
282
+ former gives sleep times, the latter exposes eating habits).
283
+ Moreover, tracking these devices’ privacy exposures presents
284
+ a technical challenge because the traffic is not centralized
285
+ through a web browser or laptop. A tool is needed to visualize
286
+ privacy leaks from smart devices in real-time and in context,
287
+ educate users on the consequences of these leaks, and provide
288
+ control mechanisms for partially mitigating these leaks.
289
+ Wifi Privacy Ticker [11] demonstrated a first method for
290
+ improving the awareness of users in terms of privacy by
291
+ providing a count of the amount of sensitive data that was
292
+ being transmitted unencrypted over the network awareness.
293
+ By seeing this in real-time, users could adjust their behavior
294
+ or find encrypted means to browse the web. Of course, this
295
+ ticker was developed well before the current generation of
296
+ smart devices, however the underlying concept of surfacing the
297
+ invisible privacy leaks remains the same for Privacy Plumber,
298
+ but for smart devices. Xray-refine [46], [47] provided smart
299
+ phone users a means to visualize their exposure profile, based
300
+ on the duration of app use. This method was an educational
301
+ solution, but users had to adjust behavior to work around the
302
+ constraints of the apps they were using, in some cases, opting
303
+ out of apps to reduce privacy exposure.
304
+ Finally, recent work like Aretha [40], PriView [36], Lu-
305
+ mos [41], and IoT Inspector [15] look at making usable
306
+ visualizations and mechanisms to understand and interpret data
307
+ coming from smart devices in the home. IoT Inspector is a
308
+ simple-to-install desktop application that uses ARP spoofing
309
+ to gather network traffic on the Wifi network of the desk-
310
+ top/laptop. This information is sent and collated at a server,
311
+ and then viewed online by the user, listing different trackers
312
+ and websites that are attached to smart device usage. Because
313
+ of the ease of installation and no extra hardware requirement,
314
+ IoT Inspector was deployed by thousands of users.
315
+ In comparison, Aretha is a part research tool, part ex-
316
+ ploratory users tool for exploring a design space of privacy
317
+ tools and controls. Aretha helps users become aware of the net-
318
+ work traffic flows in their homes while also educating users to
319
+ regain their privacy in the connected home. Aretha suggests the
320
+ use of firewall mechanisms controllable by the user, but does
321
+ not implement them. Aretha, owing to a hardware requirement
322
+ (a device must be attached to the Wifi router in the home) was
323
+ only deployed in three homes, compared to the massive scale
324
+ deployment of IoT Inspector. Similarly, PriView also has a
325
+ hardware requirement; its users need to have dedicated external
326
+ thermal cameras (e.g., FLIR One [36]) attached to their phones.
327
+ For Lumos, there is no special hardware environment, although
328
+ the focus is more on identifying hidden smart devices rather
329
+ than analyzing the network traffic for privacy leaks.
330
+ Privacy Plumber is not meant as a research tool or a design
331
+ space exploration tool. It is meant as an actual, real world
332
+ system with a focus on scalability and ease of deployment in
333
+ any home, similar to IoT inspector. Unlike both IoT inspector
334
+ and Aretha, Privacy Plumber provides real-time and contextual
335
+ visualizations of privacy leaks, real-time ability to plug those
336
+ leaks (as well as automated rule setting for plugging leaks),
337
+ and enables experimentation in real-time.
338
+ Finally, other significant measurement campaigns on in-
339
+ home traffic have been conducted, focusing on the Wifi net-
340
+ work itself or devices in the home [39], [18]. These have
341
+ usually been for research purposes and need finding and are
342
+ useful for informing the design of Privacy Plumber, but are not
343
+ necessarily tools for controlling smart home device privacy.
344
+ C. Determining Home Activities from Network Traffic
345
+ Complementary to Privacy Plumber are other works which
346
+ demonstrate the ability to infer activities from network traffic:
347
+ whether on a phone, smart device, or laptop [4]. By analyzing
348
+ the patterns of network traffic in the home, occupancy, habits
349
+ such as sleeping, watching TV, listening to music, and some-
350
+ times preferences, can all be determined. HomeSnitch [33],
351
+ Peek-a-boo [1], and HoMonit [52] all utilize machine learning
352
+ with varying degrees of success to identify activities in the
353
+ home from network traffic. Other tools utilized for monitoring
354
+ Internet connected smart devices in the home, IoT Sentinel [26]
355
+ and IoT Sense [7], have shown that particular devices can be
356
+ 3
357
+
358
+ fingerprinted by their traffic patterns. Enabling another way
359
+ for an ISP or third party to determine the activity in the
360
+ home. Each of these systems and methods are complementary
361
+ with Privacy Plumber; inferred activities from traffic would be
362
+ useful to surface in Privacy Plumber for the user to understand
363
+ privacy exposure and know when to mitigate it, and device
364
+ fingerprinting provides a way for zero-registration or setup of
365
+ Privacy Plumber in a home.
366
+ D. Challenges: Contextual, Real-time Privacy Understanding
367
+ and Control in the Home
368
+ Despite the diverse work in the smart home privacy space,
369
+ significant gaps and challenges remain, which we detail below.
370
+ C1: Users can’t model what devices are doing, especially
371
+ without context. With tools like IoT Inspector, a user might be
372
+ able to count the number of trackers and advertisers contacted
373
+ in a day from the sum of their interactions with smart devices.
374
+ But how can a user know that turning on the NPR podcast
375
+ on their smart fridge will send thousands of bytes of informa-
376
+ tion to Bloomberg News for advertising purposes? How can
377
+ they know that turning on the device sends a short burst of
378
+ traffic? Users know that data captured will often be used for
379
+ advertising, which often generates an adverse reaction [45].
380
+ However, with smart devices, it is not always clear what
381
+ actions or contexts trigger data being transmitted [13]. Things
382
+ like Privacy labels for websites and smart devices are meant to
383
+ give a method for scoring devices privacy [42], [17]. However,
384
+ these are static representations of the privacy exposure of a
385
+ device. With tools like IoT inspector and Aretha, aggregate
386
+ views of data are seen (as opposed to real-time views), not
387
+ associated with very fine user actions: like the turn on the light,
388
+ say command to Alexa, or open the fridge door. Because of
389
+ this granularity, the mental models of what devices are doing,
390
+ and what they are sharing, are very perplexing. Privacy tools
391
+ must address this lack of action mapping to network traffic,
392
+ enabling contextual integrity [32] in real-time.
393
+ C2: Users don’t have intuitive methods to control the
394
+ privacy “valve”. Users want devices that provide helpful
395
+ features, but they do not know the cost of this ease. One option
396
+ is to just unplug the device; however, this is all or nothing.
397
+ Users need a way to valve the privacy flow to something they
398
+ are comfortable with, or to at least be able to analyze the
399
+ tradeoffs [43]. Making privacy more ”tangible” [2] is one way
400
+ this can be done; where the privacy leaks are more visceral.
401
+ Selective firewalls (such as pfSense [34]), or other more fine
402
+ grained network mechanisms may provide a means to control
403
+ the privacy valve, but this must be intuitive and understandable
404
+ to the user, and they must be able to actually ”see” the effect
405
+ of turning this valve.
406
+ C3: Smart devices are context (location, time, action)
407
+ dependent. Smart devices are necessarily scattered around
408
+ the home; and this will continue as more devices become
409
+ intelligent, and more applications are explored. Watching a
410
+ desktop or laptop traffic meter and figuring out which device
411
+ in which room is doing what at which times, is mentally trying
412
+ for the user and disassociates the device from the physical
413
+ space that defines its context and use. Just like when trying to
414
+ find leaks in pipes, physical proximity is required. Handheld
415
+ inspection tools provide mobility, and enable in-situ fixing and
416
+ experimentation.
417
+ C4: Users can’t experiment. Indeed, because of contextual
418
+ changes in how private information is leaked, experimentation
419
+ is difficult with existing tools that generally provide traffic
420
+ summaries. Interactions with smart devices can last only a few
421
+ seconds. Enabling a user to experiment with different actions
422
+ and uses of a smart device, and then see the associated network
423
+ traffic in real-time, would provide a powerful way to build a
424
+ mental model. However, providing an ability to experiment is
425
+ challenging with the current suite of tools.
426
+ C5: Technical challenge of scalability and deployment. If
427
+ a privacy tool is to be useful and translate to the general
428
+ public, it must be hardware free, or at least trivially easy to
429
+ deploy to enable scalability and broad adoption. Commercial
430
+ products like fing.com embed all functionality in a single
431
+ phone application. Large scale deployments like with IoT
432
+ Inspector are enabled through a desktop application that is
433
+ easy to install. However, these methods do not provide controls
434
+ since that is technically difficult to do without custom hardware
435
+ put between the Wifi endpoint and the user. On the other
436
+ hand, hardware requirements or custom install procedures
437
+ reduce the deployment size of tools like Aretha, or narrow
438
+ the user base by requiring technical ability, as with PiHole.
439
+ It is not clear how to implement mechanisms of control
440
+ without changing the Wifi network and infrastructure. To
441
+ create scalable, user-centered, novice friendly privacy tools,
442
+ mechanisms for enabling control of smart device traffic without
443
+ hardware intervention must be developed.
444
+ III.
445
+ SYSTEM DESIGN
446
+ We present Privacy Plumber as a proof-of-concept and
447
+ end-to-end system to address the challenges listed of scalable
448
+ and general population serving privacy tools for emerging
449
+ smart homes. Privacy Plumber is inspired by various handheld
450
+ tools for identifying and fixing faults in large and complex
451
+ systems. For example, acoustic leak finding has been used for
452
+ decades to localize leaks in gas and water pipelines. Handheld
453
+ oscilloscopes, multimeters, and RF Spectrum Analyzers have
454
+ helped engineers debug problems in large electrical systems.
455
+ These handheld devices make the invisible signals visible and
456
+ interactive. They allow real-time experimentation and debug-
457
+ ging. Inspired by these devices, Privacy Plumber is designed
458
+ to offer a general user a level of insight and control of the
459
+ invisible privacy leaks that are rampant in Internet-connected
460
+ smart devices in the home. Privacy Plumber is composed of
461
+ two pieces as shown in Figure 3:
462
+ (1) the IoT Network Analyzer, a desktop application
463
+ which collects real-time data on smart devices on the shared
464
+ Wifi network, and provides an infrastructure and hardware free
465
+ mechanism to block arbitrary devices traffic, and;
466
+ (2) the Privacy Plumber phone application, which serves
467
+ as a viewfinder or inspector for any smart devices in view, and
468
+ presents data from the desktop application, including device
469
+ network traffic and potential privacy leaks to the users, along
470
+ with educational content matched to what is known about the
471
+ device, all in real time.
472
+ 4
473
+
474
+ ARP
475
+ Spoofer
476
+ Viewfinder
477
+ Educational
478
+ Views
479
+ Packet
480
+ Analyzer
481
+ Traffic Analyzer
482
+ Controls
483
+ IoT Network Analyzer
484
+ Router
485
+ Privacy Plumber App
486
+ Traffic and Control
487
+ Visualizer
488
+ IoT Devices
489
+ Network
490
+ Traffic
491
+ ARP
492
+ Spoofing
493
+ Leak
494
+ Information
495
+ Fig. 3: System diagram of Privacy Plumber including the IoT Network Analyzer and Privacy Plumber mobile application. IoT
496
+ Network Analyzer runs on a computer that is connected to a user’s router. IoT Network Analyzer automatically discovers and
497
+ captures IoT devices on the same network using ARP spoofing. Privacy Plumber connects with IoT Network Analyzer to present
498
+ the network analysis in AR. The user can then examine their devices’ network traffic and control when they want their devices
499
+ to be on or off.
500
+ Overview of Usage. A user would first download, install,
501
+ and run IoT Network Analyzer on their computer and the
502
+ Privacy Plumber app on their mobile phone, such that both
503
+ the computer and the phone are on the same local area
504
+ network. Let us assume that the user is interested in inspecting
505
+ a smart device like an Amazon Echo. While running the
506
+ Privacy Plumber app, the user points the phone camera to
507
+ Echo and speaks a voice command (e.g., “Alexa, what is
508
+ the weather?”) IoT Network Analyzer captures all network
509
+ traffic between Echo and the Internet, analyzes the packets,
510
+ and identifies destinations that are third-party advertising and
511
+ tracking companies. The Privacy Plumber app extracts this
512
+ information from IoT Network Analyzer and visualizes key
513
+ statistics for the user—such as real-time bandwidth usage of
514
+ the device and the number of advertising and tracking services
515
+ contacted—as an overlay in the AR view.
516
+ When the user points the phone camera at a device, the
517
+ Privacy Plumber app does not recognize devices with computer
518
+ vision algorithms. Instead, for the purpose of this prototype,
519
+ we print a QR code on each IoT device. The QR code includes
520
+ the device’s MAC address, its name, and the manufacturer. The
521
+ app uses the phone’s camera to scan for the QR code, identifies
522
+ the device based on the QR code, and displays the device with
523
+ a dial menu around it (see Figure 4a). The options in the menu
524
+ allow the user to see the outbound traffic from the device as
525
+ well as read a brief article stating what types of information
526
+ the device may be tracking. The user may also use the Device
527
+ Control menu (Figure 4c) to manually block or allow traffic
528
+ from the device. Future versions of the app will use computer
529
+ vision to recognize devices; see the discussion in Section V.
530
+ Privacy Threat Model and Assumptions. We assume that
531
+ a user’s privacy may be potentially violated if an IoT device
532
+ exhibits either or both of the following behaviors. In Threat 1,
533
+ an IoT device could contact an advertising and tracking service
534
+ on the Internet. In Threat 2, an IoT device could be sending
535
+ out network traffic to hosts on the Internet when the user does
536
+ not expect any network activities—for example, when the user
537
+ is not interacting with the device.
538
+ We design both the Privacy Plumber app and IoT Network
539
+ Analyzer with this privacy threat model in mind. IoT Network
540
+ Analyzer captures packets, analyzes the headers, identifies the
541
+ destination hosts (based on the IP addresses, domain names,
542
+ and the TLS Server Name Indication fields), and determines
543
+ if a destination host is an advertising and tracking company.
544
+ The Privacy Plumber app displays the number of advertising
545
+ and tracking services (e.g., the red text below the graph in
546
+ Figure 4b), thereby helping users toward identifying Threat 1.
547
+ Based on the byte counters from IoT Network Analyzer, the
548
+ Privacy Plumber app also shows a bandwidth graph that plots
549
+ the bytes sent per second over time (e.g., the time-series graph
550
+ in Figure 4b). This graph could help users correlate network
551
+ activities with human interactions—or the lack thereof—with
552
+ given IoT devices and thus identify possible instances of
553
+ Threat 2. Note that IoT Network Analyzer does not parse the
554
+ payload of packets to discover sensitive information within the
555
+ traffic, as the network traffic is likely encrypted.
556
+ A. Design Goals
557
+ Privacy Plumber must make the underlying behavior of the
558
+ devices in the home apparent, and enable forms of fine-grained
559
+ (informed) control of the leakage of sensitive information for
560
+ the user. Towards this end, and addressing the challenges
561
+ described in Section II-D, we are guided by the following
562
+ design goals.
563
+ (1) Handheld and Mobile. Smart devices are scattered
564
+ throughout the home. Phone adoption is nearly universal.
565
+ Using a phone as a window into the information world gives
566
+ 5
567
+
568
+ Privacy
569
+ Jagwnnd
570
+ X
571
+ Live Device Traffic
572
+ Bytes/seo
573
+ 1500
574
+ 0
575
+ 1125
576
+ 0
577
+ 7500
578
+ 3750
579
+ -10
580
+ Current Device Traffic: 26954.1 bytes in the last 10
581
+ seconds
582
+ Sending data to 15 different destinations
583
+ This data is equivalent to 14702.2 words of text or 13.5
584
+ pictures per minute
585
+ Amazon Details
586
+ Status: Allowed
587
+ BLOCK
588
+ ALLOW
589
+ Device
590
+ Device
591
+ Traffic
592
+ Traffic
593
+ Custom Time Rule
594
+ ApplyRule
595
+ Time (ex. 8:00AM)
596
+ Turn On
597
+ Turn Off:
598
+ Time (ex. 8:00PM)
599
+ RemoveRulecontext and a sense of place. The phone form factor increases
600
+ the likelihood of adoption and allows for inspection on the go;
601
+ users can trigger or interact with devices and easily watch the
602
+ movement of data, instead of having to return to the desktop.
603
+ (2) Real-time. Seeing statistics after the fact, as in most
604
+ systems, is not as impactful or helpful when developing a
605
+ model of how devices operate. Moreover, real-time analysis
606
+ enables experimentation, providing users with a mechanism for
607
+ exploring limitless scenarios and quickly associating triggers
608
+ with outcomes.
609
+ (3) Infrastructure/Hardware Free. Many other meth-
610
+ ods require custom hardware. This increases cost and raises
611
+ the barrier to entry. We hope to enable anyone, especially
612
+ those that may have limited autonomy over infrastructure (i.e.
613
+ renters, low-resourced communities) to be able to inspect the
614
+ devices put in their living space.
615
+ (4) Intuitive Controls. Complex mechanisms to control or
616
+ limit the flow of privacy are not interpretable by users, and are
617
+ possibly frustrating. Configuring a firewall is not a task most
618
+ people would enjoy. Straightforward controls, with visible
619
+ results, once those controls are put in place, are essential for
620
+ users to trust the capability of the system.
621
+ (5) Educational. The ever-changing landscape of devices
622
+ and the security/privacy arms is impossible to keep up with for
623
+ privacy tools. Assisting users in understanding what makes cer-
624
+ tain devices leakier (e.g., always-on microphone) is essential.
625
+ To realize these design goals, we build the Privacy Plumber
626
+ app—i.e., the handheld form factor—and IoT Network An-
627
+ alyzer as a two-part architecture working in tandem. Both
628
+ systems must be running on the same local area network.
629
+ IoT Network Analyzer, running on a computer, captures and
630
+ analyzes network traffic between smart devices on the network
631
+ and the Internet. IoT Network Analyzer acts as a server and
632
+ provides the above information over an HTTP REST API. The
633
+ Privacy Plumber app, acting as a client, regularly polls the
634
+ REST API and presents the analysis as an AR overlay to users.
635
+ In the following sections, we detail the pieces of the system
636
+ and how they interact to enable understanding and control of
637
+ smart devices in the home. In Section III-B we discuss the
638
+ IoT Network Analyzer and its role in capturing and curating
639
+ privacy leak information; in Section III-D we describe the
640
+ phone app design; in Section III-C we detail the mechanisms
641
+ we use for controlling devices on a schedule, and finally, in
642
+ Section III-E we describe a few ways to use Privacy Plumber.
643
+ B. Low Burden Home Network Traffic Capture
644
+ To use the Privacy Plumber app, the user must also
645
+ have IoT Network Analyzer running on a computer (macOS,
646
+ Windows, or Linux) that is on the same local area network
647
+ as the phone. For our study, we run IoT Network Analyzer
648
+ on a Raspberry Pi 3 Model B that is connected to the lab’s
649
+ network via Ethernet. We based IoT Network Analyzer’s code
650
+ on the open-source project, IoT Inspector [15], and made
651
+ modifications according to our needs. In particular, whereas
652
+ the original IoT Inspector constantly sends captured traffic’s
653
+ metadata to the researchers’ servers, IoT Network Analyzer
654
+ runs without the Internet; it processes the captured traffic
655
+ locally and exposes the traffic via a REST API. Furthermore,
656
+ whereas the original IoT Inspector runs on users’ computers
657
+ and visualizes the traffic in a browser-based dashboard, IoT
658
+ Network Analyzer uses an AR-based app, Privacy Plumber,
659
+ to visualize the network traffic; the mobile app reads the
660
+ processed traffic through the abovementioned REST API and
661
+ presents the results as an AR overlay.
662
+ Once running, IoT Network Analyzer automatically discov-
663
+ ers IoT devices on the network, captures their network traffic
664
+ via ARP spoofing, produces traffic statistics (e.g., bandwidth
665
+ usage and identifying advertising and tracking services) over
666
+ a local HTTP REST API, and blocks select devices (if desired
667
+ by the user). We explain each of these features below.
668
+ Discovering IoT devices. Upon launch, IoT Network An-
669
+ alyzer automatically broadcasts ARP packets to the local
670
+ area network and discovers active devices. To identify IoT
671
+ devices, Huang et al. [15] describe an algorithm that infers
672
+ the likely identities of IoT devices based on MAC OUI (i.e.,
673
+ Organizationally Unique Identifier, basically the first three
674
+ octets of a MAC address), DNS, and UPnP messages. For the
675
+ prototype in this paper, we only use the MAC OUI. Within the
676
+ code of IoT Network Analyzer, we have already hard-coded
677
+ the mapping between OUIs and names of five IoT devices in
678
+ our lab (which we can find out beforehand). In this way, IoT
679
+ Network Analyzer can instantaneously identify the IoT device
680
+ in our lab without relying on the device identification algorithm
681
+ in Huang et al. [15].
682
+ Capturing network traffic. Once IoT Network Analyzer iden-
683
+ tifies a known IoT device on the lab’s network, it automatically
684
+ starts intercepting network traffic between the device and the
685
+ Internet via ARP spoofing, a technique used in the original IoT
686
+ Inspector implementation and which incurs an overhead of 3.4
687
+ Kbps, given that we have five IoT devices in the lab [15].1
688
+ Obtaining traffic statistics. All traffic to and from IoT devices
689
+ in our lab is redirected through IoT Network Analyzer. In do-
690
+ ing so, IoT Network Analyzer is able to obtain statistics about
691
+ the network traffic for every device, including the device’s
692
+ MAC address (from which to extract the OUI and determine
693
+ the device’s identity based on our hard-coded mapping); the
694
+ number and size of packets (from which to infer the bandwidth
695
+ usage); the remote IP addresses, DNS requests and responses,
696
+ and the Server Name Indication field within TLS packets
697
+ (from which to infer the remote hostname and whether the
698
+ hostname is associated with a known advertising and tracking
699
+ company, based on the Disconnect block list [12]. IoT Network
700
+ Analyzer presents all these statistics and information via an
701
+ HTTP REST API that the Privacy Plumber app can access over
702
+ the local area network. For example, if the computer running
703
+ IoT Network Analyzer has a local IP address of Ii, then the
704
+ Privacy Plumber app (on the same local network) can access
705
+ the traffic information via http://[Ii]/get traffic.
706
+ Phone
707
+ Application:
708
+ App
709
+ Implementation. The Privacy
710
+ Plumber mobile app was implemented in Unity using C#
711
+ and is cross-platform, tested on Android and iPhone. The
712
+ 1Per Huang et al. [15], our setup includes N = 5 devices. It follows that
713
+ N(N + 1) = 30 spoofed ARP packets are sent every two seconds. As each
714
+ ARP packet has 28 bytes, the overhead is 28×30/2∗8 = 3, 360 Bits/second
715
+ or 3.4 Kbps.
716
+ 6
717
+
718
+ (a) View finder
719
+ (b) Traffic view
720
+ (c) Controls
721
+ (d) Education
722
+ Fig. 4: Illustration of mobile application design. (a) Device recognition with interactive menu. (b) Live traffic inspection. (c)
723
+ Rule-based device traffic control (i.e., blocking and unblocking). (d) Educational material on privacy details.
724
+ app works by communicating with IoT Network Analyzer via
725
+ HTTP GET requests, as described in the previous paragraph,
726
+ to obtain JSON-encoded information about the devices on the
727
+ network and their traffic. Parsing these JSON objects, the app
728
+ visualizes the information as charts and text on the AR display
729
+ (e.g., Figure 4b). The app also shows an interface where users
730
+ could block an IoT device’s traffic, e.g., Figure 4c. Once the
731
+ user confirms, the app sends the corresponding request to IoT
732
+ Network Analyzer via the HTTP REST API, and IoT Network
733
+ Analyzer would subsequently block the device by jamming the
734
+ device with corrupt ARP packets.
735
+ C. User Control of Privacy Leaks from a Phone
736
+ With Privacy Plumber we also want to help the user feel
737
+ more empowered by allowing them to take control of their
738
+ devices with the ability to block device traffic. Users can
739
+ manually block or allow device traffic indefinitely, or they can
740
+ set rules to govern when they want their device to be on or off
741
+ and for how long (Figure 4c). Users are also given the option
742
+ to physically power off their device altogether. In this way,
743
+ Privacy Plumber provides a closed-loop system where users
744
+ can analyze the information flow out of a given device, then
745
+ immediately apply direct control over that device in response
746
+ and receive immediate feedback via the traffic view.
747
+ To illustrate how a user might control an IoT device’s
748
+ traffic, let us say that a user feels uncomfortable with an IoT
749
+ device communicating with the Internet. The user can use the
750
+ Privacy Plumber app to block Internet access on the device.
751
+ As shown in Figure 4c, the user can click “Block Traffic”
752
+ on the app to indefinitely block the device, or specify when
753
+ to block and unblock the device. Moreover, the app sends
754
+ an HTTP request to IoT Network Analyzer, using the REST
755
+ API 2 (where Ii is the IP address of the running instance
756
+ of IoT Network Analyzer). During the period of blocking,
757
+ IoT Network Analyzer jams the communication of the device
758
+ by using a corrupt source MAC address in the spoofed ARP
759
+ packets (as described in Section III-B). IoT Network Analyzer
760
+ stops this process at [unblock time], upon which IoT Network
761
+ Analyzer sends out spoofed ARP packets without the corrupt
762
+ source MAC address. This gives users the ability to control
763
+ the times of day when they want their devices to be on or off.
764
+ Privacy Plumber’s software-based device blocking offers
765
+ several advantages over simply turning off or disconnecting
766
+ a device. First, users do not need physical access to the
767
+ device; for instance, many surveillance cameras are mounted
768
+ on ceilings and are difficult to power off. Second, users can
769
+ temporarily disable a device when they are feeling uncom-
770
+ fortable, e.g., blocking Amazon Echo for an hour during a
771
+ sensitive phone call or conversation, through Privacy Plumber.
772
+ Such temporary blocking is difficult to achieve through Echo’s
773
+ app (no such feature) or manually (e.g., the user has to
774
+ remind themselves to re-connect Echo again). Third, though
775
+ not currently implemented, Privacy Plumber, with the help
776
+ of IoT Network Analyzer, can block based on the context
777
+ (i.e., future work). For example, when IoT Network Analyzer
778
+ detects the presence of a user’s phone on the network (e.g., by
779
+ checking if the phone responds to periodic ARP requests), IoT
780
+ Network Analyzer automatically blocks all indoor cameras;
781
+ when the phone leaves the network (e.g., when the user is
782
+ out), IoT Network Analyzer could automatically unblock all
783
+ indoor cameras.
784
+ Technical Mechanism for Blocking Devices. A major differ-
785
+ ence with respect to IoT Inspector’s original implementation is
786
+ 2http://[Ii]/block/[device id]/[block time]/[unblock time]
787
+ 7
788
+
789
+ Privacy
790
+ Lagwnd
791
+ (A)
792
+ amagon
793
+ M
794
+ Device Details & Control >Privacy
795
+ PLumber
796
+ X
797
+ Live Device Traffic
798
+ Bytes/sec
799
+ 720
800
+ 540
801
+ 360
802
+ 180
803
+ -10
804
+ .9
805
+ CurrentDeviceTraffic:984.3bytes inthe last10seconds
806
+ Sending data to 17 different destinations, including 2
807
+ advertising service(s)
808
+ This data is equivalent to 536.9 words of text or 0.5
809
+ pictures per minute
810
+ M
811
+ amazon
812
+ Device Details & Control >Privacy
813
+ Lagwd
814
+ (A)
815
+ M
816
+ cef:co.ob:
817
+ Amazon Details
818
+ Status:Allowed
819
+ BLOCK
820
+ ALLOW
821
+ Device
822
+ Device
823
+ Traffic
824
+ Traffic
825
+ CustomTime Rule
826
+ ApplyRule
827
+ :uo ni
828
+ Time (ex. 8:00AM)
829
+ Tum Off:
830
+ Time (ex. 8:00PM)
831
+ Remove RulePrivacy
832
+ Laqwnd
833
+ X
834
+ Device Privacy Details
835
+ Echo Smart Speaker
836
+ This voice assistant is like a butler, providing voice access and
837
+ control of music and other media, enabling voice based
838
+ purchases, making phone calls, and helping to keep track of
839
+ things,among many other functions.
840
+ Potential Privacy Leaks
841
+ Location
842
+ Your physical location is leaked when interacting with this
843
+ device. Because motion can be detected, your near exact
844
+ locationwithinaroomcanbedetermined.
845
+ 讠 Activity
846
+ (A)
847
+ amazo
848
+
849
+ DeviceDetails&Control>that we have added the capability of blocking devices in IoT
850
+ Network Analyzer. Using the HTTP REST API 3, the Privacy
851
+ Plumber app can request IoT Network Analyzer to block a
852
+ certain device at a particular time (for instance, because the
853
+ user does not want the device to be communicating to the
854
+ Internet). Upon receiving this request, IoT Network Analyzer
855
+ jams the network communication of the device by sending it
856
+ spoofed ARP packets with corrupt MAC addresses.
857
+ To illustrate this process, let us assume that the computer
858
+ running IoT Network Analyzer has a MAC address Mi and IP
859
+ address Ii. Let us also assume that IoT Network Analyzer is
860
+ about to intercept the communication from the gateway (with
861
+ MAC address Mg and IP address Ig) to a particular device
862
+ (with MAC Md and IP Id) without blocking. To do so, every
863
+ two seconds, IoT Network Analyzer sends an ARP packet to
864
+ the device, such that the ARP packet has a source MAC of
865
+ Mi and a source IP of Ig, along with a destination MAC of
866
+ Md and a destination IP of Id. In contrast, let us assume that
867
+ IoT Network Analyzer is to block the device. It sends the
868
+ same ARP packet to the device, except that the ARP packet’s
869
+ source MAC is a series of random numbers (instead of Mi)
870
+ that represent an unreachable MAC address on the local area
871
+ network.
872
+ D. Visualizing and Understanding Traffic in Real-Time
873
+ One of the goals of Privacy Plumber is to show users
874
+ contextualized network activities of IoT devices to help them
875
+ pinpoint the potential privacy risks. In this section, we discuss
876
+ how Privacy Plumber utilizes Augmented Reality to help users
877
+ contextually visualize their devices’ network traffic informa-
878
+ tion in real-time, provide a chart of network traffic in real-
879
+ time, and provide links to other research in which the privacy
880
+ concerns of the inspected device have been studied (including
881
+ home behavior inference, sleeping behaviors, and personal
882
+ data). Lastly, users are able to send feedback and bug reports.
883
+ Use of Augmented Reality. The use of AR visualization
884
+ makes the interaction with the device the user is inspecting
885
+ more tangible and contextual. While IoT Inspector [15] and
886
+ IoT Network Analyzer are text-only data-driven analyzers that
887
+ can only be accessed using browser HTTP requests, Privacy
888
+ Plumber is a fully-fledged interactive application due to the
889
+ utilization of AR. By pointing their camera at the device
890
+ being inspected, the user can see, in their environment, the
891
+ traffic coming out of the device that they are physically
892
+ inspecting. Users can interact with their devices and receive
893
+ immediate feedback about data output and communication
894
+ with advertisers. Combined with manual device control, this is
895
+ intended to help the user feel informed and in control of the
896
+ IoT devices that physically surround them, similar to the use
897
+ of a TV remote control.
898
+ Learning About Privacy Threats. We aim to educate and
899
+ inform users on how their IoT devices expose their network
900
+ traffic information to third parties. In Figures 4d and 5, the
901
+ app shows icons surrounding the IoT device. When any of
902
+ these icons are pressed, they provide links to other research
903
+ materials—which we have manually curated in advance—
904
+ where the privacy concerns of the inspected device have been
905
+ 3http://[Ii]/block/[device id]/[block time]
906
+ studied. Depending on the device, Privacy Plumber provides
907
+ the following categories of potential privacy violations repre-
908
+ sented by icons:
909
+
910
+ Location: Your physical location either roughly (your
911
+ address) or fine-grained (room you are in).
912
+
913
+ Activity: Your physical activity such as walking, talk-
914
+ ing, sleeping.
915
+
916
+ Screen: Your online activity, such as when you browse
917
+ videos on YouTube or surf the web.
918
+
919
+ Identity: Attributes that can identify you such as your
920
+ face or voice.
921
+
922
+ Shopping: Data on your usage of money or products.
923
+
924
+ Health: Can infer different health markers without
925
+ consent (heart rate, breathing, and others).
926
+ E. Privacy Plumber Example Use Cases
927
+ In this section, we illustrate two example use cases of
928
+ the Privacy Plumber app. We focus on the ability of Privacy
929
+ Plumber to enable experimentation and the usefulness of a
930
+ real-time inspector. We will describe the users’ reactions in
931
+ Section 4.3.
932
+ Example 1: Is Echo Always Listening?
933
+ A user may use the Privacy Plumber app to correlate net-
934
+ work activities on an Amazon Echo device with the user’s
935
+ interactions—or the lack thereof—with it. While pointing the
936
+ AR camera at the device, the user could invoke a voice com-
937
+ mand, such as “Alexa, what is the weather”, while observing
938
+ the device’s bandwidth usage graph on the Privacy Plumber
939
+ app. Afterward, the user may physically press the mute button
940
+ on Echo, repeat the same voice command, and observe the
941
+ bandwidth usage graph on the app.
942
+ Example 2: What is this App on My Smart Fridge?
943
+ Many smart fridges have built-in touch-screen panels. For
944
+ example, the Samsung Smart Fridge has a tablet-like touch-
945
+ screen panel to control various settings of the fridge (such as
946
+ temperature). The panel also allows users to access various
947
+ built-in apps, such as checking recipes or ordering ingredients
948
+ online. A user who is concerned with the privacy of such
949
+ apps may point the AR camera at the fridge, interact with
950
+ an app, and observe the advertising and tracking services
951
+ counter on the app. This counter shows, in real-time, the total
952
+ number of advertising and tracking services that the fridge has
953
+ communicated with, based on the Disconnect block list [12].
954
+ IV.
955
+ PILOT USER STUDY
956
+ To test how users react to Privacy Plumber and inform its
957
+ future iteration, we conducted a pilot study with 6 participants
958
+ to experiment with, understand, and control the potential
959
+ privacy violations of IoT devices. It should be noted that the
960
+ pilot study would be best conducted in participants’ homes.
961
+ However, due to University research restrictions, the COVID-
962
+ 19 pandemic has made it difficult for us to recruit real users,
963
+ distribute hardware (e.g., phones powerful enough for AR
964
+ and Raspberry Pi’s for running IoT Network Analyzer), and
965
+ conduct a free-living study.
966
+ 8
967
+
968
+ We conducted a one-day controlled lab study in our IoT
969
+ Lab with 6 participants. Participants were invited to use
970
+ the Privacy Plumber app while interacting with several IoT
971
+ devices in the lab, including Samsung Smart Fridge, Amazon
972
+ Echo, Google Home, Samsung Smart TV, and Google’s Nest
973
+ Cam. Our goal is to assess whether using augmented reality
974
+ to display network traffic (i.e., by using Privacy Plumber)
975
+ influenced the participants’ awareness of privacy and changed
976
+ their behaviors.
977
+ In the following sections, we present the details of the pilot
978
+ study and discuss some highlights in the results as well as
979
+ lessons learned to inform the next iterative of Privacy Plumber.
980
+ A. Participants Recruitment and Demographics
981
+ We recruited 6 graduate students from our institution
982
+ through our university mailing list. We did so rather than
983
+ recruiting from a broader population sample because of the
984
+ constraints our university implemented during the COVID-19
985
+ pandemic (i.e., external members were not permitted to enter
986
+ our buildings). Our sample consisted of four males and two
987
+ females. Three of the participants were between the ages of
988
+ 18-24, two participants were between the ages of 25-34, and
989
+ one participant was between the ages of 35-44.
990
+ B. Study Procedure and Data Collection
991
+ For safety reasons and to implement social distancing
992
+ procedures, only two people were allowed in the IoT Lab
993
+ during the study. Aside from the participant, one of the co-
994
+ authors in this paper served as the research coordinator. They
995
+ were present throughout the user study to help guide the
996
+ participants or troubleshoot any technical difficulties that could
997
+ arise during the study procedure.
998
+ Before the study began, each participant filled out a back-
999
+ ground pre-survey on a computer in the IoT lab. We asked
1000
+ questions about their demographics, how technically savvy
1001
+ they are, their smart device experiences, their general under-
1002
+ standing of privacy, and their concerns about their information
1003
+ being exposed to third parties.
1004
+ After completing the survey, our research coordinator
1005
+ handed each participant a script and an Android mobile phone
1006
+ that had Privacy Plumber installed. Following the script, each
1007
+ participant opened the Privacy Plumber app, kept it running in
1008
+ the foreground, and interacted with one IoT device at a time.
1009
+ Regardless of the IoT device, each interaction consists of the
1010
+ following steps, as prescribed in the script:
1011
+ 1)
1012
+ Using the Privacy Plumber app, the participant
1013
+ scanned the QR code that we had placed on the
1014
+ IoT device. The QR code encodes the device’s MAC
1015
+ address, device name, and manufacturer. Based on the
1016
+ QR code, Privacy Plumber shows the corresponding
1017
+ device’s AR model on the screen.
1018
+ 2)
1019
+ The participant used the app to inspect the device’s
1020
+ traffic, while not doing anything to the device.
1021
+ 3)
1022
+ The participant interacted with the device (which we
1023
+ will describe in detail). During the interactions, the
1024
+ participants observed the network traffic graph on the
1025
+ app.
1026
+ Fig. 5: This screen on the phone application describes the
1027
+ different categories of privacy leaks that different devices have,
1028
+ based on a database that we manually curated in advance.
1029
+ 4)
1030
+ Using the app, the participant clicked on any of the
1031
+ icons surrounding the AR model of the device and
1032
+ read the educational material.
1033
+ After interacting with all the IoT devices, participants
1034
+ returned the phone to the research coordinator and responded
1035
+ to a post-survey that asked the same questions as in the pre-
1036
+ survey, along with a usability survey. We discuss the results in
1037
+ more depth in Sections IV-C and IV-D. We also include our
1038
+ pre- and post-surveys in the Appendix.
1039
+ Below, we describe each participant’s scripted interactions
1040
+ with each device—i.e., showing Step 3 in detail. During the
1041
+ interactions with the devices, users can access the educational
1042
+ content which is summarized from Mozilla’s “privacy not
1043
+ included” handout [29] and academic literature. Each device
1044
+ is described by the categories of privacy exposure they create,
1045
+ those categories are shown in Figure 5.
1046
+ Samsung Smart Fridge. The fridge has a built-in touchscreen
1047
+ on the door. Through the touchscreen, users have the ability
1048
+ to interact with several built-in apps, such as managing the
1049
+ shopping list, checking what is inside the fridge, and searching
1050
+ for recipes online. Users can also interact with the touchscreen
1051
+ using voice commands, using the trigger word, “Bixby.”
1052
+ Per the script, the research coordinator instructed the
1053
+ participant to follow the following three tasks. (i) Once the
1054
+ participant scanned the QR code of the smart fridge, they
1055
+ said the voice command, “Hey Bixby, do we have mangoes?”
1056
+ Bixby, the fridge’s voice assistant, would say “no,”. The
1057
+ participant then said, “Hey Bixby, can you add mangos to
1058
+ my shopping list?” Immediately, the participant looked at the
1059
+ 9
1060
+
1061
+ Privacy Leak Categories
1062
+ Smart devices can collect private information about you
1063
+ intentionally or incidentally in the following ways.
1064
+ Location
1065
+ Your physical location either roughly (your address) or fine-
1066
+ grained (room you are in).
1067
+ Activity
1068
+ Your physical activity, such as walking, talking, sleeping.
1069
+ Screen
1070
+ Your online activity, like when you watch videos on YouTube
1071
+ or surf the web.
1072
+ Identity
1073
+ Collects attributes that can identify you such as your face or
1074
+ voice.
1075
+ Shopping
1076
+ Collects data on your usage of money or products.
1077
+ Health
1078
+ Can infer different health markers without consent
1079
+ (heartrate, breathing and others).-3
1080
+ -2
1081
+ -1
1082
+ 0
1083
+ 1
1084
+ 2
1085
+ 3
1086
+ Strongly Disagree
1087
+ Strongly Agree
1088
+ I think about what information I may be exposing to 3rd parties
1089
+ when I interact smart devices.
1090
+ I am not concerned over the
1091
+ information I may be exposing to 3rd
1092
+ parties when I interact with smart
1093
+ devices.
1094
+ Privacy Plumber has made me more aware of what information I may
1095
+ be exposing to 3rd parties when I interact with smart devices.
1096
+ Privacy Plumber has made me more aware of privacy concerns
1097
+ regarding smart devices.
1098
+ Fig. 6: Representation of participants’ average agreement ratings relating to statements about information being exposed to third
1099
+ parties and privacy concerns caused by interacting with IoT devices. Participants rated the first two statements before and after
1100
+ the study, while the last two statements were rated at the end of the study. The results show that after the study, participants
1101
+ displayed an increase in awareness and concern about how their information is being handled when interacting with IoT devices.
1102
+ Privacy Plumber app and observed the network traffic emitting
1103
+ from the fridge for about 30 seconds. (ii) The participant
1104
+ said, “Hey Bixby, find me a Ramen recipe.” The recipe app
1105
+ popped up on the touchscreen. Using the finger, the participant
1106
+ browsed through the available recipes on the screen, while
1107
+ observing the network traffic on Privacy Plumber for 30
1108
+ seconds. (iii) The participant opened the fridge door and then
1109
+ closed it. Once again, they inspected the fridge’s network
1110
+ traffic through the Privacy Plumber app for 30 seconds.
1111
+ Amazon Echo. Interactions with Echo consists of the follow-
1112
+ ing 3 tasks. (i) The participant said the voice command, “Alexa,
1113
+ play a thunderstorm sound.” Immediately, the participant ob-
1114
+ served the network traffic on the app for 30 seconds. (ii) The
1115
+ participant physically pressed the “mute” button on the Echo
1116
+ and watch the device’s network traffic for 15 seconds. (iii) The
1117
+ participant said the same voice command as in Task (i) and
1118
+ observed the traffic in the app.
1119
+ Google Home. The participant said a voice command, “Hey
1120
+ Google, what was the final score in the Super Bowl last year?”
1121
+ The participant immediately started observing the network
1122
+ traffic on the app for 30 seconds.
1123
+ Samsung Smart TV. The participant used the TV’s remote
1124
+ control to navigate to the Bloomberg app on the home screen.
1125
+ They then started streaming a live video on the Bloomberg app
1126
+ for one minute while they observed the network traffic with
1127
+ the Privacy Plumber app.
1128
+ Nest Cam. Interactions with the camera consists of the follow-
1129
+ ing 2 tasks. (i) The participant walked into the field of view of
1130
+ the camera and stay there for five seconds, walked out of the
1131
+ camera’s field of view, and observed inspect the network traffic
1132
+ with the Privacy Plumber app. They repeated this task as many
1133
+ times as they liked. (ii) The participant blocked the network
1134
+ traffic to and from the camera using the built-in feature on the
1135
+ Privacy Plumber app. The participant observed the network
1136
+ traffic for 10 seconds, walked in front of the camera’s field
1137
+ of view, waited for another ten seconds, and unblocked the
1138
+ device using the Privacy Plumber functionality.
1139
+ C. Analysis of Pre-Study and Post-Study Surveys
1140
+ We asked each participant to complete two surveys: (i) a
1141
+ pre-Study Survey that they filled out on a dedicated computer
1142
+ at the beginning of the study, i.e., before the participants
1143
+ interacted with the Privacy Plumber app or the IoT devices;
1144
+ (ii) a post-Study Survey that the participants filled out on
1145
+ the dedicated computer after interacting with all the five IoT
1146
+ devices. We present the results below.
1147
+ In Figure 6 we present the participants’ agreement rating
1148
+ responses for two statements that were asked in the pre-study
1149
+ survey and post-study survey. We observe that for those two
1150
+ statements participants seemed less concerned by how their
1151
+ information is exposed to third parties when they interact with
1152
+ IoT devices before they performed the activities in the study.
1153
+ After participants completed the study, they were more aware
1154
+ and concerned about how their information was disclosed to
1155
+ third parties. The last two statements of Figure 6 were only
1156
+ given in the post-study survey, which asked participants to rate
1157
+ whether Privacy Plumber was useful in helping them become
1158
+ more aware of privacy concerns and how their information is
1159
+ being shared with third parties. On average, participants some-
1160
+ what agreed that Privacy Plumber helped raise their awareness
1161
+ and privacy concerns. Participants found that Privacy Plumber
1162
+ was helpful in that it helped them visualize what information
1163
+ was being shared.
1164
+ Additionally, we discuss the results of participants’ re-
1165
+ sponses with the IoT devices before and after the study. We
1166
+ show that after the study participants felt less safe with how
1167
+ IoT devices handle their data. Participants were presented with
1168
+ three statements and were asked to rate whether they agree or
1169
+ disagree with these statements on a scale of one to five, where
1170
+ a 1 meant they strongly agree and a 5 represents a strongly
1171
+ disagree rating. Table I demonstrates the average change in
1172
+ attitudes participants had before the study and after the study.
1173
+ We note that before the study, on average participants neither
1174
+ agreed nor disagreed with the statements presented in Table I.
1175
+ After completing the study, the average rating agreement score
1176
+ increased to “somewhat agree” on the last two statements on
1177
+ all IoT devices. The exception was in the first statement, the
1178
+ scores for the Amazon Echo and Google Home. This indicates
1179
+ 10
1180
+
1181
+ Survey Question
1182
+ Smart Fridge
1183
+ Amazon Echo
1184
+ Google Home
1185
+ Smart TV
1186
+ Nest Cam
1187
+ pre
1188
+ post
1189
+ pre
1190
+ post
1191
+ pre
1192
+ post
1193
+ pre
1194
+ post
1195
+ pre
1196
+ post
1197
+ I think this device could be (or is) useful or
1198
+ valuable to my daily life and routine.
1199
+ 3
1200
+ 3.17
1201
+ 2.86
1202
+ 2.5
1203
+ 2.71
1204
+ 2.5
1205
+ 2.43
1206
+ 2.33
1207
+ 2.71
1208
+ 3
1209
+ I am comfortable having this device in
1210
+ my house and always on.
1211
+ 2.29
1212
+ 3.5
1213
+ 3.86
1214
+ 4.17
1215
+ 3.86
1216
+ 4.17
1217
+ 2.29
1218
+ 3.5
1219
+ 3.43
1220
+ 4.17
1221
+ I am comfortable having this device in
1222
+ my house if I can automatically control
1223
+ when it is on, or off.
1224
+ 1.29
1225
+ 2.17
1226
+ 2.29
1227
+ 2.5
1228
+ 2
1229
+ 2.33
1230
+ 1.14
1231
+ 2
1232
+ 2.29
1233
+ 2.83
1234
+ Strongly Disagree (5) to Strongly Agree (1)
1235
+ TABLE I: Results of the survey on user awareness and comfort with smart devices, before and after using Privacy Plumber to
1236
+ inspect those devices. Scores are listed for both pre- and post-study surveys for each device. The higher the scores, the more
1237
+ strongly the participant disagreed with the survey question statement.
1238
+ that after using Privacy Plumber in the study, participants felt
1239
+ that the Amazon Echo and Google would still find it useful to
1240
+ use in their households.
1241
+ We also observe that the Smart Fridge, Smart TV, and
1242
+ the Nest cam had the most significant change in attitude. We
1243
+ gathered a few quotes from participants in which they describe
1244
+ how they felt about interacting with these IoT devices and
1245
+ using Privacy Plumber to inspect their network traffic:
1246
+ IoT devices provide more information to third par-
1247
+ ties than people thought. I think apps like Privacy
1248
+ Plumber can help people to make better decisions
1249
+ when using IoT devices — (P1)
1250
+ Cool to see when and how much traffic each device
1251
+ sends at any given moment! — (P5)
1252
+ I think the app does make me more aware about
1253
+ how the traffic is associated with the behavior of the
1254
+ device. Having some control over the traffic is nice.
1255
+ That being said, if I do have the device in my home,
1256
+ I probably would like to use it, and in that case, I
1257
+ have to allow traffic, which I have no control about
1258
+ what could pass or could not pass. In that sense, I
1259
+ can only accept certain privacy risks. — (P2)
1260
+ It was interesting to see the potential privacy leaks
1261
+ shown next to the device. Some leaks/ privacy im-
1262
+ plications were surprising. Liked the ability to al-
1263
+ low/block traffic, it was also cool to see the real-
1264
+ time traffic including communication with third-party
1265
+ advertisers. Liked the app interface. —(P6)
1266
+ These quotes, along with results from Figure 6 and Table I,
1267
+ suggest that Privacy Plumber helped participants understand
1268
+ the network traffic, increased their awareness of potential
1269
+ privacy violations, and helped them make more informed
1270
+ decisions on how to handle IoT devices.
1271
+ D. Analysis of the Usability Survey
1272
+ At the end of the study, each participant completed the
1273
+ usability survey. Overall, most participants indicated that they
1274
+ would use Privacy Plumber in their home network, found it
1275
+ easy to use and user-friendly, and agreed that most people
1276
+ would learn to use Privacy Plumber quickly. We summarize
1277
+ the results below:
1278
+
1279
+ When asked if they would use the Privacy Plumber
1280
+ mobile app to inspect the data the IoT devices in their
1281
+ homes, two participants said they strongly agreed with
1282
+ the statement and four participants said they somewhat
1283
+ agreed to use Privacy Plumber.
1284
+
1285
+ When participants were asked if they found Privacy
1286
+ Plumber easy to use, four of them somewhat agreed,
1287
+ one participant strongly agreed, and one participant
1288
+ somewhat disagreed.
1289
+
1290
+ When presented the statement “I imagine that most
1291
+ people would learn to use Privacy Plumber very
1292
+ quickly”, the responses were across the board spec-
1293
+ trum. Three participants rated that comment as
1294
+ strongly agreed, one participant rated the statement
1295
+ with a somewhat agree, one participant responded
1296
+ that they felt neither agreed or disagreed with the
1297
+ statement, and one participant somewhat disagreed.
1298
+
1299
+ When participants were asked to rate the overall user-
1300
+ friendliness’s of Privacy Plumber, four participants
1301
+ rated the Privacy Plumber app as good and two
1302
+ participants rated Privacy Plumber as fair.
1303
+ We gave participants an open-ended question if they would
1304
+ improve the usability of Privacy Plumber, and if so, how.
1305
+ We show their responses in Appendix B. All in all, partici-
1306
+ pants seemed to respond somewhat positively towards Privacy
1307
+ Plumber. It shows that Privacy Plumber may have the potential
1308
+ to be distributed to the general public after further studies.
1309
+ We hope to build off our current platform and implement the
1310
+ suggestions our participants gave us in future work.
1311
+ E. Performance: System Overhead and Battery Life Impact
1312
+ Network Overhead. IoT Network Analyzer intercepts the
1313
+ network traffic of select IoT devices via ARP spoofing, a
1314
+ technique that could introduce network overhead especially
1315
+ to the targeted IoT devices. This overhead comes from two
1316
+ sources. First, the spoofed ARP packets consume extra band-
1317
+ width, although the overhead is relatively small—i.e., less than
1318
+ 60 Kilobytes/second even if 50 IoT devices are under ARP
1319
+ 11
1320
+
1321
+ spoofing [15]). The second source of overhead comes from
1322
+ the Raspberry Pi 3 Model B, where we run IoT Network
1323
+ Analyzer in the lab. The Raspberry Pi is connected to the
1324
+ lab’s network via Ethernet. For all IoT devices to which IoT
1325
+ Network Analyzer sends spoofed ARP packets, all inbound
1326
+ (i.e., download) and outbound (i.e., upload) traffic to and
1327
+ from the IoT devices has to first go through the Raspberry
1328
+ Pi before IoT Network Analyzer forwards the traffic to the
1329
+ targeted device and to the Internet respectively. Effectively,
1330
+ the Raspberry Pi introduces a bottleneck for the ARP-spoofed
1331
+ devices.
1332
+ To measure the overhead as a result of the Raspberry Pi
1333
+ bottleneck, we conduct the following experiment. We install
1334
+ the Ookla Speed Test app on an Android phone that is con-
1335
+ nected to the the lab’s WiFi network. We have the Ookla app
1336
+ run 15 back-to-back speed tests, which measure the inbound
1337
+ and outbound traffic rates with respect to a server in our city,
1338
+ as well as the latency of packets. Using the same setup, we
1339
+ repeat the same experiment, except that we have IoT Network
1340
+ Analyzer inspect the phone’s traffic via ARP spoofing.
1341
+ We find significant overhead as a result of IoT Network
1342
+ Analyzer. Without ARP spoofing, the app achieves, on average,
1343
+ an inbound rate of 293.6 ± 15.4 Mbps, an outbound rate of
1344
+ 94.1±0.2 Mbps, and a latency of 5.7±0.5 milliseconds. With
1345
+ ARP spoofing by IoT Network Analyzer, the app achieves, on
1346
+ average, an inbound rate of 41.4±74.6 Mbps, an outbound rate
1347
+ of 72.8 ± 14.1 Mbps, and a latency of 5.9 ± 0.5 milliseconds.
1348
+ Compared with the case without ARP spoofing, IoT Network
1349
+ Analyzer reduces the inbound rate by 85.9% and outbound rate
1350
+ by 22.6%, while increasing the latency by 3.5%.
1351
+ Despite the seemingly significant reduction in bandwidth,
1352
+ we argue that IoT Network Analyzer is unlikely to degrade
1353
+ usability, as the network analyzer is not always running (only
1354
+ when inspecting, or blocking a specific device). Additionally,
1355
+ the overhead can be reduced with improved hardware. Ac-
1356
+ cording to Netflix, 25 Mbps of inbound rate is sufficient to
1357
+ stream Ultra HD contents [31]. A user who inspects a smart
1358
+ TV using IoT Network Analyzer is likely to enjoy Ultra HD
1359
+ streaming given the reduced inbound rate of 41.4±74.6 Mbps
1360
+ under ARP spoofing. If a user desires to reduce the network
1361
+ overhead, the user could upgrade the computer that runs IoT
1362
+ Network Analyzer, as Raspberry Pi 3 is anecdotally known for
1363
+ its poor networking performance [37], [38]. Possible upgrade
1364
+ option could include a computer—or ODroid if the user needs
1365
+ the compact form factor [14]—that is shipped with a fast CPU
1366
+ and a Gigabit Ethernet card.
1367
+ Battery Lifetime. We used AccuBattery on android, to try to
1368
+ understand the energy cost. This does not hold across phones,
1369
+ so we compare the energy cost against YouTube and TikTok
1370
+ for ten minutes of streaming video. With all the background
1371
+ application killed, 10 minutes of Privacy Plumber impacts
1372
+ 3.98% (159mAh) of the battery lifetime, while YouTube costs
1373
+ 2.63% (105mAh) and TikTok costs 3.9% (156mAh). Privacy
1374
+ Plumber is only meant for point inspection and short usage to
1375
+ analyze new devices in the home, or experiment with different
1376
+ setups, so it should not impact battery lifetime too much since
1377
+ it is not always on. Moreover, the battery lifetime cost is
1378
+ similar to that of streaming videos online, a normal function,
1379
+ therefore users should not expect significant battery lifetime
1380
+ loss due to usage of Privacy Plumber.
1381
+ V.
1382
+ DISCUSSION ON LIMITATIONS AND FUTURE WORK
1383
+ Comparing users’ mental models against actual contents
1384
+ of IoT network traffic. Our results show that users’ mental
1385
+ model of how IoT devices communicate with the Internet may
1386
+ be inconsistent with how devices appear to behave, but it is
1387
+ unclear whether this mental model is consistent with the actual
1388
+ contents of the communication. For example, two participants
1389
+ in our study did not expect network traffic from Amazon Echo
1390
+ when the device’s microphone was on mute. Presumably, the
1391
+ participants expected Amazon not to send any audio data back
1392
+ to Amazon during mute. In this case, Echo’s apparent behavior
1393
+ was the communication with the Internet on mute; in contrast,
1394
+ whether Echo actually sent out audio data was unknown. Our
1395
+ system did not extract the contents of the communication,
1396
+ which could be encrypted based on previous results [4].
1397
+ Despite the encrypted contents, man-in-the-middling is
1398
+ possible (e.g., per Moghaddam et al. [28]). In future in-lab
1399
+ studies, we plan to modify IoT Network Analyzer to intercept
1400
+ and decrypt IoT traffic, assuming that devices do not validate
1401
+ certificates and/or do not use certificate pinning. We hope to
1402
+ extract the payload from some of the TLS connections, identify
1403
+ exactly what devices are sending to the Internet, and compare
1404
+ it against users’ mental models.
1405
+ Automated, contextualized blocking of devices. The current
1406
+ prototype allows users to set a block/unblock schedule for IoT
1407
+ devices. Although this feature provides users with fine-grained
1408
+ control, it requires manual effort from the user both in setting
1409
+ what devices to block and when to block.
1410
+ We plan to augment this feature with automated device
1411
+ blocking based on contextualized information that IoT Net-
1412
+ work Analyzer already collects. For example, a user could
1413
+ create a rule on IoT Network Analyzer that would automat-
1414
+ ically block surveillance cameras if IoT Network Analyzer
1415
+ detects the presence of mobile phones (based on ARP and
1416
+ pings) in the home network (which could suggest that the
1417
+ residents are home); otherwise, it can unblock the cameras to
1418
+ capture, say, unauthorized entry into the property. As another
1419
+ example, let’s say a user has an Amazon Echo and a smart
1420
+ TV in the living room. The user could create another rule that
1421
+ lets IoT Network Analyzer automatically block Amazon Echo
1422
+ if it detects active streaming traffic from the smart TV, as the
1423
+ user may not want Echo to capture any conversations while
1424
+ the family is watching TV in the living room. In short, by
1425
+ leveraging the IoT traffic that IoT Network Analyzer already
1426
+ collects, users could create automated, contextualized rules to
1427
+ block IoT devices from collecting sensitive data.
1428
+ Deployment roadmap and challenges. We plan to deploy the
1429
+ Privacy Plumber app and IoT Network Analyzer to real-world
1430
+ users at scale. Based on our current prototype, we plan to make
1431
+ the following modifications.
1432
+ Operating system support. Once deployed, our system will
1433
+ have the same two-component architecture, although we will
1434
+ expand the Privacy Plumber app to both iOS and Android
1435
+ (current prototype), and IoT Network Analyzer to all major
1436
+ non-mobile operating systems including macOS, Windows,
1437
+ 12
1438
+
1439
+ and Linux (current prototype). This process will likely be
1440
+ straightforward, as we developed both components with cross-
1441
+ OS platforms (Unity for the app and pure Python for IoT
1442
+ Network Analyzer).
1443
+ Network-based device identification. We will develop
1444
+ network-based device identification mechanisms to help users
1445
+ distinguish among their devices and identify the device(s)
1446
+ of interest. The current prototype identifies devices based
1447
+ on a hard-coded mapping between MAC OUIs and device
1448
+ names, because we already know the inventory of IoT devices
1449
+ in the lab. For real-world deployment, we will incorporate
1450
+ IoT Inspector’s device identification algorithm [15], so that
1451
+ our system will dynamically infer device names based on
1452
+ the network signature, which includes not only OUIs, but
1453
+ also DNS queries, UPnP banners, mDNS names, and DHCP
1454
+ hostnames. We will also use information in the 802.11 frames
1455
+ to discover and locate devices [41].
1456
+ Image-based device identification. To complement the
1457
+ network-based approach, we will also develop image-based
1458
+ device identification mechanisms for the AR camera. Cur-
1459
+ rently, the Privacy Plumber app identifies devices based on
1460
+ printed QR codes on or near select IoT devices, such that
1461
+ the QR codes encode the MAC addresses and the names of
1462
+ devices. For real-world deployment, we will use computer
1463
+ vision to train a model of common IoT device types, such as
1464
+ voice assistants, smart TVs, and surveillance cameras (where
1465
+ security and privacy issues are commonly found in the litera-
1466
+ ture). This model will help the AR app recognize possible IoT
1467
+ devices (e.g., “likely a smart TV”). The app will then refine
1468
+ the recognition with the network-based device identification
1469
+ algorithm (e.g., “whether the device is indeed a smart TV based
1470
+ on the network signatures”) and manual user input if necessary.
1471
+ Both the network- and image-based approaches will hopefully
1472
+ help the app identify IoT devices in real-world settings.
1473
+ Expanded user study. The user study, as a pilot, has a small
1474
+ sample size and is limited to graduate students, who may
1475
+ be more inquisitive or technically-inclined than the general
1476
+ population. We hope to scale out the testing to a larger
1477
+ userbase, both in lab and in real homes, in future work. We will
1478
+ also compare the participants’ changes in privacy awareness
1479
+ against other visualization tools (e.g., IoT Inspector [15] and
1480
+ Aretha [40]). Finally, we will conduct in-depth studies on
1481
+ various ways to visualize privacy leaks in AR (e.g., icon
1482
+ overlays and animations).
1483
+ VI.
1484
+ SUMMARY
1485
+ This paper presented Privacy Plumber, an end-to-end sys-
1486
+ tem demonstrating how a general population of end users can
1487
+ potentially have insight into the network traffic of smart home
1488
+ IoT devices, and how these users can control when these smart
1489
+ devices could communicate with the Internet with one click of
1490
+ a button. Designed after the concept of a leak detector, Privacy
1491
+ Plumber is a phone app with a tethered desktop application—
1492
+ IoT Network Analyzer—that provides an inspect and correct
1493
+ interface supported by network traffic analysis (inspect) and
1494
+ automated and timed network traffic jamming (correct).
1495
+ Privacy Plumber is the first real-world inspection and
1496
+ control system that can be deployed in any home without new
1497
+ hardware or router modifications. Using AR, the tool aims to
1498
+ help users model IoT device activities within the context of
1499
+ the physical environment and of user interactions (addressing
1500
+ challenges C1 and C3, per Section II-D); it gives users the
1501
+ option to block IoT devices and control the privacy “valve”
1502
+ (C2); it provides users with an interface to visualize IoT device
1503
+ activities as users interact with devices (C4); and it requires
1504
+ a modern AR-supported phone and computer, without any
1505
+ dedicated or specialized hardware (C5).
1506
+ We evaluated Privacy Plumber inside an instrumented smart
1507
+ home space with a variety of devices not previously evaluated
1508
+ for any privacy-enhancing tool, including a smart fridge, a
1509
+ smart TV, voice assistants, and Internet-connected surveillance
1510
+ cameras. We found that using Privacy Plumber improved users’
1511
+ awareness of potential privacy violations of devices and that
1512
+ the system was generally easy to use and afforded useful
1513
+ controls. In the future, we hope tools like Privacy Plumber will
1514
+ give mechanisms back to the user for stymieing the flow of
1515
+ private information outside the home, especially as our homes
1516
+ and living spaces become smarter, often without our consent.
1517
+ ACKNOWLEDGMENT
1518
+ This research is based upon work supported by the National
1519
+ Science Foundation under award numbers CNS-2219867,
1520
+ CNS-1739809, and CNS-1915847. Any opinions, findings, and
1521
+ conclusions or recommendations expressed in this material are
1522
+ those of the authors and do not necessarily reflect the views of
1523
+ the National Science Foundation. The research is also based
1524
+ on work supported by gifts from Consumer Reports and Meta.
1525
+ REFERENCES
1526
+ [1]
1527
+ Abbas Acar, Hossein Fereidooni, Tigist Abera, Amit Kumar Sikder,
1528
+ Markus Miettinen, Hidayet Aksu, Mauro Conti, Ahmad-Reza Sadeghi,
1529
+ and Selcuk Uluagac. Peek-a-boo: I see your smart home activities, even
1530
+ encrypted! In Proceedings of the 13th ACM Conference on Security and
1531
+ Privacy in Wireless and Mobile Networks, pages 207–218, 2020.
1532
+ [2]
1533
+ Imtiaz Ahmad, Rosta Farzan, Apu Kapadia, and Adam J Lee. Tangible
1534
+ privacy: Towards user-centric sensor designs for bystander privacy. Pro-
1535
+ ceedings of the ACM on Human-Computer Interaction, 4(CSCW2):1–
1536
+ 28, 2020.
1537
+ [3]
1538
+ Omar Alrawi, Chaz Lever, Manos Antonakakis, and Fabian Monrose.
1539
+ Sok: Security evaluation of home-based iot deployments. In 2019 IEEE
1540
+ symposium on security and privacy (sp), pages 1362–1380. IEEE, 2019.
1541
+ [4]
1542
+ Noah Apthorpe, Danny Yuxing Huang, Dillon Reisman, Arvind
1543
+ Narayanan, and Nick Feamster. Keeping the smart home private with
1544
+ smart (er) iot traffic shaping.
1545
+ Proceedings on Privacy Enhancing
1546
+ Technologies, 2019(3):128–148, 2019.
1547
+ [5]
1548
+ Noah Apthorpe, Dillon Reisman, and Nick Feamster. A smart home is
1549
+ no castle: Privacy vulnerabilities of encrypted iot traffic. arXiv preprint
1550
+ arXiv:1705.06805, 2017.
1551
+ [6]
1552
+ Noah
1553
+ Apthorpe,
1554
+ Dillon
1555
+ Reisman,
1556
+ Srikanth
1557
+ Sundaresan,
1558
+ Arvind
1559
+ Narayanan, and Nick Feamster.
1560
+ Spying on the smart home: Pri-
1561
+ vacy attacks and defenses on encrypted iot traffic.
1562
+ arXiv preprint
1563
+ arXiv:1708.05044, 2017.
1564
+ [7]
1565
+ Bruhadeshwar Bezawada, Maalvika Bachani, Jordan Peterson, Hossein
1566
+ Shirazi, Indrakshi Ray, and Indrajit Ray. Iotsense: Behavioral finger-
1567
+ printing of iot devices. arXiv preprint arXiv:1804.03852, 2018.
1568
+ [8]
1569
+ Patrick Bombik, Tom Wenzel, Jens Grossklags, and Sameer Patil. A
1570
+ multi-region investigation of the perceptions and use of smart home
1571
+ devices.
1572
+ Proceedings on Privacy Enhancing Technologies, 3:6–32,
1573
+ 2022.
1574
+ 13
1575
+
1576
+ [9]
1577
+ Nico Castelli, Corinna Ogonowski, Timo Jakobi, Martin Stein, Gunnar
1578
+ Stevens, and Volker Wulf. What happened in my home? an end-user de-
1579
+ velopment approach for smart home data visualization. In Proceedings
1580
+ of the 2017 CHI Conference on Human Factors in Computing Systems,
1581
+ pages 853–866, 2017.
1582
+ [10]
1583
+ Gordon Chu, Noah Apthorpe, and Nick Feamster. Security and privacy
1584
+ analyses of internet of things children’s toys. IEEE Internet of Things
1585
+ Journal, 6(1):978–985, 2018.
1586
+ [11]
1587
+ Sunny Consolvo, Jaeyeon Jung, Ben Greenstein, Pauline Powledge,
1588
+ Gabriel Maganis, and Daniel Avrahami.
1589
+ The wi-fi privacy ticker:
1590
+ improving awareness & control of personal information exposure on
1591
+ wi-fi.
1592
+ In Proceedings of the 12th ACM international conference on
1593
+ Ubiquitous computing, pages 321–330, 2010.
1594
+ [12]
1595
+ Disconnect, Inc. Disconnect tracking protection, 2021.
1596
+ [13]
1597
+ Yuanyuan Feng, Yaxing Yao, and Norman Sadeh. A design space for
1598
+ privacy choices: Towards meaningful privacy control in the internet of
1599
+ things. In Proceedings of the 2021 CHI Conference on Human Factors
1600
+ in Computing Systems, pages 1–16, 2021.
1601
+ [14]
1602
+ HardKernel. ODROID-XU4, 2021.
1603
+ [15]
1604
+ Danny Yuxing Huang, Noah Apthorpe, Frank Li, Gunes Acar, and Nick
1605
+ Feamster. Iot inspector: Crowdsourcing labeled network traffic from
1606
+ smart home devices at scale. Proceedings of the ACM on Interactive,
1607
+ Mobile, Wearable and Ubiquitous Technologies, 4(2):1–21, 2020.
1608
+ [16]
1609
+ Haojian Jin, Boyuan Guo, Rituparna Roychoudhury, Yaxing Yao,
1610
+ Swarun Kumar, Yuvraj Agarwal, and Jason I Hong. Exploring the needs
1611
+ of users for supporting privacy-protective behaviors in smart homes. In
1612
+ CHI Conference on Human Factors in Computing Systems, pages 1–19,
1613
+ 2022.
1614
+ [17]
1615
+ Patrick Gage Kelley, Joanna Bresee, Lorrie Faith Cranor, and Robert W
1616
+ Reeder.
1617
+ A” nutrition label” for privacy.
1618
+ In Proceedings of the 5th
1619
+ Symposium on Usable Privacy and Security, pages 1–12, 2009.
1620
+ [18]
1621
+ Christian Kreibich, Nicholas Weaver, Boris Nechaev, and Vern Paxson.
1622
+ Netalyzr: Illuminating the edge network. In Proceedings of the 10th
1623
+ ACM SIGCOMM conference on Internet measurement, pages 246–259,
1624
+ 2010.
1625
+ [19]
1626
+ Hosub Lee and Alfred Kobsa. Understanding user privacy in internet
1627
+ of things environments. In 2016 IEEE 3rd World Forum on Internet of
1628
+ Things (WF-IoT), pages 407–412. IEEE, 2016.
1629
+ [20]
1630
+ Huichen Lin and Neil W Bergmann. Iot privacy and security challenges
1631
+ for smart home environments. Information, 7(3):44, 2016.
1632
+ [21]
1633
+ Heather Richter Lipford, Madiha Tabassum, Paritosh Bahirat, Yaxing
1634
+ Yao, and Bart P Knijnenburg. Privacy and the internet of things. Modern
1635
+ Socio-Technical Perspectives on Privacy, page 233, 2022.
1636
+ [22]
1637
+ Nathan Malkin, Julia Bernd, Maritza Johnson, and Serge Egelman.
1638
+ “what can’t data be used for?” privacy expectations about smart tvs
1639
+ in the us. In Proceedings of the 3rd European Workshop on Usable
1640
+ Security (EuroUSEC), London, UK, 2018.
1641
+ [23]
1642
+ Nathan Malkin, Joe Deatrick, Allen Tong, Primal Wijesekera, Serge
1643
+ Egelman, and David Wagner. Privacy attitudes of smart speaker users.
1644
+ Proceedings on Privacy Enhancing Technologies, 2019(4), 2019.
1645
+ [24]
1646
+ Shrirang Mare, Franziska Roesner, and Tadayoshi Kohno. Smart devices
1647
+ in airbnbs: Considering privacy and security for both guests and hosts.
1648
+ Proc. Priv. Enhancing Technol., 2020(2):436–458, 2020.
1649
+ [25]
1650
+ Emily McReynolds, Sarah Hubbard, Timothy Lau, Aditya Saraf, Maya
1651
+ Cakmak, and Franziska Roesner. Toys that listen: A study of parents,
1652
+ children, and internet-connected toys. In Proceedings of the 2017 CHI
1653
+ conference on human factors in computing systems, pages 5197–5207,
1654
+ 2017.
1655
+ [26]
1656
+ Markus Miettinen, Samuel Marchal, Ibbad Hafeez, N Asokan, Ahmad-
1657
+ Reza Sadeghi, and Sasu Tarkoma.
1658
+ Iot sentinel: Automated device-
1659
+ type identification for security enforcement in iot. In 2017 IEEE 37th
1660
+ International Conference on Distributed Computing Systems (ICDCS),
1661
+ pages 2177–2184. IEEE, 2017.
1662
+ [27]
1663
+ Phoebe Moh, Noel Warford, Pubali Datta, Nathan Malkin, Adam Bates,
1664
+ and Michelle L Mazurek. Characterizing misuse and snooping in home
1665
+ iot devices.
1666
+ [28]
1667
+ Hooman Mohajeri Moghaddam, Gunes Acar, Ben Burgess, Arunesh
1668
+ Mathur, Danny Yuxing Huang, Nick Feamster, Edward W Felten, Pra-
1669
+ teek Mittal, and Arvind Narayanan. Watching you watch: The tracking
1670
+ ecosystem of over-the-top tv streaming devices.
1671
+ In Proceedings of
1672
+ the 2019 ACM SIGSAC Conference on Computer and Communications
1673
+ Security, pages 131–147, 2019.
1674
+ [29]
1675
+ Mozilla. Privacy Not Included., 2021.
1676
+ [30]
1677
+ Pardis Emami Naeini, Sruti Bhagavatula, Hana Habib, Martin Degeling,
1678
+ Lujo Bauer, Lorrie Faith Cranor, and Norman Sadeh. Privacy expecta-
1679
+ tions and preferences in an {IoT} world. In Thirteenth Symposium on
1680
+ Usable Privacy and Security (SOUPS 2017), pages 399–412, 2017.
1681
+ [31]
1682
+ Netflix. Internet Connection Speed Recommendations, 2021.
1683
+ [32]
1684
+ Helen Nissenbaum.
1685
+ Privacy in context: Technology, policy, and the
1686
+ integrity of social life. Stanford University Press, 2009.
1687
+ [33]
1688
+ TJ OConnor, Reham Mohamed, Markus Miettinen, William Enck,
1689
+ Bradley Reaves, and Ahmad-Reza Sadeghi.
1690
+ Homesnitch: behavior
1691
+ transparency and control for smart home iot devices. In Proceedings of
1692
+ the 12th Conference on Security and Privacy in Wireless and Mobile
1693
+ Networks, pages 128–138, 2019.
1694
+ [34]
1695
+ pfSense.org. World’s MOST Trusted Open Source Firewall, 2021.
1696
+ [35]
1697
+ James Pierce, Richmond Y Wong, and Nick Merrill. Sensor illumi-
1698
+ nation: Exploring design qualities and ethical implications of smart
1699
+ cameras and image/video analytics. In Proceedings of the 2020 CHI
1700
+ Conference on Human Factors in Computing Systems, pages 1–19,
1701
+ 2020.
1702
+ [36]
1703
+ Sarah Prange, Ahmed Shams, Robin Piening, Yomna Abdelrahman, and
1704
+ Florian Alt. Priview– exploring visualisations to support users’ privacy
1705
+ awareness.
1706
+ In Proceedings of the 2021 CHI Conference on Human
1707
+ Factors in Computing Systems, CHI ’21, New York, NY, USA, 2021.
1708
+ Association for Computing Machinery.
1709
+ [37]
1710
+ Raspberry Pi Discussion Forum. RPi 3B+ gigabit ethernet bad download
1711
+ speeds, 2018.
1712
+ [38]
1713
+ Raspberry Pi Dramble. Networking Benchmarks, 2021.
1714
+ [39]
1715
+ Abbas Razaghpanah, Rishab Nithyanand, Narseo Vallina-Rodriguez,
1716
+ Srikanth Sundaresan, Mark Allman, Christian Kreibich, and Phillipa
1717
+ Gill.
1718
+ Apps, trackers, privacy, and regulators: A global study of the
1719
+ mobile tracking ecosystem. 2018.
1720
+ [40]
1721
+ William Seymour, Martin J Kraemer, Reuben Binns, and Max
1722
+ Van Kleek. Informing the design of privacy-empowering tools for the
1723
+ connected home. In Proceedings of the 2020 CHI Conference on Human
1724
+ Factors in Computing Systems, pages 1–14, 2020.
1725
+ [41]
1726
+ Rahul Anand Sharma, Elahe Soltanaghaei, Anthony Rowe, and Vyas
1727
+ Sekar. Lumos: Identifying and localizing diverse hidden IoT devices
1728
+ in an unfamiliar environment.
1729
+ In 31st USENIX Security Symposium
1730
+ (USENIX Security 22), pages 1095–1112, Boston, MA, August 2022.
1731
+ USENIX Association.
1732
+ [42]
1733
+ Yun Shen and Pierre-Antoine Vervier. Iot security and privacy labels.
1734
+ In Annual Privacy Forum, pages 136–147. Springer, 2019.
1735
+ [43]
1736
+ Madiha Tabassum, Tomasz Kosinski, and Heather Richter Lipford. ” i
1737
+ don’t own the data”: End user perceptions of smart home device data
1738
+ practices and risks. In Fifteenth Symposium on Usable Privacy and
1739
+ Security ({SOUPS} 2019), 2019.
1740
+ [44]
1741
+ Parth Kirankumar Thakkar, Shijing He, Shiyu Xu, Danny Yuxing
1742
+ Huang, and Yaxing Yao. “it would probably turn into a social faux-pas”:
1743
+ Users’ and bystanders’ preferences of privacy awareness mechanisms
1744
+ in smart homes. In CHI Conference on Human Factors in Computing
1745
+ Systems, pages 1–13, 2022.
1746
+ [45]
1747
+ Blase Ur, Pedro Giovanni Leon, Lorrie Faith Cranor, Richard Shay,
1748
+ and Yang Wang. Smart, useful, scary, creepy: perceptions of online
1749
+ behavioral advertising.
1750
+ In proceedings of the eighth symposium on
1751
+ usable privacy and security, pages 1–15, 2012.
1752
+ [46]
1753
+ Max Van Kleek, Reuben Binns, Jun Zhao, Adam Slack, Sauyon Lee,
1754
+ Dean Ottewell, and Nigel Shadbolt.
1755
+ X-ray refine: Supporting the
1756
+ exploration and refinement of information exposure resulting from
1757
+ smartphone apps.
1758
+ In Proceedings of the 2018 CHI Conference on
1759
+ Human Factors in Computing Systems, pages 1–13, 2018.
1760
+ [47]
1761
+ Max Van Kleek, Ilaria Liccardi, Reuben Binns, Jun Zhao, Daniel J
1762
+ Weitzner, and Nigel Shadbolt. Better the devil you know: Exposing
1763
+ the data sharing practices of smartphone apps. In Proceedings of the
1764
+ 2017 CHI Conference on Human Factors in Computing Systems, pages
1765
+ 5208–5220, 2017.
1766
+ [48]
1767
+ Sean Whalen.
1768
+ An introduction to arp spoofing.
1769
+ Node99 [Online
1770
+ Document], 2001.
1771
+ 14
1772
+
1773
+ [49]
1774
+ Peter Worthy, Ben Matthews, and Stephen Viller. Trust me: doubts and
1775
+ concerns living with the internet of things. In Proceedings of the 2016
1776
+ ACM Conference on Designing Interactive Systems, pages 427–434,
1777
+ 2016.
1778
+ [50]
1779
+ Yaxing Yao, Justin Reed Basdeo, Smirity Kaushik, and Yang Wang.
1780
+ Defending my castle: A co-design study of privacy mechanisms for
1781
+ smart homes. In Proceedings of the 2019 CHI conference on human
1782
+ factors in computing systems, pages 1–12, 2019.
1783
+ [51]
1784
+ Eric Zeng, Shrirang Mare, and Franziska Roesner. End user security
1785
+ and privacy concerns with smart homes. In Thirteenth Symposium on
1786
+ Usable Privacy and Security (SOUPS 2017), pages 65–80, 2017.
1787
+ [52]
1788
+ Wei Zhang, Yan Meng, Yugeng Liu, Xiaokuan Zhang, Yinqian Zhang,
1789
+ and Haojin Zhu. Homonit: Monitoring smart home apps from encrypted
1790
+ traffic.
1791
+ In Proceedings of the 2018 ACM SIGSAC Conference on
1792
+ Computer and Communications Security, pages 1074–1088, 2018.
1793
+ [53]
1794
+ Serena Zheng, Noah Apthorpe, Marshini Chetty, and Nick Feamster.
1795
+ User perceptions of smart home iot privacy. Proceedings of the ACM
1796
+ on human-computer interaction, 2(CSCW):1–20, 2018.
1797
+ APPENDIX
1798
+ SURVEY QUESTIONS
1799
+ All questions require responses in Likert scales, ranging
1800
+ from “Strongly Agree” (1) to “Strongly Disagree” (5).
1801
+ A. Pre-Study Survey Questions
1802
+ 1)
1803
+ When I am in a smart home, I think about what in-
1804
+ formation I may be exposing to vendors, companies,
1805
+ and 3rd parties when I interact with or sit in the same
1806
+ space with smart devices in the home.
1807
+ 2)
1808
+ I am not concerned about the information I may be
1809
+ exposing to 3rd parties when I interact with or sit in
1810
+ the same space as smart devices in a smart home.
1811
+ 3)
1812
+ I think this device could be (or is) useful or valuable
1813
+ to my daily life and routine.
1814
+
1815
+ Smart Fridge
1816
+
1817
+ Google Home
1818
+
1819
+ Amazon Echo
1820
+
1821
+ Smart TV
1822
+
1823
+ Nest Cam
1824
+ 4)
1825
+ I am comfortable having this device in my house and
1826
+ always on.
1827
+
1828
+ Smart Fridge
1829
+
1830
+ Google Home
1831
+
1832
+ Amazon Echo
1833
+
1834
+ Smart TV
1835
+
1836
+ Nest Cam
1837
+ B. Post-Study Survey Questions
1838
+ 1)
1839
+ When I am in a smart home, I think about what in-
1840
+ formation I may be exposing to vendors, companies,
1841
+ and 3rd parties when I interact with or sit in the same
1842
+ space with smart devices in the home.
1843
+ 2)
1844
+ I am not concerned about the information I may be
1845
+ exposing to 3rd parties when I interact with or sit in
1846
+ the same space as smart devices in a smart home.
1847
+ 3)
1848
+ Privacy Plumber has made me more aware of what
1849
+ information I may be exposing to 3rd parties when I
1850
+ interact with smart devices in the home.
1851
+ 4)
1852
+ I feel Privacy Plumber has made me more aware
1853
+ of privacy and security concerns surrounding IoT
1854
+ devices.
1855
+ 5)
1856
+ I think this device could be (or is) useful or valuable
1857
+ to my daily life and routine.
1858
+
1859
+ Smart Fridge
1860
+
1861
+ Google Home
1862
+
1863
+ Amazon Echo
1864
+
1865
+ Smart TV
1866
+
1867
+ Nest Cam
1868
+ 6)
1869
+ I am comfortable having this device in my house and
1870
+ always on.
1871
+
1872
+ Smart Fridge
1873
+
1874
+ Google Home
1875
+
1876
+ Amazon Echo
1877
+
1878
+ Smart TV
1879
+
1880
+ Nest Cam
1881
+ 7)
1882
+ Finally, please provide any other thoughts or obser-
1883
+ vations from participating in this experiment with
1884
+ Privacy Plumber (open ended).
1885
+ ADDITIONAL RESPONSES FROM THE USABILITY SURVEY
1886
+ We gave participants an open-ended question if they would
1887
+ improve the usability if privacy plumber, if so how. We
1888
+ obtained the following responses from each participant.
1889
+ I would include more guidance or instructions in the app
1890
+ for first-time users. (P1)
1891
+ I think the app is generally easy-to-use, although I might
1892
+ want more functionalities in the app. There are certain laten-
1893
+ cies in the app, which can be annoying. It would be more
1894
+ helpful if I can know if the device is not sending any traffic,
1895
+ or it is just simply late (e.g., adding a loading icon). (P2)
1896
+ Make it possible to view past trends (a la net microscope)
1897
+ and scroll backwards in time, so I can get the context of how
1898
+ much traffic is regularly sent. Give me a global view of the
1899
+ worst offenders. Still some work to do on basic stability. It
1900
+ only works on devices that people have obviously ALREADY
1901
+ DECIDED TO BUY, which is a weird sample. Obviously, I
1902
+ don’t have QR codes printed out on all of my household
1903
+ electronics. (P3)
1904
+ I had difficulties trying to access the buttons, and the
1905
+ images seemed lagged a little. But the info was very useful
1906
+ overall. (P4)
1907
+ Fix where the traffic and ‘learn more about the device’
1908
+ buttons once you’ve scanned the QR code. It’s a bit awkward
1909
+ to have to hold the phone back up to the device. Maybe add
1910
+ the units (byte/kB) to the left hand side of the graph instead
1911
+ of above it for the traffic visualization. (P5)
1912
+ The plots are not super-intuitive but I liked the representa-
1913
+ tions in terms of text/pictures which is easier to comprehend.
1914
+ I would also be interested to see what advertisers the infor-
1915
+ mation is being leaked to. While the AR thing is cool, I would
1916
+ also like the option to just scroll through a list of devices.
1917
+ That ways I do not have to be close to the device and would
1918
+ also be able to monitor its activity when I am not close to
1919
+ the device. In fact, I would be interested in seeing the device
1920
+ communication (including interaction w/ advertisers) in that
1921
+ case. (P6)
1922
+ 15
1923
+
HdFLT4oBgHgl3EQfIC9G/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
I9AyT4oBgHgl3EQf5_o0/content/tmp_files/2301.00813v1.pdf.txt ADDED
@@ -0,0 +1,1464 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ A Survey on Protein Representation Learning: Retrospect and Prospect
2
+ Lirong Wu 1,2 ∗ , Yufei Huang 1,2 ∗ , Haitao Lin 1,2 , Stan Z. Li 1†
3
+ 1 AI Lab, Research Center for Industries of the Future, Westlake University
4
+ 2 College of Computer Science and Technology, Zhejiang University
5
+ {wulirong,huangyufei,linhaitao,stan.zq.li}@westlake.edu.cn
6
+ Abstract
7
+ Proteins are fundamental biological entities that
8
+ play a key role in life activities. The amino acid
9
+ sequences of proteins can be folded into stable 3D
10
+ structures in the real physicochemical world, form-
11
+ ing a special kind of sequence-structure data. With
12
+ the development of Artificial Intelligence (AI) tech-
13
+ niques, Protein Representation Learning (PRL) has
14
+ recently emerged as a promising research topic
15
+ for extracting informative knowledge from mas-
16
+ sive protein sequences or structures. To pave the
17
+ way for AI researchers with little bioinformatics
18
+ background, we present a timely and comprehen-
19
+ sive review of PRL formulations and existing PRL
20
+ methods from the perspective of model architec-
21
+ tures, pretext tasks, and downstream applications.
22
+ We first briefly introduce the motivations for pro-
23
+ tein representation learning and formulate it in a
24
+ general and unified framework. Next, we divide
25
+ existing PRL methods into three main categories:
26
+ sequence-based, structure-based, and sequence-
27
+ structure co-modeling. Finally, we discuss some
28
+ technical challenges and potential directions for
29
+ improving protein representation learning. The lat-
30
+ est advances in PRL methods are summarized in
31
+ a GitHub repository https://github.com/LirongWu/
32
+ awesome-protein-representation-learning.
33
+ 1
34
+ Introduction
35
+ Proteins perform specific biological functions that are essen-
36
+ tial for all living organisms and therefore play a key role when
37
+ investigating the most fundamental questions in the life sci-
38
+ ences. The proteins are composed of one or several chains of
39
+ amino acids that fold into a stable 3D structure to enable vari-
40
+ ous biological functionalities. Therefore, understanding, pre-
41
+ dicting, and designing proteins for biological processes are
42
+ critical for medical, pharmaceutical, and genetic research.
43
+ Previous approaches on protein modeling are mostly driven
44
+ by biological or physical priors, and they explore com-
45
+ plex sequence-structure-function relationships through en-
46
+ ergy minimization [Rohl et al., 2004; Xu and Zhang, 2011],
47
+ ∗Equal contribution, † Corresponding author
48
+ dynamics simulations [Hospital et al., 2015; Karplus and
49
+ Petsko, 1990], etc.
50
+ With the development of artificial in-
51
+ telligence and low-cost sequencing technologies, data-driven
52
+ Protein Representation Learning (PRL) [Jumper et al., 2021;
53
+ Rao et al., 2019; Rives et al., 2021; Hermosilla and Ropinski,
54
+ 2022; Jing et al., 2020] has made remarkable progress due to
55
+ its superior performance in modeling complex nonlinear rela-
56
+ tionships. The primary goal of protein representation learning
57
+ is to extract transferable knowledge from protein data with
58
+ well-designed model architectures and pretext tasks, and then
59
+ generalize the learned knowledge to various protein-related
60
+ downstream applications, ranging from structure prediction
61
+ to sequence design. Despite their great progress, it is still
62
+ tricky for AI researchers without bioinformatics background
63
+ to get started with protein representation learning, and one
64
+ obstacle is the vast amount of physicochemical knowledge in-
65
+ volved behind the proteins. Therefore, a survey on PRL meth-
66
+ ods that is friendly to the AI community is urgently needed.
67
+ Existing surveys related to PRL [Iuchi et al., 2021; Unsal
68
+ et al., 2020; Hu et al., 2021; Torrisi et al., 2020] are mainly
69
+ developed from the perspective of biological applications, but
70
+ do not go deeper into other important aspects, such as model
71
+ architectures and pretext tasks. Overall, our contributions can
72
+ be summarized as follows: (1) Comprehensive review. Our
73
+ survey provides a comprehensive and up-to-date review of
74
+ existing PRL methods from the perspective of the model ar-
75
+ chitectures and pretext tasks. (2) New taxonomy. We divide
76
+ existing PRL methods into three categories: sequence-based,
77
+ structure-based, and sequence-structure co-modeling. (3) De-
78
+ tailed Implementations. We summarize the paper lists and
79
+ open-source codes in a public GitHub repository, setting the
80
+ stage for the development of more future works. (4) Future
81
+ directions. We point out the technical limitations of current
82
+ research and discuss several promising directions.
83
+ 2
84
+ Notation and Problem Statement
85
+ The sequence of amino acids can be folded into a stable 3D
86
+ structure, forming a special kind of sequence-structure data,
87
+ which determines its properties and functions. Therefore, we
88
+ can model each protein as a graph G = (V, E, X, F), where V
89
+ is the ordered set of N nodes in the graph representing amino
90
+ acid residues and E ∈ V × V is the set of edges that connects
91
+ the nodes. Each node u ∈ V in graph G can be attributed with
92
+ a scalar-vector tuple xu = (su, Vu), where su ∈ RO and
93
+ arXiv:2301.00813v1 [cs.LG] 31 Dec 2022
94
+
95
+ Vu ∈ R3×P . Each edge e ∈ E can be attributed with a scalar-
96
+ vector tuple fe = (se, Ve), where se ∈ RT and Ve ∈ R3×D.
97
+ Given a model architecture fθ(·) and a set of K losses
98
+ of pretext tasks {L(1)
99
+ pre(θ, η1), L(2)
100
+ pre(θ, η2), · · · , L(K)
101
+ pre (θ, ηK)}
102
+ with projection heads {gηk(·)}K
103
+ k=1, Protein Representation
104
+ Learning (PRL) usually works in a two-stage manner: (1)
105
+ Pre-training the model fθ(·) with pretext tasks; and (2) Fine-
106
+ tuning the pre-trained model fθinit(·) with a projection head
107
+ gω(·) under the supervision of a specific downstream task
108
+ Ltask(θ, ω). The learning objective can be formulated as
109
+ θ∗, ω∗ = arg min
110
+ (θ,ω) Ltask(θinit, ω),
111
+ s.t. θinit, {η∗
112
+ k}K
113
+ k=1 = arg min
114
+ θ,{ηk}K
115
+ k=1
116
+ K
117
+
118
+ k=1
119
+ λkL(k)
120
+ pre(θ, ηk)
121
+ (1)
122
+ where {λk}K
123
+ k=1 are trade-off task hyperparameters. A high-
124
+ level overview of the PRL framework is shown in Fig. 1.
125
+ In practice, if we set K = 1, ω = η1, i.e., L(1)
126
+ pre(θ, η1) =
127
+ Ltask(θ, ω), it is equivalent to learning task-specific repre-
128
+ sentations directly under downstream supervision, which in
129
+ this survey can be considered as a special case of Eq. (1).
130
+ Pretext Tasks
131
+ Prediction
132
+ Head
133
+ Prediction
134
+ Head
135
+ Encoder
136
+ Downstream Task
137
+ Encoder
138
+ Step 2
139
+ Fine-tune
140
+ Step 1
141
+ Pre-train
142
+ Figure 1: A general framework for protein representation learning.
143
+ In this survey, we mainly focus on the model architecture
144
+ fθ(·) and pretext tasks {L(k)
145
+ pre(θ, ηk)}K
146
+ k=1 for protein repre-
147
+ sentation learning, and defer the discussion on downstream
148
+ applications until Sec. 5. A high-level overview of this sur-
149
+ vey with some representative examples is shown in Fig. 2.
150
+ 3
151
+ Model Architectures
152
+ In this section, we summarize some commonly used model
153
+ architectures for learning protein sequences or structures.
154
+ 3.1
155
+ Sequence-based Encoder
156
+ The sequence encoder takes as input (V, X) and then aims to
157
+ capture the dependencies between amino acids. [Wang et al.,
158
+ 2019] treats protein sequences as a special “biological lan-
159
+ guage” and then establishes an analogy between such “bio-
160
+ logical language” and natural (textual) language. Inspired by
161
+ this, many classical model architectures developed for natural
162
+ language processing can be directly extended to handle pro-
163
+ tein sequences [Asgari et al., 2019]. Depending on whether
164
+ a single sequence or multiple sequences are to be encoded,
165
+ there are a variety of different sequence-based encoders.
166
+ Single Sequences
167
+ The commonly used sequence encoders for modeling single
168
+ sequences include Variational Auto-Encoder (VAE) [Sinai et
169
+ al., 2017; Ding et al., 2019], Recurrent Neural Networks
170
+ (RNNs) [Armenteros et al., 2020], Long Short-Term Memory
171
+ (LSTM) [Hochreiter and Schmidhuber, 1997], BERT [Devlin
172
+ et al., 2018], Transformer [Vaswani et al., 2017]. Based on
173
+ the vanilla Transformer, [Wu et al., 2022] proposes a novel
174
+ geometry-inspired transformer (Geoformer) to further distill
175
+ the structural and physical pairwise relationships between
176
+ amino acids into the learned protein representation. If we
177
+ do not consider the ordering of amino acids in the sequences,
178
+ we can also directly apply Convolutional Neural Networks
179
+ (CNNs) [LeCun et al., 1995] or ResNet [He et al., 2016] to
180
+ capture the local dependencies between adjacent amino acids.
181
+ MSA Sequences
182
+ The long-standing practices in computational biology are to
183
+ make inferences from a family of evolutionarily related se-
184
+ quences [Weigt et al., 2009; Thomas et al., 2005; Lapedes
185
+ et al., 1999].
186
+ Therefore, there have been several multi-
187
+ ple sequences encoders proposed to capture co-evolutionary
188
+ information by taking as input a set of sequences in the
189
+ form of multiple sequence alignment (MSA). For exam-
190
+ ple, MSA Transformer [Rao et al., 2021] extends the self-
191
+ attention mechanism to the MSA setting, which interleaves
192
+ self-attention across rows and columns to capture dependen-
193
+ cies between amino acids and between sequences. As a cru-
194
+ cial component of AlphaFold2, Evoformer [Jumper et al.,
195
+ 2021] alternatively updates MSA and Pair representations in
196
+ each block, which encode co-evolutionary information in se-
197
+ quences and relations between residues, respectively.
198
+ 3.2
199
+ Structure-based Encoder
200
+ Despite the effectiveness of sequence-based encoders, the
201
+ power of pre-training with protein structures has been rarely
202
+ explored, even though protein structures are known to be de-
203
+ terminants of protein functions. To better utilize this critical
204
+ structural information, a large number of structure-based en-
205
+ coders have been proposed to model structural information,
206
+ which can be mainly divided into three categories: feature
207
+ map-based, message-passing GNNs, and geometric GNNs.
208
+ Feature map-based Methods
209
+ The use of deep learning to model protein 3D structures could
210
+ be traced back to a decade ago [Zhang and Zhang, 2010;
211
+ Schaap et al., 2001]. Early methods directly extracted sev-
212
+ eral hand-crafted feature maps from protein structures and
213
+ then applied 3D CNNs to model the geometric information
214
+ of proteins [Derevyanko et al., 2018; Amidi et al., 2018;
215
+ Townshend et al., 2019]. Later work extended 3D CNNs to
216
+ spherical convolution for identifying interaction patterns on
217
+ protein surfaces [Sverrisson et al., 2021; Gainza et al., 2020].
218
+ Message-passing GNNs
219
+ To further capture the geometric relationships and biomedi-
220
+ cal interactions between amino acids, it has been proposed
221
+ to first construct a graph from the extracted feature maps by
222
+ thresholding or k Nearest Neighbors (kNN) [Preparata and
223
+ Shamos, 2012]. Then, many existing message-passing Graph
224
+ Neural Networks (GNNs) can be directly applied to model
225
+ protein structures, including Graph Convolutional Network
226
+ (GCN) [Kipf and Welling, 2016], Graph Isomorphism Net-
227
+ work (GIN) [Xu et al., 2018], and GraphSAGE [Hamilton
228
+
229
+ PRL
230
+ Preliminaries
231
+ Notation and Problem Statement
232
+ Architectures
233
+ Sequence-based
234
+ Single Sequence
235
+ LSTM [Hochreiter and Schmidhuber, 1997], Transformer [Vaswani et al., 2017], CNNs [LeCun et al., 1995]
236
+ MSA Sequence
237
+ MSA Transformer [Rao et al., 2021], Evoformer [Jumper et al., 2021]
238
+ Structure-based
239
+ Feature map-based
240
+ 3D CNNs [Derevyanko et al., 2018], Spherical CNNs [Sverrisson et al., 2021]
241
+ Message-passing GNNs
242
+ GCNs [Kipf and Welling, 2016], IEConv [Hermosilla et al., 2020], GearNet [Zhang et al., 2022]
243
+ Geometric GNNs
244
+ GVP [Jing et al., 2020], GBP [Aykent and Xia, 2022], DWP [Li et al., 2022]
245
+ Sequence-structure Co-modeling
246
+ DeepFRI [Gligorijevi´c et al., 2021], LM-GVP [Wang et al., 2021]
247
+ Pretext Tasks
248
+ Sequence-based
249
+ Supervised
250
+ PLUS [Min et al., 2021], Profile Prediction [Sturmfels et al., 2020], Progen [Madani et al., 2020]
251
+ Self-Supervised
252
+ MLM [Rao et al., 2019], PMLM [He et al., 2021], NAP [Alley et al., 2019], CPC [Lu et al., 2020]
253
+ Structure-based
254
+ Contrative
255
+ Multiview Contrast [Hermosilla and Ropinski, 2022; Zhang et al., 2022]
256
+ Predictive
257
+ Distance and Angle Prediction [Chen et al., 2022], Dihedral Prediction [Hermosilla and Ropinski, 2022]
258
+ Sequence-structure Co-modeling
259
+ Full-atomic Structure Prediction [Jumper et al., 2021; Hu et al., 2022]
260
+ Applications
261
+ Property Prediction
262
+ Stability [Rao et al., 2019], Fold Quality [Baldassarre et al., 2021], Mutation Effect [Meier et al., 2021], PPI [Wang et al., 2019]
263
+ Structure Prediction
264
+ Full-atomic or Backbone Prediction [Hiranuma et al., 2021; Wu et al., 2022], Structure Inpainting [McPartlon and Xu, 2022]
265
+ Protein Design
266
+ Template-based [Ingraham et al., 2019], De Novo [Huang et al., 2016; Koepnick et al., 2019]
267
+ Structure-Based Drug Design
268
+ Auto-regressive [Liu et al., 2022a; Peng et al., 2022], Diffusion [Lin et al., 2022; Schneuing et al., 2022]
269
+ Figure 2: A high-level overview of this survey with representative examples.
270
+ et al., 2017]. However, the edges in the protein graph may
271
+ have some key properties, such as dihedral angles and direc-
272
+ tions, which determine the biological function of proteins.
273
+ With this in mind, there have been several structure-based
274
+ encoders proposed to simultaneously leverages the node and
275
+ edge features of the protein graph. For example, [Hermosilla
276
+ et al., 2020] proposes IE convolution (IEconv) to simultane-
277
+ ously capture the primary, secondary and tertiary structures
278
+ of proteins by incorporating intrinsic and extrinsic distances
279
+ between nodes. Besides, [Hermosilla and Ropinski, 2022]
280
+ adopts a similar architecture to IEConv, but introduces seven
281
+ additional edge features to efficiently describe the relative po-
282
+ sition and orientation of neighboring nodes.
283
+ Furthermore,
284
+ GearNet [Zhang et al., 2022] proposes a simple structure en-
285
+ coder, which encodes spatial information by adding different
286
+ types of sequential or structural edges and then performs both
287
+ node-level and edge-level message passing simultaneously.
288
+ Geometric GNNs
289
+ The above message-passing GNNs incorporate the 3D geom-
290
+ etry of proteins by encoding the vector features Vu/Ve into
291
+ rotation-invariant scalars su/se. However, reducing this vec-
292
+ tor information directly to scalars may not fully capture com-
293
+ plex geometry. Therefore, geometric-aware neural networks
294
+ are proposed to bake 3D rigid transformations into network
295
+ operations, leading to SO(3)-invariant and equivariant GNNs.
296
+ For example, [Jing et al., 2020] introduces Geometric Vector
297
+ Perceptrons (GVPs), which replace standard multi-layer per-
298
+ ceptrons (MLPs) in feed-forward layers and operate directly
299
+ on both scalar and vector features under a global coordinate
300
+ system. Besides, [Aykent and Xia, 2022] proposes Geometric
301
+ Bottleneck Perceptron (GBPs) to integrate geometric features
302
+ and capture complex geometric relations in the 3D structure,
303
+ based on which a new SO(3)-equivariant message passing
304
+ neural network is proposed to support a variety of geomet-
305
+ ric representation learning tasks. To achieve more sensitive
306
+ geometric awareness in both global transformations and local
307
+ relations, [Li et al., 2022] proposes Directed Weight Percep-
308
+ trons (DWPs) by extending not only the hidden neurons but
309
+ the weights from scalars to 2D/3D vectors, naturally saturat-
310
+ ing the network with 3D structures in the Euclidean space.
311
+ 3.3
312
+ Sequence-structure Encoder
313
+ Compared to sequence- and structure-based encoders, com-
314
+ paratively less work has focused on the co-encoding of pro-
315
+ tein sequences and structures. The mainstream model archi-
316
+ tecture is to extract amino acid representations as node fea-
317
+ tures by a language model and then capture the dependencies
318
+ between amino acids using a GNN module. For example,
319
+ [Gligorijevi´c et al., 2021] introduces DeepFRI, a Graph Con-
320
+ volutional Network (GCN) for predicting protein functions
321
+ by leveraging sequence representations extracted from a pro-
322
+ tein language model (LSTM) and protein structures. Besides,
323
+ LM-GVP [Wang et al., 2021] is composed of a protein lan-
324
+ guage model (composed of Transformer blocks) and a GVP
325
+ network, where the protein LM takes protein sequences as
326
+ input to compute amino acid embeddings and the GVP net-
327
+ work is used to make predictions about protein properties on a
328
+ graph derived from the protein 3D structure. Moreover, [You
329
+
330
+ and Shen, 2022] applies the hierarchical RNN and GAT to
331
+ encode both protein sequences and structures and proposes a
332
+ cross-interaction module to enforce a learned relationship be-
333
+ tween the encoded embeddings of the two protein modalities.
334
+ 4
335
+ Pretext Task
336
+ The pretext tasks are designed to extract meaningful repre-
337
+ sentations from massive data through optimizing some well-
338
+ designed objective functions. In this section, we summarize
339
+ some commonly used pretext tasks for learning on proteins.
340
+ 4.1
341
+ Sequence-based Pretext Task
342
+ There have been many pretext tasks proposed for pre-training
343
+ language models, including Masked Language Modeling
344
+ (MLM) and Next Sentence Prediction (NSP) [Devlin et al.,
345
+ 2018], which can be naturally extended to pre-train protein
346
+ sequences. We divide existing sequence-based pretext tasks
347
+ into two main categories: self-supervised and supervised.
348
+ Self-supervised Pretext Task
349
+ The self-supervised pretext tasks utilize the training data itself
350
+ as supervision signals without the need for additional annota-
351
+ tions. If we consider an amino acid in a sequence as a word
352
+ in a sentence, we can naturally extend masked language mod-
353
+ eling to protein sequences. For example, we can statically or
354
+ dynamically mask out a single or a set of contiguous amino
355
+ acids and then predict the masked amino acids from the re-
356
+ maining sequences [Rao et al., 2019; Elnaggar et al., 2020;
357
+ Rives et al., 2021; Rao et al., 2021; Nambiar et al., 2020;
358
+ Xiao et al., 2021]. Besides, [McDermott et al., 2021] com-
359
+ bines adversarial training with MLM and proposes to mask
360
+ amino acids in a learnable manner. Taking into account the
361
+ dependence between masked amino acids, Pairwise MLM
362
+ (PMLM) [He et al., 2021] proposes to model the probabil-
363
+ ity of a pair of masked amino acids instead of predicting the
364
+ probability of a single amino acid. Besides, Next Amino acid
365
+ Prediction (NAP) [Alley et al., 2019; Elnaggar et al., 2020;
366
+ Strodthoff et al., 2020] aims to predict the type of the next
367
+ amino acid based on a set of given sequence fragments. Dif-
368
+ ferent from the above methods, Contrastive Predictive Cod-
369
+ ing (CPC) [Lu et al., 2020] applies different augmentation
370
+ transformations on the input sequence to generate different
371
+ views, and then maximizes the agreement of two jointly sam-
372
+ pled pairs against that of two independently sampled pairs.
373
+ Supervised Pretext Task
374
+ The supervised pretext tasks use additional labels as auxiliary
375
+ information to guide the model to learn knowledge relevant
376
+ to downstream tasks. For example, PLUS [Min et al., 2021]
377
+ devises a protein-specific pretext task, namely Same-Family
378
+ Prediction (SFP), which trains a model to predict whether a
379
+ given protein pair belongs to the same protein family. The
380
+ protein family labels provide weak structural information and
381
+ help the model learn structurally contextualized representa-
382
+ tions. Besides, [Sturmfels et al., 2020] proposes to use HMM
383
+ profiles derived from MSA as labels and then take Profile Pre-
384
+ diction as a pretext task to help the model learn information
385
+ about protein structures. In addition, to leverage the exponen-
386
+ tially growing protein sequences that lack costly structural
387
+ annotations, Progen [Madani et al., 2020] trains a language
388
+ model with conditioning tags that encode various annotations,
389
+ such as taxonomic, functional, and locational information.
390
+ 4.2
391
+ Structure-based Pretext Task
392
+ Despite the great progress in the design of structure-based
393
+ encoders and graph-based pretext tasks [Wu et al., 2021;
394
+ Xie et al., 2022; Liu et al., 2022b], there are few efforts focus-
395
+ ing on the structure-based pre-training of proteins. Existing
396
+ structure-based pretext tasks for proteins can be mainly clas-
397
+ sified into two branches: contrastive and predictive methods.
398
+ Contrastive Pretext Task
399
+ The primary goal of contrastive methods is to maximize the
400
+ agreement of two jointly sampled positive pairs. For example,
401
+ Multiview Contrast [Hermosilla and Ropinski, 2022] pro-
402
+ poses to randomly sample two sub-structures from each pro-
403
+ tein, encoder them into two representations, and finally max-
404
+ imize the similarity between representations from the same
405
+ protein while minimizing the similarity between representa-
406
+ tions from different proteins. Besides, [Zhang et al., 2022]
407
+ adopts almost the same architecture as Multiview Contrast,
408
+ but replaces GearNet with IEConv as the structure encoder.
409
+ Predictive Pretext Task
410
+ The contrastive methods deal with the inter-data information
411
+ (data-data pairs). In contrast, the predictive methods aim to
412
+ self-generate informative labels from the data as supervision
413
+ and handle the data-label relationships. Categorized by dif-
414
+ ferent types of pseudo labels, the predictive methods have
415
+ different designs that can capture different levels of struc-
416
+ tural protein information. For example, [Chen et al., 2022]
417
+ proposes two predictive tasks, namely Distance Prediction
418
+ and Angle Prediction, which take hidden representations of
419
+ residues as input and aim to predict the relative distance be-
420
+ tween pairwise residues and the angle between two edges,
421
+ respectively, which helps to learn structure-aware protein rep-
422
+ resentations. Furthermore, [Hermosilla and Ropinski, 2022]
423
+ propose Residue Type Prediction and Dihedral Prediction
424
+ based on geometric or biochemical properties. Specifically,
425
+ Residue Type Prediction randomly masks the node features
426
+ of some residues and then lets the structure-based encoders
427
+ predict these masked residue types. Instead, Dihedral Pre-
428
+ diction constructs a learning objective by predicting the di-
429
+ hedral angle between three consecutive edges. Besides, [You
430
+ and Shen, 2022] proposes graph completion (GraphComp),
431
+ which takes as input a protein graph with partially masked
432
+ residues and then makes predictions for those masked tokens.
433
+ 4.3
434
+ Sequence-structure Pretext Task
435
+ Most of the existing methods design pretext tasks for a single
436
+ modality but ignore the dependencies between sequences and
437
+ structures. If we can design the pretext task based on both
438
+ protein sequences and structures, it should capture richer in-
439
+ formation than using single modality data. In practice, there
440
+ is no clear boundary between pretext tasks and downstream
441
+ tasks. For example, AlphaFold2 [Jumper et al., 2021] takes
442
+ full-atomic structure prediction as a downstream task. How-
443
+ ever, if we are concerned with protein property prediction,
444
+ structure prediction can also be considered as a pretext task
445
+
446
+ Table 1: Summary of representative protein representation learning methods.
447
+ Method
448
+ Category
449
+ Architecture
450
+ Pretext Task
451
+ Year
452
+ Bio2Vec-CNN [Wang et al., 2019]
453
+ Sequence-based
454
+ CNN
455
+ -
456
+ 2019
457
+ TAPE [Rao et al., 2019]
458
+ Sequence-based
459
+ ResNet, LSTM, Transformer
460
+ Masked Language Modeling,
461
+ Next Amino Acid Prediction
462
+ 2019
463
+ UniRep [Alley et al., 2019]
464
+ Sequence-based
465
+ Multiplicative LSTM
466
+ Next Amino Acid Prediction
467
+ 2019
468
+ TripletProt [Nourani et al., 2020]
469
+ Sequence-based
470
+ Siamese Networks
471
+ Contrastive Predictive Coding
472
+ 2020
473
+ PLP-CNN [Shanehsazzadeh et al., 2020]
474
+ Sequence-based
475
+ CNN
476
+ -
477
+ 2020
478
+ CPCProt [Lu et al., 2020]
479
+ Sequence-based
480
+ GRU, LSTM
481
+ Contrastive Predictive Coding
482
+ 2020
483
+ MuPIPR [Zhou et al., 2020]
484
+ Sequence-based
485
+ GRU, LSTM
486
+ Next Amino Acid Prediction
487
+ 2020
488
+ ProtTrans [Elnaggar et al., 2020]
489
+ Sequence-based
490
+ Transformer, Bert, XLNet
491
+ Masked Language Modeling
492
+ 2020
493
+ DMPfold [Kandathil et al., 2020]
494
+ Sequence-based
495
+ GRU, ResNet
496
+ -
497
+ 2020
498
+ Profile Prediction [Sturmfels et al., 2020]
499
+ Sequence-based
500
+ Transformer
501
+ HMM Profile Prediction
502
+ 2020
503
+ PRoBERTa [Nambiar et al., 2020]
504
+ Sequence-based
505
+ Transformer
506
+ Masked Language Modeling
507
+ 2020
508
+ UDSMProt [Strodthoff et al., 2020]
509
+ Sequence-based
510
+ LSTM
511
+ Next Amino Acid Prediction
512
+ 2020
513
+ ESM-1b [Rives et al., 2021]
514
+ Sequence-based
515
+ Transformer
516
+ Masked Language Modeling
517
+ 2021
518
+ PMLM [He et al., 2021]
519
+ Sequence-based
520
+ Transformer
521
+ Pairwise Masked Language Modeling
522
+ 2021
523
+ MSA Transformer [Rao et al., 2021]
524
+ Sequence-based
525
+ MSA Transformer
526
+ Masked Language Modeling
527
+ 2021
528
+ ProteinLM [Xiao et al., 2021]
529
+ Sequence-based
530
+ BERT
531
+ Masked Language Modeling
532
+ 2021
533
+ PLUS [Min et al., 2021]
534
+ Sequence-based
535
+ Bidirectional RNN
536
+ Masked Language Modeling,
537
+ Same-Family Prediction
538
+ 2021
539
+ Adversarial MLM [McDermott et al., 2021]
540
+ Sequence-based
541
+ Transformer
542
+ Masked Language Modeling,
543
+ Adversarial Training
544
+ 2021
545
+ ProteinBERT [Brandes et al., 2022]
546
+ Sequence-based
547
+ BERT
548
+ Masked Language Modeling
549
+ 2022
550
+ CARP [Yang et al., 2022a]
551
+ Sequence-based
552
+ CNN
553
+ Masked Language Modeling
554
+ 2022
555
+ 3DCNN [Derevyanko et al., 2018]
556
+ Structure-based
557
+ 3DCNN
558
+ -
559
+ 2018
560
+ IEConv [Hermosilla et al., 2020]
561
+ Structure-based
562
+ IEConv
563
+ -
564
+ 2020
565
+ GVP-GNN [Jing et al., 2020]
566
+ Structure-based
567
+ GVP
568
+ -
569
+ 2020
570
+ GraphMS [Cheng et al., 2021]
571
+ Structure-based
572
+ GCN
573
+ Multiview Contrast
574
+ 2021
575
+ DL-MSFM [Gelman et al., 2021]
576
+ Structure-based
577
+ GCN
578
+ -
579
+ 2021
580
+ PG-GNN [Xia and Ku, 2021]
581
+ Structure-based
582
+ PG-GNN
583
+ -
584
+ 2021
585
+ CRL [Hermosilla and Ropinski, 2022]
586
+ Structure-based
587
+ IEConv
588
+ Multiview Contrast
589
+ 2022
590
+ DW-GNN [Li et al., 2022]
591
+ Structure-based
592
+ DWP
593
+ -
594
+ 2022
595
+ GBPNet [Aykent and Xia, 2022]
596
+ Structure-based
597
+ GBP
598
+ -
599
+ 2022
600
+ GearNet [Zhang et al., 2022]
601
+ Structure-based
602
+ GearNet
603
+ Multiview Contrast,
604
+ Distance and Dihedral Prediction,
605
+ Residue Type Prediction
606
+ 2022
607
+ ATOMRefine [Wu and Cheng, 2022]
608
+ Structure-based
609
+ SE(3) Transformer
610
+ -
611
+ 2022
612
+ STEPS [Chen et al., 2022]
613
+ Structure-based
614
+ GIN
615
+ Distance and Dihedral Prediction
616
+ 2022
617
+ GraphCPI [Quan et al., 2019]
618
+ Co-Modeling
619
+ CNN, GNN
620
+ -
621
+ 2019
622
+ MT-LSTM [Bepler and Berger, 2019]
623
+ Co-Modeling
624
+ Bidirectional LSTM
625
+ Contact prediction,
626
+ Pairwise Similarity Prediction
627
+ 2019
628
+ LM-GVP [Wang et al., 2021]
629
+ Co-Modeling
630
+ Transformer, GVP
631
+ -
632
+ 2021
633
+ AlphaFold2 [Jumper et al., 2021]
634
+ Co-Modeling
635
+ Evoformer
636
+ Masked Language Modeling,
637
+ Full-atomic Structure Prediction
638
+ 2021
639
+ DeepFRI [Gligorijevi´c et al., 2021]
640
+ Co-Modeling
641
+ LSTM, GCN
642
+ -
643
+ 2021
644
+ HJRSS [Mansoor et al., 2021]
645
+ Co-Modeling
646
+ SE(3) Transformer
647
+ Masked Language Modeling,
648
+ Graph Completion
649
+ 2021
650
+ GraSR [Xia et al., 2022]
651
+ Co-Modeling
652
+ LSTM, GCN
653
+ Momentum Contrast
654
+ 2022
655
+ CPAC [You and Shen, 2022]
656
+ Co-Modeling
657
+ Hierarchical RNN, GAT
658
+ Masked Language Modeling,
659
+ Graph Completion
660
+ 2022
661
+ MIF-ST [Yang et al., 2022b]
662
+ Co-Modeling
663
+ CNN, GNN
664
+ Masked Inverse Folding
665
+ 2022
666
+ OmegaFold [Wu et al., 2022]
667
+ Co-Modeling
668
+ Geoformer
669
+ Masked Language Modeling,
670
+ Full-atomic Structure Prediction
671
+ 2022
672
+ that enables the learned sequence representations to contain
673
+ sufficient structural information. It was found by [Hu et al.,
674
+ 2022] that the representations from AlphFold2’s Evoformer
675
+ could work well on various protein-related downstream tasks,
676
+ including fold classification, stability prediction, etc. More-
677
+ over, [Yang et al., 2022b] proposes a novel pre-training pre-
678
+ text task, namely Masked Inverse Folding (MIF), which trains
679
+ a model to reconstruct the original amino acids conditioned
680
+ on the corrupted sequence and the backbone structure.
681
+ 5
682
+ Downstream Tasks (Applications)
683
+ In the above, we have presented a variety of commonly used
684
+ model architectures and pretext tasks for protein representa-
685
+
686
+ tion learning, based on which we summarized the surveyed
687
+ works in Table. 1, listing their categories, model architec-
688
+ tures, pretext tasks, and publication years. In this section, we
689
+ can divide existing downstream tasks for protein representa-
690
+ tion learning into the following four main categories: protein
691
+ property prediction, protein (complex) structure prediction,
692
+ protein design, and structure-based drug design.
693
+ It is worth noting that some downstream tasks have labels
694
+ (i.e., model outputs) that do not change with rigid body trans-
695
+ formations of the inputs (if they can, e.g., protein structures).
696
+ For example, various protein property prediction tasks take
697
+ a transformable protein structure as input and output a con-
698
+ stant prediction, usually modeled as a simple multi-label clas-
699
+ sification problem or multiple binary classification problem.
700
+ However, the labels of some downstream tasks will change
701
+ equivariantly with the inputs, and these tasks are getting more
702
+ and more attention. Typically, the learning objectives of these
703
+ tasks are structure-related, and they usually have higher re-
704
+ quirements on the model architecture, requiring the model to
705
+ be SE(3)-equivariant. We believe that from the perspective
706
+ of protein representation learning, the approaches to different
707
+ downstream tasks can also learn from each other.
708
+ 5.1
709
+ Protein Property Prediction
710
+ The protein property prediction aims to regress or classify
711
+ some important properties from protein sequences or struc-
712
+ tures that are closely related to biological functions, such
713
+ as the types of secondary structure, the strength of connec-
714
+ tions between amino acids, types of protein folding, fluo-
715
+ rescence intensity, protein stability, etc. [Rao et al., 2019].
716
+ Besides, several protein-specific prediction tasks can also be
717
+ grouped into this category, including quality evaluation of
718
+ protein folding [Baldassarre et al., 2021], predicting the ef-
719
+ fect of mutations on protein function [Meier et al., 2021], and
720
+ predicting protein-protein interactions [Wang et al., 2019].
721
+ 5.2
722
+ Protein (Complex) Structure Prediction
723
+ The primary goal of protein structure prediction is to pre-
724
+ dict the structural coordinates from a given set of amino
725
+ acid sequences. Some approaches aim to predict only back-
726
+ bone coordinates [Baek et al., 2021; Si et al., 2020], while
727
+ others focus on the more challenging full-atomic coordi-
728
+ nate predictions [Jumper et al., 2021; Wu et al., 2022;
729
+ Rao et al., 2021]. On the other hand, protein structure refine-
730
+ ment [Hiranuma et al., 2021; Wu and Cheng, 2022] proposes
731
+ to update a coarse protein structure to generate a more fine-
732
+ grained structure in an iterative manner. Besides, the task of
733
+ protein structure inpainting aims to reconstruct the complete
734
+ protein structure from a partially given sub-structure [McPart-
735
+ lon and Xu, 2022] or distance map [Lee and Kim, 2022].
736
+ 5.3
737
+ Protein Design
738
+ Deep learning-based protein design has made tremendous
739
+ progress in recent years, and the major works can be di-
740
+ vided into three categories. The first one is to pre-train the
741
+ model with a large number of sequences from the same pro-
742
+ tein family, and then use it to generate new homologous se-
743
+ quences [Smith and Smith, 1990]. The structure-based meth-
744
+ ods aim to directly generate the protein sequences under the
745
+ condition of a given protein structure [Ingraham et al., 2019].
746
+ The last and most challenging one is the de novo protein de-
747
+ sign [Huang et al., 2016; Korendovych and DeGrado, 2020;
748
+ Koepnick et al., 2019], which aims to generate both protein
749
+ sequences and structures conditioned on taxonomic and key-
750
+ word tags such as molecular function and cellular component.
751
+ 5.4
752
+ Structure-Based Drug Design
753
+ Structure-Based Drug Design (SBDD) is a promising direc-
754
+ tion for fast and cost-efficient compound discovery. Specif-
755
+ ically, SBDD designs inhibitors or activators (usually small
756
+ molecules, i.e., drugs) directly against protein targets of inter-
757
+ est, which means a high success rate and efficiency [Kuntz,
758
+ 1992; Drews, 2000]. In the past two years, a line of auto-
759
+ regressive methods have been proposed for SBDD [Liu et al.,
760
+ 2022a; Peng et al., 2022; Masuda et al., 2020], which gener-
761
+ ate molecule atoms one by one conditioned on given structure
762
+ context of protein targets. Recently, there are some works
763
+ based on Denoising Diffusion Probabilistic Model (DDPM)
764
+ [Lin et al., 2022; Schneuing et al., 2022]. Targeting on spe-
765
+ cific protein pockets, the diffusion-based methods generate
766
+ molecule atoms as a whole from random gaussian noise.
767
+ The above methods are all dependent on a proper repre-
768
+ sentation module of protein, especially the protein structure.
769
+ The early attempt of deep generative models in this field [Luo
770
+ et al., 2021] uses 3D CNN as the protein structure context
771
+ encoder to get meaningful and roto-translation invariant fea-
772
+ tures. With the development of protein structure representa-
773
+ tion methods, particularly the geometric-aware models, sub-
774
+ sequent methods widely use geometric-(equi/in)variant net-
775
+ works, such as EGNN [Gong and Cheng, 2019], GVP [Jing
776
+ et al., 2020], and IPA [Jumper et al., 2021], as the backbones.
777
+ It is worth noting that protein representation models are not
778
+ only common in various protein structure context encoders,
779
+ but many generative decoders can also adopt its architectural
780
+ design. From this example, we can see that protein represen-
781
+ tation is a very fundamental problem and that many down-
782
+ stream tasks involving proteins can benefit from advances of
783
+ protein representation research in various aspects, including
784
+ better embeddings and more excellent model architectures.
785
+ 6
786
+ Deep Insights and Future Outlooks
787
+ 6.1
788
+ Deeper Insights
789
+ On the basis of a detailed review of the model architectures,
790
+ pretext tasks, and downstream tasks, we would like to provide
791
+ some deeper insights into protein representation learning.
792
+ Insights 1: PRL is the core of deep protein modeling
793
+ With the development of deep learning, deep protein mod-
794
+ eling is becoming a popular research topic, and one of its
795
+ core is how to learn “meaningful” representations for pro-
796
+ teins. This involves three key issues: (1) Feature Extraction:
797
+ model architectures; (2) Pre-training: pretext tasks; and (3)
798
+ Application: downstream tasks. An in-depth investigation of
799
+ the above three key issues is of great importance for the de-
800
+ velopment of more deep protein modeling methods.
801
+
802
+ Insights 2: Task-level convertibility
803
+ Throughout this survey, one of the main points we have em-
804
+ phasized is the convertibility between downstream tasks and
805
+ pretext tasks. We believe we are the first to explain the role
806
+ of pretext tasks from this perspective, which seems to have
807
+ been rarely involved in previous work. For example, we di-
808
+ rectly categorize some well-known downstream tasks, such as
809
+ full-atomic structure prediction, as a specific kinds of pretext
810
+ tasks. The motivation behind such an understanding lies in
811
+ the fact that the definition of a task is itself a relative concept
812
+ and that different tasks can help the model extract different
813
+ aspects of information, which may be complementary to each
814
+ other. For example, full-atomic structure prediction helps the
815
+ model capture rich structural information, which is also ben-
816
+ eficial for various protein property prediction tasks, such as
817
+ folding prediction, since it is known that protein structure of-
818
+ ten determines protein function. This suggests that whether
819
+ a specific task is a downstream task or a pretext task usually
820
+ depends on what we are concerned about, and the role of a
821
+ task may keep changing from application to application.
822
+ Insights 3: Data-specific criterion for design selections
823
+ It is tricky to discuss the advantages and disadvantages of dif-
824
+ ferent methods or designs because the effectiveness of differ-
825
+ ent methods depends heavily on the size, format, and com-
826
+ plexity of the data. For example, for simple small-scale data,
827
+ Transformer is not necessarily more effective than traditional
828
+ LSTM for sequence modeling, and the situation may be com-
829
+ pletely opposite for large-scale complex data.
830
+ Therefore,
831
+ there is no “optimal” architecture or pretext task that works
832
+ for all data types and downstream tasks, and the criterion for
833
+ the selection of architecture and pretext task is data-specific.
834
+ 6.2
835
+ Future Outlooks
836
+ Despite the great progress of existing methods, challenges
837
+ still exist due to the complexity of proteins. In this section,
838
+ we suggest some promising directions for future work.
839
+ Direction 1: Broader application scenarios
840
+ The biological research topics on proteins are diverse, but
841
+ most of the existing work has delved into only a small subset
842
+ of them, due to the fact that these topics have been well for-
843
+ malized by some representative works, such as AlphaFlod2
844
+ [Jumper et al., 2021] for protein structure prediction and
845
+ TAPE [Rao et al., 2019] for protein property prediction. As
846
+ a result, it is more worthwhile to explore the role of protein
847
+ representation learning in a wider range of biological applica-
848
+ tion scenarios than to design some overly complex modules
849
+ for subtle performance gains in a well-formalized application.
850
+ Direction 2: Unified evaluation protocols
851
+ Research in protein representation learning is now in an era of
852
+ barbarism. While a great deal of new works are emerging ev-
853
+ ery day, most of them are on unfair comparisons, such as with
854
+ different datasets, architectures, metrics, etc. For example,
855
+ some MSA-based works on structure prediction have been
856
+ blatantly compared with those single-sequence-based works
857
+ and claimed to be better. To promote the health of the field,
858
+ there is an urgent need to establish unified evaluation proto-
859
+ cols in various downstream tasks to provide fair comparisons.
860
+ Direction 3: Protein-specific designs
861
+ Previous PRL methods directly take mature architectures and
862
+ pretext tasks from the natural language processing field to
863
+ train proteins. For example, modeling protein sequences us-
864
+ ing LSTM may be a major innovation, but replacing LSTM
865
+ with Bi-LSTM for stuble performance improvements makes
866
+ little sense. Now, it is time to step out of this comfort zone
867
+ of scientific research, and we should no longer be satisfied
868
+ with simply extending techniques from other domains to the
869
+ protein domain. PRL is not only a machine learning problem
870
+ but also a biological problem, so we should consider design-
871
+ ing more protein-specific architectures and pretext tasks by
872
+ incorporating protein-related domain knowledge. In particu-
873
+ lar, most of the existing work on PRL is based on unimodal
874
+ protein sequences or structures, and it requires more work ex-
875
+ ploring sequence-structure co-modeling to fully explore the
876
+ correspondence between 1D sequences and 3D structures.
877
+ Direction 4: Margin from pre-training to fine-tuning
878
+ Currently, tremendous efforts are focusing on protein pre-
879
+ training strategies.
880
+ However, how to fine-tune these pre-
881
+ trained models to specific downstream tasks is still under-
882
+ explored. Though numerous strategies have been proposed
883
+ to address this problem in the fields of computer vision and
884
+ natural language processing [Zhuang et al., 2020], they are
885
+ difficult to be directly applied to proteins. One obstacle to
886
+ knowledge transfer is the huge variability between different
887
+ protein datasets, both in terms of sequence length and struc-
888
+ tural complexity. The second one is poor generalization of
889
+ pre-trained models especially for various tasks where collect-
890
+ ing labeled data is laborious. Therefore, it is an important
891
+ issue to design protein-specific techniques to minimize the
892
+ margin between pre-training and downstream tasks.
893
+ Direction 5: Lack of explainability
894
+ While existing protein representation learning methods have
895
+ achieved promising results on a variety of downstream tasks,
896
+ we still know little about what the model has learned from
897
+ protein data. Which of the feature patterns, sequence frag-
898
+ ments, or sequence-structure relationships has been learned?
899
+ These are important issues for understanding and interpret-
900
+ ing model behavior, especially for those privacy-secure tasks
901
+ such as drug design, but are missing in current PRL works.
902
+ Overall, the interpretability of PRL methods remains to be
903
+ explored further in many respects, which helps us understand
904
+ how the model works and provides a guide for better usage.
905
+ 7
906
+ Conclusions
907
+ A comprehensive survey of the literature on protein repre-
908
+ sentation learning is conducted in this paper. We develop a
909
+ general unified framework for PRL methods. Moreover, we
910
+ systematically divide existing PRL methods into three main
911
+ categories: sequence-based, structure-based, and sequence-
912
+ structure co-modeling from three different perspectives, in-
913
+ cluding model architectures, pretext tasks, and downstream
914
+ applications. Finally, we point out the technical limitations
915
+ of the current research and provide promising directions for
916
+ future work on PRL. We hope this survey to pave the way for
917
+ follow-up AI researchers with no bioinformatics background,
918
+ setting the stage for the development of more future works.
919
+
920
+ References
921
+ [Alley et al., 2019] Ethan C Alley, Grigory Khimulya, Suro-
922
+ jit Biswas,
923
+ Mohammed AlQuraishi,
924
+ and George M
925
+ Church.
926
+ Unified rational protein engineering with
927
+ sequence-based deep representation learning.
928
+ Nature
929
+ methods, 16(12):1315–1322, 2019.
930
+ [Amidi et al., 2018] Afshine Amidi, Shervine Amidi, Dim-
931
+ itrios Vlachakis, Vasileios Megalooikonomou, Nikos Para-
932
+ gios, and Evangelia I Zacharaki. Enzynet: enzyme classi-
933
+ fication using 3d convolutional neural networks on spatial
934
+ representation. PeerJ, 6:e4750, 2018.
935
+ [Armenteros et al., 2020] Jose Juan Almagro Armenteros,
936
+ Alexander Rosenberg Johansen, Ole Winther, and Henrik
937
+ Nielsen. Language modelling for biological sequences–
938
+ curated datasets and baselines. BioRxiv, 2020.
939
+ [Asgari et al., 2019] Ehsaneddin Asgari, Nina Poerner, Al-
940
+ ice C McHardy, and Mohammad RK Mofrad.
941
+ Deep-
942
+ prime2sec: deep learning for protein secondary structure
943
+ prediction from the primary sequences.
944
+ BioRxiv, page
945
+ 705426, 2019.
946
+ [Aykent and Xia, 2022] Sarp Aykent and Tian Xia.
947
+ Gbp-
948
+ net: Universal geometric representation learning on pro-
949
+ tein structures. In Proceedings of the 28th ACM SIGKDD
950
+ Conference on Knowledge Discovery and Data Mining,
951
+ pages 4–14, 2022.
952
+ [Baek et al., 2021] Minkyung Baek, Frank DiMaio, Ivan
953
+ Anishchenko,
954
+ Justas Dauparas,
955
+ Sergey Ovchinnikov,
956
+ Gyu Rie Lee, Jue Wang, Qian Cong, Lisa N Kinch,
957
+ R Dustin Schaeffer, et al. Accurate prediction of protein
958
+ structures and interactions using a three-track neural net-
959
+ work. Science, 373(6557):871–876, 2021.
960
+ [Baldassarre et al., 2021] Federico
961
+ Baldassarre,
962
+ David
963
+ Men´endez Hurtado, Arne Elofsson, and Hossein Az-
964
+ izpour.
965
+ Graphqa:
966
+ protein model quality assessment
967
+ using graph convolutional networks.
968
+ Bioinformatics,
969
+ 37(3):360–366, 2021.
970
+ [Bepler and Berger, 2019] Tristan
971
+ Bepler
972
+ and
973
+ Bonnie
974
+ Berger.
975
+ Learning
976
+ protein
977
+ sequence
978
+ embeddings
979
+ using information from structure.
980
+ arXiv preprint
981
+ arXiv:1902.08661, 2019.
982
+ [Brandes et al., 2022] Nadav Brandes, Dan Ofer, Yam Peleg,
983
+ Nadav Rappoport, and Michal Linial. Proteinbert: A uni-
984
+ versal deep-learning model of protein sequence and func-
985
+ tion. Bioinformatics, 38(8):2102–2110, 2022.
986
+ [Chen et al., 2022] Can Chen, Jingbo Zhou, Fan Wang, Xue
987
+ Liu, and Dejing Dou.
988
+ Structure-aware protein self-
989
+ supervised learning.
990
+ arXiv preprint arXiv:2204.04213,
991
+ 2022.
992
+ [Cheng et al., 2021] Shicheng Cheng, Liang Zhang, Bo Jin,
993
+ Qiang Zhang, Xinjiang Lu, Mao You, and Xueqing Tian.
994
+ Graphms: Drug target prediction using graph represen-
995
+ tation learning with substructures.
996
+ Applied Sciences,
997
+ 11(7):3239, 2021.
998
+ [Derevyanko et al., 2018] Georgy Derevyanko, Sergei Gru-
999
+ dinin, Yoshua Bengio, and Guillaume Lamoureux. Deep
1000
+ convolutional networks for quality assessment of protein
1001
+ folds. Bioinformatics, 34(23):4046–4053, 2018.
1002
+ [Devlin et al., 2018] Jacob Devlin, Ming-Wei Chang, Ken-
1003
+ ton Lee, and Kristina Toutanova.
1004
+ Bert: Pre-training of
1005
+ deep bidirectional transformers for language understand-
1006
+ ing. arXiv preprint arXiv:1810.04805, 2018.
1007
+ [Ding et al., 2019] Xinqiang Ding,
1008
+ Zhengting Zou,
1009
+ and
1010
+ Charles L Brooks III. Deciphering protein evolution and
1011
+ fitness landscapes with latent space models. Nature com-
1012
+ munications, 10(1):1–13, 2019.
1013
+ [Drews, 2000] J¨urgen Drews. Drug discovery: A historical
1014
+ perspective. Science, 287(5460):1960–1964, 2000.
1015
+ [Elnaggar et al., 2020] Ahmed
1016
+ Elnaggar,
1017
+ Michael
1018
+ Heinzinger, Christian Dallago, Ghalia Rihawi, Yu Wang,
1019
+ Llion Jones, Tom Gibbs, Tamas Feher, Christoph Angerer,
1020
+ Martin Steinegger, et al.
1021
+ Prottrans: towards cracking
1022
+ the language of life’s code through self-supervised deep
1023
+ learning and high performance computing. arXiv preprint
1024
+ arXiv:2007.06225, 2020.
1025
+ [Gainza et al., 2020] Pablo Gainza, Freyr Sverrisson, Fred-
1026
+ erico Monti, Emanuele Rodola, D Boscaini, MM Bron-
1027
+ stein, and BE Correia.
1028
+ Deciphering interaction finger-
1029
+ prints from protein molecular surfaces using geometric
1030
+ deep learning. Nature Methods, 17(2):184–192, 2020.
1031
+ [Gelman et al., 2021] Sam Gelman, Sarah A Fahlberg, Pete
1032
+ Heinzelman, Philip A Romero, and Anthony Gitter. Neu-
1033
+ ral networks to learn protein sequence–function relation-
1034
+ ships from deep mutational scanning data. Proceedings of
1035
+ the National Academy of Sciences, 118(48):e2104878118,
1036
+ 2021.
1037
+ [Gligorijevi´c et al., 2021] Vladimir Gligorijevi´c, P Douglas
1038
+ Renfrew, Tomasz Kosciolek, Julia Koehler Leman, Daniel
1039
+ Berenberg, Tommi Vatanen, Chris Chandler, Bryn C Tay-
1040
+ lor, Ian M Fisk, Hera Vlamakis, et al.
1041
+ Structure-based
1042
+ protein function prediction using graph convolutional net-
1043
+ works. Nature communications, 12(1):1–14, 2021.
1044
+ [Gong and Cheng, 2019] Liyu Gong and Qiang Cheng. Ex-
1045
+ ploiting edge features for graph neural networks. In Pro-
1046
+ ceedings of the IEEE/CVF conference on computer vision
1047
+ and pattern recognition, pages 9211–9219, 2019.
1048
+ [Hamilton et al., 2017] Will Hamilton, Zhitao Ying, and Jure
1049
+ Leskovec.
1050
+ Inductive representation learning on large
1051
+ graphs. In Neural information processing systems, pages
1052
+ 1024–1034, 2017.
1053
+ [He et al., 2016] Kaiming He, Xiangyu Zhang, Shaoqing
1054
+ Ren, and Jian Sun. Deep residual learning for image recog-
1055
+ nition. In Proceedings of the IEEE conference on computer
1056
+ vision and pattern recognition, pages 770–778, 2016.
1057
+ [He et al., 2021] Liang He, Shizhuo Zhang, Lijun Wu,
1058
+ Huanhuan Xia, Fusong Ju, He Zhang, Siyuan Liu, Yingce
1059
+ Xia, Jianwei Zhu, Pan Deng, et al.
1060
+ Pre-training co-
1061
+ evolutionary protein representation via a pairwise masked
1062
+ language model. arXiv preprint arXiv:2110.15527, 2021.
1063
+ [Hermosilla and Ropinski, 2022] Pedro
1064
+ Hermosilla
1065
+ and
1066
+ Timo Ropinski.
1067
+ Contrastive representation learning for
1068
+
1069
+ 3d protein structures. arXiv preprint arXiv:2205.15675,
1070
+ 2022.
1071
+ [Hermosilla et al., 2020] Pedro Hermosilla, Marco Sch¨afer,
1072
+ Matˇej Lang, Gloria Fackelmann, Pere Pau V´azquez,
1073
+ Barbora Kozl´ıkov´a, Michael Krone, Tobias Ritschel, and
1074
+ Timo Ropinski. Intrinsic-extrinsic convolution and pool-
1075
+ ing for learning on 3d protein structures. arXiv preprint
1076
+ arXiv:2007.06252, 2020.
1077
+ [Hiranuma et al., 2021] Naozumi
1078
+ Hiranuma,
1079
+ Hahnbeom
1080
+ Park, Minkyung Baek, Ivan Anishchenko, Justas Dau-
1081
+ paras, and David Baker.
1082
+ Improved protein structure
1083
+ refinement guided by deep learning based accuracy
1084
+ estimation. Nature communications, 12(1):1–11, 2021.
1085
+ [Hochreiter and Schmidhuber, 1997] Sepp Hochreiter and
1086
+ J¨urgen Schmidhuber. Long short-term memory. Neural
1087
+ computation, 9(8):1735–1780, 1997.
1088
+ [Hospital et al., 2015] Adam Hospital, Josep Ramon Go˜ni,
1089
+ Modesto Orozco, and Josep L Gelp´ı. Molecular dynamics
1090
+ simulations: advances and applications. Advances and ap-
1091
+ plications in bioinformatics and chemistry: AABC, 8:37,
1092
+ 2015.
1093
+ [Hu et al., 2021] Lun Hu, Xiaojuan Wang, Yu-An Huang,
1094
+ Pengwei Hu, and Zhu-Hong You. A survey on computa-
1095
+ tional models for predicting protein–protein interactions.
1096
+ Briefings in Bioinformatics, 22(5):bbab036, 2021.
1097
+ [Hu et al., 2022] Mingyang Hu, Fajie Yuan, Kevin K Yang,
1098
+ Fusong Ju, Jin Su, Hui Wang, Fei Yang, and Qiuyang
1099
+ Ding. Exploring evolution-based &-free protein language
1100
+ models as protein function predictors.
1101
+ arXiv preprint
1102
+ arXiv:2206.06583, 2022.
1103
+ [Huang et al., 2016] Po-Ssu Huang, Scott E Boyken, and
1104
+ David Baker. The coming of age of de novo protein de-
1105
+ sign. Nature, 537(7620):320–327, 2016.
1106
+ [Ingraham et al., 2019] John Ingraham, Vikas Garg, Regina
1107
+ Barzilay, and Tommi Jaakkola.
1108
+ Generative models for
1109
+ graph-based protein design. Advances in neural informa-
1110
+ tion processing systems, 32, 2019.
1111
+ [Iuchi et al., 2021] Hitoshi Iuchi, Taro Matsutani, Keisuke
1112
+ Yamada, Natsuki Iwano, Shunsuke Sumi, Shion Hosoda,
1113
+ Shitao Zhao, Tsukasa Fukunaga, and Michiaki Hamada.
1114
+ Representation learning applications in biological se-
1115
+ quence analysis. Computational and Structural Biotech-
1116
+ nology Journal, 19:3198–3208, 2021.
1117
+ [Jing et al., 2020] Bowen Jing, Stephan Eismann, Patricia
1118
+ Suriana, Raphael JL Townshend, and Ron Dror. Learning
1119
+ from protein structure with geometric vector perceptrons.
1120
+ arXiv preprint arXiv:2009.01411, 2020.
1121
+ [Jumper et al., 2021] John Jumper, Richard Evans, Alexan-
1122
+ der Pritzel, Tim Green, Michael Figurnov, Olaf Ron-
1123
+ neberger, Kathryn Tunyasuvunakool, Russ Bates, Au-
1124
+ gustin ˇZ´ıdek, Anna Potapenko, et al.
1125
+ Highly accu-
1126
+ rate protein structure prediction with alphafold. Nature,
1127
+ 596(7873):583–589, 2021.
1128
+ [Kandathil et al., 2020] Shaun M Kandathil, Joe G Greener,
1129
+ Andy M Lau, and David T Jones. Deep learning-based
1130
+ prediction of protein structure using learned representa-
1131
+ tions of multiple sequence alignments.
1132
+ Biorxiv, pages
1133
+ 2020–11, 2020.
1134
+ [Karplus and Petsko, 1990] Martin Karplus and Gregory A
1135
+ Petsko. Molecular dynamics simulations in biology. Na-
1136
+ ture, 347(6294):631–639, 1990.
1137
+ [Kipf and Welling, 2016] Thomas N Kipf and Max Welling.
1138
+ Semi-supervised classification with graph convolutional
1139
+ networks. arXiv preprint arXiv:1609.02907, 2016.
1140
+ [Koepnick et al., 2019] Brian Koepnick, Jeff Flatten, Tamir
1141
+ Husain, Alex Ford, Daniel-Adriano Silva, Matthew J
1142
+ Bick, Aaron Bauer, Gaohua Liu, Yojiro Ishida, Alexander
1143
+ Boykov, et al. De novo protein design by citizen scientists.
1144
+ Nature, 570(7761):390–394, 2019.
1145
+ [Korendovych and DeGrado, 2020] Ivan
1146
+ V
1147
+ Korendovych
1148
+ and William F DeGrado.
1149
+ De novo protein design, a
1150
+ retrospective. Quarterly reviews of biophysics, 53, 2020.
1151
+ [Kuntz, 1992] Irwin D. Kuntz.
1152
+ Structure-based strategies
1153
+ for drug design and discovery. Science, 257(5073):1078–
1154
+ 1082, 1992.
1155
+ [Lapedes et al., 1999] Alan S Lapedes, Bertrand G Giraud,
1156
+ LonChang Liu, and Gary D Stormo. Correlated mutations
1157
+ in models of protein sequences: phylogenetic and struc-
1158
+ tural effects. Lecture Notes-Monograph Series, pages 236–
1159
+ 256, 1999.
1160
+ [LeCun et al., 1995] Yann LeCun, Yoshua Bengio, et al.
1161
+ Convolutional networks for images, speech, and time se-
1162
+ ries. The handbook of brain theory and neural networks,
1163
+ 3361(10):1995, 1995.
1164
+ [Lee and Kim, 2022] Jin Sub Lee and Philip M Kim. Pro-
1165
+ teinsgm: Score-based generative modeling for de novo
1166
+ protein design. bioRxiv, 2022.
1167
+ [Li et al., 2022] Jiahan Li, Shitong Luo, Congyue Deng,
1168
+ Chaoran Cheng, Jiaqi Guan, Leonidas Guibas, Jian Peng,
1169
+ and Jianzhu Ma.
1170
+ Directed weight neural networks for
1171
+ protein structure representation learning. arXiv preprint
1172
+ arXiv:2201.13299, 2022.
1173
+ [Lin et al., 2022] Haitao Lin, Yufei Huang, Meng Liu, Xu-
1174
+ anjing Li, Shuiwang Ji, and Stan Z Li. Diffbp: Generative
1175
+ diffusion of 3d molecules for target protein binding. arXiv
1176
+ preprint arXiv:2211.11214, 2022.
1177
+ [Liu et al., 2022a] Meng Liu, Youzhi Luo, Kanji Uchino,
1178
+ Koji Maruhashi, and Shuiwang Ji.
1179
+ Generating 3d
1180
+ molecules for target protein binding. In International Con-
1181
+ ference on Machine Learning, 2022.
1182
+ [Liu et al., 2022b] Yixin Liu, Ming Jin, Shirui Pan, Chuan
1183
+ Zhou, Yu Zheng, Feng Xia, and Philip Yu. Graph self-
1184
+ supervised learning: A survey.
1185
+ IEEE Transactions on
1186
+ Knowledge and Data Engineering, 2022.
1187
+ [Lu et al., 2020] Amy X Lu, Haoran Zhang, Marzyeh Ghas-
1188
+ semi, and Alan Moses. Self-supervised contrastive learn-
1189
+ ing of protein representations by mutual information max-
1190
+ imization. BioRxiv, 2020.
1191
+
1192
+ [Luo et al., 2021] Shitong Luo, Jiaqi Guan, Jianzhu Ma, and
1193
+ Jian Peng. A 3D generative model for structure-based drug
1194
+ design. In Thirty-Fifth Conference on Neural Information
1195
+ Processing Systems, 2021.
1196
+ [Madani et al., 2020] Ali Madani, Bryan McCann, Nikhil
1197
+ Naik, Nitish Shirish Keskar, Namrata Anand, Raphael R
1198
+ Eguchi, Po-Ssu Huang, and Richard Socher. Progen: Lan-
1199
+ guage modeling for protein generation.
1200
+ arXiv preprint
1201
+ arXiv:2004.03497, 2020.
1202
+ [Mansoor et al., 2021] Sanaa Mansoor,
1203
+ Minkyung Baek,
1204
+ Umesh Madan, and Eric Horvitz. Toward more general
1205
+ embeddings for protein design: Harnessing joint represen-
1206
+ tations of sequence and structure. bioRxiv, 2021.
1207
+ [Masuda et al., 2020] Tomohide Masuda, Matthew Ragoza,
1208
+ and David Ryan Koes. Generating 3d molecular structures
1209
+ conditional on a receptor binding site with deep generative
1210
+ models. arXiv preprint arXiv:2010.14442, 2020.
1211
+ [McDermott et al., 2021] Matthew
1212
+ McDermott,
1213
+ Brendan
1214
+ Yap, Harry Hsu, Di Jin, and Peter Szolovits. Adversarial
1215
+ contrastive pre-training for protein sequences.
1216
+ arXiv
1217
+ preprint arXiv:2102.00466, 2021.
1218
+ [McPartlon and Xu, 2022] Matthew McPartlon and Jinbo
1219
+ Xu. Attnpacker: An end-to-end deep learning method for
1220
+ rotamer-free protein side-chain packing. bioRxiv, 2022.
1221
+ [Meier et al., 2021] Joshua Meier,
1222
+ Roshan Rao,
1223
+ Robert
1224
+ Verkuil, Jason Liu, Tom Sercu, and Alex Rives. Language
1225
+ models enable zero-shot prediction of the effects of muta-
1226
+ tions on protein function. Advances in Neural Information
1227
+ Processing Systems, 34:29287–29303, 2021.
1228
+ [Min et al., 2021] Seonwoo Min, Seunghyun Park, Siwon
1229
+ Kim, Hyun-Soo Choi, Byunghan Lee, and Sungroh Yoon.
1230
+ Pre-training of deep bidirectional protein sequence rep-
1231
+ resentations with structural information.
1232
+ IEEE Access,
1233
+ 9:123912–123926, 2021.
1234
+ [Nambiar et al., 2020] Ananthan Nambiar, Maeve Heflin,
1235
+ Simon Liu, Sergei Maslov, Mark Hopkins, and Anna Ritz.
1236
+ Transforming the language of life: transformer neural net-
1237
+ works for protein prediction tasks. In Proceedings of the
1238
+ 11th ACM International Conference on Bioinformatics,
1239
+ Computational Biology and Health Informatics, pages 1–
1240
+ 8, 2020.
1241
+ [Nourani et al., 2020] Esmaeil Nourani, Ehsaneddin Asgari,
1242
+ Alice C McHardy, and Mohammad RK Mofrad. Triplet-
1243
+ prot: Deep representation learning of proteins based on
1244
+ siamese networks. Biorxiv, 2020.
1245
+ [Peng et al., 2022] Xingang Peng, Shitong Luo, Jiaqi Guan,
1246
+ Qi Xie, Jian Peng, and Jianzhu Ma. Pocket2mol: Efficient
1247
+ molecular sampling based on 3d protein pockets. In Inter-
1248
+ national Conference on Machine Learning, 2022.
1249
+ [Preparata and Shamos, 2012] Franco
1250
+ P
1251
+ Preparata
1252
+ and
1253
+ Michael I Shamos.
1254
+ Computational geometry:
1255
+ an
1256
+ introduction. Springer Science & Business Media, 2012.
1257
+ [Quan et al., 2019] Zhe Quan, Yan Guo, Xuan Lin, Zhi-Jie
1258
+ Wang, and Xiangxiang Zeng.
1259
+ Graphcpi: Graph neural
1260
+ representation learning for compound-protein interaction.
1261
+ In 2019 IEEE International Conference on Bioinformatics
1262
+ and Biomedicine (BIBM), pages 717–722. IEEE, 2019.
1263
+ [Rao et al., 2019] Roshan Rao, Nicholas Bhattacharya, Neil
1264
+ Thomas, Yan Duan, Peter Chen, John Canny, Pieter
1265
+ Abbeel, and Yun Song. Evaluating protein transfer learn-
1266
+ ing with tape. Advances in neural information processing
1267
+ systems, 32, 2019.
1268
+ [Rao et al., 2021] Roshan M Rao, Jason Liu, Robert Verkuil,
1269
+ Joshua Meier, John Canny, Pieter Abbeel, Tom Sercu, and
1270
+ Alexander Rives. Msa transformer. In International Con-
1271
+ ference on Machine Learning, pages 8844–8856. PMLR,
1272
+ 2021.
1273
+ [Rives et al., 2021] Alexander Rives, Joshua Meier, Tom
1274
+ Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi
1275
+ Guo, Myle Ott, C Lawrence Zitnick, Jerry Ma, et al.
1276
+ Biological structure and function emerge from scal-
1277
+ ing unsupervised learning to 250 million protein se-
1278
+ quences.
1279
+ Proceedings of the National Academy of Sci-
1280
+ ences, 118(15):e2016239118, 2021.
1281
+ [Rohl et al., 2004] Carol A Rohl,
1282
+ Charlie EM Strauss,
1283
+ Kira MS Misura, and David Baker. Protein structure pre-
1284
+ diction using rosetta. In Methods in enzymology, volume
1285
+ 383, pages 66–93. Elsevier, 2004.
1286
+ [Schaap et al., 2001] Marcel G Schaap, Feike J Leij, and
1287
+ Martinus Th Van Genuchten. Rosetta: A computer pro-
1288
+ gram for estimating soil hydraulic parameters with hi-
1289
+ erarchical pedotransfer functions. Journal of hydrology,
1290
+ 251(3-4):163–176, 2001.
1291
+ [Schneuing et al., 2022] Arne
1292
+ Schneuing,
1293
+ Yuanqi
1294
+ Du,
1295
+ Charles Harris, Arian Jamasb, Ilia Igashov, Weitao Du,
1296
+ Tom Blundell, Pietro Li´o, Carla Gomes, Max Welling,
1297
+ et al.
1298
+ Structure-based drug design with equivariant
1299
+ diffusion models. arXiv preprint arXiv:2210.13695, 2022.
1300
+ [Shanehsazzadeh et al., 2020] Amir Shanehsazzadeh, David
1301
+ Belanger, and David Dohan. Is transfer learning neces-
1302
+ sary for protein landscape prediction?
1303
+ arXiv preprint
1304
+ arXiv:2011.03443, 2020.
1305
+ [Si et al., 2020] Dong Si, Spencer A Moritz, Jonas Pfab, Jie
1306
+ Hou, Renzhi Cao, Liguo Wang, Tianqi Wu, and Jianlin
1307
+ Cheng. Deep learning to predict protein backbone struc-
1308
+ ture from high-resolution cryo-em density maps. Scientific
1309
+ reports, 10(1):1–22, 2020.
1310
+ [Sinai et al., 2017] Sam Sinai,
1311
+ Eric Kelsic,
1312
+ George M
1313
+ Church, and Martin A Nowak. Variational auto-encoding
1314
+ of protein sequences. arXiv preprint arXiv:1712.03346,
1315
+ 2017.
1316
+ [Smith and Smith, 1990] Randall F Smith and Temple F
1317
+ Smith. Automatic generation of primary sequence patterns
1318
+ from sets of related protein sequences. Proceedings of the
1319
+ National Academy of Sciences, 87(1):118–122, 1990.
1320
+ [Strodthoff et al., 2020] Nils Strodthoff,
1321
+ Patrick Wagner,
1322
+ Markus Wenzel, and Wojciech Samek. Udsmprot: univer-
1323
+ sal deep sequence models for protein classification. Bioin-
1324
+ formatics, 36(8):2401–2409, 2020.
1325
+
1326
+ [Sturmfels et al., 2020] Pascal Sturmfels, Jesse Vig, Ali
1327
+ Madani, and Nazneen Fatema Rajani. Profile prediction:
1328
+ An alignment-based pre-training task for protein sequence
1329
+ models. arXiv preprint arXiv:2012.00195, 2020.
1330
+ [Sverrisson et al., 2021] Freyr
1331
+ Sverrisson,
1332
+ Jean
1333
+ Feydy,
1334
+ Bruno E Correia, and Michael M Bronstein. Fast end-to-
1335
+ end learning on protein surfaces. In Proceedings of the
1336
+ IEEE/CVF Conference on Computer Vision and Pattern
1337
+ Recognition, pages 15272–15281, 2021.
1338
+ [Thomas et al., 2005] John Thomas, Naren Ramakrishnan,
1339
+ and Chris Bailey-Kellogg. Graphical models of residue
1340
+ coupling in protein families.
1341
+ In Proceedings of the 5th
1342
+ international workshop on Bioinformatics, pages 12–20,
1343
+ 2005.
1344
+ [Torrisi et al., 2020] Mirko Torrisi, Gianluca Pollastri, and
1345
+ Quan Le.
1346
+ Deep learning methods in protein structure
1347
+ prediction. Computational and Structural Biotechnology
1348
+ Journal, 18:1301–1310, 2020.
1349
+ [Townshend et al., 2019] Raphael Townshend, Rishi Bedi,
1350
+ Patricia Suriana, and Ron Dror. End-to-end learning on
1351
+ 3d protein structure for interface prediction. Advances in
1352
+ Neural Information Processing Systems, 32, 2019.
1353
+ [Unsal et al., 2020] Serbulent Unsal, Heval Atas¸, Muam-
1354
+ mer Albayrak, Kemal Turhan, Aybar C Acar, and Tunca
1355
+ Do˘gan. Evaluation of methods for protein representation
1356
+ learning: a quantitative analysis. bioRxiv, 2020.
1357
+ [Vaswani et al., 2017] Ashish Vaswani, Noam Shazeer, Niki
1358
+ Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
1359
+ Łukasz Kaiser, and Illia Polosukhin. Attention is all you
1360
+ need. Advances in neural information processing systems,
1361
+ 30, 2017.
1362
+ [Wang et al., 2019] Yanbin Wang, Zhu-Hong You, Shan
1363
+ Yang, Xiao Li, Tong-Hai Jiang, and Xi Zhou. A high ef-
1364
+ ficient biological language model for predicting protein–
1365
+ protein interactions. Cells, 8(2):122, 2019.
1366
+ [Wang et al., 2021] Zichen Wang, Steven A Combs, Ryan
1367
+ Brand, Miguel Romero Calvo, Panpan Xu, George Price,
1368
+ Nataliya Golovach, Emannuel O Salawu, Colby J Wise,
1369
+ Sri Priya Ponnapalli, et al. Lm-gvp: A generalizable deep
1370
+ learning framework for protein property prediction from
1371
+ sequence and structure. bioRxiv, 2021.
1372
+ [Weigt et al., 2009] Martin Weigt, Robert A White, Hendrik
1373
+ Szurmant, James A Hoch, and Terence Hwa. Identification
1374
+ of direct residue contacts in protein–protein interaction by
1375
+ message passing. Proceedings of the National Academy of
1376
+ Sciences, 106(1):67–72, 2009.
1377
+ [Wu and Cheng, 2022] Tianqi
1378
+ Wu
1379
+ and
1380
+ Jianlin
1381
+ Cheng.
1382
+ Atomic
1383
+ protein
1384
+ structure
1385
+ refinement
1386
+ using
1387
+ all-atom
1388
+ graph representations and se (3)-equivariant graph neural
1389
+ networks. bioRxiv, 2022.
1390
+ [Wu et al., 2021] Lirong Wu,
1391
+ Haitao Lin,
1392
+ Cheng Tan,
1393
+ Zhangyang Gao, and Stan Z Li. Self-supervised learning
1394
+ on graphs: Contrastive, generative, or predictive. IEEE
1395
+ Transactions on Knowledge and Data Engineering, 2021.
1396
+ [Wu et al., 2022] Ruidong Wu, Fan Ding, Rui Wang, Rui
1397
+ Shen, Xiwen Zhang, Shitong Luo, Chenpeng Su, Zuo-
1398
+ fan Wu, Qi Xie, Bonnie Berger, et al. High-resolution de
1399
+ novo structure prediction from primary sequence. bioRxiv,
1400
+ 2022.
1401
+ [Xia and Ku, 2021] Tian Xia and Wei-Shinn Ku. Geomet-
1402
+ ric graph representation learning on protein structure pre-
1403
+ diction. In Proceedings of the 27th ACM SIGKDD Con-
1404
+ ference on Knowledge Discovery & Data Mining, pages
1405
+ 1873–1883, 2021.
1406
+ [Xia et al., 2022] Chunqiu Xia, Shi-Hao Feng, Ying Xia, Xi-
1407
+ aoyong Pan, and Hong-Bin Shen. Fast protein structure
1408
+ comparison through effective representation learning with
1409
+ contrastive graph neural networks. PLoS computational
1410
+ biology, 18(3):e1009986, 2022.
1411
+ [Xiao et al., 2021] Yijia Xiao, Jiezhong Qiu, Ziang Li,
1412
+ Chang-Yu Hsieh, and Jie Tang.
1413
+ Modeling protein us-
1414
+ ing large-scale pretrain language model. arXiv preprint
1415
+ arXiv:2108.07435, 2021.
1416
+ [Xie et al., 2022] Yaochen Xie, Zhao Xu, Jingtun Zhang,
1417
+ Zhengyang Wang, and Shuiwang Ji. Self-supervised learn-
1418
+ ing of graph neural networks: A unified review.
1419
+ IEEE
1420
+ Transactions on Pattern Analysis and Machine Intelli-
1421
+ gence, 2022.
1422
+ [Xu and Zhang, 2011] Dong Xu and Yang Zhang. Improv-
1423
+ ing the physical realism and structural accuracy of protein
1424
+ models by a two-step atomic-level energy minimization.
1425
+ Biophysical journal, 101(10):2525–2534, 2011.
1426
+ [Xu et al., 2018] Keyulu Xu, Weihua Hu, Jure Leskovec, and
1427
+ Stefanie Jegelka.
1428
+ How powerful are graph neural net-
1429
+ works? arXiv preprint arXiv:1810.00826, 2018.
1430
+ [Yang et al., 2022a] Kevin K Yang, Alex X Lu, and Nicolo K
1431
+ Fusi. Convolutions are competitive with transformers for
1432
+ protein sequence pretraining. bioRxiv, 2022.
1433
+ [Yang et al., 2022b] Kevin K Yang, Niccol`o Zanichelli, and
1434
+ Hugh Yeh. Masked inverse folding with sequence transfer
1435
+ for protein representation learning. bioRxiv, 2022.
1436
+ [You and Shen, 2022] Yuning You and Yang Shen. Cross-
1437
+ modality and self-supervised protein embedding for
1438
+ compound–protein affinity and contact prediction. Bioin-
1439
+ formatics, 38(Supplement 2):ii68–ii74, 2022.
1440
+ [Zhang and Zhang, 2010] Jian Zhang and Yang Zhang.
1441
+ A
1442
+ novel side-chain orientation dependent potential derived
1443
+ from random-walk reference state for protein fold selec-
1444
+ tion and structure prediction.
1445
+ PloS one, 5(10):e15386,
1446
+ 2010.
1447
+ [Zhang et al., 2022] Zuobai Zhang, Minghao Xu, Arian Ja-
1448
+ masb, Vijil Chenthamarakshan, Aurelie Lozano, Payel
1449
+ Das, and Jian Tang.
1450
+ Protein representation learn-
1451
+ ing by geometric structure pretraining.
1452
+ arXiv preprint
1453
+ arXiv:2203.06125, 2022.
1454
+ [Zhou et al., 2020] Guangyu Zhou, Muhao Chen, Chelsea JT
1455
+ Ju, Zheng Wang, Jyun-Yu Jiang, and Wei Wang. Muta-
1456
+ tion effect estimation on protein–protein interactions us-
1457
+
1458
+ ing deep contextualized representation learning. NAR ge-
1459
+ nomics and bioinformatics, 2(2):lqaa015, 2020.
1460
+ [Zhuang et al., 2020] Fuzhen Zhuang, Zhiyuan Qi, Keyu
1461
+ Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui
1462
+ Xiong, and Qing He. A comprehensive survey on transfer
1463
+ learning. Proceedings of the IEEE, 109(1):43–76, 2020.
1464
+
I9AyT4oBgHgl3EQf5_o0/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
ItAzT4oBgHgl3EQfjv0x/content/tmp_files/2301.01520v1.pdf.txt ADDED
@@ -0,0 +1,680 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ 1
2
+ Counterfactual Explanations for Land Cover
3
+ Mapping in a Multi-class Setting
4
+ Cassio F. Dantas, Diego Marcos, Dino Ienco
5
+ Abstract—Counterfactual explanations are an emerging tool
6
+ to enhance interpretability of deep learning models. Given a
7
+ sample, these methods seek to find and display to the user similar
8
+ samples across the decision boundary. In this paper, we propose a
9
+ generative adversarial counterfactual approach for satellite image
10
+ time series in a multi-class setting for the land cover classification
11
+ task. One of the distinctive features of the proposed approach is
12
+ the lack of prior assumption on the targeted class for a given
13
+ counterfactual explanation. This inherent flexibility allows for the
14
+ discovery of interesting information on the relationship between
15
+ land cover classes. The other feature consists of encouraging the
16
+ counterfactual to differ from the original sample only in a small
17
+ and compact temporal segment. These time-contiguous perturba-
18
+ tions allow for a much sparser and, thus, interpretable solution.
19
+ Furthermore, plausibility/realism of the generated counterfactual
20
+ explanations is enforced via the proposed adversarial learning
21
+ strategy.
22
+ I. INTRODUCTION
23
+ Deep learning techniques have gained widespread popu-
24
+ larity in the remote sensing field due to impressive results
25
+ on a variety of tasks such as image super-resolution, image
26
+ restoration, biophysical variables estimation and land cover
27
+ classification from satellite image time series (SITS) data [1].
28
+ Of particular importance, this last task provides useful knowl-
29
+ edge to support many downstream geospatial analyses [2].
30
+ Despite the high performances achieved by recent deep learn-
31
+ ing frameworks on this task, they remain black-box models
32
+ with limited understanding on their internal behavior. Due
33
+ to this limitation, there is a growing need for improving the
34
+ interpretability of deep learning models in remote sensing with
35
+ the objective to raise up their acceptability and usefulness, as
36
+ their decision-making processes are often not transparent [3]–
37
+ [5]. Counterfactual explanation methods have recently received
38
+ increasing attention as a means to provide some level of
39
+ interpretability [6]–[8] to these black-box models. Counter-
40
+ factual explanations aim to describe the behaviour of a model
41
+ by providing minimal changes to the input data that would
42
+ result in realistic samples that result in the model predicting
43
+ a different class.
44
+ For these perturbations to be more easily interpretable it is
45
+ desirable that they are sparse and that they can be identified
46
+ with some semantic element of the input data. In the case
47
+ of time series, this would require to perturb a short and
48
+ contiguous section of the timeline [9].
49
+ Cassio F. Dantas and Dino Ienco are with UMR-TETIS laboratory, IN-
50
+ RAE, University of Montpellier, France (email: [email protected];
51
52
+ Diego Marcos is with Inria, University of Montpellier, France (email:
53
54
+ Related work: Most papers on counterfactual explana-
55
+ tions focus on image data, while much fewer concentrate on
56
+ time series [9]–[15]. To the best of our knowledge, this is the
57
+ first paper focusing more specifically on counterfactuals for
58
+ remote sensing time series data. While [9], [10] also generate
59
+ time-contiguous perturbations, counterfactual plausibility is
60
+ achieved by replacing an interval of the time series by a portion
61
+ of another sample from the dataset [9] or shapelet motifs [10]
62
+ (also used in [12]). In contrast, we use an adversarial approach
63
+ to learn a counterfactual generator. In a multivariate setting,
64
+ the approach in [11] replaces entire variables (not just a time
65
+ section) with variables from another multivariate sample in
66
+ the dataset. Related adversarial approaches are proposed in
67
+ [13], [14], but time localization is not enforced. Finally, in
68
+ many existing approaches only the binary classification case
69
+ is considered [10], [14], [15], and when applied to the multi-
70
+ class case, it usually requires explicitly picking a target class
71
+ for every counterfactual explanation [11], [13]–[15].
72
+ Contributions: Here, we propose a counterfactual genera-
73
+ tion approach in a multi-class land cover classification setting
74
+ for satellite image time series data. The proposed approach
75
+ generates counterfactual explanations that are plausible (i.e.
76
+ belong as much as possible to the data distribution) and close
77
+ to the original data (modifying only a limited and contiguous
78
+ set of time entries by a small amount). Finally, it is not
79
+ necessary to pre-determine a target class for the generated
80
+ counterfactual.
81
+ Paper outline: In Section II we describe the considered
82
+ study case with the associated remote sensing data. After
83
+ detailing the proposed method in Section III, we present the
84
+ experimental results in Section IV. Concluding remarks and
85
+ future works are outlined in Section V.
86
+ II. STUDY AREA
87
+ The study site covers an area around the town of Koumbia,
88
+ in the Province of Tuy, Hauts-Bassins region, in the south-
89
+ west of Burkina Faso. This area has a surface of about 2338
90
+ km2, and is situated in the sub-humid sudanian zone. The
91
+ surface is covered mainly by natural savannah (herbaceous and
92
+ shrubby) and forests, interleaved with a large portion of land
93
+ (around 35%) used for rainfed agricultural production (mostly
94
+ smallholder farming). The main crops are cereals (maize,
95
+ sorghum and millet) and cotton, followed by oleaginous and
96
+ leguminous crops. Several temporary watercourses constitute
97
+ the hydrographic network around the city of Koumbia. Fig-
98
+ ure 1 presents the study site with the reference data (ground
99
+ truth) superposed on a Sentinel-2 image.
100
+ arXiv:2301.01520v1 [cs.LG] 4 Jan 2023
101
+
102
+ 2
103
+ Fig. 1: Location of the Koumbia study site. The corresponding
104
+ ground truth is shown on the right.
105
+ Fig. 2: Acquisition dates of the Sentinel-2 Satellite Image Time
106
+ Series on the year 2020.
107
+ Concerning the satellite data, we collected a time series
108
+ of Sentinel-2 images spanning the year 2020 from January
109
+ to December. All images were provided by the THEIA Pole
110
+ platform1 at level-2A, which consist of atmospherically cor-
111
+ rected surface reflectances (cf. MAJA processing chain [16])
112
+ and relative cloud/shadow masks. A standard pre-processing
113
+ was performed over each band to replace cloudy pixel values
114
+ as detected by the available cloud masks based on the method
115
+ proposed in [17]. Figure 2 depicts the acquisition dates of the
116
+ Sentinel-2 satellite image time series. Finally, from the spectral
117
+ raw bands at 10-m of spatial resolution the NDVI (Normalized
118
+ Differential Vegetation Index) was derived.
119
+ The GT (ground truth) data for the study site is a collection
120
+ of (i) digitized plots from a GPS field mission performed in
121
+ October 2020 and mostly covering classes within cropland and
122
+ (ii) additional reference plots on non-crop classes obtained by
123
+ photo-interpretation by an expert. Finally, the polygons have
124
+ been rasterized at the S2 spatial resolution (10-m), resulting
125
+ in 79961 labeled pixels. The statistics related to the GT are
126
+ reported in Table I.
127
+ Class
128
+ Label
129
+ Pixels
130
+ 1
131
+ Cereals
132
+ 9 731
133
+ 2
134
+ Cotton
135
+ 6 971
136
+ 3
137
+ Oleaginous
138
+ 7 950
139
+ 4
140
+ Grassland
141
+ 12 998
142
+ 5
143
+ Shrubland
144
+ 22 546
145
+ 6
146
+ Forest
147
+ 17 435
148
+ 7
149
+ Bare Soil/Built-up
150
+ 1 125
151
+ 8
152
+ Water
153
+ 1 205
154
+ Total
155
+ 79 961
156
+ TABLE I: Koumbia study site Ground Truth statistics.
157
+ Classi er
158
+ Real
159
+ Counterfactual
160
+ (frozen)
161
+ Noiser
162
+ Class A
163
+ Class B
164
+ Discriminator
165
+ Fig. 3: Schematic representation of the proposed approach.
166
+ III. PROPOSED METHOD
167
+ A. Architecture overview
168
+ For the counterfactual generation, we propose a GAN
169
+ (generative adversarial network) inspired architecture which
170
+ is summarized in Fig. 3.
171
+ A counterfactual xCF is obtained for each input sample x
172
+ by adding a perturbation δ to the original signal:
173
+ xCF = x + δ
174
+ (1)
175
+ The perturbation δ is generated by a Noiser module which is
176
+ learned with the goal to swap the prediction of the Classifier.
177
+ Finally, a Discriminator module is leveraged to ensure the
178
+ generation of realistic counterfactual examples.
179
+ B. Networks implementation and training
180
+ Regarding the different components on which our frame-
181
+ work is built on, we get inspiration by state of the art
182
+ literature in the field of satellite image time series land cover
183
+ mapping. For the Classifier network we leverage the Temporal
184
+ Convolutional Neural Network (TempCNN) model proposed
185
+ in [18]. This architecture has an encoder based on several
186
+ one-dimensional convolutional layers to explicitly cope with
187
+ the temporal dimension of the time series data followed by
188
+ two fully connected layers and a final output layer to provide
189
+ the multi-class decision.
190
+ For the Discriminator network we adopt the same archi-
191
+ tecture as the Classifier network and we replace the output
192
+ layer with a single neuron with sigmoid activation function
193
+ as commonly done for discriminator networks in adversarial
194
+ learning [19].
195
+ Concerning the Noiser module, it is implemented as a multi-
196
+ layer perceptron network with two hidden layers (each with
197
+ 128 neurons) and an output layer with the same dimensionality
198
+ of the time series data. For each of the hidden layers, batch
199
+ normalization, tangent activation function and a drop-out reg-
200
+ ularization are employed in this order while for the output
201
+ layer only the tangent activation function is used. The tangent
202
+ activation function allows us to restrict the output domain
203
+ between -1 and +1 thus, facilitating the learning process of
204
+ the different networks.
205
+ The Classifier model is pre-trained on the training set and,
206
+ successively, frozen during the adversarial learning stage since
207
+ this stage is devoted to learn the model weights associated to
208
+ the Noiser and the Discriminator (see section III-D).
209
+ 1http://theia.cnes.fr
210
+
211
+ Legend:
212
+ 000000
213
+ Cereals
214
+ Cotton
215
+ Oleag./Legum
216
+ Grassland
217
+ Shrubland
218
+ Forest
219
+ B. Soil/Built-up
220
+ WaterDD
221
+ DD
222
+ DDD
223
+ B
224
+ 2020-01
225
+ 2020-03
226
+ 2020-05
227
+ 2020-07
228
+ 2020-09
229
+ 2020-11
230
+ 2021-013
231
+ The Noiser module is updated with respect to a composite
232
+ loss made of three parts detailed in sections III-C to III-E.
233
+ Lnoiser = Lcl + λgenLgen + λw-ℓ1Lw-ℓ1
234
+ (2)
235
+ C. Class-swapping loss
236
+ To generate counterfactuals that effectively change the pre-
237
+ dicted class for a given input we use the following loss:
238
+ Lcl = − 1
239
+ n
240
+ n
241
+
242
+ i=1
243
+ y(i) log(1 − p(y(i)))
244
+ (3)
245
+ It enforces the reduction of the classifier’s softmax output for
246
+ the original label y(i), here denoted p(y(i)), eventually leading
247
+ to a change on the predicted class.
248
+ Note that, conversely to standard literature [13], [15] in
249
+ which a target class for the counterfactual example is chosen
250
+ a priori, here we purposely do not enforce the prediction of
251
+ a predefined target class. Instead, we let the Noiser free to
252
+ generate a perturbation δ that will change the classifier output
253
+ to any other class different from yi.
254
+ D. GAN-based regularization for plausibility
255
+ Counterfactual plausibility is enforced via a GAN-inspired
256
+ architecture, where a discriminator is trained to identify unreal-
257
+ istic counterfactuals while, simultaneously, the Noiser module
258
+ acts as a generator with the goal to fool the discriminator in
259
+ a two player game.
260
+ The Discriminator is updated with respect to a standard
261
+ GAN loss classifying real versus fake (counterfactual) sam-
262
+ ples:
263
+ Ldsc = − 1
264
+ n
265
+ n
266
+
267
+ i=1
268
+
269
+ log D(x(i)) + log
270
+
271
+ 1 − D(x(i)
272
+ CF)
273
+ ��
274
+ (4)
275
+ where D(x(i)) denotes the discriminator’s output for a real
276
+ input x(i) (with expected output 1) and D(x(i)
277
+ CF) its output for
278
+ a fake input x(i)
279
+ CF (with expected output 0).
280
+ The following non-saturating generator loss is used in the
281
+ Noiser update:
282
+ Lgen = − 1
283
+ n
284
+ n
285
+
286
+ i=1
287
+ log
288
+
289
+ D(x(i)
290
+ CF)
291
+
292
+ (5)
293
+ Lgen is minimized when the discriminator wrongly identifies
294
+ the counterfactuals as real inputs.
295
+ E. Unimodal regularization for time-contiguity
296
+ To generate perturbations concentrated around a contiguous
297
+ time frame we employ a weighted L1-norm penalization,
298
+ with weights growing quadratically around a central time ˜t(i)
299
+ chosen independently for each sample i ∈ {1, . . . , n}:
300
+ Lw-ℓ1 = 1
301
+ n
302
+ n
303
+
304
+ i=1
305
+ T
306
+
307
+ t=1
308
+ d(t, ˜t(i))2|δ(i)
309
+ t |
310
+ (6)
311
+ where, for the i-th sample, ˜t(i) is chosen as the time step with
312
+ the highest absolute value perturbation ˜t(i) = argmaxt |δ(i)
313
+ t |.
314
+ To avoid biasing ˜t towards the center, we use the modulo
315
+ distance d(t, ˜t) = min
316
+
317
+ (t − ˜t)%T, (˜t − t)%T
318
+
319
+ which treats
320
+ the time samples as a circular list.
321
+ This regularization also brings a degree of sparsity to the
322
+ generated perturbation δ, since its entries will tend to vanish
323
+ when getting far away from ˜t. Finally, penalizing the entries
324
+ of δ enforces the proximity (similarity) between xCF and x.
325
+ IV. RESULTS
326
+ In this section we inspect the behaviour of the proposed
327
+ method considering the study case introduced in Section II.
328
+ More precisely, we first provide a general analysis of the class
329
+ transitions induced by the counterfactual generation process.
330
+ Secondly, we discuss per-class average perturbations generated
331
+ by our framework as well as specific counterfactual examples.
332
+ Then, we assess the plausibility of the generated counterfactual
333
+ examples via anomaly detection strategies as suggested in [15].
334
+ Finally, we perform an ablation analysis to assess the role of
335
+ the different loss functions involved in the learning process of
336
+ our framework.
337
+ A. Experimental setup
338
+ The Koumbia study case described in Section II was split
339
+ into training, validation and test sets containing respectively
340
+ 50-17-33% of the 79961 samples. Each data sample cor-
341
+ responds to a (univariate) NDVI time series with 24 time
342
+ samples (cf. Fig. 2).
343
+ First, the Classifier was trained over 1000 epochs with batch
344
+ size 32 and Adam optimizer with learning rate 10−4 and
345
+ weight decay of same value. The model weights corresponding
346
+ to the best obtained F1-score on the validation set were kept.
347
+ Then, with the classifier weights frozen, the Noiser and
348
+ Discriminator modules are simultaneously trained over 100
349
+ epochs with batch size 128 and Adam optimizer.
350
+ Regularization parameters: we set λgen = 5 · 10−1 and
351
+ λw-ℓ1 = 5 · 10−2 on the reported results. In practice, in-
352
+ creasing these weights implies in further constraining the
353
+ set of admissible perturbations which, in turn, leads to a
354
+ smaller rate of successful counterfactual samples –i.e., those
355
+ that actually change the classifier’s prediction (see details in
356
+ section IV-E). The chosen values lead to a success rate of
357
+ about 50%. Naturally, by further relaxing these constraints
358
+ (reducing λgen and λw-ℓ1) would lead to higher success rates,
359
+ but the generated counterfactual samples would be of lesser
360
+ quality in terms of plausibility (due to λgen) as well as time
361
+ localization and proximity (due to λw-ℓ1).
362
+ B. Visualizing class relationships
363
+ The class transitions induced by the counterfactual samples
364
+ are summarized in Fig.
365
+ 4. The left (resp. right) graph was
366
+ generated by feeding the obtained network with each of the
367
+ training (resp. test) data samples. They present very similar
368
+ behavior, which attests the fact that the proposed method
369
+ generalizes well to previously unseen data. We recall that the
370
+ class transitions are to no extent pre-defined on our approach;
371
+ on the contrary, our method allows input samples from the
372
+
373
+ 4
374
+ CEREALS
375
+ COTTON
376
+ OLEAGINOUS
377
+ GRASSLAND
378
+ SHRUBLAND
379
+ FOREST
380
+ B.
381
+ W.
382
+ CEREALS
383
+ COTTON
384
+ OLEAGINOUS
385
+ GRASSLAND
386
+ SHRUBLAND
387
+ FOREST
388
+ B.
389
+ W.
390
+ Fig. 4: Summary of class transitions induced by the counter-
391
+ factuals. Training data (left) and test data (right), where B.
392
+ stands for Bare Soil and W. for Water classes.
393
+ Fig. 5: Examples of average counterfactual perturbations be-
394
+ tween classes Cereals and Grassland on both ways. Shaded
395
+ area corresponds to the standard deviation.
396
+ same class to freely split-up into multiple target classes.
397
+ Transitions obtained in such a way thus bring up valuable
398
+ insights on the relation between classes.
399
+ The obtained transitions are very much in line with the
400
+ intuitive relation between the different classes. For instance,
401
+ the three crop-related classes (Cereals, Cotton and Oleaginous)
402
+ form a very coherent cluster, with almost all transitions staying
403
+ within the sub-group. The vegetation classes Shrubland and
404
+ Forest are most often sent to one another, while Grassland
405
+ remains much closer to the crop classes (especially Oleagi-
406
+ nous). The Bare Soil class is also most often transformed into
407
+ Oleaginous. Finally, the Water class is very rarely modified
408
+ by the counterfactual learning process, which is somewhat ex-
409
+ pected due to its very distinct characteristic (NDVI signature)
410
+ compared to the other classes.
411
+ The ratio of successful class-swapping counterfactual sam-
412
+ ples –i.e., those that actually change the classifier’s prediction–
413
+ was 52.7% (17947 over 34066) for the training data and 43.8%
414
+ (8765 over 20006) for the test data, considering only the
415
+ samples that were correctly classified before counterfactuals.
416
+ C. Counterfactual examples
417
+ Examples of average perturbation profiles for two different
418
+ class transitions are depicted in Fig 5.
419
+ It is interesting to notice how the perturbations correspond
420
+ roughly to the opposite of each other, which is quite suitable
421
+ since they correspond to opposite transitions between the same
422
+ two classes.
423
+ Fig. 6: Examples of original time series with corresponding
424
+ counterfactual from classes Shrubland (4) and Forest (5) on
425
+ both ways.
426
+ Two illustrative examples of counterfactual explanations are
427
+ shown in Fig. 6. It is interesting to observe the similarity
428
+ between the generated counterfactual and a real data example
429
+ from the same class (on the neighboring plot).
430
+ To transform a Shrubland sample into a Forest one, NDVI is
431
+ added between the months of July and October. The opposite
432
+ is done to obtain the reverse transition, which matches the
433
+ general knowledge of such land cover classes on the consid-
434
+ ered study area. Also note that the NDVI peak is slight shifted
435
+ from one class to another.
436
+ From the provided examples, one can verify that the ob-
437
+ tained counterfactual do look realistic (this aspect is further
438
+ evaluated in section IV-D) besides differing from the real
439
+ signal only on a contiguous time window. These two properties
440
+ have been explicitly enforced via the losses in eqs. (5) and (6).
441
+ D. Plausibility analysis
442
+ In this section, we quantify to what extent the proposed
443
+ counterfactual explanations fit the original data distribution.
444
+ To do so, we run an anomaly detection method, Isolation
445
+ Forest [20], on both the original data and corresponding
446
+ counterfactuals. To attest the importance of the proposed
447
+ adversarial training for the generation of realistic/plausible
448
+ counterfactuals, we perform an ablation study confronting
449
+ the proposed model trained with and without the generator
450
+ loss in Eq. (5). Fig. 7 shows contingency matrices relating
451
+ the isolation forest outputs on the original data (rows) and
452
+ on the corresponding counterfactual explanations (columns).
453
+ Two counterfactual generation approaches are investigated: the
454
+ proposed method (left matrix) and its non-adversarial variant
455
+ (right matrix). In the figures, diagonal entries correspond
456
+ to matching isolation forest outputs –i.e., same prediction
457
+ (inlier/outlier) for both real and counterfactual data. Later,
458
+ in Table II we compute some metrics on such contingency
459
+ matrices to further quantify and summarize the behaviour of
460
+ the compared methods. The proposed counterfactual model
461
+ achieves impressive results, even leading to more samples
462
+ identified as inliers than the real data itself (23806 against
463
+ 23755), since proposed approach converts less inliers into
464
+ outliers (164) than the other way around (215).
465
+ The non-adversarial variant, on the other hand, obtains
466
+ considerably more degraded results, as it converts as many
467
+ as 4338 real inlier samples into outliers (about 20 times
468
+ more). Such a gap becomes evident when looking at the
469
+
470
+ Cereals -→Grassland(876 CFs)
471
+ 0.3
472
+ 0.2
473
+ 0.1
474
+ 0.0
475
+ 0.1
476
+ -0.20.9
477
+ 0.8
478
+ 0.7 -
479
+ NDVI
480
+ 0.6
481
+ 0.5
482
+ 0.4
483
+ Real (Shrubland)
484
+ CF (Forest)
485
+ 0.3
486
+ 20-
487
+ -03
488
+ 2020-
489
+ 2020-
490
+ -09
491
+ 2020-
492
+ -01
493
+ 2020-
494
+ 2020-
495
+ 2021Grassland-→Cereals(1394CFs)
496
+ 0.1
497
+ 0.0
498
+ 0.1-
499
+ 0.2
500
+ 心0.9
501
+ 0.8 -
502
+ 0.7 -
503
+ 0.6
504
+ 0.5
505
+ 0.4 -
506
+ Real (Forest)
507
+ 0.3
508
+ CF (Shrubland)
509
+ 2020-
510
+ 2020-
511
+ -09
512
+ -03
513
+ 2020-
514
+ 2020-
515
+ 2020-
516
+ 2021-5
517
+ Inlier
518
+ Outlier
519
+ Counterfactual
520
+ Inlier
521
+ Outlier
522
+ Real
523
+ 99.3%
524
+ (23591)
525
+ 0.7%
526
+ (164)
527
+ 7.1%
528
+ (215)
529
+ 92.9%
530
+ (2820)
531
+ Proposed model
532
+ Inlier
533
+ Outlier
534
+ Counterfactual
535
+ Inlier
536
+ Outlier
537
+ Real
538
+ 81.7%
539
+ (19417)
540
+ 18.3%
541
+ (4338)
542
+ 1.2%
543
+ (35)
544
+ 98.8%
545
+ (3000)
546
+ Non-adversarial
547
+ Fig. 7: Isolation forest results on real (rows) and counterfactual
548
+ data (columns). Proposed model with (left) and without (right)
549
+ adversarial loss during training. Row-normalized percentages.
550
+ corresponding accuracy and normalized mutual information
551
+ (NMI) computed w.r.t. the isolation forest results on the
552
+ original data (cf. Table II). Such scores measure to what degree
553
+ the inlier/outlier partitioning obtained on the counterfactual
554
+ samples (for each of the two compared variants) matches the
555
+ one obtained on the original data. The higher they are the
556
+ better the two partitions match. The obtained results clearly
557
+ show that counterfactual plausibility is achieved thanks to the
558
+ adversarial training process.
559
+ Method
560
+ Accuracy
561
+ NMI
562
+ Inliers ratio
563
+ Proposed
564
+ 98.6%
565
+ 0.808
566
+ 88.9%
567
+ Non-adversarial
568
+ 83.7%
569
+ 0.337
570
+ 72.6%
571
+ TABLE II: Plausibility analysis using different performance
572
+ metrics. Isolation Forest results on the real data were used as
573
+ ground truth for the accuracy and NMI scores.
574
+ E. Other ablation studies
575
+ In Table III we compare the number of successful class-
576
+ swapping counterfactual samples as well as the average ℓ2
577
+ and ℓ1 norms of the perturbations δ generated by the proposed
578
+ model and two variants ignoring the generator loss (Lgen) and
579
+ the weighted-ℓ1 loss (Lw-ℓ1), respectively.
580
+ One can see that the removal of the auxiliary losses signif-
581
+ icantly bumps the class-swapping rate, but it happens at the
582
+ expense of either: 1) counterfactual plausibility, as shown in
583
+ the Section IV-D for the removal of Lgen; 2) counterfactual
584
+ proximity/similarity, as demonstrated by the dramatic increase
585
+ on the norm of the generated perturbations (or, equivalently,
586
+ the distance between x and xCF) upon removal of Lw-ℓ1.
587
+ Method
588
+ Class-swap CF
589
+ Average ∥δ∥2
590
+ Average ∥δ∥1
591
+ Proposed
592
+ 43.8%
593
+ 0.24 ± 0.18
594
+ 0.76 ± 0.54
595
+ Without Lgen
596
+ 83.7%
597
+ 0.97 ± 0.47
598
+ 1.69 ± 0.99
599
+ Without Lw-ℓ1
600
+ 99.6%
601
+ 4.79 ± 0.07
602
+ 23.3 ± 0.53
603
+ TABLE III: Ablation study on test data.
604
+ V. CONCLUSION
605
+ In this letter we have presented a new framework to generate
606
+ counterfactual SITS samples of vegetation indices (i.e. NDVI)
607
+ for the land cover classification task. The proposed method
608
+ overcomes the restriction to apriori define the source and the
609
+ target classes for the counterfactual generation process while
610
+ it exploits adversarial learning to ensure realistic counterfac-
611
+ tual samples. As possible future work, we would extend the
612
+ framework to the case of multivariate time series satellite data
613
+ as well as leverage the feedback provided by the generated
614
+ counterfactual samples to improve the robustness of the land
615
+ cover classifier regarding the most frequent class confusions.
616
+ REFERENCES
617
+ [1] Q. Yuan, H. Shen, T. Li, Z. Li, S. Li, Y. Jiang, H. Xu, W. Tan, Q. Yang,
618
+ J. Wang, J. Gao, and L. Zhang, “Deep learning in environmental remote
619
+ sensing: Achievements and challenges,” Remote Sensing of Environment,
620
+ vol. 241, p. 111716, 2020.
621
+ [2] J. Inglada, A. Vincent, M. Arias, B. Tardy, D. Morin, and I. Rodes,
622
+ “Operational high resolution land cover map production at the country
623
+ scale using satellite image time series,” Remote. Sens., vol. 9, no. 1,
624
+ p. 95, 2017.
625
+ [3] A. Adadi and M. Berrada, “Peeking inside the black-box: a survey on
626
+ explainable artificial intelligence (XAI),” IEEE access, vol. 6, 2018.
627
+ [4] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and
628
+ D. Pedreschi, “A survey of methods for explaining black box models,”
629
+ ACM Comput. Surv., vol. 51, no. 5, Sep. 2019.
630
+ [5] A. B. Arrieta, N. D´ıaz-Rodr´ıguez, J. Del Ser, A. Bennetot, S. Tabik,
631
+ A. Barbado, S. Garc´ıa, S. Gil-L´opez, D. Molina, R. Benjamins et al.,
632
+ “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, op-
633
+ portunities and challenges toward responsible AI,” Information fusion,
634
+ vol. 58, pp. 82–115, 2020.
635
+ [6] S. Wachter, B. Mittelstadt, and C. Russell, “Counterfactual explanations
636
+ without opening the black box: Automated decisions and the gdpr,”
637
+ Harv. JL & Tech., vol. 31, p. 841, 2017.
638
+ [7] S. Verma, J. Dickerson, and K. Hines, “Counterfactual explanations for
639
+ machine learning: A review,” arXiv preprint arXiv:2010.10596, 2020.
640
+ [8] R. Guidotti, “Counterfactual explanations and how to find them: litera-
641
+ ture review and benchmarking,” Data Mining and Knowledge Discovery,
642
+ pp. 1–55, 2022.
643
+ [9] E. Delaney, D. Greene, and M. T. Keane, “Instance-based counterfactual
644
+ explanations for time series classification,” in International Conference
645
+ on Case-Based Reasoning.
646
+ Springer, 2021, pp. 32–47.
647
+ [10] P. Li, S. F. Boubrahimi, and S. M. Hamd, “Motif-guided time series
648
+ counterfactual explanations,” arXiv preprint arXiv:2211.04411, 2022.
649
+ [11] E. Ates, B. Aksar, V. J. Leung, and A. K. Coskun, “Counterfactual
650
+ explanations for multivariate time series,” in International Conference
651
+ on Applied Artificial Intelligence (ICAPAI), 2021, pp. 1–8.
652
+ [12] R. Guidotti, A. Monreale, F. Spinnato, D. Pedreschi, and F. Giannotti,
653
+ “Explaining any time series classifier,” in IEEE International Conference
654
+ on Cognitive Machine Intelligence (CogMI), 2020, pp. 167–176.
655
+ [13] J. Lang, M. Giese, W. Ilg, and S. Otte, “Generating sparse coun-
656
+ terfactual explanations for multivariate time series,” arXiv preprint
657
+ arXiv:2206.00931, 2022.
658
+ [14] A. Van Looveren, J. Klaise, G. Vacanti, and O. Cobb, “Conditional
659
+ generative models for counterfactual explanations,” arXiv preprint
660
+ arXiv:2101.10123, 2021.
661
+ [15] S. Filali Boubrahimi and S. M. Hamdi, “On the mining of time series
662
+ data counterfactual explanations using barycenters,” in ACM CIKM.
663
+ ACM, 2022, p. 3943–3947.
664
+ [16] O. Hagolle, M. Huc, D. Villa Pascual, and G. Dedieu, “A multi-temporal
665
+ and multi-spectral method to estimate aerosol optical thickness over
666
+ land, for the atmospheric correction of formosat-2, landsat, venµs and
667
+ sentinel-2 images,” Rem. Sens., vol. 7, no. 3, pp. 2668–2691, 2015.
668
+ [17] J. Inglada, A. Vincent, M. Arias, and B. Tardy, “iota2-a25386,” Jul.
669
+ 2016. [Online]. Available: https://doi.org/10.5281/zenodo.58150
670
+ [18] C. Pelletier, G. I. Webb, and F. Petitjean, “Temporal convolutional neural
671
+ network for the classification of satellite image time series,” Remote.
672
+ Sens., vol. 11, no. 5, p. 523, 2019.
673
+ [19] A. Creswell, T. White, V. Dumoulin, K. Arulkumaran, B. Sengupta, and
674
+ A. A. Bharath, “Generative adversarial networks: An overview,” IEEE
675
+ Signal Process. Mag., vol. 35, no. 1, pp. 53–65, 2018.
676
+ [20] O. Li, H. Liu, C. Chen, and C. Rudin, “Deep learning for case-
677
+ based reasoning through prototypes: A neural network that explains its
678
+ predictions,” AAAI Conference on Artificial Intelligence, vol. 32, no. 1,
679
+ Apr. 2018.
680
+
ItAzT4oBgHgl3EQfjv0x/content/tmp_files/load_file.txt ADDED
@@ -0,0 +1,435 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf,len=434
2
+ page_content='1 Counterfactual Explanations for Land Cover Mapping in a Multi-class Setting Cassio F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
3
+ page_content=' Dantas, Diego Marcos, Dino Ienco Abstract—Counterfactual explanations are an emerging tool to enhance interpretability of deep learning models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
4
+ page_content=' Given a sample, these methods seek to find and display to the user similar samples across the decision boundary.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
5
+ page_content=' In this paper, we propose a generative adversarial counterfactual approach for satellite image time series in a multi-class setting for the land cover classification task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
6
+ page_content=' One of the distinctive features of the proposed approach is the lack of prior assumption on the targeted class for a given counterfactual explanation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
7
+ page_content=' This inherent flexibility allows for the discovery of interesting information on the relationship between land cover classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
8
+ page_content=' The other feature consists of encouraging the counterfactual to differ from the original sample only in a small and compact temporal segment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
9
+ page_content=' These time-contiguous perturba- tions allow for a much sparser and, thus, interpretable solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
10
+ page_content=' Furthermore, plausibility/realism of the generated counterfactual explanations is enforced via the proposed adversarial learning strategy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
11
+ page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
12
+ page_content=' INTRODUCTION Deep learning techniques have gained widespread popu- larity in the remote sensing field due to impressive results on a variety of tasks such as image super-resolution, image restoration, biophysical variables estimation and land cover classification from satellite image time series (SITS) data [1].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
13
+ page_content=' Of particular importance, this last task provides useful knowl- edge to support many downstream geospatial analyses [2].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
14
+ page_content=' Despite the high performances achieved by recent deep learn- ing frameworks on this task, they remain black-box models with limited understanding on their internal behavior.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
15
+ page_content=' Due to this limitation, there is a growing need for improving the interpretability of deep learning models in remote sensing with the objective to raise up their acceptability and usefulness, as their decision-making processes are often not transparent [3]– [5].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
16
+ page_content=' Counterfactual explanation methods have recently received increasing attention as a means to provide some level of interpretability [6]–[8] to these black-box models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
17
+ page_content=' Counter- factual explanations aim to describe the behaviour of a model by providing minimal changes to the input data that would result in realistic samples that result in the model predicting a different class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
18
+ page_content=' For these perturbations to be more easily interpretable it is desirable that they are sparse and that they can be identified with some semantic element of the input data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
19
+ page_content=' In the case of time series, this would require to perturb a short and contiguous section of the timeline [9].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
20
+ page_content=' Cassio F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
21
+ page_content=' Dantas and Dino Ienco are with UMR-TETIS laboratory, IN- RAE, University of Montpellier, France (email: cassio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
22
+ page_content='fraga-dantas@inrae.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
23
+ page_content='fr;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
24
+ page_content=' dino.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
25
+ page_content='ienco@inrae.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
26
+ page_content='fr).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
27
+ page_content=' Diego Marcos is with Inria, University of Montpellier, France (email: diego.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
28
+ page_content='marcos@inria.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
29
+ page_content='fr) Related work: Most papers on counterfactual explana- tions focus on image data, while much fewer concentrate on time series [9]–[15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
30
+ page_content=' To the best of our knowledge, this is the first paper focusing more specifically on counterfactuals for remote sensing time series data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
31
+ page_content=' While [9], [10] also generate time-contiguous perturbations, counterfactual plausibility is achieved by replacing an interval of the time series by a portion of another sample from the dataset [9] or shapelet motifs [10] (also used in [12]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
32
+ page_content=' In contrast, we use an adversarial approach to learn a counterfactual generator.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
33
+ page_content=' In a multivariate setting, the approach in [11] replaces entire variables (not just a time section) with variables from another multivariate sample in the dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
34
+ page_content=' Related adversarial approaches are proposed in [13], [14], but time localization is not enforced.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
35
+ page_content=' Finally, in many existing approaches only the binary classification case is considered [10], [14], [15], and when applied to the multi- class case, it usually requires explicitly picking a target class for every counterfactual explanation [11], [13]–[15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
36
+ page_content=' Contributions: Here, we propose a counterfactual genera- tion approach in a multi-class land cover classification setting for satellite image time series data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
37
+ page_content=' The proposed approach generates counterfactual explanations that are plausible (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
38
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
39
+ page_content=' belong as much as possible to the data distribution) and close to the original data (modifying only a limited and contiguous set of time entries by a small amount).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
40
+ page_content=' Finally, it is not necessary to pre-determine a target class for the generated counterfactual.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
41
+ page_content=' Paper outline: In Section II we describe the considered study case with the associated remote sensing data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
42
+ page_content=' After detailing the proposed method in Section III, we present the experimental results in Section IV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
43
+ page_content=' Concluding remarks and future works are outlined in Section V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
44
+ page_content=' II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
45
+ page_content=' STUDY AREA The study site covers an area around the town of Koumbia, in the Province of Tuy, Hauts-Bassins region, in the south- west of Burkina Faso.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
46
+ page_content=' This area has a surface of about 2338 km2, and is situated in the sub-humid sudanian zone.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
47
+ page_content=' The surface is covered mainly by natural savannah (herbaceous and shrubby) and forests, interleaved with a large portion of land (around 35%) used for rainfed agricultural production (mostly smallholder farming).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
48
+ page_content=' The main crops are cereals (maize, sorghum and millet) and cotton, followed by oleaginous and leguminous crops.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
49
+ page_content=' Several temporary watercourses constitute the hydrographic network around the city of Koumbia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
50
+ page_content=' Fig- ure 1 presents the study site with the reference data (ground truth) superposed on a Sentinel-2 image.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
51
+ page_content=' arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
52
+ page_content='01520v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
53
+ page_content='LG] 4 Jan 2023 2 Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
54
+ page_content=' 1: Location of the Koumbia study site.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
55
+ page_content=' The corresponding ground truth is shown on the right.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
56
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
57
+ page_content=' 2: Acquisition dates of the Sentinel-2 Satellite Image Time Series on the year 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
58
+ page_content=' Concerning the satellite data, we collected a time series of Sentinel-2 images spanning the year 2020 from January to December.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
59
+ page_content=' All images were provided by the THEIA Pole platform1 at level-2A, which consist of atmospherically cor- rected surface reflectances (cf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
60
+ page_content=' MAJA processing chain [16]) and relative cloud/shadow masks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
61
+ page_content=' A standard pre-processing was performed over each band to replace cloudy pixel values as detected by the available cloud masks based on the method proposed in [17].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
62
+ page_content=' Figure 2 depicts the acquisition dates of the Sentinel-2 satellite image time series.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
63
+ page_content=' Finally, from the spectral raw bands at 10-m of spatial resolution the NDVI (Normalized Differential Vegetation Index) was derived.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
64
+ page_content=' The GT (ground truth) data for the study site is a collection of (i) digitized plots from a GPS field mission performed in October 2020 and mostly covering classes within cropland and (ii) additional reference plots on non-crop classes obtained by photo-interpretation by an expert.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
65
+ page_content=' Finally, the polygons have been rasterized at the S2 spatial resolution (10-m), resulting in 79961 labeled pixels.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
66
+ page_content=' The statistics related to the GT are reported in Table I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
67
+ page_content=' Class Label Pixels 1 Cereals 9 731 2 Cotton 6 971 3 Oleaginous 7 950 4 Grassland 12 998 5 Shrubland 22 546 6 Forest 17 435 7 Bare Soil/Built-up 1 125 8 Water 1 205 Total 79 961 TABLE I: Koumbia study site Ground Truth statistics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
68
+ page_content=' Classi er Real Counterfactual (frozen) Noiser Class A Class B Discriminator Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
69
+ page_content=' 3: Schematic representation of the proposed approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
70
+ page_content=' III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
71
+ page_content=' PROPOSED METHOD A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
72
+ page_content=' Architecture overview For the counterfactual generation, we propose a GAN (generative adversarial network) inspired architecture which is summarized in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
73
+ page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
74
+ page_content=' A counterfactual xCF is obtained for each input sample x by adding a perturbation δ to the original signal: xCF = x + δ (1) The perturbation δ is generated by a Noiser module which is learned with the goal to swap the prediction of the Classifier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
75
+ page_content=' Finally, a Discriminator module is leveraged to ensure the generation of realistic counterfactual examples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
76
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
77
+ page_content=' Networks implementation and training Regarding the different components on which our frame- work is built on, we get inspiration by state of the art literature in the field of satellite image time series land cover mapping.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
78
+ page_content=' For the Classifier network we leverage the Temporal Convolutional Neural Network (TempCNN) model proposed in [18].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
79
+ page_content=' This architecture has an encoder based on several one-dimensional convolutional layers to explicitly cope with the temporal dimension of the time series data followed by two fully connected layers and a final output layer to provide the multi-class decision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
80
+ page_content=' For the Discriminator network we adopt the same archi- tecture as the Classifier network and we replace the output layer with a single neuron with sigmoid activation function as commonly done for discriminator networks in adversarial learning [19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
81
+ page_content=' Concerning the Noiser module, it is implemented as a multi- layer perceptron network with two hidden layers (each with 128 neurons) and an output layer with the same dimensionality of the time series data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
82
+ page_content=' For each of the hidden layers, batch normalization, tangent activation function and a drop-out reg- ularization are employed in this order while for the output layer only the tangent activation function is used.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
83
+ page_content=' The tangent activation function allows us to restrict the output domain between -1 and +1 thus, facilitating the learning process of the different networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
84
+ page_content=' The Classifier model is pre-trained on the training set and, successively, frozen during the adversarial learning stage since this stage is devoted to learn the model weights associated to the Noiser and the Discriminator (see section III-D).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
85
+ page_content=' 1http://theia.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
86
+ page_content='cnes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
87
+ page_content='fr Legend: 000000 Cereals Cotton Oleag.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
88
+ page_content='/Legum Grassland Shrubland Forest B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
89
+ page_content=' Soil/Built-up WaterDD DD DDD B 2020-01 2020-03 2020-05 2020-07 2020-09 2020-11 2021-013 The Noiser module is updated with respect to a composite loss made of three parts detailed in sections III-C to III-E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
90
+ page_content=' Lnoiser = Lcl + λgenLgen + λw-ℓ1Lw-ℓ1 (2) C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
91
+ page_content=' Class-swapping loss To generate counterfactuals that effectively change the pre- dicted class for a given input we use the following loss: Lcl = − 1 n n � i=1 y(i) log(1 − p(y(i))) (3) It enforces the reduction of the classifier’s softmax output for the original label y(i), here denoted p(y(i)), eventually leading to a change on the predicted class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
92
+ page_content=' Note that, conversely to standard literature [13], [15] in which a target class for the counterfactual example is chosen a priori, here we purposely do not enforce the prediction of a predefined target class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
93
+ page_content=' Instead, we let the Noiser free to generate a perturbation δ that will change the classifier output to any other class different from yi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
94
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
95
+ page_content=' GAN-based regularization for plausibility Counterfactual plausibility is enforced via a GAN-inspired architecture, where a discriminator is trained to identify unreal- istic counterfactuals while, simultaneously, the Noiser module acts as a generator with the goal to fool the discriminator in a two player game.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
96
+ page_content=' The Discriminator is updated with respect to a standard GAN loss classifying real versus fake (counterfactual) sam- ples: Ldsc = − 1 n n � i=1 � log D(x(i)) + log � 1 − D(x(i) CF) �� (4) where D(x(i)) denotes the discriminator’s output for a real input x(i) (with expected output 1) and D(x(i) CF) its output for a fake input x(i) CF (with expected output 0).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
97
+ page_content=' The following non-saturating generator loss is used in the Noiser update: Lgen = − 1 n n � i=1 log � D(x(i) CF) � (5) Lgen is minimized when the discriminator wrongly identifies the counterfactuals as real inputs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
98
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
99
+ page_content=' Unimodal regularization for time-contiguity To generate perturbations concentrated around a contiguous time frame we employ a weighted L1-norm penalization, with weights growing quadratically around a central time ˜t(i) chosen independently for each sample i ∈ {1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
100
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
101
+ page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
102
+ page_content=' , n}: Lw-ℓ1 = 1 n n � i=1 T � t=1 d(t, ˜t(i))2|δ(i) t | (6) where, for the i-th sample, ˜t(i) is chosen as the time step with the highest absolute value perturbation ˜t(i) = argmaxt |δ(i) t |.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
103
+ page_content=' To avoid biasing ˜t towards the center, we use the modulo distance d(t, ˜t) = min � (t − ˜t)%T, (˜t − t)%T � which treats the time samples as a circular list.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
104
+ page_content=' This regularization also brings a degree of sparsity to the generated perturbation δ, since its entries will tend to vanish when getting far away from ˜t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
105
+ page_content=' Finally, penalizing the entries of δ enforces the proximity (similarity) between xCF and x.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
106
+ page_content=' IV.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
107
+ page_content=' RESULTS In this section we inspect the behaviour of the proposed method considering the study case introduced in Section II.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
108
+ page_content=' More precisely, we first provide a general analysis of the class transitions induced by the counterfactual generation process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
109
+ page_content=' Secondly, we discuss per-class average perturbations generated by our framework as well as specific counterfactual examples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
110
+ page_content=' Then, we assess the plausibility of the generated counterfactual examples via anomaly detection strategies as suggested in [15].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
111
+ page_content=' Finally, we perform an ablation analysis to assess the role of the different loss functions involved in the learning process of our framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
112
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
113
+ page_content=' Experimental setup The Koumbia study case described in Section II was split into training, validation and test sets containing respectively 50-17-33% of the 79961 samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
114
+ page_content=' Each data sample cor- responds to a (univariate) NDVI time series with 24 time samples (cf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
115
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
116
+ page_content=' 2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
117
+ page_content=' First, the Classifier was trained over 1000 epochs with batch size 32 and Adam optimizer with learning rate 10−4 and weight decay of same value.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
118
+ page_content=' The model weights corresponding to the best obtained F1-score on the validation set were kept.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
119
+ page_content=' Then, with the classifier weights frozen, the Noiser and Discriminator modules are simultaneously trained over 100 epochs with batch size 128 and Adam optimizer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
120
+ page_content=' Regularization parameters: we set λgen = 5 · 10−1 and λw-ℓ1 = 5 · 10−2 on the reported results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
121
+ page_content=' In practice, in- creasing these weights implies in further constraining the set of admissible perturbations which, in turn, leads to a smaller rate of successful counterfactual samples –i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
122
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
123
+ page_content=', those that actually change the classifier’s prediction (see details in section IV-E).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
124
+ page_content=' The chosen values lead to a success rate of about 50%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
125
+ page_content=' Naturally, by further relaxing these constraints (reducing λgen and λw-ℓ1) would lead to higher success rates, but the generated counterfactual samples would be of lesser quality in terms of plausibility (due to λgen) as well as time localization and proximity (due to λw-ℓ1).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
126
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
127
+ page_content=' Visualizing class relationships The class transitions induced by the counterfactual samples are summarized in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
128
+ page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
129
+ page_content=' The left (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
130
+ page_content=' right) graph was generated by feeding the obtained network with each of the training (resp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
131
+ page_content=' test) data samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
132
+ page_content=' They present very similar behavior, which attests the fact that the proposed method generalizes well to previously unseen data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
133
+ page_content=' We recall that the class transitions are to no extent pre-defined on our approach;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
134
+ page_content=' on the contrary, our method allows input samples from the 4 CEREALS COTTON OLEAGINOUS GRASSLAND SHRUBLAND FOREST B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
135
+ page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
136
+ page_content=' CEREALS COTTON OLEAGINOUS GRASSLAND SHRUBLAND FOREST B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
137
+ page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
138
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
139
+ page_content=' 4: Summary of class transitions induced by the counter- factuals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
140
+ page_content=' Training data (left) and test data (right), where B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
141
+ page_content=' stands for Bare Soil and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
142
+ page_content=' for Water classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
143
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
144
+ page_content=' 5: Examples of average counterfactual perturbations be- tween classes Cereals and Grassland on both ways.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
145
+ page_content=' Shaded area corresponds to the standard deviation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
146
+ page_content=' same class to freely split-up into multiple target classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
147
+ page_content=' Transitions obtained in such a way thus bring up valuable insights on the relation between classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
148
+ page_content=' The obtained transitions are very much in line with the intuitive relation between the different classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
149
+ page_content=' For instance, the three crop-related classes (Cereals, Cotton and Oleaginous) form a very coherent cluster, with almost all transitions staying within the sub-group.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
150
+ page_content=' The vegetation classes Shrubland and Forest are most often sent to one another, while Grassland remains much closer to the crop classes (especially Oleagi- nous).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
151
+ page_content=' The Bare Soil class is also most often transformed into Oleaginous.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
152
+ page_content=' Finally, the Water class is very rarely modified by the counterfactual learning process, which is somewhat ex- pected due to its very distinct characteristic (NDVI signature) compared to the other classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
153
+ page_content=' The ratio of successful class-swapping counterfactual sam- ples –i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
154
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
155
+ page_content=', those that actually change the classifier’s prediction– was 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
156
+ page_content='7% (17947 over 34066) for the training data and 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
157
+ page_content='8% (8765 over 20006) for the test data, considering only the samples that were correctly classified before counterfactuals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
158
+ page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
159
+ page_content=' Counterfactual examples Examples of average perturbation profiles for two different class transitions are depicted in Fig 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
160
+ page_content=' It is interesting to notice how the perturbations correspond roughly to the opposite of each other, which is quite suitable since they correspond to opposite transitions between the same two classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
161
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
162
+ page_content=' 6: Examples of original time series with corresponding counterfactual from classes Shrubland (4) and Forest (5) on both ways.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
163
+ page_content=' Two illustrative examples of counterfactual explanations are shown in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
164
+ page_content=' 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
165
+ page_content=' It is interesting to observe the similarity between the generated counterfactual and a real data example from the same class (on the neighboring plot).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
166
+ page_content=' To transform a Shrubland sample into a Forest one, NDVI is added between the months of July and October.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
167
+ page_content=' The opposite is done to obtain the reverse transition, which matches the general knowledge of such land cover classes on the consid- ered study area.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
168
+ page_content=' Also note that the NDVI peak is slight shifted from one class to another.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
169
+ page_content=' From the provided examples, one can verify that the ob- tained counterfactual do look realistic (this aspect is further evaluated in section IV-D) besides differing from the real signal only on a contiguous time window.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
170
+ page_content=' These two properties have been explicitly enforced via the losses in eqs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
171
+ page_content=' (5) and (6).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
172
+ page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
173
+ page_content=' Plausibility analysis In this section, we quantify to what extent the proposed counterfactual explanations fit the original data distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
174
+ page_content=' To do so, we run an anomaly detection method, Isolation Forest [20], on both the original data and corresponding counterfactuals.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
175
+ page_content=' To attest the importance of the proposed adversarial training for the generation of realistic/plausible counterfactuals, we perform an ablation study confronting the proposed model trained with and without the generator loss in Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
176
+ page_content=' (5).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
177
+ page_content=' Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
178
+ page_content=' 7 shows contingency matrices relating the isolation forest outputs on the original data (rows) and on the corresponding counterfactual explanations (columns).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
179
+ page_content=' Two counterfactual generation approaches are investigated: the proposed method (left matrix) and its non-adversarial variant (right matrix).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
180
+ page_content=' In the figures, diagonal entries correspond to matching isolation forest outputs –i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
181
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
182
+ page_content=', same prediction (inlier/outlier) for both real and counterfactual data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
183
+ page_content=' Later, in Table II we compute some metrics on such contingency matrices to further quantify and summarize the behaviour of the compared methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
184
+ page_content=' The proposed counterfactual model achieves impressive results, even leading to more samples identified as inliers than the real data itself (23806 against 23755), since proposed approach converts less inliers into outliers (164) than the other way around (215).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
185
+ page_content=' The non-adversarial variant, on the other hand, obtains considerably more degraded results, as it converts as many as 4338 real inlier samples into outliers (about 20 times more).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
186
+ page_content=' Such a gap becomes evident when looking at the Cereals -→Grassland(876 CFs) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
187
+ page_content='3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
188
+ page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
189
+ page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
190
+ page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
191
+ page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
192
+ page_content='20.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
193
+ page_content='9 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
194
+ page_content='8 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
195
+ page_content='7 - NDVI 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
196
+ page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
197
+ page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
198
+ page_content='4 Real (Shrubland) CF (Forest) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
199
+ page_content='3 20- 03 2020- 2020- 09 2020- 01 2020- 2020- 2021Grassland-→Cereals(1394CFs) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
200
+ page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
201
+ page_content='0 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
202
+ page_content='1- 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
203
+ page_content='2 心0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
204
+ page_content='9 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
205
+ page_content='8 - 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
206
+ page_content='7 - 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
207
+ page_content='6 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
208
+ page_content='5 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
209
+ page_content='4 - Real (Forest) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
210
+ page_content='3 CF (Shrubland) 2020- 2020- 09 03 2020- 2020- 2020- 2021-5 Inlier Outlier Counterfactual Inlier Outlier Real 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
211
+ page_content='3% (23591) 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
212
+ page_content='7% (164) 7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
213
+ page_content='1% (215) 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
214
+ page_content='9% (2820) Proposed model Inlier Outlier Counterfactual Inlier Outlier Real 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
215
+ page_content='7% (19417) 18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
216
+ page_content='3% (4338) 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
217
+ page_content='2% (35) 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
218
+ page_content='8% (3000) Non-adversarial Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
219
+ page_content=' 7: Isolation forest results on real (rows) and counterfactual data (columns).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
220
+ page_content=' Proposed model with (left) and without (right) adversarial loss during training.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
221
+ page_content=' Row-normalized percentages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
222
+ page_content=' corresponding accuracy and normalized mutual information (NMI) computed w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
223
+ page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
224
+ page_content='t.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
225
+ page_content=' the isolation forest results on the original data (cf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
226
+ page_content=' Table II).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
227
+ page_content=' Such scores measure to what degree the inlier/outlier partitioning obtained on the counterfactual samples (for each of the two compared variants) matches the one obtained on the original data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
228
+ page_content=' The higher they are the better the two partitions match.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
229
+ page_content=' The obtained results clearly show that counterfactual plausibility is achieved thanks to the adversarial training process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
230
+ page_content=' Method Accuracy NMI Inliers ratio Proposed 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
231
+ page_content='6% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
232
+ page_content='808 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
233
+ page_content='9% Non-adversarial 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
234
+ page_content='7% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
235
+ page_content='337 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
236
+ page_content='6% TABLE II: Plausibility analysis using different performance metrics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
237
+ page_content=' Isolation Forest results on the real data were used as ground truth for the accuracy and NMI scores.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
238
+ page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
239
+ page_content=' Other ablation studies In Table III we compare the number of successful class- swapping counterfactual samples as well as the average ℓ2 and ℓ1 norms of the perturbations δ generated by the proposed model and two variants ignoring the generator loss (Lgen) and the weighted-ℓ1 loss (Lw-ℓ1), respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
240
+ page_content=' One can see that the removal of the auxiliary losses signif- icantly bumps the class-swapping rate, but it happens at the expense of either: 1) counterfactual plausibility, as shown in the Section IV-D for the removal of Lgen;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
241
+ page_content=' 2) counterfactual proximity/similarity, as demonstrated by the dramatic increase on the norm of the generated perturbations (or, equivalently, the distance between x and xCF) upon removal of Lw-ℓ1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
242
+ page_content=' Method Class-swap CF Average ∥δ∥2 Average ∥δ∥1 Proposed 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
243
+ page_content='8% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
244
+ page_content='24 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
245
+ page_content='18 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
246
+ page_content='76 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
247
+ page_content='54 Without Lgen 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
248
+ page_content='7% 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
249
+ page_content='97 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
250
+ page_content='47 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
251
+ page_content='69 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
252
+ page_content='99 Without Lw-ℓ1 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
253
+ page_content='6% 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
254
+ page_content='79 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
255
+ page_content='07 23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
256
+ page_content='3 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
257
+ page_content='53 TABLE III: Ablation study on test data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
258
+ page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
259
+ page_content=' CONCLUSION In this letter we have presented a new framework to generate counterfactual SITS samples of vegetation indices (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
260
+ page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
261
+ page_content=' NDVI) for the land cover classification task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
262
+ page_content=' The proposed method overcomes the restriction to apriori define the source and the target classes for the counterfactual generation process while it exploits adversarial learning to ensure realistic counterfac- tual samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
263
+ page_content=' As possible future work, we would extend the framework to the case of multivariate time series satellite data as well as leverage the feedback provided by the generated counterfactual samples to improve the robustness of the land cover classifier regarding the most frequent class confusions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
264
+ page_content=' REFERENCES [1] Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
265
+ page_content=' Yuan, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
266
+ page_content=' Shen, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
267
+ page_content=' Li, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
268
+ page_content=' Li, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
269
+ page_content=' Li, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
270
+ page_content=' Jiang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
271
+ page_content=' Xu, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
272
+ page_content=' Tan, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
273
+ page_content=' Yang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
274
+ page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
275
+ page_content=' Gao, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
276
+ page_content=' Zhang, “Deep learning in environmental remote sensing: Achievements and challenges,” Remote Sensing of Environment, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
277
+ page_content=' 241, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
278
+ page_content=' 111716, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
279
+ page_content=' [2] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
280
+ page_content=' Inglada, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
281
+ page_content=' Vincent, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
282
+ page_content=' Arias, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
283
+ page_content=' Tardy, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
284
+ page_content=' Morin, and I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
285
+ page_content=' Rodes, “Operational high resolution land cover map production at the country scale using satellite image time series,” Remote.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
286
+ page_content=' Sens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
287
+ page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
288
+ page_content=' 9, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
289
+ page_content=' 1, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
290
+ page_content=' 95, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
291
+ page_content=' [3] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
292
+ page_content=' Adadi and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
293
+ page_content=' Berrada, “Peeking inside the black-box: a survey on explainable artificial intelligence (XAI),” IEEE access, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
294
+ page_content=' 6, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
295
+ page_content=' [4] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
296
+ page_content=' Guidotti, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
297
+ page_content=' Monreale, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
298
+ page_content=' Ruggieri, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
299
+ page_content=' Turini, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
300
+ page_content=' Giannotti, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
301
+ page_content=' Pedreschi, “A survey of methods for explaining black box models,” ACM Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
302
+ page_content=' Surv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
303
+ page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
304
+ page_content=' 51, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
305
+ page_content=' 5, Sep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
306
+ page_content=' 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
307
+ page_content=' [5] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
308
+ page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
309
+ page_content=' Arrieta, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
310
+ page_content=' D´ıaz-Rodr´ıguez, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
311
+ page_content=' Del Ser, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
312
+ page_content=' Bennetot, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
313
+ page_content=' Tabik, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
314
+ page_content=' Barbado, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
315
+ page_content=' Garc´ıa, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
316
+ page_content=' Gil-L´opez, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
317
+ page_content=' Molina, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
318
+ page_content=' Benjamins et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
319
+ page_content=', “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, op- portunities and challenges toward responsible AI,” Information fusion, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
320
+ page_content=' 58, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
321
+ page_content=' 82–115, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
322
+ page_content=' [6] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
323
+ page_content=' Wachter, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
324
+ page_content=' Mittelstadt, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
325
+ page_content=' Russell, “Counterfactual explanations without opening the black box: Automated decisions and the gdpr,” Harv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
326
+ page_content=' JL & Tech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
327
+ page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
328
+ page_content=' 31, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
329
+ page_content=' 841, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
330
+ page_content=' [7] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
331
+ page_content=' Verma, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
332
+ page_content=' Dickerson, and K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
333
+ page_content=' Hines, “Counterfactual explanations for machine learning: A review,” arXiv preprint arXiv:2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
334
+ page_content='10596, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
335
+ page_content=' [8] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
336
+ page_content=' Guidotti, “Counterfactual explanations and how to find them: litera- ture review and benchmarking,” Data Mining and Knowledge Discovery, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
337
+ page_content=' 1–55, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
338
+ page_content=' [9] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
339
+ page_content=' Delaney, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
340
+ page_content=' Greene, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
341
+ page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
342
+ page_content=' Keane, “Instance-based counterfactual explanations for time series classification,” in International Conference on Case-Based Reasoning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
343
+ page_content=' Springer, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
344
+ page_content=' 32–47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
345
+ page_content=' [10] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
346
+ page_content=' Li, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
347
+ page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
348
+ page_content=' Boubrahimi, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
349
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
350
+ page_content=' Hamd, “Motif-guided time series counterfactual explanations,” arXiv preprint arXiv:2211.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
351
+ page_content='04411, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
352
+ page_content=' [11] E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
353
+ page_content=' Ates, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
354
+ page_content=' Aksar, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
355
+ page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
356
+ page_content=' Leung, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
357
+ page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
358
+ page_content=' Coskun, “Counterfactual explanations for multivariate time series,” in International Conference on Applied Artificial Intelligence (ICAPAI), 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
359
+ page_content=' 1–8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
360
+ page_content=' [12] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
361
+ page_content=' Guidotti, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
362
+ page_content=' Monreale, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
363
+ page_content=' Spinnato, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
364
+ page_content=' Pedreschi, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
365
+ page_content=' Giannotti, “Explaining any time series classifier,” in IEEE International Conference on Cognitive Machine Intelligence (CogMI), 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
366
+ page_content=' 167–176.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
367
+ page_content=' [13] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
368
+ page_content=' Lang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
369
+ page_content=' Giese, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
370
+ page_content=' Ilg, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
371
+ page_content=' Otte, “Generating sparse coun- terfactual explanations for multivariate time series,” arXiv preprint arXiv:2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
372
+ page_content='00931, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
373
+ page_content=' [14] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
374
+ page_content=' Van Looveren, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
375
+ page_content=' Klaise, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
376
+ page_content=' Vacanti, and O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
377
+ page_content=' Cobb, “Conditional generative models for counterfactual explanations,” arXiv preprint arXiv:2101.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
378
+ page_content='10123, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
379
+ page_content=' [15] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
380
+ page_content=' Filali Boubrahimi and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
381
+ page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
382
+ page_content=' Hamdi, “On the mining of time series data counterfactual explanations using barycenters,” in ACM CIKM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
383
+ page_content=' ACM, 2022, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
384
+ page_content=' 3943–3947.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
385
+ page_content=' [16] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
386
+ page_content=' Hagolle, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
387
+ page_content=' Huc, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
388
+ page_content=' Villa Pascual, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
389
+ page_content=' Dedieu, “A multi-temporal and multi-spectral method to estimate aerosol optical thickness over land, for the atmospheric correction of formosat-2, landsat, venµs and sentinel-2 images,” Rem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
390
+ page_content=' Sens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
391
+ page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
392
+ page_content=' 7, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
393
+ page_content=' 3, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
394
+ page_content=' 2668–2691, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
395
+ page_content=' [17] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
396
+ page_content=' Inglada, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
397
+ page_content=' Vincent, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
398
+ page_content=' Arias, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
399
+ page_content=' Tardy, “iota2-a25386,” Jul.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
400
+ page_content=' 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
401
+ page_content=' [Online].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
402
+ page_content=' Available: https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
403
+ page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
404
+ page_content='5281/zenodo.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
405
+ page_content='58150 [18] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
406
+ page_content=' Pelletier, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
407
+ page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
408
+ page_content=' Webb, and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
409
+ page_content=' Petitjean, “Temporal convolutional neural network for the classification of satellite image time series,” Remote.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
410
+ page_content=' Sens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
411
+ page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
412
+ page_content=' 11, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
413
+ page_content=' 5, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
414
+ page_content=' 523, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
415
+ page_content=' [19] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
416
+ page_content=' Creswell, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
417
+ page_content=' White, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
418
+ page_content=' Dumoulin, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
419
+ page_content=' Arulkumaran, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
420
+ page_content=' Sengupta, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
421
+ page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
422
+ page_content=' Bharath, “Generative adversarial networks: An overview,” IEEE Signal Process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
423
+ page_content=' Mag.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
424
+ page_content=', vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
425
+ page_content=' 35, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
426
+ page_content=' 1, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
427
+ page_content=' 53–65, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
428
+ page_content=' [20] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
429
+ page_content=' Li, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
430
+ page_content=' Liu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
431
+ page_content=' Chen, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
432
+ page_content=' Rudin, “Deep learning for case- based reasoning through prototypes: A neural network that explains its predictions,” AAAI Conference on Artificial Intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
433
+ page_content=' 32, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
434
+ page_content=' 1, Apr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
435
+ page_content=' 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/ItAzT4oBgHgl3EQfjv0x/content/2301.01520v1.pdf'}
JNAzT4oBgHgl3EQfVPyV/content/2301.01281v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:38e7de525e2553ae5d28c6ce47e462381056a460b6d7f1559479ed34ec2bddb6
3
+ size 7156519
JdAzT4oBgHgl3EQfyP4s/content/2301.01749v1.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9423b51eb57b966b7df033bdce5e7193654dc7a93f8a702ca71caf9a8411585a
3
+ size 1365279
JdAzT4oBgHgl3EQfyP4s/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5e1f4fef5e0f1b6d39837576f8e60cbb6510e713623c42d2edbbd7a0ba3d4827
3
+ size 764179
KNE1T4oBgHgl3EQfYgRo/content/tmp_files/2301.03139v1.pdf.txt ADDED
The diff for this file is too large to render. See raw diff
 
KNE1T4oBgHgl3EQfYgRo/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
MtAzT4oBgHgl3EQfy_6c/content/tmp_files/2301.01762v1.pdf.txt ADDED
@@ -0,0 +1,2466 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ 1
2
+ Modeling Sequential Recommendation as
3
+ Missing Information Imputation
4
+ Yujie Lin, Zhumin Chen, Zhaochun Ren, Chenyang Wang, Qiang Yan, Maarten de Rijke, Xiuzhen Cheng,
5
+ Fellow, IEEE, and Pengjie Ren
6
+ Abstract—Side information is being used extensively to improve the effectiveness of sequential recommendation models. It is said to
7
+ help capture the transition patterns among items. Most previous work on sequential recommendation that uses side information models
8
+ item IDs and side information separately. This can only model part of relations between items and their side information. Moreover, in
9
+ real-world systems, not all values of item feature fields are available. This hurts the performance of models that rely on side
10
+ information. Existing methods tend to neglect the context of missing item feature fields, and fill them with generic or special values,
11
+ e.g., unknown, which might lead to sub-optimal performance. To address the limitation of sequential recommenders with side
12
+ information, we define a way to fuse side information and alleviate the problem of missing side information by proposing a unified task,
13
+ namely the missing information imputation (MII), which randomly masks some feature fields in a given sequence of items, including
14
+ item IDs, and then forces a predictive model to recover them. By considering the next item as a missing feature field, sequential
15
+ recommendation can be formulated as a special case of MII. We propose a sequential recommendation model, called missing
16
+ information imputation recommender (MIIR), that builds on the idea of MII and simultaneously imputes missing item feature values and
17
+ predicts the next item. We devise a dense fusion self-attention (DFSA) for MIIR to capture all pairwise relations between items and
18
+ their side information. Empirical studies on three benchmark datasets demonstrate that MIIR, supervised by MII, achieves a
19
+ significantly better sequential recommendation performance than state-of-the-art baselines.
20
+ Index Terms—Sequential recommendation, side information fusion, missing information imputation
21
+ !
22
+ 1
23
+ INTRODUCTION
24
+ S
25
+ EQUENTIAL recommendation models transition patterns
26
+ among items and generates a recommendation for the
27
+ next item [1]. Traditional sequential recommendation so-
28
+ lutions use the item ID as the only item feature field [2,
29
+ 3, 4, 5, 6, 7, 8]. In real-world cases, however, there is
30
+ rich side information in the form of multiple types of
31
+ structural feature fields, such as categories and brands,
32
+ and unstructured feature fields, e.g., titles and descriptions,
33
+ that can help to better model transitions between items.
34
+ In recent years, several publications have exploited side
35
+ information to improve sequential recommendation perfor-
36
+ mance [9, 10, 11, 12, 13, 14, 15]. Most focus on designing
37
+ different mechanisms to fuse side information into rec-
38
+ ommendation models. For example, Hidasi et al. [9] use
39
+ parallel recurrent neural networks (RNNs) [16] to encode
40
+
41
+ Yujie Lin, School of Computer Science and Technology, Shandong Univer-
42
+ sity, Qingdao, China, E-mail: [email protected]
43
+
44
+ Zhumin Chen, School of Computer Science and Technology, Shandong
45
+ University, Qingdao, China, E-mail: [email protected]
46
+
47
+ Zhaochun Ren, School of Computer Science and Technology, Shandong
48
+ University, Qingdao, China, E-mail: [email protected]
49
+
50
+ Chenyang Wang, School of Computer Science and Technology, Shandong
51
+ University, Qingdao, China, E-mail: [email protected]
52
+
53
+ Qiang
54
+ Yan,
55
+ WeChat,
56
+ Tencent,
57
+ Guangzhou,
58
+ China,
59
+ E-mail:
60
61
+
62
+ Maarten de Rijke, Informatics Institute, University of Amsterdam, Ams-
63
+ terdam, The Netherlands, E-mail: [email protected]
64
+
65
+ Xiuzhen Cheng, School of Computer Science and Technology, Shandong
66
+ University, Qingdao, China, E-mail: [email protected]
67
+
68
+ Pengjie Ren, School of Computer Science and Technology, Shandong
69
+ University, Qingdao, China, E-mail: [email protected]
70
+ the information in item IDs and attributes, respectively, and
71
+ then combine the outputs of RNNs for item recommenda-
72
+ tion. Zhang et al. [10] employ two groups of self-attention
73
+ blocks [17] for modeling items and features, and fuse them
74
+ in the final stage.
75
+ Importantly, previous work for sequential recommenda-
76
+ tion with side information usually regards side information
77
+ as an auxiliary representation of the item, so models item
78
+ IDs and side information separately. As a result, such meth-
79
+ ods only encode partial relations in item sequences, e.g., the
80
+ relation between an item and its side information, while the
81
+ relation between an item and the side information of other
82
+ items in the sequence is not well captured.
83
+ Even more importantly, previous studies often assume
84
+ that all side information is available, which is rarely the
85
+ case in real-world scenarios. As illustrated in Fig. 1(a),
86
+ i.e., the second and third items lack category and title
87
+ information, respectively. Previous work has proposed to
88
+ fill such gaps with special values, such as a general category
89
+ and a padding text, to make models trainable and produce
90
+ outputs. However, for different items and item sequences,
91
+ these special values are the same: they do not provide useful
92
+ and specific information for recommendations and might
93
+ introduce biases into the model learning instead [18]. As a
94
+ result, as illustrated in Fig. 1(b), a model might recommend
95
+ the wrong item. Instead, we propose to impute the missing
96
+ side information, so that the recommendation model can use
97
+ information from missing feature fields based on contexts,
98
+ as illustrated in Fig. 1(c).
99
+ Some recent studies address the probem of missing
100
+ side information in recommendation data. Wang et al. [19]
101
+ arXiv:2301.01762v1 [cs.IR] 4 Jan 2023
102
+
103
+ 2
104
+ (a) Original sequence.
105
+ (b) Existing work without imputation.
106
+ (c) Our work with imputation.
107
+ Fig. 1. Sequential recommendation of items with side information. Gray
108
+ blocks represent missing information. “[PAD]” (in (b)) indicates padding
109
+ with generic or special values as often done in existing work. “[Impute]”
110
+ (in (c)) indicates imputation with actual values for missing feature fields.
111
+ employ an auto-encoder (AE) with a modality dropout
112
+ to recover the missing rating and side information. Shi
113
+ et al. [18] propose an adaptive feature sampling strategy
114
+ to introduce more missing feature fields into the training
115
+ process, which increases the robustness of the recommen-
116
+ dation model against missing side information. Wu et al.
117
+ [20] define item recommendation and attribute inference in
118
+ a user-item bipartite graph with attributes, and propose a
119
+ graph convolutional network (GCN) [21] based model to
120
+ join these two tasks. However, the work just listed mainly
121
+ targets non-sequential recommendation. Moreover, it treats
122
+ item recommendation and side information imputation as
123
+ different tasks.
124
+ In this work, we seek to design a sequential recommen-
125
+ dation model that can handle missing feature fields of items
126
+ in items sequences. The main challenge is how to adap-
127
+ tively impute missing information, including missing side
128
+ information and the next item, according to the information
129
+ available in the item sequence. First, we propose a task,
130
+ the missing information imputation (MII) task that randomly
131
+ masks some non-missing feature fields, including item IDs,
132
+ in the input sequence, and then asks the model to recover
133
+ them in the output. Since the next item to be recommended
134
+ can also be seen as a missing feature field in the sequence,
135
+ MII unifies the missing side information imputation task
136
+ with the next item prediction task. MII can be considered
137
+ as the extension of the masked item prediction task [22]
138
+ that only considers and masks item IDs. Based on the
139
+ MII task, we propose a sequential recommendation model,
140
+ called missing information imputation recommender (MIIR),
141
+ that jointly imputes missing side information and predicts
142
+ the next item for the given item sequence. MIIR employs
143
+ a dense fusion self-attention (DFSA) mechanism to fuse the
144
+ information in IDs and other feature fields for predicting
145
+ both missing side information and the next item. DFSA
146
+ captures the relation between any pair of feature fields in
147
+ the input sequence, allowing it to fully fuse various types
148
+ of (side) information to impute missing feature values and
149
+ address the main recommendation challenge.
150
+ We conduct extensive experiments on three public
151
+ datasets and show that MIIR significantly outperforms
152
+ state-of-the-art sequential recommendation baselines. We
153
+ also confirm that (i) imputing missing side information
154
+ and (ii) DFSA both help to improve the performance of
155
+ sequential recommendation.
156
+ The main contributions of this work are as follows:
157
+ • We propose to unify the missing side information imputa-
158
+ tion task and the sequential recommendation task through
159
+ missing information imputation (MII). To the best of our
160
+ knowledge, this is the first work of its kind in sequential
161
+ recommendation.
162
+ • We present a novel sequential recommendation model,
163
+ missing information imputation recommender (MIIR),
164
+ that employs MII to provide the signal for simultaneously
165
+ imputing the missing item side information and predict-
166
+ ing the next item and dense fusion self-attention (DFSA)
167
+ to fuse various information.
168
+ • We conduct extensive experiments to verify the effective-
169
+ ness of MII, MIIR, and DFSA in sequential recommenda-
170
+ tion.
171
+ 2
172
+ RELATED WORK
173
+ 2.1
174
+ Sequential recommendation with side information
175
+ Side information fusion has been widely used in sequential
176
+ recommendation because it can help to capture transition
177
+ patterns among items. We classify existing work into work
178
+ that uses self-attention and work that does not.
179
+ As to work that does not use self-attention, Hidasi et al.
180
+ [9] employ parallel RNNs to extract the information from
181
+ ID sequences of item IDs and sequences of features; they
182
+ then examine different ways of combining the outputs of
183
+ the RNNs. Zhou et al. [23] propose self-supervised tasks
184
+ to maximize the mutual information between an item and
185
+ its attributes or between a sequence of item IDs and the
186
+ sequence of their attributes. Yuan et al. [24] construct a het-
187
+ erogeneous graph to aggregate different types of categorical
188
+ attributes, then aggregate the representations of attribute
189
+ types to get item representations.
190
+ Inspired by the success of self-attention mechanisms
191
+ [25, 26, 27], some work uses self-attention to fuse items and
192
+ side information. Zhang et al. [10] first use a vanilla atten-
193
+ tion mechanism to fuse different types of side information
194
+ on each item, and then use two branches of self-attention
195
+ blocks to model transition patterns between IDs and side
196
+ information; they then concatenate the hidden states of the
197
+ two blocks for item recommendation. Liu et al. [28] pro-
198
+ pose a non-invasive self-attention mechanism that uses pure
199
+ item ID representations as values and representations that
200
+ integrate side information as queries and keys to calculate
201
+ the attention. Xie et al. [15] decouple the non-invasive self-
202
+ attention of different types of side information to get fused
203
+ attention matrices for items.
204
+ Although many methods have been proposed for se-
205
+ quential recommendation with side information, they (i) ne-
206
+ glect the missing information problem, and use fixed special
207
+ values to fill missing feature fields, which might harm the
208
+ performance, and (ii) hardly explore the relation between
209
+ an item and the side information of other items in the same
210
+ sequence. These are aspects that we contribute on top of
211
+ prior work.
212
+
213
+ Title
214
+ Title
215
+ Truth
216
+ Title
217
+ Item
218
+ Item
219
+ Item
220
+ Item
221
+ Category
222
+ Category
223
+ CategoryTitle
224
+ Title
225
+ [PAD]
226
+ Title
227
+ Predict
228
+ Item
229
+ Item
230
+ Item
231
+ Item
232
+ Category
233
+ [PAD]
234
+ Category
235
+ CategoryTitle
236
+ Title
237
+ [Impute]
238
+ Title
239
+ Predict
240
+ Item
241
+ Item
242
+ Item
243
+ Item
244
+ Category
245
+ Category
246
+ [Impute]
247
+ Category3
248
+ 2.2
249
+ Missing side information in recommendation
250
+ In real-world applications, the side information of users and
251
+ items may be incomplete or missing, which may hurt the
252
+ performance of recommendation models that rely on side
253
+ information.
254
+ The traditional way to solve the problem of missing side
255
+ information is to fill the missing feature fields with heuristic
256
+ values [29, 30, 18], such as the most frequent feature values,
257
+ average values, randomized values, the value unknown,
258
+ or padding. As some studies have reported, these special
259
+ values are independent of the context, and using them
260
+ may lead to biased parameter estimation and prediction
261
+ [31, 32]. Another way to deal with missing feature fields
262
+ is to impute their missing values. Early approaches use
263
+ KNN-based methods [33] or auto-encoders (AEs) [34, 35]
264
+ to predict the missing data. Wang et al. [19] propose an AE-
265
+ based model with modality dropout, which randomly drops
266
+ representations of user or item information of different
267
+ modalities in hidden states and reconstructs them by an AE.
268
+ Cao et al. [36] present a translation-based recommendation
269
+ model that models preferences as translations from users to
270
+ items, and jointly trains it with a knowledge graph (KG)
271
+ completion model that predicts the missing relations in the
272
+ KG for incorporating knowledge into the recommendation
273
+ model. Instead of imputing the missing side information,
274
+ Shi et al. [18] propose an adaptive feature sampling strategy,
275
+ which employs layer-wise relevance propagation [37] to
276
+ calculate the importance of different features and samples
277
+ features to make the model more robust against unknown
278
+ features. Wu et al. [20] propose a GCN-based model to
279
+ jointly predict users’ preferences to items and predict the
280
+ missing attribute values of users or items.
281
+ What we add on top of prior work on missing infor-
282
+ mation in recommendation is that we focus on missing
283
+ information in the context of sequential recommendation.
284
+ 3
285
+ METHOD
286
+ 3.1
287
+ Overview
288
+ Before going into details of the proposed MII task and
289
+ MIIR model, we introduce notation used in this paper.
290
+ We denote the item set as I = {i1, . . . , iNi}, where Ni
291
+ is the number of items and each item ID ik ∈ RNi is
292
+ represented as a one-hot vector. In addition to IDs, items
293
+ have other feature fields corresponding to their side infor-
294
+ mation. In this work, we consider categorical feature fields,
295
+ including category and brand, and textual feature fields,
296
+ including title and description. We denote the category set
297
+ as C = {c1, . . . , cNc}, where Nc is the number of categories
298
+ and each category ck ∈ RNc is a one-hot vector. Similarly,
299
+ we denote the brand set as B = {b1, . . . , bNb}, where Nb
300
+ is the number of brands and each brand bk ∈ RNb. For
301
+ titles and descriptions of items, we employ BERT [38] to
302
+ encode them into fixed-length vectors of size 768. We denote
303
+ all titles and all descriptions as T = {t1, . . . , tNi} and
304
+ D = {d1, . . . , dNi}, respectively, where tk and dk ∈ R768.
305
+ We use S = [s1, . . . , sn] to denote a sequence with n items,
306
+ where sk = [si
307
+ k, sc
308
+ k, sb
309
+ k, st
310
+ k, sd
311
+ k] is the sequence of features
312
+ fields of the k-th item, si
313
+ k ∈ I, sc
314
+ k ⊆ C, sb
315
+ k ∈ B, st
316
+ k ∈ T,
317
+ and sd
318
+ k ∈ D. As an item may have multiple categories,
319
+ (a) Sequential recommendation task.
320
+ (b) Missing information imputation task.
321
+ Fig. 2. Comparing the sequential recommendation task and the missing
322
+ information imputation task. (Same visual conventions as in Fig. 1.)
323
+ we let sc
324
+ k be a subset of C, which can be represented as
325
+ a multi-hot vector sc
326
+ k ∈ RNc. For missing item IDs, cate-
327
+ gories and brands, we have special one-hot vectors denoted
328
+ as imiss ∈ I, cmiss ∈ C and bmiss ∈ B, respectively.
329
+ For missing titles and descriptions, we use the vector of
330
+ “[CLS][SEP]” encoded by BERT to represent them, which
331
+ are denoted as tmiss ∈ T and dmiss ∈ D, respectively. These
332
+ missing representations will be used in both MIIR and the
333
+ baselines. It is worth noting that other feature fields can be
334
+ formalized and modeled in a similar way.
335
+ The
336
+ missing
337
+ information
338
+ imputation
339
+ task
340
+ is
341
+ to
342
+ im-
343
+ pute
344
+ the
345
+ values
346
+ of
347
+ the
348
+ missing
349
+ feature
350
+ fields
351
+ in S.
352
+ The sequential recommendation task is to predict the next
353
+ item sn+1 for S. By appending a new item sn+1
354
+ =
355
+ [imiss, cmiss, bmiss, tmiss, dmiss] to the end of S and im-
356
+ puting the imiss of sn+1, we can formulate the next item
357
+ prediction task as a special case of missing information
358
+ imputation task. In Fig. 2, we compare the sequential rec-
359
+ ommendation task and the missing information imputation
360
+ task. In the sequential recommendation task, the next item is
361
+ not considered as a missing data. In the missing information
362
+ imputation task, the next item is simply a missing feature
363
+ field. A model for the missing information imputation task
364
+ that follows a unified way to impute both the next item
365
+ and the other missing side information can be used for
366
+ sequential recommendation.
367
+ To unify the missing side information imputation and
368
+ next item recommendation tasks, we propose a sequential
369
+ recommendation model called missing information imputa-
370
+ tion recommender (MIIR). As we illustrate in Fig. 3, MIIR
371
+ consists of three main components: (i) an embedding layer,
372
+ (ii) a dense fusion self-attention (DFSA) mechanims, and
373
+
374
+ Fusion
375
+ T
376
+ ID
377
+ Side information808
378
+ Missing information imputation
379
+ 0808
380
+ 8.084
381
+ Fig. 3. Architecture of the missing information imputation recommender
382
+ (MIIR). MIIR takes a sequence of randomly masked feature fields as
383
+ input. It transforms the input sequence into embeddings using the em-
384
+ bedding layer. Then it employs a dense fusion self-attention mechanism
385
+ to fuse information in the sequence. Finally, MIIR uses an output layer
386
+ to reconstruct the input sequence and calculate the MII loss on masked
387
+ feature fields. (Same visual conventions as in Fig. 1.)
388
+ (iii) an output layer. First, the embedding layer translates
389
+ the input sequence into a series of embeddings. Then, the
390
+ DFSA mechanism employs several transformer [17] layers
391
+ to model the relation between any pair of feature fields in
392
+ the sequence and fuse side information into the model for
393
+ both imputation and recommendation. Finally, the output
394
+ layer imputes the missing feature values including item IDs
395
+ in the sequence based on the output of DFSA. Next, we will
396
+ introduce the details of these main components.
397
+ 3.2
398
+ Embedding layer
399
+ The embedding layer projects all item feature fields in the
400
+ input sequence into low-dimensional dense vectors with a
401
+ unified length.
402
+ For the k-th item sk = [si
403
+ k, sc
404
+ k, sb
405
+ k, st
406
+ k, sd
407
+ k] in the given
408
+ sequence S, the embedding layer uses different ways to
409
+ translate different feature fields. For the high-dimensional
410
+ sparse vectors of si
411
+ k, sc
412
+ k and sb
413
+ k, we follow Eq. 1 to get the
414
+ item embedding ei
415
+ k ∈ Re, the category embedding ec
416
+ k ∈ Re,
417
+ and the brand embedding eb
418
+ k ∈ Re:
419
+ ei
420
+ k = Eisi
421
+ k,
422
+ ec
423
+ k = Ecsc
424
+ k,
425
+ eb
426
+ k = Ebsb
427
+ k,
428
+ (1)
429
+ where Ei ∈ Re×Ni is the item embedding matrix, Ec ∈
430
+ Re×Nc is the category embedding matrix, Eb ∈ Re×Nb is
431
+ the brand embedding matrix, and e is the embedding size.
432
+ For the high-dimensional dense vectors of st
433
+ k and sd
434
+ k, we
435
+ project them into low-dimensional embeddings, i.e., the title
436
+ embedding et
437
+ k ∈ Re and the description embedding ed
438
+ k ∈
439
+ Re, respectively, using Eq. 2:
440
+ et
441
+ k = Etst
442
+ k,
443
+ ed
444
+ k = Edsd
445
+ k,
446
+ (2)
447
+ where Et ∈ Re×768 and Ed ∈ Re×768 are the projection
448
+ matrices.
449
+ In order to distinguish different types of feature fields
450
+ in the same item, we learn a field embedding for each
451
+ type of feature fields. We denote the field embeddings of
452
+ ID, category, brand, title and description as f i, f c, f b, f t
453
+ and f d ∈ Re, respectively. To distinguish different items
454
+ in different positions in the same sequence, we also inject
455
+ the position information into the model by learning position
456
+ embeddings, where the k-th position embedding is denoted
457
+ as pk ∈ Re. Finally, we add each field embedding to the
458
+ corresponding item or feature embedding of sk, and add pk
459
+ to all embeddings of sk, as shown in Eq. 3:
460
+ Hk =
461
+
462
+ �����
463
+ hi
464
+ k
465
+ hc
466
+ k
467
+ hb
468
+ k
469
+ ht
470
+ k
471
+ hd
472
+ k
473
+
474
+ �����
475
+ =
476
+
477
+ �����
478
+ ei
479
+ k + f i + pk
480
+ ec
481
+ k + f c + pk
482
+ eb
483
+ k + f b + pk
484
+ et
485
+ k + f t + pk
486
+ ed
487
+ k + f d + pk
488
+
489
+ �����
490
+ ,
491
+ (3)
492
+ where hi
493
+ k, hc
494
+ k, hb
495
+ k, ht
496
+ k, hd
497
+ k ∈ Re, and Hk ∈ R5×e is the hidden
498
+ state of sk that is the stack of all embeddings of its feature
499
+ fields in order.
500
+ 3.3
501
+ Dense fusion self-attention
502
+ The dense fusion self-attention (DFSA) mechanism follows a
503
+ unified way to impute missing feature fields, both item IDs
504
+ and side information. To exploit the information in a given
505
+ context for imputation, we need to model the relations be-
506
+ tween different feature fields and fuse the representations of
507
+ various feature fields. DFSA calculates the attention values
508
+ between any pair of feature fields and fuses the information
509
+ of other feature fields based on the attention value. By
510
+ calculating the attention value, DFSA captures all possible
511
+ (hence dense) pairwise relations between feature fields to
512
+ facilitate missing information imputation.
513
+ Specifically, we first stack the hidden states of all items
514
+ in S in order by Eq. 4:
515
+ H =
516
+
517
+ ����
518
+ H1
519
+ H2
520
+ ...
521
+ Hn
522
+
523
+ ���� ,
524
+ (4)
525
+ where H ∈ R5n×e is the hidden state matrix of S. Then,
526
+ DFSA employs a transformer with L layers to update H.
527
+ Each transformer layer Trm(·) is composed of two sub-
528
+ layers: (i) multi-head self-attention MH(·) and (ii) position–
529
+ wise feed-forward PFFN(·), as defined in Eq. 5:
530
+ Hl+1 = Trm(Hl) = LN( �Hl + Dropout(PFFN( �Hl)))
531
+ �Hl = LN(Hl + Dropout(MH(Hl)))
532
+ MH(Hl) = [head1; . . . ; headh]WH
533
+ headi = Attn(HlWQ
534
+ i , HlWK
535
+ i , HlWV
536
+ i )
537
+ Attn(Q, K, V) = softmax(QK⊤/√e + M)V
538
+ PFFN( �Hl) = GELU( �HlWF
539
+ 1 + bF
540
+ 1 )WF
541
+ 2 + bF
542
+ 2 ,
543
+ (5)
544
+
545
+ 808
546
+ Output sequence
547
+ Output layer
548
+
549
+ Dense fusion self-attention
550
+ Position embedding
551
+ +
552
+ Field embedding
553
+ +
554
+ Item/feature embedding
555
+ Embedding layer
556
+ 0808
557
+ 08
558
+ Randomly masked
559
+ Mask
560
+ Input sequence5
561
+ where LN is layer normalization [39], Dropout is dropout
562
+ [40], Attn is attention, GELU is a Gaussian error linear unit
563
+ activation [41], [. . . ; . . .] is the concatenation operation, h
564
+ is the number of heads, WH ∈ Re×e, WQ
565
+ i , WK
566
+ i , WV
567
+ i
568
+
569
+ Re×e/h, WF
570
+ 1 ∈ Re×4e, WF
571
+ 2 ∈ R4e×e, bF
572
+ 1 ∈ R4e and bF
573
+ 2 ∈
574
+ Re are trainable parameters, Hl and Hl+1 ∈ R5n×e are the
575
+ output hidden state matrices in the l-th layer and the (l+1)-
576
+ th layer, and H0 = H.
577
+ The matrix M ∈ R5n×5n in Eq. 5 is the attention mask
578
+ which is defined as:
579
+ Mj,y
580
+ i,x =
581
+ � 0,
582
+ allow to attend,
583
+ −∞,
584
+ prevent from attending,
585
+ (6)
586
+ where i and j ∈ {1, . . . , n}, x and y ∈ {i, c, b, t, d}, Mj,y
587
+ i,x ∈
588
+ M is the mask to control whether the feature field sy
589
+ j can
590
+ attend to the feature field sx
591
+ i . We set all Mj,y
592
+ i,x = 0,1 which
593
+ means we allow to attend between any pair of feature fields
594
+ in the sequence. Therefore, the DFSA can model relations
595
+ and fuse information between all possible pairs of feature
596
+ fields to facilitate both imputation and recommendation.
597
+ 3.4
598
+ Output layer
599
+ The output layer reconstructs the input feature fields based
600
+ on the output hidden states of DFSA. First, we split the final
601
+ output hidden state matrix HL of DFSA by Eq. 7:
602
+ HL = �E =
603
+
604
+ �����
605
+ �E1
606
+ �E2
607
+ ...
608
+ �En
609
+
610
+ �����
611
+ ,
612
+ where �Ek =
613
+
614
+ �����
615
+ ˆei
616
+ k
617
+ ˆec
618
+ k
619
+ ˆeb
620
+ k
621
+ ˆet
622
+ k
623
+ ˆed
624
+ k
625
+
626
+ �����
627
+ ,
628
+ (7)
629
+ and ˆei
630
+ k, ˆec
631
+ k, ˆeb
632
+ k, ˆet
633
+ k, ˆed
634
+ k ∈ Re. Similar to the embedding layer,
635
+ the output layer takes different ways to reconstruct different
636
+ types of feature fields. Specifically, for the categorical feature
637
+ fields, we calculate the probability distributions pi
638
+ k ∈ RNi,
639
+ pc
640
+ k ∈ RNc and pb
641
+ k ∈ RNb of the item ID, category and brand
642
+ of the k-th item sk as follows:
643
+ pi
644
+ k = softmax(Ei⊤ˆei
645
+ k)
646
+ pc
647
+ k = sigmoid(Ec⊤ˆec
648
+ k)
649
+ pb
650
+ k = softmax(Eb⊤ˆeb
651
+ k),
652
+ (8)
653
+ where Ei ∈ Re×Ni, Ec ∈ Re×Nc, Eb ∈ Re×Nb are the re-
654
+ used item embedding matrix, category embedding matrix
655
+ and brand embedding matrix in the embedding layer, re-
656
+ spectively. Note that we see the category prediction as a
657
+ series of binary classifications, because an item may contain
658
+ multiple categories. Then we get the reconstructed item ID
659
+ ˆsi
660
+ k ∈ RNi, category ˆsc
661
+ k ∈ RNc and brand ˆsb
662
+ k ∈ RNb based on
663
+ the probability distributions, as shown in Eq. 9:
664
+ ˆsi
665
+ k = argmax(pi
666
+ k)
667
+ ˆsc
668
+ k = 1(pc
669
+ k > 0.5)
670
+ ˆsb
671
+ k = argmax(pb
672
+ k),
673
+ (9)
674
+ where 1(α) is the indicator function that equals 1 if α is true
675
+ and 0 otherwise. Meanwhile, for the textual feature fields,
676
+ 1Here we neglect the padding items.
677
+ we follow Eq. 10 to get the reconstructed title ˆst
678
+ k ∈ R768 and
679
+ description ˆsd
680
+ k ∈ R768 directly:
681
+ ˆst
682
+ k = Otˆet
683
+ k,
684
+ ˆsd
685
+ k = Odˆed
686
+ k,
687
+ (10)
688
+ where Ot ∈ R768×e and Od ∈ R768×e are the projection
689
+ matrices.
690
+ 3.5
691
+ Missing information imputation loss
692
+ We train MIIR with MII. MII first randomly masks feature
693
+ fields in the sequence with probability p, i.e., replacing a
694
+ non-missing feature value with the corresponding missing
695
+ feature value imiss, cmiss, bmiss, tmiss or dmiss. For the
696
+ k-th item sk in the sequence S, we use mi
697
+ k, mc
698
+ k, mb
699
+ k, mt
700
+ k
701
+ and md
702
+ k ∈ {true, false} to denote whether its ID, category,
703
+ brand, title and description are masked. Then, MIIR learns
704
+ to recover the masked feature fields by MII and impute the
705
+ missing feature values based on the context.
706
+ Specifically, there are differences in the calculation of
707
+ the missing information imputation loss for different types
708
+ of feature fields. For the categorical feature fields (i.e., ID,
709
+ category and brand), our goal is to minimize the cross-
710
+ entropy loss:
711
+ Li
712
+ k = −1(mi
713
+ k)si
714
+ k
715
+ ⊤ log(pi
716
+ k)
717
+ Lc
718
+ k = −1(mc
719
+ k)(sc
720
+ k
721
+ ⊤ log(pc
722
+ k) + (1 − sc
723
+ k
724
+ ⊤) log(1 − pc
725
+ k))/Nc
726
+ Lb
727
+ k = −1(mb
728
+ k)sb
729
+ k
730
+ ⊤ log(pb
731
+ k),
732
+ (11)
733
+ where Li
734
+ k, Lc
735
+ k and Lb
736
+ k are the imputation loss for the item
737
+ ID, category and brand of sk, respectively. For the textual
738
+ feature fields (i.e., title and description), our goal is to
739
+ minimize the mean square error loss:
740
+ Lt
741
+ k = 1(mt
742
+ k)∥st
743
+ k − ˆst
744
+ k∥2
745
+ Ld
746
+ k = 1(md
747
+ k)∥sd
748
+ k − ˆsd
749
+ k∥2,
750
+ (12)
751
+ where Lt
752
+ k and Ld
753
+ k are the imputation loss for the title and
754
+ description of sk. The missing information imputation objective
755
+ of the entire model on S is shown in Eq. 13:
756
+ Lmii
757
+ S
758
+ = 1/n
759
+ n
760
+
761
+ k=1
762
+ Lmii
763
+ k
764
+ Lmii
765
+ k
766
+ = Li
767
+ k + Lc
768
+ k + Lb
769
+ k + Lt
770
+ k + Ld
771
+ k.
772
+ (13)
773
+ Note that since the item ID is one of the feature fields and
774
+ the next item prediction is a MII task, MIIR trained by MII
775
+ can directly be applied to sequential recommendation.
776
+ In experiments, we also consider to further fine-tune
777
+ MIIR or directly train MIIR with the masked item prediction
778
+ loss to make the model only focus on the item prediction
779
+ task. Specifically, we randomly mask some items with their
780
+ all feature fields in the given sequence, and then let MIIR
781
+ predict the masked item IDs only. The recommendation loss
782
+ (i.e., the masked item prediction loss) on S is defined as:
783
+ Lrec
784
+ S
785
+ = 1/n
786
+ n
787
+
788
+ k=1
789
+ Lrec
790
+ k
791
+ Lrec
792
+ k
793
+ = Li
794
+ k = −1(mi
795
+ k)si
796
+ k
797
+ ⊤log(pi
798
+ k),
799
+ (14)
800
+ where Lrec
801
+ k
802
+ is the recommendation loss for sk.
803
+
804
+ 6
805
+ TABLE 1
806
+ Summary of the datasets. The missing rate is the percentage of
807
+ missing feature fields in all feature fields. Especially, “Missing rate D” is
808
+ the missing rate on the dataset after discarding side information.
809
+ Dataset
810
+ Beauty Sports and Outdoors Toys and Games
811
+ #items
812
+ 121,291
813
+ 194,715
814
+ 164,978
815
+ #sequences
816
+ 52,374
817
+ 84,368
818
+ 58,314
819
+ Average length
820
+ 8.97
821
+ 8.50
822
+ 8.99
823
+ #categories
824
+ 656
825
+ 3,035
826
+ 957
827
+ #brands
828
+ 13,188
829
+ 14,163
830
+ 14,135
831
+ Missing rate
832
+ 12.54%
833
+ 20.11%
834
+ 11.20%
835
+ Missing rate D
836
+ 56.32%
837
+ 60.12%
838
+ 55.51%
839
+ 4
840
+ EXPERIMENTAL SETUP
841
+ 4.1
842
+ Research questions
843
+ In this paper, we seek to answer the following research
844
+ questions:
845
+ (RQ1) How does MIIR perform on the sequential recom-
846
+ mendation task compared to state-of-the-art meth-
847
+ ods?
848
+ (RQ2) What are the benefits of training MIIR with MII?
849
+ (RQ3) Does modeling the relation between any pair of
850
+ feature fields in item sequences help sequential rec-
851
+ ommendation?
852
+ (RQ4) How about the performance of MIIR on imputing
853
+ the missing side information?
854
+ (RQ5) What can we find about MIIR by the case study?
855
+ 4.2
856
+ Datasets
857
+ There are many public datasets for experimenting with
858
+ sequential recommendation; see [1]. However, we need
859
+ sequential recommendation datasets that come with side
860
+ information. We conduct experiments on three public
861
+ datasets: “Beauty”, “Sports and Outdoors” and “Toys and
862
+ Games” [42], as they have rich item side information, in-
863
+ cluding category, brand, title and description.
864
+ We follow common practices [10, 28] to process the
865
+ datasets. We sort each user’s records in chronological order
866
+ to construct an item sequence. We filter out item sequences
867
+ whose length is less than 5 to avoid noise from the cold-
868
+ start problem. For each item sequence, we use the last
869
+ item for test, the second last item for validation, and the
870
+ rest items for training. For each test or validation item,
871
+ we randomly sample 99 negative items for ranking. We
872
+ randomly discard side information of items with probability
873
+ 0.5. We use “Beauty D”, “Sports and Outdoors D” and
874
+ “Toys and Games D” to denote the datasets after discarding
875
+ side information. The statistics of the datasets after pre-
876
+ processing are summarized in Table 1.
877
+ 4.3
878
+ Baselines
879
+ We compare MIIR with the following recommendation
880
+ baselines, which can be grouped into (i) methods without
881
+ side information fusion, (ii) methods with side information
882
+ fusion and (iii) methods with missing feature values.
883
+ • Methods without side information fusion:
884
+ – GRU4Rec employs RNNs to capture sequential pat-
885
+ terns between items for sequential recommendation [2].
886
+ – SASRec uses the self-attention mechanism to model
887
+ item sequences for next item recommendations [6].
888
+ – BERT4Rec uses a bidirectional self-attention network
889
+ train-ed by a masked item prediction task for sequential
890
+ recommendation [8].
891
+ • Methods with side information fusion:
892
+ – PRNN employs parallel RNNs to process items and
893
+ their side information respectively, then combines the
894
+ hidden states of the RNNs for next item prediction [9].
895
+ – FDSA leverages two separate self-attention networks
896
+ to model the ID transition patterns and the feature
897
+ transition patterns respectively, then concatenates the
898
+ outputs of two networks for next item prediction [10].
899
+ – NOVA adopts a non-invasive self-attention mecha-
900
+ nism to leverage side information under the BERT4Rec
901
+ framework for sequential recommendation [28].
902
+ • Methods with missing feature values:
903
+ – RFS randomly samples feature fields to introduce more
904
+ missing information when training [18]. RFS is to make
905
+ the model more robust with missing feature values
906
+ instead of imputing missing feature fields. We combine
907
+ RFS with FDSA and NOVA, and denote the variants as
908
+ FDSA+RFS and NOVA+RFS.
909
+ – LRMM designs an auto-encoder with the modality
910
+ dropout to impute both user ratings and missing side
911
+ information for each item [19]. Note that LRMM is not
912
+ a sequential model. Furthermore, we use the imputed
913
+ missing side information by LRMM to train FDSA
914
+ and NOVA, and denote them as FDSA+LRMM and
915
+ NOVA+LRMM.
916
+ Other methods with side information fusion, such as [23,
917
+ 24], can only model categorical item side information; for
918
+ a fair comparison, we do not consider them as baselines.
919
+ In addition to the baselines listed above, we compare MIIR
920
+ against four variants, namely MIIR-F, MIIR-R, MIIR-M, and
921
+ Sparse-MIIR, to be defined in Section 5.1, 5.2 and 5.3.
922
+ We unify the sequential recommendation loss in all base-
923
+ lines, MIIR, and its variants to the cross-entropy loss, rather
924
+ than the pairwise loss [43], to avoid noise due to negative
925
+ sampling in the pairwise loss.
926
+ 4.4
927
+ Metrics and implementation
928
+ To evaluate the performance of sequential recommendation
929
+ methods, we employ two widely used evaluation metrics:
930
+ HR@k (hit ratio) and MRR (mean reciprocal rank) [1], where
931
+ k ∈ {5, 10}.
932
+ • HR measures the proportion of the sequences whose
933
+ ground-truth items are amongst the top ranked items in
934
+ all test sequences.
935
+ • MRR is the average of reciprocal ranks of the ground-
936
+ truth items.
937
+ For all baselines and our proposed model, we initialize
938
+ the trainable parameters randomly with the Xavier method
939
+ [44]. We train all methods with the Adam optimizer [45]
940
+ for 100 epochs, with a batch size of 128 and a learning rate
941
+ of 0.0001. We also apply gradient clipping [46] with range
942
+ [−5, 5] during training. According to the average length in
943
+ Table 1, we set the maximum sequence length to 20 for both
944
+ datasets for all methods.
945
+
946
+ 7
947
+ TABLE 2
948
+ Performance comparison of MIIR, variants, and the baselines on the
949
+ “Beauty” dataset. MIIR-F is a variant of MIIR that is fine-tuned using the
950
+ recommendation loss (see Section 5.1) and MIIR-R is a variant trained
951
+ using the recommendation loss only (see Section 5.2). The highest
952
+ overall performance is denoted in bold face. The highest performance
953
+ among the baselines is underlined. Impr. (%) is the performance gain
954
+ of MIIR against the best baseline method. ∗ indicates that an
955
+ improvement is statistically significant based on a two-sided paired
956
+ t-test with p < 0.05.
957
+ Beauty
958
+ Beauty D
959
+ Method
960
+ HR@5 HR@10 MRR HR@5 HR@10 MRR
961
+ GRU4Rec
962
+ 31.58
963
+ 42.50
964
+ 21.47
965
+ 31.58
966
+ 42.50
967
+ 21.47
968
+ SASRec
969
+ 32.83
970
+ 43.61
971
+ 23.16
972
+ 32.83
973
+ 43.61
974
+ 23.16
975
+ BERT4Rec
976
+ 33.22
977
+ 43.77
978
+ 23.58
979
+ 33.22
980
+ 43.77
981
+ 23.58
982
+ PRNN
983
+ 32.27
984
+ 42.70
985
+ 23.08
986
+ 31.80
987
+ 42.55
988
+ 22.23
989
+ FDSA
990
+ 35.22
991
+ 44.83
992
+ 25.39
993
+ 35.02
994
+ 44.68
995
+ 25.33
996
+ NOVA
997
+ 34.99
998
+ 45.07
999
+ 25.02
1000
+ 34.21
1001
+ 44.38
1002
+ 24.80
1003
+ FDSA+RFS
1004
+ 35.45
1005
+ 45.40
1006
+ 25.68
1007
+ 34.73
1008
+ 44.56
1009
+ 25.17
1010
+ NOVA+RFS
1011
+ 35.57
1012
+ 45.61
1013
+ 25.74
1014
+ 34.26
1015
+ 44.24
1016
+ 24.97
1017
+ LRMM
1018
+ 22.74
1019
+ 32.95
1020
+ 17.09
1021
+ 18.04
1022
+ 26.94
1023
+ 13.96
1024
+ FDSA+LRMM
1025
+ 35.35
1026
+ 45.15
1027
+ 25.62
1028
+ 35.10
1029
+ 44.73
1030
+ 25.52
1031
+ NOVA+LRMM
1032
+ 35.35
1033
+ 45.31
1034
+ 25.50
1035
+ 34.31
1036
+ 44.53
1037
+ 25.01
1038
+ MIIR
1039
+ 38.92
1040
+ 48.61
1041
+ 29.46
1042
+ 37.30
1043
+ 46.85
1044
+ 27.90
1045
+ MIIR-F
1046
+ 38.73
1047
+ 48.01
1048
+ 29.28
1049
+ 37.12
1050
+ 46.48
1051
+ 27.95
1052
+ MIIR-R
1053
+ 35.59
1054
+ 45.60
1055
+ 25.85
1056
+ 34.92
1057
+ 44.96
1058
+ 25.41
1059
+ Impr. (%)
1060
+ +3.35∗
1061
+ +3.00∗ +3.72∗ +2.20∗
1062
+ +2.12∗ +2.38∗
1063
+ TABLE 3
1064
+ Performance comparison of MIIR, variants, and the baselines on the
1065
+ “Sports and Outdoors” dataset.
1066
+ Sports and Outdoors
1067
+ Sports and Outdoors D
1068
+ Method
1069
+ HR@5 HR@10 MRR HR@5 HR@10
1070
+ MRR
1071
+ GRU4Rec
1072
+ 33.54
1073
+ 44.57
1074
+ 23.70
1075
+ 33.54
1076
+ 44.57
1077
+ 23.70
1078
+ SASRec
1079
+ 34.46
1080
+ 44.69
1081
+ 25.41
1082
+ 34.46
1083
+ 44.69
1084
+ 25.41
1085
+ BERT4Rec
1086
+ 35.12
1087
+ 45.24
1088
+ 26.11
1089
+ 35.12
1090
+ 45.24
1091
+ 26.11
1092
+ PRNN
1093
+ 37.41
1094
+ 47.25
1095
+ 27.23
1096
+ 36.01
1097
+ 46.18
1098
+ 26.12
1099
+ FDSA
1100
+ 39.16
1101
+ 48.08
1102
+ 29.27
1103
+ 37.30
1104
+ 46.74
1105
+ 27.20
1106
+ NOVA
1107
+ 37.95
1108
+ 47.54
1109
+ 28.08
1110
+ 36.15
1111
+ 45.96
1112
+ 26.90
1113
+ FDSA+RFS
1114
+ 38.18
1115
+ 47.18
1116
+ 28.31
1117
+ 37.17
1118
+ 46.65
1119
+ 27.01
1120
+ NOVA+RFS
1121
+ 37.63
1122
+ 47.41
1123
+ 27.33
1124
+ 35.86
1125
+ 45.52
1126
+ 26.84
1127
+ LRMM
1128
+ 28.65
1129
+ 41.36
1130
+ 20.50
1131
+ 19.79
1132
+ 30.34
1133
+ 15.13
1134
+ FDSA+LRMM
1135
+ 39.48
1136
+ 48.52
1137
+ 29.41
1138
+ 38.46
1139
+ 47.67
1140
+ 28.24
1141
+ NOVA+LRMM
1142
+ 38.18
1143
+ 47.76
1144
+ 28.30
1145
+ 37.28
1146
+ 46.78
1147
+ 27.32
1148
+ MIIR
1149
+ 43.66
1150
+ 52.63
1151
+ 32.66
1152
+ 40.55
1153
+ 49.80
1154
+ 30.04
1155
+ MIIR-F
1156
+ 42.66
1157
+ 51.49
1158
+ 32.01
1159
+ 39.98
1160
+ 48.98
1161
+ 29.86
1162
+ MIIR-R
1163
+ 40.01
1164
+ 49.70
1165
+ 29.40
1166
+ 38.07
1167
+ 47.82
1168
+ 27.77
1169
+ Impr. (%)
1170
+ +4.18∗
1171
+ +4.11∗ +3.25∗ +2.09∗
1172
+ +2.13∗
1173
+ +1.80∗
1174
+ All hyper-parameters of the baselines are set following
1175
+ the suggestions from the original papers. For the hyper-
1176
+ parameters of MIIR, we set the embedding size e to 64, the
1177
+ number of heads h to 4, and the number of layers L to 3. We
1178
+ set the dropout rate in DFSA and the mask probability p in
1179
+ MII to 0.5.
1180
+ 5
1181
+ EXPERIMENTAL RESULTS
1182
+ 5.1
1183
+ Overall performance
1184
+ To answer RQ1, we compare MIIR against the recommenda-
1185
+ tion models listed in Section 4.3 on the three datasets from
1186
+ TABLE 4
1187
+ Performance comparison of MIIR, variants, and the baselines on the
1188
+ “Toys and Games” dataset.
1189
+ Toys and Games
1190
+ Toys and Games D
1191
+ Method
1192
+ HR@5 HR@10 MRR HR@5 HR@10 MRR
1193
+ GRU4Rec
1194
+ 31.19
1195
+ 42.15
1196
+ 21.90
1197
+ 31.19
1198
+ 42.15
1199
+ 21.90
1200
+ SASRec
1201
+ 31.74
1202
+ 41.22
1203
+ 24.51
1204
+ 31.74
1205
+ 41.22
1206
+ 24.51
1207
+ BERT4Rec
1208
+ 31.45
1209
+ 41.22
1210
+ 23.25
1211
+ 31.45
1212
+ 41.22
1213
+ 23.25
1214
+ PRNN
1215
+ 34.00
1216
+ 44.25
1217
+ 24.32
1218
+ 32.71
1219
+ 42.98
1220
+ 23.23
1221
+ FDSA
1222
+ 34.44
1223
+ 43.89
1224
+ 26.03
1225
+ 32.70
1226
+ 42.33
1227
+ 24.69
1228
+ NOVA
1229
+ 34.50
1230
+ 44.34
1231
+ 25.86
1232
+ 34.00
1233
+ 43.74
1234
+ 25.06
1235
+ FDSA+RFS
1236
+ 34.81
1237
+ 44.62
1238
+ 26.30
1239
+ 33.41
1240
+ 43.64
1241
+ 25.22
1242
+ NOVA+RFS
1243
+ 35.33
1244
+ 45.29
1245
+ 26.27
1246
+ 33.39
1247
+ 43.26
1248
+ 24.73
1249
+ LRMM
1250
+ 29.88
1251
+ 40.96
1252
+ 21.87
1253
+ 19.85
1254
+ 29.83
1255
+ 15.15
1256
+ FDSA+LRMM
1257
+ 35.20
1258
+ 44.50
1259
+ 26.49
1260
+ 33.43
1261
+ 42.94
1262
+ 25.18
1263
+ NOVA+LRMM
1264
+ 35.65
1265
+ 45.50
1266
+ 26.61
1267
+ 34.51
1268
+ 44.47
1269
+ 25.51
1270
+ MIIR
1271
+ 40.11
1272
+ 49.80
1273
+ 29.64
1274
+ 39.01
1275
+ 48.89
1276
+ 28.74
1277
+ MIIR-F
1278
+ 39.00
1279
+ 47.76
1280
+ 29.57
1281
+ 38.25
1282
+ 47.45
1283
+ 28.75
1284
+ MIIR-R
1285
+ 35.80
1286
+ 45.37
1287
+ 26.00
1288
+ 34.69
1289
+ 44.30
1290
+ 24.81
1291
+ Impr. (%)
1292
+ +4.46∗
1293
+ +4.30∗ +3.03∗ +4.50∗
1294
+ +4.42∗ +3.23∗
1295
+ Section 4.2. Table 2, 3 and 4 list the evaluation results of
1296
+ all methods on each dataset, respectively. Based on these
1297
+ results, we have the following observations.
1298
+ First, on all datasets, MIIR performs significantly better
1299
+ than all baselines by a large margin despite the different
1300
+ missing rates, in terms of HR@5, HR@10 and MRR. MIIR
1301
+ has two major advantages: (i) MIIR trains the model using
1302
+ MII to enhance its ability to deal with missing side infor-
1303
+ mation in sequential recommendation (see detailed analysis
1304
+ in Section 5.2), and (ii) MIIR employs DFSA to improve the
1305
+ side information fusion in the model (see Section 5.3 for
1306
+ further analysis).
1307
+ Second, the item side information can help sequential
1308
+ recommender systems to more accurately model the tran-
1309
+ sition patterns among items. To verify this, we divide all
1310
+ methods into three groups: (i) GRU4Rec and PRNN that
1311
+ are based on RNNs; (ii) SASRec and FDSA that are based
1312
+ on left-to-right self-attention networks; and (iii) BERT4Rec,
1313
+ NOVA, and MIIR that employ bidirectional self-attention
1314
+ networks and the masked item prediction task. In each
1315
+ group, we see that methods that fuse side information
1316
+ outperform methods that only rely on item IDs, which
1317
+ illustrates that item side information does help.
1318
+ Third, the performance of PRNN, FDSA, NOVA and
1319
+ MIIR on the “Beauty”, “Sports and Outdoors” and “Toys
1320
+ and Games” datasets is higher than that on the discarded
1321
+ versions of the datasets (i.e., “Beauty D”, “Sports and Out-
1322
+ doors D” and “Toys and Games D”). We see two reasons for
1323
+ this difference: (i) the “Beauty D”, “Sports and Outdoors D”
1324
+ and “Toys and Games D” datasets discard some side infor-
1325
+ mation, so the available side information becomes less, and
1326
+ (ii) using the special values (i.e., imiss, cmiss, bmiss, tmiss
1327
+ and dmiss) to fill missing feature fields may be harmful to
1328
+ PRNN, FDSA and NOVA.
1329
+ Fourth, by comparing FDSA+RFS and NOVA+RFS with
1330
+ FDSA and NOVA, we see RFS cannot consistently improve
1331
+ the performance of FDSA and NOVA on all datasets. What’s
1332
+ worse, RFS would degrade the performance of FDSA and
1333
+ NOVA in some cases. Because RFS is to introduce more
1334
+
1335
+ 8
1336
+ TABLE 5
1337
+ Performance comparison of whether to exploit missing feature fields on
1338
+ the “Beauty” dataset. MIIR-M and MIIR-R-M are the variants of MIIR
1339
+ and MIIR-R respectively that mask missing feature fields in
1340
+ self-attention (see Section 5.2).
1341
+ Beauty
1342
+ Beauty D
1343
+ Method
1344
+ HR@5 HR@10 MRR HR@5 HR@10 MRR
1345
+ MIIR
1346
+ 38.92
1347
+ 48.61
1348
+ 29.46
1349
+ 37.30
1350
+ 46.85
1351
+ 27.90
1352
+ MIIR-R
1353
+ 35.59
1354
+ 45.60
1355
+ 25.85
1356
+ 34.92
1357
+ 44.96
1358
+ 25.41
1359
+ MIIR-M
1360
+ 39.16
1361
+ 48.67
1362
+ 29.45
1363
+ 37.12
1364
+ 46.58
1365
+ 27.83
1366
+ MIIR-R-M
1367
+ 36.40
1368
+ 46.31
1369
+ 27.11
1370
+ 34.71
1371
+ 45.01
1372
+ 25.42
1373
+ TABLE 6
1374
+ Performance comparison of whether to exploit missing feature fields on
1375
+ the “Sports and Outdoors” dataset.
1376
+ Sports and Outdoors
1377
+ Sports and Outdoors D
1378
+ Method
1379
+ HR@5 HR@10 MRR HR@5 HR@10
1380
+ MRR
1381
+ MIIR
1382
+ 43.66
1383
+ 52.63
1384
+ 32.66
1385
+ 40.55
1386
+ 49.80
1387
+ 30.04
1388
+ MIIR-R
1389
+ 40.01
1390
+ 49.70
1391
+ 29.40
1392
+ 38.07
1393
+ 47.82
1394
+ 27.77
1395
+ MIIR-M
1396
+ 43.04
1397
+ 52.12
1398
+ 32.16
1399
+ 40.36
1400
+ 49.65
1401
+ 29.81
1402
+ MIIR-R-M
1403
+ 39.71
1404
+ 48.98
1405
+ 29.15
1406
+ 38.33
1407
+ 48.10
1408
+ 28.12
1409
+ missing feature values into the model training instead of
1410
+ imputing missing feature fields, it cannot deal with the
1411
+ missing side information problem fundamentally.
1412
+ Fifth, the performance of LRMM is significantly worse
1413
+ than that of the sequential recommendation models with
1414
+ side
1415
+ information.
1416
+ LRMM
1417
+ even
1418
+ performs
1419
+ worse
1420
+ than
1421
+ GRU4Rec, SASRec and BERT4Rec that neglect the item
1422
+ side information. The main reason is that LRMM is not
1423
+ a sequential model, so it cannot exploit the relation and
1424
+ information in sequences to make recommendation and
1425
+ imputation, however it is essential in the sequential rec-
1426
+ ommendation task. We can also observe that FDSA+LRMM
1427
+ and NOVA+LRMM outperform FDSA and NOVA in exper-
1428
+ iments, which verifies the effectiveness of the imputation
1429
+ results of LRMM. It also proves imputing missing feature
1430
+ values is a better way to alleviate the missing side informa-
1431
+ tion problem than using fixed special values and RFS.
1432
+ Sixth, modeling sequential recommendation as missing
1433
+ information imputation is sufficient to train a recommen-
1434
+ dation model. To verify this, we conduct an experiment
1435
+ that first pre-trains MIIR using the missing information
1436
+ imputation loss (Eq. 13), and then fine-tunes it using the
1437
+ recommendation loss (Eq. 14). We use MIIR-F to denote this
1438
+ variant of MIIR. In Table 2 we see that MIIR-F performs
1439
+ worse than MIIR in most cases. Fine-tuning MIIR-F with the
1440
+ recommendation loss might lead to overfitting, resulting in
1441
+ performance decreases. This result supports the conclusion
1442
+ that with MII we can unify the sequential recommendation
1443
+ task as a particular type of missing information imputation
1444
+ task to train MIIR together with the other imputation task
1445
+ for missing item side information.
1446
+ 5.2
1447
+ Benefits of MII
1448
+ To answer RQ2, we analyze how MIIR benefits from training
1449
+ with MII.
1450
+ TABLE 7
1451
+ Performance comparison of whether to exploit missing feature fields on
1452
+ the “Toys and Games” dataset.
1453
+ Toys and Games
1454
+ Toys and Games D
1455
+ Method
1456
+ HR@5 HR@10 MRR HR@5 HR@10 MRR
1457
+ MIIR
1458
+ 40.11
1459
+ 49.80
1460
+ 29.64
1461
+ 39.01
1462
+ 48.89
1463
+ 28.74
1464
+ MIIR-R
1465
+ 35.80
1466
+ 45.37
1467
+ 26.00
1468
+ 34.69
1469
+ 44.30
1470
+ 24.81
1471
+ MIIR-M
1472
+ 39.33
1473
+ 49.22
1474
+ 28.97
1475
+ 37.80
1476
+ 47.58
1477
+ 27.82
1478
+ MIIR-R-M
1479
+ 35.22
1480
+ 45.29
1481
+ 26.28
1482
+ 34.53
1483
+ 44.47
1484
+ 25.58
1485
+ In Table 2, 3 and 4, we report on results of a variant of
1486
+ MIIR that directly trains MIIR with the recommendation loss
1487
+ shown in Eq. 14. We write MIIR-R for this variant of MIIR
1488
+ without the supervised signal of MII. When we compare the
1489
+ performance of MIIR and MIIR-R, we see very substantial
1490
+ gaps. This confirms the effectiveness of training MIIR with
1491
+ MII, which accounts for the main part of the improvement
1492
+ of MIIR over other methods.
1493
+ To demonstrate that MIIR can mine useful information
1494
+ from missing feature fields by training with MII, we design
1495
+ a variant of MIIR called MIIR-M by masking missing feature
1496
+ fields. In MIIR-M, we revise the attention mask M used in
1497
+ Eq. 5, which is a null matrix in MIIR. The revision in M is
1498
+ defined as:
1499
+ Mj,y
1500
+ i,x =
1501
+ � −∞,
1502
+ if sx
1503
+ i or sy
1504
+ j ∈ {cmiss, bmiss, tmiss, dmiss},
1505
+ 0,
1506
+ otherwise,
1507
+ (15)
1508
+ where the condition of sx
1509
+ i or sy
1510
+ j ∈ {cmiss, bmiss, tmiss,
1511
+ dmiss} depends on the original input sequence instead
1512
+ of the sequence after randomly masking. The purpose of
1513
+ the variant is to prevent the model from attending to the
1514
+ missing feature fields about item side information in the
1515
+ sequence. On the one hand, MIIR-M cannot mine and fuse
1516
+ any information in missing feature fields for sequential
1517
+ recommendation. On the other hand, MIIR-M is unable
1518
+ to exploit the information in non-missing feature fields to
1519
+ impute the missing side information. Besides, we mask
1520
+ missing feature fields for MIIR-R to analyze how missing
1521
+ feature values affects the performance of MIIR without MII,
1522
+ denoted as MIIR-R-M.
1523
+ In Table 5, 6 and 7, we compare MIIR and MIIR-R with
1524
+ MIIR-M and MIIR-R-M, respectively. We can find that MIIR
1525
+ outperforms MIIR-M in most cases, which illustrates that
1526
+ MIIR can extract useful information from missing feature
1527
+ fields to improve the sequential recommendation perfor-
1528
+ mance. We can also observe that MIIR-R-M performs better
1529
+ than MIIR-R in some cases. This phenomenon indicates that
1530
+ using fixed special values for filling missing feature fields
1531
+ can suffer the model performance, therefore masking miss-
1532
+ ing feature fields may be a better way without imputation.
1533
+ On the “Beauty” dataset, MIIR only achieves comparable
1534
+ performance with MIIR-M, and MIIR-R also performs worse
1535
+ than MIIR-R-M. However, the performance gap from MIIR
1536
+ to MIIR-M is smaller than that from MIIR-R to MIIR-R-
1537
+ M, and we have similar observations on other datasets.
1538
+ It illustrates that imputing missing feature values has the
1539
+ superiority over masking them for alleviating missing side
1540
+ information problem.
1541
+
1542
+ 9
1543
+ TABLE 8
1544
+ Performance comparison of dense and sparse attention on the
1545
+ “Beauty” dataset. Sparse-MIIR and Sparse-MIIR-R are variants of MIIR
1546
+ and MIIR-R, respectively, in which DFSA is replaced by SFSA (see
1547
+ Section 5.3).
1548
+ Beauty
1549
+ Beauty D
1550
+ Method
1551
+ HR@5 HR@10 MRR HR@5 HR@10 MRR
1552
+ MIIR
1553
+ 38.92
1554
+ 48.61
1555
+ 29.46
1556
+ 37.30
1557
+ 46.85
1558
+ 27.90
1559
+ MIIR-R
1560
+ 35.59
1561
+ 45.60
1562
+ 25.85
1563
+ 34.92
1564
+ 44.96
1565
+ 25.41
1566
+ Sparse-MIIR
1567
+ 36.71
1568
+ 46.60
1569
+ 26.87
1570
+ 36.04
1571
+ 45.98
1572
+ 26.34
1573
+ Sparse-MIIR-R
1574
+ 34.95
1575
+ 45.02
1576
+ 25.35
1577
+ 34.61
1578
+ 44.84
1579
+ 25.19
1580
+ TABLE 9
1581
+ Performance comparison of dense and sparse attention on the “Sports
1582
+ and Outdoors” dataset.
1583
+ Sports and Outdoors
1584
+ Sports and Outdoors D
1585
+ Method
1586
+ HR@5 HR@10 MRR HR@5 HR@10
1587
+ MRR
1588
+ MIIR
1589
+ 43.66
1590
+ 52.63
1591
+ 32.66
1592
+ 40.55
1593
+ 49.80
1594
+ 30.04
1595
+ MIIR-R
1596
+ 40.01
1597
+ 49.70
1598
+ 29.40
1599
+ 38.07
1600
+ 47.82
1601
+ 27.77
1602
+ Sparse-MIIR
1603
+ 40.52
1604
+ 50.04
1605
+ 29.64
1606
+ 39.24
1607
+ 48.91
1608
+ 28.67
1609
+ Sparse-MIIR-R
1610
+ 38.61
1611
+ 48.29
1612
+ 28.25
1613
+ 37.56
1614
+ 47.72
1615
+ 27.21
1616
+ MIIR-M also outperforms all baselines on the three
1617
+ datasets with different missing rates. Training MIIR with
1618
+ MII helps MIIR to make use non-missing feature fields.
1619
+ Imputing the masked non-missing feature values requires
1620
+ the model to capture the relations between different feature
1621
+ fields, so MII guides MIIR to better fuse side information
1622
+ into the model for improving the sequential recommenda-
1623
+ tion performance.
1624
+ 5.3
1625
+ Effectiveness of DFSA
1626
+ To answer RQ3, we conduct an ablation study to analyze the
1627
+ effectiveness of DFSA in MIIR.
1628
+ We first compare MIIR-R, the variant of MIIR that is
1629
+ trained with recommendation loss only, with the baselines
1630
+ in Table 2, 3, 4. MIIR-R achieves better or comparable per-
1631
+ formance with the baselines on most evaluation metrics of
1632
+ all datasets, even without the help of MII. The main reason
1633
+ is that MIIR-R has dense fusion self-attention (DFSA) to
1634
+ better fuse information in the item sequence for improving
1635
+ sequential recommendation.
1636
+ In order to validate that it is important to model all pos-
1637
+ sible pairwise relations in an item sequence for sequential
1638
+ recommendation, we design another self-attention mecha-
1639
+ nism called sparse fusion self-attention (SFSA). SFSA modifies
1640
+ the attention mask M in Eq. 5 into:
1641
+ Mj,y
1642
+ i,x =
1643
+ � 0,
1644
+ if i == j or x == y,
1645
+ −∞,
1646
+ otherwise,
1647
+ (16)
1648
+ where the condition i == j or x == y means that SFSA
1649
+ only allows to attend between the pair of feature fields
1650
+ belonging to the same item or the same type. Therefore,
1651
+ SFSA only models the relation between different feature
1652
+ fields of the same item or the relation between the same type
1653
+ of feature fields of different items in the sequence. These
1654
+ relations are also modeled in some baselines, such as PRNN
1655
+ and FDSA.
1656
+ TABLE 10
1657
+ Performance comparison of dense and sparse attention on the “Toys
1658
+ and Games” dataset.
1659
+ Toys and Games
1660
+ Toys and Games D
1661
+ Method
1662
+ HR@5 HR@10 MRR HR@5 HR@10 MRR
1663
+ MIIR
1664
+ 40.11
1665
+ 49.80
1666
+ 29.64
1667
+ 39.01
1668
+ 48.89
1669
+ 28.74
1670
+ MIIR-R
1671
+ 35.80
1672
+ 45.37
1673
+ 26.00
1674
+ 34.69
1675
+ 44.30
1676
+ 24.81
1677
+ Sparse-MIIR
1678
+ 37.61
1679
+ 47.77
1680
+ 27.23
1681
+ 37.06
1682
+ 47.27
1683
+ 26.79
1684
+ Sparse-MIIR-R
1685
+ 35.58
1686
+ 45.66
1687
+ 25.80
1688
+ 34.46
1689
+ 44.54
1690
+ 24.53
1691
+ TABLE 11
1692
+ Performance comparison of LRMM and MIIR for the missing side
1693
+ information imputation on the “Beauty D”, “Sports and Outdoors D” and
1694
+ “Toys and Games D” datasets, where P: precision, R: recall, F1: F1
1695
+ score, ACC: accuracy, MSE: mean square error.
1696
+ Dataset
1697
+ Field
1698
+ Metric LRMM
1699
+ MIIR
1700
+ Beauty D
1701
+ Category
1702
+ P
1703
+ 70.15
1704
+ 79.64
1705
+ R
1706
+ 48.41
1707
+ 36.97
1708
+ F1
1709
+ 52.96
1710
+ 48.61
1711
+ Brand
1712
+ ACC
1713
+ 7.84
1714
+ 5.01
1715
+ Title
1716
+ MSE
1717
+ 0.0871
1718
+ 0.0514
1719
+ Description MSE
1720
+ 0.1454
1721
+ 0.0704
1722
+ Sports and
1723
+ Outdoors D
1724
+ Category
1725
+ P
1726
+ 57.02
1727
+ 74.97
1728
+ R
1729
+ 51.31
1730
+ 35.91
1731
+ F1
1732
+ 46.38
1733
+ 46.40
1734
+ Brand
1735
+ ACC
1736
+ 6.06
1737
+ 4.43
1738
+ Title
1739
+ MSE
1740
+ 0.0927
1741
+ 0.0534
1742
+ Description MSE
1743
+ 0.1474
1744
+ 0.0835
1745
+ Toys and
1746
+ Games D
1747
+ Category
1748
+ P
1749
+ 72.32
1750
+ 89.31
1751
+ R
1752
+ 51.08
1753
+ 42.31
1754
+ F1
1755
+ 54.11
1756
+ 55.50
1757
+ Brand
1758
+ ACC
1759
+ 18.61
1760
+ 14.49
1761
+ Title
1762
+ MSE
1763
+ 0.0858
1764
+ 0.0514
1765
+ Description MSE
1766
+ 0.1427
1767
+ 0.0777
1768
+ In Table 8, 9 and 10, we compare the performance of
1769
+ DFSA and SFSA as components of MIIR and MIIR-R. We
1770
+ write Sparse-MIIR and Sparse-MIIR-R for the variants of
1771
+ MIIR and MIIR-R, respectively, in which DFSA is replaced
1772
+ by SFSA. We can see that MIIR outperforms Sparse-MIIR
1773
+ on all datasets despite different missing rates. What’s more,
1774
+ MIIR-R outperforms Sparse-MIIR-R in most cases too. Mod-
1775
+ eling the relations between any pair of feature fields helps to
1776
+ make more effective use of item side information to improve
1777
+ sequential recommendation performance.
1778
+ By comparing MIIR with Sparse-MIIR, we also notice
1779
+ that the improvement by DFSA on the three datasets is
1780
+ higher than the improvement on the discarded versions of
1781
+ the datasets. We guess that DFSA may also model more
1782
+ noisy relations related to missing feature fields when the
1783
+ missing rate increases, which would make the performance
1784
+ degeneration.
1785
+ 5.4
1786
+ Imputation performance (RQ4)
1787
+ To answer RQ4, we compare LRMM and MIIR based on
1788
+ their imputation results for the discarded side information
1789
+ of all datasets.
1790
+ For the test sequences from the “Beauty D”, “Sports and
1791
+ Outdoors D” and “Toys and Games D” datasets, we can
1792
+ compare the imputed results with the ground-truth before
1793
+ discard. For different types of feature fields, we consider
1794
+
1795
+ 10
1796
+ Fig. 4. (a) and (b) are two sequences with their imputed categories from the “Beauty D” dataset, (c) and (d) are two sequences with their imputed
1797
+ brands from the “Toys and Games D” dataset, (e) is the sequence with its imputed categories and brands from the “Sports and Outdoors D” dataset.
1798
+ different metrics: (i) for category that is corresponding to
1799
+ the multi-class classification task, we calculate the preci-
1800
+ sion, recall and F1 score for evaluation which bigger is
1801
+ better; (ii) for brand that is corresponding to the one-class
1802
+ classification task, we calculate the accuracy for evaluation
1803
+ which bigger is better; (iii) for title and description that are
1804
+ both corresponding to the multi-variable regression task,
1805
+ we calculate the mean square error (averaged by the length
1806
+ of title/description vector) for evaluation which smaller is
1807
+ better.
1808
+ In Table 11, we list the evaluation results for compari-
1809
+ son. We can observe that MIIR achieves better imputation
1810
+ performance than LRMM on the precision of category and
1811
+ the MSE of title and description. Whereas, LRMM outper-
1812
+ forms MIIR on the recall of category and the accuracy of
1813
+ brand. Both LRMM and MIIR can infer some discarded side
1814
+ information, so they can alleviate the missing side infor-
1815
+ mation problem. Comparing to LRMM, MIIR can exploit
1816
+ more information from the sequence to impute the missing
1817
+ side information. However, MIIR may also impute some
1818
+ inaccurate results due to the over-dependence on the given
1819
+ context.
1820
+ 5.5
1821
+ Case Study (RQ5)
1822
+ Finally, to answer RQ5, we sample some test cases from
1823
+ datasets.
1824
+ As shown in Fig.4, we list some sequences with their
1825
+ imputed results. We can observe that MIIR can generate
1826
+ different feature values for missing feature fields according
1827
+ to different contexts (i.e., items and sequences), which is
1828
+ better than using fixed predefined values. What’s more,
1829
+ MIIR may infer the ground-truth missing value, including
1830
+ the side information of the target next item, to give the
1831
+ model with a more accurate guidance for recommendation.
1832
+ For example, MIIR imputes a part of the discarded cate-
1833
+ gories in the sequence (b) and the discarded brands in the
1834
+ sequence (d). We can also observe that the side information
1835
+ of items in the same sequence may be related, which is why
1836
+ MIIR can infer the ground-truth missing value in light of
1837
+ the given context. However, MIIR may be over-dependent
1838
+ on the information from the sequence, leading to impute
1839
+ inaccurate results. For instance, in the sequence (e), MIIR
1840
+ imputes the wrong categories and brand for the item 5401.
1841
+ Additionally, we visualize the attention weights from
1842
+ the missing item ID (i.e., the next item ID) to all feature
1843
+ fields in the given sequence in DFSA, as shown in Fig.5. We
1844
+ reshape the attention weights into a matrix with the shape
1845
+ of 5 × n, where 5 is the number of the feature field types
1846
+ and n is the sequence length. First, we can see that MIIR
1847
+ exploits the information from all feature fields of the given
1848
+ sequence to predict the next item, which emphasizes the
1849
+ necessity to model the relation between any pair of feature
1850
+ fields. Second, we can observe that different layers focus on
1851
+ different types of feature fields, where the first layer mainly
1852
+ attends to ID, and the third layer mainly attends to title
1853
+ and description. It illustrates MIIR gradually fuses different
1854
+ types of side information into the model by different layers.
1855
+ Because the information in textual feature fields is more
1856
+ difficult to extract, MIIR needs more deeper layers to fuse
1857
+ textual feature fields. Third, we can find different heads
1858
+ in the same layers have similar attention patterns, which
1859
+ means there may be some redundant parameters in MIIR.
1860
+
1861
+ (a)
1862
+ id: 22434
1863
+ id: 69550
1864
+ id: 52863
1865
+ id: 52866
1866
+ id: 52867
1867
+ id: 52868
1868
+ id: 65782
1869
+ id: 65790
1870
+ id: 83354
1871
+ Original
1872
+ Sequence
1873
+ category: 384, 487,
1874
+ category: 38, 271,
1875
+ category: 185, 384,
1876
+ category: 185, 384,
1877
+ category: 384, 487,
1878
+ category: 185, 384,
1879
+ category: 384, 487,
1880
+ category: 384, 487,
1881
+ category: 185, 384,
1882
+ 489
1883
+ 479
1884
+ 487
1885
+ 487
1886
+ 489
1887
+ 487
1888
+ 489
1889
+ 489
1890
+ 487
1891
+ id: 22434
1892
+ id: 69550
1893
+ id: 52863
1894
+ id: 52866
1895
+ id: 52867
1896
+ id: 52868
1897
+ id: 65782
1898
+ id: 65790
1899
+ id: 83354
1900
+ Imputed
1901
+ Sequence
1902
+ category: 384, 487,
1903
+ category: 185, 384,
1904
+ category: 185, 384,
1905
+ category: 185, 384,
1906
+ category: 384, 487,
1907
+ category: 384, 487,
1908
+ category: 185, 384,
1909
+ category: 384, 487
1910
+ category: 384, 487
1911
+ 489
1912
+ 487
1913
+ 487
1914
+ 487
1915
+ 489
1916
+ 489
1917
+ 487
1918
+ (b)
1919
+ id: 19904
1920
+ id: 101600
1921
+ id: 53133
1922
+ id: 25434
1923
+ id: 89249
1924
+ id: 44195
1925
+ id: 25437
1926
+ id: 103752
1927
+ Original
1928
+ id: 89561
1929
+ Sequence
1930
+ category: 114, 454,
1931
+ category: 108, 114,
1932
+ category: 114, 367,
1933
+ category: 114, 369,
1934
+ category: 114, 367,
1935
+ category: 43, 114,
1936
+ category: 41, 47, 114,
1937
+ category: 43, 114
1938
+ category: 15, 26, 56
1939
+ 558
1940
+ 493, 598
1941
+ 598
1942
+ 535, 598
1943
+ 598
1944
+ 267
1945
+ 369, 598
1946
+ id: 19904
1947
+ id: 101600
1948
+ id: 53133
1949
+ id: 25434
1950
+ id: 44195
1951
+ id: 25437
1952
+ id: 89249
1953
+ id: 89561
1954
+ id: 103752
1955
+ Imputed
1956
+ Sequence
1957
+ category: 114, 454,
1958
+ category: 108, 114,
1959
+ category: 43, 114,
1960
+ category: 43, 114
1961
+ category: 114, 598
1962
+ category: 114, 598
1963
+ category: 114, 598
1964
+ category: 15, 26, 56
1965
+ category: 114, 598
1966
+ 558
1967
+ 493, 598
1968
+ 267
1969
+ (c)
1970
+ id: 6177
1971
+ id: 96866
1972
+ id: 108189
1973
+ id: 117401
1974
+ id: 117402
1975
+ id: 134383
1976
+ id: 134384
1977
+ id: 157885
1978
+ Original
1979
+ Sequence
1980
+ brand: <missing>
1981
+ brand: 3614
1982
+ brand: 3614
1983
+ brand: 3614
1984
+ brand: 3614
1985
+ brand: <missing>
1986
+ brand: 7166
1987
+ brand: 3614
1988
+ id: 6177
1989
+ id: 96866
1990
+ id: 108189
1991
+ id: 117401
1992
+ id: 117402
1993
+ id: 134383
1994
+ id: 134384
1995
+ id: 157885
1996
+ Imputed
1997
+ Sequence
1998
+ brand: 3614
1999
+ brand: 3614
2000
+ brand: 3614
2001
+ brand: 3614
2002
+ brand: 3614
2003
+ brand: 3614
2004
+ brand: 7166
2005
+ brand: 3614
2006
+ (d)
2007
+ id: 27109
2008
+ id: 4139
2009
+ L096 :p!
2010
+ id: 63016
2011
+ id: 80413
2012
+ id: 16994
2013
+ id: 83174
2014
+ Original
2015
+ id: 23768
2016
+ Sequence
2017
+ brand: 11595
2018
+ brand: <missing>
2019
+ brand: 11595
2020
+ brand: 6844
2021
+ brand: 12538
2022
+ brand: 2308
2023
+ brand: 11595
2024
+ brand: 5472
2025
+ id: 27109
2026
+ id: 4139
2027
+ id: 9607
2028
+ id: 63016
2029
+ id: 80413
2030
+ id: 16994
2031
+ id: 83174
2032
+ id: 23768
2033
+ Imputed
2034
+ Sequence
2035
+ brand: 11595
2036
+ brand: 11595
2037
+ brand: 5472
2038
+ brand: 11595
2039
+ brand: 6844
2040
+ brand: 12538
2041
+ brand: 2308
2042
+ brand: 11595
2043
+ (e)
2044
+ id: 55944
2045
+ id: 1956
2046
+ id: 5401
2047
+ id: 22000
2048
+ id: 36936
2049
+ id: 173423
2050
+ id: 25085
2051
+ id: 101607
2052
+ Original
2053
+ Sequence
2054
+ category: 952, 1286,
2055
+ category: 632, 1562,
2056
+ category: 952, 1286,
2057
+ category: 952, 1286,
2058
+ category: 252, 1286,
2059
+ category: 296, 1286,
2060
+ category: 952, 1286,
2061
+ category: 2078, 2299
2062
+ Discard
2063
+ 2191, 2479
2064
+ 2533
2065
+ 2191, 2479
2066
+ 2191, 2479
2067
+ 2191, 2479
2068
+ 2230, 2479
2069
+ 2233, 2479
2070
+ brand: 4696
2071
+ brand: 4087
2072
+ brand: 6636
2073
+ brand: 4087
2074
+ brand: 4087
2075
+ brand: <missing>
2076
+ brand: 9545
2077
+ brand: 10778
2078
+ Impute
2079
+ id: 55944
2080
+ id: 1956
2081
+ id: 5401
2082
+ id: 22000
2083
+ id: 36936
2084
+ id: 173423
2085
+ id: 25085
2086
+ id: 101607
2087
+ Imputed
2088
+ Sequence
2089
+ category: 952, 1286,
2090
+ category: 952, 1286,
2091
+ category: 952, 1286,
2092
+ category: 952, 1286,
2093
+ Target Item
2094
+ category: 2078, 2299
2095
+ category: 1286, 2479
2096
+ category: 1286, 2479
2097
+ category: 1286, 2479
2098
+ 2191, 2479
2099
+ 2191, 2479
2100
+ 2191, 2479
2101
+ 2191, 2479
2102
+ brand: 4696
2103
+ brand: 4087
2104
+ brand: 4087
2105
+ brand: 4087
2106
+ brand: 4087
2107
+ brand: 4087
2108
+ brand: 9545
2109
+ brand: 408711
2110
+ (a) A sequence from the “Beauty D” dataset.
2111
+ (b) A sequence from the “Sports and Outdoors D” dataset.
2112
+ (c) A sequence from the “Toys and Games D” dataset.
2113
+ Fig. 5. Visualization for the attention weights from the missing item ID field to all feature fields of all heads and layers in MIIR on three sequences
2114
+ from different datasets.
2115
+ 6
2116
+ CONCLUSION
2117
+ We have studied the missing side information problem in
2118
+ sequential recommendation. We have proposed the missing
2119
+ information imputation (MII) task to unify the missing side
2120
+ information imputation task and the sequential recommen-
2121
+ dation task. We have presented a novel sequential recom-
2122
+ mendation model named missing information imputation
2123
+ recommender (MIIR) to simultaneously impute missing fea-
2124
+ ture values and predict the next item for a given sequence
2125
+ of items. We have proposed a dense fusion self-attention
2126
+ (DFSA) mechanism to model different relations in the item
2127
+ sequence and to fuse side information.
2128
+ Based on experiments and analyses on three datasets
2129
+ with different settings of the missing rates we have found
2130
+ that MIIR outperforms state-of-the-art methods for sequen-
2131
+ tial recommendation with side information. We have ver-
2132
+ ified that MIIR can identify useful side information from
2133
+ missing feature fields by training with the MII task, and
2134
+ that the DFSA mechanism improves the recommendation
2135
+ effectiveness of MIIR.
2136
+ As to broader implications of our work, we offer a new
2137
+ perspective by revealing a correlation between missing side
2138
+ information imputation and the sequential recommendation
2139
+ task. They both concern the prediction of missing infor-
2140
+ mation. The perspective operationalized with MIIR can be
2141
+ adopted as a foundational paradigm. Other prediction tasks
2142
+ related to recommendation, such as rating prediction, user
2143
+ profile prediction, and next basket recommendation can also
2144
+ be formulated as a MII task.
2145
+ Limitations of our work include the following: (i) since
2146
+ DFSA treats side information as part of the sequence (e.g.,
2147
+ in our case, the actual sequence length is 5x the number of
2148
+ items) and models all possible pairwise relations in an item
2149
+ sequence, it is computationally costly and not easy to scale
2150
+ to long sequences; and (ii) we have not optimized the MII
2151
+ losses on different types of feature fields in MIIR for the
2152
+
2153
+ Layer 3 Head 2
2154
+ category
2155
+ category
2156
+ category
2157
+ brand
2158
+ brand
2159
+ branc
2160
+ itle
2161
+ title
2162
+ description
2163
+ description
2164
+ description
2165
+ Layer 1 Head 3
2166
+ Layer 1 Head 4
2167
+ Layer 2 Head 3
2168
+ Layer 2 Head 4
2169
+ Layer 3 Head 3
2170
+ Layer 3 Head 4
2171
+ category
2172
+ category
2173
+ category
2174
+ brand
2175
+ branc
2176
+ brand
2177
+ title
2178
+ itle
2179
+ title
2180
+ description
2181
+ description
2182
+ description
2183
+ 1
2184
+ 0.000 0.009
2185
+ 0.090
2186
+ 0.000
2187
+ 0.0900.100
2188
+ 0.000
2189
+ 0.190
2190
+ 0.081
2191
+ 0.010Layer 1 Head 1
2192
+ Layer 1 Head 2
2193
+ Layer 2 Head 1
2194
+ Layer 2 Head 2
2195
+ category
2196
+ category
2197
+ category
2198
+ brand
2199
+ brand
2200
+ brand
2201
+ title
2202
+ title
2203
+ description
2204
+ description
2205
+ description
2206
+ Layer 2 Head 4
2207
+ Layer 1 Head 4
2208
+ Layer 2 Head 3
2209
+ Layer 3 Head 3
2210
+ Layer 3 Head 4
2211
+ d
2212
+ category
2213
+ category
2214
+ category
2215
+ brand
2216
+ brand
2217
+ brand
2218
+ title
2219
+ title
2220
+ description
2221
+ description
2222
+ description
2223
+ 97899800 1469157567833 540810595m1557
2224
+ 0.190
2225
+ 0.000
2226
+ 0.000ayer 1 Head 2
2227
+ Layer 2 Head 2
2228
+ category
2229
+ category
2230
+ category
2231
+ brand
2232
+ bran
2233
+ branc
2234
+ title
2235
+ title
2236
+ title
2237
+ description
2238
+ description
2239
+ description
2240
+ ayer 1 Head 4
2241
+ Layer 2 Head 3
2242
+ Layer 2 Head 4
2243
+ Layer 3 Head 3
2244
+ Layer 3 Head 4
2245
+ category
2246
+ category
2247
+ category
2248
+ brand
2249
+ brand
2250
+ brand
2251
+ title
2252
+ title
2253
+ description
2254
+ description
2255
+ description
2256
+ 1699 83004 90253151526203262038 5850mis5716999 3004902531515220322038 5850m557
2257
+ 16999 300402531515220322038850m5716999 3009025315152203620385850mi57
2258
+ 16999 8300490253151526203262038 5850m55716999 3004902531515262032620385850ms57
2259
+ +o-0.100
2260
+ 0.000
2261
+ 0.084
2262
+ 0.000
2263
+ 0.011
2264
+ 0.05512
2265
+ recommendation task.
2266
+ We aim to further improve MIIR in different directions.
2267
+ We will assess the ability of the linear transformer [47, 48]
2268
+ to reduce the computational costs of DFSA and design a
2269
+ mechanism to filter out useless relations at an early stage.
2270
+ We also plan to design a tailored loss for MIIR by building
2271
+ on recent loss weighting methods [49, 50].
2272
+ REPRODUCIBILITY
2273
+ To facilitate reproducibility of the results reported in this
2274
+ paper, the code and data used in experiments are available
2275
+ at https://github.com/TempSDU/MIIR.
2276
+ REFERENCES
2277
+ [1]
2278
+ H. Fang, D. Zhang, Y. Shu, and G. Guo, “Deep learning for
2279
+ sequential recommendation: Algorithms, in��uential factors, and
2280
+ evaluations,” ACM Transactions on Information Systems, vol. 39,
2281
+ no. 1, pp. 1–42, 2020.
2282
+ [2]
2283
+ B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk, “Session-
2284
+ based recommendations with recurrent neural networks,” in In-
2285
+ ternational Conference on Learning Representations, 2016.
2286
+ [3]
2287
+ J. Li, P. Ren, Z. Chen, Z. Ren, T. Lian, and J. Ma, “Neural attentive
2288
+ session-based recommendation,” in Conference on Information and
2289
+ Knowledge Management, 2017, pp. 1419–1428.
2290
+ [4]
2291
+ B. Hidasi and A. Karatzoglou, “Recurrent neural networks with
2292
+ top-k gains for session-based recommendations,” in Conference on
2293
+ Information and Knowledge Management, 2018, pp. 843–852.
2294
+ [5]
2295
+ J. Tang and K. Wang, “Personalized top-n sequential recommen-
2296
+ dation via convolutional sequence embedding,” in International
2297
+ Conference on Web Search and Data Mining, 2018, pp. 565–573.
2298
+ [6]
2299
+ W.-C. Kang and J. McAuley, “Self-attentive sequential recommen-
2300
+ dation,” in IEEE International Conference on Data Mining, 2018, pp.
2301
+ 197–206.
2302
+ [7]
2303
+ S. Wu, Y. Tang, Y. Zhu, L. Wang, X. Xie, and T. Tan, “Session-based
2304
+ recommendation with graph neural networks,” in Proceedings of
2305
+ the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, 2019,
2306
+ pp. 346–353.
2307
+ [8]
2308
+ F. Sun, J. Liu, J. Wu, C. Pei, X. Lin, W. Ou, and P. Jiang,
2309
+ “Bert4rec: Sequential recommendation with bidirectional encoder
2310
+ representations from transformer,” in Conference on Information and
2311
+ Knowledge Management, 2019, pp. 1441–1450.
2312
+ [9]
2313
+ B. Hidasi, M. Quadrana, A. Karatzoglou, and D. Tikk, “Parallel re-
2314
+ current neural network architectures for feature-rich session-based
2315
+ recommendations,” in ACM Conference on Recommender Systems,
2316
+ 2016, pp. 241–248.
2317
+ [10] T. Zhang, P. Zhao, Y. Liu, V. S. Sheng, J. Xu, D. Wang, G. Liu,
2318
+ and X. Zhou, “Feature-level deeper self-attention network for
2319
+ sequential recommendation.” in International Joint Conference on
2320
+ Artificial Intelligence, 2019, pp. 4320–4326.
2321
+ [11] P. Wang, Y. Fan, L. Xia, W. X. Zhao, S. Niu, and J. Huang, “Kerl:
2322
+ A knowledge-guided reinforcement learning model for sequential
2323
+ recommendation,” in International ACM SIGIR Conference on Re-
2324
+ search and Development in Information Retrieval, 2020, pp. 209–218.
2325
+ [12] G. de Souza Pereira Moreira, S. Rabhi, J. M. Lee, R. Ak, and
2326
+ E. Oldridge, “Transformers4rec: Bridging the gap between nlp and
2327
+ sequential/session-based recommendation,” in ACM Conference on
2328
+ Recommender Systems, 2021, pp. 143–153.
2329
+ [13] R. Cai, J. Wu, A. San, C. Wang, and H. Wang, “Category-aware
2330
+ collaborative sequential recommendation,” in International ACM
2331
+ SIGIR Conference on Research and Development in Information Re-
2332
+ trieval, 2021, pp. 388–397.
2333
+ [14] U. Singer, H. Roitman, Y. Eshel, A. Nus, I. Guy, O. Levi, I. Hasson,
2334
+ and E. Kiperwasser, “Sequential modeling with multiple attributes
2335
+ for watchlist recommendation in e-commerce,” in International
2336
+ Conference on Web Search and Data Mining, 2022, pp. 937–946.
2337
+ [15] Y. Xie, P. Zhou, and S. Kim, “Decoupled side information fusion
2338
+ for sequential recommendation,” in International ACM SIGIR Con-
2339
+ ference on Research and Development in Information Retrieval, 2022.
2340
+ [16] Z. C. Lipton, J. Berkowitz, and C. Elkan, “A critical review of
2341
+ recurrent neural networks for sequence learning,” arXiv preprint
2342
+ arXiv:1506.00019, 2015.
2343
+ [17] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N.
2344
+ Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,”
2345
+ in Neural Information Processing Systems, vol. 30, 2017.
2346
+ [18] S. Shi, M. Zhang, X. Yu, Y. Zhang, B. Hao, Y. Liu, and S. Ma,
2347
+ “Adaptive feature sampling for recommendation with missing
2348
+ content feature values,” in Conference on Information and Knowledge
2349
+ Management, 2019, pp. 1451–1460.
2350
+ [19] C. Wang, M. Niepert, and H. Li, “Lrmm: Learning to recommend
2351
+ with missing modalities,” in Proceedings of the Conference on Empir-
2352
+ ical Methods in Natural Language Processing, 2018, pp. 3360–3370.
2353
+ [20] L. Wu, Y. Yang, K. Zhang, R. Hong, Y. Fu, and M. Wang, “Joint
2354
+ item recommendation and attribute inference: An adaptive graph
2355
+ convolutional network approach,” in International ACM SIGIR
2356
+ Conference on Research and Development in Information Retrieval,
2357
+ 2020, pp. 679–688.
2358
+ [21] T. N. Kipf and M. Welling, “Semi-supervised classification with
2359
+ graph convolutional networks,” in International Conference on
2360
+ Learning Representations, 2017.
2361
+ [22] Z. Zeng, C. Xiao, Y. Yao, R. Xie, Z. Liu, F. Lin, L. Lin, and M. Sun,
2362
+ “Knowledge transfer via pre-training for recommendation: A re-
2363
+ view and prospect,” Frontiers in Big Data, p. 4, 2021.
2364
+ [23] K. Zhou, H. Wang, W. X. Zhao, Y. Zhu, S. Wang, F. Zhang, Z. Wang,
2365
+ and J.-R. Wen, “S3-rec: Self-supervised learning for sequential
2366
+ recommendation with mutual information maximization,” in Con-
2367
+ ference on Information and Knowledge Management, 2020, pp. 1893–
2368
+ 1902.
2369
+ [24] X. Yuan, D. Duan, L. Tong, L. Shi, and C. Zhang, “Icai-sr: Item
2370
+ categorical attribute integrated sequential recommendation,” in
2371
+ International ACM SIGIR Conference on Research and Development
2372
+ in Information Retrieval, 2021, pp. 1687–1691.
2373
+ [25] X. Huang, S. Qian, Q. Fang, J. Sang, and C. Xu, “Csan: Contextual
2374
+ self-attention network for user sequential recommendation,” in
2375
+ Proceedings of the ACM International Conference on Multimedia, 2018,
2376
+ pp. 447–455.
2377
+ [26] G. Tang, M. M¨uller, A. R. Gonzales, and R. Sennrich, “Why self-
2378
+ attention? a targeted evaluation of neural machine translation
2379
+ architectures,” in Proceedings of the Conference on Empirical Methods
2380
+ in Natural Language Processing, 2018, pp. 4263–4272.
2381
+ [27] H. Zhao, J. Jia, and V. Koltun, “Exploring self-attention for image
2382
+ recognition,” in Proceedings of the IEEE/CVF Conference on Computer
2383
+ Vision and Pattern Recognition, 2020, pp. 10 076–10 085.
2384
+ [28] C. Liu, X. Li, G. Cai, Z. Dong, H. Zhu, and L. Shang, “Non-
2385
+ invasive self-attention for side information fusion in sequential
2386
+ recommendation,” in Proceedings of the AAAI Conference on Artificial
2387
+ Intelligence, vol. 35, no. 5, 2021, pp. 4249–4256.
2388
+ [29] Y. Lee, S.-W. Kim, S. Park, and X. Xie, “How to impute missing
2389
+ ratings? claims, solution, and its application to collaborative filter-
2390
+ ing,” in The Web Conference, 2018, pp. 783–792.
2391
+ [30] F. Biessmann, D. Salinas, S. Schelter, P. Schmidt, and D. Lange,
2392
+ ““deep” learning for missing value imputation in tables with
2393
+ non-numerical data,” in Conference on Information and Knowledge
2394
+ Management, 2018, pp. 2017–2025.
2395
+ [31] B. M. Marlin and R. S. Zemel, “Collaborative prediction and
2396
+ ranking with non-random missing data,” in ACM Conference on
2397
+ Recommender Systems, 2009, pp. 5–12.
2398
+ [32] J. M. Hern´andez-Lobato, N. Houlsby, and Z. Ghahramani, “Prob-
2399
+ abilistic matrix factorization with non-random missing data,” in
2400
+ International Conference on Machine Learning, 2014, pp. 1512–1520.
2401
+ [33] R. Pan, T. Yang, J. Cao, K. Lu, and Z. Zhang, “Missing data
2402
+ imputation by k nearest neighbours based on grey relational
2403
+ structure and mutual information,” Applied Intelligence, vol. 43,
2404
+ no. 3, pp. 614–632, 2015.
2405
+ [34] B. K. Beaulieu-Jones and J. H. Moore, “Missing data imputation in
2406
+ the electronic health record using deeply learned autoencoders,”
2407
+ in Pacific Symposium on Biocomputing, 2017, pp. 207–218.
2408
+ [35] R. C. Pereira, M. S. Santos, P. P. Rodrigues, and P. H. Abreu,
2409
+ “Reviewing autoencoders for missing data imputation: Technical
2410
+ trends, applications and outcomes,” Journal of Artificial Intelligence
2411
+ Research, vol. 69, pp. 1255–1285, 2020.
2412
+ [36] Y. Cao, X. Wang, X. He, Z. Hu, and T.-S. Chua, “Unifying
2413
+ knowledge graph learning and recommendation: Towards a better
2414
+ understanding of user preferences,” in The Web Conference, 2019,
2415
+ pp. 151–161.
2416
+ [37] A. Binder, S. Bach, G. Montavon, K.-R. M¨uller, and W. Samek,
2417
+ “Layer-wise relevance propagation for deep neural network ar-
2418
+ chitectures,” International Conference on Information Science and
2419
+ Applications, pp. 913–922, 2016.
2420
+
2421
+ 13
2422
+ [38] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-
2423
+ training of deep bidirectional transformers for language under-
2424
+ standing,” in Proceedings of the Conference of the North American
2425
+ Chapter of the Association for Computational Linguistics, 2019, pp.
2426
+ 4171–4186.
2427
+ [39] J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” arXiv
2428
+ preprint arXiv:1607.06450, 2016.
2429
+ [40] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and
2430
+ R. Salakhutdinov, “Dropout: A simple way to prevent neural net-
2431
+ works from overfitting,” The Journal of Machine Learning Research,
2432
+ vol. 15, no. 1, pp. 1929–1958, 2014.
2433
+ [41] D. Hendrycks and K. Gimpel, “Gaussian error linear units
2434
+ (gelus),” arXiv preprint arXiv:1606.08415, 2016.
2435
+ [42] J. Ni, J. Li, and J. McAuley, “Justifying recommendations using
2436
+ distantly-labeled reviews and fine-grained aspects,” in Proceedings
2437
+ of the Conference on Empirical Methods in Natural Language Process-
2438
+ ing, 2019, pp. 188–197.
2439
+ [43] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme,
2440
+ “Bpr: Bayesian personalized ranking from implicit feedback,” in
2441
+ Proceedings of the Conference on Uncertainty in Artificial Intelligence,
2442
+ 2009, pp. 452–461.
2443
+ [44] X. Glorot and Y. Bengio, “Understanding the difficulty of training
2444
+ deep feedforward neural networks,” in Proceedings of the Interna-
2445
+ tional Conference on Artificial Intelligence and Statistics, 2010, pp. 249–
2446
+ 256.
2447
+ [45] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimiza-
2448
+ tion,” in International Conference on Learning Representations, 2015.
2449
+ [46] R. Pascanu, T. Mikolov, and Y. Bengio, “On the difficulty of
2450
+ training recurrent neural networks,” in International Conference on
2451
+ Machine Learning, 2013, pp. 1310–1318.
2452
+ [47] S. Wang, B. Z. Li, M. Khabsa, H. Fang, and H. Ma, “Linformer: Self-
2453
+ attention with linear complexity,” arXiv preprint arXiv:2006.04768,
2454
+ 2020.
2455
+ [48] Y. Xiong, Z. Zeng, R. Chakraborty, M. Tan, G. Fung, Y. Li, and
2456
+ V. Singh, “Nystr¨omformer: A nystr¨om-based algorithm for ap-
2457
+ proximating self-attention,” in Proceedings of the AAAI Conference
2458
+ on Artificial Intelligence, vol. 35, no. 16, 2021, pp. 14 138–14 148.
2459
+ [49] Y. Du, W. M. Czarnecki, S. M. Jayakumar, M. Farajtabar, R. Pas-
2460
+ canu, and B. Lakshminarayanan, “Adapting auxiliary losses using
2461
+ gradient similarity,” arXiv preprint arXiv:1812.02224, 2018.
2462
+ [50] Y. Xu, X. Liu, Y. Shen, J. Liu, and J. Gao, “Multi-task learning
2463
+ with sample re-weighting for machine reading comprehension,”
2464
+ in Proceedings of the Conference of the North American Chapter of the
2465
+ Association for Computational Linguistics, 2019, pp. 2644–2655.
2466
+
MtAzT4oBgHgl3EQfy_6c/content/tmp_files/load_file.txt ADDED
The diff for this file is too large to render. See raw diff
 
NNE1T4oBgHgl3EQftgVE/vector_store/index.pkl ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:10a1d27afd34c23b271cf23c0f960489c7c6c84c85a0e0336f1c1f6ba0964896
3
+ size 646939